Title: SDMG: Smoothing Your Diffusion Models for Powerful Graph Representation Learning
Paper Decision: Accept (poster)
Review 1:

Summary: The paper presents a new diffusion-based approach for self-supervised graph representation learning (SDMG). Rather than reconstructing all graph details, which often leads to overfitting high-frequency noise, the authors provide both theoretical and empirical evidence that focusing on low-frequency components yields more robust, globally informative representations. To this end, SDMG integrates the proposed low-frequency encoders for node features and graph topology along with a new multi-scale smoothing loss that enforces consistency across multiple neighborhood scales. Extensive experiments on node-level and graph-level benchmarks demonstrate that this targeted reconstruction strategy leads to state-of-the-art performance, while ablation studies validate the contributions of each component, demonstrating the method's novelty and effectiveness.

Claims And Evidence: The paper's main claims are supported by solid experimental evidence.

Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are suited to the problem.

Theoretical Claims: Yes, I checked the proofs for Theorem 3.1 and Theorem 4.1. They make sense to me: Theorem 3.1 provides strong theoretical support for the observed misalignment between generation and representation learning, while Theorem 4.1 directly validates the equivalence between the proposed MSS loss and a low-frequency filter.

Experimental Designs Or Analyses: Yes, I reviewed all experimental designs and analyses, particularly the preliminary experiments and ablation studies, and found them to be sound with no significant issues. For example, the conclusions in Figures 1 and 3 are robust and intuitively reasonable since, for recognition tasks, reconstructing every detail is not always beneficial. Recent work in computer vision is just beginning to address this issue from different perspectives, and I am pleased to see such promising prospects in graph tasks. The only minor issue is that the authors should summarize the findings or conclusions more clearly and concisely in the captions of figures and tables.

Supplementary Material: Yes, I reviewed the theoretical proofs, additional experiments (e.g., parameter analysis results), the experimental setup, etc.

Relation To Broader Scientific Literature: The paper's key contribution is being the first to reveal the misalignment between graph generation and recognition. Previously, researchers trained generative diffusion models and then extracted the learned representations for classification or prediction. By uncovering this misalignment, the paper could point to a new direction for diffusion model–based graph representation learning.

Essential References Not Discussed: The paper provides sufficient related work to understand its key contributions. I have one minor suggestion: consider discussing a recent vision paper [R1] from ICML 2024 that explores the relationship between recognition and generation. Although that work comes from a different domain and uses a different perspective, it shows that representation and recognition can misalign in visual data. This insight could help readers better understand the misalignment observed in graph data.

[R1] Balestriero, Randall, and Yann LeCun. "How Learning by Reconstruction Produces Uninformative Features For Perception." In ICML 2024.

Other Strengths And Weaknesses:

**Other Strengths:**
1. I find this work interesting as it highlights a new potential direction for representation learning—designing a suitable loss function to encourage the reconstruction of key aspects rather than all details, which appears promising for improving downstream tasks like classification and prediction.
2. Unlike existing diffusion-based methods, this paper is the first to demonstrate that the graph generation optimization objective is not entirely suited for graph downstream tasks.
3. Instead of using a standard generation-oriented learning objective, the authors propose a new loss function that effectively prioritizes the reconstruction of global and smooth information.

**Other Weaknesses:**
1. In Figure 1, the authors claim that after adding Gaussian noise, the model focusing on a narrow low-frequency band remains robust, while the model with full-spectrum reconstruction collapses in performance. However, the paper does not specify the noise intensity (e.g., the value of σ), which may raise concerns about the robustness claim.
2. The related work section is placed in the appendix, which might confuse readers who are not familiar with diffusion models or graph learning; this issue could be addressed in the camera-ready version.
3. The current approach focuses only on reconstructing node features, which may limit the model's ability to capture topological geometry. Exploring the reconstruction of both graph topology and features, for example, by alternating reconstruction [R2], might allow the model to learn more meaningful structural-semantic information.

[R2] Jo, Jaehyeong, et al. "Score-based generative modeling of graphs via the system of stochastic differential equations." In ICML 2022.

Other Comments Or Suggestions: On page 15, line 773, one citation is not displayed correctly.

Questions For Authors:
1. In Table 1, what does the "–" symbol represent? Does it indicate that the method exceeded available memory or that the computation time was prohibitive?
2. In Equations (11) and (12), did you only concatenate the activations from the U-Net's up-sampling layers, or did you also include representations from the down-sampling layers? If I understand correctly, the down-sampling information is preserved solely via skip connections into the final representation?
3. In Theorem 3.1, is the encoding "Z" equivalent to the node representation "H" used elsewhere in the paper? If so, it might be clearer to use consistent notation to avoid confusion.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
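For concreteness, the preliminary experiment this review praises (reconstructing only the lowest frequency band of the graph signal, Figures 1 and 3) can be sketched with an explicit graph Fourier transform. This is an illustrative reading of the investigation, not the paper's code; the function name and the `keep_frac` parameter are our own.

```python
import numpy as np

def lowpass_reconstruct(A, X, keep_frac=0.05):
    """Project node features X onto the lowest `keep_frac` fraction of
    the normalized-Laplacian eigenbasis (a brute-force graph low-pass)."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(A.shape[0]) - D_inv_sqrt @ A @ D_inv_sqrt  # normalized Laplacian
    evals, U = np.linalg.eigh(L)              # eigenvalues in ascending order
    q = max(1, int(keep_frac * A.shape[0]))
    U_low = U[:, :q]                          # q lowest-frequency eigenvectors
    return U_low @ (U_low.T @ X)              # low-frequency reconstruction

# toy graph: 4-cycle with nearly constant features
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
X = np.array([[1.0], [1.1], [0.9], [1.0]])
X_hat = lowpass_reconstruct(A, X, keep_frac=0.5)
```

The dense eigendecomposition is exactly the computational burden the paper's approximate encoders are said to avoid; this sketch only mirrors the Investigation-section setup on a toy graph.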
Rebuttal 1:

Rebuttal: We appreciate your valuable input and positive comments, and we have carefully addressed your comments in our responses below.

**Question 1: Clarification on the Meaning of the "–" Symbol in Table 1.** In Table 1, the "–" symbol indicates that the corresponding method either exceeded the available memory, had prohibitive computation time on that particular dataset, or that the original authors did not report performance on that dataset. We will clarify this notation in the revised manuscript to avoid any ambiguity.

**Question 2: Clarification on the U-Net Representations in Equations (11) and (12).** For Equations (11) and (12), we only concatenate the activations from the U-Net's up-sampling layers. However, the information from the down-sampling layers is fully preserved through skip connections that directly link these layers to the up-sampling path. As a result, although our final representation is built solely from the up-sampling activations, it inherently incorporates the local features captured during down-sampling, ensuring that both local and global information is maintained without redundancy. Thank you for pointing this out, and we will revise the final version to make this clearer.

**Question 3: Clarification on the Notation of the Encoding "Z" and Node Representation "H".** In Theorem 3.1, the encoding **Z** refers to the intermediate representation produced by our graph encoder, for example, mapping the input graph to a low-dimensional space. On the other hand, **H** denotes the final node representation extracted from the activations of the U-Net. In the context of our paper, these two representations can be considered equivalent. Based on your suggestion, in the revised version we will harmonize our notation by clearly stating that **H** is derived from **Z**, ensuring consistency throughout the paper.

**Weakness 1: Clarification on the Gaussian Noise Intensity Used in Figure 1.** We appreciate your observation regarding the noise intensity. In our experiments for Figure 1, we sample Gaussian noise from a normal distribution, $\mathcal{N}(0, 5)$, to simulate realistic high-frequency perturbations. We agree that specifying this value explicitly would improve clarity, and we will include these details in the final version. Furthermore, please refer to our response to *Reviewer Q4ro*, Question 1, where we provide a detailed explanation of the motivation behind Figure 1.

**Weakness 3: Clarification on Reconstructing Only Node Features.** Thank you for your insightful comment. As shown in Figures 3(b) and 3(d), we demonstrate that reconstructing the smooth aspects of the graph structure benefits downstream tasks. Accordingly, in our work we also introduce a low-frequency encoder for graph topology to extract global, smooth structural information. Thus, when reconstructing node features, our SDMG essentially leverages both topology and node feature information to capture the distribution of node features. Nonetheless, we agree that an alternating reconstruction strategy for node features and graph topology is an interesting idea, and we plan to explore this approach in future work.

---

Rebuttal Comment 1.1:

Comment: Thank you for your response and for addressing my concerns. I've re-read the manuscript and found it enjoyable and easy to read. The minor issues I mentioned have been fully addressed and can be easily incorporated into the final version. Compared to Reviewer P6bv's view, I tend to be more open-minded. I understand that low-frequency information and low-frequency filters have proven effective in graph analysis tasks. However, if one uses existing knowledge to reveal a new phenomenon in a specific domain or class of methods, especially in the rapidly emerging field of diffusion model-based graph analysis, then that is both interesting and beneficial for community development. To me, a method is not expected to work for every application or dataset type, nor does it have to be overly sophisticated. A good recent example is [1], which uses low-frequency information to inspire researchers to rethink how to build an effective augmented graph for graph contrastive learning. Therefore, I agree with the authors' contributions and believe this paper shows a promising way forward for future generative or diffusion models for graph analysis. It could motivate people to rethink which parts of the graph information should be reconstructed and which application-specific manifolds should be extracted to better align generation with representation. Given this, I lean toward accepting this paper and will maintain my support for its acceptance.

[1] Liu, Nian, et al. "Revisiting graph contrastive learning from the perspective of graph spectrum." Advances in Neural Information Processing Systems 35 (2022): 2972-2983.

---

Reply to Comment 1.1.1:

Comment: We greatly appreciate your encouragement and recognition of the novelty of our methodology. We will update our manuscript to enhance its overall clarity. Thank you again for your support!
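The noise intensity clarified in this rebuttal ($\mathcal{N}(0, 5)$, i.e., variance 5) is straightforward to reproduce. A minimal sketch of the corruption step, under our assumption that the noise is added directly to the node-feature matrix (the rebuttal does not spell out the injection point):

```python
import numpy as np

def add_feature_noise(X, sigma2=5.0, seed=0):
    """Corrupt node features with i.i.d. Gaussian noise N(0, sigma2),
    matching the variance reported in the rebuttal (sigma^2 = 5).
    Note: Generator.normal takes the standard deviation, hence sqrt."""
    rng = np.random.default_rng(seed)
    return X + rng.normal(0.0, np.sqrt(sigma2), size=X.shape)

X = np.ones((200, 32))          # toy feature matrix
X_noisy = add_feature_noise(X)
```

The robustness test in Figure 1(b) then amounts to retraining or re-evaluating each model on `X_noisy` and comparing downstream accuracy against the clean-feature runs.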
Review 2:

Summary: The authors reveal that purely generation-oriented objectives can conflict with recognition goals, demonstrating that excessive non-smooth-frequency reconstruction can harm representation quality. Specifically, they systematically investigate how reconstructing different parts of the graph frequency spectrum affects downstream classification tasks. Their findings show that reconstructing the full spectrum significantly decreases classification performance. Due to the computational burden of direct spectral decomposition, the authors propose approximate methods, such as a novel low-frequency encoder and a multi-scale smoothing loss, focusing on reconstructing the spectral components most relevant to downstream tasks. Both theoretical analysis and empirical results demonstrate the effectiveness of the proposed approaches.

## Update after rebuttal
The authors have adequately addressed my concerns; therefore, I would like to maintain my positive evaluation.

Claims And Evidence: The submission's claims are validated by evidence from both theoretical analysis and extensive experiments.

Methods And Evaluation Criteria: The proposed methods make sense for this task. The evaluation criteria rely on standard graph and node classification datasets, which are commonly used benchmarks in representation learning.

Theoretical Claims: I checked the proofs for both theorems. They are clear and logically sound, and I found no issues.

Experimental Designs Or Analyses: The experiments are clear and sound. However, I was a bit confused about the masking strategy experiments presented in the Appendix because I didn't fully understand their motivation. Please see my questions for more details.

Supplementary Material: I reviewed the supplementary material, which includes the proofs of the theorems, the parameter analysis, and the architectural details of the denoising U-Net.

Relation To Broader Scientific Literature:
Comment 1: As far as I know, DDM (Yang et al., 2024) is the first diffusion model-based graph representation learning method. However, it focuses on generation-based reconstruction and may suffer from misalignment with recognition tasks. Unlike DDM, the submitted work shifts the focus to low-frequency signals, effectively aligning the diffusion process with downstream objectives.
Comment 2: While low-frequency features have been shown to benefit existing GNN-based classification (Hoang et al., 2021; Liu et al., 2022), the submitted work shows that traditional low-pass filters (like GNN layers) do not adequately capture these signals within diffusion models. The paper further addresses this gap by proposing a novel multi-scale smoothing loss, which convinced me that the proposed method is innovative in graph representation learning.

Essential References Not Discussed: I did not notice any crucial references missing.

Other Strengths And Weaknesses:
Strength 1: It is the first work to address the key challenge of misalignment between reconstruction and classification in the graph domain.
Strength 2: The introduction of the multi-scale smoothing loss and low-frequency encoders is innovative and interesting.
Strength 3: The work demonstrates an interesting finding that existing low-frequency filters in the denoising decoder do not prevent the reconstruction of irrelevant high-frequency features, which motivates the new learning objective.
Weakness 1: A minor weakness is that there is not enough discussion of the motivation for the masking strategy. I detail this further in the "Questions" part.
Weakness 2: As shown in Table 3, the improvement provided by introducing low-frequency encoders appears to be greater than that from the MSS loss. This observation is interesting, but the paper lacks sufficient exploration and explanation of this point, which might raise concerns about the proposed learning objective. Moreover, these observations lead me to wonder whether applying additional low-frequency encoding (such as GNN layers) to the final node representations might be beneficial, or whether the design of the MSS loss (Equation 13) already serves the function of GNN layers.

Other Comments Or Suggestions: Some figure captions (e.g., Figure 2) have small font sizes that might affect legibility.

Questions For Authors:
Question 1: What is the motivation behind the masking strategy? Specifically, how does the masking interact with the low-frequency reconstruction process, and why is it expected to improve performance on node-level tasks?
Question 2: If I understand correctly, Section 3 refers to the "20% lowest low-frequency components" as the first 20% of the full spectrum. Does this imply that, in Equation (6), $q = d \times 0.2$? If not, please explain how $q$ is calculated.
Question 3: What modifications would be needed to extend the framework beyond classification, such as to social community detection or traffic flow prediction?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for your constructive and positive feedback; we appreciate your insights and have addressed your comments below.

**Question 1 and Weakness 1: Clarification on the Motivation for the Masking Strategy.** The masking strategy has two primary motivations: enhancing robustness and promoting the learning of discriminative global representations. Recent literature on diffusion-based models (Daras et al., 2022) highlights that carefully designed masking or corruption processes encourage the model to selectively prioritize the most meaningful structural information during reconstruction. Specifically, masking introduces controlled information degradation, forcing the model to reconstruct from limited context, which aligns naturally with our objective of focusing on smooth, globally relevant graph structures. This process prevents the model from simply memorizing trivial local or high-frequency noise features. Consequently, such a masking strategy guides the model to leverage global graph topology, improving performance on downstream tasks through more robust and semantically meaningful representations. We have also demonstrated the performance of our model without masking; please refer to our response to Reviewer P6bv, Question 1, for detailed comparisons. Based on your suggestion, we will include further clarifications regarding this aspect in the final version of the manuscript.

**Question 2: Clarification on the Frequency Component Definition.** Thank you for the insightful question. Yes, in Section 3 (the Investigation section), the "5% lowest frequency components" refers specifically to the eigenvalues of the Laplacian matrix derived from the normalized adjacency matrix, which naturally captures the global structure of the graph. We adopt the Laplacian spectrum because it reflects the smooth, global signals of the graph. Please note that we perform the graph Fourier transform only in the Investigation section to validate our hypothesis. In the Method section, we instead propose an approximate computation method for extracting low-frequency information, which is more efficient and scalable.

**Question 3: How to Extend SDMG Beyond Classification, Such as to Social Community Detection or Traffic Flow Prediction.** Extending our framework to tasks such as social community detection or traffic flow prediction would involve a few targeted modifications that build on our core approach of low-frequency emphasis and multi-scale smoothing. For social community detection, one could replace the classification head with a clustering-oriented output by incorporating a clustering loss (e.g., modularity maximization or a spectral clustering regularizer) to directly optimize the learned embeddings for community structure, or by using the generated representations as input to an external clustering algorithm. Additionally, integrating community-aware regularizers into the diffusion process could further enhance the capture of global structural patterns. For traffic flow prediction, a regression task with significant temporal dynamics, the output layer would need to be restructured to produce continuous predictions (using, for instance, a mean squared error loss), and it would be beneficial to integrate temporal modeling components (such as recurrent layers, temporal convolutional networks, or attention mechanisms) into the denoising decoder. These modifications would allow our method to adapt its emphasis on smooth, global features and multi-scale consistency to effectively address the unique challenges of these different tasks.

**Weakness 2: Clarification on Balancing the MSS Loss and Low-Frequency Encoders.** Thank you for your insightful comment. While Table 3 shows a larger improvement from introducing the low-frequency encoders compared to the MSS loss alone, we believe these components have distinct yet complementary roles. The low-frequency encoders provide a robust initial approximation of the global, low-frequency signals from the graph topology and node features, which are critical for guiding the denoising process. In contrast, the MSS loss fine-tunes the representations by enforcing multi-scale consistency and preventing the reintroduction of high-frequency noise during training. Although its standalone impact may seem smaller, the MSS loss is essential for maintaining overall representation quality over time. Moreover, adding extra low-frequency encoding (e.g., additional GNN layers) to the final node representations would likely be redundant, as our ablation studies show that the best performance is achieved when both the low-frequency encoders and the MSS loss are used together. Thank you again for your helpful comment. We will incorporate the necessary modifications in the final version.

---

Rebuttal Comment 1.1:

Comment: The authors have addressed my concerns, and I appreciate the additional clarification on the motivation behind the masking strategy. I have also reviewed the comments from other reviewers and found the analysis compelling—particularly the part demonstrating that even without masking, the model maintains strong performance on both node- and graph-level classification tasks. In my view, even without the masking strategy, the proposed combination of LF and MSS effectively mitigates the misalignment between generation and GRL, and is sufficiently novel and valuable. Overall, I remain positive about this work and maintain the positive score.

---

Reply to Comment 1.1.1:

Comment: Thank you so much for your efforts during the review phase and for maintaining a positive score. We will reorganize the masking strategy discussion in the final version. Thanks again!
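To make the masking discussion concrete, here is a minimal sketch of the kind of random node-feature masking being debated. The function name and the zero-fill choice are our assumptions for illustration, not SDMG's exact corruption scheme:

```python
import numpy as np

def mask_node_features(X, mask_rate=0.5, seed=0):
    """Randomly zero out the feature rows of a fraction of nodes,
    returning the corrupted features and the boolean node mask.
    Illustrative only -- the paper's masking scheme may differ."""
    rng = np.random.default_rng(seed)
    masked = rng.random(X.shape[0]) < mask_rate   # which nodes to corrupt
    X_corrupt = X.copy()
    X_corrupt[masked] = 0.0                       # degrade masked rows
    return X_corrupt, masked

X = np.arange(12, dtype=float).reshape(6, 2)
X_corrupt, masked = mask_node_features(X, mask_rate=0.5)
```

During self-supervised training, the model would then reconstruct the original rows of `X` from `X_corrupt`, which is what forces it to rely on surrounding context rather than on memorized local detail.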
Review 3:

Summary: The authors introduce a new diffusion-based self-supervised framework for graph representation. It addresses the issue that minimizing generation-based learning objectives can overfit high-frequency noise instead of capturing important global structures. To overcome this, the authors propose (1) learnable encoders that approximate low-frequency components of both node features and graph topology, and (2) a new multi-scale smoothing (MSS) loss emphasizing multi-hop similarity for more semantically meaningful information. Experiments show the proposed method outperforms baselines in node-level and graph-level tasks.

Claims And Evidence: The paper's main claims on the importance of low-frequency components and the effectiveness of SDMG are backed by thorough experiments and theoretical arguments across multiple benchmark datasets.

Methods And Evaluation Criteria: Yes. The authors verify their method on widely accepted node and graph classification benchmarks (Cora, Citeseer, PubMed, etc.).

Theoretical Claims: The authors provide two core theorems on how low-frequency reconstruction aligns with improving representations. The proofs are clearly structured and do not show obvious errors; they seem consistent with known results in the field.

Experimental Designs Or Analyses: The experimental design and analyses mainly follow standard practices (DDM in 2024) and appear valid, with no evident methodological flaws.

Supplementary Material: I reviewed the supplementary appendices, mainly the theoretical proofs and the additional implementation details supporting the main text's claims.

Relation To Broader Scientific Literature: The paper extends prior diffusion-based generative models into frequency-aware graph representation learning. It bridges concepts from spectral GNNs with recent advances in diffusion-based self-supervision. This integration builds on findings about overfitting to high-frequency noise in graph tasks and proposes a new approach by introducing a novel multi-scale smoothing loss with low-frequency encoders. This work provides a novel perspective on existing diffusion model-based graph learning works.

Essential References Not Discussed: No clearly essential prior works appear to be omitted.

Other Strengths And Weaknesses:
Strengths: The paper's theoretical and experimental analysis effectively demonstrates how pure MSE-based reconstruction can diverge from learning discriminative representations. Applying diffusion models to graph representation learning is an emerging yet crucial research topic that remains relatively underexplored. The fusion of a spectral perspective with diffusion modeling introduces a novel angle for the graph learning field.
Weaknesses: The multi-hop neighborhood weights and the choice of how many low-frequency components to encode may require careful tuning on different graphs. Mainly emphasizing low-frequency signals might not be optimal for tasks that rely on high-frequency information (e.g., anomaly detection or graph isomorphism), and although the authors briefly mention the potential value of mid/high-frequency components, the current approach remains mainly focused on low-frequency prioritization.

Other Comments Or Suggestions: None.

Questions For Authors:
1. Can you provide more details on the experimental setup in Figure 1? Specifically, why did you add noise in Figure 1(b), and what variance did you use for the Gaussian noise?
2. In real-world applications such as biomedicine, interpretability can be crucial. Do you think the key frequencies, like the low-frequency components shown to be critical, could offer interpretive insights into the model's outputs?
3. In the $R_{MSS}$ loss proposed in Eq. (13), is the construction of $h_0$ the same as in existing GNN layers, or does it employ a new aggregation mechanism?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for your positive feedback. We appreciate your time and have addressed your comments below.

**Question 1: Clarification on the Noise Addition in Figure 1.** The primary goal of Figure 1(a) is to illustrate the misalignment between generative reconstruction objectives and representation learning. As shown, reconstructing only a small fraction of the information (e.g., the lowest 5% frequency components) is sufficient to achieve competitive downstream performance. Conversely, reconstructing the full frequency spectrum (100%), as most current methods do, results in performance degradation. We attribute this to diffusion models allocating their limited information capacity to the unnecessary reconstruction of high-frequency noise. To further confirm this hypothesis, we performed the experiment shown in Figure 1(b), where we explicitly added high-frequency Gaussian noise with variance $\sigma^2 = 5$, specifically sampling from $\mathcal{N}(0, 5)$. The rationale is that, if the diffusion model could effectively prioritize reconstructing task-relevant (low-frequency) information under limited capacity, its performance should remain robust despite the addition of irrelevant high-frequency noise. Indeed, in Figure 1(b), we observe that models focusing solely on reconstructing a small fraction of low-frequency information remain robust, maintaining strong downstream performance even after noise injection. In contrast, reconstructing the full frequency spectrum leads to a significant performance drop under added-noise conditions. This clearly indicates that, under limited capacity, enforcing full reconstruction exacerbates the misalignment between generation and representation. Our theoretical analysis in Theorem 3.1 further supports and explains this phenomenon. Thank you for your insightful comment. We will revise the final version accordingly.

**Question 2: Clarification on the Interpretability Implications of Focusing on Low-Frequency Signals.** Low-frequency components summarize global structural information and thus can offer valuable interpretive insights in **certain** applications. For instance, in relatively homogeneous networks such as social networks, low-frequency signals help explain group memberships in clustering tasks by highlighting shared global characteristics among nodes. However, interpretability is inherently task- and context-dependent. In applications like biological anomaly detection or extreme event prediction, relying solely on low-frequency information may not provide sufficient interpretability, as critical insights could reside in higher-frequency or local features.

**Question 3: Clarification on the Aggregation Mechanism in the MSS Loss.** The construction of $\mathbf{h}$ in the $R_{MSS}$ loss (Eq. 13) indeed employs an aggregation strategy similar to those used in standard GNN layers. Note that we leverage this aggregation within our proposed MSS loss specifically to capture multi-scale reconstruction information across different neighborhood ranges. By explicitly comparing node representations at multiple neighborhood scales, our MSS loss uniquely encourages the model to prioritize smooth, global (low-frequency) signals during reconstruction, which differentiates it from conventional aggregation strategies that primarily aim for node-level feature updates.

**Weakness: Clarification on Frequency Selection and Potential Extensions.** We acknowledge the reviewer's point regarding tasks that might benefit from high-frequency information. While our current approach intentionally emphasizes low-frequency signals due to their effectiveness in mitigating the misalignment between generation and representation, we do not exclude the potential value of mid- or high-frequency components. Instead, we view our method as providing a foundational perspective that future studies can adapt or extend based on specific task requirements. Future work may incorporate adaptive frequency selection mechanisms, which could better accommodate scenarios such as anomaly detection or extreme event prediction.

---

Rebuttal Comment 1.1:

Comment: Thank you to the authors for their response. Their explanations are clear and address all points well, providing additional analysis and details. The authors offer a detailed explanation of how the multi-scale smoothing loss works in their model, which convinces me of its novelty in mitigating the misalignment issue. I will maintain my positive score and recommendation for acceptance.

---

Reply to Comment 1.1.1:

Comment: We greatly appreciate your careful review! Thank you so much for giving us a positive score!
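As a rough illustration of the multi-scale aggregation described in this exchange (comparing node representations against progressively smoothed, GNN-style neighborhood averages), one might write the following. This is our reading of the MSS idea; `num_scales`, the uniform weights, and the row-normalized propagation are assumptions, not the paper's Eq. (13):

```python
import numpy as np

def multi_scale_smoothing(A, H, num_scales=3, weights=None):
    """Penalize disagreement between node representations H and their
    k-hop smoothed versions A_hat^k @ H for k = 1..num_scales.
    Illustrative only -- Eq. (13) may weight or normalize differently."""
    n = A.shape[0]
    A_hat = (A + np.eye(n)) / (A.sum(axis=1) + 1.0)[:, None]  # row-stochastic
    weights = weights or [1.0 / num_scales] * num_scales
    loss, H_k = 0.0, H
    for w in weights:
        H_k = A_hat @ H_k            # one more hop of smoothing
        loss += w * np.mean((H - H_k) ** 2)
    return loss

A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)   # path graph on 3 nodes
H_const = np.ones((3, 4))
H_rand = np.arange(12, dtype=float).reshape(3, 4)
```

A constant (perfectly smooth, i.e., lowest-frequency) signal incurs zero penalty under this sketch, which is the low-pass behavior the rebuttal attributes to the MSS loss.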
Summary: The authors study the problem of graph representation learning (GRL) via diffusion models. The authors argue that current graph diffusion models for representation learning are sub-optimal due to their focus on the high-frequency signals. However previous literature, including a preliminary study by the authors, show that low frequency signals are more important for GRL. As such, the authors design a newer diffusion model that emphasizes lower graph frequencies. This is done through the use of two components. The first is a lowe frequency encoder that attempts to to only synthesize the low frequency dignals of the node features and graph structure. The second component is a multi-scale smoothing loss that places more emphasis on reconstructing lower frequency signals. The authors report their performance on node and graph classification tasks, showing good performance. Claims And Evidence: The claims are mostly well supported by evidence. This includes both the preliminary study (Finding 1 and 2 in Section 3) and both proposed components (see the ablation study in Table 3). However, I think the authors should note that the assumption that only the low frequency signals are crucial doesn't hold true for all graphs. For example, for heterophilic graphs, it's known that incorporating high frequency signals is helpful [1]. [1] Luan, Sitao, et al. "Revisiting heterophily for graph neural networks." Advances in neural information processing systems 35 (2022): 1362-1375. Methods And Evaluation Criteria: The baselines and evaluation make sense for this task and are commonly used for GRL. Theoretical Claims: The theoretical claims in Section 3 look good to me. Experimental Designs Or Analyses: The experimental results and analyses are mostly fine. However, I think there are a few crucial issues: 1. For node classification, the authors use an additional masking strategy to improve performance. 
This is quite problematic in my opinion, as Figure 6 shows it can have quite a large effect on performance (I assume mask\%=0 corresponds to an ablation of it). Without it, the performance is very similar to DDM, another diffusion-based approach which, as far as I can tell, doesn't use any masking. To me, this calls into question whether most of the improvement of SDMG over DDM on node classification is due to this inclusion. 2. The performance improvement over simpler methods is quite small and often seems not statistically significant. For example, GraphMAE can achieve only slightly worse performance than SDMG despite being much simpler. As such, I recommend indicating whether each of the results is better than the baselines in a statistically significant way (i.e., include the p-values). 3. No efficiency experiments are shown. It is well known that diffusion models are quite slow. As such, it's important for the authors to show the runtime needed for training + inference of SDMG. Even if a model can achieve better performance, poor efficiency can make it impractical to realistically use, so detailing its efficiency against simpler methods like GraphMAE is important. Supplementary Material: I looked at all of it. Relation To Broader Scientific Literature: I think this paper is relevant for the field of graph representation learning. Specifically, the emphasis on the low frequency signals is a useful insight. Essential References Not Discussed: A number of references are missing. For graph generative models, multiple models consider the graph spectrum. This includes [1, 2] (note that [2] has been available since early 2024). To be clear, these methods are not designed for GRL. However, they are still generative models that operate on the graph spectrum and should be cited. Furthermore, these methods focus on the lowest k eigenvalues/eigenvectors. Of particular interest is [2], which is a diffusion model. The paper should be updated with a discussion of these papers. 
Furthermore, the use of the $q$ smallest eigenvectors of the Laplacian is quite common in the graph literature. See [3]. From my understanding, SDMG relies on an approximation of these eigenvectors, but the motivation and principle remain the same. [1] Martinkus, Karolis, et al. "Spectre: Spectral conditioning helps to overcome the expressivity limits of one-shot graph generators." International Conference on Machine Learning. PMLR, 2022. [2] Minello, Giorgia, et al. "Generating Graphs via Spectral Diffusion." The Thirteenth International Conference on Learning Representations, 2025. [3] Rampášek, Ladislav, et al. "Recipe for a general, powerful, scalable graph transformer." Advances in Neural Information Processing Systems 35 (2022): 14501-14515. Other Strengths And Weaknesses: There are a few notable weaknesses with the paper in its current form. 1. I find the emphasis on diffusion models to be a little strange. As noted by the authors, the idea that low frequency information is most helpful for graph learning is very well known in Graph ML. Specifically, many works have drawn the connection between GNNs and low-pass filters. The contributions in this paper thus reflect an interest in focusing on low frequency information. However, this has nothing to do with diffusion. As such, both the LF encoder and MSS components can be included in other architectures as well and are not specific to diffusion models. For example, I see no reason why they can't be incorporated into a method like GraphMAE (please correct me if I'm wrong). I'm therefore quite confused by the framing of this paper, as I'm unsure what it has to do specifically with diffusion models. To me, it seems like it would apply to any generative method. 2. Building on the previous weakness, I find the novelty to thus be quite low. This is particularly true in regards to the encoders. $\mathcal{E}\_{\theta}^x$ is just a GNN which is used by all frameworks. 
$\mathcal{E}\_{\phi}^x$ considers an approximation of the $q$ smallest eigenvectors. Many works consider pre-computing the $q$ smallest eigenvectors of the Laplacian and using them as input to a GNN/Graph-Transformer [1]. Therefore, these contributions offer very little novelty over existing methods. 3. In my opinion, some crucial information is omitted from the main text and either not mentioned entirely or put in the appendix. The main text should be self-contained (within reason, of course). Some examples include: **(a)** What is the design of $\mathcal{E}\_{\phi}^x$ exactly? In 4.2, the authors say that they use a neural network function $f$. What is the design of $f$, and what is its input? Furthermore, how is $\mathcal{A}$ defined? Is it simply a set of learnable embeddings? **(b)** For node classification, the authors use an additional masking strategy to improve performance. However, this is not mentioned in the main text at all. I expound on this more in the "Experimental Designs Or Analyses" section. [1] Rampášek, Ladislav, et al. "Recipe for a general, powerful, scalable graph transformer." Advances in Neural Information Processing Systems 35 (2022): 14501-14515. Other Comments Or Suggestions: 1. Please include a mention of the node masking in the main text. 2. Please explicitly define $\mathcal{E}_{\phi}^x$. Questions For Authors: 1. What is the performance of SDMG on node classification w/o the node masking? This allows for a fairer comparison with DDM. 2. Can you include the p-values of SDMG's improvement over other methods like DDM and GraphMAE? 3. Can the contributions in this paper be applied to other generative frameworks? If not, why is it specific to diffusion models? 4. Can you show a runtime comparison of SDMG compared to other methods (e.g., GraphMAE)? Code Of Conduct: Affirmed. Overall Recommendation: 2
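For concreteness, the low-frequency basis the review refers to — the $q$ eigenvectors of the graph Laplacian with the smallest eigenvalues — can be computed directly on small graphs. A minimal NumPy sketch (toy path graph; this illustrates the standard construction from [3], not the paper's approximation scheme):

```python
import numpy as np

def smallest_laplacian_eigenvectors(adj, q):
    """Return the q eigenvectors of the (combinatorial) graph Laplacian
    with the smallest eigenvalues -- the 'low-frequency' basis."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj                         # L = D - A
    eigvals, eigvecs = np.linalg.eigh(lap)  # eigh returns ascending eigenvalues
    return eigvals[:q], eigvecs[:, :q]

# Toy example: a 4-node path graph 0-1-2-3.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
vals, vecs = smallest_laplacian_eigenvectors(adj, q=2)
# For a connected graph the smallest eigenvalue is 0, with a constant
# eigenvector -- the smoothest possible signal on the graph.
```

Methods like [3] precompute such vectors as positional encodings for a GNN/Graph-Transformer; per the discussion above, SDMG instead learns an approximation, but the target object is the same low-frequency basis.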
Rebuttal 1: Rebuttal: We thank you for taking the time to review our manuscript and for the suggestions that have helped clarify our original contributions. Due to its overarching importance, we would like to start with Weakness 2: **Novelty** Our work is not on GRL as such, but GRL for use in a downstream classification task. As highlighted by Reviewers 9RUT and DKTW, we are the first to consider potential misalignment and possible solutions in this context. Specifically: 1. **Identifying and analyzing the misalignment between “graph generation” and “graph representation”.** - Existing diffusion-based GRL typically seeks to match the entire frequency spectrum, including irrelevant details. From both **theoretical and empirical** perspectives, we show that this reduces performance on downstream tasks. 2. **Proposing a novel multi-scale smoothing (MSS) loss that mitigates misalignment between generation and discriminative representation.** - Even if one applies known LF filters (e.g., by truncating high frequencies or using a GNN), relying on MSE for element-wise reconstruction reintroduces high-frequency noise (see Figure 4 and Theorem 3.1). Our MSS loss overcomes this by enforcing stronger penalties on LF errors throughout the diffusion steps. This mechanism is new and, from a **theoretical** standpoint (Theorem 4.1), leads the reconstruction toward LF signals without entirely suppressing useful high-frequency components. Thus, we emphasize that the **key contribution** is **not** about which LF encoder one employs (we use a mature and efficient GNN simply because it is practical). Rather, it is **the combination of** LF and our MSS that resolves the inherent misalignment between generation and representation. We thank you for your comments and will revise our manuscript to reflect this more clearly. **Question 1 and Issue 1: Performance Without Masking** Below are our node/graph classification results without any masking. 
As shown, our method consistently improves over DDM on all datasets, with especially substantial gains in graph classification. *Node Classification* |Method|Cora|Citeseer|Pubmed|OGB-Arxiv|Computer|Photo| |-|-|-|-|-|-|-| |DDM|83.1|72.1|79.6|71.3|89.8|93.8| |SDMG w/o mask|83.6|73.2|80.0|71.7|90.4|94.1| *Graph Classification* |Method|IMDB-B|IMDB-M|PROTEINS|COLLAB|MUTAG| |-|-|-|-|-|-| |DDM|74.05|52.02|71.61|80.07|90.15| |SDMG w/o mask|76.03|52.05|73.17|82.23|91.58| **Question 2 & Issue 2: Statistical results (*p*-value)** In our paper, we report the widely used mean and standard deviation as standard performance statistics. However, we are also happy to include more statistical tests (e.g., Wilcoxon Signed-Rank Test) with p-values. In general, p < 0.05 indicates statistically significant improvements; while some datasets (e.g., Cora) show p > 0.05, most meet p < 0.05. |SDMG (Ours) vs.|Computer|Photo|Cora|MUTAG|COLLAB|IMDB-B| |-|-|-|-|-|-|-| |DDM|0.00098|0.00488|0.01367|0.00195|0.00098|0.00195| |GraphMAE|0.00098|0.00098|0.21582|0.00098|0.00098|0.02441| **Question 3 & Weakness 1: Why Diffusion Model Benefits** Thanks for your comment. We do not merely “insert” a low-frequency (LF) encoder and a multi-scale smoothing (MSS) module into generative models. Rather, we exploit the diffusion process’s iterative noise-injection–denoising scheme: at each timestep t, varying noise levels enable our MSS term to progressively emphasize coherent LF signals. This contrasts with single-step models (e.g., GraphMAE), which lack such iterative noise scheduling. While our MSS approach can be generalized elsewhere, the table below shows that it yields larger gains in our multi-step diffusion framework, aligning with our core viewpoint. 
||Photo|Computer||Photo|Computer| |-|-|-|-|-|-| |GraphMAE|93.6|88.6|SDMG w/o MSS|93.4|89.4| |GraphMAE+MSS|93.9|89.4|SDMG|94.7|91.6| |Relative Improvement|0.32%|0.90%|Relative Improvement|1.36%|2.44%| **Question 4 & Issue 3: Runtime Efficiency** Our method does not suffer from excessive runtime overhead (training + inference)—in fact, it often converges faster. By prioritizing global smooth signals (e.g., via our MSS), SDMG quickly encodes downstream-relevant information and requires fewer total epochs. All experiments were run on a single NVIDIA H100. |Method|Cora (epochs)|Citeseer (epochs)|Pubmed (epochs)| |-|-|-|-| |GraphMAE|13.32s (1500)|3.58s (300)|20.69s (1000)| |DDM|13.12s (800)|11.23s (350)|10.35s (300)| |SDMG|5.88s (200)|9.39s (150)|15.23s (100)| **Weakness 3: Notation Clarification** Thank you for the suggestion. As noted (page 6, line 290), $\mathcal{E}^x_{\theta}$ is a GAT encoder for LF node features. $f = \mathcal{E}^A_{\phi}$ is an MLP for topology. $\mathcal{A}$ is a learnable embedding matrix approximating the LF spectrum. $f$ can take various graph-structure inputs (e.g., adjacency), and we use random-walk positional encodings for preserving LF information. We will expand these details (including the mask strategy) in the final version. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed response. However, most of my concerns still stand. I will therefore keep my score. I respond to each point below: > Identifying and analyzing the misalignment between “graph generation” and “graph representation” My issue with this lies with two points: 1. It's trivial that graph generation attempts to reconstruct the entire frequency spectrum. Since we attempt to reconstruct the entire adjacency, such signals must, by definition, be what we also attempt to reconstruct. 2. It's already very well known that standard GNNs are low pass filters [1, 2]. 
In fact, this is why they tend to work so well, as only considering lower frequency signals is a strong inductive bias for many Graph ML tasks. Therefore, it's not surprising that you find that considering higher frequency signals hurts performance. It's why (a) standard GNNs work so well, and (b) many methods consider the eigenvectors associated with the lowest k eigenvalues as positional encodings. I therefore find the misalignment to be fairly obvious to those in the field and am unsure what new information it is adding. [1] Nt, Hoang, and Takanori Maehara. "Revisiting graph neural networks: All we have is low-pass filters." arXiv preprint arXiv:1905.09550 (2019). [2] Wu, Felix, et al. "Simplifying graph convolutional networks." International conference on machine learning. PMLR, 2019. > Proposing a novel multi-scale smoothing (MSS) loss that mitigates misalignment > Thus, we emphasize that the key contribution is not about which LF encoder one employs This is fair; however, my concern is that the results show that the encoder has a much larger impact on performance than MSS. As shown in Table 3, ablating LF almost always has a larger effect on performance than ablating MSS. The difference is in fact quite large on Photo and Computer. I am therefore uncertain about how significant of a contribution the MSS loss really is (I also note later that it is related to the SCE loss in GraphMAE; however, this connection is not discussed). > Performance Without Masking Thanks for the results. Looking at the node-level results, it's important to note that the gap between SDMG and DDM drops significantly (I assume the graph-level results are the same since there isn't any masking). While the variance (and p-values) aren't included, at a glance it seems that only Citeseer is statistically significant (please correct me if I'm wrong). 
This is very important to me, as it shows that when evaluated on equal footing, the results for DDM and SDMG on node classification are barely different. > Issue 2: Statistical results (p-value) My concern still stands for node classification, as the results w/o masking are very similar to DDM (which is needed to have a fair comparison with DDM). > Why Diffusion Model Benefits This doesn't get to the core of my issue. The main motivation of this work is that reconstructing higher frequency signals can be detrimental. **However, this observation is true for any generative method that attempts to reconstruct the full adjacency matrix**. As such, it is not unique to diffusion models at all. The authors do mention that they also consider the iterative noise scheduling inherent in diffusion models to measure the MSS loss across noise levels. This is good. However, (a) using it for GraphMAE does help a little, and (b) it again does not get to my core point, which is that the motivation behind the MSS loss is not unique to diffusion models. Furthermore, you show the results when applying MSS to GraphMAE. However, GraphMAE, as opposed to many other GRL methods, does not reconstruct the adjacency. Rather, it reconstructs the node features. In fact, it includes a similar loss to MSS, which they refer to as "SCE" (see Eq. 2 in their paper). If we remove the terms for hops $k=1$ to $hop-1$ in the MSS loss, they are equivalent. However, the connection is not discussed in your paper. It should be included. > Runtime Efficiency I appreciate the results. However, a core downside of diffusion models is not just their inefficiency, but their inefficiency on **medium-large graphs** (my apologies for not being clearer in my original review). Cora/Citeseer/Pubmed are quite small compared to most benchmarks. What's the runtime comparison on larger graphs like ogbn-arxiv? It's necessary to scale any method beyond small graphs. 
**Other**: A few of my concerns included in my original review were not addressed. I understand the space is limited. I'm mentioning them again as I believe they're important. 1. Multiple methods (see [1, 2] in my review) are generative models that consider the spectrum. [2] is a diffusion model that focuses on the lowest k eigenvectors. 2. Using the eigenvectors associated with the $k$ lowest eigenvalues as input to a GNN is common in the graph literature ([3] in my original comment is one example). The first term in the SDMG loss considers a similar concept (in addition to [2]). This should be noted in the paper. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your time on this second review and will address your concerns as follows: Because this is crucial to our method, we would like to first clarify that our approach **only reconstructs node features rather than the adjacency matrix (see Section 4.1: Reconstruction Objective).** We realize that this confusion might be due to deficiencies in our presentation of the investigation in Section 3, specifically the discussion of reconstructing adjacency-derived node features. As this reconstruction is not especially relevant to our overall paper, we are considering removing this task and clarifying the investigation section if the paper is accepted. > Q 1: Clarification of Our Main Contribution - We agree that reconstructing the full adjacency matrix covers the entire frequency spectrum, which is why our method reconstructs only the node feature matrix. - We also acknowledge that GNNs act as low-pass filters. However, the key difference lies in their objectives: generative models aim to generate, whereas GNNs extract discriminative features. **Although the importance of LF signals is well recognized, our work is the first to systematically analyze this phenomenon within a diffusion generative model framework. 
We show that using a standard MSE gradually introduces HF noise (Theorem 3.1), *even with GNN filters*, which degrades representation quality.** In contrast, our proposed MSS loss leverages multi-scale similarity to prioritize LF recovery (Theorem 4.1). It bridges the gap between generation objectives and GRL tasks. We believe this mechanism is novel. > Q2: Clarification of Ablation performance of LF and MSS We thank you for your observation. It's reasonable that the LF encoder has a larger impact on performance. However, this does not conflict with the claim of our contributions. **While the LF encoder captures LF signals, a standard MSE loss in the diffusion process eventually reintroduces noise (Fig. 4 and Theorem 3.1). This is why, even with LF encoders, our MSS loss is still necessary.** Our MSS loss explicitly counteracts this effect by enforcing multi-scale similarity across different neighborhood aggregations, ensuring that the model’s reconstruction remains focused on LF, globally consistent signals. > Q3: Clarification between SCE loss and our MSS loss Many thanks for highlighting the existing SCE loss. The SCE and our MSS loss are designed for different purposes. The SCE loss, which is designed for sensitivity issues, still operates at a node level and does not incorporate multi-hop or multi-scale neighborhood aggregation, limiting its ability to solve our challenge (i.e., encouraging LF reconstruction). In contrast, our MSS loss leverages multi-scale similarity, which both theory and experiments show encourages LF signal reconstruction. This may explain why our MSS improves GraphMAE GRL performance. We will update the discussion about SCE in the final version. > Q4: Performance without Masking and statistical results Without masking, we acknowledge that node classification gains are modest. However, SDMG still consistently outperforms DDM on all datasets, and the improvement in graph classification is substantial. 
Now, we are happy to supply additional statistical results for SDMG without masking (for a fair comparison with DDM). |SDMG w/o mask vs.|Computer|Photo|Cora|MUTAG|COLLAB|IMDB-B| |-|-|-|-|-|-|-| |DDM|0.00195|0.02441|0.13769|0.00195|0.00098|0.00195| > Q5: Applicability to Other Generative Methods Many thanks for this comment. We now have more space to clarify. **We agree that our proposed strategy could potentially benefit other generative methods, which we believe is also one of its potential advantages.** However, as we mentioned in our previous results, our strategy yields greater improvements within the context of the iterative noise scheduling of diffusion models. > Q6: Runtime Efficiency As suggested, we report runtime on the larger dataset, showing that SDMG does not incur excessive overhead. |Method|Arxiv (epochs)| |-|-| |GraphMAE|258s (1000)| |DDM|157s (400)| |SDMG|204s (300)| > Q7: Some References Missing We appreciate your highlighting the missing references. We agree that recent generative models operating on the graph spectrum, such as [1] and [2], are relevant in the DM context and will include them in the final version. We will also cite the Graph Transformer method [3] for its use of the smallest eigenvectors, although our approach approximates these eigenvectors rather than computing them explicitly. We would also like to emphasize that the core novelty of our work does not lie in the choice of LF encoders (whether via the q smallest eigenvectors or through GNN filters), as these are tools used to address the scientific problem uncovered in our manuscript. **We sincerely hope our responses have addressed your concerns. If our clarifications and additional evidence meet your approval, we kindly request that you reconsider your score.**
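To make the multi-scale idea debated in this thread concrete: the gist is to compare reconstruction and target after k-hop neighborhood smoothing rather than element-wise, so that low-frequency disagreement dominates the penalty. A schematic NumPy sketch (not the paper's exact MSS loss — the row-normalized adjacency, hop count, and cosine form here are illustrative assumptions):

```python
import numpy as np

def multiscale_smoothing_loss(x_hat, x, adj_norm, hops=3):
    """Schematic multi-scale similarity loss: penalize cosine
    dissimilarity between the k-hop-smoothed reconstruction and target.
    Smoothing with powers of a normalized adjacency damps high
    frequencies, so low-frequency errors dominate the penalty."""
    loss = 0.0
    xh, xt = x_hat, x
    for _ in range(hops):
        xh, xt = adj_norm @ xh, adj_norm @ xt   # one more hop of smoothing
        num = (xh * xt).sum(axis=1)
        den = np.linalg.norm(xh, axis=1) * np.linalg.norm(xt, axis=1) + 1e-12
        loss += (1.0 - num / den).mean()        # mean per-node cosine distance
    return loss / hops

# Toy setup: row-normalized adjacency (with self-loops) of a 3-node
# triangle, and a deterministic feature matrix.
A = np.ones((3, 3)) / 3.0
X = np.arange(12.0).reshape(3, 4)
```

Dropping the smoothing loop and comparing only the raw features would recover a node-level cosine loss akin to GraphMAE's SCE term, which matches the equivalence the reviewer points out for the highest-hop term.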
Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models
Accept (poster)
Summary: This paper proposes a hierarchical framework for tackling the complex instruction following challenge in vision-language-action-based robotic control. The paper highlights challenges in existing methods that struggle with following intricate instructions. The proposed method, Hi Robot, addresses these issues by decomposing tasks into a high-level VLM policy, which interprets complex prompts and user feedback to generate low-level commands, and a low-level vision-language-action (VLA) policy. Evaluations on various tasks demonstrate that Hi Robot outperforms baselines. ## update after rebuttal I acknowledge the effort put into this work and appreciate that the authors have partially addressed my concerns. However, I still have reservations regarding the novelty of the work, how the two-layer framework is aligned, and its generalizability. Therefore, I am maintaining my score. Claims And Evidence: The main claims are well-supported within the evaluated domains. Methods And Evaluation Criteria: The methods and evaluations are well-justified for the problem. Theoretical Claims: No proofs. Experimental Designs Or Analyses: The experiments robustly validate Hi Robot’s performance within the tested domains. Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: Vision-Language-Action Models, Robot Control Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The related work is well investigated. 2. The idea is very intuitive. 3. Tested across diverse real-world robotic platforms. Weaknesses: 1. Limited contribution: hierarchical VLMs with synthetic data generation. 2. Lack of tests in unseen domains. 3. No validation of the quality of the generated data. 4. Only the average values of the metrics are counted, ignoring uncertainty and statistical significance. Other Comments Or Suggestions: 1. Include a discussion of failure modes. 2. Add more technical details to improve reproducibility. 3. 
Including statistical tests would strengthen claims. 4. Include cross-domain tasks. Questions For Authors: 1. Have you tested Hi Robot on unseen domains? 2. During task execution, if an instruction interrupts the task, can the system restore its previous state to resume the original objective? 3. What is the main difference between Hi Robot and RT-H? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. We address each point below and will incorporate these improvements in the revision. > Limited contribution: hierarchical VLMs with synthetic data generation. While individual components build on prior work, Hi Robot's novel synthesis enables critical real-world capabilities: - **Open-ended instruction following** (e.g., "Can you make me a vegetarian sandwich? I don’t like pickles though.") - **Real-time feedback integration** (e.g., "That’s all I want") - **Unseen task generalization** via synthetic data (demonstrated in §5.3 and video supplementary at https://hi-robot-vla.github.io/) > No validation of synthetic data quality We evaluate data quality **end-to-end** through policy performance, as offline metrics for embodied data remain an open challenge. Future work could explore: - Automated fidelity checks for physical plausibility - Language-grounding consistency metrics - Interaction diversity > Statistical significance reporting We conducted **20 trials per task per method** (more than typical real-world robotic experiments). Error bars will be added to all plots in the camera-ready version. > Lack of tests in unseen domains We evaluated generalization through: 1. **Instruction perturbations**: - "I want something sweet" (requiring object categorization and physical grounding) - "I’m allergic to pickles" (requiring semantic knowledge and physical grounding) 2. **Partial task execution**: The model is trained only on full-table cleaning, but we request the robot to “clean up only the trash, but leave the dishes” *Future Work*: Cross-environment transfer (e.g., kitchen at home → kitchen in restaurant). > Discussion of failure modes Common Failure Cases: 1. *High-level*: - Temporarily ignoring the instruction: e.g., grabbing cheese when the robot is close to it despite the user’s lactose intolerance (due to training bias toward proximal objects) 2. 
*Low-level*: - OOD recovery: Dropped objects (recovery behavior is absent from training data) Mitigations (Future Work): - Stronger instruction-following model - Adversarial data generation for edge cases - Diverse data collection including failure recovery > Add more technical details We will expand: - Appendix Table: Full hyperparameters (learning rates, architecture specs) - Data Generation: Prompt templates and filtering examples - Failure Logs: Representative error cases > Can Hi Robot resume interrupted tasks? Current Implementation: - Can revert to previous objectives with explicit user permission - Future: Auto-resume via success detection (e.g. via value function learning) > Difference from RT-H? | Feature | RT-H | Hi Robot | |------------------------|--------------------------|---------------------------| | **High-Level Action Space** | Primitive movements (e.g., "Move arm forward") | Semantic commands (e.g., "Place a slice of bread on the chopping board") | | **Synthetic Data** | ✗ | ✓ (enables open-vocab feedback) | | **Instruction Scope** | Seen tasks | Open-ended | *Key Advantage*: Hi Robot's rich language-action space supports real-world ambiguity and feedback (e.g., handling "This isn't trash" corrections). Thank you for your suggestions—they have strengthened our paper. We will address all points in the revision. --- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for their responses. However, could you further elaborate on the training details? Additionally, how can VLM and VLA be grounded? --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up question. Below, we provide additional technical details: 1. Input Modalities - Both the high-level policy and low-level policy are conditioned on two or three images, depending on the specific task. For each task, we use: - One third-person camera view. - One wrist-mounted camera per robot arm (one or two arms). - Each image has a resolution of 224×224 pixels. 
These images are separately processed by the model’s vision encoder, and their resulting latent tokens are then concatenated. 2. Language Conditioning - We also condition the policies on natural language instructions, tokenized using the language tokenizer from the underlying LLM. The language tokens are concatenated with the vision tokens inside the model to enable multimodal reasoning. 3. Model Initialization - While our method can be trained from scratch or finetuned from any VLM backbone, in practice we use PaliGemma [1] as the base model. This is an open-source, 3-billion-parameter VLM that offers a good balance between performance and computational efficiency. - We unfreeze the full model for finetuning. 4. Optimizer and Hyperparameters - We use the AdamW optimizer [2] with $\beta_1=0.9$, $\beta_2=0.95$, and no weight decay. - Gradient norm is clipped to a maximum magnitude of 1. - We maintain an Exponential Moving Average (EMA) of network weights with a decay factor of 0.999. - The learning rate starts with a short warm-up (1,000 steps) and then remains constant at $1 \times 10^{-5}$. - Batch size is 512. 5. Training Duration and Resources - Training the high-level policy is highly efficient, taking about 2 hours on 8×H100 GPUs. - The low-level policy follows a similar training pipeline, though training times can vary depending on the dataset size and complexity of the target tasks for action prediction. We hope these details clarify our training pipeline and hyperparameters. Please let us know if there is any other information we can provide. References [1] Beyer, Lucas, et al. “PaliGemma: A versatile 3B VLM for transfer.” 2024. [2] Loshchilov, Ilya, and Frank Hutter. “Decoupled weight decay regularization.” 2017.
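The schedule described in point 4 — a 1,000-step warm-up followed by a constant rate of $1 \times 10^{-5}$ — can be written as a plain function. A small sketch (the linear warm-up shape is an assumption; the reply only states the warm-up length and the constant rate):

```python
def learning_rate(step, peak=1e-5, warmup=1_000):
    """Warm up to `peak` over `warmup` steps, then stay constant.
    (Linear warm-up is assumed here; the reply does not specify the shape.)"""
    if step < warmup:
        return peak * (step + 1) / warmup
    return peak
```

So, for instance, the rate ramps from roughly 1e-8 at step 0 up to 1e-5 by step 999, and remains 1e-5 for the rest of training.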
Summary: This paper, inspired by "System 1" and "System 2" cognitive processes, proposes a hierarchical VLM-based system to interpret high-level instructions and convert them into commands for a low-level VLA model. To train the model, the authors employ both human-labeled and synthetically generated interaction data. Some real-world experiments are conducted to demonstrate the model's ability. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: NA. Experimental Designs Or Analyses: The experiments are only conducted on real-world robots. The demos are convincing, but the comparison with other methods has significant shortcomings in its experimental settings. For example, for the comparison with GPT-4o high-level instruction decomposing experiments, what is the user prompt? What if using in-context learning, chain-of-thought, o1, or the DeepSeek R1 model for better reasoning? Supplementary Material: NA. The video page was initially empty, and the video was provided after the review began. Relation To Broader Scientific Literature: This paper provides a dual system for robotics. This work is one of the early efforts in understanding, translating, and decomposing high-level human instructions, and it combines VLA models to design an integrated model from human instructions to actions. Essential References Not Discussed: Yes. The concept of “System 1” and “System 2” was not first proposed in this paper; some other papers should be discussed, such as [A-C]. [A] Li et al. HAMSTER: Hierarchical Action Models For Open-World Robot Manipulation. ICLR 2025. [B] Bu et al. Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation. arXiv:2410.08001. [C] Zhou et al. Code-as-Monitor: Constraint-aware Visual Programming for Reactive and Proactive Robotic Failure Detection. arXiv:2412.04455. Other Strengths And Weaknesses: Strengths: - The paper is well-written and easy to follow. 
- The hierarchical understanding and reasoning for high-level human instructions are necessary for robotic VLA models. Weaknesses: - The motivation and method are somewhat disconnected. The proposed method merely involves understanding high-level instructions, which are cascaded structures rather than a second system. - This work only experiments with the pi0 VLA model as the action model. Different VLAs may have specific preferences for different low-level language command styles. How to address this issue to make the proposed method adaptable to different action models? Other Comments Or Suggestions: NA. Questions For Authors: Please refer to the motivation and experiment concerns mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We address your comments in detail below and will update our paper accordingly. > For the comparison with GPT-4o high-level instruction decomposing experiments, what is the user prompt? The user prompts for evaluation (e.g., "Hi robot, can you make me a sandwich with cheese, roast beef, and lettuce?") are the same across baselines and are included in the paper. If you are asking about the *system prompt* for GPT-4o, we provide it below (example for the Table Cleaning task) and will include it in the camera-ready version: >``` You are an AI assistant guiding a single-arm robot to bus tables. The robot can optionally place trash in the trash bin and utensils and dishes in the plastic box. Every 3 seconds, you can issue one instruction from a provided list. You will receive images from two cameras: one for a global view and one on the robot's wrist for detailed views. Interpret the user's instruction into one from the provided list for the robot to execute. Adhere strictly to the user's instruction. If ambiguous, reason out the best action for the robot. Only provide the exact instruction from the list without explanation. 
You will select your instruction from the following list:
put food container in trash bin
pick up chopstick
drop wrapper in trash
pick up plastic plate
pick up the cup
pick up white bowl
place bowl to box
pick up spoon
place trash to trash bin
drop box in trash
place take out box to trash
move to the left
pick up container
drop plate in bin
pick up the trash
pick up plastic bowl
go higher
place spoon to box
pick up the paper container
drop fork in bin
pick up the bowl
pick up the plastic container
go lower
pick up box
move to the right
drop plastic lid into recycling bin
pick up wrapper
put bowl in box
pick up the container
put the plate in the bin
pick up cup
put cup into box
throw it in the trash
pick up food container
pick up blue cup
drop the bowl into the bin
move towards me
pick up napkin
rotate counterclockwise
put the cup in the bin
throw trash away
rotate clockwise
drop plastic bowl into box
open gripper
pick up plastic cup
pick up the plate
close gripper
move away from me
go back to home position
<truncated due to character limit>
```

> What if using in-context learning, chain-of-thought, o1 or deepseek r1 model for better reasoning?

- **In-context learning**: We explored this with GPT-4o but found it did not generalize beyond in-context examples and significantly slowed inference due to long visual context. The VLM struggled to reason across many images for closed-loop tasks (e.g., sandwich making). Future work could improve learning from long-context inputs or summarize visual observations into text.
- **Chain-of-thought**: A promising direction for future work.
- **o1**: While strong in mathematical reasoning, its slow inference (>10s per step) makes it impractical for real-time robotic control.
- **Deepseek R1**: Does not support visual reasoning, limiting its applicability to our task.

> some other papers should be discussed, such as [A-C].

We will cite these concurrent works in the camera-ready version.
Key differences: - HAMSTER [A]: Focuses on high-level VLMs generating 2D end-effector trajectories. Our work outputs language commands, enabling finer dextrous behaviors (e.g., separating sticky cheese slices during sandwich making) and on-the-fly verbal corrections. The approaches are complementary; trajectory prediction could be integrated as part of chain-of-thought reasoning in future work. - RoboDual [B]: Separates generalist (latent representations) and specialist (actions), emphasizing training efficiency. We focus on open-ended instruction following. - Code-as-Monitor [C]: Uses VLM-generated code for failure detection. Our work uses VLMs as high-level policies to guide low-level VLAs and interact with humans. > The proposed method merely involves understanding high-level instructions, which are cascaded structures rather than a second system. Our framework explicitly separates high-level reasoning (VLM) from low-level execution (VLA). The VLM acts as a "second system" by decomposing abstract instructions into actionable commands, unlike monolithic VLAs that conflate reasoning and execution. We will clarify this distinction in the paper. > Different VLAs may prefer different command styles. How to make the method adaptable to other action models? Hi Robot is architecture-agnostic and can integrate any language-conditioned policy. Future work could: 1. **Fine-tune the high-level policy** using successful rollouts (e.g., via SFT) to adapt to a low-level policy’s "affordance" (i.e., its language-following capabilities). 2. Use **policy performance as feedback** (e.g., via RLHF) to align the high-level policy’s outputs with the low-level policy’s strengths. Thank you again for your thoughtful suggestions—they have strengthened our paper. We will incorporate these changes in the revision.
Summary: This paper presents Hi Robot, a hierarchical vision-language-action (VLA) model for open-ended instruction following. The system integrates a high-level vision-language model (VLM) that interprets complex prompts and user feedback with a low-level VLA policy that executes atomic actions. A synthetic data generation pipeline augments training by creating user interactions that improve generalization to diverse tasks. Hi Robot is evaluated on three real-world robotic applications, demonstrating superior performance over GPT-4o-based high-level policies and flat VLAs. The results highlight significantly improved instruction accuracy, task progress, and real-time adaptability to human corrections. Claims And Evidence: I summarize my review below; see Other Strengths And Weaknesses. Methods And Evaluation Criteria: See Other Strengths And Weaknesses. Theoretical Claims: See Other Strengths And Weaknesses. Experimental Designs Or Analyses: See Other Strengths And Weaknesses. Supplementary Material: See Other Strengths And Weaknesses. Relation To Broader Scientific Literature: See Other Strengths And Weaknesses. Essential References Not Discussed: See Other Strengths And Weaknesses. Other Strengths And Weaknesses: Strengths: 1. The VLM+VLA architecture separates high-level task decomposition from low-level execution. The high-level VLM dynamically adjusts commands using real-time visual context, while the low-level VLA handles physical nuances, outperforming flat VLAs by 40% in instruction accuracy. 2. Synthetic data generation expands the model's ability to generalize beyond training data, enhancing real-world usability. Weaknesses: 1. Lack of Novelty The proposed hierarchical robot closely resembles prior work on dual-process reasoning and control, as explored in [1]. While pairing a high-level vision-language model (VLM) for reasoning with a low-level vision-language-action (VLA) model for execution is effective, similar paradigms have been previously established [2].
Beyond the overlap in datasets, it is important to clarify the contribution of Hi Robot's hierarchical structure. 2. Unclear Differentiation from Planner + VLA Hi Robot leverages a VLM for high-level policy generation and π0 as the low-level control policy, which shares similarities with LLMPlanner [3] + OpenVLA [4]. A more explicit comparison is needed to highlight the architectural differences and empirical results between Hi Robot and LLMPlanner+OpenVLA. Specifically, how does Hi Robot's hierarchical decomposition improve over LLMPlanner's approach to task abstraction and skill execution? A deeper discussion of these aspects would strengthen the paper's contribution. 3. For mobile manipulation While Hi Robot is evaluated across single-arm, dual-arm, and mobile bimanual robots, the results do not clearly differentiate its effectiveness in mobile manipulation scenarios. Given the additional challenges posed by spatial reasoning and bimanual coordination, how does the hierarchical policy structure adapt to these factors? Does the high-level VLM account for mobility constraints when generating commands, and how does it compare to prior mobile manipulation frameworks that integrate LLMs or VLMs for motion planning and task execution? More details on task success rates, failure cases, and adaptation strategies in dynamic mobile environments would provide a clearer assessment of Hi Robot's scalability in real-world settings. 4. Unfair comparison The paper uses GPT-4o as the primary LLM-based high-level policy baseline, but it is unclear why GPT-4o was chosen over GPT-4o-1 (o1), which has stronger reasoning capabilities. Since Hi Robot's high-level policy relies heavily on structured reasoning for hierarchical task decomposition, a fair comparison should include a model with comparable reasoning ability. 5. Confusion about model architecture The paper does not clearly specify whether the high-level and low-level policies are implemented as a single unified model or two separate models.
If Hi Robot employs a single model for both high-level reasoning and low-level action generation, how does it maintain the ability to output action tokens while retaining reasoning? As discussed in the OpenVLA framework, VLA models are typically limited in their language reasoning abilities after being finetuned for action generation. If the high-level reasoning and low-level action generation are handled by separate models, then the system appears very similar to a Planner + VLA architecture. Clarification on the architecture would help assess the novelty and contribution of the proposed system. [1] Tian, Xiaoyu, et al. "Drivevlm: The convergence of autonomous driving and large vision-language models." [2] Han, ByungOk, Jaehong Kim, and Jinhyeok Jang. "A Dual Process VLA: Efficient Robotic Manipulation Leveraging VLM." [3] Song, Chan Hee, et al. "Llm-planner: Few-shot grounded planning for embodied agents with large language models.". [4] Kim, Moo Jin, et al. "Openvla: An open-source vision-language-action model.". Other Comments Or Suggestions: Based on the author's rebuttal, I will consistently adjust my rating. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback. We address each concern below and will revise the paper accordingly. > The hierarchical structure resembles prior work on dual-process systems. While building on foundational ideas, Hi Robot introduces key innovations: 1. **Interactive Open-Ended Instruction Following**: Unlike DriveVLM [1] (trajectory planning for low-level execution) or DP-VLA [2] (BC-Transformer policy for low-level execution), our VLM+VLA framework enables: - Real-time language feedback incorporation - Generalization to open-vocabulary tasks via synthetic data - Physical dexterity (e.g., separating sticky cheese slices while making sandwiches) 2. **Scalable Synthetic Data**: Manual annotation in [1] limits scalability; our pipeline automates diverse interaction generation. *Table: Capability Comparison* | Feature | DriveVLM | DP-VLA | Hi Robot | |------------------|----------|--------|----------| | Open-ended instructions | ✗ | ✗ | ✓ | | Real-time feedback | ✗ | ✗ | ✓ | | Synthetic data scaling | ✗ | ✗ | ✓ | | VLA low-level policy | ✗ | ✗ | ✓ | > How does Hi Robot improve over LLMPlanner+OpenVLA? Our experiments revealed critical limitations of ungrounded planners: - **Physical Grounding**: GPT-4o (stronger than LLMPlanner's GPT-3) fails to recover from real-world errors (e.g., misgrasps) due to lack of embodied understanding (Fig 6). - **Scalability**: LLMPlanner uses only 8 predefined actions; Hi Robot supports thousands and more via language-conditioned skills. - **Performance**: Hi Robot outperforms GPT-4o by 40% in instruction accuracy (Fig 5). *Key Advantage*: Hi Robot’s high-level VLM is aware of low-level affordances, enabling physically-realizable plans and feedback integration. > How does Hi Robot handle mobile challenges? 
The framework treats mobility as an augmentation of manipulation: - **Unified Control**: Base velocity commands are additional action dimensions, enabling whole-body coordination (e.g., reaching high shelves by moving forward while raising arms). - **Results**: - 85% task success in grocery shopping (vs. 71.7% for GPT-4o). - Failure recovery: Autonomous adaptation from teleop data (e.g., freeing stuck baskets by adjusting base/arm coordination). *Failure Analysis*: Primary issues involve unseen edge cases (e.g., dropped objects). Future work can expand teleop and synthetic data coverage to make the system more robust. > Why not compare with GPT-4o-1 (o1)? While o1 excels in mathematical reasoning: - **Speed**: >10s/inference makes it impractical for real-time control. - **Relevance**: Coding/math strengths don't directly translate to embodied reasoning. GPT-4o represents the fairest *practical* baseline for real-world deployment. > Is this a unified model or separate models? Hi Robot uses **separate but co-trained models**: 1. **High-level VLM**: Specialized for instruction decomposition and feedback integration. 2. **Low-level VLA**: Focused on action generation. The interface is learned through synthetic examples that map high-level commands to executable skills and annotated teleop data that map skill commands to low-level actions, preserving reasoning capability while enabling precise control. Thank you again for your insightful questions—they've helped us better articulate Hi Robot's contributions. We'll incorporate these clarifications in the revision.
Summary: In this work, the authors introduce Hi-Robot, a System-1/System-2 approach that leverages a Vision-Language Model (VLM) to interpret complex prompts and generate a more suitable sequence of instructions for a Vision-Language-Action Model (VLA) to complete a given task. The system also integrates feedback during execution. The authors evaluate Hi-Robot across a diverse set of robotic platforms, in tasks that require novel combinations of learned skills in real-world scenarios. The results show that Hi-Robot outperforms several prior approaches. Claims And Evidence: The system demonstrates advanced reasoning capabilities, allowing it to process complex prompts, dynamically incorporate feedback, and execute instructions beyond its training data. It enables real-time corrections during open-ended tasks, enhancing adaptability. Its novel capabilities stem from the combination of a high-level LLM planner, a low-level VLA policy, and synthetic data generation. Furthermore, the framework is inherently modular, allowing for the integration of alternative language-conditioned policies as needed. Experimental evidence supports these claims. As shown in Sections 5.3.1 and 5.3.3, the system outperforms larger models like GPT-4o in instruction accuracy and task progress, particularly in handling complex prompts and adapting to mid-episode prompt changes across different platforms. Section 5.3.2 highlights the system’s ability to modify actions based on feedback, though the term "real-time" might be misleading, as it requires inference from two 3B models; a time analysis would be beneficial. Additionally, Section 5.4.1 provides quantitative evidence that synthetic data improves system performance. While the system’s modularity is acknowledged, further analysis is needed to evaluate how different model choices, when fine-tuned on the same data, impact overall performance. 
Methods And Evaluation Criteria: Yes, but since the synthetic data are part of the contribution, I would like to see a more detailed analysis of the creation of said dataset in the main text and not in the appendix. Theoretical Claims: There are no theoretical claims in the paper, hence this section is not applicable. Experimental Designs Or Analyses: All the experiments are well designed and the analysis is sound. Supplementary Material: Both the appendix and the website were reviewed. One small comment is that some videos on the website were not working. Also, the logo on the robotic arm, in the first video, was not blurred. Relation To Broader Scientific Literature: This work is highly related to the broader scientific literature, specifically $\pi_0$, and applies the general concept of "system 1"/"system 2" cognitive processes, something currently popular in LLM-VLM and robotics research. Essential References Not Discussed: Not applicable. Other Strengths And Weaknesses: The paper is well-written and very easy to follow. Other Comments Or Suggestions: I applaud the authors for developing a system that can run on consumer-grade GPUs, making research in VLAs for robotics more accessible to a wider audience. Questions For Authors: Do the authors plan to open-source the model weights? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review and constructive suggestions. We address each point below and will incorporate these improvements in the revision. > Real-time inference timing analysis We provide detailed latency measurements across components (tested on consumer-grade RTX 4090): *Low-Level Policy Per-Step Inference Times* | Component | Time (ms) | |----------------------------|----------| | Image encoding | 14 | | Observation processing | 32 | | Action prediction (x10) | 27 | | Total (on-board) | 73 | | Total (off-board + WiFi)| 86 | For the high-level policy (single decoding step): - RTX 4090: 47ms (prefill) + 13.2ms (decode) - H100: 17.3ms (prefill) + 5.7ms (decode) These measurements confirm real-time feasibility at ~10Hz control rates. With action chunking [1], we can use it to control robots at 50Hz. > Impact of different model choices Our ablation (Fig 8) shows hierarchy improves VLA model performance even with identical data. Future work directions include: - **Architecture Studies**: E.g. video-based VLMs for temporal reasoning - **Scaling Laws**: How VLM/VLA size affects performance - **Transfer**: Which models better inherit internet pre-training knowledge We will expand this discussion in §6 (Future Work). > Move synthetic data analysis to main text We will: 1. Relocate the synthetic data section from Appendix A to §4.5 2. Add example prompts for data generation 3. Include examples of bad samples and how to avoid them > Some videos not working We've: 1. Converted all videos to SDR format 2. Added streaming-optimized versions 3. Included **new demos** showing diverse instruction following > Plan to release model weights? We will discuss this with collaborators before finalizing the camera-ready version of the paper. Thank you again for your valuable feedback! [1] Zhao et al. Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware.
Hgformer: Hyperbolic Graph Transformer for Collaborative Filtering
Accept (poster)
Summary: The authors propose Hgformer: Hyperbolic Graph Transformer for Collaborative Filtering and conduct extensive experiments to analyze the proposed method. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: Good. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths 1. The paper is well-structured. 2. The experiments are comprehensive. Weaknesses 1. The two key problems addressed in the paper, local structure modeling and embedding distortion, have already been extensively studied in previous works. 2. The combination of hyperbolic space and transformers has also been explored before. For instance, see: Hypformer: Exploring efficient transformer fully in hyperbolic space, KDD 2024. 3. Using hyperbolic space introduces additional computational overhead. Is this trade-off justified? Moreover, how does the computational cost of the proposed method compare with existing approaches? 4. The hyperparameter settings for the baseline models are not specified in the paper. Other Comments Or Suggestions: See “Other Strengths And Weaknesses” Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q: The two key problems addressed in the paper have already been extensively studied in previous works.** **A:** Thank you for pointing out these issues. As you mentioned, these two problems have indeed been discussed in prior works, and we have acknowledged these discussions in our Introduction section. However, as we emphasize, these challenges remain unresolved in the recommendation domain. Specifically, as demonstrated in Section E.1 of the Appendix, our proposed method addresses long-tail problems better than previous approaches. Moreover, simultaneously tackling both issues is non-trivial, and as highlighted in the Challenges part of our Introduction, there are still significant difficulties that need to be overcome. Additionally, extending kernel functions to hyperbolic space to enable the application of hyperbolic transformers in recommendation is a novel approach. **Q: The combination of hyperbolic space and transformers has also been explored.** **A:** Thank you for pointing out the issue. We have also noted the paper "Hypformer: Exploring Efficient Transformer Fully in Hyperbolic Space" and discussed the key differences in the Related Works section, which can be summarized as follows: 1) They focus on node classification, whereas our Hgformer is specifically tailored for collaborative filtering. 2) We propose LHGCN for effective local structure modeling in hyperbolic space, which is not explored in their work. 3) To achieve linear computational complexity, Hypformer first swaps the multiplication order of space-like values in self-attention vectors and then recalibrates the time-like values. In contrast, we adopt cross-attention for better performance in CF tasks and directly extend the kernel function to hyperbolic space, with theoretical guarantees on the approximation error. Overall, we are the first to propose a hyperbolic graph transformer in the context of collaborative filtering.
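For readers unfamiliar with the Euclidean kernel trick that the rebuttal says it extends to hyperbolic space, a minimal sketch follows. This is the generic positive-random-features construction (in the spirit of Performer-style attention), not the authors' hyperbolic variant; all function names and shapes here are illustrative assumptions.

```python
import numpy as np

def kernelized_attention(Q, K, V, m=128, seed=0):
    """Linear-time approximation of softmax attention via positive random
    features. With phi(x) = exp(W^T x - ||x||^2 / 2) / sqrt(m) and
    W ~ N(0, I), E[phi(q)^T phi(k)] = exp(q^T k), so the attention matrix
    factorizes and the cost drops from O(n^2 d) to O(n m d)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((Q.shape[-1], m))

    def phi(X):
        return np.exp(X @ W - 0.5 * np.sum(X**2, axis=-1, keepdims=True)) / np.sqrt(m)

    Qf, Kf = phi(Q), phi(K)
    numer = Qf @ (Kf.T @ V)      # never materializes the n_q x n_k matrix
    denom = Qf @ Kf.sum(axis=0)  # row-wise softmax normalizer
    return numer / denom[:, None]
```

Because the random features are strictly positive, every output row is an exact convex combination of the rows of `V`, mirroring ordinary softmax attention.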
**Q: Using hyperbolic space introduces additional computational overhead. Is this trade-off justified? How does the computational cost of the proposed method compare with existing approaches?** **A:** Thanks for your question. The operations in hyperbolic space do not differ significantly from those in Euclidean space in terms of computational complexity. The main parts of our model are LHGCN and the Hyperbolic Graph Transformer. For the LHGCN part, in Section 2.2 we define the convolution in hyperbolic space by $$ \mathbf{u}_i^{(l+1)} = \mathbf{Centroid}\bigl(\{\mathbf{u}_i^{(l)}, \{\mathbf{i}_k^{(l)} : k \in N_i\}\}\bigr) $$ and $$ \mathbf{i}_j^{(l+1)} = \mathbf{Centroid}\bigl(\{\mathbf{i}_j^{(l)}, \{\mathbf{u}_k^{(l)} : k \in N_j\}\}\bigr).$$ According to Definition B.7 in Appendix B, the above Centroid formula still yields an overall complexity $O(|E| \times d)$ for an entire pass over the graph, where $|E|$ is the number of edges and $d$ is the embedding dimension; we find that this is similar to the complexity of the LightGCN convolution process. The complexity of the Hyperbolic Graph Transformer is detailed in Figure 3 of our paper and is consistent with that of NodeFormer, a graph transformer model in Euclidean space that is capable of dealing with large-scale graphs. Hence, the hyperbolic variant retains the same order of computational complexity as its Euclidean counterpart. **Q: The hyperparameter settings for the baseline models are not specified in the paper.** **A:** Thanks for pointing out the problem. Below we provide the hyperparameter tuning ranges for baselines.
| **Model** | **Hyperparameters (Ranges)** | |--------------|---------------------------------------------------------------------------| | **BPR** | learning_rate: (2e-3, 1e-3, 5e-4) | | **NGCF** | learning_rate: (2e-3, 1e-3, 5e-4); node_dropout: (0.0, 0.1); message_dropout: (0.0, 0.1) | | **LightGCN** | learning_rate: (2e-3, 1e-3, 5e-4); n_layers: (2, 3); reg_weight: (1e-4, 1e-5) | | **HMLET** | learning_rate: (2e-3, 1e-3, 5e-4); n_layers: (3, 4); activation_function: (elu, leakyrelu) | | **HGCF** | learning_rate: (2e-3, 1e-3, 5e-4); n_layers: (2–5); scale: 0.1; learner: (adam, rsgd); margin: (0.1, 0.15, 0.2) | | **HICF** | learning_rate: (2e-3, 1e-3, 5e-4); n_layers: (2–5); scale: 0.1; learner: (adam, rsgd); margin: (0.1, 0.15, 0.2) | | **HRCF** | learning_rate: (2e-3, 1e-3, 5e-4); n_layers: (3–7); scale: 0.1; learner: (adam, rsgd); margin: (0.1, 0.15, 0.2) | | **SGFormer** | learning_rate: (2e-3, 1e-3, 5e-4); n_layers: 1; weight_decay: (1e-5, 1e-4, 1e-3); dropout_ratio: (0.0, 0.2, 0.3); num_heads: (2, 3) | | **NodeFormer** | learning_rate: (2e-3, 1e-3, 5e-4); n_layers: (2, 3); rb_order: (1, 2); num_heads: (2, 3) | | **Hypformer** | learning_rate: (2e-3, 1e-3, 5e-4); n_layers: 1; num_heads: (2, 3); margin: (0.1, 0.15, 0.2)
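To make the LHGCN complexity argument above concrete, here is a minimal sketch of centroid-based neighbor aggregation, assuming the Lorentz (hyperboloid) model of curvature -1 and the Lorentzian centroid of Law et al. (2019); the paper's actual Centroid operator (Definition B.7) may differ, and all names are illustrative.

```python
import numpy as np

def minkowski_inner(x, y):
    # Lorentzian inner product <x, y>_L = -x0*y0 + sum_i xi*yi
    return -x[..., 0] * y[..., 0] + np.sum(x[..., 1:] * y[..., 1:], axis=-1)

def lorentz_centroid(points):
    # Lorentzian centroid: normalize the plain sum so the result lands
    # back on the unit hyperboloid <x, x>_L = -1.
    s = points.sum(axis=0)
    return s / np.sqrt(-minkowski_inner(s, s))

def lhgcn_layer(user_emb, item_emb, edges):
    # One parameter-free pass: each node becomes the centroid of itself
    # and its neighbors. Building adjacency lists first means each edge
    # is touched a constant number of times, i.e. O(|E| * d) overall.
    nbrs_of_user = {u: [] for u in range(len(user_emb))}
    nbrs_of_item = {i: [] for i in range(len(item_emb))}
    for u, i in edges:
        nbrs_of_user[u].append(item_emb[i])
        nbrs_of_item[i].append(user_emb[u])
    new_users = np.stack([
        lorentz_centroid(np.stack([user_emb[u]] + nbrs_of_user[u]))
        for u in range(len(user_emb))])
    new_items = np.stack([
        lorentz_centroid(np.stack([item_emb[i]] + nbrs_of_item[i]))
        for i in range(len(item_emb))])
    return new_users, new_items
```

Note the aggregation never leaves the manifold: the sum of future-directed timelike vectors is timelike, so the normalization always returns a valid hyperboloid point.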
Summary: The paper introduces Hgformer, a novel Hyperbolic Graph Transformer framework designed to address two critical limitations in GNN-based collaborative filtering (CF): local structure bias caused by neighborhood aggregation and embedding distortion in Euclidean space. The proposed method combines a parameter-free Local Hyperbolic Graph Convolutional Network (LHGCN), which performs graph convolution entirely within the hyperbolic manifold to avoid information loss from tangent space projections, with a Hyperbolic Cross-Attention mechanism that captures global user-item interactions in bipartite graphs. To ensure scalability, the authors propose a linear-complexity approximation for the cross-attention, supported by theoretical guarantees of unbiased estimation and controllable error bounds. The numerical experiments show that Hgformer is superior to leading CF models. Moreover, compared with traditional hyperbolic graph neural network methods, this approach can further enhance the model’s performance on long-tail items. Overall, the work presents a compelling integration of hyperbolic geometry and transformers, advancing CF research with both theoretical and practical contributions. Claims And Evidence: Claim 1: LHGCN outperforms tangent-space projection-based methods (e.g., HGCF). Evidence: Figure 2(b) illustrates LHGCN’s direct hyperbolic aggregation vs. HGCF’s tangent-space mapping. Table 1 and ablation studies (Figure 4) show LHGCN achieves higher Recall/NDCG on most datasets. Claim 2: Hyperbolic cross-attention effectively captures global user-item interactions. Evidence: Removing the transformer module causes significant performance drops (e.g., -11.6% Recall@10 on Amazon Book). Case studies (Figure 5) highlight its ability to recommend distant tail items. Additionally, tail-item analysis (Appendix E) shows that Hgformer improves the recommendation of long-tail items compared to models without hyperbolic cross-attention. 
Claim 3: The linear-complexity approximation maintains performance while reducing costs. Evidence: Theorems 3.1–3.2 provide theoretical guarantees. Figure 3 visualizes the approximation workflow. However, empirical validation of training/inference time on large-scale data is missing. Methods And Evaluation Criteria: Method Soundness: 1.LHGCN’s design aligns with LightGCN’s simplicity, leveraging hyperbolic centroids for neighbor aggregation. 2.The unbiased estimation approach for cross-attention is theoretically grounded but may require tuning hyperparameters (e.g., random feature dimension m). Evaluation Adequacy: 1.Metrics (Recall@K, NDCG@K) are standard for CF. 2.Baselines include hyperbolic GNNs (HGCF, HICF) and graph transformers (SGFormer), but some (e.g., SGFormer) are not CF-specific, potentially affecting fairness. Theoretical Claims: Theorems 3.1 (unbiased estimation) and 3.2 (error bound) in Appendix C are logically sound. Experimental Designs Or Analyses: 1.Comprehensive experiments on 5 datasets with ablation studies and case analyses. 2.Code availability enhances reproducibility. Supplementary Material: Appendices include proofs, dataset details, and additional experiments. Suggestions: Add hyperparameter sensitivity analysis (e.g., curvature K, m). Clarify negative sampling strategies for the margin-ranking loss. Relation To Broader Scientific Literature: The paper connects hyperbolic GNNs (e.g., HGCF, HICF) and graph transformers, advancing CF through hyperbolic-local-global fusion. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: Novel integration of hyperbolic geometry and transformers for CF. Strong empirical results and theoretical contributions. Weaknesses: Limited comparison with 2023–2024 hyperbolic CF models. There is inconsistent formula formatting (e.g., lines 303–307 and Equation 12). Other Comments Or Suggestions: No. Questions For Authors: 1. Limited comparison with 2023–2024 hyperbolic CF models. 
How does Hgformer compare to recent hyperbolic CF methods in terms of performance and scalability? 2. Has a hyperparameter sensitivity analysis been conducted (e.g., curvature K, m)? 3. Can you clarify the negative sampling strategies used for the margin-ranking loss? 4. There is inconsistent formula formatting (e.g., Equation 12 and lines 304–307). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q: Empirical validation of training/inference time on large-scale data is missing.** **A:** Thank you for pointing this out. We have conducted empirical evaluations and reported the average computational time per epoch for several representative baselines and our proposed model on Amazon Book (our largest dataset) and Amazon Movie. In each cell, the first value represents the training time per epoch, while the second value represents the testing time. | Dataset | BPR | LightGCN | HMLET | HGCF | Hgformer | |---------------|-------------|-------------|-------------|-------------|-------------| | Amazon Book | 0.46s/0.74s | 2.22s/1.33s | 12.63s/1.28s| 6.06s/1.84s | 14.93s/1.92s| | Amazon Movie | 0.22s/0.56s | 1.41s/0.71s | 7.41s/0.5s | 3.03s/0.78s | 9.8s/0.83s | **Q: Limited comparison with 2023–2024 hyperbolic CF models. How does Hgformer compare to recent hyperbolic CF methods in terms of performance and scalability?** **A:** Thank you for pointing out the issue. We have looked into recent literature from 2023–2024 and plan to include HyperCL [1], which extends contrastive learning to hyperbolic space and applies it to collaborative filtering, as one of our baselines.
The results are as follows: | Dataset | Metric | Our Method | HyperCL | |----------------|--------------------------|------------|----------| | **Amazon Book** | Recall@10 | 0.0901 | 0.0756 | | | Recall@20 | 0.1291 | 0.1189 | | | NDCG@10 | 0.0573 | 0.0482 | | | NDCG@20 | 0.0681 | 0.0582 | | **Amazon CD** | Recall@10 | 0.0977 | 0.0828 | | | Recall@20 | 0.1401 | 0.1221 | | | NDCG@10 | 0.0567 | 0.0496 | | | NDCG@20 | 0.0678 | 0.0598 | | **Amazon Movie**| Recall@10 | 0.0803 | 0.0776 | | | Recall@20 | 0.1203 | 0.1186 | | | NDCG@10 | 0.0503 | 0.0492 | | | NDCG@20 | 0.0612 | 0.0594 | | **Douban Book** | Recall@10 | 0.1462 | 0.1392 | | | Recall@20 | 0.2052 | 0.1968 | | | NDCG@10 | 0.1030 | 0.0976 | | | NDCG@20 | 0.1189 | 0.1126 | | **Douban Movie**| Recall@10 | 0.1405 | 0.1386 | | | Recall@20 | 0.2068 | 0.2046 | | | NDCG@10 | 0.1322 | 0.1316 | | | NDCG@20 | 0.1447 | 0.1435 | | **Douban Music**| Recall@10 | 0.1386 | 0.1268 | | | Recall@20 | 0.1955 | 0.1826 | | | NDCG@10 | 0.1024 | 0.0937 | | | NDCG@20 | 0.1165 | 0.1076 | **Q: Has a hyperparameter sensitivity analysis been conducted (e.g., curvature K, m)?** **A:** Thanks for the question. We provided a sensitivity analysis in Appendix E.2. However, we did not conduct a sensitivity analysis for K and m. We have now supplemented our work with an analysis for these two hyperparameters, and the results can be found at the link below. https://anonymous.4open.science/r/Hgformer-2AE0/sensitivity_analysis_rebuttal.pdf **Q: Can you clarify the negative sampling strategies used for the margin-ranking loss?** **A:** We adopted the default approach provided by the RecBole framework. Specifically, negative samples were uniformly drawn from items with which the user had no previous interaction. To ensure a fair and consistent comparison across different methods, all baselines utilizing negative sampling followed this same strategy and paired each positive sample with exactly one negative sample.
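The uniform 1:1 negative-sampling strategy described above can be sketched as follows. This is an illustrative minimal version, not RecBole's actual sampler implementation; all names are assumptions.

```python
import random

def sample_bpr_triples(interactions, num_items, seed=0):
    """Pair each observed (user, positive item) interaction with exactly
    one negative item drawn uniformly from the items the user has never
    interacted with. `interactions` maps a user id to a set of item ids."""
    rng = random.Random(seed)
    triples = []
    for user, pos_items in interactions.items():
        for pos in pos_items:
            neg = rng.randrange(num_items)
            while neg in pos_items:  # resample until the item is unseen
                neg = rng.randrange(num_items)
            triples.append((user, pos, neg))
    return triples
```

Rejection sampling like this is cheap when catalogs are large relative to each user's history, which is the typical CF regime; the resulting triples feed directly into a margin-ranking (or BPR) loss.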
**Q: There is inconsistent formula formatting (e.g., Equation 12 and lines 304–307).** **A:** Thanks for pointing out this inconsistency! We acknowledge that they can indeed be easily misunderstood. In lines 304–307, the $u^{final}$ and $i^{final}$ are derived from Equation 14, where $u^{global}$ and $i^{global}$ are obtained from Equation 5. Meanwhile, Equation 12 is introduced to reduce the computational complexity of Equation 3 (which leads to Equation 5) to linear complexity. We will add further clarifications in the paper to improve its readability. **References:** [1] Qin Z, Cheng W, Ding W, et al. Hyperbolic Graph Contrastive Learning for Collaborative Filtering[J]. IEEE Transactions on Knowledge and Data Engineering, 2024.
Summary: The paper proposes a Hyperbolic Graph Transformer architecture to tackle the long-tail problem in CF tasks, which leverages LHGCN for graph structure modeling and hyperbolic cross-attention for global information modeling. Claims And Evidence: Are the claims made in the submission supported by clear and convincing evidence? Methods And Evaluation Criteria: The authors mention not using tangent space for aggregation, but in the "Embedding Aggregation and Optimization" section they still use aggregation, which seems inconsistent and unreasonable. Regarding the baselines, SGFormer, NodeFormer, and Hypformer were not originally designed for recommendation tasks. It remains unclear how the authors adapted these models for recommendation tasks and comparison purposes. Theoretical Claims: While the paper claims LHGCN performs "graph convolution entirely in hyperbolic manifold without mapping back to Euclidean space" (p. 2), the mathematical justification for why this preserves information better than alternative approaches is supported more empirically than theoretically. Experimental Designs Or Analyses: Is there any validation set for these experiments? How do the authors get these results? There are no statistical significance tests (t-tests, confidence intervals) to verify whether these differences are statistically meaningful or potentially due to random variation. Euclidean-space transformer-based recommenders like SASRec and BERT4Rec are missing from the baselines. Regarding the baselines, SGFormer, NodeFormer, and Hypformer were not originally designed for recommendation tasks. It remains unclear how the authors adapted these models for recommendation tasks and comparison purposes. Supplementary Material: A, B, D and E Relation To Broader Scientific Literature: The linear-complexity approximation builds upon the theoretical framework of Linformer but adapts it for hyperbolic manifolds—a mathematical contribution extending beyond recommendations.
Essential References Not Discussed: The paper proposes a Hyperbolic Transformer but does not cite "Hyperbolic Attention Networks" (Gulcehre et al., ICLR 2019), which specifically established the theoretical foundations for attention mechanisms in hyperbolic space. The paper also overlooks "Rethinking Attention with Performers", which introduced a kernel-based approximation method achieving linear complexity in Euclidean space using random feature maps. Similarly, "Nyströmformer" is not discussed despite offering another established approach to linear-complexity attention. For their long-tail recommendation contribution, the paper fails to cite "Causal Intervention for Leveraging Popularity Bias in Recommendation", which specifically addressed popularity bias with a causal inference approach. Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 2
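The review references Linformer-style linear-complexity attention as the framework the paper builds on. For readers unfamiliar with it, here is a minimal Euclidean sketch of that idea (illustrative only, not the submission's hyperbolic variant; `E` and `F` stand in for the learned k-by-n sequence projections):

```python
import numpy as np

def linformer_attention(Q, K, V, E, F):
    """Linformer-style attention: project keys/values along the sequence
    dimension, so the score matrix is n x k instead of n x n."""
    Kp, Vp = E @ K, F @ V                     # (k, d) projected keys/values
    scores = Q @ Kp.T / np.sqrt(Q.shape[1])   # (n, k) score matrix
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)         # row-wise softmax
    return w @ Vp                             # (n, d) output

rng = np.random.default_rng(0)
n, d, k = 6, 4, 2
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
E, F = (rng.normal(size=(k, n)) for _ in range(2))
out = linformer_attention(Q, K, V, E, F)
```

Because the score matrix is n-by-k rather than n-by-n, the cost is linear in the sequence length for fixed k; the hyperbolic adaptation discussed in the review would replace these Euclidean operations with manifold-aware ones.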
Rebuttal 1: Rebuttal: **Q: Inconsistency between two aggregation approaches**

**A:** Sorry for the misunderstanding. These two types of aggregation serve different purposes in our model. The aggregation in Section 2.2 gathers neighbor information for each node to capture multi-hop relationships, and is performed over multiple layers—thus, any distortion at this stage would accumulate significantly. In contrast, the aggregation of local and global information in Section 2.4 simply fuses information from the two representation spaces for each node, which is a much simpler operation and involves negligible information distortion. In fact, the aggregation step in Section 2.4 can easily be replaced by the weighted centroid of the two embeddings defined in Definition B4 in the Appendix, and we conducted additional comparative experiments:

| **Dataset**     | **Metric** | **Our method** | **Hyperbolic aggregation** |
|-----------------|------------|----------------|----------------------------|
| **Amazon CD**   | Recall@10  | 0.0977 | 0.0978 |
|                 | Recall@20  | 0.1401 | 0.1398 |
|                 | NDCG@10    | 0.0567 | 0.0572 |
|                 | NDCG@20    | 0.0678 | 0.0680 |
| **Douban Book** | Recall@10  | 0.1462 | 0.1459 |
|                 | Recall@20  | 0.2052 | 0.2045 |
|                 | NDCG@10    | 0.1030 | 0.0995 |
|                 | NDCG@20    | 0.1189 | 0.1186 |

We observe that the differences between the two methods are marginal.

**Q: Implementations of SGFormer, NodeFormer and Hypformer**

**A:** We adopted the encoder parts of these models straightforwardly and replaced their output layers with recommendation-specific modules (for NodeFormer and SGFormer we applied the BPR loss, the same as LightGCN and NGCF; for Hypformer, we applied the Hyperbolic Margin Ranking Loss, the same as our hyperbolic baselines). The detailed implementations are publicly available in the anonymized GitHub repository in the Abstract.

**Q: Lack of mathematical justification for LHGCN**

**A:** Thanks for pointing out this issue.
In the following, we provide a theoretical analysis supporting our claims. For a set of hyperbolic points $N \subset H^{d+1, K}$, the centroid is defined as the point $c \in H^{d+1, K}$ that minimizes the sum of squared hyperbolic distances to the points $x_i \in N$:

$$Centroid(N) = \arg\min_{c \in H^{d+1, K}} \sum_{x_i \in N} \bigl(d^K(c,x_i)\bigr)^2 = \arg\min_{c \in H^{d+1, K}} f(c).$$

Its closed-form solution is given by

$$c^* = \sqrt{K} \, \frac{\sum_{x_i \in N} x_i}{\bigl\| \sum_{x_i \in N} x_i \bigr\|_M}.$$

Another way is to aggregate in the tangent space at the north pole $o$,

$$\bar{c}' = \frac{1}{|N|}\sum_{x_i \in N} \log_o(x_i),$$

and then map the result back:

$$\bar{c}^* = \exp_o(\bar{c}') = \cosh\Bigl(\frac{\|\bar{c}'\|_M}{\sqrt{K}}\Bigr)o + \sqrt{K}\,\sinh\Bigl(\frac{\|\bar{c}'\|_M}{\sqrt{K}}\Bigr)\frac{\bar{c}'}{\|\bar{c}'\|_M}.$$

Since $f(c)$ is convex in hyperbolic space (see [1]), it follows that $f(c^*) \le f(\bar{c}^*)$. This means that aggregation in Euclidean space is only a suboptimal solution and will lead to distortion.

**Q: Is there a validation set? How do the authors get the results?**

**A:** Thanks for the questions. To ensure accuracy and fairness in our experiments, all the models were tested on RecBole, a widely accepted framework in the field of recommender systems. The dataset was split into training, validation, and test sets with an 8:1:1 ratio. During training, we adopted an early-stopping mechanism for all models, stopping training if no improvement was observed on the validation set for 30 epochs, and the best-performing parameters on the validation set were selected for final evaluation on the test set.

**Q: No statistical significance tests (t-tests, confidence intervals)**

**A:** We respectfully clarify that we calculate p-values for our experiments, as described in the caption of Table 1.
Here, we provide the specific p-values and t-values:

| Dataset       | p-value | t-value |
|---------------|---------|---------|
| Amazon Book   | 0.023   | 2.5     |
| Amazon CD     | 0.019   | 2.6     |
| Amazon Movie  | 0.021   | 2.7     |
| Douban Book   | 0.018   | 2.9     |
| Douban Movie  | 0.027   | 2.4     |
| Douban Music  | 0.038   | 2.3     |

**Q: Essential References Not Discussed**

**A:** Thanks for pointing out the problem. For "Hyperbolic Attention Networks", "Rethinking Attention with Performers", and "Nyströmformer", we would like to add them to the Graph Transformer part of the Related Works, and for "Causal Intervention for Leveraging Popularity Bias in Recommendation", "SASRec", and "BERT4Rec", we would like to add an extra section in the Appendix for discussion.

**Reference:**

[1] Bacák M. Computing medians and means in Hadamard spaces.
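The two aggregation schemes compared in the rebuttal above can be checked numerically. Below is a minimal sketch in the hyperboloid (Lorentz) model with $K = 1$; the function names and random test points are our own illustration, not the authors' implementation:

```python
import numpy as np

def mink(u, v):
    # Minkowski (Lorentzian) inner product: -u0*v0 + sum_i ui*vi
    return -u[..., 0] * v[..., 0] + np.sum(u[..., 1:] * v[..., 1:], axis=-1)

def hyp_dist(x, y, K=1.0):
    # geodesic distance d^K on the hyperboloid of curvature -1/K
    return np.sqrt(K) * np.arccosh(np.clip(-mink(x, y) / K, 1.0, None))

def lorentz_centroid(X, K=1.0):
    # closed-form centroid c* = sqrt(K) * sum(x_i) / ||sum(x_i)||_M
    s = X.sum(axis=0)
    return np.sqrt(K) * s / np.sqrt(abs(mink(s, s)))

def tangent_mean(X, K=1.0):
    # aggregate in the tangent space at the north pole o, then map back via exp_o
    o = np.zeros(X.shape[1]); o[0] = np.sqrt(K)
    logs = []
    for x in X:
        u = x + (mink(o, x) / K) * o          # project x onto the tangent space at o
        un = np.sqrt(abs(mink(u, u)))
        logs.append(hyp_dist(o, x, K) * u / un if un > 1e-12 else np.zeros_like(x))
    v = np.mean(logs, axis=0)
    vn = np.sqrt(abs(mink(v, v)))
    if vn < 1e-12:
        return o
    return np.cosh(vn / np.sqrt(K)) * o + np.sqrt(K) * np.sinh(vn / np.sqrt(K)) * v / vn

def lift(z, K=1.0):
    # lift Euclidean coordinates onto the hyperboloid {x : <x,x>_M = -K, x0 > 0}
    return np.concatenate([[np.sqrt(K + z @ z)], z])

rng = np.random.default_rng(0)
X = np.stack([lift(rng.normal(size=3)) for _ in range(8)])
c_star, c_bar = lorentz_centroid(X), tangent_mean(X)
f = lambda c: float(np.sum(hyp_dist(X, c) ** 2))  # sum of squared hyperbolic distances
```

Here `f(c_star)` and `f(c_bar)` can be compared directly; per the argument above, the closed-form centroid should achieve a value of $f$ no larger than the tangent-space mean.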
Differential Privacy Guarantees of Markov Chain Monte Carlo Algorithms
Accept (poster)
Summary: This paper studies differential privacy and R\'{e}nyi differential privacy for Markov chain Monte Carlo (MCMC) algorithms, for both the path and the final value of the algorithms. The general results are then applied to study two popular MCMC algorithms, the unadjusted Langevin algorithm (ULA) and stochastic gradient Langevin dynamics (SGLD). The paper establishes privacy guarantees uniform in the number of iterations, as well as bounds on the privacy of the entire trajectory. As a result, this answers an open question (Question 1.1 in Altschuler and Talwar (2022)) about uniform-in-time differential privacy guarantees in a non-convex setting on an unbounded space. The results generalize the results of Chourasia et al. (2021) to a non-convex setting, and match theirs in the strongly convex regime. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper seems to be rigorous, though I did not check all the proof details. Experimental Designs Or Analyses: N.A. Supplementary Material: Yes. Relation To Broader Scientific Literature: The paper establishes privacy guarantees uniform in the number of iterations, as well as bounds on the privacy of the entire trajectory. As a result, this answers an open question (Question 1.1 in Altschuler and Talwar (2022)) about uniform-in-time differential privacy guarantees in a non-convex setting on an unbounded space. The results generalize the results of Chourasia et al. (2021) to a non-convex setting, and match theirs in the strongly convex regime. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: Strengths (1) The paper is well written. (2) The analysis seems to be rigorous. (3) As the paper claims, the paper answers an open question (Question 1.1 in Altschuler and Talwar (2022)) about uniform-in-time differential privacy guarantees in a non-convex setting on an unbounded space.
(4) The results generalize the results of Chourasia et al. (2021) to a non-convex setting, and match theirs in the strongly convex regime. Weaknesses (1) The assumption (equation (8)) in Proposition 4.5 seems super strong to me. As a result, when applied to ULA and SGLD, Assumption 5.3 is super strong. Can you add some discussion of real examples that satisfy such an assumption? (2) Even though the results generalize the results of Chourasia et al. (2021) to a non-convex setting, the non-convexity here seems to be quite restrictive, in the sense that it is simply a strongly convex and smooth function plus a function whose gradient is uniformly bounded. Other Comments Or Suggestions: (1) In the very beginning of Section 3.2, do you need any assumption on $f$? (2) In the last line of Lemma 4.2, it is better to write $(\alpha,\epsilon)$-R\'{e}nyi-DP instead. (3) Currently, Proposition 4.5 is about privacy of the path, and Proposition 4.6 is about privacy of the final iterate. However, when you present the corresponding results for ULA, for example, you present the results for the final iterate first (Theorem 5.2) before you present the results for the path (Theorem 5.4). I think it is better to make the ordering consistent. (4) In the first equation in Proposition 4.6, is it (uniformly) for every $s$? Please make it more clear. Questions For Authors: See my comments on weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 4
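For context, the unadjusted Langevin algorithm discussed in this review admits a very short generic implementation. A hedged sketch follows (the quadratic potential and step size are placeholders, not the paper's setting; SGLD would replace `grad_U` with a stochastic minibatch gradient estimate):

```python
import numpy as np

def ula(grad_U, x0, step, n_steps, rng):
    # Unadjusted Langevin Algorithm:
    #   X_{k+1} = X_k - step * grad_U(X_k) + sqrt(2 * step) * xi_k,  xi_k ~ N(0, I)
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
        path.append(x.copy())
    return np.array(path)

# Example: quadratic potential U(x) = ||x||^2 / 2, whose target is N(0, I)
rng = np.random.default_rng(1)
path = ula(lambda x: x, x0=5.0 * np.ones(2), step=0.05, n_steps=2000, rng=rng)
```

Releasing the whole `path` corresponds to the trajectory guarantees, while releasing only `path[-1]` corresponds to the final-iterate guarantees studied in the paper.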
Rebuttal 1: Rebuttal: We are grateful to the reviewer for the positive review and for their comments. We will incorporate the reviewer's suggestions in the revised version of the paper. Below we reply to the questions that the reviewer raised: - **In the very beginning of Section 3.2., do you need any assumption on $f$?** We do not directly need to state assumptions on $f$ straight away, but restrictions on $f$ indirectly come into play to ensure that the assumptions that follow are satisfied. We will add a comment about this in the revised version of the paper. - **In the last line in Lemma 4.2, it is better to write $(\alpha,\varepsilon)$-Rényi-DP instead.** We agree and will correct this. - **Currently, Proposition 4.5. is about privacy of the path, and Proposition 4.6. is about privacy of the final iterate...** We agree with the reviewer and we will incorporate this change in the revised paper. - **In the first equation in Proposition 4.6., is it (uniformly) for every $s$? Please make it more clear.** Indeed, here we forgot to mention that the result should hold uniformly over $s$, as well as for any $x,y\in \mathbb{R}^d$. This will be fixed in the revised paper. - Regarding the weaknesses of the paper: we agree that the assumptions we consider are quite restrictive, but these should be compared to the existing literature. It is likely that relatively strict structural assumptions are needed in any case for the class of algorithms we consider. Regardless, we are going to add a discussion about the assumptions as suggested, highlighting their weaknesses and when they might be satisfied.
Summary: This paper analyzes the differential privacy (DP) guarantees of Markov Chain Monte Carlo (MCMC) algorithms, focusing on both general MCMC methods and specific Langevin-based variants. It establishes that the DP properties of the posterior distribution are crucial for ensuring the privacy of MCMC samples, showing that if the posterior is not differentially private, the MCMC chain cannot be either. Using Girsanov’s theorem and a perturbation technique, the authors derive DP and Rényi DP bounds for the entire trajectory and final iterates of the Unadjusted Langevin Algorithm (ULA) and Stochastic Gradient Langevin Dynamics (SGLD). Their results improve on standard composition bounds by providing uniform-in-time privacy guarantees, particularly in non-convex settings, which were previously challenging. Claims And Evidence: See questions Methods And Evaluation Criteria: NA Theoretical Claims: Yes. I checked the proof of Lemma 4.1. See questions Experimental Designs Or Analyses: NA Supplementary Material: Part of Appendix A and B Relation To Broader Scientific Literature: NA Essential References Not Discussed: No Other Strengths And Weaknesses: See questions Other Comments Or Suggestions: - Proposition 4.6. Is "for any x, y, s..." missing? Questions For Authors: - The authors claim that "In particular, we find that the DP of the posterior is the crucial starting point to perform Bayesian inference. This fact is supported by Proposition 3.4, which shows that if the posterior has weaker DP guarantees than the MCMC algorithm, the law of the MCMC chain after n iterations is far from the posterior in total variation distance." However, in Proposition 3.4, the $\pi$ is not defined as the stationary distribution of the transition kernel, which may leave a gap between Proposition 3.4 and the claim on the DP and convergence rate. Could the authors provide an example of MCMC that satisfies Proposition 3.4 (since it is a lower bound)? 
- Can the authors discuss the connection between Assumption 5.1 and the log-Sobolev inequality? Section 5 considers the non-convex setting. Does the function $K$ serve as a strongly convex regularization? Could the authors explain how it is used in deriving the results for algorithms where discretization is needed? - In Proposition 4.6 (Line 283) and Line 865, could the authors specify: "almost surely" with respect to what probability measure? - Could the authors provide intuition on how this path-wise analysis is better than the advanced composition theorem of DP? Does the discretization reintroduce DP accounting similar to the composition-style analysis? Can the authors further clarify how the resulting order is better than previous results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their useful comments and remarks. We are going to incorporate them in the revised version of the paper. Below we address the questions from the "Questions for authors" section. - **Proposition 4.6. Is "for any x, y, s..." missing?** Indeed, we will add this in the revised paper. - **The authors claim that "In particular, we find that the DP of the posterior is the crucial starting point to perform ..."** We shall clarify this aspect in the revised version of the paper. Indeed, in Proposition 3.4 we do not make an assumption on the stationary distribution of the Markov chain. This is because such an assumption is not required to obtain the result, and moreover, our statement then applies also to biased MCMC algorithms such as the unadjusted Langevin algorithm. In essence, the result states that the law of a differentially private MCMC algorithm can be far from any probability distribution that has worse DP guarantees. In our context, we are particularly interested in the distance to the posterior distribution, hence we state the result in such a form. The Proposition then applies very broadly to any MCMC algorithm with a DP guarantee after $n$-steps that is not satisfied by the posterior distribution. - **Can the authors discuss the connection between Assumption 5.1 and the log-Sobolev inequality? Section 5 considers the non-convex setting. Does the function serve as a strongly convex regularization? Could the authors explain how it is used in deriving the results for algorithms where discretization is needed?** The reviewer brings up a very interesting connection with structural properties of the target, one which we are interested in exploring in future works. It is true that under Assumption 5.1 the target density does obey a log-Sobolev inequality. 
However, it would not be possible to prove privacy bounds solely under the assumption that for each $\mathcal{D}$ the posterior $\pi_\mathcal{D}$ obeys a log-Sobolev inequality, since this is not enough to ensure the densities are close (for instance, by shifting the target densities). The assumption of a strongly convex part $K$ of the target posterior could indeed be a strongly convex regularizer, either for a Bayesian prior or a loss function, and we shall mention this in the revised version. We write $U_{\mathcal{D}}=K+V_{\mathcal{D}}$ since the former part induces a contraction and the latter part induces the processes to move apart. This is the same for the discretization and for continuous-time processes which target $\pi_\mathcal{D}$. - **In Proposition 4.6 (Line 283) and Line 865, could the authors specify: "almost surely" with respect to what probability measure?** Thank you for bringing this to our attention, we will clarify this aspect. The distinction between being almost surely true for $\mathbb{P}$ or $\mathbb{Q}$ is not significant since the measures are absolutely continuous and therefore have the same null sets (this is always true for measures constructed via Girsanov's theorem). We shall make this clearer in the revised version. More generally, we only consider randomness induced by the algorithm, while the dataset $\mathcal{D}$ is always considered fixed. - **Could the authors provide intuition on how this path-wise analysis is better than the advanced composition theorem of DP? Does the discretization reintroduce DP accounting similar to the composition-style analysis? Can the authors further clarify how the resulting order is better than previous results?** It is true that the results for the entire trajectory (Proposition 4.5, Theorem 5.4, and Theorem 5.8) essentially match those given by Rényi composition bounds. 
However, they exceed those given by the advanced $(\epsilon,\delta)$ composition bound and thus remove the need for complicated privacy accounting. We shall make this clearer in the revised version. However, the results for the final draw from the Markov chain (Proposition 4.6, Theorem 5.2, and Theorem 5.6) are new and use novel probabilistic techniques. In particular, these results are uniform in time, which is never possible with composition-based analysis. Additionally, they apply to processes taking values on the whole space with non-convex assumptions. We are not aware of any work in the literature that considers this setting.
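The distinction drawn above can be made concrete for the Gaussian mechanism. A minimal sketch (the numeric values are illustrative): under Rényi DP, per-step divergences add under composition, so any trajectory bound obtained this way grows linearly in the number of steps $n$, which is precisely what the uniform-in-time final-iterate results avoid.

```python
import math

def renyi_gaussian(alpha, sensitivity, sigma):
    # Rényi divergence of order alpha between N(0, sigma^2) and N(sensitivity, sigma^2)
    return alpha * sensitivity**2 / (2.0 * sigma**2)

def renyi_to_dp(eps_alpha, alpha, delta):
    # standard conversion: (alpha, eps_alpha)-Rényi DP implies
    # (eps_alpha + log(1/delta) / (alpha - 1), delta)-DP
    return eps_alpha + math.log(1.0 / delta) / (alpha - 1)

# Rényi composition is additive, so a composition-based trajectory bound grows linearly in n
n, alpha, sensitivity, sigma, delta = 100, 10, 1.0, 4.0, 1e-5
eps_path = renyi_to_dp(n * renyi_gaussian(alpha, sensitivity, sigma), alpha, delta)
```

Doubling `n` here doubles the Rényi term, whereas a uniform-in-time bound for the final draw would stay flat in `n`.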
Summary: This is a theoretical work studying DP and MCMC algorithms, focusing on: 1. Connections between mixing, the privacy of the exact posterior, and the privacy of intermediate iterates. 1. The privacy of Markov chains based on Langevin diffusion, e.g. the Unadjusted Langevin algorithm (ULA). I view the central new result as a uniform-in-time bound on the privacy loss of ULA for a certain class of non-convex loss functions. Informally, we require the loss to be strongly convex outside a ball. The key step in the proof is to show that, for coupled versions of this process run on adjacent datasets, the iterates are never far apart. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I inspected parts of the theoretical arguments closely and believe them to be correct. Overall I think the theory is sound. Experimental Designs Or Analyses: n/a Supplementary Material: I read parts of the proofs in the appendices. Relation To Broader Scientific Literature: This submission has serious issues here. There are key references missing (see below) and, with the current version, some readers might walk away misunderstanding the connections to existing work. Here are a few issues I observe. 1. The informal takeaways from Section 3 I view as obvious. It is not clear to me what value the quantitative versions add. (For example: "Any MCMC algorithm that is asymptotically exact will fail to be $(\varepsilon,\delta)$-differentially private for some $n$ when the posterior itself is not $(\varepsilon,\delta)$-differentially private.") 1. Section 4 claims a "novel proof strategy to obtain DP guarantees of MCMC algorithms," but it seems to me to be the normal strategy for proving approximate DP. I ask a question below. 1. The submission claims to solve Open Problem 1.1 from Altschuler and Talwar (2022), but (i) the submission gives an upper bound for a specific family of distributions and (ii) the submission gives no lower bound. 1. 
The paper says "we improved on known composition bounds," but then says "this essentially matches bounds presented in Chourasia et al. (2021)." Essential References Not Discussed: [1] study the privacy loss of Langevin diffusion. [2] give uniform-in-$T$ bounds on the privacy loss of DP-SGD in a non-convex setting. [1] Ganesh, Arun, Abhradeep Thakurta, and Jalaj Upadhyay. "Langevin diffusion: An almost universal algorithm for private euclidean (convex) optimization." arXiv preprint arXiv:2204.01585 (2022). [2] Asoodeh, Shahab, and Mario Diaz. "Privacy loss of noisy stochastic gradient descent might converge even for non-convex losses." arXiv preprint arXiv:2305.09903 (2023). Other Strengths And Weaknesses: none. Other Comments Or Suggestions: none. Questions For Authors: Without answers to the following questions, I cannot recommend acceptance. 1. How does your proof strategy in Section 4 differ from that of [1, Section 1.1.1], for example? How does it differ from the approaches in Chourasia et al. (2021) or Ganesh et al. (2022)? 1. Can you sketch how your paper needs to change in light of the extra citations? 1. In particular, can you re-summarize how your technical contributions improve upon existing work? [1] https://dpcourse.github.io/2021-spring/lecnotes-web/lec-09-gaussian.pdf Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their careful comments, and particularly for the suggestion of relevant connected work in the literature. In response to specific questions, we have the following responses: - **The informal takeaways from Section 3 I view as obvious. It is not clear to me what value the quantitative versions add...** The takeaways from Section 3 are indeed rather intuitive, but we did not find any such result in the literature. On the contrary, several papers proposed novel MCMC algorithms with DP-guarantees of the one-step transition kernel, motivating these as means to perform differentially private Bayesian inference. Our results from Section 3 (e.g., Proposition 3.4) make it very clear that such MCMC algorithms cannot achieve their goal of giving samples that are close to the posterior distribution unless the posterior distribution itself enjoys DP. This strongly encourages a shift in the approach to differentially private Bayesian inference. - **Section 4 claims a "novel proof strategy to obtain DP guarantees of MCMC algorithms," ...** We shall make this aspect clearer in the next version of the paper, including a discussion on the following points. In Section 4.1, we give rigorous statements on how to obtain DP using the Radon-Nikodym (RN) derivative of the laws of the randomized algorithms for two adjacent datasets. This is much more general than the result in the notes the reviewer mentioned, but it is indeed not intrinsically a novel approach. However, thanks to our formal statement, it becomes clear that the DP of MCMC algorithms based on diffusions can be studied with Girsanov's theorem, which indeed gives an expression for the RN derivative of interest. Finally, we obtain results for the algorithm that releases only the $n$-th step of the chain with a novel perturbation technique, once again relying on Girsanov's theorem. 
- **The submission claims to solve Open Problem 1.1 from Altschuler and Talwar (2022), but (i) the submission gives an upper bound for a specific family of distributions and (ii) the submission gives no lower bound.** It is true that we do not calculate exactly the optimal privacy parameters. We thank the reviewer for noting this, and we shall be more precise in the revised version. However, we do believe that we address Question 1.2 (*Does the privacy loss of Noisy-SGD increase ad infinitum in the number of iterations?*) and make progress towards Question 1.1 (*What is the privacy loss of Noisy-SGD as a function of the number of iterations?*). We note that optimal privacy bounds are only known in the Gaussian case (see Theorem 3 in Chourasia et al. (2021)) and are unlikely to be tractable in general. - **The paper says "we improved on known composition bounds," but then says "this essentially matches bounds presented in Chourasia et al. (2021)."** Indeed, the improvement in composition bounds is only in comparison to known bounds for $(\varepsilon, \delta)$ privacy, and not in comparison to known Rényi composition bounds. We thank the reviewer for noting this and shall make this point clearer in the revised version. Chourasia et al. (2021) proves a similar result using PDE machinery. Regarding the Questions for authors section. The references provided by the reviewer are very interesting and explore pertinent connections. We are going to add them to the revised version of the paper, together with a discussion on the differences with our work. For these references and every work in the literature of privacy of Markov chains we are aware of, one of the following holds: 1. The domain of the process is bounded. 2. The privacy bounds degenerate with the number of steps. 3. The deterministic part of the Markov kernel is contractive (i.e., the gradient of a strongly convex function). 
The main novelty of our work is that we prove privacy bounds without 1, 2, or 3, and use different and novel technical tools to do so. The tools used to prove privacy in [1] are either increasing bounds on Rényi divergence (see Lemma 2.2) or composition bounds that require noise that increases with time to be uniform (see Lemma 3.1). In [2], all results are either on a bounded state space or degenerate with the number of steps (for large $T>0$, the RHS of (21) will be greater than $1$). Chourasia et al. (2021) considers a more similar setting and utilises a quite ingenious PDE argument to show uniform-in-time privacy bounds for a single draw from the SGD chain. However, in order to obtain explicit bounds, the authors have to assume strong convexity of the loss function. In comparison, we use a hybrid pathwise-perturbation approach, wherein we show that the two processes in question are almost surely close, then show the Radon-Nikodym derivative of their laws is mostly small by means of a perturbative argument and Girsanov's theorem. We are not aware of any such argument in the privacy literature. --- Rebuttal Comment 1.1: Comment: Thank you for your response, this has clarified the main questions I had.
Summary: This paper presents theoretical results on differential privacy guarantees for Markov Chain Monte Carlo (MCMC) methods. The authors develop DP guarantees for both full chains and for the final state of the chain, and demonstrate how these results can be applied to specific instances of MCMC dynamics. ## Update after rebuttal The authors have answered my questions and I believe, with the promised clarifications, that the paper will be a valuable contribution. Claims And Evidence: Clear and convincing evidence for the proofs with which I could properly engage (all up to Appendix B.3); after that everything looks fine to me, but I am less confident given that I do not routinely use these theoretical tools. Methods And Evaluation Criteria: Not applicable. Theoretical Claims: I went through all the proofs in detail up until Appendix B.3. The proofs in Appendix B.3 and onwards I checked, but as stated above I am less confident commenting on them given that I do not routinely use these theoretical tools. Experimental Designs Or Analyses: Not applicable. Supplementary Material: Yes, everything up until B.3 in detail and in less detail after that (see above). Relation To Broader Scientific Literature: From my perspective the relation to the broader literature is strong. The authors connect well to the background literature on both MCMC and privacy, so I do not see issues here. Essential References Not Discussed: None. Other Strengths And Weaknesses: I think the paper is clear and well-presented. The introduction to differential privacy and Rényi privacy was good and well-paced. I especially appreciate the efforts the authors went to in providing hyperlinks in both directions between propositions and proofs, allowing me to easily jump back and forth between statements and their proofs presented in the Appendix. To my knowledge, the work is original and in my view presents significant results at the intersection of MCMC and privacy guarantees.
Just a few comments on improving clarity: - $P_{\mathcal{A}(\mathcal{D})}$ is used in Definition 2.2 on Page 2, but not defined until Page 5 (as far as I could see) – it was reasonably clear from context what it meant, but I think for improved clarity it would be beneficial to have defined it earlier. - I didn't like the overloading of notation, e.g., $\eta$ is first a measure on $(E, \mathcal{B}(E))$ but then a real parameter appearing in the discussion the DP of Monte Carlo estimators Other Comments Or Suggestions: Here are what I think are some typos I spotted: - In the line preceding Section 3, the divergence $D$ uses a comma rather than $\Vert$ to separate arguments, inconsistent with other instances of this divergence - Section 3, 2nd paragraph: did you mean to say $(\epsilon, \delta + \beta(e^{\epsilon} + 1))$-DP of $\nu_D$? - Final sentence of first paragraph of 4.2, and in the proofs in the Appendices, you write $(X_T^D)\_{t \in [0,T]}$ rather than $(X_t^D)_{t \in [0,T]}$ and I imagine the latter is what you want so that the index actually appears somewhere? - Appendix B.1: final equality should have $\mathbb{P}$ not $\mathbb{E}$ on RHS Questions For Authors: **Conclusions** Sorry for the basic question but what does it mean to "choose" a Bayesian posterior that has good DP properties? Do you just mean that you need to choose a model-prior combination that results in differentially private posteriors in order for any of the guarantees you provide to hold? **Proposition A.3** - What are $\mu$ and $\nu$ without subscripts? - Should it be $\leq$ at bottom of page 11? - Re. “Suppose now that $\vert{\vert{\mu_{\mathcal{D}'} - \nu_{\mathcal{D}'}\vert}\vert}\_{\text{TV}} \leq \zeta$ for some constant $0 < \zeta < \delta_{\mu}−\delta_{\nu}$.” – is this Assumption 3.1 in action? - In the final inequality, where did the min over the two arguments come from? 
**Assumption A.4** What is $\tilde{\delta}$ and how does it relate to the $\delta$ that is assumed to exist in that Assumption? **Appendix A.4** $\delta$ in first factor of RHS of equation on top of page 13, shouldn’t this be $\tilde{\delta}$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We are grateful to the reviewer for the careful comments. In the revised version of the paper, we will incorporate all their comments and fix all the typos, notation clashes, and other issues that were mentioned by the reviewer. Below we reply to the questions that the reviewer raised: - **Sorry for the basic question but what does it mean to "choose" a Bayesian posterior that has good DP properties? Do you just mean that you need to choose a model-prior combination that results in differentially private posteriors in order for any of the guarantees you provide to hold?** Indeed, we mean exactly what the reviewer wrote. We will clarify this aspect in the revised version of the paper. - **What are $\mu$ and $\nu$ without subscripts?** We believe the reviewer refers to Proposition A.2. This was indeed an imprecise notation. We meant to refer to the families of probability distributions $\{\mu_{\mathcal{D}}:\mathcal{D}\in\mathcal{S}\}$, $\{\nu_{\mathcal{D}}:\mathcal{D}\in\mathcal{S}\}$. We will use the correct notation in the revised paper. - **Should it be $\leq$ at bottom of page 11?** Here there was a typo before the inequality that the reviewer points to. Since we use the strict inequality on line 590, it follows we have strict inequalities also in lines 597-599. For this reason, we have a strict inequality also in the display at the bottom of page 11. - **Re. “Suppose now that $\lVert \mu_{\mathcal{D}'} - \nu_{\mathcal{D}'} \rVert_{TV}\leq \zeta$ for some constant $\zeta<\delta_\mu - \delta_\nu$.” – is this Assumption 3.1 in action?**, and also: **In the final inequality, where did the min over the two arguments come from?** In the revised version of the paper, we will clarify our proof strategy.
At this stage of the proof of Proposition A.3, we show that if $\lVert \mu_{\mathcal{D}'} - \nu_{\mathcal{D}'} \rVert_{TV}\leq \zeta$, then it must be that $\lVert \mu_{\mathcal{D}} - \nu_{\mathcal{D}} \rVert_{TV} > e^{-\varepsilon}(\delta_\mu-\delta_\nu-\zeta)$. Alternatively, $\lVert \mu_{\mathcal{D}'} - \nu_{\mathcal{D}'} \rVert_{TV} > \zeta$. Hence, in either case, the two distributions are "far" in TV distance either for $\mathcal{D}$ or for $\mathcal{D}'$. We do not know whether $\lVert \mu_{\mathcal{D}'} - \nu_{\mathcal{D}'} \rVert_{TV} \leq \zeta$ or $\lVert \mu_{\mathcal{D}'} - \nu_{\mathcal{D}'} \rVert_{TV} > \zeta$, but regardless we know that for $\tilde {\mathcal{D}} \in \{\mathcal{D},\mathcal{D}'\}$ we have the lower bound shown in the equation on lines 613-614. By taking the minimum, we take the less stringent lower bound, which always holds. Finally, we optimize for $\zeta$ to obtain the largest lower bound. This is possible since $\zeta$ can be chosen arbitrarily. - **What is $\tilde \delta$ and how does it relate to the $\delta$ that is assumed to exist in that Assumption?** This was a typo. In Assumption A.4, we should have written "There exist $\eta,\tilde{\delta}$ such that ..." - **In the first factor of the RHS of the equation on top of page 13, shouldn’t this be $\tilde\delta$?** Indeed, we will correct this in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response and for clarifying these things. I think the paper is good (with the promises the authors give to correct typos/clarifications in the revision) and I am happy to maintain my score.
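The ζ-balancing step described above can be checked with a minimal numeric sketch (toy values of my own choosing; the branch bounds $\zeta$ and $e^{-\varepsilon}(\delta_\mu-\delta_\nu-\zeta)$ follow the case split in the rebuttal, and the crossing point $\zeta^\ast = (\delta_\mu-\delta_\nu)/(e^{\varepsilon}+1)$ is where the min of the two branches is largest):

```python
import math

# Numeric sketch of the case split described in the rebuttal (toy values):
# either ||mu_{D'} - nu_{D'}||_TV > zeta, or
#        ||mu_D   - nu_D  ||_TV > e^{-eps} * (delta_mu - delta_nu - zeta),
# so the min of the two branches is a lower bound that always holds;
# we then pick the zeta that maximizes this min.

def guaranteed_bound(zeta, eps, delta_mu, delta_nu):
    return min(zeta, math.exp(-eps) * (delta_mu - delta_nu - zeta))

eps, dmu, dnu = 1.0, 0.3, 0.1
delta = dmu - dnu
# The two branches cross at zeta* = delta / (e^eps + 1); a grid search agrees.
zeta_star = delta / (math.exp(eps) + 1)
grid = [k * delta / 10_000 for k in range(1, 10_000)]
best = max(grid, key=lambda z: guaranteed_bound(z, eps, dmu, dnu))
print(zeta_star, best)
```

Since the first branch increases and the second decreases in $\zeta$, the optimum is at their intersection, which is where the closed form comes from.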
An Adaptive Orthogonal Convolution Scheme for Efficient and Flexible CNN Architectures
Accept (poster)
Summary: The paper considers the problem of constructing convolutions which correspond to orthogonal operators. While the general approach is based on BCOP (Li et al. 2019), the authors extend this framework to certain variants of convolutions like stride, dilations, grouping and transposing and consider some aspects for reducing the computational cost. A numerical example for classification on CIFAR10 shows that these extensions can improve the expressiveness of orthogonal convolutional neural networks. Claims And Evidence: The claims are clear and sufficiently supported. Methods And Evaluation Criteria: Methods and evaluation are feasible. Theoretical Claims: I only read the proofs which are not stated to be contained already in the literature. These seem to be correct, even though they are quite simple (the strengths of the paper are more in the modelling/conceptual part than in proving complicated theorems). Experimental Designs Or Analyses: The numerical setup is generally feasible to validate the claims of the paper. However, the description in the paper should be adjusted: - What is accurate and robust AOC? Are only the architectures different or also the training? Can you also construct an accurate/robust BCOP? - Please include some SOTA non-orthogonal network as a reference to show the gap between orthogonal and non-orthogonal networks - It seems like hyper-parameters and precise architectures play a massive role for the results. It would be good to compare the different convolution approaches with minimal differences in the rest of the hyperparameters... Supplementary Material: I did not look at the supplementary. Relation To Broader Scientific Literature: The general construction of orthogonal convolutions is taken from (Li et al. 2019, Xiao et al. 2018). The contributions of the paper are the extension to common variants of convolutions like stride, dilation, grouping.
Essential References Not Discussed: Many properties of orthogonal convolutions (matrix algebra/manifold property, orthogonal projections, an additional training algorithm, etc.) were discussed in the paper: Hertrich et al., "Convolutional proximal neural networks and plug-and-play algorithms", Linear Algebra and its Applications, 2021. This is highly relevant to the paper and should be discussed. Other Strengths And Weaknesses: The main construction of the orthogonal convolutions in this paper is taken from the literature (Li et al. 2019, Xiao et al. 2018). However, the authors take care of all the variants of convolutions that are used in the literature. This includes stride, dilations, grouping and computational efficiency (and also transposed convolutions, but this part is a bit trivial). I consider this a solid contribution. Additionally, the authors provide a PyTorch library for implementing their and several other orthogonal or 1-Lipschitz networks. Some weaknesses are listed below. In summary, I would see the paper above the acceptance threshold, but I would wish that the authors reconsider their wording to describe their contribution more appropriately. ## Weaknesses ### Inaccurate names, abstract, introduction - I find the name of the method (adaptive orthogonal convolutions) a bit odd. Under adaptivity I would understand that the architecture adapts to the data/training process or whatever. - The authors overstate (or rather misstate) their contributions a bit. In abstract and introduction, the authors write that they introduce a new method. Instead, they should present their contribution as it is: the introduction of stride, dilation, grouping, etc. to BCOP. ### Other - It would be useful to give a bit more intuition on some of the claims. For instance, Prop 2.4 basically considers the setting where each pixel in the input image is covered exactly once by each kernel.
This is one of the main reasons why the orthogonality condition simplifies in this case. - Proposition 2.7 is trivial. It basically states that the transposed matrix of a matrix with orthogonal columns has orthogonal rows... Other Comments Or Suggestions: Please proofread the paper again and correct typos. Some examples: - line 196/197 left: missing spaces before references - line 212/213: "convolutions kernel" -> "convolution kernels" - line 1587: broken reference "Table ??" Questions For Authors: see above. # After Rebuttal Many thanks to the authors for answering my questions and comments. I increased the score from 3 to 4. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the detailed proof verification, the interest you've shown in our paper, and the evaluation towards acceptance. We answer the main weaknesses raised in your review, and hope this can help you to increase your score. About experiments --------- > What is accurate and robust AOC? Are only the architectures different or also the training? The architectures, detailed in Appendix H, play a role in the accuracy/robustness tradeoff, but the main factor remains the loss: - The “robust” setting uses the scaled cross-entropy with margin (Prach et al.) to promote robustness. - The “accurate” setting uses Cosine similarity, chosen for its scale invariance. (Bethune 2021) showed the standard cross-entropy contains a term that promotes robustness (as softmax is not scale invariant). This will be clarified in the main part of the manuscript (sect 3.1 l383). > Can you also construct an accurate/robust BCOP? The “BCOP accurate” leads to the same accuracy as the “AOC accurate” on Cifar-10, since, as stated in l405, “AOC does not improve the expressiveness of its original building blocks (namely BCOP)”, and was thus not reported. The BCOP robust on Cifar10 is equivalent to the first line of Table 2. However, note that the BCOP method cannot scale on the Imagenet dataset. > Please include some SOTA non-orthogonal network as a reference to show the gap between orthogonal and non-orthogonal networks Table 2 includes some SOTA non-orthogonal networks, such as AOL, SLL, or Li-Resnet. > It would be good to compare the different convolution approaches with minimal differences in the rest of the hyperparameters… We made the choice to compare the published version of each method since the optimal architecture can be different. An empirical comparison of previous methods was already done by (Prach et al.).
The authors chose to use a fixed budget of 24 GPU hours. Since AOC is much faster than BCOP, it is expected to outperform BCOP in their comparison. About novelty and name of the method ----------- > The authors overstate (or rather misstate) their contributions a bit. In abstract and introduction, the authors write, they would introduce a new method. Instead they should rather present their contribution as it is: The introduction of stride, dilation, grouping etc. to BCOP The abstract indicates that “AOC [is] a scalable method for constructing orthogonal convolutions, effectively overcoming these limitations [i.e. strides, dilations, group convolutions, and transposed convolutions]”. We don’t understand what in this sentence “overstates” the contribution; could you please clarify? In the introduction, we state clearly the novelty and difference with BCOP in Table 1. The bullet points in l82-l102 list the properties AOC achieves **simultaneously**. Also, besides the kernel construction, our contribution is threefold: - The method itself. Despite the method's apparent simplicity, the work to achieve this is non-trivial: as discussed in the paper, the construction requires multiple criteria to be orthogonal, and its proof requires numerous results to be complete. - A mathematical framework that allows one to navigate between three aspects of the convolutions seamlessly: its kernel tensor, its Toeplitz matrix, and its PyTorch code. This makes many results simple to prove and extends beyond AOC (Appendix E shows how we can improve SLL, SOC, and Sandwich in nontrivial ways). - An efficient implementation that makes the usual slowdown negligible. It is packaged in an open-source library.
This library also centralizes and improves multiple existing methods (as stated in Appendix E). > name of the method (adaptive orthogonal convolutions) a bit odd We understand the reviewer's concern; we use the word ‘adaptive’ in the sense that the construction can be adjusted to modern layer features. We were thinking about Flexible Orth Conv, but the acronym is often understood as Free-Of-Charge. Others ------ > give a bit more intuition on some of the claims. For instance, Prop 2.4 We appreciate the reviewer’s suggestion to provide more intuition. We will, where possible, provide an intuition of the proofs in the main paper. > Proposition 2.7 is trivial. It basically states that the transposed matrix of a matrix with orthogonal columns has orthogonal rows We acknowledge that the mathematical framework we introduced makes the proof of this proposition trivial. However, it has implications regarding practical implementation (direct use of torch.nn.functional.conv_transpose2d) and orthogonality under stride, groups and dilation. For instance, as indicated in l251, transposition of strided convolutions could be used in upsampling layers of U-Nets. Reference ------ We would like to thank the reviewer for the pointer to (Hertrich et al.), which we missed as we focused on reparametrization methods. We totally agree with its relevance. --- Rebuttal Comment 1.1: Comment: Many thanks for your detailed answers. I have some additional comments/questions/clarifications for my original comments below. I am happy to reconsider my score afterwards. > The abstract indicates that “AOC [is] a scalable method for constructing orthogonal convolutions, effectively overcoming these limitations [i.e. strides, dilations, group convolutions, and transposed convolutions]”. We don’t understand what in this sentence “overstates” the contribution; could you please clarify?
My criticism about this point is the following: The paper mainly proposes to combine BCOP and RKO in a clever way, but does not introduce any new building blocks by itself. But the sentence "we introduce AOC (Adaptative Orthogonal Convolution), a scalable method..." completely erases this history. To provide a suggestion, I would be fine with something like > "We introduce AOC (Adaptative Orthogonal Convolution), which combines BCOP and RKO. In this way, we obtain a scalable method..." A similar addition about the close reliance on BCOP/RKO is necessary in the contributions and conclusions part. To be clear: My concern is not about novelty/contribution of the paper itself, but rather that it heavily builds on BCOP/RKO without acknowledging this fact properly in abstract, contributions and conclusions. As a side note: I agree that I should have mentioned the code library in my original review. I added it in the strengths and weaknesses part. Regarding: > Table 2 includes some SOTA non-orthogonal networks, such as AOL, SLL, or Li-Resnet. In this case I would suggest marking in Table 2 which approaches are orthogonal and which are not. Additionally, these approaches are still 1-Lipschitz, right? My original intention was to include a comparison of what is possible with a network without such constraints (which can be very restricting). --- Reply to Comment 1.1.1: Comment: Thank you for clarifying your point; we will clarify this as mentioned to better contextualize our contributions in the abstract, introduction, and conclusion. About Table 2: Yes, all methods train Lipschitz-bounded networks to ensure certified accuracy. Some methods don't train orthogonal networks, which can be inferred from Table 1. Since some approaches don't propose new layers, the orthogonality of those is not stated explicitly.
In order to provide a better picture, here is the list of all networks from Table 2 with this information:

| method | orthogonal |
|-------------|------------|
| BCOP | yes |
| GloRo | no |
| Local-Lip-B | no |
| Cayley | yes |
| SOC | yes |
| CPL | no |
| AOL | no |
| SLL | no |
| Li-Resnet | no |
| AOC | yes |
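As an aside on the "kernel tensor / Toeplitz matrix / code" framework mentioned in the rebuttal above, the kernel-to-Toeplitz correspondence can be sketched in plain Python (1-D, single channel, toy sizes of my choosing; this only illustrates the general idea, not the paper's construction):

```python
# Toy sketch: a "valid" 1-D cross-correlation with kernel k equals
# multiplication by a banded Toeplitz matrix whose rows are shifted
# copies of k.  Sizes and values are made up for illustration.

def toeplitz_of_kernel(k, n):
    m = n - len(k) + 1                      # output length ("valid" mode)
    return [[k[j - i] if 0 <= j - i < len(k) else 0.0 for j in range(n)]
            for i in range(m)]

def cross_correlate(x, k):
    m = len(x) - len(k) + 1
    return [sum(k[j] * x[i + j] for j in range(len(k))) for i in range(m)]

x = [1.0, 2.0, -1.0, 0.5, 3.0]
k = [0.2, -0.4, 0.6]
T = toeplitz_of_kernel(k, len(x))
via_matrix = [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(len(T))]
print(via_matrix, cross_correlate(x, k))  # the two agree
```

Reasoning about orthogonality on the Toeplitz side while implementing on the kernel side is, as I understand it, the kind of back-and-forth the framework enables.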
Summary: The paper proposes a new method to design scalable and versatile orthogonal convolutional layers. This layer allows further scaling of architectures composed of orthogonal layers; indeed, previous orthogonal layers lack common features of regular convolutional layers (strides, dilations, group convolutions, etc). The so-called AOC layer encompasses those new features; theoretical demonstrations are provided to support those claims. Also, AOC relies on a faster implementation of block convolution, which makes the computational overhead close to that of a vanilla convolutional layer. The utility of AOC layers is demonstrated on certified adversarial robustness tasks and computational time comparisons across various methods. Claims And Evidence: All the claims in the paper are backed by clear and convincing evidence. Methods And Evaluation Criteria: In the abstract, it is claimed that orthogonal convolutional layers are important blocks for adversarial robustness, normalizing flows, GANs, and Lipschitz-constrained models. However, in the experiments, only adversarial robustness tasks are considered. More diverse applications are welcome to demonstrate the use of AOC and the orthogonal property. Theoretical Claims: I checked the correctness of all proofs, and they seem correct. Experimental Designs Or Analyses: Experimental designs are valid for the adversarial robustness section, but more experiments on other tasks are required (with applications in normalizing flows and GANs, as promised in the introduction, or other tasks for orthogonal architectures). Supplementary Material: I checked the supplementary material, and the provided code is clean and very clear. In particular, I verified the Lipschitz property of the designed orthogonal layer. Relation To Broader Scientific Literature: Certified robustness with Lipschitz networks still lags behind the empirical robustness obtained with adversarial training methods or randomized smoothing.
Thus, work pushing boundaries in this direction is important. Essential References Not Discussed: All references are correct, but maybe add a reference to (Salman et al., 2019), which casts randomized smoothing networks as Lipschitz networks. Other Strengths And Weaknesses: # Strengths ## Reduced computational overhead The implementation of AOC has been well optimized. ## Training stability The proposed method seems to achieve competitive accuracy on ImageNet (68.2%) without any batch norm. ## Code base The authors provided a code base that is clear and complete. # Weaknesses ## Introduction l.45: "with 1-Lipschitz networks being a prime candidate (Anil et al., 2019) – an approach that requires the use of orthogonal layers". $1$-Lipschitz networks can be designed through the product upper bound and Lipschitz layers; orthogonal layers are one particular case of 1-Lipschitz layers, **but they are not the only solution** (Meunier et al., 2022; Araujo et al., 2023; Wang et al., 2023; Hu et al., 2023). l.27: "Wasserstein GANs (WGANs) [...] orthogonality in both the discriminator and generator (Miyato et al., 2018; [...]" Maybe I did not understand well, but Miyato et al., 2018 proposed a spectral normalization technique and not orthogonalization. Also, the Lipschitz constraint is only enforced on the discriminator in WGANs. l.38: "Efficient orthogonalization of these structured matrices has theoretical importance, affecting generalization (Bethune et al., 2022)" I don't see the **direct link** between efficient layer orthogonalization and the work of Bethune et al., 2022. The Lipschitz constant of the neural network divided by the margin has been linked to reducing the generalization gap (Bartlett et al., 2017), but those considerations are independent of the implementation of Lipschitz layers and, in particular, orthogonal ones. l.87: "Relaxed orthogonality approaches.
In some cases, strict orthogonality is relaxed to mitigate vanishing gradients, avoiding the computational demands of full orthogonalization (Prach & Lampert, 2022; Meunier et al., 2022; Araujo et al., 2023)" here the approaches of (Meunier et al., 2022; Araujo et al., 2023) do not rely on orthogonality or relaxed orthogonality. Overall, normalizing flows and GANs are much discussed in the introduction but do not appear later in the paper, and their mention in the Appendix is too superficial to motivate AOC. Also, the introduction feels a bit like a related work section, particularly the part on orthogonal convolutions; there is a back and forth between intro - related work - intro. ## Section 2 l.169: "For the second kernel, RKO (Serrurier et al., 2021) is the only viable option, as all other methods depend on stride emulation." RKO has been introduced in prior work by (Li et al., 2019); also redundant with l.219. l.171: "For standard orthogonal convolutions with a fixed kernel size, we rely on three main works (Xiao et al., 2018), (Li et al., 2019), and (Su et al., 2022)." Redundant with l.133. Prop. 2.4: it is row/column orthogonal, as there is a stride (not $\ell_2$-norm preserving), and not orthogonal, i.e., $Q^\top Q = Q Q^\top = I$ ($\ell_2$-norm preserving). ## Novelty It should be stated clearly that the proposed AOC is an incremental update of BCOP and RKO (Li et al., 2019). The contributions related to **Orthogonal**, **Explicit** are already developed by (Li et al., 2019) and cannot be claimed. Overall, the paper is a well-written review of orthogonal convolutional layers. However, I don't think combining two existing methods (RKO and BCOP) is a sufficiently novel contribution. ## Experiments 1) In Table 2: why not put ResNet18, which also has 0% provable accuracy, and compare with AOC?
l.380: "The overall Lipschitz constant of a sequence of layers is typically estimated as the product of the individual layer constants; however, this bound is often loose, and computing the exact constant is NP-hard (Virmaux & Scaman, 2018)" should be in the introduction. l.365: give a reference for the certificate equation. # References Bartlett, P. L., Foster, D. J., & Telgarsky, M. J. (2017). Spectrally-normalized margin bounds for neural networks. Advances in Neural Information Processing Systems (NeurIPS). Other Comments Or Suggestions: l.20: a dot is missing "finvertible residual networks (Behrmann et al., 2019) Additionally" Table 1: BCOP (Li et al., 2019) not (Singla & Feizi, 2021) l.1465: there is a stray dot. Questions For Authors: ## Questions ### Q0) In the introduction, there is confusion between the Lipschitz constraint and the orthogonal constraint. Please modify it to better reflect that orthogonality is a subset of the Lipschitz constraint. Also, in section 2, there is confusion between row/column orthogonal and orthogonal. ### Q1) Can you confirm that you do not use any batch norm to obtain accuracy on ImageNet with the AOC architecture (68.2%)? I think it should be discussed more in the paper as it is a good contribution. A related-work discussion on optimization without batch norm or gradient clipping could be a good addition to the paper. ### Q2) The proposed method unlocks scalability with the AOC layer. Can you try to scale the capacity of models beyond what has been proposed so far for orthogonal CNNs (even Lipschitz CNNs)? In terms of depth and width. The scaling law of robustness states that Lipschitz networks require a lot more parameters than regular networks to scale (Bubeck et al., 2021). ### Q3) Regarding invertible networks for NF, is there a particular use of AOC to define the inverse of an orthogonal convolutional layer? Is there a link with transposed orthogonal convolution?
### Q4) In experiments, "This is especially notable as our experiments did not use techniques such as last layer normalization (Singla & Feizi, 2022), certificate regularization (Singla & Feizi, 2021b), or DDPM augmentation (Hu et al., 2023)." Can you test your method with those techniques to have a fair comparison with LiResNet and SOC? The reported certified accuracy on CIFAR-10 of $64.3$% is way below the LiResNet SOTA ($78.1$%). ### Q5) Can you provide tasks for normalizing flows and GANs or other applications of orthogonal networks that would highlight the AOC layer? I am willing to increase my score if my concerns are addressed. ## References Sebastien Bubeck and Mark Sellke. A Universal Law of Robustness via Isoperimetry. Advances in Neural Information Processing Systems, 2021. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank the reviewer for his comprehensive review. We sincerely appreciate the effort put into it and hope our response will be at the same level. We organized our answers into 4 sections that cover the various questions and remarks. About the experiments: ------------------------------ - Q5) GANs and normalizing flows: We also believe that AOC can be impactful for GANs/NFs; however, training competitive GANs/NFs deserves its own 9 pages and is left for future work, as indicated in the conclusion. In L375, we explained our choices to evaluate AOC. These experiments were done to support two claims: 1. AOC is expressive, and 2. AOC is the most viable option for training large-scale orthogonal CNNs. - Q2) Thank you for the relevant reference. The scalability of AOC is first demonstrated in Table 2, as AOC is the only method to **successfully train an *orthogonal* CNN on Imagenet-1K**. Moreover, Table 3 shows the low memory consumption and low overhead, which are key for further scaling of orthogonal networks. - Q4) In Table 2, we took the number from (Hu et al. 2023, “A recipe for…”) instead of (Hu et al. 2023, “Unlocking…”). We apologize for the issue, and these numbers will be updated. **AOC tested with similar techniques as (Hu et al. 2023) achieves 74.85% certified robust accuracy with a network that has half the width of LiResnet** (the width was set to 256 instead of 512 to obtain results within the rebuttal period). - Why we did not put ResNet18 in Table 2: We chose to display only Lipschitz-constrained networks (i.e., those that offer robustness certificates) to avoid losing the reader. About novelty: ------------------- We state clearly the novelty and difference with BCOP in Table 1. The bullet points in l82-l102 list the properties AOC achieves **simultaneously**. Our contributions are threefold: 1. The method itself.
Despite the method's apparent simplicity, the work to achieve this is non-trivial: as discussed in the paper, the construction requires multiple criteria to be orthogonal (and special attention to the choice of channel dimensions and the order of convolution composition), and its proof requires multiple results to be complete. 2. A mathematical framework that allows one to navigate between three aspects of the convolutions seamlessly: its kernel tensor, its Toeplitz matrix, and its PyTorch code. This makes many results simple to prove and extends beyond AOC (Appendix E shows how we can improve SLL, SOC, and Sandwich in nontrivial ways). 3. An efficient implementation that makes the usual slowdown negligible. It is packaged in an open-source library. This library also centralizes and improves multiple existing methods (as stated in Appendix E). About the writing: ---------------------- - Q0.a: Lipschitz/orthogonal constraints: We agree with the suggestion to move l.380 to the introduction and will use it to clarify the point. - Q0.b: For ease of reading, we use “orthogonal” to refer to “row/column orthogonal” depending on the shape. We state “row/column” when this information is important. It will be clarified in L161. - L45: orthogonal refers to (Anil et al. 2019) and not to the global 1-Lip approach. This will be clarified by moving “(Anil et al. 2019) an approach that requires the use of orthogonal layers.” to line 12. - l27: (Miyato et al., 2018) will be moved to l26. - L38: We cited this theoretical paper to balance the citations of (Wang et al., 2020; Qi et al., 2020), which are more empirical. We agree that the link is indirect since the generalization bounds assume 1-Lipschitz networks constructed using the product bound and not orthogonal layers. It can be removed if the reviewer considers this misleading. - L87: We agree that “mitigating vanishing gradient” is a more appropriate title for this paragraph. Thanks for the suggestion.
- L365: We cited (Anil et al. 2019) for its direct application to certifiable robustness, but we agree with the relevance of (Bartlett et al. 2017). Questions: --------------- Q1 [Batchnorm]: Yes, all our networks are trained without batch normalization (in the line of (Xiao et al., 2018)). This is especially true for our ImageNet results. We agree this is important to highlight in the paper. Q3 [invertible convolution layer]: In our framework, this demonstration becomes easy. Given $ y = \text{conv}_{K}\text{(x, stride=s)} \quad \Longleftrightarrow \quad \bar{y} = (\mathcal{S}_s\mathcal{K})\bar{x} \quad \text{(Eq.2-3)}$ If the convolution is orthogonal (not row/column orthogonal), we have $(\mathcal{S}_s\mathcal{K})^{-1} = (\mathcal{S}_s\mathcal{K})^T \quad \text{(Def.2.2)}$ and then, $ \bar{x} = (\mathcal{S}_s \mathcal{K})^T \bar{y} \quad \Longleftrightarrow \quad x = \text{ConvTranspose}_K\text{(y, stride=s)} \quad \text{(Def 2.6)}$ --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal and clarifications. While I appreciate the engineering effort and clear exposition, I still believe the contribution is too incremental and lacks novelty, mainly combining existing methods (BCOP and RKO) from the same paper (Li et al. 2019). I will keep my score unchanged. --- Reply to Comment 1.1.1: Comment: We understand the reviewer's personal point of view but would like to clarify that having a method that is simple to understand and apply does not mean that such a method is simple to find (or to prove). We also want to highlight that the applications of our framework go beyond the sole construction of AOC. Appendix E, where we improve SLL, SOC, and sandwich layers **in nontrivial ways**, is a good example of this.
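The transposed-convolution inversion derived in the rebuttal above is easy to verify numerically. A toy 1-D sketch (my own hypothetical sizes, not the paper's construction): a stride-2, kernel-size-2 convolution whose $2\times 2$ kernel matrix is orthogonal is norm-preserving and is inverted by the transposed convolution with the *same* kernel:

```python
import math

# Toy 1-D check of the algebra above: with stride 2 and kernel size 2,
# each length-2 input block is mapped independently by the 2x2 kernel
# matrix K; if K is orthogonal, conv_transpose with the same K inverts it.
theta = 0.7
K = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]   # orthogonal: K^T K = I

def conv_stride2(x, K):
    # y[c][t] = sum_k K[c][k] * x[2t + k]
    T = len(x) // 2
    return [[sum(K[c][k] * x[2 * t + k] for k in range(2)) for t in range(T)]
            for c in range(2)]

def conv_transpose_stride2(y, K):
    # x[2t + k] = sum_c K[c][k] * y[c][t]   (the adjoint operator)
    T = len(y[0])
    x = [0.0] * (2 * T)
    for t in range(T):
        for k in range(2):
            x[2 * t + k] = sum(K[c][k] * y[c][t] for c in range(2))
    return x

x = [1.0, -2.0, 0.5, 3.0]
y = conv_stride2(x, K)
x_rec = conv_transpose_stride2(y, K)
print(x_rec)  # recovers x up to floating-point error
```

In the paper's notation, `conv_transpose_stride2` plays the role of $(\mathcal{S}_s\mathcal{K})^T$ applied to $\bar{y}$.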
Summary: This paper explores research on orthogonal convolutional layers and introduces AOC, a scalable method for constructing orthogonal convolutions. The authors provide a detailed introduction and implementation of their methodology, including the construction of strided, transposed, grouped, and dilated orthogonal convolutions using AOC. Experimental results demonstrate the effectiveness of the proposed framework. **update after rebuttal** Thank you for your rebuttal. I will keep my score unchanged and remain positive about this paper. Claims And Evidence: Please refer to Strengths And Weaknesses. Methods And Evaluation Criteria: Please refer to Strengths And Weaknesses. Theoretical Claims: Please refer to Strengths And Weaknesses. Experimental Designs Or Analyses: Please refer to Strengths And Weaknesses. Supplementary Material: Yes Relation To Broader Scientific Literature: Please refer to Strengths And Weaknesses. Essential References Not Discussed: Please refer to Strengths And Weaknesses. Other Strengths And Weaknesses: **Strengths** 1. The research on Adaptive Orthogonal Convolution is both intriguing and highly practical. It makes a significant contribution to neural network architecture development and holds great potential for the research community. 2. The paper is well-structured and easy to follow. 3. The implementation of AOC is thorough, and the experimental results effectively demonstrate the significance of the proposed method. **Weaknesses** 1. I guess there may be some criticisms regarding novelty (possibly?), as the paper leans more toward an implementation-oriented approach from my perspective. But for me, it's basically good enough. And it does provide additional information/knowledge to me. If the authors could offer a more in-depth and insightful analysis to meet the taste of the academic community, it would further enhance the paper’s impact. Other Comments Or Suggestions: Please refer to Strengths And Weaknesses.
Questions For Authors: Please refer to Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful evaluation and the interest you've shown in our paper, particularly regarding its practical contributions. In addition to the implementation-oriented aspects that you seem to highly appreciate, we would like to highlight the theoretical novelty of our work: the proposed mathematical framework provides a unified perspective over three key aspects of convolutions that allows theoretical proofs of orthogonality for five distinct types of convolution operations (classic, dilated, strided, grouped, transposed). Besides, we would like to emphasize that, due to the constraints of orthogonal convolutions (such as the existence results described in Achour et al. 2022, or Proposition 2.3), the choice of channel dimensions and the order of convolution composition are both non-trivial and essential to ensure orthogonality (which is another contribution of this paper). We hope this further clarifies the depth and originality of our contributions and reinforces your positive view of the paper’s potential impact.
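The composition property invoked here (the paper's Prop. 2.3: products of orthogonal maps stay orthogonal, provided the channel dimensions chain) can be sketched with toy matrices (my own, purely illustrative, standing in for the RKO and BCOP factors):

```python
import math

# Toy sketch of the composition property behind AOC (Prop. 2.3 in the
# paper): if A has orthonormal columns and B has orthonormal columns,
# and the dimensions chain (cols(A) == rows(B)), then A @ B also has
# orthonormal columns.  Matrices below are made up for illustration.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(col) for col in zip(*A)]

def gram(A):                      # A^T A; identity <=> orthonormal columns
    return matmul(transpose(A), A)

s = 1 / math.sqrt(2)
A = [[s, s], [s, -s], [0.0, 0.0]]          # 3x2, orthonormal columns
t = 0.3
B = [[math.cos(t), -math.sin(t)],
     [math.sin(t),  math.cos(t)]]          # 2x2 rotation

C = matmul(A, B)
print(gram(C))  # identity up to floating point: the composition stays orthogonal
```

The same algebra is why a mismatched channel dimension breaks the argument: a $2\times 3$ factor cannot have orthonormal columns, so the Gram identity $(AB)^\top(AB) = B^\top(A^\top A)B = B^\top B = I$ no longer goes through.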
Summary: This paper proposes an orthogonal CNN structure. Orthogonal convolution has shown success in BCOP and RKO. Utilizing the property that the product of orthogonal matrices is still orthogonal (Prop. 2.3), this paper introduces AOC in Eq. 8, which is the product of RKO and BCOP. A parallel computing technique (Fig. 2) is also proposed to significantly increase the performance (Tab. 3). Compared to existing works, this algorithm is also easily applied to striding, dilating, etc. I am not very familiar with the area, so please point out anything I say that is incorrect. 1. It seems to me the main method AOC simply attaches an RKO layer to a BCOP layer. Although this idea seems straightforward, it has good performance, as shown in Tab. 2. Do you have an intuition why it is so good? 2. On CIFAR-10, why are the accuracies shown in Tab. 2 so low? I imagine modern CNNs with several million parameters will easily reach >90% accuracy. 3. Why is AOC's provable accuracy 00.0 in Tab. 2? Is that a typo? 4. Tiny suggestion: do you think it would be better to include the number of parameters in Table 3? It seems that although AOC has fewer parameters than ResNet (53.1M vs 86.0M, as shown in Tab. 2 for IN-1K), it requires a longer time during inference in Tab. 3. Am I missing something? Currently I would keep my rating at weak accept, and I will keep trying to better understand this work during the discussion period. Looking forward to the reply from the authors! Claims And Evidence: - Methods And Evaluation Criteria: - Theoretical Claims: - Experimental Designs Or Analyses: - Supplementary Material: - Relation To Broader Scientific Literature: - Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank the reviewer for the review. We will provide additional information that we hope you will find relevant. **About 1.** While AOC does not improve the expressiveness of its original building blocks, its flexibility unlocks the construction of complex blocks as depicted in Fig. 5a. Given a fixed compute budget, AOC allows the training of larger architectures for more steps (thanks to our fast implementation, which is also our contribution). This ultimately leads to improved performance. **About 2&3.** Certifiable robustness gives guarantees for any possible adversarial attack. However, multiple works have shown a tradeoff between robustness and accuracy. This tradeoff is responsible for the low accuracies found in this field. The "AOC accurate" lines show that our networks are as expressive as unconstrained networks when the training does not optimize certified robustness. This leads to 0.0% certified robustness (even if empirical attacks may not achieve this 0% score). **About 4.** Although the authors dubbed their network "liresnets", their architecture significantly differs from the usual resnets (as does ours). Given these differences, we chose to stick to a known architecture (ResNet34) in Table 3 to ensure a fair comparison. We hope that these points will convince you of the paper's potential. --- Rebuttal Comment 1.1: Comment: I deeply thank the authors for the nice paper and the AC for the effort of holding such a great conference. I will keep my rating for the following reasons. I am partially convinced by 1. For 2, 3, 4, I am not convinced that the "tradeoff between robustness and accuracy" leads to such low accuracy. To me, the performance on CIFAR is still too low. For example, the original paper of ResNet https://arxiv.org/pdf/1512.03385 shows CIFAR-10 with >90% accuracy with <1M parameters. 
Best wishes yVT2 --- Reply to Comment 1.1.1: Comment: Thank you for your remark about the trade-off between accuracy and robustness; one could think that higher robustness implies higher accuracy. However, the work of (Béthune et al. 2022) showed that the minimization problem underlying robust training involves such a trade-off (Fig. 2 of their paper); this is confirmed empirically by (Prach et al. 2024) in Fig. 4. The literature usually reports only the best robust accuracy, as shown in our Table 2, which hides this tradeoff. (Béthune et al. 2022) Béthune, Louis, et al. "Pay attention to your loss: understanding misconceptions about Lipschitz neural networks." Advances in Neural Information Processing Systems 35 (2022): 20077-20091. (Prach et al. 2024) Prach, Bernd, and Christoph H. Lampert. "Intriguing Properties of Robust Classification." arXiv preprint arXiv:2412.04245 (2024).
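The composition property that AOC relies on, discussed above as Prop. 2.3 (a product of orthogonal matrices is itself orthogonal), is easy to verify numerically. The sketch below is illustrative only: the random matrices merely stand in for RKO and BCOP layers and are not the paper's actual construction.

```python
import numpy as np

def random_orthogonal(n, seed):
    # QR decomposition of a random Gaussian matrix yields an orthogonal Q.
    rng = np.random.default_rng(seed)
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def is_orthogonal(m, tol=1e-10):
    # M is orthogonal iff M^T M equals the identity.
    return np.allclose(m.T @ m, np.eye(m.shape[0]), atol=tol)

q_rko = random_orthogonal(8, seed=0)    # stand-in for an RKO layer
q_bcop = random_orthogonal(8, seed=1)   # stand-in for a BCOP layer
composed = q_bcop @ q_rko               # composition of the two layers

assert is_orthogonal(q_rko) and is_orthogonal(q_bcop)
assert is_orthogonal(composed)          # the product remains orthogonal
```

Because each orthogonal factor preserves the Euclidean norm, so does the composition, which is why chaining orthogonal blocks keeps the overall Lipschitz constant at 1.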
LGDM: Latent Guidance in Diffusion Models for Perceptual Evaluations
Accept (poster)
Summary: In this paper, an algorithm named LGDM (latent guidance in diffusion models) is proposed for NR-IQA. It utilizes pretrained latent diffusion models to steer the sampling process toward perceptually consistent regions on the manifold. Specifically, it extracts diffusion hyperfeatures, which are multi-scale and multi-timestep feature maps, from the denoising U-Net of a pretrained LDM. Then, it aggregates the features to predict the quality score using a lightweight regression network. Experimental results on various IQA benchmarks show that the proposed algorithm achieves better results than conventional methods. Claims And Evidence: Yes, this paper provides the propositions and theorem with proper explanation to support their claims. Also, it shows extensive experimental results to support their claims. Methods And Evaluation Criteria: Yes, the proposed algorithm is technically reasonable to me, and the evaluation process seems fair. Theoretical Claims: Yes, the theoretical claims seem to be correct to me. Experimental Designs Or Analyses: Yes, the proposed algorithm follows the standard evaluation protocol in this field. Supplementary Material: I've reviewed the supplementary material, especially Sections C and D, for more implementation details and experimental results. Relation To Broader Scientific Literature: The proposed algorithm exploits a pretrained LDM for better NR-IQA. Specifically, it extracts diffusion hyperfeatures from the denoising U-Net. Using pretrained LDMs for better performance is not new for data augmentation, segmentation, and generation. However, for IQA, there are only a few methods that use LDMs, such as GenzIQA. Hence, I think the proposed algorithm provides some contribution to the field as a reasonable and simple approach to using LDMs for NR-IQA. Essential References Not Discussed: Some recent papers are not addressed and compared. It would be better to compare with these algorithms as well. 
[1] Learning generalizable perceptual representations for data-efficient no-reference image quality assessment. WACV24 [2] Blind image quality assessment based on geometric order learning. CVPR24 Other Strengths And Weaknesses: The proposed algorithm shows good results on various IQA datasets. However, since the proposed algorithm exploits a pretrained LDM, which is trained on large-scale datasets, it has some advantages over conventional IQA methods. Therefore, I believe it should be compared more extensively with IQA methods using LDMs. At least, it would be better to provide a more detailed explanation of Table 2—for example, how the proposed algorithm fundamentally differs from other methods and why it performs better in Table 2. Also, even though Table 4 provides the timing test, it would be better to compare the inference speed with other algorithms. Other Comments Or Suggestions: - L129: D -> \mathbb D - eq (3): missing ], missing ) - L146 (right side): x0 -> x_0 - L373-374: editing comment? - "": incorrectly oriented quotation marks. It frequently appears in the paper. Questions For Authors: Overall, I think this paper makes some technical contribution to this field. However, the paper does not seem to be fully revised yet. It still contains many typos and some editing comments that should have been removed. Also, I have some questions listed in the Other Strengths And Weaknesses section. I would like to see the response from the authors. Then, I will decide my final rating. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Your feedback on our work is greatly appreciated. In the following, we will address your questions and concerns in detail. 1. **Missing References:** Thank you for pointing out the missing recent papers, GRepQ and QCN. We have already included both methods in our updated manuscript. See point-3 in our response to Reviewer GzVH (3). 2. **Comparison with Other LDM-based Methods:** We had included GenzIQA in our AIGC comparison (Table 2). In our updated manuscript, we have included GenzIQA and DP-IQA in Table 1 and discussed them in detail.

| Method | LIVEC (PLCC) | LIVEC (SRCC) | KonIQ (PLCC) | KonIQ (SRCC) | FLIVE (PLCC) | FLIVE (SRCC) | SPAQ (PLCC) | SPAQ (SRCC) |
| :- | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |
| GenzIQA | 0.897 | 0.873 | 0.932 | 0.916 | 0.718 | 0.613 | - | - |
| DP-IQA (Teacher) | 0.913 | 0.893 | 0.951 | 0.942 | 0.683 | 0.579 | 0.926 | 0.923 |
| **LGDM$\psi_{SDv1.5}$ (Ours)** | **0.940** | **0.908** | **0.972** | **0.967** | **0.812** | **0.705** | **0.948** | **0.947** |

We attribute LGDM’s superior performance over methods like GenzIQA to several key technical differences. In terms of feature extraction, LGDM directly extracts diffusion hyperfeatures, which are multi-scale and multi-timestep. This approach captures a rich spectrum of visual information—including texture, structure, and semantic cues across both spatial scales and temporal steps—providing a more robust and nuanced representation than GenzIQA’s reliance on cross-attention maps between image features and learned text prompts that are subsequently pooled. Regarding the guidance mechanism, LGDM employs Perceptual Manifold Guidance (PMG), as described in Equations (9) and (10), to actively steer the DDIM sampling trajectory at each reverse step. 
This dynamic, step-wise guidance ensures that the intermediate latent states—and therefore the extracted hyperfeatures H—are continuously aligned with both the input image $x$ and its perceptual representation $\psi_p(x)$. In contrast, GenzIQA depends on extensive prompt tuning and cross-attention weights to adapt the attention mechanism for quality assessment, which lacks the direct, iterative adjustment provided by our PMG. Finally, LGDM leverages a completely frozen pretrained LDM backbone (and can be adapted to any DM) and only trains a lightweight external regression network $g_{\phi}$. This zero-shot use of the core model enables LGDM to fully exploit the broad, general-purpose priors learned from massive datasets, thereby enhancing both robustness and generalization. GenzIQA, by comparison, requires extensive finetuning of cross-attention weights and prompt tuning, which may risk overfitting and reduce adaptability. In essence, LGDM’s combination of direct multi-step hyperfeature extraction with active, guided sampling ensures that its features are optimally tailored for perceptual quality assessment, leading to state-of-the-art performance without complex finetuning. We have extended the discussion for Table 1, and the comparison with other methods (see point-3 in our response to Reviewer-3), in our updated manuscript. 3. **Inference Speed Comparison:** We acknowledge the importance of comparing inference speed. Below, we compare the inference time with LGDM (sampling only, with half precision):

| Method | Est. Time (s) | Notes |
| :----------------- | :------------ | :------------------------------ |
| QCN | ~0.15 | Geometric ordering |
| Q-Align | ~0.1 | LoRA finetuning |
| DP-IQA (Teacher) | ~0.023 | Distilled/finetuned DM |
| GenzIQA | ~1.4 | Finetuned DM (8 steps avg) |
| LGDM (1 step) | ~1.1 | Single step |
| LGDM (10 steps) | ~9.7 | SOTA accuracy |

While slower than distilled/finetuned competitors, LGDM achieves higher accuracy. 
LGDM (10 steps) is indeed slower than highly optimized/distilled methods like DP-IQA and GenzIQA. However, this runtime reflects the cost of our multi-step PMG guidance, which is crucial for achieving SOTA accuracy; despite this, our single-step version can provide a comparable inference runtime with an accuracy trade-off. The speed should be viewed in the context of the zero-shot backbone, which saves enormous computational resources during the training/adaptation phase compared to methods requiring finetuning or distillation. It represents a trade-off, sacrificing some inference speed for significantly higher accuracy and the flexibility of using a frozen foundation model. We have updated Table 3 in our updated manuscript to incorporate the suggested changes. We sincerely apologize for the minor formatting errors. We appreciate them being pointed out. We have incorporated the suggested changes and thoroughly checked the manuscript to remove such formatting oversights. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns well. So, I will raise my rating to 3. Thank you for the detailed response. --- Reply to Comment 1.1.1: Comment: Dear Reviewer Smqd, We are pleased to hear that we were able to address all of your concerns well. We appreciate your feedback and updated review. We remain committed to addressing any remaining points you may have during the discussion phase and helping you gain more confidence in our work. Best, Authors of Paper #7621
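The guided-sampling loop defended in this rebuttal (precompute $\psi_p(x)$ once, nudge each DDIM reverse step toward it, and collect features along the way) can be sketched as follows. This is a toy illustration only, not the paper's Eq. (9)/(10) implementation: `psi`, `denoise`, and `zeta` are hypothetical stand-ins for the perceptual feature extractor, the pretrained U-Net step, and the guidance strength.

```python
import numpy as np

rng = np.random.default_rng(0)
PROJ = rng.standard_normal((4, 16))  # toy weights for the "feature extractor"

def psi(v):
    # Stand-in for the perceptual feature map psi_p (a fixed linear projection here).
    return PROJ @ v

def denoise(z, t):
    # Toy denoiser standing in for the pretrained U-Net reverse step.
    return 0.9 * z

def guided_sampling(x, steps=10, zeta=0.01):
    target = psi(x)                   # psi_p(x): precomputed once, before the loop
    z = rng.standard_normal(16)       # initial latent
    hyperfeatures = []
    for t in range(steps, 0, -1):
        z = denoise(z, t)
        # Guidance step: nudge the latent so its features move toward the target.
        grad = PROJ.T @ (psi(z) - target)
        z = z - zeta * grad
        hyperfeatures.append(psi(z))  # collect multi-timestep features ("H")
    return z, hyperfeatures
```

A lightweight regressor would then aggregate the collected `hyperfeatures` into a quality score; the list merely illustrates that H is gathered across all timesteps of the guided trajectory rather than only at the final step.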
Summary: This work proposes an NR-IQA model by leveraging latent diffusion models (LDMs). The core idea is that diffusion models inherently learn perceptually consistent regions within their data manifold. The authors introduce Perceptual Manifold Guidance (PMG), a technique that extracts multi-scale and multi-timestep feature maps (diffusion hyperfeatures) from pretrained latent diffusion models to assess image quality. Experiments on multiple datasets demonstrate that pretrained latent diffusion models inherently capture perceptually meaningful features, making them effective for NR-IQA without explicit fine-tuning. ## update after rebuttal The additional experiments address my concerns about the choice of SD. I will keep my positive rating. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed methods LGDM and PMG make sense because they align well with the core challenges of NR-IQA (lack of reference images, perceptual consistency). The evaluation criteria (benchmark datasets & metrics) are comprehensive, covering a wide range of real-world and AI-generated distortions, ensuring fair and robust assessment. Theoretical Claims: The proofs for Propositions 1 and 2 and Theorem 1 are generally correct. However, the proofs depend heavily on assumptions such as the existence of a perfect autoencoder and a noise model that perfectly concentrates on a lower-dimensional manifold. In practice, these conditions are approximations, so there may be some gaps when applying them to real-world applications. Experimental Designs Or Analyses: The experimental design and analyses are well conducted and align with standard practices in the IQA community. The primary concern is the choice of SD model. The method is built on Stable Diffusion v1.5, which consistently shows high performance. While Table 5 compares different versions of Stable Diffusion, the overall design might be sensitive to the particular pretrained model used. 
The conclusions are drawn from the results on LIVE-C and FLIVE, and may change when tested on other datasets. Supplementary Material: Yes. I reviewed the full appendix section. Relation To Broader Scientific Literature: This work extends established ideas from diffusion model research and manifold learning to tackle the specific challenges of NR-IQA. Essential References Not Discussed: No. Other Strengths And Weaknesses: Overall, the paper stands out for its innovative integration of diffusion models into NR-IQA, its rigorous theoretical underpinnings, and its extensive experimental validation. Addressing the noted weaknesses (reliance on idealized assumptions, dependency on a specific pretrained model, hyperparameter sensitivity) could further enhance its impact and ease of adoption in practical scenarios. Other Comments Or Suggestions: It would be better to compare this method with more recent NR-IQA models published after 2023, e.g., ARNIQA, LIQE, QualiCLIP+, Q-Align, etc. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback and your recognition of our work. We would like to answer your questions and address your concerns below. 1. **Reliance on Theoretical Assumptions:** We agree with the reviewer that our theoretical proofs rely on assumptions, which are idealizations of real-world scenarios. However, we argue for their practical applicability and justification within our framework. For manifold concentration (Assumption 1, Proposition 1), the assumption that high-dimensional data like images lies near a lower-dimensional manifold has been widely used in machine learning and generative modeling ([Chung et al., 2023]). While global linearity might not hold, data often exhibits significant local low-dimensional structure ([Turk & Pentland, JoCN 1991]), which is relevant as our guidance steps operate locally around the current sample estimate. Furthermore, Proposition 1 (building on [Chung et al. 2023]) discusses probabilistic concentration near the manifold, acknowledging deviations rather than strict adherence, which is more realistic. Regarding the perfect autoencoder (Proposition 2): while practical VAEs have reconstruction errors, high-quality autoencoders like the one used in SD are trained for perceptual fidelity. That is, a well-trained autoencoder also has a similar effect of mapping the guidance onto the data manifold ([He, Yutong, et al., 2023]). Most importantly, the strong SOTA performance of LGDM consistently demonstrated throughout the paper serves as robust empirical validation. It suggests that despite being approximations, these assumptions provide a useful theoretical framework and that our PMG mechanism functions effectively with the practical VAE and data distributions encountered. 2. **Dependency on SD and Hyperparameter Sensitivity:** We understand the concern about potential sensitivity to the specific SDv1.5 model. 
We selected SDv1.5 as it is a widely used, powerful, and publicly available baseline, and it empirically yielded the best results among the versions tested (Table 5). We agree that testing on more datasets would strengthen the comparison. However, we chose LIVEC and FLIVE as they represent challenging and diverse authentic distortion benchmarks (the largest). The consistent trend observed across these two datasets (v1.5 > v1.4/1.3 > v2.x) suggests the findings hold more broadly. Crucially, Table 5 also shows that LGDM performs very well even with other SD versions (e.g., v1.4 achieves 0.938/0.903 on LIVEC, still outperforming most prior SOTA). This demonstrates that our core contribution – the PMG framework and hyperfeature extraction – is robust and provides significant benefits across different underlying LDM versions, even if v1.5 offers the peak performance. The method's success does not hinge solely on v1.5. LGDM is designed to be modular: the PMG mechanism can be applied to guide feature extraction from other pretrained DMs, leveraging their respective priors. We discuss the impact of $\zeta_1$ and $\zeta_2$ in Section 4.3. Note that even though appropriate values of these parameters give us SOTA performance, the maximum deviation is well under 5%, suggesting that sensitivity is limited. 3. **Comparison with Recent NR-IQA Models:** We acknowledge the rapid progress in NR-IQA and agree that comparison with the latest methods is important. We have updated our related work section and comparison tables to include the suggested models (ARNIQA, LIQE, QualiCLIP+, Q-Align) and other relevant recent works like DP-IQA, GenZIQA, QCN, and GRepQ. 
We provide an extended comparison table below:

| Method | LIVEC | KonIQ | FLIVE | SPAQ |
| :----------- | :-----------: | :-----------: | :-----------: | :-----------: |
| LIQE | 0.866/0.865 | 0.913/0.898 | - | - |
| ARNIQA | 0.823/0.797 | 0.883/0.869 | 0.670/0.595 | 0.909/0.904 |
| QualiCLIP+ | 0.867/0.821 | 0.898/0.889 | 0.665/0.618 | 0.914/0.911 |
| Q-Align | 0.853/0.860 | 0.941/0.940 | - | 0.933/0.930 |
| QCN | 0.893/0.875 | 0.945/0.934 | 0.741/0.644 | 0.928/0.923 |
| GRepQ | 0.867/0.859 | 0.916/0.908 | 0.582/0.531 | - |
| GenZIQA | 0.897/0.873 | 0.932/0.916 | 0.718/0.613 | - |
| DP-IQA (Teacher) | 0.913/0.893 | 0.951/0.942 | 0.683/0.579 | 0.926/0.923 |
| **LGDM$\psi_{SDv1.5}$ (Ours)** | **0.940/0.908** | **0.972/0.967** | **0.812/0.705** | **0.948/0.947** |

[Chung et al., 2023] “Diffusion posterior sampling for general noisy inverse problems”, In Proc. ICLR, 2023. arXiv:2209.14687 (2022). [He, Yutong, et al., 2023] "Manifold preserving guided diffusion." arXiv preprint arXiv:2311.16424 (2023). [Turk & Pentland, JoCN 1991] "Eigenfaces for recognition." Journal of cognitive neuroscience 3.1 (1991): 71-86. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. The additional experiments address my concerns about the choice of SD. I will keep my positive rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer GzVH, We are pleased to hear that the additional experiments addressed all your concerns. If you are satisfied, we kindly request that you consider revising your score. We remain committed to addressing any remaining points you may have during the discussion phase. Best, Authors of Paper #7621
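The PLCC/SRCC pairs quoted throughout these tables are the two standard IQA correlation metrics: Pearson's linear correlation and Spearman's rank-order correlation between predicted scores and ground-truth MOS. A minimal sketch with made-up scores, using SciPy:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Illustrative predicted quality scores vs. ground-truth MOS (made-up values).
pred = np.array([0.82, 0.45, 0.91, 0.30, 0.67, 0.74])
mos = np.array([78.0, 42.0, 88.0, 35.0, 61.0, 70.0])

plcc, _ = pearsonr(pred, mos)   # linear correlation with MOS
srcc, _ = spearmanr(pred, mos)  # rank-order (monotonic) correlation
```

Both metrics lie in [-1, 1]; values near 1, as in the tables above, indicate predictions that track human opinion closely. SRCC ignores any monotone rescaling of the scores, which is why it is reported alongside PLCC.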
Summary: This paper is the first to propose using a pre-trained latent diffusion model as a perceptual model to extract perception features aligned with human perception for NR-IQA. The authors introduce a novel sampling method that adjusts samples to align with human perception while preserving the manifold. Additionally, they leverage multi-level, multi-timestep features from the diffusion model to jointly estimate the final quality score. The proposed method achieves SOTA performance across all benchmarks and demonstrates superior generalization capability. ## update after rebuttal After the authors explained the settings of previous methods and their own method in the generalization experiments, I believe that this paper is indeed very meaningful for the development of the field. Therefore, I give it a score of 3. Claims And Evidence: Yes. Methods And Evaluation Criteria: I find some parts of the authors' explanation unclear, and my biggest question concerns the input measurement. The paper does not specify how this is calculated at any point, only mentioning that methods like MUSIQ and RE-IQA can be used, which I can understand. However, it is unclear why SD1.5 can also be directly used as a measurement. According to Algorithm 1, the multi-scale features **H** have not yet been fully collected at the stage where the final quality score is supposed to be computed. So, what does $\psi_{SD_{v1.5}}$ represent? How is it calculated? Theoretical Claims: I'm not an expert in this field, so I could not fully understand the theoretical claims; I will read the comments of other reviewers and try to understand them. Experimental Designs Or Analyses: The experiment is complete. Supplementary Material: I read all of it. Relation To Broader Scientific Literature: NR-IQA is particularly important in the current era when image generation and related fields are gaining significant popularity. 
This is because real-world data often lacks ground truth, necessitating evaluation metrics that are more aligned with human perception to better assess the quality of generated images. Similarly, this approach will also be crucial for future advancements in video generation and restoration tasks. Essential References Not Discussed: No. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: 1. At this stage, assuming that the theoretical derivations and proofs are correct, I believe this paper is highly significant. However, I have some concerns about the generalization experiments. In my understanding, different benchmarks in Table 1 require different score predictors. The results in Table 3 only demonstrate one domain shift, but to establish the method as a truly usable evaluation metric, it is essential to ensure that after training on one domain, the model can achieve SOTA performance across all other domains. Therefore, I hope the authors can supplement this experiment. 2. I find some parts of the authors' explanation unclear, and my biggest question concerns the input measurement. The paper does not specify how this is calculated at any point, only mentioning that methods like MUSIQ and RE-IQA can be used, which I can understand. However, it is unclear why SD1.5 can also be directly used as a measurement. According to Algorithm 1, the multi-scale features **H** have not yet been fully collected at the stage where the final quality score is supposed to be computed. So, what does $\psi_{SD_{v1.5}}$ represent? How is it calculated? 3. For the theoretical derivations and proofs, I will read other reviewers' comments and try to understand them, then make my judgment. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your valuable insights and acknowledgment of our efforts. Below, we respond to your questions and discuss the issues you have raised. 1. **Clarification on the calculation of $\psi_p$:** We apologize for the lack of clarity regarding the calculation of the perceptual features $\psi_p$ used in our Perceptual Manifold Guidance (PMG). In our framework (specifically in Eq. 10 and Algorithm 1, Line 9), $\psi_p$ represents a feature representation (e.g., a vector or set of feature maps) extracted from the perceptual quality model for the original input image $x$ (the image being evaluated). It serves as a fixed target during the guided diffusion sampling process (think of it as features from models like RE-IQA). We simply pass the input image $x$ through the respective perceptual quality model (feature extractor) before starting the main diffusion loop in Algorithm 1. The resulting features are stored as the target $\psi_p(x)$. The case of $\psi_{SD_{v1.5}}$ is a special case where we leverage the $SD_{v1.5}$ model itself to pre-compute the perceptual features. Crucially, this calculation is done separately and beforehand, and the resulting features are distinct from the **H** collected later. One can think of these features as having a similar structure to **H**, and calculated in a similar way, but without Line 9 in Algorithm 1. We hope this clarifies that $\psi_p(x)$ is a pre-computed target feature set derived from the input image $x$, guiding the sampling process, while **H** is the set of features collected throughout this guided process used for the final prediction. We have revised the manuscript (Section 3) to make this distinction explicit and more detailed. 2. 
**Scope of Generalization Experiments (Cross-Dataset Evaluation):** We presented results for key representative pairs in Table 3, demonstrating strong performance, particularly when training on smaller datasets (LIVEC) and testing on larger ones (KonIQ), which aligns with common practices in the literature [Saha et al., CVPR 2023; Agnolucci, Lorenzo, et al. WACV 2024]. We believe these results, combined with LGDM's core design, provide strong evidence for its generalization capabilities. A key strength of LGDM is that the powerful LDM feature extractor (SDv1.5) remains frozen and is not finetuned for IQA. Only an extremely lightweight regression head ($g_{\theta}$) is trained on IQA labels for each specific training dataset. This reliance on the general-purpose priors of the foundation model inherently promotes generalization across different datasets and distortion types. Achieving SOTA performance within each of the diverse datasets (Tables 1, 2, 3, and 7) using this approach strongly suggests the robustness and general applicability of the extracted hyperfeatures. Many methods require significant finetuning or adaptation of their entire backbone on large aggregated datasets for good cross-dataset performance. A direct "train once, test all" comparison might unfairly disadvantage our approach, which explicitly avoids large-scale backbone tuning to maintain adaptability and leverage zero-shot priors. The strong results in Table 3 already show LGDM generalizes better than competing methods under the standard protocol. Despite this, below we show an extension of our cross-dataset validation where models are trained on FLIVE and evaluated on the rest (we include the extended version of this in our updated manuscript). 
| Method | Test: LIVEC | Test: KonIQ | Test: SPAQ |
| :------------------------ | :---------: | :---------: | :---------: |
| CONTRIQUE | 0.734 | 0.777 | 0.820 |
| RE-IQA | 0.690 | 0.796 | 0.825 |
| ARNIQA | 0.699 | 0.798 | 0.837 |
| QCN | 0.653 | 0.728 | 0.815 |
| LIQE | 0.743 | 0.813 | 0.713 |
| GRepQ | 0.751 | 0.810 | 0.829 |
| QualiCLIP+ | 0.725 | 0.817 | 0.841 |
| LGDM$\psi_{SDv1.5}$ (Ours) | 0.849 | 0.802 | 0.838 |

[Saha et al., CVPR 2023] "Re-iqa: Unsupervised learning for image quality assessment in the wild." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [Agnolucci, Lorenzo, et al. WACV 2024] "Arniqa: Learning distortion manifold for image quality assessment." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024. --- Rebuttal Comment 1.1: Comment: You mean that other methods require fine-tuning on additional data? If they only train a larger backbone than your projector, this cannot be used as evidence that your generalization performance is similar, because in fact you are using a pre-trained model with more parameters. Additionally, considering real applications, although the pre-trained knowledge of diffusion is richer, the nature of your projector limits the generalization performance of your method, which is a fatal flaw. --- Reply to Comment 1.1.1: Comment: Thank you very much for engaging further in the discussion phase and for your thoughtful feedback on our rebuttal. We truly appreciate you taking the time to provide additional comments, which help us improve the clarity and rigor of our work. First, we have performed the specific cross-dataset evaluations the reviewer suggested and discussed in the previous comment. Thank you for this suggestion; these additional experiments indeed strengthen the evaluation of LGDM's generalization capabilities, and we observe that LGDM consistently demonstrates strong performance even in this challenging setting. 
We also appreciate the opportunity to clarify our statement regarding fine-tuning in other methods. When we mentioned, *"Many methods require significant finetuning or adaptation of their entire backbone on large aggregated datasets for good cross-dataset performance,"* we were specifically referring to the practice within the IQA domain. Such methods involve extensive fine-tuning of their entire pipeline, including their large-parameter feature extraction backbones (e.g., CLIP in LIQE, QualiCLIP+, and CLIP-IQA+), on IQA datasets specifically. For cross-dataset generalization in particular, approaches further fine-tune or adapt their pipeline on large-scale IQA datasets (like FLIVE or KADID-10k) to achieve better performance when testing on unseen datasets. In contrast, a core design choice of LGDM is to keep the powerful backbone completely frozen. We do not perform any fine-tuning of this backbone on any IQA data at any point. The only trainable component is the extremely lightweight regression head ($g_{\theta}$). You raised a valid point regarding the potential limitation of the regressor head on generalization. However, we argue that this design is actually a key strength that promotes generalization, and it is not uncommon in the IQA space [Saha et al., CVPR 2023; Agnolucci, Lorenzo, et al. WACV 2024]. Our extracted hyperfeatures **H** capture strong perceptual quality information without any IQA-specific training. The lightweight regression head ($g_{\theta}$) primarily serves to map these highly informative perceptual features to the specific MOS. This mapping task does not require large amounts of data or a complex head, as the frozen LDM, or more specifically our PMG, handles the heavy lifting of representative feature extraction. Therefore, the projector head does not act as a bottleneck limiting generalization. Instead, by relying on the robust perceptual features from PMG, our method achieves strong generalization. 
This is evidenced by the consistent SOTA performance in Tables 1, 2, 3, and 7 and is now further corroborated by the strong results in the newly added cross-dataset evaluations (updated Table 4 and previous comment). While the LDM itself has many parameters, the number of parameters trained for the IQA task in our approach is minimal, making it highly adaptable and efficient while effectively leveraging the zero-shot capabilities of the foundation model. We hope the above clarifications and the additional experiments sufficiently address your concerns. If you are satisfied, we kindly request that you consider revising your score. We remain committed to addressing any remaining points you may have during the discussion phase. Best,

[Saha et al., CVPR 2023] "Re-IQA: Unsupervised learning for image quality assessment in the wild." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.

[Agnolucci et al., WACV 2024] "ARNIQA: Learning distortion manifold for image quality assessment." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision. 2024.
Summary: This paper introduces Latent Guidance in Diffusion Models (LGDM) for No-Reference Image Quality Assessment (NR-IQA), leveraging the powerful representation capabilities of pretrained Latent Diffusion Models (LDMs). Specifically, the authors propose Perceptual Manifold Guidance (PMG) to steer the sampling process toward perceptually consistent regions on the data manifold. Additionally, intermediate multi-scale and multi-time features from the denoising U-Net are utilized to estimate image quality through a lightweight network. The evaluation is conducted on both real and synthetic IQA datasets. ## update after rebuttal I appreciate the authors’ detailed response. After re-evaluating the paper in light of their reply, most of my concerns have been addressed. However, the overall performance heavily relies on ψp, which is an existing NR-IQA model. In particular, the case where ψ = ∅ yields results that are inferior to the comparison methods. On the other hand, the approach exhibits some innovation, and the experimental outcomes suggest that it leads to improved performance. Taking into account the feedback from other reviewers as well, I have increased my score to 3. Claims And Evidence: No, why use multi-timestep feature maps instead of just the features from the final step? The authors need to provide more evidence. Methods And Evaluation Criteria: Yes, the method was evaluated on public datasets with standard metrics. Theoretical Claims: I did not thoroughly check the correctness of some proofs. Experimental Designs Or Analyses: Yes, I have reviewed the validity of the experiment design and analysis. Supplementary Material: This paper does not include supplementary materials. Relation To Broader Scientific Literature: A novel approach for using pretrained unconditional latent diffusion models for NR-IQA. Essential References Not Discussed: Some diffusion-model-based NR-IQA methods are not included and discussed.
DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild. Other Strengths And Weaknesses: Strengths: 1. Using pre-trained diffusion models for no-reference image quality assessment is interesting. 2. The proposed method achieves state-of-the-art performance on public datasets. Weaknesses: 1. The performance of the proposed method heavily depends on ψp. ψp is an existing NR-IQA method. 2. The time complexity of the proposed method is high. 3. Why use multi-timestep feature maps instead of just the features from the final step? In the initial stages, the features may contain excessive noise, which could potentially affect the accuracy of the evaluation. Other Comments Or Suggestions: N/A Questions For Authors: Please see the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We would like to address your questions and concerns below. 1. **Multi-Timestep Features (Weakness 3):** Thank you for bringing up this question. Our method, Perceptual Manifold Guidance (PMG), operates iteratively during the DDIM sampling process. At each step t, PMG applies guidance (Eqs. 9, 10) to refine the estimated clean latent $\hat{z}_{0|t}$ towards the input image $x$ and its perceptual features $\psi_p(x)$. Using only the final step (t=1) allows for only one application of this guidance, which may not guarantee convergence onto the perceptually relevant sub-manifold (Theorem 1). In contrast, performing multiple steps (we use 10 steps) allows PMG to iteratively steer the latent trajectory more effectively onto the perceptually relevant submanifold. This multi-step refinement process is crucial for generating perceptually aligned hyperfeatures **H** that lead to SOTA performance. As shown in Table 4 and Figure 7 (Appendix D), using only a single step yields lower performance (SRCC 0.624) compared to using 10 steps (SRCC 0.705). While the 1-step features have the least intrinsic noise, the lack of sufficient PMG iterations makes them less perceptually aligned with the target quality. Aggregating features $h_t$ across time captures the evolution of the image representation during guided reconstruction, providing richer information about quality-related aspects (e.g., texture refinement, artifact removal) than a static snapshot at the final time step. In summary, a single step can still produce comparable results, allowing practitioners to trade some accuracy for faster inference. 2. **Dependence on $\psi_p$ (Weakness 1):** We would like to clarify that while the choice of $\psi_p$ influences the degree of performance boost, LGDM's success is not heavily dependent on it.
As shown in Tables 1 (Authentic) and 7 (Synthetic), our method LGDM-$\psi_{\phi}$ (where $\psi_p$ is disabled by setting $\zeta_2$ = 0) already achieves strong results, often matching or exceeding previous SOTA. This baseline performance relies only on the pretrained LDM's features **H** extracted using data consistency guidance (Eq. 9). This demonstrates the inherent perceptual capabilities captured by the LDM with appropriate sampling and our hyperfeature approach. Furthermore, the $\psi_p$ term (Eq. 10) acts as a refinement step, guiding the sampling towards a specific perceptual submanifold defined by $\psi_p(x)$. Its effectiveness depends on how well $\psi_p(x)$ correlates with human perception. **Optimal guidance is internal ($\psi_{SDv1.5}$):** Crucially, our best results across all datasets are achieved with LGDM-$\psi_{SDv1.5}$. This inherently reduces the over-reliance on external quality models. 3. **Time Complexity (Weakness 2):** We acknowledge that LGDM’s inference time is higher than that of non-diffusion methods. However, our approach leverages a zero-shot pretrained LDM backbone, avoiding the expensive training or fine-tuning required by methods like DP-IQA or GenZIQA. Furthermore, since our PMG is adaptable, it can be used with any other efficient DM model with strong priors to potentially reduce the inference cost. Below, we compare the inference time with LGDM (sampling only, with half precision):

| Method | Est. Time (s) | Notes |
| :----------------- | :------------ | :------------------------------ |
| QCN | ~0.15 | Geometric ordering |
| Q-Align | ~0.1 | LoRA finetuning |
| DP-IQA (Teacher) | ~0.023 | Distilled/finetuned DM |
| GenzIQA | ~1.4 | Finetuned DM (8 steps avg) |
| LGDM (1 step) | ~1.1 | Single step |
| LGDM (10 steps) | ~9.7 | SOTA accuracy |

4. **Missing Reference (DP-IQA [Fu et al., arXiv 2024]):** Thank you for pointing out DP-IQA [Fu et al., arXiv 2024].
We have added it to our related work discussion and included it in our comparison tables in the updated manuscript.

| Method | LIVEC (PLCC) | LIVEC (SRCC) | KonIQ (PLCC) | KonIQ (SRCC) | FLIVE (PLCC) | FLIVE (SRCC) | SPAQ (PLCC) | SPAQ (SRCC) |
| :---------------------- | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |
| DP-IQA (Teacher) | 0.913 | 0.893 | 0.951 | 0.942 | 0.683 | 0.579 | 0.926 | 0.923 |
| **LGDM-$\psi_{SDv1.5}$ (Ours)** | **0.940** | **0.908** | **0.972** | **0.967** | **0.812** | **0.705** | **0.948** | **0.947** |

5. **Supplementary Material Clarification:** The Appendix included at the end of the main PDF serves as our supplementary material.

[De et al., arXiv 2024] "GenzIQA: Generalized Image Quality Assessment using Prompt-Guided Latent Diffusion Models." arXiv preprint arXiv:2406.04654 (2024).

[Fu et al., arXiv 2024] Fu, Honghao, et al. "DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild." arXiv preprint arXiv:2405.19996 (2024).
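To make the multi-timestep argument in point 1 concrete, here is a minimal, self-contained sketch of guided sampling with feature aggregation across steps. This is our own toy illustration, not the authors' implementation: `toy_denoiser`, `guidance_step`, and `extract_hyperfeatures` are hypothetical stand-ins for the U-Net, the guidance of Eqs. 9-10, and the hyperfeatures **H**.

```python
import numpy as np

def toy_denoiser(z, t):
    """Stand-in for the U-Net: returns a crudely 'denoised' latent and
    an intermediate feature map (here just a nonlinear projection)."""
    feat = np.tanh(z * (1.0 + 0.1 * t))
    return 0.9 * z, feat

def guidance_step(z_hat, x_latent, step_size=0.5):
    """Stand-in for data-consistency guidance (in the spirit of Eq. 9):
    pull the estimated clean latent toward the input's latent."""
    return z_hat - step_size * (z_hat - x_latent)

def extract_hyperfeatures(x_latent, num_steps=10, seed=0):
    """Run a guided toy sampling loop and concatenate the per-step
    features h_t into a single multi-timestep 'hyperfeature' H."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(x_latent.shape)   # start from noise
    feats = []
    for t in range(num_steps, 0, -1):         # iterative refinement
        z_hat, h_t = toy_denoiser(z, t)
        feats.append(h_t)                     # snapshot at this timestep
        z = guidance_step(z_hat, x_latent)    # steer toward the input
    return np.concatenate(feats)              # richer than the final step alone

H = extract_hyperfeatures(np.ones(4), num_steps=10)
print(H.shape)   # one feature block per timestep: (40,)
```

A lightweight regression head would then map `H` to a quality score; with `num_steps=1` only a single guidance application is performed, mirroring the accuracy/speed trade-off discussed in the rebuttal.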
Best of Both Worlds: Regret Minimization versus Minimax Play
Accept (poster)
Summary: This submission studies whether an algorithm can achieve $O(1)$ regret compared to a specific fixed comparator strategy while also guaranteeing $O(\sqrt{T})$ regret compared to the best strategy in hindsight in symmetric two-player zero-sum games. The authors focus on bandit feedback settings, where the learner only observes the cost of their chosen action. The authors answer this question affirmatively in symmetric two-player zero-sum games by introducing algorithms that navigate the trade-off between no-regret learning and minimax equilibrium strategies in both normal-form and extensive-form games. The authors’ algorithm is a generalization of the “phased aggression” approach of Even-Dar et al. 2008, which was originally proposed for the full information setting. At a high level, the algorithm plays a convex combination of the comparator strategy and the strategy chosen by a no-regret algorithm at each time-step. Whenever the algorithm estimates that the comparator is a poor choice, it decreases the weight on the comparator strategy (thereby increasing the weight on the no-regret strategy). The authors use importance weighting to estimate the losses of each action, given the bandit feedback. The no-regret algorithm used is online mirror descent with the KL divergence regularizer. The regret guarantees for this algorithm suffer a multiplicative dependence on a term corresponding to the “exploration gap”. However, the authors prove a lower bound which shows that this dependence is indeed necessary. The authors also extend their results to handle extensive-form games. Their algorithm for this setting is similar, but uses online mirror descent with an unbalanced dilated KL divergence regularizer as the no-regret algorithm. As was the case in normal-form games, the authors show that their performance guarantees are “nearly tight” by providing a corresponding lower bound.
Finally, the authors empirically evaluate their algorithm for extensive-form games on Kuhn poker and compare its performance to standard no-regret learning and minimax play. Their empirical evaluations largely confirm their theoretical findings, i.e., the proposed method gives “best of both worlds” performance. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: This submission belongs to the literature on learning and equilibrium computation in games. Specifically, this work is the first to study best-of-both-worlds performance under bandit feedback in symmetric, zero-sum games. Essential References Not Discussed: No. Other Strengths And Weaknesses: The authors are the first to provide compelling results for best-of-both-worlds learning in two-player zero-sum games under bandit feedback. Their algorithms are natural generalizations of those for the full information setting, and they provide nearly matching upper and lower bounds. A minor weakness is the restriction to zero-sum games that are symmetric, although there are many zero-sum games which satisfy this property. Other Comments Or Suggestions: n/a Questions For Authors: What are the barriers preventing you from extending your results to non-symmetric zero-sum games? Or to no-swap-regret? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper and providing valuable feedback! ## **Remark on non-symmetric games:** > What are the barriers preventing you from extending your results to non-symmetric zero-sum games? **Response:** We would like to point out that, in fact, the game in question does *not* have to be symmetric. It is sufficient if the min-max value of the game is zero. In this case, all our results hold immediately without modifying the statement or proof. The reason we pointed to symmetric zero-sum games as an important example is merely that, in such games, it is easy to prove that its value is *always* zero. However, there are many asymmetric games in which the value is still zero, and our results readily hold for these games, too. In general, even if the value $V$ of the game is non-zero, our algorithm is guaranteed an expected payoff of at least $V\cdot T - 1$. More generally, we can readily apply our results even to EFGs that are not zero-sum, or not even two-player (with exactly the same proofs but cost functions defined accordingly). In this case, we guarantee the same expected payoff as the chosen baseline comparator $\mu^c$ (which is considered *safe* in some sense) up to one additive unit, while still having $O(\sqrt{T})$ regret to any strategy $\mu$. Our motivation for stating the results for symmetric zero-sum games was mainly for illustrative purposes. We will move these remarks to a more prominent place in the writing so it is clear that the limitations are not necessary. --- > Or to no-swap-regret? **Response:** The obvious idea would be to combine the classical reduction of Blum & Mansour (BM) with our phased algorithm. However, this poses several technical challenges, among which are the following: i) It is unclear how to check the regret condition for entering a new phase for swap regret, as there are exponentially many swap functions. 
ii) If, instead, we check the regret condition for the $A$ different no-regret algorithms in (BM) individually, then these algorithms may be in different phases, and it is unclear how to combine them into a single policy. Hence, whether we can provide the same guarantees for no-swap regret remains an interesting open question.
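To illustrate the phased scheme discussed in this exchange, here is a small self-contained toy version for a two-action bandit. It is our own simplification for intuition only: the mixing schedule, the phase-advance threshold, and the update rule are illustrative and do not reproduce the paper's algorithm or its guarantees.

```python
import numpy as np

def phased_play(costs, mu_c, T=2000, num_phases=5, eta=0.1, seed=1):
    """Toy phased scheme: play a convex combination of a safe comparator
    mu_c and Exp3-style weights; advance a phase (shifting weight toward
    the learner) when the comparator's estimated regret grows too large."""
    rng = np.random.default_rng(seed)
    A = len(mu_c)
    mu_c = np.asarray(mu_c, dtype=float)
    logw = np.zeros(A)        # Exp3 log-weights
    cum_hat = np.zeros(A)     # cumulative estimated costs per action
    comp_hat = 0.0            # cumulative estimated cost of the comparator
    phase, total_cost = 0, 0.0
    for t in range(1, T + 1):
        p = np.exp(logw - logw.max())
        p /= p.sum()
        alpha = phase / (num_phases - 1)      # weight on the learner
        mu = (1 - alpha) * mu_c + alpha * p   # convex combination
        a = rng.choice(A, p=mu)
        c = costs(t, a)
        total_cost += c
        c_hat = np.zeros(A)
        c_hat[a] = c / mu[a]                  # importance weighting w.r.t. mu
        logw -= eta * c_hat                   # mirror-descent / Exp3 update
        cum_hat += c_hat
        comp_hat += c_hat @ mu_c
        # comparator looks poor: estimated regret exceeds a sqrt(t) threshold
        if comp_hat - cum_hat.min() > 2 * t ** 0.5 and phase < num_phases - 1:
            phase += 1
            comp_hat, cum_hat = 0.0, np.zeros(A)
    return total_cost, phase

# Action 1 is strictly better; a uniform comparator is safe but suboptimal.
cost, phase = phased_play(lambda t, a: float(a == 0), mu_c=[0.5, 0.5])
print(cost, phase)
```

Against this cost sequence the toy scheme eventually places most weight on the Exp3 iterate and stops paying the comparator's constant per-round cost; against a sequence for which the comparator is good, the phase counter rarely advances and play stays close to `mu_c`.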
Summary: In this paper, the goal is to develop a bandit algorithm that simultaneously guarantees constant regret with respect to a given strategy and $\sqrt{T}$ regret with respect to the best strategy in hindsight. Claims And Evidence: - An extension of the phased algorithm of Even-Dar et al. (2008) to the bandit case - An analysis showing good regret with respect to both the given strategy and the best strategy in hindsight as long as the given strategy has a probability of playing any arm bounded away from zero. - An application of their approach to NFG and EFG (with lower bounds). I think the above claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: The main focus of this work is theoretical. Some simulations are presented based on the game of Kuhn poker. I mostly see them as checks that the theory indeed works. Theoretical Claims: I did not check the details of the proofs of theoretical claims. The claims themselves seem plausible. Experimental Designs Or Analyses: I did not check the experimental designs or analyses. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: I think the relevant literature is correctly cited. Essential References Not Discussed: I did not find any missing references. Other Strengths And Weaknesses: I think the paper is overall well written, and the idea of looking for a strategy that simultaneously guarantees constant regret with respect to another (known) strategy and $\sqrt{T}$ regret with respect to the best strategy in hindsight is interesting. I think the novelty of the work can be challenged: - The algorithm presented by the authors is very close to Even-Dar et al. (2008). The authors should focus on the differences beyond simply using Exp3 as a base algorithm. - What is the novelty in the presented reduction from OLM to NFG or EFG? Other Comments Or Suggestions: See "strengths and weaknesses" section.
Questions For Authors: See "strengths and weaknesses" section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper and providing valuable feedback! > The algorithm presented by the authors is really close to Even-Dar et al. (2008). Authors should focus on what are the differences beyond simply using Exp3 as a base algorithm. **Response:** Indeed, the high-level difference between the algorithms is using Exp3 as a base algorithm. In addition to this: - We use importance weighting according to the algorithm's iterates $\mu^t$, not the Exp3 iterates $\hat{\mu}^t$ as usual. - We modify the condition for entering a new phase as the true cost vectors are not observed. - We switch the learning rate in the last possible phase (see below). This being said, the main difference lies in the analysis of the algorithm (Section 3.1/Regret Analysis), which is not just a direct combination of the analysis by Even-Dar et al. and the standard Exp3 guarantee. One technical difficulty lies in the fact that the estimated cost functions may be unbounded in general. Our analysis tackles this by a refined treatment of the algorithm's phases: In the last possible phase, the cost estimates can be unbounded but it is sufficient to resort to a regret bound in expectation (which does not need boundedness). In all other phases, for the analysis to go through we need an estimated regret bound with probability one. This is only possible since we are able to bound the estimated costs in these phases due to the comparator assumption and the phasing scheme. --- > What is the novelty in the presented reduction from OLM to NFG or EFG ? **Response:** Technically speaking, the application of safe OLM algorithms to NFGs/EFGs by using e.g. the min-max equilibrium as comparator is relatively straightforward (c.f. Section 2). However, we are not aware of any prior work making this interesting connection to learning in games, even under the easier full-information feedback. We thus view it as an important conceptual contribution of our work. 
--- **PS:** Please refer to our **Remark on non-symmetric games** in reply to reviewer aZNc for a common response on the generality of our results beyond symmetric games.
Summary: The paper proposes a bandit algorithm for two-player symmetric zero-sum games that guarantees $O(1)$ regret against the minimax strategy and $O(\sqrt{T})$ regret against any strategy. ## update after rebuttal I keep my score, which remains positive. Claims And Evidence: I do not see any problematic claims. Methods And Evaluation Criteria: I do not see any major issues with the methods. Theoretical Claims: The proofs seem correct. Experimental Designs Or Analyses: The numerical experiments seem sound. Supplementary Material: I have gone over the detailed proofs in the appendix. Relation To Broader Scientific Literature: The developments seem novel and of interest to a variety of fields because of the game-theoretic implications. Essential References Not Discussed: There does not appear to be missing essential references. Other Strengths And Weaknesses: The regret guarantees in both directions are a strength. The related work and its comparisons with the manuscript can be more adequately discussed. Other Comments Or Suggestions: Further discussions and comparisons with Lattimore (2015) and Ganzfried & Sandholm (2015) will be appreciated. Questions For Authors: I have no further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper and providing valuable feedback! > The related work and its comparisons with the manuscript can be more adequately discussed. **Response:** We agree that further elaboration on the related work would increase the clarity of the paper. In the current version, an extensive discussion of related work is deferred to Appendix B. We plan to move parts of this to the main text in the camera-ready version. --- > Further discussions and comparisons with Lattimore (2015) and Ganzfried & Sandholm (2015) will be appreciated. **Response:** Further comparisons with Lattimore (2015): Lattimore (2015) shows that in multi-armed bandits, $O(1)$ regret compared to a single comparator action implies a worst-case regret of $\Omega(AT)$ compared to some other action. In our terminology, a single action corresponds to a deterministic strategy. We show that, perhaps surprisingly, it is possible to circumvent this lower bound if the comparator strategy plays each action with some non-zero probability $\delta>0$ (while maintaining the optimal order of $\sqrt{T}$ regret). As further discussed in the main text, this minimal possible assumption is sensible in various game-theoretic contexts. Additional comments regarding Ganzfried & Sandholm (2015): Ganzfried & Sandholm (2015) do not provide any sort of regret guarantees. Instead, they ask the following question: In which rounds of the game is it possible to deviate from the min-max strategy? Their algorithmic approaches rely on best-responding to an *opponent model* whenever the algorithm has accumulated just enough utility to risk losing it again. While the authors do prove safety guarantees, this is not the case for *exploitation*. The latter would likely heavily depend on the other player and the quality of the opponent modeling. In our paper, we do not encounter this difficulty because we consider standard regret minimization against any (time-varying) adversary. 
--- **PS:** Please refer to our **Remark on non-symmetric games** in reply to reviewer aZNc for a common response on the generality of our results beyond symmetric games. --- Rebuttal Comment 1.1: Comment: Thank you for the response. I have no further questions. My score remains positive.
Summary: This paper derives regret upper bounds that vary depending on the comparator in the setting of online learning with bandit feedback. Specifically, the paper proposes an algorithm that simultaneously achieves an $O(1)$ regret upper bound when the comparator of the regret lies in the interior of the probability simplex, which is the feasible region, and an $O(\sqrt{T})$ regret upper bound otherwise. The authors primarily discuss this in the context of symmetric two-player zero-sum normal-form games and further extend their analysis to extensive-form games. Claims And Evidence: Yes, all propositions and claims in the paper are with proofs or references. Methods And Evaluation Criteria: Yes, the proposed algorithms are variants of existing algorithms in online learning and are thus valid.
Moreover, the evaluation metrics (i.e. regrets) are standard in the literature. Theoretical Claims: I have reviewed the claims in the main body and confirmed that there are no major issues. Experimental Designs Or Analyses: Yes, I have briefly checked the experimental setup and results and confirmed that there are no major issues. Supplementary Material: no Relation To Broader Scientific Literature: In the context of games, I have not seen an approach that achieves different regret upper bounds depending on the comparator. From the paper, the motivation of deriving different regret upper bounds based on the comparator is unclear. For more detailed comments, see the section Other Strengths and Weaknesses below. Essential References Not Discussed: no Other Strengths And Weaknesses: The major issue with this paper is the unclear motivation for its contributions.
The authors derive different regret upper bounds depending on the comparator, but it is difficult to see how these bounds offer advantages from the perspective of learning in games.
What are the benefits of achieving a lower regret when choosing the minimax policy ${argmin}_\mu \max_{\nu} V(\mu, \nu)$ as the comparator?
If there are advantages to using the minimax policy as the comparator in the context of learning in games, the reviewer would like the authors to clarify them. Additionally, some statements are inaccurate, and certain terms are used in a non-standard manner, making the paper a little harder to understand.
For example, in Line 17, the statement "the regret compared to any fixed strategy $\mu$ satisfies … $\leq O(\sqrt{T})$" depends on the algorithm being used, and such a regret upper bound is not necessarily always achievable.
Moreover, in Line 46, the term "the best strategy" is ambiguous, making it difficult to understand.
Similarly, in Line 89, the phrase "a special ('safe') comparator strategy" is unclear at this point.
Additionally, in Line 104, the term "the worst-case expected regret $\max_{\mu} \mathcal{R}(\mu)$" is used, but referring to the worst case over the comparator (rather than over a sequence of gradients) in this way is perhaps uncommon. This work focuses on symmetric games.
Can the authors address this limitation? It is possible that I have not fully understood the motivation behind this work.
If I receive a convincing response on this point, I am willing to raise my score. Other Comments Or Suggestions: no Questions For Authors: The reviewer expects the authors to address the questions raised in Other Strengths and Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for carefully reviewing our paper and providing valuable feedback! > It is possible that I have not fully understood the motivation behind this work. If I receive a convincing response on this point, I am willing to raise my score. **Response:** From the viewpoint of learning in games, the motivation of our paper is the following: Suppose you play a repeated game such as Rock-Paper-Scissors or Heads-Up Poker against an unknown opponent. Should you rather run a no-regret algorithm, or simply play the Nash equilibrium? The former would guarantee that you linearly exploit the opponent when, for example, they decide to play a static policy that is slightly suboptimal (non-equilibrium). The latter would guarantee you to not lose money (beyond the Nash value of $0$) in expectation. But it is easy to see that neither guarantees *both* of these preferable properties. Our work states that you can achieve essentially both simultaneously under the given minimal assumption. --- > What are the benefits of achieving a lower regret when choosing the minimax policy $\arg\min_{\mu} \max_{\nu} V(\mu, \nu)$ as the comparator? If there are advantages to using the minimax policy as the comparator in the context of learning in games, the reviewer would like the authors to clarify them. **Response:** This is indeed a key motivation behind our work (c.f. Section 2): Notice that for the minimax policy $\mu^\star$, we have $V(\mu^\star,\nu^t) \leq 0$ no matter what policy $\nu^t$ Bob plays. Hence, a regret bound of $\mathcal{R}(\mu^\star) \leq 1$ compared to the minimax policy implies that $\sum_{t=1}^T \mathbb{E}[V(\mu^t,\nu^t)] - 0\cdot T \leq 1$. This proves that we lose at most $1$ unit to Bob throughout the interactive play (in expectation), no matter Bob's play. Simply running a standard no-regret algorithm would not guarantee this; it can lose up to $\sqrt{T}$ units, which can be a significant amount. 
But, since we simultaneously maintain $O(\sqrt{T})$ regret compared to *any* $\mu\neq\mu^\star$, we still enjoy all the benefits of playing a no-regret algorithm, including that we linearly exploit, for example, a static opponent that slightly deviates from the equilibrium. --- > This work focuses on symmetric games. Can the authors address this limitation? **Response:** Yes! Our results hold beyond symmetric games, which only serve as an important motivation. Please refer to our **Remark on non-symmetric games** in reply to reviewer aZNc. --- --- **Additional minor clarifications:** > In Line 17, the statement "the regret compared to any fixed strategy $\mu$ satisfies $O(\sqrt{T})$" depends on the algorithm being used, and such a regret upper bound is not necessarily always achievable. **Response:** Here, we are referring to any no-regret algorithm that has regret of $O(\sqrt{T})$ uniformly over adversaries and comparators. Such no-regret algorithms exist both for normal- and extensive-form games (for example online mirror descent). The property itself does not depend on the specifics of the algorithm and holds for any such no-regret algorithm. > In Line 46, the term "the best strategy" is ambiguous, making it difficult to understand. **Response:** By regret compared to the best strategy in hindsight, we mean that the regret bound holds compared to any strategy $\mu$ (in particular to a minimizer of the observed sequence of play). We are happy to change the wording to "against any fixed strategy $\mu$" for clarity. > in Line 89, the phrase "a special ('safe') comparator strategy" is unclear at this point. **Response:** We will change this to the following in the final version: *Alice receives a special comparator $\mu^c$ that she considers a 'safe' strategy. 
The motivation for this is that we can later choose $\mu^c$ to be a minimax equilibrium $\mu^\star$, which is safe in the sense of guaranteeing zero expected loss in symmetric zero-sum games.* > In Line 104, the term "the worst-case expected regret $\max_{\mu} \mathcal{R}(\mu)$" is used, but referring to the worst case over the comparator (rather than over a sequence of gradients) in this way is maybe uncommon. **Response:** We agree that here the regret refers to the worst case over the comparator. (Note that since our bounds are uniform over the opponent's play, they also hold in the worst case over the sequence of gradients.) We are willing to change this to simply "regret" (as opposed to "regret compared to $\mu$") throughout to avoid confusion. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. Since my concerns regarding the motivation have been resolved, I have raised my score from 2 to 3. That said, there still appear to be several issues. First, there seems to be significant room for improvement in the writing and presentation. Additionally, as pointed out by other reviewers, the case of non-symmetric games is not sufficiently discussed. Lastly, references that support the motivation seem to be not discussed (the reviewer guesses that there may be existing literature that discusses similar motivations). It is strongly expected that these issues will be addressed in the revised version. --- Reply to Comment 1.1.1: Comment: Thank you for your response and the positive evaluation. We greatly appreciate the additional suggestions and will carefully incorporate them into the camera-ready version. We believe that the extra page will leave ample space for this.
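The safety/exploitation dichotomy described in these responses can be checked numerically in Rock-Paper-Scissors (our own illustration, not taken from the paper):

```python
import numpy as np

# Row player's payoff matrix for Rock-Paper-Scissors (a symmetric
# zero-sum game, so its value is 0).
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

nash = np.ones(3) / 3                   # minimax strategy: uniform play
biased = np.array([0.5, 0.25, 0.25])    # static, slightly suboptimal opponent

# Safety: the minimax strategy guarantees expected payoff 0 against
# *any* opponent strategy (here checked against the biased one).
print(nash @ A @ biased)                # 0.0

# Exploitation: best-responding to the biased opponent (play Paper
# against a Rock-heavy player) earns +0.25 per round, i.e., a linear
# gain over T rounds that pure minimax play forgoes.
print((A @ biased).max())               # 0.25
```

A plain no-regret algorithm only bounds the loss against an adversarial opponent by $O(\sqrt{T})$, while plain minimax play never collects the $+0.25$ per round from the biased opponent; the point made in the rebuttal is that one algorithm can secure essentially both guarantees simultaneously.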
Ranked Entropy Minimization for Continual Test-Time Adaptation
Accept (poster)
Summary: This work studies continual test-time adaptation and proposes a method to tackle it. In particular, based on experimental motivation showing how entropy minimization collapses under continual TTA to predicting the same class, they propose to optimize the cross entropy between the predictions of the model on two masked versions of the input. Experiments are conducted on ImageNet-C and CIFAR10/100-C, showing consistent performance gains over the provided baselines. Claims And Evidence: While the provided experiments generally support the claims of this work, the only claim that is not fully supported is the efficiency claim. The summation in Equations (3) and (4) requires 2*N forward passes. This makes the adaptation either very slow (under a fixed computational budget) or very computationally intensive. Further, it is also not clear how costly it is to construct the N masks for each received input image at test time. While Table 5 shows that REM requires only 3 forward passes, extra discussion on the computational requirements of REM is necessary. Methods And Evaluation Criteria: While the proposed method is evaluated on ImageNet-C and CIFAR10/100-C, it is also important to report the results on the more realistic benchmark ImageNet-3DCC [A] [A] 3D Common Corruptions for Object Recognition, CVPR 2022. Theoretical Claims: There are no theoretical claims in this work. Experimental Designs Or Analyses: While I really like the experimental analysis presented in this work (especially the one in Figure 3), certain experiments are missing and it would make the paper much stronger to include them: (1) Missing baselines: There are a couple of strong missing baselines in the comparison tables, such as EATA and SAR. It is important to compare against both methods in the main paper. (2) Missing ablations: There are two main missing ablations in this work: analyzing the impact of $\lambda$ and the impact of $M$.
(3) Computationally budgeted evaluation: Since the proposed method requires additional forward passes, making it more computationally intensive, it is important to compare the different methods (especially the efficient EATA) under computational time constraint settings [B]. (4) Evaluation schemes: While this work mainly focuses on continual TTA, it is important to show experiments under different evaluation protocols, namely practical TTA [C]. [B] Evaluation of Test-Time Adaptation Under Computational Time Constraints, ICML 2024. [C] Robust Test-Time Adaptation in Dynamic Scenarios, CVPR 2023. Supplementary Material: I checked Appendix A and B. Relation To Broader Scientific Literature: The findings of this work are related to the TTA literature. Essential References Not Discussed: - 3D Common Corruptions for Object Recognition, CVPR 2022. Other Strengths And Weaknesses: Please refer to the earlier sections of my review. Other Comments Or Suggestions: There are a couple of typos in the paper: 1) In lines 325-326: "Section 4.2 presents the performance on the 5 unseen domains after training on 10 domains." I think you should refer here to the table, rather than to the section you are in. 2) In lines 418-419: "Compared to the recent state-of-the-art, Continual-MAE,", I believe the first comma should be replaced with a semicolon. Questions For Authors: In addition, I have the following question to be clarified: - How are the supervised learning results obtained? As far as I know, ImageNet-C does not have any training sets. Code Of Conduct: Affirmed. Overall Recommendation: 4
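To make the efficiency concern above concrete, the forward-pass count of a chained-mask objective can be sketched as follows (a pure-Python toy; `toy_model`, `mask_tokens`, and the linear "classifier" are illustrative stand-ins, not the paper's implementation):

```python
import math

FORWARD_CALLS = 0  # count forward passes to illustrate the cost argument

def toy_model(x):
    """Toy linear 3-class 'classifier'; increments the call counter."""
    global FORWARD_CALLS
    FORWARD_CALLS += 1
    s = sum(x)
    return [s, 0.5 * s, -s]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def mask_tokens(x, ratio):
    """Toy stand-in for attention-guided masking: zero a fraction of tokens."""
    k = int(len(x) * ratio)
    return [0.0] * k + list(x[k:])

def chained_predictions(x, mask_ratios):
    """One forward pass per masking level in the chain."""
    return [softmax(toy_model(mask_tokens(x, r))) for r in mask_ratios]

# For ratios {0, 10%, 20%} (N = 2 masked levels) this costs one forward per
# level, i.e. N + 1 = 3 passes, consistent with the 3 passes in Table 5.
preds = chained_predictions([0.4, 0.3, 0.2, 0.1] * 4, [0.0, 0.10, 0.20])
```

Under this reading, the cost grows linearly in the chain length, which is what the budgeted-evaluation request above is probing.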
Rebuttal 1: Rebuttal: Thank you for your positive feedback and insightful suggestions. Below, we provide detailed responses to your questions. >**1. Computational cost of mask construction** A1. The token-wise attention used to compute the mask is applied only to the final self-attention layer and accounts for a very small portion of the overall forward pass. We provide an analysis of the mask generation time per batch with respect to $N$ in the table below: Forward pass|Mask computation ($N=1$)|Mask computation ($N=2$) :---:|:---:|:---: 7.012$\pm$0.390ms|0.091$\pm$0.008ms|0.134$\pm$0.007ms Although our method requires relatively more total time than entropy minimization methods, it achieves higher accuracy with lower computational cost compared to recent state-of-the-art CTTA methods based on consistency regularization. Our goal is to find a compromise between the efficiency of entropy minimization and the stability of consistency regularization methods. Positioned between these two paradigms, our method achieves the highest classification performance with balanced efficiency. >**2. Additional baselines** A2. Thanks to your suggestion, we provide the following table comparing our method with the additional baselines: Method|EATA|SAR|Ours ---|---|---|--- ImageNetC|41.3|45.2|**39.2** >**3. Ablation for mask parameters** A3. We added an ablation study on masking. As a trade-off between computational efficiency and accuracy, we utilize masks with ratios of {0, 10%, 20%}, and the results regarding the $\lambda$ parameter are provided in Appendix E. M_N (N=1)|{0,5%}|{0,10%}|{0,15%} ---|---|---|--- Error|40.6|39.7|39.5 M_N (N=2)|{0,5%,10%}|{0,10%,20%}|{0,15%,30%} Error|39.4|39.2|39.4 M_N (N=3)|{0,5%,10%,15%}|{0,10%,20%,30%}|{0,15%,30%,45%} Error|38.9|39.4|40.0 >**4. Experiment on realistic TTA** A4. Thank you for suggesting an evaluation under realistic scenarios to verify the practical applicability of our method. 
The table below presents the experimental results on the ImageNet-3DCC [A] dataset under the time-constrained protocol [B]. We compare EATA and our proposed method using ViT-B/16. EATA requires 2.41$\times$ the time when $C(g)=1$, while REM requires 5.10$\times$ the time. Therefore, in the *episodic* scenario, where the model is re-initialized for each domain, REM shows lower performance than EATA due to its relatively slow adaptation. However, in the *continual* scenario, where domains are learned sequentially without model re-initialization, our method achieves higher performance due to the accumulation of learned knowledge and demonstrates stable adaptation across domains. ImageNet-3DCC|Depth of field|Noise|Lighting|Weather|Video|Camera motion :---:|:---:|:---:|:---:|:---:|:---:|:---: EATA-Episodic|31.5|40.7|55.6|54.7|57.0|46.3 Ours-Episodic|32.7|43.1|59.7|53.2|61.6|50.0 EATA-Continual|30.0|39.9|57.2|56.3|62.2|53.0 Ours-Continual|31.6|38.9|56.2|51.2|59.2|46.3 [A] 3D Common Corruptions for Object Recognition, CVPR 2022. [B] Evaluation of Test-Time Adaptation Under Computational Time Constraints, ICML 2024. >**5. Practical TTA evaluation** A5. We also present experiments on CIFAR10C under the practical TTA protocol [C] in the following table. These results confirm that our method operates robustly across the various realistic scenarios suggested. CIFAR10C|CoTTA|ViDA|Continual-MAE|Ours ---|---|---|:---:|--- Error(%)|79.2|27.4|15.7|**14.2** [C] Robust Test-Time Adaptation in Dynamic Scenarios, CVPR 2023. >**6. Details on supervised learning** A6. The supervised result refers to the outcome obtained by training on the test set using cross-entropy loss with access to target labels in an online manner. Since target labels are not accessible during test time in TTA, this serves as an upper bound on the adaptation performance. >**Minor** Thank you for pointing out the typos. We have corrected the reference from Sec. 4.2 to Table 4 and fixed the typo accordingly. 
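As a rough illustration of the mask computation discussed in A1, the scoring can be sketched as follows (a hypothetical single-head toy; the rebuttal describes averaging the class-token-query/image-token-key similarity over all heads of the final self-attention layer, and the scaling and top-k selection details here are assumptions, not the released code):

```python
import math

def attention_scores(cls_query, patch_keys):
    """Score each image token by the scaled dot product of its key with the
    class token's query (single head; the paper averages over all heads of
    the final multi-head self-attention layer)."""
    d = len(cls_query)
    return [sum(q * k for q, k in zip(cls_query, key)) / math.sqrt(d)
            for key in patch_keys]

def foreground_mask(scores, ratio):
    """Indices of the top-`ratio` highest-attention (foreground) tokens,
    which are the ones masked out to raise prediction difficulty."""
    n_mask = int(len(scores) * ratio)
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(order[:n_mask])

# Toy 2-d queries/keys for four tokens; masking ratio 50%.
scores = attention_scores([1.0, 0.0],
                          [[2.0, 0.0], [0.5, 0.0], [1.0, 0.0], [0.1, 0.0]])
masked = foreground_mask(scores, 0.5)
```

Since this reuses attention already computed in the forward pass, the extra cost is only the scoring and sort, matching the sub-millisecond timings reported above.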
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the efforts put in responding to my comments. Thus, I am raising my score to 4.
Summary: This paper addresses the model collapse issue in CTTA. Specifically, it aims to reconcile the trade-off between fast but unstable EM methods and stable yet computationally expensive CR methods. The authors propose REM, a novel EM-based approach incorporating a progressive masking strategy. This strategy gradually adjusts prediction difficulty by structuring entropy sequentially, thereby aiming to enhance both stability and adaptability. ## update after rebuttal The masking augmentation strategy proposed by the authors seems reasonable and well-motivated. That said, this approach relies on the assumption that ranking should be preserved, which can also be evaluated using more standard augmentation techniques such as varying rotation angles or crop levels. Given this, it is challenging to attribute the observed effects specifically to the proposed ranking structure. Furthermore, the performance gains over prior methods that combine entropy loss with data augmentation are relatively modest. For these reasons, I would like to retain my original score. Claims And Evidence: **Strengths:** - Clearly visualizes the model collapse phenomenon and convincingly demonstrates the effectiveness of the proposed REM method through informative graphs and illustrations. **Weaknesses:** - The trivial solution-induced model collapse problem in EM-based methods has already been analyzed extensively in prior works such as EATA, SAR, and DeYO. This diminishes the novelty of the analytical contribution in Section 2. - The paper lacks a comparative analysis between the proposed masking-based augmentations and diverse existing augmentation techniques, thus insufficiently supporting the claimed novelty. Methods And Evaluation Criteria: **Strengths:** - Employs standard benchmarks for CTTA evaluation and widely-adopted ViT models, thereby ensuring credibility and comparability of results. 
**Weaknesses:** - Although REM claims a novel mechanism to prevent model collapse, its fundamental idea of diverse augmentations for entropy minimization has already been explored in existing literature [a,b]. > [a] Marsden et al., "Universal test-time adaptation through weight ensembling, diversity weighting, and prior correction," WACV, 2024. > > [b] Lee & Chang, "Continual momentum filtering on parameter space for online test-time adaptation," ICLR, 2024. > - Table 5 reveals that REM requires more than ten times the computational resources compared to TENT, questioning its practical viability. Theoretical Claims: **Weaknesses:** - The theoretical explanation supporting REM's capability to prevent model collapse, as mentioned in Section 5, is insufficiently developed. - The paper lacks a clear logical analysis explaining why the proposed progressive masking strategy prevents model collapse. - Although the authors assert that sequential entropy structuring avoids overconfidence in difficult samples, they do not rigorously analyze how sequential entropy ordering directly mitigates model collapse. Experimental Designs Or Analyses: **Strengths:** - The effectiveness of various masking strategies (Fig. 10) and the impact of removing specific loss terms (Fig. 9) were thoroughly analyzed, experimentally demonstrating that MCL and ERL significantly contribute to preventing model collapse. **Weaknesses:** - Lack of evaluation regarding the robustness of the proposed object-focused masking strategy in highly domain-shifted scenarios. Additional experiments or analyses are necessary to clearly establish its effectiveness under extreme domain shifts. - Inadequate analysis of failure cases. The manuscript does not provide an experimental investigation into situations where REM fails, which is essential for a comprehensive understanding of method limitations. 
- A comparison with EM approaches using varying intensities of augmentation is necessary to demonstrate the relative advantage of the proposed masking augmentation more explicitly. - Although Appendix C reports accuracy values, the use of downward arrows in the tables introduces confusion and could mislead readers regarding the interpretation of experimental results. Clarifying the representation method would improve the manuscript's readability. Supplementary Material: I checked for the code and verified its consistency with the implementation. Relation To Broader Scientific Literature: see above Essential References Not Discussed: I keep up with the literature in this area. Other Strengths And Weaknesses: see above Other Comments Or Suggestions: see above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 2
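For context on the trivial-solution collapse debated in this review, a minimal numerical illustration (toy distributions only; this is not the paper's code or any baseline's):

```python
import math

def entropy(p):
    """Shannon entropy of a categorical prediction."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# TENT-style entropy minimization rewards any confident output, so a model
# that emits the same near-one-hot distribution for every input achieves
# near-zero loss -- the trivial solution behind the single-class collapse
# analyzed in EATA, SAR, and DeYO.
uniform = [0.25, 0.25, 0.25, 0.25]    # H = ln 4, maximal uncertainty
collapsed = [0.97, 0.01, 0.01, 0.01]  # low H regardless of the input
```

This is why the objective alone cannot distinguish a genuinely confident model from a collapsed one, and why the ranking constraint discussed in this thread is proposed as an additional signal.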
Rebuttal 1: Rebuttal: We appreciate your constructive suggestions. We address your questions below. >**1. Model collapse problem** Model collapse in entropy minimization methods is a well-known issue, but we would like to emphasize that it remains an unresolved challenge for achieving stability during the TTA process. Fig. 2 shows the sensitivity of entropy minimization to the training strategy (learning rate), suggesting that even small variations can lead to model collapse in TTA. Fig. 6 also shows the model’s sensitivity to various learning rates. Since entropy minimization operates within a narrow range where collapse does not occur, it requires careful tuning. In contrast, our proposed method shows stable adaptation across different training strategies, highlighting its relative robustness against model collapse; we provide a comparison table with prior works below. Method|EATA|SAR|DeYO|Ours ---|---|---|---|--- ImageNetC|41.3|45.2|42.9|**39.2** >**2. Explicit mask chaining augmentation** Augmentation-based EM methods [a,b] apply stochastic augmentations, e.g., color jitter and random affine, similar to CoTTA and its variants. In contrast, our method is distinguished by applying interpretable, sample-specific, masking-based data augmentation. By designing a ranked prediction distribution, we offer an intuitive and simple solution that progressively refines the prediction distribution while preserving the ranking order. We provide a comparison table with [a,b] below. Method|ROID[a]|CMF[b]|Ours ---|---|---|--- ImageNetC|41.4|40.7|**39.2** >**3. Computational resources compared with TENT** In Table 5, our method requires approximately twice the training time of TENT (_not ten times_: TENT 8 min vs. ours 17 min); however, it achieves higher performance while reducing the computational cost to 1/3 of that of Continual-MAE (59 min). 
EM methods are efficient but raise concerns about model collapse, whereas mean teacher-based methods offer stability at the cost of high computational overhead. Our goal is to integrate the strengths of both approaches to achieve a balanced solution from the viewpoint of both computational efficiency and classification performance. >**4. Theoretical explanation** While our method provides a solution to mitigate model collapse, it does not theoretically guarantee complete prevention. Therefore, we conducted extensive experiments to empirically validate the effectiveness of our approach. In Fig. 3, the analysis of entropy and accuracy with respect to different masking ratios supports the soundness of our method’s intuition. The visualizations of masked images in Fig. 7 show that masking is applied as intended. Lastly, Fig. 8 shows class attention map visualizations, revealing that entropy minimization methods often rely on class-irrelevant local pixels as the basis for predictions, leading to suboptimal solutions. >**5. Model collapse** We observed that entropy minimization can lead to predictions collapsing into a single class, regardless of the input. To mitigate this issue, it is important to focus on contextual information as the basis for prediction while addressing the influence of incorrect supervision signals. Our method alleviates model collapse by improving relative relationships rather than relying on direct supervision signals, through relational objectives: our MCL, which focuses on contextual information, and our ERL, which preserves the ranking order of entropy based on prediction difficulty. >**6. Avoiding overconfidence** In Reviewer MEbz’s A5, we investigate the effect of the margin in the entropy ranking loss. As the margin increases, the influence on the ranking diminishes, leading to a rise in expected calibration error (ECE). This suggests that as the weight on direct entropy minimization decreases, the overconfidence issue is alleviated. >**7. Highly domain-shifted scenarios** In addition to the CTTA scenario of the main paper, we conducted experiments under standard protocols for various domain shifts in Appendices C and D, including online TTA and Vision-Language Model (VLM)-based TTA scenarios. Our method shows successful adaptation not only on datasets where corruptions are applied to the source domain but also in cases involving substantial domain shifts, e.g., adapting from ImageNet to ImageNet-R, ImageNet-V2, and ImageNet-Sketch. We also validate our method on a realistic benchmark, ImageNet-3DCC (please refer to Reviewer VSix’s A4). >**8. Failure case** We establish a ranking structure using 10% and 20% masked images. However, when the object is entirely masked, there is a possibility that some predictions may become biased toward the background, as shown in the bottom-right visualization of Fig. 8. To alleviate this issue, we propose the Entropy Ranking Loss in Sec. 3.4. Moreover, please refer to Reviewer sBwW’s A2 regarding the failure case related to the distributional discrepancy analysis. >**Minor** We revised the arrows in Appendix C to point upward. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response and the additional experiments. However, some questions remain. - As with the proposed progressive masking method, it seems possible to design an entropy ranking loss by adjusting the intensity of various existing augmentation techniques. This experiment could highlight a distinction from existing methods that combine augmentations with EM. Does this kind of experimental design also lead to performance improvements? - The table shows a slight performance improvement compared to the existing methods (i.e., ROID, CMF) that combine augmentation and EM. How does the performance relative to computational cost compare between the existing methods and REM? - Even if there is no theoretical guarantee, could you describe the theoretical basis for why this method works well? 
--- Reply to Comment 1.1.1: Comment: Thanks for your constructive suggestions. We provide the following responses to address your concerns. >**A1. Augmentation methods** Yes, we agree that other solutions could work, provided that they can ensure a structured ranking relationship for explicit mask chaining. However, we would like to emphasize that our foreground masking strategy offers an intuitive solution by reducing randomness, thereby helping to preserve this ranking structure. To address your concern, we provide the following simple extension experiments using an original-weak-strong augmentation strategy. ImageNetC|Noise|Blur|Weather|Digital ---|---|---|---|--- RandAug|43.0|48.1|33.8|78.3 Ours|40.3|47.1|32.8|36.9 Since our loss function is based on the ranking relationship between accuracy and entropy, we observe that when this ranking is preserved, existing augmentation methods work. However, when the ranking is not preserved, they fail to mitigate collapse depending on the adaptation order (e.g., from Noise to Digital), partially due to a failure to select an appropriate augmentation strategy. This can also be interpreted as evidence of the robustness of our method, which provides a simple and intuitive way to ensure a clear prediction ranking structure. >**A2. Computational cost comparison** ROID and CMF, like EATA, employ the Active Sample Criterion (ASC), which sets the loss of inaccurate samples to zero, thereby skipping the backward pass for certain samples. Therefore, when $N=1$, applying ASC to our method can achieve a similar level of computational cost. ASC, which utilizes only accurate samples, achieves stable adaptation and is a promising approach for achieving efficiency by not training on all data. 
However, as explained in our response to Reviewer f4Hw under "Comparison with entropy-based method (A6)," our main goal is a method that leverages the entire set of samples, and we aim to reduce the domain dependency at test time by establishing a clear predictive relationship within the image, thereby mitigating the impact of unpredictable domain shifts. Method|ROID|CMF|Ours (N=1)|Ours (N=1, ASC) ---|---|---|---|--- Time|9m 33s|9m 38s|11m 47s|9m 22s Error|41.4|40.7|39.5|39.7 >**A3. Theoretical basis** We appreciate the reviewer’s insightful comments regarding the theoretical foundation of our work. We would like to clarify that our approach is primarily grounded in empirical observations rather than rigorous theoretical analysis (as provided in the authors’ rebuttal A5). Nonetheless, in an effort to offer a theoretical perspective, we seek to draw a connection between types of errors using the Bayes risk within the probably approximately correct (PAC) learning framework [R41]. In particular, [R42] demonstrates that for two hypothesis spaces, $\mathcal{F}_1 \subseteq \{f: \mathcal{X} \rightarrow \mathcal{Y}\}$ and $\mathcal{F}_2 \subseteq \{f: \mathcal{X} \rightarrow \mathcal{Y}\}$, if $\mathcal{F}_1 \subseteq \mathcal{F}_2$, then the approximation errors satisfy: $Err_D^{apx}(\mathcal{F}_1) \geq Err_D^{apx}(\mathcal{F}_2)$. Inspired by these results, we try to conceptually model the relationship between errors under explicit augmentations. However, this requires a strong assumption that the information contained in the augmented data is less than or equal to that of the original data, a condition that is not always guaranteed in the wild. To address this challenge, we proposed a simple yet intuitive solution: reducing the information content via foreground masking, thereby aligning with the assumption in a more controlled manner. 
In the context of our method, $\mathcal{F}_1$ corresponds to the model incorporating masking, while $\mathcal{F}_2$ represents the model without it. Our explicit mask chaining mechanism can be interpreted as an extension to a broader hypothesis space $\mathcal{F}_N$. By reducing the approximation error $Err_D^{apx}(\mathcal{F}_1)$ associated with the masked prediction, the proposed MCL improves the upper bound of the original prediction’s approximation error $Err_D^{apx}(\mathcal{F}_2)$, thereby mitigating model collapse and promoting progressive performance improvement. Furthermore, our ERL is designed to maintain a consistent ranking structure between $\mathcal{F}_1$ and $\mathcal{F}_2$. While this perspective aligns with our intuition, it has not been rigorously proven. Out of consideration for the potential impact on the ML community, we chose not to include this theoretical reasoning in the main paper. We are transparent in stating this limitation in Section 5, noting the absence of a formal theoretical justification. Nonetheless, we hope that this supplementary explanation of the theoretical concept behind our work helps to address the reviewer’s concerns regarding the theoretical basis. [R41] Estimating the bayes risk from sample data, NeurIPS1995. [R42] Sample-specific Masks for Visual Reprogramming-based Prompting, ICML2024.
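The MCL/ERL structure discussed throughout this thread can be sketched as follows (a minimal reading, assuming MCL is a cross-entropy pulling each more-masked prediction toward its less-masked neighbour and ERL a hinge on the entropy ordering with margin m; the exact normalisation, stop-gradients, and weighting live in the paper's Eqs. (3)-(5) and are not reproduced here):

```python
import math

def entropy(p):
    """Shannon entropy of a categorical prediction."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cross_entropy(target, pred):
    return -sum(t * math.log(q) for t, q in zip(target, pred) if q > 0)

def rem_loss(preds, margin=0.0, lam=1.0):
    """Sketch of the two REM terms for a mask chain preds[0..N], ordered
    from unmasked to most-masked. MCL aligns each masked prediction with
    its less-masked neighbour; ERL penalises any violation of the ordering
    H(preds[i]) <= H(preds[i+1]) by a hinge with margin m (the authors
    report using lambda = 1 and m = 0)."""
    mcl = sum(cross_entropy(preds[i], preds[i + 1])
              for i in range(len(preds) - 1))
    erl = sum(max(0.0, entropy(preds[i]) - entropy(preds[i + 1]) + margin)
              for i in range(len(preds) - 1))
    return mcl + lam * erl
```

For a well-ordered chain the hinge term vanishes at m = 0, which matches the rebuttal's point that a larger margin shifts weight back toward direct entropy pressure.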
Summary: This paper proposes a novel Ranked Entropy Minimization method for test-time adaptation. While leveraging entropy as a supervision signal may risk model prediction collapse, the authors address this challenge by first constructing an explicit masking chain with varying masking ratios on the original images. They introduce two critical constraints: enforcing consistency between predictions of highly masked images and those with lower masking ratios, and ensuring that the entropy of predictions for low-masking-ratio images remains consistently lower than that of high-masking-ratio images. These constraints effectively enhance the model’s cross-domain generalization capability while updating only a small number of parameters. The authors validate the effectiveness of their approach through comprehensive experiments across multiple benchmarks, demonstrating significant improvements in stability and adaptability under continual test-time adaptation scenarios. Claims And Evidence: The paper presents a clear and compelling argument. The authors observe that relying solely on entropy minimization as a supervisory signal can lead to model collapse in certain scenarios due to convergence to trivial solutions, where probability distributions collapse to a single point in polar coordinate representations. Inspired by Zeno’s paradox of Achilles and the tortoise, the authors propose a novel method based on entropy constraints for masked images to address this issue. The study is strengthened by extensive visualizations and insightful observations that illustrate the research motivation and theoretical foundations. Methods And Evaluation Criteria: This paper tightly integrates observations with the motivation to propose an insightful and effective algorithm. By constructing two complementary constraints, the authors successfully address the challenge of models collapsing into trivial solutions when relying solely on entropy minimization. 
Masking critical regions of images generates high-entropy data, while the imposed constraints ensure that global semantic comprehension is preserved. Furthermore, visualizations of the masking strategy demonstrate that obscuring object-related regions effectively establishes a hierarchy of prediction difficulty, thereby preventing model collapse. Theoretical Claims: I have reviewed the authors' formulation of Explicit Mask Chaining and the two constraints, and they align with the theoretical descriptions provided by the authors. Experimental Designs Or Analyses: This paper thoroughly validates the effectiveness of Ranked Entropy Minimization in continual test-time adaptation tasks through comprehensive experiments, demonstrating significant improvements over existing methods on benchmark datasets such as ImageNet-C, CIFAR10-C, and CIFAR100-C. The visualization analysis reveals REM’s core mechanism: the self-attention-based foreground masking strategy precisely localizes object regions, forcing the model to learn global semantics through the mask consistency loss and thereby avoiding model collapse caused by over-reliance on local features. Supplementary Material: The supplementary materials include code implementations under various experimental configurations. Furthermore, the experiments in the appendix extend REM’s applicability, demonstrating its real-time performance in online streaming data adaptation and enhancing zero-shot generalization through cross-modal masked alignment in vision-language models like CLIP, thereby providing novel insights for multimodal continual learning. Relation To Broader Scientific Literature: [1], published at ICLR 2024, primarily investigates the application of active learning in test-time adaptation. Both that study and the current paper highlight the positive role of challenging samples in enhancing model domain generalization and propose insightful methods from different perspectives. [1]. 
Gui S, Li X, Ji S. Active Test-Time Adaptation: Theoretical Analyses and An Algorithm[C]//The Twelfth International Conference on Learning Representations. Essential References Not Discussed: The references cited in this paper are sufficient. Other Strengths And Weaknesses: This paper identifies a critical issue in test-time adaptation through a fascinating observational experiment: how to avoid model collapse caused solely by entropy constraints. The authors ingeniously propose two constraints: ensuring that the entropy of highly masked images is greater than that of lightly masked ones, and guiding the probability distribution of highly masked images to align with that of lightly masked images. These constraints effectively prevent prediction collapse. The motivation of the paper is clear, and the proposed method is highly effective. Other Comments Or Suggestions: None. Questions For Authors: 1. What are the advantages of constructing challenging samples through image masking in this paper compared to selecting challenging samples from a queue using active learning? 2. In Table 3, some experimental results show significant differences compared to the current state-of-the-art (SOTA) methods (e.g., for brightness and JPEG). For certain types of image corruptions, if the masking ratio is too high, could it lead to a situation where the output probability distribution of high-masking-ratio samples consistently fails to align with that of low-masking-ratio samples? I believe this analysis is crucial, as it is directly related to the masking ratio set during Explicit Mask Chaining. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your constructive comments and positive reviews. We address your questions below in detail. >**1. Sampling through image masking vs. sampling from a queue using active learning** A1. The active approach of capturing accurate samples and adapting only on the selected ones is meaningful in that it prevents the model from being degraded by uncertain samples. However, in this paper, we are concerned that sample selection based on active learning may depend heavily on the initial model and may be difficult to generalize, due to the need to define an entropy threshold across varying training environments. Instead, we explicitly control the learning difficulty while training on all samples equally, aiming for stable operation and maintaining the goal of continual adaptation. >**2. Failure case analysis** A2. Thanks to your insightful suggestion, we analyzed the discrepancy between the outputs for original and masked images using the Total Variation Distance (TVD) for two domains where our method achieved significant performance improvements (Gaussian and Shot noise) and two domains where it showed relatively lower performance (Brightness and JPEG). Interestingly, the domains with successful performance gains, e.g., Gaussian and Shot noise, exhibited larger differences in the predicted probability distributions with and without masking. One possible interpretation is that, for relatively easier domains, a small discrepancy between the predicted distributions of the original and masked images may lead to a low loss, which in turn could reduce the adaptation speed. This led us to the insight that adjusting the loss magnitude according to the domain gap may further aid adaptation. We sincerely appreciate your suggestion, which enabled a deeper understanding of both the strengths and limitations of our method. 
CIFAR100C|Gaussian|Shot|Brightness|JPEG ---|:---:|:---:|:---:|:---: TVD (first 50%)|5.54$\pm$1.36|3.44$\pm$1.38|1.69$\pm$0.45|2.90$\pm$0.95 TVD (last 50%)|5.03$\pm$1.38|3.82$\pm$0.69|1.55$\pm$0.41|2.59$\pm$0.78
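For concreteness, the discrepancy measure used in this analysis can be sketched as follows (the standard TVD definition; whether the reported values are averaged per batch or scaled, e.g. by 100, is not stated in the rebuttal, so treat magnitudes as illustrative):

```python
def total_variation_distance(p, q):
    """TVD between two categorical predictions: half the L1 distance,
    ranging from 0 (identical) to 1 (disjoint support)."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))
```

A larger TVD between the original and masked predictions indicates a stronger training signal for the mask consistency loss, consistent with the observation that the high-gain domains (Gaussian/Shot noise) show larger discrepancies.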
Summary: This paper proposes a learning mechanism for continual test-time adaptation (CTTA) based on a masked consistency loss (MCL) and an entropy ranking loss (ERL). The MCL incrementally masks the images for data augmentation, and the ERL contributes to ensuring that the entropy of predictions with a low masking ratio is lower than that of predictions with a high masking ratio. The proposed approach utilizes the self-attention structure to cluster similar content in order to mask it. Experiments are conducted on the ImageNet-to-ImageNetC, CIFAR10-to-CIFAR10C, and CIFAR100-to-CIFAR100C benchmarks using the ViT-B/16 architecture. Claims And Evidence: Some of the claims are not well supported or lack evidence. Refer to the weaknesses and questions. Methods And Evaluation Criteria: The proposed approach shows improved performance for the compared ViT-B/16 architecture. However, the lack of experiments on CNNs, such as WideResNet-28, ResNeXt-29, and ResNet-50, which have been widely used by state-of-the-art continual TTA methods, hurts the applicability and utility of the proposed approach. Moreover, several state-of-the-art approaches, such as EcoTTA [1], EATA [2], BeCoTTA [3], and PETAL [4], are missing from the experimental comparisons. References: 1. Song, Junha, et al. "EcoTTA: Memory-efficient continual test-time adaptation via self-distilled regularization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. 2. Niu, Shuaicheng, et al. "Efficient test-time model adaptation without forgetting." International Conference on Machine Learning. PMLR, 2022. 3. Lee, Daeun, Jaehong Yoon, and Sung Ju Hwang. "BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation." International Conference on Machine Learning. PMLR, 2024. 4. Brahma, Dhanajit, and Piyush Rai. "A probabilistic framework for lifelong test-time adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. 
Theoretical Claims: NA Experimental Designs Or Analyses: Refer to the weaknesses and questions. Supplementary Material: Yes, I have referred to the Appendix sections that were referred to in the main paper, such as Appendices B, C, D. Relation To Broader Scientific Literature: The key contribution of this paper is utilizing well established ranking loss using an incremental masking along with mask consistency loss which seems to improve CTTA. Essential References Not Discussed: Recent state-of-the-art methods are not discussed, as well as the comparisons are missing. 1. Song, Junha, et al. "Ecotta: Memory-efficient continual test-time adaptation via self-distilled regularization." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. 2. Lee, Daeun, Jaehong Yoon, and Sung Ju Hwang. "BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation." International Conference on Machine Learning. PMLR, 2024. Other Strengths And Weaknesses: **Strengths** * An interesting idea to improve data augmentation via incremental masking for Continual TTA * The experiments show improved performance on ViT **Weaknesses** * An elaborate detail about explicit mask chaining mechanisms seems to be lacking. This is a key element of the proposed approach. * Lack of experiments on CNNs, such as WideResNet-28, ResNeXt-29, and ResNet-50, that have been widely used by the state-of-the-art continual TTA methods, hurts the applicability/utility of the proposed approach. * Details about hyperparameters such as "M and N involved in mask ratios and mask chains" are missing * Exploiting the self-attention structure limits the applicability to ViT architectures. Other Comments Or Suggestions: 1. Line 325: Section 4.2 is cited inside the same section. 2. Include references to hyperparameter details and tuning in the main paper. 3. 
This sentence is not grammatically correct: "The principal idea is to explicitly enhance the prediction complexity of a sample by masking objects that domain invariant features."

Questions For Authors:
1. What are the values of M and N involved in mask ratios and mask chains? Are these values tuned?
2. How does incrementally masking content involve only the content containing the domain-invariant information? How is the self-attention structure utilized for doing this?
3. Do the authors have any experiments on CNNs, such as WideResNet-28, ResNeXt-29, and ResNet-50, that have been widely used by the state-of-the-art continual TTA methods?
4. How is the hyperparameter λ in Equation 5 tuned? In Appendix E, is the error computed on the test split itself, on which the performance is reported?
5. It is mentioned "we set λ = 1 and m = 0", and Figure 12 shows that the margin does not matter. Why is it so? Do the authors have any explanation or intuition about it?
6. Is the explicit mask chaining idea adopted from other existing literature? If so, a citation with a better description will improve clarity.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
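For concreteness, the entropy-ranking constraint summarized in this review (ERL: predictions under a low masking ratio should have lower entropy than those under a high masking ratio) can be written as a hinge-style penalty. This is a hypothetical reading, not the authors' code; the `margin` parameter mirrors the margin `m` asked about in Question 5.

```python
import math

# Hypothetical hinge-style sketch (not the authors' code) of the entropy
# ranked loss: the low-masking-ratio prediction should have entropy at
# least `margin` below the high-masking-ratio prediction.
def entropy(probs):
    """Shannon entropy of a discrete distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_rank_loss(p_low_mask, p_high_mask, margin=0.0):
    """Zero when the entropy ranking already holds with slack `margin`;
    positive (driving entropy minimization) otherwise."""
    return max(0.0, entropy(p_low_mask) - entropy(p_high_mask) + margin)
```

Under this reading, a larger margin makes the penalty active for more samples, which matches the stability-plasticity intuition the authors give in their rebuttal.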
Rebuttal 1: Rebuttal: We appreciate your positive reviews and valuable suggestions. We address your concerns and questions below in detail.

>**1. Values of M and N involved in mask ratios and chains**

A1. We provide an ablation test regarding the hyperparameters of the mask. Although the best accuracy was achieved when N=3, we used the combination M_N = {0,5%,10%} to strike a balance between sensitivity to the masking ratio, increased computational complexity, and accuracy.

M_N (N=1)|{0,5%}|{0,10%}|{0,15%}
---|---|---|---
Error|40.6|39.7|39.5

M_N (N=2)|{0,5%,10%}|{0,10%,20%}|{0,15%,30%}
---|---|---|---
Error|39.4|39.2|39.4

M_N (N=3)|{0,5%,10%,15%}|{0,10%,20%,30%}|{0,15%,30%,45%}
---|---|---|---
Error|38.9|39.4|40.0

>**2. Details of incremental masking and its references**

A2. Our research is inspired by [R2A], which demonstrates that the self-attention mechanism in ViTs naturally clusters tokens with similar contexts, merging semantically similar tokens. Building on this, we follow the implementation of [R2B], which improves ViT efficiency by omitting background tokens. Specifically, for TTA, we compute attention scores by averaging the similarity between the class token's query and the image tokens' keys across all heads in the final multi-head self-attention layer. Tokens with high attention scores are identified, and forward passes for the corresponding foreground tokens are selectively omitted. Based on the findings of [R2A] and [R2B], we confirm that efficient masking is achievable through the self-attention structure. By progressively masking foreground regions, we control the prediction difficulty and provide a structured approach to address the challenges of continual TTA.

[R2A] Token Merging: Your ViT But Faster, ICLR 2023.
[R2B] The Role of Masking for Efficient Supervised Knowledge Distillation of Vision Transformers, ECCV 2024.

>**3. Experiments on CNNs**

A3. Thank you for suggesting experiments on CNNs.
While our method leverages the powerful self-attention mechanism in ViTs, it can be easily extended to other networks as long as the difficulty can be structured through explicit mask chaining. To explore this, we provide two additional experiments: one that uses the activation of the final CNN feature map (FA), and another that adjusts masked pixels using Grad-CAM. For CIFAR10-C, we present results based on EcoTTA using WRN-28, and for CIFAR100-C, we report results from BECoTTA using WRN-40.

Dataset|EATA|EcoTTA|BECoTTA|Ours(FA)|Ours(Grad-CAM)
---|---|---|---|:---:|:---:
CIFAR10C|18.6|16.8|-|16.9|**16.5**
CIFAR100C|37.1|36.4|35.5|**34.5**|34.6

>**4. Hyperparameter $\lambda$**

A4. We also share the concern regarding tuning hyperparameters on the test set in TTA. In fact, there exist combinations of lambda and margin that yield higher performance, but we use 1 and 0 as the default values for lambda and margin, respectively. Additionally, to verify the robustness of our method to hyperparameters, Fig. 12 highlights that our method maintains consistent performance across a wide range of values rather than being sensitive to specific settings.

>**5. Margin $m$**

A5. The margin is related to whether entropy minimization is applied to each sample. A higher margin $m$ leads to entropy minimization for more samples, enabling faster adaptation, while a lower $m$ prioritizes stability. This corresponds to the stability-plasticity trade-off commonly observed in continual learning. In line with the goal of our proposed method, we adopt a margin value of 0 to mitigate overconfidence.

m|0|0.1|0.2|0.3
---|---|---|---|---
Error|39.2|38.8|38.8|38.9
ECE|8.7|10.2|10.8|11.0

>**Weakness**
- [Mask chaining mech.] Please refer to A2.
- [CNN experiments] Please refer to A3.
- [Hyperparameter M and N] Please refer to A1.
- [Self-attention structure limitation for ViT] While our method leverages the self-attention mechanism, it is not limited to standard ViT architectures.
As demonstrated in Response 8 to Reviewer f4Hw, our approach is applicable to various transformer backbones such as MobileViT and SwinTransformer. Furthermore, we also provide CNN-based experiments in Response A3.

>**Minor**
- Thanks for your suggestions on typos and references. We will revise them for better readability.

---

Rebuttal Comment 1.1: Comment: Thanks for responding to the queries. I would like to point out that CoTTA [1] reports an error rate of 16.2 on the CIFAR10-to-CIFAR10C dataset using the WRN-28 backbone (3. Experiments on CNNs), so it is doing better than the proposed approach in this experiment. As pointed out, tuning hyperparameters on the test set is a matter of concern. Other than this, I do not have any questions at this moment. Thanks.

References:
1. Wang, Qin, et al. "Continual test-time domain adaptation." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.

---

Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer MEbz for your time and valuable feedback on our work.

>**A1. CNNs experiments**

Thank you for your in-depth suggestions. We will add the results for CoTTA to the final version of the paper. We would like to emphasize that our method has been extensively evaluated on ViT to confirm that it achieves successful results, and that our work on utilizing transformer structures has broader applicability, including its use in multimodal systems such as CLIP. Our message regarding the CNN experiments is that our method can be extended to CNNs if we establish a ranked structure with explicit difficulty control, which is the basic philosophy of our mechanism. In this respect, we verified the scalability of our method by following the experimental protocols of EcoTTA and BECoTTA.

>**A2. Concern about hyperparameters**

We fully understand and appreciate your concerns regarding the hyperparameters.
To address this, we confirmed that our method is robust across a wide range of hyperparameters, including $\lambda$, $m$, and $M_N$, through extensive investigations in Fig. 12 and our previous response (A1). We adopted the basic values by setting the coefficient for the loss function to 1 and the margin for the ranking loss to 0, using the same settings consistently across ImageNetC, CIFAR10C, and CIFAR100C. We would like to emphasize that we did not tune the hyperparameters specifically for the test data to enhance performance. Furthermore, as shown in Figure 6, we investigated the learning rate across a broad range and observed stable adaptation of our method. This supports our claim that the proposed method adapts reliably across a wide range of hyperparameter settings. We again thank you for your valuable and constructive feedback.
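The class-token attention mechanism described in the rebuttal's A2 (scoring image tokens by their similarity to the class token's query and masking the highest-scoring foreground tokens) can be sketched in a toy single-head setting. This is a hypothetical illustration, not the authors' implementation; all inputs and names are made up for the example.

```python
import math

# Hypothetical sketch (not the authors' code) of foreground masking via
# class-token attention, as described in A2: score each image token by the
# class token's attention to it, then mask the top-scoring tokens.
def attention_scores(cls_query, token_keys):
    """Softmax over scaled dot products between the class token's query
    and each image token's key (single-head toy version)."""
    d = len(cls_query)
    logits = [sum(q * k for q, k in zip(cls_query, key)) / math.sqrt(d)
              for key in token_keys]
    peak = max(logits)
    exps = [math.exp(l - peak) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def foreground_mask(scores, ratio):
    """Indices of the top-`ratio` fraction of tokens by attention score,
    i.e. the foreground tokens selected for masking."""
    n_mask = max(1, int(len(scores) * ratio))
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return set(order[:n_mask])
```

Chaining calls with increasing `ratio` values (e.g. 0%, 10%, 20%) would yield the progressively harder views that the explicit mask chaining relies on.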
Summary: This work introduces two novel loss functions for CTTA, which utilize different views of the samples to enforce consistency alignment while preserving their relative ranking. Experimental results demonstrate the effectiveness of the proposed approach.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Lack of novelty: the idea of consistency learning between strong and weak augmentations has been thoroughly explored by existing TTA works like [1,2]. Meanwhile, these methods are not discussed or compared.
1. Contrastive Test-Time Adaptation
2. Revisiting Realistic Test-Time Training: Sequential Inference and Adaptation by Anchored Clustering Regularized Self-Training

Important method details are lacking, e.g., the foreground masking in Figure 10: where does the paper introduce foreground masking?

The proposed method may lead to biased attention towards the background content, compared to the source, as evidenced by the 3rd row of Figure 8.

Theoretical Claims: NA

Experimental Designs Or Analyses: Missing comparisons with strong baselines for CTTA:
1. EATA
2. Roid
3. Test-Time Ensemble via Linear Mode Connectivity

Lack of important ablation and discussion about the key hyperparameters N and M_N in Eqn. (3).

Inefficiency in computation compared to entropy-based methods, and potentially much higher memory consumption due to using more backpropagation passes per sample.

Lack of justification for resizing images to 384x384 on CIFAR, which significantly increases the computational burden but seems beneficial only to the proposed method that requires masking.

Evaluations are limited to ViT-Base only. I suggest adding more evaluations on architectures like MobileViT and SwinTransformer.

Random masking without a projector can be unstable, but the paper's results do not show the variance in performance.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: NA

Essential References Not Discussed: The work would benefit from further exploration and discussion of key references.
Other Strengths And Weaknesses: The paper claims to address overconfidence but provides no numerical experiments on metrics like false positive rate or ECE.

Other Comments Or Suggestions: I would raise the score if the authors address all concerns.

Questions For Authors: Is this fully TTA? More comparisons on ViT weights that are not pre-trained with MAE, like MoCo-v3, would support this. Overall, the technical contribution is limited, and some key technical details are unclear, with insufficient evaluations and ablations.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
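The calibration metric the review asks for, Expected Calibration Error (ECE), has a standard binned definition that is independent of this paper; a generic sketch for reference (not tied to the authors' implementation):

```python
# Generic sketch of the standard binned Expected Calibration Error (ECE):
# a weighted average of |accuracy - confidence| over equal-width
# confidence bins. Inputs are illustrative, not the paper's predictions.
def expected_calibration_error(confidences, correct, n_bins=10):
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(1 for _, ok in b if ok) / len(b)
        ece += (len(b) / n) * abs(accuracy - avg_conf)
    return ece
```

A low ECE means the model's confidence tracks its actual accuracy, which is what the rebuttal's later ECE tables report.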
Rebuttal 1: Rebuttal: We appreciate your constructive comments. We address all the concerns and provide new experiments to support our contributions.

>**1. Novelty**

Compared with [1,2], our novelty is summarized as follows:
- [1] proposed a contrastive learning method based on strong-weak augmentations, but our method addresses the problem within a ranked structure through progressive augmentation, and it differs in that ours does not require an additional momentum encoder and does not rely on random augmentation.
- [2] analyzed that using the prediction from weak augmentation as a pseudo label to improve the prediction of strong augmentation is not effective. This aligns with the argument of CoTTA that data augmentation without considering sample characteristics is not suitable for handling abrupt distribution shifts. Ours applies masking on objects as a domain-invariant property and maintains a ranked structure, enabling stable adaptation by reducing discrepancies in prediction distributions.

Therefore, ours has its own novelty compared with [1,2].

>**2. Foreground masking**

The detailed descriptions of foreground masking are found in Sec. 3.2. Briefly, for the final self-attention layer of the transformer, attention scores are computed based on the similarity between the class token's query and the image tokens' keys. Using these scores, we can efficiently and naturally mask the pixels where objects are located without any extra modules.

>**3. Biased attention**

Due to differences in object sizes across samples, a bias toward the background can occur when small objects occupy a relatively small portion of the pixels. Based on our observations in Fig. 3, to alleviate this concern, we apply low masking ratios of 10% and 20%, where a linear relationship between entropy and error is maintained. Note that our goal is to find a simple yet efficient solution without additional computational cost.

>**4. Additional comparison**

We provide the following table as requested.
Note that TTE [3] and its code are not available yet.

Method|EATA|SAR|PETAL|Roid|Ours
---|---|---|---|---|---
ImageNetC|41.3|45.2|52.3|41.4|**39.2**

[3] Test-Time Ensemble via Linear Mode Connectivity, ICLR 2025.

>**5. Ablation on mask**

We provide the ablation study regarding the hyperparameters of the mask. Although the best accuracy was achieved when N=3, we used the combination M_N = {0,5%,10%} to strike a balance between sensitivity to the masking ratio, increased computational complexity, and accuracy.

M_N (N=1)|{0,5%}|{0,10%}|{0,15%}
---|---|---|---
Error|40.6|39.7|39.5

M_N (N=2)|{0,5%,10%}|{0,10%,20%}|{0,15%,30%}
---|---|---|---
Error|39.4|39.2|39.4

M_N (N=3)|{0,5%,10%,15%}|{0,10%,20%,30%}|{0,15%,30%,45%}
---|---|---|---
Error|38.9|39.4|40.0

>**6. Comparison with entropy-based methods**

Similar to entropy-based methods, we use a single backpropagation per sample. Recent TTA approaches, including EATA, update parameters only using low-entropy samples, thereby improving learning efficiency. Furthermore, parameter-free methods such as T3A and LAME adapt to new domains by aligning logits using optimization algorithms or historical memory without backpropagation. Each of these methods is being studied independently and is based on a different underlying philosophy. We acknowledge that our method is slightly less computationally efficient compared to these approaches. Nevertheless, we believe that utilizing uncertain samples in alignment with the motto of continual adaptation is necessary for broader applicability in diverse environments. We think that these three lines of work will converge by complementing each other's strengths and weaknesses, and we believe our contribution can narrow the gap between them.

>**7. Resolution**

For a fair comparison with SOTA, we followed the experimental settings of ViDA and Continual-MAE, using 384×384 for CIFAR, and ours was built on Continual-MAE.
We agree that higher resolution can offer advantages for masking, and to clarify this point, we provide a 224x224 experiment.

CIFAR10C|Source|Tent|CoTTA|Continual-MAE|Ours
---|:---:|:---:|:---:|:---:|:---:
ViT-B/16-224|40.1|36.0|39.2|16.8|**7.9**

>**8. Transformer backbone**

We provide experiments using different transformer backbones.

ImageNetC|Source|Tent|CoTTA|ViDA|Ours
---|---|---|---|---|---
Mobile-ViT-S|75.28|75.61|75.72|75.27|**74.28**
SwinTrans.-B|59.26|73.17|46.84|57.84|**46.56**

>**9. Mask**

Though our method employs a masking strategy, it is used solely to control the difficulty of prediction, and thus does not require a projector during the masking process or a decoder to reconstruct the masked regions, and it is relatively free from instability.

>**10. Overconfidence**

We provide a table for calibration error analysis.

ImageNetC|Tent|ViDA|Ours
---|---|---|---
ECE (low)|12.6|14.6|**8.7**

>**11. MoCo-v3**

Thanks for your suggestion to utilize MoCo-v3, which is not trained with labels on the source domain.

ImageNetC|Source|Tent|CoTTA|SAR|PETAL|Ours
---|---|---|---|---|---|---
MoCo-v3|76.5|76.6|78.2|75.4|78.2|**65.7**

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed response from the authors. I have carefully read all responses. However, the ablations on hyperparameters N and M_N fail to convey the effectiveness of the proposed ranked structure, where the method achieves 39.5% with N=1 and 39.2% with N=2. Further, how does the overconfidence compare to Source and SAR? Moreover, I have further questions about the foreground masking approach.

1) While foreground masking can create difficult masks using a low masking ratio, it ignores the potential benefit of learning background-invariant features. To create difficult masks, I assume using the random masking strategy while increasing the mask ratio to 40% (or above) would work fine. I suggest tuning the mask ratios of the different masking strategies in Figure 10 for a fair comparison.
2) How can foreground masking be created for transformers that do not use a classification token, e.g., ones that average the features of all tokens as the input to the classifier?

---

Reply to Comment 1.1.1: Comment: We thank Reviewer f4Hw for your thoughtful and constructive feedback.

>**A1. Ablation on M_N**

In the course of our study, we first confirmed that our ranking-based solution works when applied with a single mask (e.g., N=1). Notably, our error rate of 39.5% (e.g., M_N={0,15%}) represents a 3% improvement over the recent state-of-the-art result of 42.5% by Continual-MAE, supporting the effectiveness of our ranked structure. We furthermore investigated whether our approach could be generalized to a chained masking structure for N = 2, 3, .... We achieved error rates of 39.2% for N = 2 and 38.9% for N = 3 (please remember that Continual-MAE is still at 42.5%), which validates that our method generalizes across various values of N. Finally, we selected M_N by considering the trade-off between performance and computational cost as N increases. The proposed method not only demonstrates its effectiveness at N = 2, but also offers flexibility in selecting appropriate N values depending on computational budget and performance requirements.

>**A2. Additional ECE comparison**

Additionally, we include a comparison table of the ECE between the source, SAR, and our method, as you requested. We observe a trend in which the ECE tends to increase with performance improvement over the initial source model. Notably, our method maintains a low ECE while achieving low error rates.

ImageNetC|*Source*|Tent|*SAR*|ViDA|Ours
---|:---:|:---:|:---:|:---:|:---:
ECE (%)|5.3|12.6|10.3|14.6|8.7
Error (%)|55.8|51.0|45.2|43.4|39.2

>**A3. Masking strategy**

We acknowledge that the foreground masking strategy using self-attention is not the only possible approach.
It is one of several design choices, and **our primary contribution lies in utilizing the ranked structure induced by an explicit augmentation.** In particular, our usage of foreground masking was motivated by two main considerations: (i) it can be efficiently applied without any additional complex modules, as it directly utilizes the self-attention mechanism; and (ii) it ensures the construction of ranking relationships through explicitly designed prediction difficulties.

- 1) Our concern with random masking was that when a large proportion of masked tokens are located in the background, it may inadvertently focus attention on the foreground, potentially failing to preserve the intended ranking relationships. This potential risk is evident in the experiment on random masking in CIFAR100C shown in Figure 10, where effective adaptation is initially achieved but a dramatic performance drop occurs from the 5th task, e.g., glass blur.
- 2) As you suggested, utilizing feature attention (FA) is a promising approach to extend our method to architectures beyond self-attention structures. This is confirmed to be a feasible solution, as demonstrated in Reviewer MEbz's A3, which is an extension of our method to CNNs. For clarity, we present the results using 40-60% Random Masking (RM) and Feature Attention (FA) in the table below. As a detailed implementation of FA, the attention map is computed as the average of the L2 norm of the features over all image tokens.

Method|Ours-RM (40%)|Ours-RM (50%)|Ours-RM (60%)|Ours-FA|Ours
---|:---:|:---:|:---:|:---:|:---:
Error (%)|40.5|41.1|42.1|39.9|39.2

Finally, we would like to clarify a typo in our previous response (A5) regarding the ablation on masking: M_N = {0, 10%, 20%} was used in the experiments. We hope that our response has helped address your concerns. Once again, we sincerely appreciate your in-depth review of our paper and your insightful suggestions for improving its quality.
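The feature-attention (FA) map described in point 2) above (per-token L2 norms of the features, normalized over all image tokens) admits a very small sketch. This is a hypothetical reading of that one-sentence description, with toy feature vectors, not the authors' tensor code:

```python
import math

# Hypothetical sketch of the feature-attention (FA) map described in the
# reply: score each image token by the L2 norm of its feature vector,
# normalized over all tokens. Inputs are illustrative toy features.
def feature_attention_map(token_features):
    norms = [math.sqrt(sum(f * f for f in feat)) for feat in token_features]
    total = sum(norms)
    return [n / total for n in norms]
```

The resulting scores could then replace class-token attention scores when ranking tokens for masking in architectures without a classification token.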
Inverse Optimization via Learning Feasible Regions
Accept (poster)
Summary: This paper proposes an inverse optimization approach to learn parameters in an optimization problem using historical decision data. Specifically, the authors adapt the predictability and sub-optimality losses, previously used in the literature for learning objective functions, to the context of constraint learning. The resulting inverse problems are non-convex due to the presence of bilinear terms. To address this, the authors propose two algorithms: one based on gradient descent and the other on a mixed-integer programming reformulation. These algorithms are tested on two synthetic problem instances. Overall, this paper introduces a novel approach to the challenging problem of constraint learning. While I do not find any technical flaws in the paper, I have some concerns regarding its exposition and positioning. If these concerns are addressed during the rebuttal phase, I would be willing to increase my score.

Claims And Evidence: The claims are generally well supported. However, as detailed in the weaknesses, the authors might consider clarifying that the proposed method is only applicable to one type of inverse problem, i.e., those intended to provide insights into the system of interest, and cannot be used to support future decision making (due to the lack of feasibility guarantees).

Methods And Evaluation Criteria: The evaluation makes sense. However, again, it only makes sense for one type of inverse problem.

Theoretical Claims: I checked the proofs and did not identify major flaws.

Experimental Designs Or Analyses: The experimental design is sound. Yet the presentation can be improved (as detailed below).

Supplementary Material: I read the proofs.

Relation To Broader Scientific Literature: This is of interest to the inverse optimization (and potentially econometrics) community because it is a new method to learn parameters in constraints, which is known to be a hard problem.

Essential References Not Discussed: Not really.
Other Strengths And Weaknesses:
**Strengths**
- The proposed approach is novel (to the best of my knowledge)
- The research question is challenging.
- The computational performance is good

**Weaknesses**
- Exposition. While I appreciate the authors' effort in writing a highly technical paper, I believe the current exposition does not effectively convey the intended message.
  - First, Figures 1 and 2 are interesting and informative, but their presentation is quite messy. Clearer illustrations are needed to help readers better digest the information presented.
  - Second, the optimization models presented in this paper lack clear explanations. Specifically, the constraints in (10a) and (10b), which are key contributions, need to be thoroughly clarified. Additionally, Problem (14) could have been better explained. As it stands, the descriptions of Problems 1 and 2 are somewhat difficult to understand.
  - Third, related to the previous point, Problem 1 (Section 4.1) is particularly difficult to follow. Specifically, what does "lacks knowledge of the constraints' structure" mean?
- Positioning. Inverse optimization can serve two main purposes: (1) inferring parameters to gain insights into the system of interest, and (2) using the learned parameters to support future decision-making. The proposed method is suitable only for the first purpose because (a) the learned parameters do not guarantee feasibility for future decisions, and (b) the evaluations are based on out-of-sample loss, which measures goodness of fit rather than the quality of prescribed decisions. The authors should consider clarifying this in the paper. Alternatively, they might add theoretical/empirical insights into the feasibility and quality of the prescribed decisions.
- Building on the previous point, it is crucial for the estimated parameters to be interpretable in order to gain insights into the system of interest.
In lines 223--225 on page 5, the authors introduce a fairly complex parameterization, which helps to address the computational challenges. However, this complexity may hinder the interpretability of the parameters. The authors should consider clarifying why this parameterization remains interpretable (i.e., what do the parameters stand for in practical terms?).

Other Comments Or Suggestions:
- Page 8, lines 403--405. "In a network with 5 nodes, there are 10 possible transmission line connections" is confusing. The authors should refer to Figure 3 before presenting this sentence.

Questions For Authors:
- Table 2. What does $\ell^p_\theta / \ell^p$ mean?
- Page 8, line 417. Why is $A \in \{0, 1\}^{5\times 10}$?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the comments/questions. Here is our response:

**[Q1: Loss functions meaning]** Table 2 presents the performance of different methods across four metrics: true predictability loss $(\ell^\text{p})$, true suboptimality loss $(\ell^\text{sub})$, estimated predictability loss $(\ell^\text{p}\_\theta)$, and estimated suboptimality loss $(\ell^\text{sub}\_\theta)$. As discussed in Section 2.2, $\ell^\text{p}\_\theta$ and $\ell^\text{p}$ are identical, a result also illustrated in Figure 2 (bottom). Therefore, Table 2 includes a single column for both, denoted as $\ell^\text{p}\_\theta/\ell^\text{p}$. We will clarify this in the revision.

**[Q2: Binary matrix $A$]** In Section 4.2, we consider a network with five nodes (Figure 3), allowing up to $\binom{5}{2} = 10$ potential transmission lines, with the original network containing five of them. The equation $\boldsymbol{x}(\boldsymbol{s}) = \boldsymbol{A} \boldsymbol{z}(\boldsymbol{s}) + \boldsymbol{s}^{\text{demand}}$ models the network, where $\boldsymbol{x}(\boldsymbol{s}) \in \mathbb{R}^5$ represents generation decisions, $\boldsymbol{z}(\boldsymbol{s}) \in \mathbb{R}^{10}$ represents flow decisions, and $\boldsymbol{A} \in \\{0,1\\}^{5 \times 10}$ defines network connectivity (1 indicating a connection, 0 indicating absence). We do not aim to model all possible connection configurations, as $\boldsymbol{A}$ is subject to constraints. As discussed later, $\boldsymbol{A}$ only indicates 10 potential connection lines, yielding up to $2^{10}$ possible connection patterns. We will clarify this in the revision to avoid confusion.

**[W1: Remark regarding positioning]** The referee raises an important point. Indeed, inverse optimization is widely used in econometrics to estimate specific parameters. However, in such cases, the optimization model is typically assumed to be known up to certain parameters and often results in highly nonlinear problems.
In contrast, our approach, particularly in learning network structures, offers scalability by leveraging off-the-shelf solvers. We however respectfully disagree that our method is unsuitable for future decision-making. Inverse optimization serves as a hypothesis class in a supervised setting, learning the input-output mapping much like any supervised learning method such as neural networks. In this view, one can apply the classical generalization bounds under some mild assumption, such as boundedness of the training parameter $\theta$, thanks to the classical covering techniques and the finite exact reformulation in Theorem 3.3; see, for instance, Section 13.3 "VC-Dimension and Uniform Convergence" in [Shalev2014]. Moreover, by leveraging problem structure (e.g., knowing the objective), we systematically control model complexity. For instance, in Section 4.1, we increase hypothesis flexibility by varying the primitive set $\mathcal{Z}$, where choosing the unit simplex and expanding its dimension improves out-of-sample performance (see Table 2), keeping the complexity of the problem under control.

It is important to note that, as the reviewer mentioned, these generalization bounds do not provide a feasibility guarantee for the original/true sub-optimality. However, instead, the forward problem's optimal value provides an upper bound on the out-of-sample values of these true losses. This motivates our inclusion of both estimated losses $(\ell^{\text{p}}\_\theta, \ell^{\text{sub}}\_\theta)$ and true losses $(\ell^{\text{p}}, \ell^{\text{sub}})$ in Section 2.2. We agree with the reviewer on this point, and following his/her request for clarification, we will ensure these points are addressed in the final version of the paper.

**[W2: Remark regarding interpretability]** Our hypothesis class, $\boldsymbol{x} = \boldsymbol{A}\_\theta(\boldsymbol{s})\boldsymbol{z} + \boldsymbol{b}\_\theta(\boldsymbol{s})$, was designed with the following rationale.
When $\boldsymbol{A}\_\theta(\boldsymbol{s}) = \boldsymbol{A}$ and $\boldsymbol{b}\_\theta(\boldsymbol{s}) = \boldsymbol{b}$, it maps the primitive set $\mathcal{Z}$ to $\boldsymbol{x}$, where $\boldsymbol{A}$ enables rotation, scaling, or projection, and $\boldsymbol{b}$ allows translation (see Section 3, lines 212–Remark 3.1). Allowing $\boldsymbol{A}\_\theta(\boldsymbol{s})$ and $\boldsymbol{b}\_\theta(\boldsymbol{s})$ to depend on $\boldsymbol{s}$ makes this transformation signal-dependent. The choice of a linear form, $\boldsymbol{A}\_\theta(\boldsymbol{s}) = \boldsymbol{A}\_0 + \sum_{k=1}^{K} \boldsymbol{A}\_k s\_k$, was inspired by linear regression. If $\mathcal{Z}$ is a singleton, the hypothesis class reduces to standard linear regression. While we kept the discussion brief, we agree that a more detailed explanation would be beneficial.

**References:**
[Shalev2014] Shalev-Shwartz, Shai, and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014.

---

Rebuttal Comment 1.1: Comment: Thank you for your response. I found the clarification helpful. Although I do not fully agree with the justification for W1 and W2, I have updated my score to a 3 to reflect my sympathy for the authors' efforts in tackling this challenging problem.

**Q1 & Q2**: thank you, yes, adding clarifications will be helpful.

**W1**: I respectfully disagree with the claim that the method can be used to support decision making without feasibility guarantees. In many real-world applications (esp. high-stake ones), bounds on loss are not a sufficient indicator of practical performance, as an infeasible solution could effectively correspond to an "infinite loss." The loss function here serves more as a proxy for what we actually care about, rather than being the true performance metric -- much like how cross-entropy loss is used to approximate classification accuracy in binary classification.
That said, I recognize this is a debatable point and ultimately reflects my personal view. As long as the authors clearly acknowledge in the revised version that the method does not come with feasibility guarantees, I am comfortable with the clarification.

**W2**: yes, I agree adding more explanations regarding this policy would be helpful. However, even if this parameterization reduces to a linear regression, it is unclear what insights one can derive by looking at the estimated parameters. Perhaps the authors could give an example in the context of an application in Section 4.

---

Reply to Comment 1.1.1: Comment: Thank you for the swift reply, additional comments, and also encouraging words. We appreciate your feedback and acknowledge your point regarding the feasibility issue. We will emphasize this explicitly in the revision, and following the recommendation, also include additional information and insight concerning the proposed policy via an example in Section 4.
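The signal-dependent hypothesis class debated in W2 above, $x = A_\theta(s)z + b_\theta(s)$ with the linear parameterization $A_\theta(s) = A_0 + \sum_k A_k s_k$, can be made concrete in a few lines. This is a minimal pure-Python sketch with illustrative names and toy dimensions, not code from the paper:

```python
# Minimal sketch of the rebuttal's hypothesis class
# x = A_theta(s) z + b_theta(s), with the linear parameterization
# A_theta(s) = A0 + sum_k s_k * A_k (and analogously for b_theta).
# Matrices are nested lists; all names and dimensions are illustrative.
def A_theta(A0, A_list, s):
    """Entrywise A0 + sum_k s_k * A_k."""
    rows, cols = len(A0), len(A0[0])
    return [[A0[i][j] + sum(A_list[k][i][j] * s[k] for k in range(len(s)))
             for j in range(cols)] for i in range(rows)]

def hypothesis(A0, A_list, b0, b_list, s, z):
    """x = A_theta(s) z + b_theta(s). With a singleton primitive set
    Z = {z0}, this reduces to an affine (linear-regression-style) map of s,
    as the rebuttal notes."""
    A = A_theta(A0, A_list, s)
    b = [b0[i] + sum(b_list[k][i] * s[k] for k in range(len(s)))
         for i in range(len(b0))]
    return [sum(A[i][j] * z[j] for j in range(len(z))) + b[i]
            for i in range(len(A))]
```

For interpretability, each $A_k$ (resp. $b_k$) records how the feasible region's shape (resp. offset) responds to the $k$-th component of the signal $s$.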
Summary: This paper investigates inverse optimization, with a particular focus on learning feasible regions in optimization problems with linear objectives. The authors introduce two loss functions, propose a hypothesis class for the constraint function, and develop reformulations and smoothing techniques to address the inverse optimization problem. In particular, Section 2 presents two loss functions, termed predictability loss and suboptimality loss. Section 3 introduces a hypothesis class for the constraint function, along with an adaptive smoothing technique to reformulate the loss functions under the assumption of a linear objective. Section 4 reports numerical experiments conducted to assess the efficiency of the proposed techniques.

Claims And Evidence: The claims made in the submission are clear and also supported by the proposed numerical experiments.

Methods And Evaluation Criteria: The reformulation for linear objective functions and the convex/mixed-integer reformulation appear relatively straightforward. The adaptive smoothing technique is well-integrated into the optimization framework, though its convergence analysis is not particularly compelling; of course, one cannot assume too much for this type of inverse optimization. The numerical experiments are conducted on two types of applications with relatively small dimensions (less than 10), which might weaken the practical support for the reformulations and theoretical results provided earlier.

Theoretical Claims: The proofs in this paper are okay on my side.

Experimental Designs Or Analyses: The experimental designs are sound. As mentioned above, it would be better to discuss how the dimension $p$ influences the numerical performance.

Supplementary Material: I went through all the parts in the appendix.

Relation To Broader Scientific Literature: NA

Essential References Not Discussed: NA

Other Strengths And Weaknesses: The paper is clearly written and well-organized.
The authors effectively introduce the geometric interpretation of the two loss functions, provide insights into their hypothesis class through the primitive set representation, and elaborate on the block coordinate descent algorithm in detail. However, while the introduction of the loss functions is straightforward and has an intuitive geometric interpretation, it remains unclear whether minimizing these loss functions guarantees a meaningful solution to the inverse optimization problem. A comparison with commonly used suboptimality metrics in inverse optimization would strengthen the work. Additionally, theoretical guarantees, such as error bounds under mild assumptions, would enhance the paper’s rigor. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the comments/questions. Here is our response: __[Contributions of loss functions:]__ We agree that introducing the loss functions is not a major contribution. However, their combination in (5a) and (5b) with the hypothesis class in (7) enables reformulation into finite optimization problems (10a) and (10b) (Thm. 3.3). More importantly, when $A\_{\theta}(s)$ is a positive scalar, it admits a finite convex reformulation (13) (Prop. 3.7). To our knowledge, no existing inverse optimization or imitation learning approach achieves convex training in such cases. __[Concerning the theoretical guarantees:]__ There are two key considerations: (i) **Out-of-sample performance** (generalization error), which bounds the loss on unseen/test data based on training loss. (ii) **Suboptimality bound** of the training loss, crucial when the loss is nonconvex in model parameters $\theta$. The out-of-sample bound (i) is straightforward under mild assumptions, such as bounded $\theta$, using classical covering techniques and the finite reformulation in Theorem 3.3 (see [Shalev2014], Section 13.3). However, the suboptimality bound (ii) is more interesting, as most cases in Theorem 3.3 are nonconvex in $\theta$, except for Proposition 3.7. 
We derive an a posteriori bound for (ii) in both (10a) and (10b) as follows (we refrain from using bold symbols to economize on characters): **Proposition (Training suboptimality bound):** _The optimal value of problem (10a) associated with the suboptimality loss is bounded from below by the optimal value of the finite tractable convex problem_ $$\max\_{||\alpha_i||\_*\leq 1} \min\_{z\_i\in\mathcal{Z}}\max \displaystyle\frac{1}{N} \sum_{i = 1}^N \alpha\_i^\top (A\_{\theta}(s\_i)z\_i + b\_{\theta}(s\_i) - x\_i) + \gamma_{o,i} + \nu\_i( c(s\_i)^\top(x\_i-b\_{\theta}(s\_i)) + \lambda\_i^\top h - \gamma\_{o,i}) + ( c(s\_i)^\top A\_{\theta}(s\_i) + \lambda\_i^\top H)\tau\_i + \lambda\_i^\top \mu\_i$$ _where maximization of the innermost max is done over $A\_k\in\mathbb{R}^{n\times p},b\_k\in\mathbb{R}^{n}$, for all $k=1,\ldots,K$, and $\gamma\_{o,i}\in\mathbb{R}\_+$, $\nu\_i\in\mathbb{R}\_+$, $\tau\_i\in\mathbb{R}^p$, $\mu\_i\in\mathcal{K}$ for all $i=1,\ldots,N$._ **Sketch of the proof:** To save space, we outline the intuition behind the proof rather than the full derivation. The first constraint of the suboptimality problem is $\gamma\_{i} = A\_{\theta}(s\_i)z\_i + b\_{\theta}(s\_i) - x\_i$. Substituting this into the objective function, we rewrite it as $\frac{1}{N} \sum\_{i = 1}^N \|A\_{\theta}(s\_i)z\_i + b\_{\theta}(s\_i) - x\_i\| + \gamma\_{o,i}$. Using the dual norm, this is equivalent to $$\max_{|| \alpha\_i ||\_{*}\leq 1} \frac{1}{N} \sum\_{i = 1}^N \alpha\_i^\top (A\_{\theta}(s\_i)z\_i + b\_{\theta}(s\_i) - x\_i) + \gamma\_{o,i}.$$ Interchanging minimization and maximization leads to $$ \max\_{||\alpha\_i||\_*\leq 1} \min \frac{1}{N} \sum\_{i = 1}^N \alpha\_i^\top (A\_{\theta}(s\_i)z\_i + b\_{\theta}(s\_i) - x\_i) + \gamma\_{o,i}. $$ The optimal value provides a lower bound on (10a). 
Since the inner problem is bilinear in $A\_k$ and $z\_i$, but their constraints are disjoint, we can dualize with respect to $A\_k$ and $\lambda\_i$, yielding the proposed formulation. For clarity: (i) $\mu\_i$ are dual variables for $-\lambda\_i \in \mathcal{K}^*$. (ii) $\tau\_i$ are dual for $c(s\_i)^\top A\_{\theta}(s_i) + \lambda\_i^\top H = 0$. (iii) $\nu\_i$ are dual for $c(s\_i)^\top(x\_i-b\_{\theta}(s\_i)) + \lambda\_i^\top h \leq \gamma\_{o,i}$. The lower-bounding problem is a two-stage robust optimization problem solvable via column-and-constraint generation [Zeng2013] or affine policies [Kuhn2011], ensuring a valid lower bound. A similar bound can be derived for the predictability loss (10b); if needed, we can provide it. Given an extra page in the final version, we believe we can incorporate these details. **References:** [Shalev2014] Shalev-Shwartz, Shai, and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014. [Zeng2013] Zeng, Bo, and Long Zhao. "Solving two-stage robust optimization problems using a column-and-constraint generation method." Operations Research Letters 41.5 (2013): 457-461. [Kuhn2011] Kuhn, Daniel, Wolfram Wiesemann, and Angelos Georghiou. "Primal and dual linear decision rules in stochastic and robust optimization." Mathematical Programming 130 (2011): 177-209. --- Rebuttal Comment 1.1: Comment: Thank you for your response. While I found your clarifications helpful, some of my concerns remain. **Loss function.** I respectfully disagree with the assertion that the proposed optimization problems (10a) and (10b) offer additional geometric or computational insights. A more detailed discussion is necessary to support this claim. Furthermore, if I understand correctly, the subsequent convex relaxation (13) mainly uses the well-established KKT conditions in existing inverse optimization literature. 
Therefore, the statement that "no existing inverse optimization or imitation learning approach achieves convex training in such cases" appears inaccurate. **Numerical experiments.** As mentioned above, the numerical experiments were conducted on low-dimensional instances. This choice may undermine the case for the proposed reformulation, particularly concerning its computational performance. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's response and feedback. Please allow us to reply to your **loss function** comment on "_no existing inverse optimization or imitation learning approach achieves convex training in such cases_": What we mean is that there is no existing inverse optimization framework for **constraint learning** that results in a tractable convex training program (akin to Proposition 3.7 in this study). We fully acknowledge that the use of KKT conditions and robust optimization techniques is well established and has been successfully applied across a variety of domains. However, our key contribution lies in introducing a new hypothesis class (the parametrized constraint functions in (7)) together with meaningful loss functions (equations (5)), for which applying standard robust optimization techniques yields a tractable convex optimization problem (see (13) in Proposition 3.7). To the best of our knowledge, no such tractable formulation exists in the context of constraint learning within inverse optimization.
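To make the hypothesis class (7) discussed in this thread more concrete, the constraint function $g_\theta(x,s)=\min_{z\in\mathcal{Z}}\|x - A_\theta(s)z - b_\theta(s)\|^2$ can be evaluated numerically once the parameters are fixed. The sketch below is our own illustration, not the authors' code: it assumes a hypercube primitive set $\mathcal{Z}=[0,1]^p$, a squared Euclidean distance, and hand-picked $A$ and $b$, and uses a brute-force grid search that is only viable for tiny $p$ (the paper instead derives finite reformulations and a block coordinate descent method).

```python
# Illustrative sketch (not the authors' code): evaluating the hypothesis-class
# constraint function g(x) = min_{z in [0,1]^p} ||x - A z - b||^2 for fixed A, b.
# A point x is (approximately) feasible for the learned region when g(x) = 0.
import itertools

def g_value(x, A, b, grid_points=101):
    """Approximate min over z in [0,1]^p of ||x - A z - b||^2 by grid search."""
    p = len(A[0])
    ticks = [i / (grid_points - 1) for i in range(grid_points)]
    best = float("inf")
    for z in itertools.product(ticks, repeat=p):
        # residual r = x - A z - b, computed row by row
        r = [x[i] - sum(A[i][j] * z[j] for j in range(p)) - b[i]
             for i in range(len(x))]
        best = min(best, sum(ri * ri for ri in r))
    return best

A = [[1.0, 0.0], [0.0, 1.0]]   # then {A z + b : z in [0,1]^2} is the unit square
b = [0.0, 0.0]

inside = g_value([0.5, 0.5], A, b)    # point inside the region: g = 0
outside = g_value([2.0, 0.5], A, b)   # point at Euclidean distance 1: g = 1
```

The example also shows why the training problems become bilinear once $A$ is itself a decision variable: the term $Az$ couples the learned parameters with the inner minimizer.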
Summary: The authors derive a method based on inverse optimization to learn the feasible region of an optimization model. To this end, they use two loss functions, incorporating infeasibility and suboptimality of the solutions over the hypothesized feasible region. In addition, they specify a parametrized hypothesis class for the constraint functions. The problem can be approximated in this setting by performing block coordinate descent, for which an algorithm is given. Adaptive smoothing techniques are given to improve the algorithmic performance. Theoretically, it is discussed that for certain restrictions of the hypothesis set, the problem can be reformulated into a convex problem or a MILP. The authors apply their method to a network flow problem for power systems. A comparison to other methods is made when learning the constraints' structure. Also, an experiment is conducted in which the goal is to learn the structure of the power network. ## Update after rebuttal: No update. Claims And Evidence: Overall, the claims made about the numerical experiments are justified by the results presented. However, there is no reference to the code, resulting in a lack of reproducibility. The authors' claims regarding their contributions are mostly convincing but are limited: - The hypothesis class of the g functions and the corresponding solution algorithm are new approaches and, therefore, are relevant contributions. - The claim on the novelty of the loss functions is unconvincing, as the authors refer to literature where these losses are introduced. It seems like a generalization rather than a novel contribution. Methods And Evaluation Criteria: - In Experiment 1, there is a nice comparison of different methods (convex, Gurobi, Algorithm 2). - It is unclear which methods are used as a benchmark, especially for Experiment 2. - Both runtime and loss are considered. 
Theoretical Claims: A few concerns regarding the theoretical claims: - The proof of Theorem 3.3 does not refer to any results and shows no derivations. It is just Lemma B.1, which is not properly proved? For the overall comprehensibility of the paper, it would be good to show the reformulation steps and why dualization leads to the final reformulation. The latter is not directly obvious and uses a nice trick to get rid of the minimization problem in the constraints. - The proof of Proposition 2.1 states that a proof for the suboptimality loss follows similar arguments. This makes sense, but why is it not added? What is different, or is it completely the same? - Corollary E.2: There is no proof, derivation, or reference mentioned to support this claim. Experimental Designs Or Analyses: A few concerns regarding the experimental designs and analyses: - The code of the experiments is not shared, so reproducing the results is difficult. - The illustrative experiment to compare Algorithms 1 and 2 has no set-up discussed. The plausible description in Appendix H is not referenced. The results of this experiment in Table 1 are mildly convincing, as the details of the setup are unknown. - Experiment 1 compares a decent number of methods, except Algorithm 1. The reason for this is not mentioned. - For Experiment 1, in Table 2, the Gurobi approach seems to always have the same loss for predictability and suboptimality, except for p=9. The reason for this noticeable result is not discussed. - The claim "(ii) Learning a quadratic cost is significantly inferior to other methods" (around line 367, second column) in the experiments is too strong; there is a decent trade-off between loss and runtime between the methods. - In Experiment 2, the method achieves zero loss, a perfect fit for the training set. However, Figure 3 shows that the method does not find the original structure. This is a major problem: the method is not suited to recovering networks even in small settings. 
It is not mentioned why this occurs or in which settings this can be prevented. As it is presented, it seems this method is not the right approach for this problem. - Experiment 2 has no benchmark, only an estimated comparison on runtime. - Experiments in both the paper and the appendix are mainly limited to network flow problems for power systems. It would be interesting to see experiments for other applications and problem structures. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This research focuses on learning the feasibility region, contrary to other works that focus on learning the objective function. Also, this work considers specific hypothesis classes for this problem, which is a novel approach for learning the feasibility region. Essential References Not Discussed: No comment Other Strengths And Weaknesses: No comment Other Comments Or Suggestions: - Nowhere is it mentioned that bold notation is used for vectors/matrices. - N_out is used but not properly defined. Line 153, column 2. - "Let A_(s) and b_(s) be a matrix and vector with appropriate dimensions…" I would state these dimensions formally. Lines 200-201, column 1. - In Line 12, column 2, signal s is in R^k. In Lines 224-226, column 1, we have k=0,…,K. These notations are confusing, as signal s first has k components and later in the paper K components. - Calligraphic K^* is not defined. Is it the dual cone? Line 236, column 1. - Armijo's rule is mentioned, but there is no citation or explanation. Line 222, column 2. - gamma_{s1*} and gamma_{s1}^* are both used in Algorithm 2, which is inconsistent. - R and calligraphic R are both used in Lines 361-362, column 1, which is inconsistent. - It is not known what C_n represents. Line 380, column 1. - Experiment/Problem 2 assumes the capacity constraints are known. Which ones: the C_n, the bar(f)_m, or both? Line 380, column 2. 
- The actual problem, using an input signal and output, remained abstract until Example 3.2. The relevance would have become more apparent if a tangible example had been discussed in the introduction, and perhaps outside of the domain of power systems. Questions For Authors: - The proposed loss functions seem to be known in the literature: what are the main contributions of this paper? - What are the common benchmark methods for learning feasible regions that could compete with yours? - What is the reason for the method finding zero loss in Experiment 2 while not finding the right network structure, and how can this issue be resolved? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the comments/questions. We will address all minor points in the revision. Please find all the code used for the paper in https://github.com/orRepo/InverseOptimization. Below is our response: __[Q1: Main contributions]__ The primary contribution is the hypothesis class (7), which transforms constraint learning in inverse optimization into a finite structured optimization problem (Thm. 3.3) that, in a special case, yields the first finite tractable convex formulation (Prop. 3.7). Prior work results in unstructured nonconvex programs or relies on enumeration in discrete cases [Aswani2018]. To this end, we extended the predictability and suboptimality losses from [Mohajerin2018], previously limited to objective function learning, to constraint learning (loss functions (5a) and (5b), see also Prop. 2.1). Given the complexity of the resulting optimization problems, we also introduce tailored solution methods (Algorithms 1 and 2). In summary, the hypothesis class is the main contribution, with the loss functions and solution methods serving a secondary and supporting role. __[Q2: Benchmarking]__ To the best of our knowledge, the only existing approach for learning constraints under similar settings (1. observing signal-(optimal) solution pairs, 2. a non-fixed objective function varying with signals, 3. allowance for noise) within inverse optimization is [Aswani2018]. However, this approach lacks a tractable solution method, relying instead on enumeration, which, as noted in Section 4.2, becomes impractical even for small problems. __[Q3: Zero training loss]__ The synthetic data generated by the forward problem (14) can be explained by multiple network configurations. Specifically, both networks in Figure 3 can exactly reproduce the training data, meaning there is no single "right" network structure, i.e., multiple structures can be consistent with the observed input-output pairs. 
Without additional information beyond the training data, inverse optimization cannot differentiate between these structures. However, this also presents an opportunity: One can design simpler network structures, such as those with fewer connections, that perform equally well. This is precisely the case demonstrated in Section 4.2. Additionally, in response to the comment: "It is unclear which methods are used as a benchmark, especially for Experiment 2," we clarify that, as discussed at the end of Section 4.2, the enumeration approach proposed by [Aswani2018] can also achieve zero loss. However, this approach requires exponential computational effort to reach the same result. Moreover, it does not circumvent the existence of multiple optimal network structures, meaning it faces the same fundamental challenge discussed above. **References:** [Aswani2018] Aswani, A., Shen, Z.-J., and Siddiq, A. Inverse optimization with noisy data. Operations Research, 66(3):870–892, 2018. [Mohajerin2018] Mohajerin Esfahani, P., Shafieezadeh-Abadeh, S., Hanasusanto, G. A., and Kuhn, D. Data-driven inverse optimization with imperfect information. Mathematical Programming, 167:191–234, 2018. We would also like to address the referee's other concerns below. **[Proof of Theorem 3.3]** Thank you for pointing out this concern. We kept the proofs short due to lack of space. We will make sure that both Theorem 3.3 and Lemma B.1 are fully shown in the revision, elaborating further on the dualization trick. **[Proof of suboptimality loss]** Yes, they are the same. We will modify the text of the existing proof to be applicable for both losses to improve clarity. **[Algorithm 1 in Experiment 1]** The numerical results in Table 1 indicate that Algorithm 1 can become trapped in local optima, whereas Algorithm 2, due to its smoothing properties, is better able to escape these local optima and generally performs better. 
For this reason, Algorithm 1 was not included in the extended numerical experiments. **[Quadratic Cost Performance]** In Example 3.2, it is mathematically demonstrated that learning a quadratic objective effectively results in a linear policy. In contrast, the proposed hypothesis class induces a richer policy class with piecewise behavior, leading to improved performance. However, there is indeed a trade-off between loss and runtime among the methods. **[Repeated Gurobi approach numbers in Table 2]** Thanks a lot for catching this copy error; the correct values for the rows related to "predict." are as follows: | Method | Loss | \$\ell_\theta^p /\ell^p\$ | \$\ell_\theta^\text{sub}\$ | \$\ell^{\text{sub}}\$ | time | |---------------|----------|-------------------------|--------------------------|---------------------|----------| | Gurobi \$p=3\$ | predict. | \$6.32\$ | \$4.44\$ | \$0.49\$ | \$1800\$s | | Gurobi \$p=6\$ | predict. | \$6.54\$ | \$4.31\$ | \$0.76\$ | \$1800\$s |
Summary: The paper proposes a novel inverse optimization (IO) framework to learn feasible regions (constraints) from observed decisions. Key contributions: Two loss functions: the predictability loss minimizes the perturbation required to make observed decisions feasible and near-optimal; the suboptimality loss penalizes violations of feasibility and optimality using slack variables. Hypothesis class: models feasible regions through a parametric function $g_\theta(x, s) = \min_z \|x - A_\theta(s) z - b_\theta(s)\|^2$, where $z$ belongs to a primitive set $\mathcal{Z}$ (e.g., a hypercube), and $A_\theta(s)$, $b_\theta(s)$ are learnable parameters. Algorithms: block coordinate descent with adaptive smoothing for general cases; convex and mixed-integer linear program (MILP) reformulations for specific primitive sets $\mathcal{Z}$. Results: recovers synthetic constraints (e.g., polygons, $L_p$-balls) and power grid topologies with 0% violation in noiseless cases, outperforming baselines. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Proposition 2.1: $L=0$ iff $x$ is optimal. Theorem 3.3: reformulates learning into tractable programs. Experimental Designs Or Analyses: Synthetic tests: simple 2D regions. Power systems: a realistic DC optimal power flow task. Limitation: no high-dimensional/complex tests on conic primitive sets. Supplementary Material: Yes, the proofs of the theoretical results. Relation To Broader Scientific Literature: Extends prior work (e.g., Aswani et al., 2018) by avoiding constraint enumeration. Advances IO by shifting focus from objective learning to constraint learning. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. The paper is well written and structured. 2. First framework for learning feasible regions in IO. 3. Handles discontinuous/nonconvex regions. Weaknesses: 1. Relies on predefined primitive sets $\mathcal{Z}$ (e.g., a hypercube). 2. Limited to linear objectives; nonlinear cases untested. 
Other Comments Or Suggestions: The figures are hard to understand. I would suggest the authors prepare better illustrations. Questions For Authors: 1. In the predictability loss, why does $J_{\theta}\leq 0$ imply optimality? $J_{\theta}\leq 0$ only implies that the learned constraint set is tightened and may exclude $x$ as the optimum. 2. How would the model perform with adversarial inputs? 3. What is the main contribution of the paper? Is it the new design of the approximation function $g_\theta(x,s)$? In addition, I don't quite understand the MILP part; does the integrality come from the binary variables in $A$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the comments/questions. Here is our response: __[Q1: Implication of nonpositive loss]__ The reviewer is indeed correct that for the function $J_\theta$ defined in (4), the condition $J_{\theta}(\boldsymbol{x}, \boldsymbol{s}) \leq 0$ does not necessarily imply that the point $\boldsymbol{x}$ meets both optimality and feasibility conditions. However, there may be a misunderstanding between $J\_\theta(\boldsymbol{x}, \boldsymbol{s})\le 0$ and $\ell\_\theta(\boldsymbol{x}, \boldsymbol{s})\le 0$. In this regard, let us note that considering $\ell^{\rm p}\_\theta$ and $\ell^{\rm sub}\_\theta$ introduced in (5a) and (5b), if these loss functions at a given point are nonpositive (e.g., $\ell^{\rm p}\_\theta(\boldsymbol{x},\boldsymbol{s}) \le 0$, which by definition implies $\ell^{\rm p}\_\theta(\boldsymbol{x},\boldsymbol{s}) = 0$), then not only do we have $J\_{\theta}(\boldsymbol{x}, \boldsymbol{s}) \leq 0$ but we also have $g\_{\theta}(\boldsymbol{x}, \boldsymbol{s}) \leq 0$ because of the constraints of the function $\ell^{\rm p}$ in (5a). This combination imposed in (5a) guarantees both feasibility and optimality of $\boldsymbol{x}$. An identical line of argument can be applied to the suboptimality loss $\ell^{\rm sub}$ defined in (5b). This discussion is formally provided in Proposition 2.1 in the main body and its respective proof in the supplementary material. __[Q2: Adversarial inputs]__ There are two possible interpretations of adversarial inputs: (i) One interpretation is that the input $\boldsymbol{s}$ lacks sufficient exploration, making it impossible to uncover the ground truth even when the correct hypothesis class is chosen. However, our proposed method can still provide out-of-sample guarantees (or a generalization bound) if the data are i.i.d. 
However, this out-of-sample bound is perhaps less significant, as it readily follows under some mild assumptions, such as boundedness of the training parameter $\theta$, thanks to the classical covering techniques and the finite exact reformulation in Theorem 3.3; see Section 13.3 "VC-Dimension and Uniform Convergence" in [Shalev2014]. (ii) Another interpretation is that adversarial inputs refer to input-output pairs corrupted by noise. To better understand their impact, we have conducted several numerical experiments presented in Appendix H. A classical approach to mitigating this effect is to apply generic distributionally robust optimization techniques similar to those in [Mohajerin2018]. However, we have chosen not to include this discussion, as it would require introducing a new hypothesis class for learning constraints, analyzing its interaction with the loss function, and presenting the associated solution method. **References:** [Shalev2014] Shalev-Shwartz, Shai, and Shai Ben-David. Understanding machine learning: From theory to algorithms. Cambridge University Press, 2014. [Mohajerin2018] Mohajerin Esfahani, P., Shafieezadeh-Abadeh, S., Hanasusanto, G. A., and Kuhn, D. Data-driven inverse optimization with imperfect information. Mathematical Programming, 167:191–234, 2018. __[Q3: Main contributions]__ The referee is correct in noting that the main contribution of the paper is the hypothesis class for $g_\theta$ in (7). To the best of our knowledge, this is the first attempt to incorporate constraint learning within the inverse optimization framework (Thm. 3.3), which, in a special case, yields a finite tractable convex optimization problem (Prop. 3.7). To establish these results properly, we revisited the existing predictability and suboptimality loss functions in the objective function learning case and extended them to the case of constraint learning (cf. loss functions (5a) and (5b), see also Prop. 2.1). 
Additionally, the resulting learning optimization problems are challenging to solve. As such, we also consider the proposed tailored solution methods (Algorithms 1 and 2) as a contribution of this study. __[Q3: MILP part]__ There are two distinct cases in which the learning problem can be formulated as a MILP: (i) Binary matrix $\boldsymbol{A}\_\theta(\boldsymbol{s})$: As the reviewer noted, when the matrix $\boldsymbol{A}\_\theta(\boldsymbol{s}) = A \in \\{0,1\\}^{n \times p}$ is binary (see the example in Sec. 4.2, where $A$ represents network connectivity), the bilinear problems (10a) and (10b) can be reformulated as MILPs by leveraging McCormick inequalities to linearize the bilinear terms $\boldsymbol{A}_\theta(\boldsymbol{s})\boldsymbol{z}_i$. (ii) Discrete primitive set $\mathcal{Z}$ and continuous matrix $\boldsymbol{A}\_\theta(\boldsymbol{s})$: As discussed in Observation 3.8, when $\mathcal{Z}$ is discrete, the bilinear problems (10a) and (10b) can be reformulated as MILPs because the bilinear terms $\boldsymbol{A}\_\theta(\boldsymbol{s})\boldsymbol{z}_i$ can be linearized using the same techniques as in McCormick inequalities.
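As a concrete illustration of case (i) above (our own example, not taken from the paper): for a binary entry $a \in \{0,1\}$ of $\boldsymbol{A}$ and a continuous variable $z \in [z^{\mathrm{l}}, z^{\mathrm{u}}]$ from a bounded primitive set, the product $w = a\,z$ admits an exact linear description:

```latex
% Exact McCormick linearization of w = a z, with a binary and z in [z_l, z_u]:
\begin{align*}
  z^{\mathrm{l}}\, a \;\le\; w \;&\le\; z^{\mathrm{u}}\, a,\\
  z - z^{\mathrm{u}}(1-a) \;\le\; w \;&\le\; z - z^{\mathrm{l}}(1-a).
\end{align*}
```

Setting $a=0$ forces $w=0$ through the first pair of inequalities (the second pair is then slack since $z \in [z^{\mathrm{l}}, z^{\mathrm{u}}]$), while $a=1$ forces $w=z$ through the second pair; applying this entrywise to $\boldsymbol{A}\boldsymbol{z}_i$ is what yields the MILP.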
Low-Rank Adapting Models for Sparse Autoencoders
Accept (poster)
Summary: This paper proposes improving the use of SAEs in LLMs by fitting a LoRA to the LLM with the SAE inserted. The results indicate that LoRAs are an effective countermeasure to the reduction in performance caused by inserted SAEs. Claims And Evidence: The claims of the paper are well supported through a variety of experiments. The use of various different SAEs, LLMs, and training strategies is thorough and convincing. Methods And Evaluation Criteria: The eval is in line with current work on evaluating SAEs. Everything is very sensible. Theoretical Claims: N/A Experimental Designs Or Analyses: I didn't see any particular issues with the experimental designs. I was a bit confused with 5.2 in general. While I am familiar with steering LLMs, I am only vaguely able to follow the setup for this experiment. I feel Pres et al.'s setup needs to be more clearly articulated, at least in the supplement. Supplementary Material: No. I am not a fan of pointing to supplement figures from the main text. Relation To Broader Scientific Literature: This paper fits nicely into the literature of mech interpretability and is extremely up to date with the latest advancements in this fast-paced field. TopK SAEs were just published at ICLR, and SAEBench was only announced in December. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: - the paper is well organized and easy to follow. - the proposed method is sensible and intuitive, solving a clear problem with SAE usage - the experiments are thorough; I cannot think of any additional work I'd want to see done Weaknesses: - the method is somewhat a simple premise: SAE + LoRA. There isn't really a novel contribution in making LoRA or SAEs more amenable to solving the problem. Other Comments Or Suggestions: - What is "nats"? (e.g., line 088 "0.17 nats") - captions above tables are a little unusual Questions For Authors: Can you explain 5.2 in more detail? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for your feedback and time! We are especially glad to hear you found our experiments and results to be thorough and convincing. --- > Can you explain 5.2 in more detail? Certainly! We apologize for not being clearer and hope the below will explain in more detail how we’re computing our metric: For a given dataset D, for both the base and LoRA model: - For a given text sample $i \in D$, we compute the average per token log likelihood $a_i$. - On the same text sample $i \in D$, we compute the average per token log likelihood $b_i$ when we steer with an SAE feature $f$. - We repeat this over a dataset of text samples. - We normalize all the log likelihoods (both the ones with and without steering) by subtracting a constant and scaling factor such that the non-steering log likelihoods range from 0 to 100. The constant and scaling factor are computed based on the non-steered log-likelihoods. We normalize this way so that the change in log likelihood after steering loosely represents a % change in likelihood after steering. We keep track of all the changes in LL after steering as $\delta_i$. We record $\Delta_D$ as the mean change $\delta_{i, \text{LoRA}} - \delta_{i, \text{base}}$, with standard error estimates. On a dataset consisting of text that exhibit the SAE feature (which we denote as “positive” texts), a model that steers more effectively would have a higher change in LL after steering (that is $\Delta_{\text{positive}} > 0$) because it would indicate the positive text sample is more likely after steering. However, on a dataset of texts that do not exhibit the SAE feature (which we denote as “negative” texts), we want the change in LL to remain the same or slightly decrease (that is, $\Delta_{\text{negative}} \le 0$) after steering, as this indicates that steering with an SAE feature f does not significantly affect the likelihood of unrelated texts. > the method is somewhat a simple premise: SAE + LORA. 
There isn’t really a novel contribution in making LORA or SAE more amenable to solving the problem. We agree the method is simple! We believe that’s a strong appeal of our results, as practitioners prefer implementing simpler techniques. We also study the performance of our technique in many settings and analyze why the method works (Section 6), which we believe goes beyond merely combining SAEs and LoRA. > What is “nats”? (e.g., line 088 “0.17 nats”) The nat is a unit of information and is the unit of cross-entropy loss (it is the base-$e$ analog of the bit). > captions above tables is a little unusual We agree and usually prefer having them on the bottom, but having captions above the table is required as per the ICML style guide. We are also excited to share that we ran new evaluations on just our adapted model (no SAE inserted) and found that it outperforms the original model on many of the capability benchmarks across different model sizes (see the tables below). We find it compelling that the models created using our method not only demonstrate stronger evidence of actually using the interpretable SAE features downstream, but also match or even exceed the general performance capabilities of the original pretrained model. *Italicized* metrics indicate the better-performing of the two models with the SAE inserted; **bolded** metrics indicate the best-performing method overall. 
**Gemma-2-2b** |Metric|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|44.2±0.4|*45.8±0.4*|49.3±0.4|**50.0±0.4**| |HellaSwag|50.9±0.5|*52.1±0.5*|55.0±0.5|**56.0±0.5**| |BLEU|29.9±1.6|*30.6±1.6*|30.4±1.6|**32.4±1.6**| |ROUGE-1|28.2±1.6|*28.5±1.6*|26.9±1.6|**30.2±1.6**| |ROUGE-2|24.8±1.5|*26.6±1.5*|25.6±1.5|**29.1±1.6**| |MC1|23.1±1.5|*23.4±1.5*|24.1±1.5|**24.3±1.5**| --- **GEMMA-2-9b** |**Metric**|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|64.2±0.4|*65.7±0.4*|**70.0±0.4**|68.8±0.4| |HellaSwag|58.3±0.5|*59.6±0.5*|61.2±0.5|**61.9±0.5**| |BLEU|40.9±1.7|*42.4±1.7*|**43.8±1.7**|43.6±1.7| |ROUGE-1|39.0±1.7|*40.6±1.7*|42.7±1.7|**43.5±1.7**| |ROUGE-2|33.4±1.7|*36.4±1.7*|38.3±1.7|**38.8±1.7**| |MC1|27.1±1.6|*28.0±1.6*|30.5±1.6|**31.0±1.6**| --- **GEMMA-2-27b** |**Metric**|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|70.9±0.4|*71.3±0.4*|72.1±0.3|**72.7±0.3**| |HellaSwag|61.0±0.5|*62.7±0.5*|65.3±0.5|**65.5±0.5**| |BLEU|*40.9±1.7*|38.9±1.7|41.1±1.7|**41.9±1.7**| |ROUGE-1|*41.0±1.7*|38.3±1.7|40.9±1.7|**41.7±1.7**| |ROUGE-2|*37.1±1.7*|35.3±1.7|36.2±1.7|**37.1±1.7**| |MC1|30.2±1.6|*31.5±1.6*|**33.8±1.7**|32.9±1.6| --- Thank you again for taking the time to review the paper and providing helpful feedback! Do the above actions address your concerns with the paper? And are there any further clarification or modifications we could make to improve your score? --- Rebuttal Comment 1.1: Comment: Thanks authors. I like the additional experiments. I strongly believe this work should be presented and is of interest to the mechanistic interpretability community, so I will raise my score.
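The log-likelihood normalization underlying the steering metric described in this rebuttal can be sketched as follows. This is our own toy illustration with made-up numbers (the function name and values are not the authors'), assuming the affine normalization constants are fit per model on the non-steered log likelihoods and then applied to the steered ones:

```python
# Toy sketch (not the authors' code) of the rebuttal's steering metric:
# map the non-steered per-sample log likelihoods affinely onto [0, 100],
# apply the same map to the steered ones, and report the per-sample change.

def steering_deltas(ll_plain, ll_steered):
    """ll_plain[i]: avg per-token log likelihood of text i without steering;
    ll_steered[i]: same text with SAE-feature steering applied."""
    lo, hi = min(ll_plain), max(ll_plain)
    scale = 100.0 / (hi - lo)                 # maps ll_plain onto [0, 100]
    norm = lambda v: (v - lo) * scale
    return [norm(b) - norm(a) for a, b in zip(ll_plain, ll_steered)]

# Made-up numbers: steering makes both "positive" texts more likely.
deltas_base = steering_deltas([-2.0, -1.0], [-1.5, -0.9])   # approx. [50, 10]
deltas_lora = steering_deltas([-2.0, -1.0], [-1.2, -0.7])   # approx. [80, 30]

# Delta_D: mean advantage of the LoRA model over the base model.
delta_D = sum(l - b for l, b in zip(deltas_lora, deltas_base)) / len(deltas_base)
```

On a "positive" dataset a larger `delta_D` indicates the adapted model steers more effectively; on a "negative" dataset one hopes the deltas stay near zero, per the rebuttal's description.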
Summary: This paper trains some parts of a Transformer with LoRAs in order to make an SAE more accurate at reconstructing activations as a sparse linear sum. They achieve impressive upstream metric results at a low cost, and also achieve reasonable SAEBench downstream (ish!) results too. Claims And Evidence: The claims about this training process seem accurate, but there are some downstream applications I am concerned with: see my questions. Additionally, I am concerned that far too much focus is on improving simple SAE metrics rather than improving interpretability concretely. This paper is very long anyway, so the authors may have considered it out of scope, but I don't think their evidence on circuit analysis is anywhere near sufficient to suggest that their technique is likely useful for improving our understanding of models. At absolute minimum, the authors should be much more open about the limitations of their work. Methods And Evaluation Criteria: The evaluation is fairly comprehensive, e.g. SAEBench and also normal CE loss style evals. I was somewhat sad that no novel findings were made to advance interp -- e.g. a circuit analysis experiment that seems to go a lot better than expected with these new tools. Theoretical Claims: Theoretical claims were fine. Experimental Designs Or Analyses: The experimental setup is done well. Supplementary Material: I skimmed this material. Relation To Broader Scientific Literature: Interpretability has wide appeal and I think if this research enabled further research (unclear) it would be impactful in wider communities. Essential References Not Discussed: N/A this paper is comprehensive with references. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: "Concretely, for each frozen weight matrix..." -- surely this is supposed to say **un**-frozen? Questions For Authors: 1a. Why is this a promising method for circuit discovery? I see significant downside.
Normally, you can take off-the-shelf SAEs, and choose a subset of them to use for circuit analysis. E.g. https://openreview.net/forum?id=sdLwJTtKpM off the top of my head. Your paper requires a very expensive training procedure that will need to be redone if different SAEs need to be attached. 1b. In addition to those worries about circuit analysis, I don't think you can use error terms with your approach? But often attaching lots of SAEs is very lossy, so error terms are essential. 2. Why is this a promising method for applications of probing? A key reason to do probing work, from my perspective, is to be able to make monitors for production-ready systems, e.g. frontier models at labs. But if the way to monitor requires editing weights before the SAE, then the latents will not be able to be used on their own as probes (for example); entirely new model weights will need to be used. And it seems your weights have a decent dip in performance on e.g. MMLU, so this seems to limit applications of your tool. Code Of Conduct: Affirmed. Overall Recommendation: 3
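For context on question 1b: the "error term" trick the reviewer refers to is a standard device in SAE-based circuit analysis — the SAE reconstruction is inserted together with a frozen residual so the forward pass stays exactly faithful. A minimal sketch with a toy encoder/decoder (not the paper's setup):

```python
import numpy as np

def sae_with_error(x, encode, decode):
    """Insert an SAE but keep the forward pass faithful: reconstruction
    plus a frozen 'error node' holding whatever the SAE fails to capture."""
    x_hat = decode(encode(x))
    err = x - x_hat        # treated as a constant, not attributed to features
    return x_hat + err     # equals x, so downstream computation is unchanged

rng = np.random.default_rng(0)
d, m = 8, 32
W_enc = rng.normal(size=(m, d))
W_dec = rng.normal(size=(d, m))
encode = lambda x: np.maximum(W_enc @ x, 0.0)  # toy ReLU SAE encoder
decode = lambda h: W_dec @ h                   # toy linear decoder

x = rng.normal(size=d)
print(np.allclose(sae_with_error(x, encode, decode), x))  # True by construction
```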
Rebuttal 1: Rebuttal: We are greatly thankful for your time and feedback, especially related to the limitations of our work. We were glad to hear you found the evaluations comprehensive.

---

> Additionally, I am concerned that far too much focus is on improving simple SAE metrics… At absolute minimum, the authors should be much more open about the limitations of their work.

This is a fair criticism, thank you! We agree that our multi-SAE section does not directly show improvements to SAE circuits. We have added the following to the limitations section, please let us know what you think!

```
While our approach develops models that more effectively leverage the interpretable features learned by SAEs in downstream tasks, we do not yet show that these can be leveraged into additional insights as to how the original model internally computes using these features. We leave this important direction for future work.
```

> In addition to those worries about circuit analysis, I don’t think you can use error terms with your approach? But often attaching lots of SAEs is very lossy, so error terms are essential.

This is a good question! It is true that one cannot use error terms with our method to obtain the activations of the original model, but in return we get the benefit that our method directly attacks the issue that “attaching lots of SAEs is very lossy.” For example, note that in Figure 6, even inserting 7 SAEs with our method results in a reasonable cross entropy loss of 2.78, and inserting 3 SAEs into the LoRA model has better loss than the pretrained model with 1 SAE. We also note that one could use error terms to recover the performance of the adapted model, which we show below is similar to (and in some cases even better than) the original model.
> Your paper requires a very expensive training procedure that will need to be redone if different SAEs need to be attached

While it’s true the training procedure needs to be redone if different SAEs are attached, the training procedure is actually very cheap. Taking off-the-shelf SAEs, training LoRA adapters takes only a few minutes or hours (depending on your model size), which is many times cheaper than the original SAE training process.

> Why is this a promising method for applications of probing? …if the way to monitor requires editing weights before the SAE then the latents will not be able to used on their own as probes

This is a great point and something we overlooked. We have added a discussion of this to Section 5.1. However, as you note, someone using our method for probing could run the adapted model without the SAE (because they would just be using it for probing), and we are excited to show that in new experiments, the adapted model with no SAE is actually as good as or better than the original model on many capability benchmarks:

*Italicized* indicates the best performing of the models with the SAE inserted. **Bolded** indicates the best overall.
**Gemma-2-2b** |Metric|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|44.2±0.4|*45.8±0.4*|49.3±0.4|**50.0±0.4**| |HellaSwag|50.9±0.5|*52.1±0.5*|55.0±0.5|**56.0±0.5**| |BLEU|29.9±1.6|*30.6±1.6*|30.4±1.6|**32.4±1.6**| |ROUGE-1|28.2±1.6|*28.5±1.6*|26.9±1.6|**30.2±1.6**| |ROUGE-2|24.8±1.5|*26.6±1.5*|25.6±1.5|**29.1±1.6**| |MC1|23.1±1.5|*23.4±1.5*|24.1±1.5|**24.3±1.5**| --- **GEMMA-2-9b** |**Metric**|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|64.2±0.4|*65.7±0.4*|**70.0±0.4**|68.8±0.4| |HellaSwag|58.3±0.5|*59.6±0.5*|61.2±0.5|**61.9±0.5**| |BLEU|40.9±1.7|*42.4±1.7*|**43.8±1.7**|43.6±1.7| |ROUGE-1|39.0±1.7|*40.6±1.7*|42.7±1.7|**43.5±1.7**| |ROUGE-2|33.4±1.7|*36.4±1.7*|38.3±1.7|**38.8±1.7**| |MC1|27.1±1.6|*28.0±1.6*|30.5±1.6|**31.0±1.6**| --- **GEMMA-2-27b** |**Metric**|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|70.9±0.4|*71.3±0.4*|72.1±0.3|**72.7±0.3**| |HellaSwag|61.0±0.5|*62.7±0.5*|65.3±0.5|**65.5±0.5**| |BLEU|*40.9±1.7*|38.9±1.7|41.1±1.7|**41.9±1.7**| |ROUGE-1|*41.0±1.7*|38.3±1.7|40.9±1.7|**41.7±1.7**| |ROUGE-2|*37.1±1.7*|35.3±1.7|36.2±1.7|**37.1±1.7**| |MC1|30.2±1.6|*31.5±1.6*|**33.8±1.7**|32.9±1.6| > …surely this is supposed to say un-frozen? We think this is written correctly. To clarify, in LoRA finetuning, the weight matrices W of the original model are frozen, and only low rank matrices A and B are learned. The forward pass through the model is then (W + AB)x instead of Wx. We hope this clarifies things; we apologize for not being more clear. --- Thank you again for taking the time to review the paper and providing helpful feedback! Do the above actions address your concerns with the paper? If not, what further clarification or modifications could we make to improve your score? 
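The (W + AB)x forward pass described in the reply above can be sketched as follows (a minimal sketch with made-up dimensions; note that with the usual zero initialization of one factor, the adapted model starts out identical to the frozen one):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2                       # hidden size and LoRA rank (r << d)
W = rng.normal(size=(d, d))       # frozen pretrained weight matrix
A = rng.normal(size=(d, r))       # trainable low-rank factor
B = np.zeros((r, d))              # trainable factor, zero-initialized
x = rng.normal(size=d)

out_original = W @ x              # original forward pass: Wx
out_adapted = (W + A @ B) @ x     # LoRA forward pass: (W + AB)x

# With B = 0, AB = 0 and the two passes agree exactly at initialization.
print(np.allclose(out_original, out_adapted))
```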
--- Rebuttal Comment 1.1: Comment: > We have added the following to the limitations section I broadly like that comment but think it should explicitly have "circuit" as a substring somewhere. And reference with `\Cref` the circuits section. > We think this is written correctly Yes, I follow now, thanks Staring more at the metrics, I do agree there is not too much of a capability hit to Gemma 27B which is the most relevant model. So I update my score to a 3 (weak accept).
Summary: This paper introduces an approach for improving sparse autoencoders (SAEs) used in language model interpretability by using Low-Rank Adaptation (LoRA) to finetune the language model itself around a previously trained SAE. Their method freezes both the original model and the SAE while only training low-rank adapters to minimize the KL divergence between the model's original outputs and the outputs with the SAE inserted. The authors present experiments showing their approach reduces the cross entropy loss gap by 30-55% across various settings (SAE sparsity, width, model size, LoRA rank, and model layer). They demonstrate computational efficiency compared to end-to-end SAEs and claim improvements on downstream tasks, including SAEBench metrics and general language model capabilities. Claims And Evidence: While the paper presents comprehensive experimental evidence for its technical claims, there is a fundamental conceptual issue that undermines the paper's premise: 1. The paper claims to improve model interpretability by adapting the model to better accommodate SAEs. However, this approach appears to misunderstand the purpose of SAEs in mechanistic interpretability. SAEs are meant to be analytical tools that reveal the features present in a fixed, pretrained model's activations. By modifying the model itself to better accommodate the SAE, the authors are effectively changing what's being analyzed rather than improving the analysis method. 2. The cross-entropy loss improvements are well-documented, but it's unclear whether these improvements actually serve the goal of better interpretability or simply represent overfitting the model to better accommodate a specific analysis tool. Methods And Evaluation Criteria: The proposed methods are technically sound but conceptually problematic for the interpretability goal: 1. 
In mechanistic interpretability, the model being analyzed should remain fixed - modifying it defeats the purpose of understanding the original model's behavior. 2. While the authors use SAEBench metrics to claim improved interpretability, it's unclear whether these metrics actually measure feature interpretability or just the degree to which the model has been tuned to make the SAE perform better. 3. The evaluation focuses heavily on cross-entropy loss and general model performance metrics, which are not necessarily aligned with the goal of finding more interpretable features. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The experiments technically demonstrate what they claim, but several issues arise: 1. There's no evaluation of whether the features discovered after model adaptation are actually more interpretable to humans, only that they perform better on automated metrics. 2. The paper doesn't address whether adapting the model might actually hide or modify important original features that would be relevant for interpretability. 3. The feature steering experiments don't clearly show that the steered features are more semantically coherent, only that they have stronger statistical effects. Supplementary Material: Yes. Relation To Broader Scientific Literature: The paper situates itself within mechanistic interpretability literature but appears to misunderstand a fundamental aspect of this field. Traditional mechanistic interpretability approaches aim to analyze fixed models to understand how they work internally. By modifying the model to better accommodate the analysis tool (SAE), this paper reverses the relationship between the object of study and the analytical method. This reversal represents a significant departure from the goals described in foundational works and more recent SAE application papers, which focus on understanding fixed models rather than adapting models to analysis methods.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The technical implementation is sound and well-executed. 2. The experiments are comprehensive across different model scales and SAE configurations. 3. The method does effectively reduce the computational cost compared to e2e SAEs. Weaknesses: 1. The central premise appears to fundamentally misunderstand the purpose of SAEs in mechanistic interpretability. SAEs are meant to discover interpretable features in fixed models, not to guide model adaptation. 2. By modifying the model to better fit the SAE, the authors may be creating an artificial scenario that doesn't reflect how the original model actually processes information, undermining the core goal of interpretability. 3. The paper conflates sparsity with interpretability. While sparsity can help with interpretability, the primary goal is to find semantically meaningful features. The paper focuses almost exclusively on improving sparse reconstruction without demonstrating improved semantic interpretability. 4. The approach may actually hinder true interpretability by adapting the model to make the SAE work better, potentially masking or altering the original computational mechanisms that interpretability research aims to discover. 5. The paper doesn't provide human evaluation of the interpretability of the features found after model adaptation compared to those from standard SAEs. Other Comments Or Suggestions: What does the $\textbf{f}$ in Eq. 5 (line 158) stand for? Questions For Authors: Questions for Authors 1. How do you reconcile your approach of modifying the model with the traditional goal of mechanistic interpretability, which is to understand the computational mechanisms of fixed, pretrained models? Does changing the model not defeat the purpose of analyzing it? 2. 
Do you have evidence that the features discovered after model adaptation are more semantically interpretable to humans, rather than just achieving better reconstruction or downstream metrics? 3. Have you analyzed whether the adapted model still exhibits the same internal computational patterns as the original model? If not, how can we be confident that we are still studying the same phenomena? 4. Your method essentially changes what is being analyzed rather than improving the analysis method. How do you ensure that the modified model still maintains the properties that made the original model worth studying? 5. Sparsity is typically used as a proxy for interpretability, not as the end goal. Your paper focuses heavily on improving the sparsity-performance tradeoff, but does this actually translate to more humanly interpretable features? What evidence do you have for this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your time and thorough feedback, especially with regards to our methodology. We were glad to hear you found the implementation to be well executed. --- > How do you reconcile your approach of modifying the model with the traditional goal of mechanistic interpretability? This is an important point, thank you! As we state in our limitations (Section 7.1), our method is not appropriate when the model cannot be modified. However, while much interpretability research focuses on fixed models, other work builds interpretable models from scratch or modifies existing models to be interpretable. As cited in our related work, this includes Lai & Heimersheim, 2024; Lai & Huang, 2024; Elhage et al., 2022b; Liu et al., 2023; Liu et al., 2024; Heimersheim 2024. We consider our paper well situated in this latter category and thus an important research contribution. Moreover, because pretrained models perform poorly with SAEs inserted, one may be skeptical if models are actually using the more interpretable SAE latents. An interpretable basis is unhelpful if the model does not actually use it. Because our method optimizes the model to use SAE latents downstream, we can be more confident they are really using them. > Have you analyzed whether the adapted model still exhibits the same internal computational patterns? Great question! In Figure 11, we compare the distance between the downstream layer activations of the pretrained model (gemma-2-2b) with the downstream layer activations of 1) the pretrained model with the SAE, and 2) the adapted model with the SAE. We find that the adapted model with the SAE has a higher cosine similarity with the pretrained model’s original activations than the pretrained model with the SAE. Thus, our adapted model + SAE is more similar to the original model than the pretrained model + SAE. We also ran a quick experiment comparing just the adapted and pretrained model (no SAE involved). 
We found the average per-layer cosine similarity between pretrained and adapted model activations on the Pile was consistently greater than 0.99 (for comparison, the average cosine similarity with the inserted SAE is ~0.95). > How do you ensure that the modified model still maintains the properties that made the original model worth studying? Great question! It is true that by modifying the model, we may be losing capabilities of the original model. To allay this concern, we ran our adapted model on the benchmarks from Table 5 and found that not only does the modified model still perform competitively with the pretrained model, but that in many cases it *outperforms* the original pretrained model. *Italicized* metrics indicate best performing between models with SAE inserted **Bolded** metrics indicate best method overall. **Gemma-2-2b** |Metric|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|44.2±0.4|*45.8±0.4*|49.3±0.4|**50.0±0.4**| |HellaSwag|50.9±0.5|*52.1±0.5*|55.0±0.5|**56.0±0.5**| |BLEU|29.9±1.6|*30.6±1.6*|30.4±1.6|**32.4±1.6**| |ROUGE-1|28.2±1.6|*28.5±1.6*|26.9±1.6|**30.2±1.6**| |ROUGE-2|24.8±1.5|*26.6±1.5*|25.6±1.5|**29.1±1.6**| |MC1|23.1±1.5|*23.4±1.5*|24.1±1.5|**24.3±1.5**| --- **GEMMA-2-9b** |**Metric**|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|64.2±0.4|*65.7±0.4*|**70.0±0.4**|68.8±0.4| |HellaSwag|58.3±0.5|*59.6±0.5*|61.2±0.5|**61.9±0.5**| |BLEU|40.9±1.7|*42.4±1.7*|**43.8±1.7**|43.6±1.7| |ROUGE-1|39.0±1.7|*40.6±1.7*|42.7±1.7|**43.5±1.7**| |ROUGE-2|33.4±1.7|*36.4±1.7*|38.3±1.7|**38.8±1.7**| |MC1|27.1±1.6|*28.0±1.6*|30.5±1.6|**31.0±1.6**| --- **GEMMA-2-27b** |**Metric**|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|70.9±0.4|*71.3±0.4*|72.1±0.3|**72.7±0.3**| |HellaSwag|61.0±0.5|*62.7±0.5*|65.3±0.5|**65.5±0.5**| 
|BLEU|*40.9±1.7*|38.9±1.7|41.1±1.7|**41.9±1.7**| |ROUGE-1|*41.0±1.7*|38.3±1.7|40.9±1.7|**41.7±1.7**| |ROUGE-2|*37.1±1.7*|35.3±1.7|36.2±1.7|**37.1±1.7**| |MC1|30.2±1.6|*31.5±1.6*|**33.8±1.7**|32.9±1.6| > Do you have evidence that the features discovered after model adaptation are more semantically interpretable to humans? Thank you for this question; we apologize for not being clearer! We are not trying to find more interpretable features. Indeed, in sections 4.1 & 4.2 (where we recover up to 55% of the loss gap), we adapt only after the SAE, so the features (and their interpretability) are *unchanged*. Rather, we are creating an alternative model that better uses these interpretable features downstream. > What does the f in Eq. 5 (line 158) stand for? SAE dictionary **f**eatures. --- Thank you again for taking the time to review the paper and providing helpful feedback! Do the above actions address your concerns with the paper? If not, what further clarification or modifications could we make to improve your score?
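The average per-layer cosine-similarity comparison described in this rebuttal can be sketched as follows (hypothetical activation arrays; the real experiment compares pretrained vs. adapted activations on the Pile):

```python
import numpy as np

def avg_cosine_similarity(acts_a, acts_b):
    """Mean cosine similarity between paired activation vectors.

    acts_a, acts_b: arrays of shape (n_tokens, d_model) taken from the
    same token positions in two models (one layer of each).
    """
    num = (acts_a * acts_b).sum(axis=-1)
    den = np.linalg.norm(acts_a, axis=-1) * np.linalg.norm(acts_b, axis=-1)
    return float((num / den).mean())

# Sanity checks on toy data: identical activations give 1.0,
# orthogonal activations give 0.0.
a = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([[0.0, 3.0], [4.0, 0.0]])
print(avg_cosine_similarity(a, a), avg_cosine_similarity(a, b))
```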
Summary: This paper introduces a novel approach to improve the interpretability of SAEs by using LoRA to fine-tune **LLMs** around previously trained SAEs. Unlike previous work that focused on optimizing SAE architectures, this approach optimizes the language model itself to work better with an existing SAE. Across various experiments with SAE sparsity, width, language model size, LoRA rank, and model layer, the method reduces the cross entropy loss gap by 30% to 55% when SAEs are inserted during the forward pass. The technique can adapt multiple SAEs simultaneously, significantly reducing compound cross entropy loss (e.g., from 7.83 to 2.78 nats with 7 SAEs). The paper demonstrates that improving model interpretability is not limited to post-hoc SAE training; Pareto improvements can also be achieved by directly optimizing the model while keeping the SAE fixed. Claims And Evidence: Mostly; see comments below. Methods And Evaluation Criteria: The idea of training the model according to interpretability methods is very novel, and I consider it to be the main reason for the paper to be accepted. The evaluation criteria are current and up to date. Theoretical Claims: No theoretical claims have been made in this paper. Experimental Designs Or Analyses: Overall, the experiment setup is mostly comprehensive and sound, but there are still points to be improved. More ablation studies and analyses could be conducted to prove that the performance gain of the method does not appear randomly but is a consistent improvement. For example: - More diverse model architectures could be tested, e.g., Mistral and Qwen (where there are already some SAEs and there's no need to train an SAE from scratch). - Different SAE sizes and capabilities could also be considered to prove that this method consistently surpasses the plain SAE rather than needing a strong SAE to start. Supplementary Material: There is no supplementary material for this paper. I reviewed the appendix attached in the PDF given.
Relation To Broader Scientific Literature: Yes, the relation is clearly identified by the authors, and the method indeed improves the performance and interpretability compared to related works and has addressed some of the concerns (e.g., training time and efficiency) raised in previous works. Essential References Not Discussed: No Other Strengths And Weaknesses: The significance and scalability of this method could be a little concerning, and I also somewhat doubt whether this method could actually be applied to real-world models (to address this, maybe the authors could test the method on benchmarks that are closer to real-world usage, but that won't affect my general evaluation of this idea). Other Comments Or Suggestions: No Questions For Authors: 1. Could you replicate the performance experiment on standard NLP benchmarks on the 27B model? I'm concerned about whether this method could scale up to bigger models (and also more complicated features and capabilities). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your time and help, especially in regards to your suggestions for additional experiments, which we have now run. We are especially glad that you appreciated the novelty of modifying the model itself to better fit a sparse autoencoder, as we agree that this idea is the major impact of our work. --- > More diverse model architectures could be tested, e.g., Mistral and Qwen (where there are already some SAEs and there's no need to train SAE from zero). Thank you for this suggestion! We ran additional experiments on the layer 16 Mistral 7B SAE from SAELens and found that our method still works (we did not see a Qwen SAE on SAELens). The Mistral 7B base model has a CE loss of 1.5485 on our val set, the model with the SAE inserted has a CE loss of 1.8007, and the LoRA model with SAE inserted has a CE loss of 1.6548, reducing the cross entropy loss gap by 57.8%. > Different SAE sizes and capabilities could also be considered to prove that this method is constantly surpassing SAE rather than needing a strong SAE to start. We show that our method works on a large number of SAE sizes and sparsities in Section 4; for all of them it surpasses the performance of the original SAE. We also show that even for partially trained SAEs our method works well; see Figure 1. Please let us know if there is anything we could add to the main text (experiments or wording) to make this stronger, thank you! > Could you replicate the performance experiment on standard NLP benchmarks on the 27B model? I'm concerned about whether this method could scale up to bigger models (and also more complicated features and capabilities). Thank you for this suggestion! Below, we include a version of Table 5 with a 27B model. Our method seems to scale up well to bigger models. 
Interestingly, the adapted model alone outperforms the original pretrained model on many of the capability benchmarks across all model sizes; we believe this is evidence our method scales well, and that the adapted models from our method could be of interest for real world use cases. *Italicized* metrics indicate best performing between models with SAE inserted **Bolded** metrics indicate best performing method overall. **Gemma-2-2b** |Metric|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|44.2±0.4|*45.8±0.4*|49.3±0.4|**50.0±0.4**| |HellaSwag|50.9±0.5|*52.1±0.5*|55.0±0.5|**56.0±0.5**| |BLEU|29.9±1.6|*30.6±1.6*|30.4±1.6|**32.4±1.6**| |ROUGE-1|28.2±1.6|*28.5±1.6*|26.9±1.6|**30.2±1.6**| |ROUGE-2|24.8±1.5|*26.6±1.5*|25.6±1.5|**29.1±1.6**| |MC1|23.1±1.5|*23.4±1.5*|24.1±1.5|**24.3±1.5**| --- **GEMMA-2-9b** |**Metric**|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|64.2±0.4|*65.7±0.4*|**70.0±0.4**|68.8±0.4| |HellaSwag|58.3±0.5|*59.6±0.5*|61.2±0.5|**61.9±0.5**| |BLEU|40.9±1.7|*42.4±1.7*|**43.8±1.7**|43.6±1.7| |ROUGE-1|39.0±1.7|*40.6±1.7*|42.7±1.7|**43.5±1.7**| |ROUGE-2|33.4±1.7|*36.4±1.7*|38.3±1.7|**38.8±1.7**| |MC1|27.1±1.6|*28.0±1.6*|30.5±1.6|**31.0±1.6**| --- **GEMMA-2-27b** |**Metric**|SAE|SAE+LoRA|Original|LoRA| |--------------|---------------|--------------------|----------------|------------------| |MMLU|70.9±0.4|*71.3±0.4*|72.1±0.3|**72.7±0.3**| |HellaSwag|61.0±0.5|*62.7±0.5*|65.3±0.5|**65.5±0.5**| |BLEU|*40.9±1.7*|38.9±1.7|41.1±1.7|**41.9±1.7**| |ROUGE-1|*41.0±1.7*|38.3±1.7|40.9±1.7|**41.7±1.7**| |ROUGE-2|*37.1±1.7*|35.3±1.7|36.2±1.7|**37.1±1.7**| |MC1|30.2±1.6|*31.5±1.6*|**33.8±1.7**|32.9±1.6| --- Thank you again for taking the time to review the paper and providing helpful feedback! Do the above actions address your concerns with the paper? 
If not, what further clarification or modifications could we make to improve your score? --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal! I've raised my score accordingly.
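The "cross entropy loss gap reduced by X%" metric used throughout these rebuttals can be computed directly from the three CE losses; e.g., with the Mistral-7B numbers quoted above (base 1.5485, SAE inserted 1.8007, LoRA + SAE 1.6548), this sketch reproduces (up to rounding) the reported 57.8% figure:

```python
def loss_gap_recovered(base_ce, sae_ce, adapted_ce):
    """Fraction of the SAE-insertion CE loss gap closed by adaptation."""
    return (sae_ce - adapted_ce) / (sae_ce - base_ce)

# Mistral-7B numbers from the rebuttal above
frac = loss_gap_recovered(base_ce=1.5485, sae_ce=1.8007, adapted_ce=1.6548)
print(f"{100 * frac:.1f}%")
```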
Focal-SAM: Focal Sharpness-Aware Minimization for Long-Tailed Classification
Accept (poster)
Summary: The paper proposes a new sharpness aware minimization (SAM) algorithm suited for long-tailed classification. It works by formulating SAM as a regularizer, and then applying different regularization strengths for each class. The authors show that this approach can be more computationally efficient than baselines. The authors also show theoretical generalization bounds on their approach. Experiments on standard benchmarks empirically validate the effectiveness of the method. Claims And Evidence: Yes, all claims are sufficiently supported both theoretically and empirically. Methods And Evaluation Criteria: Methods and evaluation criteria are well chosen; the authors evaluate on 4 different benchmarks including ImageNet-LT. Theoretical Claims: Theoretical claims (specifically Theorem 3.1) appear correct and rely on standard techniques. Experimental Designs Or Analyses: The experimental protocol described in section 4.1 seems valid. Supplementary Material: I looked at the proofs and additional experimental results; both support the theoretical and empirical claims in the main text respectively. Relation To Broader Scientific Literature: The two key related works are ImbSAM and CC-SAM. ImbSAM divides classes into head and tail groups, unlike Focal-SAM which does not divide the classes. CC-SAM applies SAM individually to each class, making it much more computationally intensive than Focal-SAM which applies a single perturbation to the parameters. Essential References Not Discussed: No; as far as I know all relevant literature is discussed. Other Strengths And Weaknesses: The paper is overall well-written and presented with solid empirical and theoretical results. On the other hand, the idea proposed in this paper is a fairly straightforward combination of existing ideas (focal-loss and SAM). Moreover, this paper will likely be of interest mostly to the community working on long-tailed classification. Both of these limit the overall potential impact of the paper. 
Other Comments Or Suggestions: Please include standard errors over multiple runs for numerical results tables. The performance of Focal-SAM is often quite close to baselines, so establishing this statistical significance will be important. Questions For Authors: 1. How much do the numerical results vary between runs? Code Of Conduct: Affirmed. Overall Recommendation: 3
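For readers less familiar with SAM, the standard two-step update that all the variants discussed in this review build on (SAM, ImbSAM, CC-SAM, Focal-SAM) can be sketched as follows — vanilla SAM on a toy quadratic, not the paper's Focal-SAM:

```python
import numpy as np

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One step of vanilla SAM (Foret et al.): perturb the weights to a
    locally worst-case point, then descend using the gradient taken there."""
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent perturbation
    g_perturbed = grad_fn(w + eps)               # gradient at w + eps
    return w - lr * g_perturbed                  # descent with perturbed gradient

# Toy quadratic loss L(w) = ||w||^2 / 2, so grad L(w) = w
grad_fn = lambda w: w
w = np.array([1.0, -2.0])
for _ in range(100):
    w = sam_step(w, grad_fn)
print(round(float(np.linalg.norm(w)), 4))
```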
Rebuttal 1: Rebuttal: > Q1: Please include standard errors over multiple runs for numerical results tables. The performance of Focal-SAM is often quite close to baselines, so establishing this statistical significance will be important. **A1**: Thanks for your valuable suggestion! In the original paper, we report the results using the same fixed random seed for all methods to ensure a fair and consistent comparison. Following your advice, we have now conducted additional experiments using **three different random seeds** to further assess the effectiveness of our method. Specifically, we report the **average accuracy** along with the **standard error** across these runs. All other experimental settings remain the same as those described in the paper. These additional experiments are conducted on the CIFAR-100 LT datasets. Furthermore, we perform a **statistical significance** analysis using the **Mann-Whitney U test** to compare the performance of Focal-SAM against baseline methods: - Baselines marked with $^{**}$ indicate that Focal-SAM outperforms the corresponding baseline with a p-value $\le 0.05$. - Baselines marked with $^*$ indicate that Focal-SAM outperforms the corresponding baseline with a p-value $\le 0.10$. The results show that the average performance of Focal-SAM consistently surpasses other SAM-based methods across multiple runs. This further supports the effectiveness of the proposed Focal-SAM approach.
-----

**CIFAR-100 LT**

| Method | IR100 | IR200 | IR50 | IR10 |
| ------------------ | ------------------- | ------------------- | ------------------- | ------------------- |
| CE | 41.2$\pm$0.3$^{**}$ | 37.7$\pm$0.3$^{**}$ | 45.3$\pm$0.4$^{**}$ | 57.9$\pm$0.2$^{**}$ |
| CE+SAM | 41.9$\pm$0.2$^{**}$ | 38.9$\pm$0.2$^{**}$ | 46.5$\pm$0.4$^{**}$ | 59.8$\pm$0.1$^{**}$ |
| CE+ImbSAM | 42.8$\pm$0.2$^{**}$ | 38.8$\pm$0.3$^{**}$ | 47.7$\pm$0.2 | 60.2$\pm$0.3$^{**}$ |
| CE+CC-SAM | 43.0$\pm$0.3$^{*}$ | 39.1$\pm$0.1$^{**}$ | 47.5$\pm$0.2$^{*}$ | 60.2$\pm$0.2$^{**}$ |
| **CE+Focal-SAM** | **43.7**$\pm$0.3 | **39.7**$\pm$0.2 | **48.0**$\pm$0.3 | **60.7**$\pm$0.1 |
| LA | 44.8$\pm$0.2$^{**}$ | 42.0$\pm$0.3$^{**}$ | 50.2$\pm$0.2$^{**}$ | 59.3$\pm$0.3$^{**}$ |
| LA+SAM | 49.6$\pm$0.4$^{**}$ | 45.1$\pm$0.4$^{*}$ | 53.1$\pm$0.2$^{**}$ | 62.6$\pm$0.2$^{**}$ |
| LA+ImbSAM | 47.6$\pm$0.3$^{**}$ | 43.6$\pm$0.2$^{**}$ | 52.4$\pm$0.1$^{**}$ | 62.0$\pm$0.3$^{**}$ |
| LA+CC-SAM | 50.1$\pm$0.1$^{**}$ | 45.4$\pm$0.2 | 53.4$\pm$0.4$^{**}$ | 62.9$\pm$0.2$^{**}$ |
| **LA+Focal-SAM** | **50.6**$\pm$0.2 | **45.8**$\pm$0.3 | **54.3**$\pm$0.2 | **63.5**$\pm$0.2 |
| FFT | 78.7$\pm$0.2$^{**}$ | 76.1$\pm$0.2$^{**}$ | 81.3$\pm$0.1$^{**}$ | 85.5$\pm$0.1$^{**}$ |
| FFT+SAM | 81.0$\pm$0.1$^{**}$ | 77.6$\pm$0.1$^{**}$ | 83.4$\pm$0.2$^{**}$ | 86.7$\pm$0.1$^{**}$ |
| FFT+ImbSAM | 80.4$\pm$0.2$^{**}$ | 77.3$\pm$0.1$^{**}$ | 81.7$\pm$0.1$^{**}$ | 86.5$\pm$0.2$^{**}$ |
| FFT+CC-SAM | 81.1$\pm$0.1$^{**}$ | 78.2$\pm$0.1$^{**}$ | 83.5$\pm$0.1$^{**}$ | 87.0$\pm$0.1$^{*}$ |
| **FFT+Focal-SAM** | **81.6**$\pm$0.1 | **78.8**$\pm$0.2 | **83.9**$\pm$0.1 | **87.3**$\pm$0.2 |
| LIFT | 81.9$\pm$0.1$^{*}$ | 79.7$\pm$0.1$^{**}$ | 83.0$\pm$0.2$^{*}$ | 85.1$\pm$0.1$^{*}$ |
| LIFT+SAM | 82.0$\pm$0.2 | 79.7$\pm$0.1$^{**}$ | 83.1$\pm$0.1 | 85.2$\pm$0.2 |
| LIFT+ImbSAM | 81.9$\pm$0.1$^{*}$ | 79.8$\pm$0.2 | **83.2**$\pm$0.1 | 85.2$\pm$0.1 |
| LIFT+CC-SAM | 82.0$\pm$0.1 | 79.8$\pm$0.1$^{*}$ | 83.1$\pm$0.1 | 85.2$\pm$0.1 |
| **LIFT+Focal-SAM** | **82.2**$\pm$0.1 | **80.1**$\pm$0.2 | **83.2**$\pm$0.1 | **85.4**$\pm$0.2 |
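The Mann-Whitney U procedure used for the significance markers above can be sketched as follows. This is a minimal pure-Python version with the normal approximation for the p-value (a library routine such as `scipy.stats.mannwhitneyu` additionally offers exact small-sample p-values); the per-seed accuracies in the usage line are hypothetical, not the actual runs:

```python
import math

def mann_whitney_u(x, y):
    """One-sided Mann-Whitney U test for 'x tends to exceed y'.

    Minimal sketch: U counts pairs (xi, yj) with xi > yj (ties weighted
    0.5), and the p-value uses the normal approximation, which is rough
    for the tiny sample sizes (a few seeds) considered here.
    """
    U = sum((xi > yj) + 0.5 * (xi == yj) for xi in x for yj in y)
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (U - mu) / sigma
    p = 0.5 * math.erfc(z / math.sqrt(2.0))  # one-sided upper tail
    return U, p

# Hypothetical per-seed accuracies (3 seeds each), for illustration only:
U, p = mann_whitney_u([43.4, 43.7, 44.0], [40.9, 41.2, 41.5])
```

With three seeds per method, a p-value of at most 0.05 requires every Focal-SAM seed to beat every baseline seed, which is why the markers above are a fairly strict criterion.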
Summary: The paper introduces Focal-SAM, a new variant of Sharpness-Aware Minimization (SAM) designed for long-tailed classification. It aims to improve generalization for both head and tail classes by integrating the focal mechanism with SAM. Compared with baselines like ImbSAM and CC-SAM, Focal-SAM efficiently achieves flatness in both head and tail classes. The authors present a theoretical generalization bound with improved convergence rates. Moreover, they validate Focal-SAM's effectiveness through extensive experiments. ## Update The authors' response addresses my concerns. Claims And Evidence: Yes. Methods And Evaluation Criteria: **Method:** - Why not use $\tilde{L}_S^{FS}(\mathbf{w})$ (Eq. 5) as the main objective? Specifically, why treat $\tilde{L}_S^{FS}(\mathbf{w})$ as a penalty on the standard loss as in Eq. 6, which adds an additional hyperparameter $\lambda$? - Why is the perturbation computed only on $\tilde{L}_S^{FS}(\mathbf{w})$ rather than $L_S^{FS}(\mathbf{w})$ in Eq. 7? **Evaluation:** Empirical evaluation makes sense to me. Theoretical Claims: Yes, I checked Appendix B. Experimental Designs Or Analyses: Yes, I checked Section 4 and Appendix D, E. - What is the computational cost of baselines such as LA, FFT, and LIFT? If their cost is approximately half that of SAM variants, a fairer comparison would be to evaluate them with twice the number of training steps. Supplementary Material: Yes, all appendix. Relation To Broader Scientific Literature: The paper proposes a SAM variant specifically for long-tailed classification tasks. Essential References Not Discussed: No. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: (Summarized from above) 1. Why not use $\tilde{L}_S^{FS}(\mathbf{w})$ (Eq. 5) as the main objective? Specifically, why treat $\tilde{L}_S^{FS}(\mathbf{w})$ as a penalty on the standard loss as in Eq. 6, which adds an additional hyperparameter $\lambda$? 2.
Why is the perturbation computed only on $\tilde{L}_S^{FS}(\mathbf{w})$ rather than $L_S^{FS}(\mathbf{w})$ in Eq. 7? 3. What is the computational cost of baselines such as LA, FFT, and LIFT? If their cost is approximately half that of SAM variants, a fairer comparison would be to evaluate them with twice the number of training steps. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1: Why not use $\tilde{L}_S^{FS} (w)$ (Eq. 5) as the main objective? Specifically, why treat $\tilde{L}_S^{FS} (w)$ as a penalty on the standard loss as in Eq. 6, which adds an additional hyperparameter $\lambda$? Thanks for your questions! The motivation behind SAM-based methods is that the geometry of the loss landscape is closely related to a model's generalization ability. Specifically, **flatter minima** in the loss landscape generally lead to better generalization compared to sharper ones. Therefore, SAM-based methods aim to **simultaneously minimize both the loss value and the sharpness** to seek out flatter minima. In our Focal-SAM method, $\tilde{L}_S^{FS}(w)$ is a term which **reflects the sharpness of the loss landscape** at $w$. If we only minimize $\tilde{L}_S^{FS}(w)$, we can generally obtain a model parameter $w$ with a flat loss landscape. However, this **does not necessarily minimize the actual loss value** $L_S(w)$, which may remain large and degrade model performance. Therefore, we incorporate the sharpness term $\tilde{L}_S^{FS}(w)$ as a penalty term on the standard loss value $L_S(w)$, weighted by a hyperparameter $\lambda$. This enables **simultaneous minimization of both sharpness and the loss value**, where $\lambda$ controls the importance of the sharpness term. - If $\lambda$ is very large, the objective function primarily minimizes $\tilde{L}_S^{FS}(w)$, neglecting $L_S(w)$. - If $\lambda=0$, the method reduces to minimizing only the standard loss $L_S(w)$. In Fig.5 of our paper, we conduct an ablation study on $\lambda$ to explore these scenarios. The results show that **both excessively small and large values of $\lambda$ lead to suboptimal performance**, while a moderate $\lambda \approx 0.8$ achieves the best results. This highlights the importance of balancing loss minimization and sharpness control. > Q2: Why is the perturbation computed only on $\tilde{L}_S^{FS} (w)$ rather than $L_S^{FS}(w)$ in Eq. 7?
Thanks for your questions! To clarify, let's first recall the definition of the objective function $L_S^{FS} (w)$: $$ L_S^{FS} (w) = L_S(w) + \lambda \cdot \tilde{L}_S^{FS} (w) $$ where $$ \tilde{L}_S^{FS} (w) = \max _{|| \epsilon ||_2 \le \rho} \sum _{i=1}^C (1 - \pi_i)^\gamma \tilde{L}_S^i(w, \epsilon) $$ As shown, the perturbation $\epsilon$ appears **only** in the second term $\tilde{L}_S^{FS}(w)$ and is **independent** of the first term $L_S(w)$. Therefore, when optimizing $L_S^{FS}(w)$, the perturbation is **computed exclusively by solving the inner maximization problem**, yielding $\hat{\epsilon}(w)$. Specifically, we solve for the optimal perturbation $\hat{\epsilon}(w)$ as: $$ \hat{\epsilon}(w) = \arg\max _{|| \epsilon ||_2 \le \rho} \sum _{i=1}^C (1 - \pi_i)^\gamma \tilde{L}_S^i(w, \epsilon) $$ > Q3: What is the computational cost of baselines such as LA, FFT, and LIFT? If their cost is approximately half that of SAM variants, a fairer comparison would be to evaluate them with twice the number of training steps. **A3**: Thank you for your valuable suggestion! Indeed, the computational cost of baselines such as LA, FFT, and LIFT is approximately half that of SAM-based variants. To provide a fairer comparison, we conduct additional experiments in which we extend the training epochs of these baseline methods to roughly match the total computational cost of the SAM-based methods. Specifically, we double the training epochs to **400 or 40** (2$\times$ compared to the original 200 or 20) for these baselines, while keeping the SAM-based methods at **200 or 20** epochs. All other experimental settings remain consistent with those reported in the paper. These experiments are conducted on both CIFAR-100 LT and ImageNet-LT datasets. The results are summarized below. The results clearly show that, under comparable computational budgets, Focal-SAM consistently outperforms the baseline methods across different datasets.
This further demonstrates that Focal-SAM effectively enhances model generalization in long-tailed learning scenarios. Below are the results:

-----

**CIFAR-100 LT**

|Method|Epoch|IR100|IR200|IR50|IR10|
|-|-|-|-|-|-|
|CE|400|41.0|37.7|45.8|57.0|
|**CE+Focal-SAM**|200|**44.0**|**39.6**|**48.1**|**60.9**|
|LA|400|45.2|42.3|49.9|58.7|
|**LA+Focal-SAM**|200|**50.7**|**46.0**|**54.5**|**63.8**|
|FFT|40|76.4|74.2|79.8|85.1|
|**FFT+Focal-SAM**|20|**81.6**|**79.0**|**83.9**|**87.3**|
|LIFT|40|82.0|**80.3**|82.9|85.1|
|**LIFT+Focal-SAM**|20|**82.4**|80.0|**83.2**|**85.4**|

-----

**ImageNet-LT**

|Method|Epoch|Head|Medium|Tail|All|
|-|-|-|-|-|-|
|LA|400|61.6|45.7|31.1|49.8|
|**LA+Focal-SAM**|200|**63.9**|**52.2**|**34.4**|**54.3**|
|FFT|40|78.9|69.1|51.9|70.5|
|**FFT+Focal-SAM**|20|**80.8**|**73.9**|**54.4**|**73.9**|
|LIFT|40|79.4|75.9|72.1|76.7|
|**LIFT+Focal-SAM**|20|**79.7**|**76.6**|**73.6**|**77.4**|
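The penalized objective discussed in A1 can be sketched numerically. The following is a minimal NumPy sketch, not the actual implementation: it uses explicit per-class losses and gradients on a toy quadratic problem, and approximates the inner maximization, as in SAM, by a single ascent step along the gradient of the focally weighted loss (in practice that gradient costs one backpropagation):

```python
import numpy as np

def focal_sam_objective(w, class_losses, class_grads, pi, gamma, lam, rho):
    """Sketch of L^FS(w) = L(w) + lam * sharpness, where the max over
    ||eps||_2 <= rho is approximated by eps_hat = rho * g / ||g||,
    with g the gradient of the focally weighted loss."""
    weights = (1.0 - pi) ** gamma
    g = (weights[:, None] * class_grads(w)).sum(axis=0)
    eps_hat = rho * g / (np.linalg.norm(g) + 1e-12)
    sharpness = weights @ (class_losses(w + eps_hat) - class_losses(w))
    return class_losses(w).sum() + lam * sharpness

# Toy per-class quadratic losses L_i(w) = 0.5 * a_i * ||w - c_i||^2:
a = np.array([1.0, 4.0])
c = np.array([[0.0, 0.0], [1.0, 1.0]])
losses = lambda w: 0.5 * a * ((w - c) ** 2).sum(axis=1)
grads = lambda w: a[:, None] * (w - c)
pi = np.array([0.9, 0.1])  # head-heavy class priors
val = focal_sam_objective(np.array([0.5, 0.5]), losses, grads,
                          pi, gamma=1.0, lam=0.8, rho=0.05)
```

Setting `lam=0` recovers the plain loss, while `lam>0` adds the (here strictly positive) ascent-direction sharpness penalty, mirroring the two limiting cases of $\lambda$ described in A1.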
Summary: This paper proposes a learning mechanism named Focal Sharpness-Aware Minimization (Focal SAM), which is an exquisite extension of SAM theories to long-tailed classification tasks. Compared with existing methods, the proposed Focal SAM excels at keeping the flatness of the loss landscapes of both head and tail classes. Extensive experiments show that Focal SAM outperforms existing methods on most datasets. Claims And Evidence: This paper proposes Focal SAM, which is motivated by several shortcomings of ImbSAM and CC-SAM. The authors give empirical analysis and visualizations of these. This paper also gives the generalization bound of Focal SAM based on a theorem, and concrete proofs of this theorem are attached in the appendix. Conclusively, all claims are well validated. Methods And Evaluation Criteria: Yes. The proposed Focal SAM is a successful application of SAM to long-tailed classification with the combination of class-wise scaling control. This paper may be of interest to the research community, bringing new insights and broadening horizons. Theoretical Claims: This paper proposes a theorem to show the generalization bound of Focal SAM. To the best of my knowledge, I found no technical flaws in the proof. I'm not quite sure about the part concerning PAC-Bayesian generalization. Experimental Designs Or Analyses: Yes. The experiment design is appropriate. The authors conduct experiments over four datasets. Experiments show that their Focal-SAM outperforms existing methods. The authors also make necessary explanations for this. Supplementary Material: Yes. I checked the supplementary material, where the authors prove their theorems with some other lemmas. I've made efforts to check it and found no technical flaws. The authors also give related works and more experimental results in other parts. Relation To Broader Scientific Literature: I think this paper contributes much to the research community.
Although the methodology is intuitive, the authors give detailed proofs and experiments to validate its superiority. This makes their method solid and easy to implement, illuminating the research community. Essential References Not Discussed: I found no essential references missed in this paper. Other Strengths And Weaknesses: - Strengths: 1. The paper is well-written and easy to follow. 2. The motivation is clear: the authors analyze the shortcomings of existing methods. 3. The idea is quite simple, easy to implement, and also superior. The authors give concrete theoretical and experimental analysis of their method. 4. The experiments are extensive and well-designed. - Weaknesses: 1. I notice that the running time of Focal-SAM is 50% more than the original SAM. The authors think that it's negligible given the strong performance of Focal SAM, but I can't agree. From Table 2, I observe a significant performance enhancement with the original SAM, but the improvements from SAM to Focal-SAM are not obvious. This dilutes the necessity of employing Focal SAM. 2. I think that a large portion of the theoretical contribution is provided in the appendix. This makes the conclusion of your theorem a little abrupt. I suggest the authors move several lemmas to the main paper. 3. The messages that Figure 1(b) and Table 1 try to convey seem repeated. Other Comments Or Suggestions: It seems that Focal SAM may not be very efficient. Also, I'm not quite sure about some proof details concerning PAC-Bayesian generalization. Considering all of the above, I tend to give this paper a weak reject. If the authors can answer my doubts well, I am willing to raise my score. Questions For Authors: Can you provide more explanations for the efficiency of Focal SAM? With a trade-off between performance and efficiency, I feel that Focal SAM has no explicit advantage compared with SAM. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Q1: (1) From Table 2, I observe a significant performance enhancement with the original SAM, but the improvements from SAM to Focal-SAM are not obvious. (2) I notice that the running time of Focal-SAM is 50% more than the original SAM. This dilutes the necessity of employing Focal-SAM. **A1**: Thanks for your question! **(1) Performance Gains:** It is true that the improvement from SAM to Focal-SAM is generally smaller than that from the baseline to SAM. This is expected, as SAM itself is already an effective method, and Focal-SAM is proposed as a **refinement of SAM**. Therefore, the additional improvement is naturally smaller but **still consistent and meaningful**. We also conduct multiple runs and a statistical significance analysis to further confirm this. Please see `A1` in our response to Reviewer `bvws` for details. **(2) Computational Cost:** While Focal-SAM increases training time by around 50% compared to SAM, this **does not mean it is unnecessary**. To fairly assess its benefit, we conduct experiments where we extend the training epochs of SAM to match Focal-SAM's total computational cost. Specifically, we increase the training epochs to **300 or 30** (1.5$\times$ the original 200 or 20) for SAM, while keeping Focal-SAM at **200 or 20** epochs. **In this setting, the total computational cost of SAM and Focal-SAM becomes comparable.** All other experimental settings remain the same as in the main paper. We conduct these experiments on the CIFAR-100 LT, ImageNet-LT, and iNaturalist datasets. The results are summarized below, from which we can observe **consistent improvement**.
-----

**CIFAR-100 LT**

|Method|Epoch|IR100|IR200|IR50|IR10|
|-|-|-|-|-|-|
|CE+SAM|300|43.0|39.2|46.9|60.0|
|**CE+Focal-SAM**|200|**44.0**|**39.6**|**48.1**|**60.9**|
|LDAM+DRW+SAM|300|**50.4**|**46.4**|53.0|61.2|
|**LDAM+DRW+Focal-SAM**|200|50.3|46.2|**53.8**|**62.3**|
|VS+SAM|300|49.2|45.5|53.0|63.3|
|**VS+Focal-SAM**|200|**49.7**|**45.8**|**54.5**|**63.7**|
|LA+SAM|300|50.1|45.5|53.8|63.0|
|**LA+Focal-SAM**|200|**50.7**|**46.0**|**54.5**|**63.8**|
|FFT+SAM|30|81.2|78.3|83.6|86.9|
|**FFT+Focal-SAM**|20|**81.6**|**79.0**|**83.9**|**87.3**|
|LIFT+SAM|30|82.1|**80.2**|83.1|85.2|
|**LIFT+Focal-SAM**|20|**82.4**|80.0|**83.2**|**85.4**|

-----

**ImageNet-LT**

|Method|Epoch|Head|Medium|Tail|All|
|-|-|-|-|-|-|
|LA+SAM|300|63.2|51.6|**34.8**|53.8|
|**LA+Focal-SAM**|200|**63.9**|**52.2**|34.4|**54.3**|
|FFT+SAM|30|80.6|73.1|**56.1**|73.6|
|**FFT+Focal-SAM**|20|**80.8**|**73.9**|54.4|**73.9**|
|LIFT+SAM|30|**79.8**|76.1|73.5|77.2|
|**LIFT+Focal-SAM**|20|79.7|**76.6**|**73.6**|**77.4**|

-----

**iNaturalist**

|Method|Epoch|Head|Medium|Tail|All|
|-|-|-|-|-|-|
|LA+SAM|300|68.0|71.4|72.4|71.5|
|**LA+Focal-SAM**|200|**68.4**|**72.0**|**72.5**|**71.8**|

> Q2: I think that a quite amount of theoretical contributions are provided in the appendix. This makes the conclusion of your theorem a little abrupt. I suggest the authors move several lemmas to the main paper.

**A2**: Thanks for your valuable suggestion! Due to space constraints in the anonymous submission version, we included the proof sketch and the lemmas supporting Theorem 3.1 in the appendix. We acknowledge that this may make the conclusion of Theorem 3.1 appear abrupt. **In the future version, we will revise the manuscript and relocate the proof sketch along with several key lemmas, such as Lemma B.2 and Lemma B.4, into the main text.**

> Q3: Meanings that Figure 1.b and Table. 1 try to convey seems repeated.

**A3**: Thanks for your question!
We acknowledge that there is some overlap between Figure 1(b) and Table 1 regarding computational cost. However, **the key messages we intend to convey through them are different**. In Figure 1(b), our primary goal is to directly compare Focal-SAM and CC-SAM to demonstrate the **effectiveness of Focal-SAM**, showing that it achieves better performance with higher efficiency. In contrast, Table 1 is used to **highlight the limitations of CC-SAM and to motivate the development** of our method. Specifically, we compare CC-SAM with other SAM-based methods, including SAM, ImbSAM, and Focal-SAM, and show that CC-SAM incurs higher computational costs than the others. This observation motivates our proposal of Focal-SAM as an efficient alternative. --- Rebuttal Comment 1.1: Comment: I have reviewed the response and appreciate the authors' efforts. The response clarifies my concern about the significance of the performance enhancement and the efficiency of the proposed Focal-SAM method. Additionally, I understand the theoretical contributions and the difference between Figure 1(b) and Table 1. Overall, I recognize the novelty and effectiveness of this work. Hence, I will raise my rating to 4. --- Reply to Comment 1.1.1: Comment: Thank you very much for your response and for increasing the score!
Summary: For long-tailed (imbalanced) classification, the authors propose Focal-SAM, which aims at class-wise SAM so that flatter minima are found. They first show that ImbSAM, which applies SAM only to tail classes, can increase sharpness in the head classes. They then show that CC-SAM can be computationally expensive due to one backpropagation per class. The proposed Focal-SAM performs class-wise SAM, however, without class-wise perturbations (epsilon) for SAM. That is, they have one perturbation for all classes, which requires only one backpropagation for all classes. For each class, they first calculate L(w+epsilon) - L(w). Then, for the Focal-SAM loss, they weight each class loss with (the weight component of) the focal loss, which is then maximized over epsilon. The overall loss is the original loss plus the Focal-SAM loss. They discuss that only 3 backpropagations are needed. For empirical evaluation, they use 4 datasets and compare with 3 existing SAM-based techniques, combined with 3 methods for imbalanced data. Empirical results indicate that the proposed method compares favorably. ## update after rebuttal After reading and responding to the authors' rebuttal, I decided to maintain my rating. Claims And Evidence: The empirical results indicate that Focal-SAM can achieve higher accuracy with less computation. Methods And Evaluation Criteria: The proposed method is based on insights into the limitations of ImbSAM and CC-SAM. By reducing sharpness for all classes and the number of backpropagations, Focal-SAM can outperform ImbSAM and CC-SAM. Focal loss is an existing method. The evaluation criteria are reasonable. Theoretical Claims: Due to not being familiar with some of the terms, I did not check the theoretical claims and proofs in the appendix. Experimental Designs Or Analyses: Due to not being familiar with some of the terms, I did not check the theoretical claims and proofs in the appendix.
Supplementary Material: I quickly reviewed Parts B (backpropagation) and D (results). Relation To Broader Scientific Literature: The key contributions are limited to SAM-based methods for long-tailed classification. Essential References Not Discussed: I am not aware of essential references related to SAM within the imbalanced context that are not discussed. Other Strengths And Weaknesses: The proposed method on class-wise SAM with fewer backpropagations is interesting. The empirical results indicate performance improvements in accuracy and speed. The paper is generally well written. Other Comments Or Suggestions: see questions below Questions For Authors: 1. On computational overhead, since a key difference is one perturbation (epsilon) for all classes (Focal-SAM) vs. one for each class (CC-SAM), how different are the perturbations between the two methods? 2. Eq 4: L(w + epsilon) - L(w). The second term does not seem to be in ImbSAM or CC-SAM. Can you provide some discussion on the difference? 3. Figure 4: Gamma seems to be quite small (close to zero) for the highest accuracy. That is, the focal component seems to be an insignificant contributor. Any further insights on why? 4. Figure 3: A lower $\pi_i$ has a higher $(1-\pi_i)^\gamma$, which implies a higher $(1-\pi_i)^\gamma$ (x-axis) has a lower probability density (y-axis). However, this is not the case in the figure? Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Q1: On computational overhead, how different are the perturbations between the two methods? **A1:** Thanks for your question! The perturbation in Focal-SAM is computed as: $$ \hat{\epsilon}(w) = \rho \frac{\nabla_w L_S^\gamma(w)}{|| \nabla_w L_S^\gamma(w) ||_2} $$ where $L_S^\gamma(w) = \sum_{i=1}^C (1 - \pi_i)^\gamma L_S^i(w)$. In contrast, CC-SAM computes a class-specific perturbation for each class $i$ as: $$ \hat{\epsilon}_i(w) = \rho_i^* \frac{\nabla_w L_S^i(w)}{|| \nabla_w L_S^i(w) ||_2} $$ **In terms of computational overhead:** Each perturbation requires one backpropagation (BP) to compute the corresponding gradient: - Focal-SAM requires only **one BP**, to compute $\nabla_w L_S^\gamma(w)$. - CC-SAM requires **$C$ BPs** in total, one for each class, to compute $\nabla_w L_S^i(w)$. **The differences between the two perturbations are twofold:** - CC-SAM employs class-specific radii $\rho_i^*$ to finely control the sharpness penalty for each class. In contrast, Focal-SAM uses a unified perturbation radius $\rho$. - CC-SAM computes the gradient of each class-wise loss $L_S^i(w)$ individually, while Focal-SAM computes the gradient of the weighted loss $L_S^\gamma(w)$. If we **disregard the difference in scaling across perturbations**, *i.e.*, assume: $$ \frac{\rho}{|| \nabla_w L_S^\gamma(w) ||_2} = \frac{\rho_i^*}{|| \nabla_w L_S^i(w) ||_2}, \forall i $$ then the Focal-SAM perturbation can be rewritten as: $$ \hat{\epsilon}(w) = \sum_{i=1}^C (1 - \pi_i)^\gamma \hat{\epsilon}_i(w) $$ This suggests that the perturbation in Focal-SAM can be viewed as a **weighted aggregation of the class-wise perturbations in CC-SAM**. Therefore, Focal-SAM implicitly considers the contribution of all classes and adjusts the emphasis across classes by tuning $\gamma$. > Q2: The second term in Eq.4 does not seem to be in ImbSAM or CC-SAM. **A2:** Thank you for your questions!
In fact, we can **reformulate the objective functions of ImbSAM and CC-SAM to make the second term, $L(w)$, explicitly appear**. Furthermore, we can rewrite the objectives using the class-wise sharpness term (Eq.4). For ImbSAM: $$ L_S^{IS}(w) = L_S^H(w) + L_S^T(w) + \max _{|| \epsilon ||_2 \le \rho} [L_S^T(w+\epsilon) - L_S^T(w)] = L_S(w) + \max _{|| \epsilon ||_2 \le \rho} \sum _{i \in T} \tilde{L}_S^i(w, \epsilon) \quad (1) $$ For CC-SAM: $$ L_S^{CS}(w) = \sum_{i=1}^C \frac{1}{\pi_i} \cdot L_S^i(w) + \sum_{i=1}^C \max_{|| \epsilon || \le \rho_i^*} \frac{1}{\pi_i} \cdot [L_S^i(w + \epsilon) - L_S^i(w)] = \sum_{i=1}^C \frac{1}{\pi_i} \cdot L_S^i(w) + \sum_{i=1}^C \max_{|| \epsilon ||_2 \le \rho_i^*} \frac{1}{\pi_i} \cdot \tilde{L}_S^i(w, \epsilon) \quad (2) $$ Eq.(1) shows that ImbSAM's objective includes the class-wise sharpness only for tail classes. This design specifically focuses on flattening the loss landscape for tail classes but may lead to poor generalization for head classes. Eq.(2) shows that CC-SAM considers the class-wise sharpness for all classes, using class-specific perturbation radii $\rho_i^*$. This allows it to more effectively flatten the loss landscape for both head and tail classes. > Q3: Fig.4: Gamma seems to be quite small (close to zero) for the highest accuracy. **A3:** Thanks for your questions! The reason is that **even a small $\gamma$ is sufficient to create a relatively skewed** $(1 - \pi_i)^\gamma$ from head to tail classes. As stated in the paper, we assume that $\pi_1 \ge \pi_2 \ge \cdots \ge \pi_C$. For example, in the CIFAR-10 LT dataset ($C = 10$ classes), when $\gamma = 0.8$ (the value at which the LA method achieves the best performance, as shown in Fig.4), we calculate that $(1 - \pi_{10})^\gamma$ is approximately $0.99$, whereas $(1 - \pi_1)^\gamma$ is around $0.66$. This indicates that even a relatively small $\gamma$ can result in a highly skewed $(1 - \pi_i)^\gamma$. 
Therefore, despite $\gamma$ being small, the focal component still plays a contributing role. > Q4: Figure 3: A lower $\pi_i$ has a higher $(1 - \pi_i)^\gamma$. However, this is not the case in the figure? **A4:** Thanks for your question! We apologize if our description of Fig.3 is unclear and potentially misleading. To clarify, Fig.3 presents the distribution of $(1 - \pi_i)^\gamma$ across various $\gamma$ and datasets. Specifically, for a given $\gamma$ and dataset, we plot the distribution of $\mathcal{P} = \\{ (1 - \pi_i)^\gamma \\} _ {i = 1} ^C$, where $C$ is the number of classes. Since the datasets **contain many tail classes with similarly small sample sizes** (*i.e.*, small $\pi_i$), the distribution **tends to peak at relatively large values of $(1 - \pi_i)^\gamma$**. Hence, Fig.3 differs from the scenario you expected, which would instead plot $\mathcal{P}' = \\{(1 - \pi_{y_n})^\gamma \\}_{n = 1}^N$, with $N$ being the number of samples.
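The head-versus-tail numbers quoted in A3 can be reproduced with a short script. This sketch assumes the standard exponential class-size profile for long-tailed benchmarks and an imbalance ratio of 100 for CIFAR-10 LT; the imbalance ratio is not stated in A3, so it is an assumption here:

```python
import numpy as np

C, n_max, IR, gamma = 10, 5000, 100, 0.8
# Exponential long-tailed profile n_i = n_max * IR^(-(i-1)/(C-1));
# IR = 100 is an assumed imbalance ratio, not stated in A3 above.
n = n_max * IR ** (-np.arange(C) / (C - 1))
pi = n / n.sum()            # class priors, pi_1 >= ... >= pi_C
w = (1.0 - pi) ** gamma     # focal weights per class
# w[0] (head) lands near 0.66 and w[-1] (tail) above 0.99, so even a
# small gamma yields a clearly skewed head-to-tail weighting.
```

Under these assumptions the head weight comes out around 0.66 and the tail weight above 0.99, matching the values cited in A3.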
Test-time Adaptation on Graphs via Adaptive Subgraph-based Selection and Regularized Prototypes
Accept (poster)
Summary: This paper finds that existing graph neural network methods suffer performance degradation when adapting to test-time domain shifts, and that current test-time adaptation methods mainly focus on Euclidean data and face challenges with label scarcity and knowledge utilization on graphs. The authors propose the ASSESS framework, which enables fine-grained and structure-aware selection of reliable test graphs, effectively balances prior knowledge from unknown training graphs and posterior information from unlabeled test graphs, and achieves significant improvement in test-time adaptation on graphs. ## update after rebuttal: The authors' rebuttal addresses most of my concerns, so I keep my score at "weak accept". Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes, the proof of Theorem 3.1 is correct and well-constructed. However, I notice that the authors make many assumptions in the paper. I think a brief discussion of how these assumptions affect the generality of the results would be helpful. Experimental Designs Or Analyses: Yes, the paper evaluates ASSESS on five datasets spanning different domains and compares ASSESS with a range of baselines. As far as I know, there are many different settings for TTA, such as online or offline, as well as batch size settings, and these should differ from source-free methods such as SHOT and RNA. I think the authors should provide a detailed explanation of their TTA settings and compare more TTA methods under the same settings. Supplementary Material: Yes, the proof of Theorem 3.1 and details of the datasets and baseline methods. Relation To Broader Scientific Literature: The key contributions of this paper extend graph representation learning, domain adaptation, self-supervised learning, and test-time adaptation. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths:** 1.
The paper addresses an under-explored problem: test-time adaptation on graph-structured data. 2. The proposed method, ASSESS, combines two innovative components: Adaptive Subgraph-based Selection (ASBS) and Regularized Prototype Supervision (RPS). The use of subgraph mutual information for adaptive thresholding and the construction of semantic prototypes are well-motivated ideas. 3. The paper is well-written and logically structured. **Weaknesses:** 1. The authors only compare methods related to the source-free setting in the experiments. I think the most advanced TTA methods should be compared under the graph setting. 2. The authors do not specify the settings of the TTA experiments, such as whether adaptation is online or offline, and whether the samples arrive as a stream. 3. The datasets (e.g., FRANKENSTEIN, Mutagenicity) focus on chemical and social graphs. Including other graph types (e.g., knowledge graphs, citation networks) would better demonstrate generalizability. Other Comments Or Suggestions: I think the authors should compare the entropy-based method and the subgraph-based method in the selection stage, highlighting the advantages of subgraph-based selection. Questions For Authors: I would like to know how many subgraphs the authors choose for the estimation, as well as how the adaptation time of ASSESS compares to other baseline methods. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments, and your support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your concerns and provide additional clarification. > Q1. However, I notice that the authors make lots of assumptions in the paper. I think a brief discussion of how these assumption affects the generality of the result would be helpful. **A1**. Thank you for your suggestion. Addition, multiplication, and the ReLU function can be shown to satisfy these assumptions, and our full neural network is a composition of these operations, so it satisfies them as well. Similar assumptions can be found in recent works [1], [2], [3], and [4]. [1] A convergence theory for deep learning via over-parameterization, ICML'19 [2] Stagewise training accelerates convergence of testing error over SGD, NeurIPS'19 [3] Gradient-free optimization of highly smooth functions: improved analysis and a new algorithm, JMLR'24 [4] Efficient active learning halfspaces with Tsybakov noise: A non-convex optimization approach, AISTATS'24 We will discuss this in the revised version of the manuscript. > Q2. As far as I know, there are many different settings for TTA, such as online or offline, as well as batch size settings, and they should be different from source-free methods such as SHOT and RNA. I think the authors should provide a detailed explanation of their TTA settings and compare more TTA methods under the same settings. **A2**. Thank you for your suggestion. We follow the existing work [5] to adopt the offline setting of TTA. We compare additional TTA methods under the offline setting, and the results show that our method achieves better accuracy.
|Methods|PROTEINS|IMDB-BINARY|
|-|-|-|
|ASSESS (ours)|78.9|67.8|
|T3A [6]|69.3|64.3|
|GAPGC [7]|71.9|65.1|
|MATCHA [8]|73.2|65.8|

[5] Test-Time Training for Graph Neural Networks, arXiv'22
[6] Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization, NeurIPS'21
[7] GraphTTA: Test Time Adaptation on Graph Neural Networks, ICML'22
[8] Matcha: Mitigating Graph Structure Shifts with Test-Time Adaptation, ICLR'25

> Q3. The author only compared the methods related to source-free setting in the experimental comparison. I think the most advanced TTA methods should be compared under the graph setting.

**A3**. Thank you for your comment. We evaluate additional TTA methods under the offline graph setting, and the results are shown in A2. We will add these in the revised manuscript.

> Q4. The datasets (e.g., FRANKENSTEIN, Mutagenicity) focus on chemical and social graphs. Including other graph types (e.g., knowledge graphs, citation networks) would better demonstrate generalizability.

**A4**. Thank you for your comment. We provide an additional citation network dataset (DBLP_v1), and the results are shown as follows:

|Methods|ASSESS (ours)|T3A|GAPGC|MATCHA|
|-|-|-|-|-|
|DBLP_v1|89.4|83.1|84.6|86.7|

> Q5. I think the author should use the entropy-based method and the subgraph-based method to compare the results during the selection stage, highlighting the advantages of subgraph-based selection.

**A5**. Thank you for your suggestion. We compare our ASBS with entropy-based selection, and the results below show that our method outperforms entropy-based selection.

|Selection Strategies|PROTEINS|IMDB-BINARY|
|-|-|-|
|ASBS|78.9|67.8|
|Entropy|71.8|64.7|

> Q6. I would like to know how many subgraphs the author will choose for estimation calculation, as well as how the adaptation time of ASSESS compares to other baseline methods.

**A6**. Thank you for your comment.
We use one subgraph for the estimation, and rely on temporal ensembling (as mentioned in Lines 192-197, around Eq. 5) to obtain a stable estimate of the mutual information. As for the adaptation time, we want to emphasize that GNNs are relatively lightweight, and computation is very fast. We compare the adaptation time of ASSESS with baselines below, and the results show that our computation time is comparable to other methods.

|Methods|ASSESS (ours)|RNA|MATCHA|
|-|-|-|-|
|FRANKENSTEIN|2.1s|2.5s|2.0s|
|Mutagenicity|1.7s|2.0s|1.5s|
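The temporal-ensembling step mentioned in A6, which stabilizes the single-subgraph mutual-information estimate, can be sketched as a bias-corrected exponential moving average. This is a generic illustration of the idea; the exact update rule used in Eq. 5 of the paper may differ:

```python
def temporal_ensemble(estimates, momentum=0.9):
    """Smooth noisy per-step estimates (e.g., single-subgraph MI values)
    with an exponential moving average plus startup-bias correction."""
    ema, smoothed = 0.0, []
    for t, e in enumerate(estimates, start=1):
        ema = momentum * ema + (1.0 - momentum) * e
        smoothed.append(ema / (1.0 - momentum ** t))  # correct cold-start bias
    return smoothed
```

A constant input sequence is reproduced exactly thanks to the bias correction, while the oscillation of a noisy alternating sequence is strongly damped, which is the stabilizing effect described above.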
Summary: This paper investigates the problem of test-time adaptation on graphs, addressing the challenge of adapting a pre-trained graph neural network (GNN) to unseen test data without access to the original training set. The authors propose ASSESS (Adaptive Subgraph-based SElection and Regularized Prototype SuperviSion), a novel method that combines graph selection and prototype regularization to enhance adaptation performance under distribution shifts. The paper also presents a rigorous theoretical analysis of ASSESS. Empirical evaluations on five diverse graph datasets confirm the superior performance of ASSESS over state-of-the-art baselines, with ablation studies further highlighting the contributions of its components. Claims And Evidence: All the claims are supported by theoretical or experimental evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. I have checked the theoretical claims in Section 3, including the proof in the appendix. Experimental Designs Or Analyses: Yes. I have checked the experimental designs, the use of datasets, ablation studies, etc. Supplementary Material: Yes. I have reviewed all parts of the supplementary material. Relation To Broader Scientific Literature: As graph neural networks are widely used in processing non-Euclidean data, such as proteins or social networks, their performance under test-time distribution shifts is an important aspect of the robustness of many algorithms. Essential References Not Discussed: This paper could benefit from a discussion of other works on graph transfer learning, e.g. [1][2]. [1] Structural Re-weighting Improves Graph Domain Adaptation (ICML'23) [2] Learning invariant graph representations for out-of-distribution generalization (NeurIPS'22) Other Strengths And Weaknesses: Strengths: 1. The paper provides a well-structured and comprehensive theoretical analysis of the proposed ASSESS algorithm, demonstrating its convergence properties under test-time adaptation settings.
The analysis is grounded in established optimization assumptions, such as the Polyak-Łojasiewicz condition and the Tsybakov noise condition, providing a strong mathematical foundation for the method's effectiveness.
2. The proposed method is evaluated on five real-world graph datasets, each representing a different domain. The authors compare their approach against a diverse set of baselines, including graph neural networks, unsupervised/semi-supervised training methods, and test-time adaptation algorithms. The results consistently show improvements in classification accuracy.
3. The ablation results provide strong empirical evidence that both components are crucial for achieving state-of-the-art performance in test-time adaptation on graphs.

Weaknesses:
1. While the paper focuses on test-time adaptation, it does not explicitly discuss connections to related topics such as graph domain adaptation and out-of-distribution (OOD) generalization. Since ASSESS deals with distribution shifts at test time, its relationship to existing methods in transfer learning should be discussed.
2. The authors introduce a temporal ensembling strategy to stabilize the estimation of mutual information in the adaptive subgraph selection process (Eq. 5). However, the motivation for why this specific technique was chosen is not clearly explained.
3. The paper does not specify whether the selected datasets cover both homophilic and heterophilic graphs. Homophilic graphs exhibit strong intra-class connectivity, while heterophilic graphs contain nodes that connect across different classes. Since GNN performance varies significantly based on graph structure, it is important to explicitly discuss the structural diversity of the datasets.
4. The paper presents inconsistent descriptions of how the ASBS and RPS components interact within the overall adaptation framework.
Figure 1 suggests that ASBS and RPS operate in parallel, implying that reliable test graphs are selected and then immediately used for self-training. However, Section 3.4 and the provided loss formulation (Eq. 17) indicate an iterative execution, where ASBS first refines the test graph selection over multiple steps before RPS is applied.

Other Comments Or Suggestions:
1. The format of some equations could be improved (e.g., Eqs. 12-15).
2. In Line 223, "large" should be "larger".
3. In Line 253, "closed form" should be "closed-form".

Questions For Authors: Please refer to the weakness section. I may change my score based on the authors' feedback regarding the weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We are truly grateful for the time you have taken to review our paper, your insightful comments, and your support. Your positive feedback is incredibly encouraging for us! In the following response, we would like to address your concerns and provide additional clarification.

> Q1. While the paper focuses on test-time adaptation, it does not explicitly discuss connections to related topics such as graph domain adaptation and out-of-distribution (OOD) generalization. Since ASSESS deals with distribution shifts at test time, its relationship to existing methods in transfer learning should be discussed.

**A1**. Thank you for your suggestion. We will add discussions of these related topics in the revised version. Here we provide a brief comparison. TTA aims to adapt a well-trained model without accessing the source data. Domain adaptation allows access to source data, which is different from TTA. OOD generalization focuses on building generalizable models during training, whereas TTA focuses on the test phase.

> Q2. The authors introduce a temporal ensembling strategy to stabilize the estimation of mutual information in the adaptive subgraph selection process (Eq. 5). However, the motivation for why this specific technique was chosen is not clearly explained.

**A2**. Thanks for your comment. It is costly to sample many subgraphs for computing the mutual information estimate, while a lack of samples may lead to unstable MI estimation. Therefore, we use the temporal ensembling technique to stabilize the estimation of mutual information.

> Q3. The paper does not specify whether the selected datasets cover both homophilic and heterophilic graphs. Homophilic graphs exhibit strong intra-class connectivity, while heterophilic graphs contain nodes that connect across different classes.
> Since GNN performance varies significantly based on graph structure, it is important to explicitly discuss the structural diversity of the datasets.

**A3**. Thanks for your comment. The adopted datasets contain both homophilic and heterophilic graphs; we report the homophily ratio of these datasets as follows:

|Datasets|FRAN|MUTA|PROTEINS|NCI1|IMDB-BINARY|
|-|-|-|-|-|-|
|Homophily Ratio|0.04|0.37|0.92|0.62|0.67|

> Q4. The paper presents inconsistent descriptions of how the ASBS and RPS components interact within the overall adaptation framework. Figure 1 suggests that ASBS and RPS operate in parallel, implying that reliable test graphs are selected and then immediately used for self-training. However, Section 3.4 and the provided loss formulation (Eq. 17) indicate an iterative execution, where ASBS first refines the test graph selection over multiple steps before RPS is applied.

**A4**. Thanks for the comment. Selection and adaptation are performed iteratively. In each iteration, we select reliable test graphs using ASBS and then adapt the model using RPS. ASBS is integrated into the overall optimization process: the optimization computes the losses (including the MI loss), and the MI loss in turn guides the reliable graph selection in ASBS.

**Other typos**:
> The format of some equations could be improved (e.g. Eq 12-15).
> In Line 223, "large" should be "larger".
> In Line 253, "closed form" should be "closed-form".

Thank you for your careful review! We will correct these in the revised version.
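As a supplement to A3: the ratios reported there follow the standard edge-homophily definition, i.e., the fraction of edges whose two endpoints share a class label. A minimal sketch of the computation (the edge-list input format is an assumption, not the paper's code):

```python
def edge_homophily(edges, labels):
    """Edge homophily ratio: fraction of edges joining same-label nodes.

    `edges` is an iterable of (u, v) node-index pairs; `labels` maps a
    node index to its class label. Illustrative helper only.
    """
    edges = list(edges)
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)
```

Values near 1 (e.g., PROTEINS at 0.92) indicate strongly homophilic graphs, while values near 0 (e.g., FRANKENSTEIN at 0.04) indicate heterophilic ones.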
Summary: This paper proposes an algorithm, ASSESS, that handles graph test-time domain adaptation for graph-level tasks. The algorithm mainly contains two components: adaptive subgraph-based selection and regularized prototype supervision. The subgraph-based selection selects reliable test graphs by setting an individual threshold per graph using the mutual information between the subgraphs. Then, with the selected set of reliable test graphs, they first construct the prior prototypes from the weight matrix of the pretrained model as prior knowledge. The prototypes and the model are then updated with a self-supervised objective that matches each prototype with the corresponding class embeddings of the test graphs, under a regularization that keeps the updated prototypes close to the prior prototypes. This paper also provides a convergence analysis of the algorithm and demonstrates superior performance compared to GNN baselines and test-time adaptation baselines.

Claims And Evidence:
**Strengths:**
- Test-time adaptation on graphs, specifically for graph-level tasks, is an under-explored direction that is worth more investigation and understanding.
- The selected datasets span a wide range of real-world applications.

**Weaknesses:**
- Lack of illustration and more concrete motivation behind the unique challenges that appear for graph test-time adaptation in the introduction. For instance, what can be the challenges of test-time adaptation under structural shifts within these datasets, and why can they not be addressed by previous algorithms? The second challenge seems to be non-unique to graph data.

Methods And Evaluation Criteria:
**Strengths:**
- The proposed method has clear objectives for the two modules.
- The methodology is written clearly and is easy to read.

**Weaknesses:**
- It could be better if the algorithm design were motivated in a more theoretical manner.
For instance, why, in principle, is checking the mutual information between subgraphs and the whole graph a good indicator for selecting reliable test graphs? Also, is random selection of subgraphs to calculate mutual information the best way?
- The design of regularized prototype supervision is largely independent of the graph structure and could also be applied to Euclidean data. Also, the idea of prototype or template adjustment [1] has been used in previous literature, which limits the novelty.

[1] Iwasawa, Yusuke, and Yutaka Matsuo. "Test-time classifier adjustment module for model-agnostic domain generalization." Advances in Neural Information Processing Systems 34 (2021): 2427-2440.

Theoretical Claims:
- The theoretical analysis mainly targets convergence, while I expect some more analysis of the rationale of the algorithm, as mentioned above.
- Some assumptions of the theorems might be a bit oversimplified, e.g., modeling the distribution of the unlabeled test graphs as a mixture of two distributions.

Experimental Designs Or Analyses:
**Strengths:**
- The datasets span a wide range of fields.
- There is an ablation study investigating the impact of different components.

**Weaknesses/Questions:**
- Lack of TTA and GTTA baselines: over half of the baselines are not designed for test-time adaptation, and there are only 2 non-graph TTA baselines and 1 graph TTA baseline.
- The performances of different variants of the algorithm are very close, even within one std. Does that imply that there is no clear and significant difference in the contributions of the different components of ASSESS?

Supplementary Material: Briefly went through the algorithm and baseline sections.

Relation To Broader Scientific Literature: This paper can draw more attention to the graph test-time adaptation problem.
Essential References Not Discussed: More discussion of graph test-time adaptation is needed in the related work section, in addition to test-time adaptation for Euclidean data.

Other Strengths And Weaknesses: Please refer to the above sections.

Other Comments Or Suggestions: It might be better to illustrate more details of the pipeline design in Figure 1.

Questions For Authors: Please refer to the above sections.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We are truly grateful for the time you have taken to review our paper and for your insightful review. We address your comments below.

> Q1. Lack of illustration and more concrete motivation behind the unique challenges that appear for graph TTA in the introduction. For instance, what can be the challenges of TTA under structural shifts within these datasets and why they cannot be addressed by previous algorithms. The second challenge seems to be non-unique to graph data.

**A1**. Thanks for your comment. Shifts in both node attributes and graph structures deteriorate the performance of GNNs according to [A]. Previous works often select reliable samples for self-training using fixed thresholds shared across all graphs, which is not flexible enough to handle complex distribution shifts (in both attributes and structures). As for the second part, our paper focuses on graph TTA. Compared with traditional TTA, the complexity of graph-level data makes the problem challenging. We will try to extend our method to other types of data and leave this to future work.

[A] Matcha: Mitigating Graph Structure Shifts with Test-Time Adaptation. ICLR'25

> Q2. It could be better if we motivate the algorithm design in a more theoretical manner. For instance, why in principle checking the mutual information of subgraphs ... Also, is the random selection of subgraph to calculate mutual information the best way?

**A2**. Thanks for your comment. A high MI between a graph and its subgraphs indicates that the graph representation encodes information shared across its subgraphs; in other words, the encoder is able to handle the inherent structure of this graph. To validate this, we compare our method with other variants and show that MI achieves the best performance.

|Indicators|PROTEINS|IMDB-BINARY|
|-|-|-|
|MI|78.9|67.8|
|Entropy|71.8|64.7|
|Confidence|69.5|64.5|

Random selection is simple yet effective here.
To validate this, we provide empirical results below (compared to edge perturbation, i.e., removing and adding edges with probability $p$).

|Strategies|PROTEINS|IMDB-BINARY|
|-|-|-|
|Random Subgraph|78.9|67.8|
|Edge perturbation, $p=0.1$|76.3|66.7|
|Edge perturbation, $p=0.2$|75.7|66.1|
|Edge perturbation, $p=0.3$|74.2|65.4|

> Q3. The design of regularized prototype supervision is kind of independent to the graph structure and can also be applied to euclidean data. Also, the idea of similar prototype or template adjustment [1] has been used ...

**A3**. Thanks for your comment. This work focuses on graph data, but our regularized prototype supervision is a universal design, which is incorporated into our graph TTA method. We are committed to extending our design to other types of data and leave this to future work. Our work differs from T3A (Iwasawa et al.) in the following aspects:
* **Different Motivation**: T3A is motivated by the idea of a support set, whereas our ASSESS is motivated by Bayesian theory.
* **Different Methodology**: T3A maintains a support set for each class using the test samples and uses the support set for classification. By comparison, we use optimal transport to obtain the prototypes, which are used as supervision signals with regularization from the prior.
* **Different Scenario**: T3A focuses on domain generalization for images, whereas we focus on test-time adaptation for graphs.

Moreover, the empirical results in A6 show that our method outperforms T3A.

> Q4. I expect some more analysis in terms of the rationales of the algorithm as mentioned above.

**A4**. Thanks for your comment. The overall rationale is to first select (ASBS, as discussed in A2) and then supervise (RPS, A3). Due to limited length, please refer to the previous answers.

> Q5. There are assumptions of the theorem that might be a bit oversimplified, e.g. the distribution of the test unlabeled graphs as the mixture of two distributions.

**A5**. Thanks for your comment.
Without loss of generality, we consider a mixture of two distributions; the analysis can easily be generalized to mixtures of more than two distributions.

> Q6. Lack of TTA and GTTA baselines ...

**A6**. Thanks for your comment. We add additional TTA (T3A) and GTTA (GAPGC, MATCHA) baselines as follows.

|Methods|PROTEINS|IMDB-BINARY|
|-|-|-|
|ASSESS (ours)|78.9|67.8|
|T3A|69.3|64.3|
|GAPGC|71.9|65.1|
|MATCHA|73.2|65.8|

> Q7. The performance over different variants of the algorithms are very close, even within one std. ...

**A7**. Thanks for your comment. We provide additional ablation studies on other datasets as follows, and we find a clear overall improvement on these datasets.

|Methods|PROTEINS|IMDB-BINARY|
|-|-|-|
|ASSESS|78.9 $\pm$ 3.8|67.8 $\pm$ 2.9|
|w/o ASBS|66.7 $\pm$ 3.5|63.9 $\pm$ 2.6|
|w/o RPS-a|69.2 $\pm$ 3.9|64.1 $\pm$ 2.9|
|w/o RPS-b|70.9 $\pm$ 4.0|64.2 $\pm$ 2.8|

**Others**: Related works and Figure 1. Thanks for your comment. We will add more related works (e.g., GAPGC, MATCHA) and more details in Figure 1.

---

Rebuttal Comment 1.1:

Comment: Thank you for the rebuttal; it addressed some of my concerns, so I raise my score to 3.
Rethinking External Slow-Thinking: From Snowball Errors to Probability of Correct Reasoning
Accept (poster)
Summary: The paper provides a theoretical analysis of the snowball error in reasoning models with an external thinking mechanism. The paper unveils some interesting properties of reasoning models. However, the strong assumptions make most results somewhat obvious and/or unhelpful.

Claims And Evidence:
- The setting in the paper is oversimplified, leading to loose bounds and somewhat obvious/unhelpful results.
- The authors did not discuss the possibility that the thoughts are incorrect. They only analyzed the errors in transforming thoughts into reasoning tokens. The paper, therefore, provides an incomplete picture of the problem (and a complete analysis is what I expected after reading the introduction; IMHO, the authors should rewrite parts of the paper to better reflect the contributions).
- The facts that thoughts and reasoning steps are generated sequentially and that one mistake in the sequence can lead to a wrong output were not discussed and are ignored in the formulation. In the paper, the accumulated error is simply the sum of errors at all steps.
- The gap between the true error and the upper bound in Theorem 3.3 is too big (Fig. 2). This could be a consequence of the weakness mentioned in the previous point.
- Due to the strong assumption on $P(e)$, Theorem 4.4 is not very informative. The probability of correct answers decays too fast, at the rate of $\Theta(e^{-L^2})$. This goes against empirical evidence where models can still generate long correct answers with decent probability.

Post-rebuttal comments: I raised the score to 3.

Methods And Evaluation Criteria:
- This is a theory paper, so the experiments are for demonstration purposes. The presented experiments are good, but some more experiments are needed to validate the theory. For example, experiments for Theorem 4.4 and Lemmas 5.x are missing.

Theoretical Claims: As stated in the Claims section, the theory is based on strong assumptions, leading to somewhat obvious and unhelpful conclusions.
Experimental Designs Or Analyses: Yes.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: The paper can make a good contribution to the literature if the weaknesses are addressed. Understanding the mechanisms of reasoning is important, as it can lead to more effective and efficient reasoning methods.

Essential References Not Discussed: No

Other Strengths And Weaknesses: No.

Other Comments Or Suggestions: No.

Questions For Authors: Please address the weaknesses mentioned in the previous sections.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely appreciate your valuable feedback. Below are our responses to the concerns raised.

## About the Probability that the Thoughts are Incorrect

Our theoretical analysis hinges on the framework presented in Fig. 1, where we model the LLM reasoning process as a *planning-execution* mechanism. Given a question, the LLM first plans potential reasoning paths, generating implicit thoughts ($t$), before executing the tasks in these thoughts to produce responses ($r$). We primarily focus on error propagation in the *execution* phase, which is justified because **correct planning is a prerequisite for correct reasoning**—planning errors inherently lead to incorrect results. **However, empirical evidence suggests that even highly capable models can still commit errors during execution**, which is our primary analytical focus. We will improve presentation clarity in the revision.

## About the Formulation of Intermediate Error

We explicitly recognize that *a single mistake in the sequence can lead to incorrect output*, which is naturally included in our setting. Our formulation captures how snowball errors accumulate across reasoning steps ($l$), ultimately exceeding a threshold and causing reasoning failure. **In other words, snowball errors accumulate incrementally, while reasoning errors directly emerge from excessive accumulation.**

Our formulation ***does*** consider the influence of prior steps. **Step $l$ in our framework accounts for prior influences through prior snowball errors (or information loss)**, as defined in Definition 2.2. Furthermore, Theorem 3.3 evaluates the reasoning error probability at each step $l$, **explicitly linking it to accumulated errors from previous steps**. Thus, **our analysis operates at a step-level granularity, not merely aggregating errors over all steps.**

## About the Results of Theorem 3.3

Theorem 3.3 actually establishes a ***lower bound*** for the reasoning error probability $e_l$ at step $l$.
As snowball errors accumulate with increasing $l$, the error probability rises accordingly. This result is empirically verified by Fig. 2 from two aspects:
- Mutual information decreases (the snowball error increases) with $l$ (Definitions 2.1 & 2.2).
- Response quality degrades with decreased MI.

Additional experimental verification is available in our anonymous repository: https://anonymous.4open.science/r/extra-experiments-0E75/README.md; we kindly refer you to the ***Analysis by Difficulty*** section, and also to our discussion with reviewer Qdus (section "The Snowball Error and the Length"). We will incorporate these supplemental results in our revision.

## About the Results of Lemma 4.4

We understand your concerns may be about ***Lemma 4.4*** and will respond from the following two aspects:

1) **The assumptions.** Proposition 4.3 facilitates clarity and the subsequent analyses (Sec. 4.2, line 214). Our results generalize to any scenario where $\operatorname{Pr}\left[|\phi(r_l)-\phi(r_l^*)| \leq \tau\right]$ decreases monotonically with $l$. We kindly refer you to our discussion with reviewer yYwM (section "Lack of Empirical Verification for Proposition 4.3").

2) **Further illustration for Lemma 4.4.** The "$\leq$" in Lemma 4.4 originates from the operator "$\operatorname{min}$" in Proposition 4.3, where the single-step correct reasoning probability is controlled by a constant $\lambda_\tau$. While initially loose for small $l$, **the bound tightens as $l$ increases**, especially when $\lambda_\tau e^{-l} < 1$, **reflecting the rapidly increasing error probabilities in later reasoning steps.**

## Experiments for Theorem 4.6

We take Theorem 4.6, which is a main result in our paper (Lemmas 5.x are its derivations), for the subsequent response. Theorem 4.6 relates reasoning cost ($k$), length ($L$), and value function reliability ($\epsilon_b$) to the upper bound of the correct reasoning probability.
**For $L$:** The impact of reasoning length has been implicitly verified through:
- the ***Analysis by Difficulty*** section in our repository;
- the Fig. 2 results.

This evidence confirms that longer paths increase errors.

**For $k$ and $\epsilon_b$:** We empirically validate the impact of $k$ through RAP-like MCTS experiments [1]; see the section ***Influence of $k$*** in our repository. The results show improved reasoning accuracy with increased $k$. The effect of $\epsilon_b$ manifests in PrOntoQA's faster accuracy improvement versus GSM8k, due to its simpler nature. Besides, many external slow-thinking methods (listed in our related works) have empirically verified that increasing the cost at inference time can improve the model's reasoning capability, a.k.a. ***test-time scaling laws***.

[1] AlphaZero-like tree-search can guide large language model decoding and training (ICML 2024).

While simplifying CoT formulations, our work provides essential foundations for execution-error analysis—a crucial first step acknowledged by other reviewers. We appreciate your recognition of this focused contribution.

---

Rebuttal Comment 1.1:

Comment: Thank you for your rebuttal. Although the rebuttal provided some clarifications, I still have concerns regarding the assumptions. I think it is hard to get tighter bounds or generalize the results to more realistic settings with the current assumptions. If the authors could outline a plan to improve the paper in the future, I will raise my score.

---

Reply to Comment 1.1.1:

Comment: We sincerely appreciate your comments and valuable suggestions. We fully acknowledge the inherent challenges in formalizing CoT reasoning given its complexity and abstract nature in LLMs.
As Reviewer NREt noted, existing research primarily focuses on empirical evaluations, which cannot *formally explain why slow-thinking methods work or when they fail.* Our work represents a concerted effort to ***"bridge the gap between the empirical success of slow-thinking methods and their theoretical understanding".***

We agree that achieving fully explainable LLM reasoning remains an ambitious long-term goal for the research community. **While our current work provides foundational theoretical insights, we recognize it as an initial yet significant step toward this broader vision.** We anticipate that subsequent research—both our ongoing studies and future work by the community—will refine and expand upon these foundations. Ultimately, this iterative progress will lead us closer to truly interpretable reasoning in LLMs.

In response to the reviewers' suggestions, we propose concrete plans for both **immediate improvements** to our current manuscript and **future research** directions:

**(1) Immediate Improvements to Our Paper**

> Plan 1: Improving the Validity of Theoretical Assumptions

As in our response to Reviewer yYwM, we can relax Proposition 4.3's conditions to require only a monotonic decrease rather than the original exponential form. This relaxation maintains theoretical robustness while yielding more general results: the upper bounds of Lemma 4.4 and Theorem 4.6 will be derived as $\xi^L(l,\tau)$ and $\epsilon_b^L k^L \xi^L(l,\tau)$ respectively, where $\xi^L(l,\tau):=\prod_{l=1}^{L}\xi(l,\tau)$. This modification ensures broader applicability by weakening the assumptions on $P(e)$, addressing concerns regarding the original constraints. We will formally include this improvement in our revision.

> Plan 2: Clarifying the Sequential Reasoning Formulation

Critical to our formulation is the principle that *a single mistake in reasoning can lead to incorrect final outputs*—even if the final answer appears correct by coincidence.
Specifically, we implicitly assume that **any intermediate mistake compromises reasoning validity.** Building upon this idea, Secs. 2 & 3 analyze how the probability of error at each step relates to the reasoning depth $l$ (e.g., Theorem 3.3's lower bound). This result is designed to characterize the error probability of **every intermediate reasoning step**. In our revision, we will explicitly articulate this motivation and its implications, ensuring a clearer presentation.

> Plan 3: Incorporating Error Probability Analysis for Thoughts

Our theoretical framework conceptualizes reasoning as a planning-execution process, where correct planning (thought generation) is foundational for accurate reasoning. To further analyze error propagation, we will incorporate a detailed discussion of thought errors. Specifically, we will establish a sequential error propagation framework similar to [1]: $\forall l \leq L$, we will model the relationship between the thought error probability and factors such as model capability and reasoning depth $l$, i.e., $P_{err}(t_l) = \epsilon(\pi,l)$. This extension will enhance the practical validity and completeness of our theoretical framework.

[1] The Pitfalls of Next-Token Prediction. ICML 2024.

> Plan 4: Enhanced Analysis of Mutual Information (MI) Decay

To strengthen our discussion of MI decay (the assumption in Lemma 3.2), we will provide a more rigorous justification for why task-relevant MI decreases with reasoning depth $l$, and for how earlier steps influence the information loss in later steps. This premise has already been empirically supported by the results in Fig. 2. Theoretically, we argue that, since $r_l = \pi(r_{<l})$, if $\pi$ introduces no additional task-relevant information, the relevance of $r_l$ cannot exceed that of $r_{<l}$, leading to MI decay. A more detailed formulation will be included in the revision.
> Plan 5: Expanding Empirical Validation

Due to time and resource constraints during the rebuttal period, comprehensive empirical validation in real-world settings is difficult. However, in our revision, we plan to:
- Test our theoretical predictions across a broader range of hyperparameters (e.g., MCTS configurations).
- Validate the findings on additional search algorithms, benchmarks, and larger LLMs.

**(2) Future Research Directions**

1) **Reflection Mechanism.** We will investigate how reflection can act as an error-reset mechanism with potential trade-offs in efficiency.
2) **Implicit Reasoning.** Our results may extend to implicit reasoning by formalizing error propagation in looped structures.

Due to the character limit, we kindly refer you to our discussion with reviewer Qdus for more details. Your insightful advice is invaluable in shaping this work, and **we would be most grateful if you could consider raising your score for our work!**
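As a one-line formalization of the MI-decay premise in Plan 4 (a sketch only, under the assumption that $\pi$ injects no additional task-relevant information, so that $t \to r_{<l} \to r_l$ forms a Markov chain):

```latex
% Since the next response is a (stochastic) function of its prefix,
%   r_l = \pi(r_{<l}),
% the chain t -> r_{<l} -> r_l is Markov, and the data processing
% inequality gives the monotone decay of task-relevant information:
I(t;\, r_l) \;\le\; I(t;\, r_{<l}), \qquad \forall\, l \le L .
```

This is the standard data processing inequality, stated here only to make the Plan 4 argument explicit; the full formal treatment is deferred to the revision.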
Summary: This paper discusses the issue of understanding and improving external slow-thinking methods in LLM reasoning. The authors (1) propose a theoretical framework based on information theory that analyzes snowball errors and (2) connect them to the probability of reasoning errors in LLMs. The framework aims to explain why external slow-thinking works, quantify its limitations, and inform better designs of slow-thinking strategies over baseline methods. They provide experimental results to demonstrate how mutual information decays along reasoning steps and to validate the presence of snowball errors. They also offer a theoretical comparison of different external slow-thinking methods.

Claims And Evidence: Most of the paper's claims are supported by empirical evidence. The most important observations are that mutual information tends to decay along reasoning steps and that snowball errors can occur in LLM reasoning chains. I have concerns regarding some claims: (1) the general linkage between MI decay and reasoning errors. While MI decay may correlate with reasoning challenges, it is unclear that this directly causes errors, especially given that LLMs often exhibit self-correction abilities during generation. (2) the universality of snowball error dynamics across all slow-thinking methods. Best-of-N and MCTS have fundamentally different mechanisms for handling reasoning; simply applying a single information-theoretic framework may oversimplify these differences.

Methods And Evaluation Criteria: The authors use widely adopted benchmark datasets that are appropriate for evaluation.

Theoretical Claims: The idea of analyzing and improving external slow-thinking in LLM reasoning through an information-theoretic framework is novel and timely.
Most existing work focuses on empirical evaluations of prompting or decoding strategies, but these approaches cannot formally explain why slow-thinking methods work or when they fail, limiting our theoretical understanding and the principled design of such methods. This paper introduces a method based on mutual-information decay analysis to model snowball errors in reasoning, and significantly improves our understanding of the challenges and trade-offs in multi-step LLM reasoning. The workflow is well structured, as it connects theoretical modeling with empirical validation on standard reasoning benchmarks, making the analysis both grounded and actionable. This approach bridges the gap between the empirical success of slow-thinking methods and their theoretical understanding, and further provides guidance for designing more robust reasoning strategies in LLMs.

Experimental Designs Or Analyses: The experiments are extensive, with detailed analysis of the results. These experiments validate the effectiveness of the proposed information-theoretic framework in capturing mutual information decay and reasoning errors. The authors also demonstrate the method's robustness and generality across different sizes and complexities of reasoning datasets and baselines.

Supplementary Material: Appendix.

Relation To Broader Scientific Literature:
- LLM reasoning and CoT prompting: this paper offers a new theoretical perspective on error accumulation.
- Best-of-N and MCTS: this paper provides an information-theoretic framework to analyze their limitations.
- Error propagation: this paper formalizes such errors as snowball errors driven by mutual information decay.

Essential References Not Discussed: Not found.

Other Strengths And Weaknesses: See above.

Other Comments Or Suggestions: Adding a more detailed explanation of Figure 2, including how to interpret the MI curves in relation to reasoning quality, would improve clarity.

Questions For Authors: See above.

Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate the insightful and constructive feedback provided in your reviews. Below, we address each concern in detail. ## The General Linkage between MI Decay and Reasoning Errors We formalize the reasoning process of LLMs as a *planning-execution* framework. Given a question, as illustrated in Fig.1, an LLM first plans a potential reasoning path, thereby generating implicit thoughts ($t$). Subsequently, the LLM attempts to execute the tasks specified in these thoughts, producing the observable responses ($r$). In this work, our primary focus is analyzing the probability of correct reasoning, particularly examining error propagation in the *execution* phase under **external search**—where search algorithms are guided by external mechanisms (e.g., an auxiliary value model). By contrast, the "self-correction" capability (or reflection) primarily arises from the model’s **internal reasoning ability**. Specifically, the model autonomously detects potential errors and restarts the execution process from an earlier step. **The internal reflection mechanism constitutes the key distinction between external and internal slow-thinking methods.** - External slow-thinking methods rely on **width-expansion**, broadening the search space. - Internal slow-thinking methods leverage **reflection**, enabling recovery from intermediate states. We intend to conduct further analyses on the reflection mechanism in future work, where **it can be rigorously modeled as a form of error accumulation with inherent trade-offs in error detection**. Additional discussion on this topic can be found in our response to Reviewer Qdus (Section: Extension to AoT and AoT+). Additionally, extra experiments have been conducted to verify the relationship between the MI, accuracy and lengths. The results are presented through an anonymized repository (https://anonymous.4open.science/r/extra-experiments-0E75/README.md), the ***Analysis by Difficulty*** section. 
We also kindly refer you to our discussion with reviewer Qdus (section "The Snowball Error and the Length") for context on this additional verification. We thank you for your valuable suggestions and will enhance the clarity and comprehensiveness of our manuscript in the revised version. ## The Universality of Snowball Error Dynamics across All Slow-Thinking Methods As discussed in the previous section, while Best-of-N (BoN) and Monte Carlo Tree Search (MCTS) employ distinct search strategies, they share a fundamental similarity: **both expand the search space under external value guidance.** Consequently, they qualify as external slow-thinking methods, aligning with our analytical framework. From this perspective, any search strategy based on width expansion falls within the scope of our theoretical results. Moreover, we believe our findings can extend to other slow-thinking paradigms, including **internal slow-thinking** (reflection-based) and **implicit reasoning** methods. For a more detailed discussion, please refer to our response to Reviewer Qdus (Section: Extension for More Test-Time Scaling Methods). ## Detailed Explanation of Figure 2 We sincerely appreciate your thorough evaluation of Figure 2. Below, we briefly summarize the key experimental settings (additional details will be provided in the appendix in the revised version): 1) **Prompting & Inference:** - Models are prompted (see Appendix B.1) to generate step-by-step reasoning. - Observed answers ($r$) and implicit thoughts ($t$, derived from ground-truth rewrites) are collected. 2) **Mutual Information (MI) Estimation:** - Following prior work, we estimate the MI between $r$ and $t$. - Each question’s result is plotted as a blue dot in Figure 2. 3) **Reasoning Quality Evaluation:** - An outcome reward model assesses reasoning correctness. - The relationship between MI and reasoning quality is fitted to produce the final curve in Figure 2. 
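To make the MI estimation in step 2 concrete, the following is a minimal plug-in estimator for paired discrete samples (an illustrative sketch only; the estimator we actually use follows prior work and operates on model representations rather than raw discrete labels):

```python
from collections import Counter
import math

def mutual_information(xs, ys):
    """Plug-in MI estimate (in nats) between paired discrete samples."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))          # empirical joint counts
    px, py = Counter(xs), Counter(ys)   # empirical marginal counts
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), with probabilities = counts / n
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi
```

For perfectly aligned pairs this recovers the marginal entropy (e.g., $\ln 2$ for a balanced binary variable) and vanishes for independent pairs; MI decay along reasoning steps corresponds to this quantity shrinking as the step index grows.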
In the camera-ready version, we will elaborate on Appendix B.1 as per your suggestion to improve clarity. --- Rebuttal Comment 1.1: Comment: Thank you for clarifying. I will keep the score. --- Reply to Comment 1.1.1: Comment: We are deeply grateful for your steadfast support and valuable suggestions for our research.
Summary: This paper analyzes the potential snowball error effect that may occur during the reasoning process of Large Language Models (LLMs), and connects it to the probability of correct reasoning using information theory. Within this theoretical framework, external slow-thinking methods can be interpreted as strategies for reducing the probability of errors. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: In fact, I have thoroughly studied and read the work that this paper follows: "Understanding chain-of-thought in LLMs through information theory." This work proposes a theoretical framework for evaluating the correctness of each step in the chain-of-thought (CoT) using information theory. However, I personally believe that this work has two major issues: it has not yet been published, and it has not been widely validated and recognized. I would like to ask the authors whether they have conducted an analysis of the shortcomings of this theory and made improvements, or if they have directly followed the work as it is. Experimental Designs Or Analyses: The paper conducts experimental validation for its two proposed theoretical insights, making them both reliable and clear. Supplementary Material: The appendix is located after the main text, and the code has been open-sourced on the website. No additional supplementary materials are provided. Relation To Broader Scientific Literature: Please refer to the section on theoretical claims. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper provides a very clear introduction to the background and its proposed theoretical method. 2. The theoretical framework and contributions of the paper are substantial and meet the high standards expected for ICML submissions. Weaknesses: 1. The LaTeX formatting of the paper contains errors, and the wrong template appears to have been selected. 2. 
The choice of prior work to follow seems questionable; please refer to the section on theoretical claims for specific concerns. 3. The two main aspects of the paper do not seem to form a cohesive synergy. Specifically, I find that the connection between snowballing errors and overthinking is not very apparent. I would suggest that the authors provide a more intuitive explanation, rather than relying solely on theoretical arguments for acceptance. Other Comments Or Suggestions: N/A Questions For Authors: Please refer to the section on theoretical claims and Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback. Below, we address your concerns point by point. ## Regarding Prior Work [1] While our work is inspired by [1], and both employ information theory to analyze chain-of-thought (CoT) reasoning, **there are significant distinctions between our approaches.** The primary focus of [1] aligns with process reward models (PRMs) [2], which seek to evaluate the quality of intermediate reasoning steps. To achieve this, [1] leverages information-theoretic metrics to estimate information gain as supervisory signals. **In contrast, our work aims to provide a theoretical foundation for understanding the *mechanism* of multi-step reasoning in LLMs.** We turn to the earlier peer reviews of [1] for further discussion. The major concerns raised in [1]’s ICLR 2025 rebuttal (https://openreview.net/forum?id=ouRX6A8RQJ) primarily revolve around experimental design, whereas its theoretical formulation was generally praised. **From the theoretical perspective, our paper is not a direct extension of [1]**; a potential overlap lies in modeling CoT as a task-execution process, but our formulations differ substantially. Specifically, we overcome several limitations of [1]: - [1] relies on complex concepts (e.g., primitive tasks) and assumes identifiable tasks as combinations thereof. In contrast, we formulate reasoning as a transparent planning-execution process, bridging implicit thoughts to observable responses. - [1] introduces additional assumptions (e.g., Bayesian networks) to model reasoning errors, whereas our framework is derived from a simpler formulation and information-theoretic foundations (e.g., Fano’s inequality), culminating in Theorem 3.3, which establishes a clear lower bound with fewer assumptions. **In summary, our work diverges from [1] in scope, reasoning formulation, and error modeling. 
Although both studies build upon information theory, we argue that they contribute distinct perspectives.** We hope this clarification enhances the understanding of our paper’s novel contributions. [1] Ton, J. F., Taufiq, M. F., & Liu, Y. (2024). Understanding Chain-of-Thought in LLMs through Information Theory. arXiv preprint arXiv:2411.11984. [2] Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., ... & Cobbe, K. (2023, May). Let's verify step by step. In The Twelfth International Conference on Learning Representations. ## Regarding LaTeX Formatting We confirm that our manuscript adheres to ICML 2025’s LaTeX template (https://icml.cc/Conferences/2025/AuthorInstructions), ensuring compliance with formatting guidelines. ## Clarifying the Paper’s Structure Thank you for your inquiry about the connection between the two aspects of our paper. Below, we provide a detailed explanation: Overall, the structure of our paper can be divided into 2 parts: 1) **Part 1 (Secs. 2–3):** We analyze "snowball errors" and their relationship with reasoning errors. Theorem 3.3 formalizes this, with $H_{<l}(t|r)$ quantifying error accumulation—analogous to a snowball growing in size. 2) **Part 2 (Secs. 4–6):** Leveraging reasoning error probabilities, we theoretically analyze scaling methods (Theorem 4.6, Table 1, Figure 3). As noted in Line 270, correct reasoning is framed probabilistically (generation + selection), with external strategies (e.g., width expansion) aligning with variables $k$ and $b$. **For Part 1:** The key theoretical insight (Theorem 3.3) formalizes "snowball errors"—minor inaccuracies in early reasoning steps that amplify over subsequent steps, analogous to how a snowball grows when rolled. This is quantified via conditional entropy $H_{<l}(t|r)$, which captures the compounding effect of errors. An intuitive analogy is provided in Sec.2 (~Line 104). **For Part 2:** Theorem 4.6, Table 1, and Figure 3 analyze test-time scaling methods. 
We frame correct reasoning probabilistically, decomposing it into generation (producing candidate solutions) and selection (identifying the optimal one), as elaborated in Line 270. External slow-thinking methods relying on width expansion align with our theoretical variables $k$ and $b$, offering a principled justification for their efficacy. For better illustration, we are happy to incorporate these intuitive explanations more prominently in the revised manuscript to improve clarity. **We also hope our response clears up any misunderstanding of our work and that you will reconsider our contributions!** --- Rebuttal Comment 1.1: Comment: Regarding the LaTeX Format: As far as I know, the ICML 2025 LaTeX template includes two versions: one for submission and another for the camera-ready version. By default, it uses the camera-ready format. It seems that your submission uses the camera-ready version. If I am mistaken, please feel free to correct me. Thank you. Regarding Prior Work: This submission received mostly positive scores in the ICLR review process, with a few negative ones. You mentioned that the main concerns centered around the experimental design, while the theoretical contributions were generally praised. However, this does not seem to be entirely accurate. While some positive reviews did indeed praise the theoretical aspects, they also criticized the experimental setup—specifically, the reliance on a single supervised model ($g$) to estimate mutual information. This limitation actually reflects a fundamental issue with the proposed method. In addition, many reviewers pointed out the questionable nature of the assumption regarding information gain. I believe this assumption is central to the use of information theory in addressing the Chain-of-Thought (CoT) problem. I would be interested to hear your thoughts on this assumption. I will carefully study your further responses and reply accordingly. 
If I have misunderstood anything, please do not hesitate to correct me. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback. We are glad to further address your two concerns as follows. ## Response to LaTeX Format Concern While it is correct that the template includes both a submission version and a camera-ready version, we have verified that our submission strictly adheres to the submission version. As specified in the template provided in ICML author instructions (lines 21-25 of "example_paper.tex"), **the template version is determined by the package declaration:** - `\usepackage{icml2025}` denotes the double-blind submission version. - `\usepackage[accepted]{icml2025}` denotes the camera-ready version. **Our submission correctly employs the former package for double-blind review.** For further verification, we kindly invite you to: 1) Please verify that the reviewed PDF was correctly downloaded from OpenReview. 2) Compare our submission with ICML's official compiled templates. 3) Note the presence of left-aligned line numbers in our submission. 4) Confirm the proper anonymization of author information. These elements collectively demonstrate our compliance with the double-blind submission requirements. ## Regarding Prior Work [1] We appreciate the opportunity to clarify the relationship between our work and [1]. If we understand correctly, given that *[1] employs information theory to analyze CoT, and concerns have been raised regarding its information-theoretic results in its ICLR 2025 peer review*, your key question appears to be: ***Is information theory suitable for modeling CoT?*** > **General Response:** > > Our work and [1] pursue **fundamentally different analysis objectives**, leading to **distinct formulations** of CoT. Therefore, the potential limitations of [1] do not apply to our contributions. 
Before elaborating, we highlight two key principles that guide our perspective: - Theoretical formulations serve as tools to address specific research questions. - Every theoretical framework has inherent limitations. **Our work differs substantially from [1] in research objectives:** - **[1]:** Uses information theory to introduce "information gain" as a metric for ***detecting reasoning errors.*** - **Our Paper:** Leverages information theory to ***characterize the mechanisms underlying test-time scaling methods.*** As a theoretical framework, **the suitability of information theory depends critically on the research objective.** While information theory offers intuitive and interpretable theoretical insights, **its quantitative applicability in natural language contexts (e.g., LLMs) is limited.** We have carefully examined [1]'s results. Although the left-hand side of Proposition 3.4 ("information gain" metric) provides an intuitive measure of step-wise significance, as noted by its reviewers, it faces two key limitations: - Its relevance to reasoning correctness remains debatable. - This metric cannot be calculated directly, thus a questionable supervisor model ($g$) needs to be introduced. However, **these limitations in [1] do not imply that information theory is unsuitable for CoT analysis.** **Our work avoids these pitfalls through a fundamentally different formulation, as explained in our initial rebuttal.** By modeling reasoning as a transparent planning-execution process linking implicit thoughts ($t$) to observable responses ($r$), we establish more reasonable assumptions (compared to [1]'s primitive tasks and Bayesian networks). Consequently, **Part 1 of our paper derives a lower bound on error occurrence probability** (Theorem 3.3) — a more robust result than [1]'s "information gain." 
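For reference, the textbook form of Fano's inequality on which such a bound rests (our Theorem 3.3 adapts it to the thought/response setting; the exact constants in the paper may differ) reads:

$$
H(t \mid r) \;\le\; H_b(P_e) + P_e \log\big(|\mathcal{T}| - 1\big)
\quad\Longrightarrow\quad
P_e \;\ge\; \frac{H(t \mid r) - 1}{\log |\mathcal{T}|},
$$

where $P_e$ is the probability of failing to recover the thought $t$ from the response $r$, $H_b(\cdot)$ is the binary entropy function, $|\mathcal{T}|$ is the size of the thought space, and logarithms are base 2. As snowball errors inflate the conditional entropy, this lower bound on the error probability rises accordingly.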
Moreover, by focusing on trends in information loss (e.g., snowball effects), **we circumvent the need for exact quantification, thereby mitigating information theory's limitations.** In summary, our position is as follows: 1) [1]'s limitations do not invalidate information theory for CoT analysis. 2) [1]'s challenges stem from misalignment between its goals and information theory's constraints. 3) Our distinct objectives and formulation yield more reliable results while avoiding these pitfalls. We respectfully submit that our work should be evaluated independently of [1]'s potential shortcomings. We hope this response demonstrates that **information theory remains well-suited for our specific research objectives** in CoT analysis. **We would be deeply appreciative if you could consider raising your score of our work!** [1] Ton, J. F., Taufiq, M. F., & Liu, Y. (2024). Understanding Chain-of-Thought in LLMs through Information Theory. arXiv preprint arXiv:2411.11984.
Summary: The paper focuses on analyzing ``snowball effects'', where the model's implicit thinking process is not well represented by the tokens they generate at each step, hence accumulating throughout the inference. They analyze this phenomenon through an information-theoretic perspective to give lower bounds for correct reasoning for well-known methods such as BoN and ToT. Claims And Evidence: I found Figure 2 to be confusing. If the problem has more variables/steps to arrive at an answer, the length $L$ will naturally be greater; this might also reduce the performance since at each step, the model will need to choose the next step from more options (since we have more variables). However, this is also correlated with step length $L$, worrying me that the snowballing effect is a side effect of this, and not being able to represent the implicit thinking in the generated tokens. Perhaps, testing this idea on other datasets where the length of the solution does not necessarily imply a more difficult solution (such as adding $n$ many numbers, though this is just an option). Methods And Evaluation Criteria: See the previous section. Theoretical Claims: I did check the proofs. I believe there is a typo in Proposition 4.3: the RHS should be 1 minus the stated RHS. Experimental Designs Or Analyses: Yes, see the previous section. Supplementary Material: The proofs only. Relation To Broader Scientific Literature: Test-time scaling is significant for improving the performance of pre-trained models in the post-training phase, since current models have started to show slower improvements even though the quantity and quality of pre-training data have never been higher. Having tools to analyze test-time scaling methods is important in the sense that researchers will have tools to improve upon these methods. However, I believe the test-time scaling view in the paper only represents a part of the literature. 
Newer test-time scaling methods such as ``Algorithm of Thoughts'' (AoT) [1] and AoT+ [2] show improvements by scaling the reasoning within-context, as opposed to methods that only consider CoT chains in each context (such as BoN, ToT, GoT, RAP). I believe, if the paper can integrate its analysis into these cases, to perhaps explain their efficiency in terms of output tokens, it can be a significant addition to the reasoning/planning literature. [1] Algorithm of thoughts: Enhancing exploration of ideas in large language models (ICML 2024), https://arxiv.org/abs/2308.10379 [2] LLMs Can Plan Only If We Tell Them (ICLR 2025), https://arxiv.org/abs/2501.13545 Essential References Not Discussed: Not for the paper's results, but more discussion of test-time scaling methods would be helpful. Please see the previously mentioned papers. Other Strengths And Weaknesses: Strengths: - Easy to read and well-written - The paper proposes a new information-theoretic perspective for investigating the behavior of various test-time scaling methods - The topic is timely - The experiments are mostly supportive of the narrative Weaknesses: - The algorithms of focus for the analysis could be broadened to include methods that have a single-context solution such as AoT, as opposed to ToT. - Please see previous comments on Figure 2. Other Comments Or Suggestions: - Proposition 4.3 might have a typo in the results (see previous comment regarding this). Questions For Authors: - Do the authors think their analysis can be extended to more test-time scaling methods? And can be updated to include hints on the solution efficiency? (producing fewer tokens while having a similar performance) In the current state of the paper, I give a ``Weak Accept'' rating; however, if the authors make the necessary changes to improve the applicability of the paper as explained in my previous comments, I'd be happy to reconsider my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback and constructive comments. Below, we provide point-by-point responses to address the raised concerns. ## The Snowball Error and the Length We acknowledge your insightful observation that the length of responses could be correlated with question difficulty, beyond just snowball errors. This perspective significantly enriches the interpretation of our findings. To validate our claims in scenarios where solution length does not necessarily correlate with solution difficulty, **we have conducted additional experiments analyzing mutual information (MI) and accuracy across different difficulty levels**. These experiments were performed on the MATH-500 dataset [1], where questions are categorized into five difficulty levels (Level 1 being simplest, Level 5 most difficult), with questions at each level sharing comparable difficulty. The experimental results are available in our anonymized repository (https://anonymous.4open.science/r/extra-experiments-0E75/README.md), particularly in the ***Analysis by Difficulty*** section. Moreover, this finding aligns with recent literature. For instance, [2] demonstrates that excessively long CoT reasoning can harm solution accuracy, a phenomenon referred to as *overthinking* [3]. [1] Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., ... & Cobbe, K. (2023). Let's verify step by step. ICLR. [2] Wu, Y., Wang, Y., Du, T., Jegelka, S., & Wang, Y. (2025). When More is Less: Understanding Chain-of-Thought Length in LLMs. arXiv:2502.07266. [3] Sui, Y., et al. (2025). Stop Overthinking: A Survey on Efficient Reasoning for Large Language Models. arXiv:2503.16419. ## Extension to AoT and AoT+ We appreciate the suggestion to explore extensions to AoT and AoT+. We would like to address this from two perspectives: 1) **External vs. 
Internal Slow-Thinking:** While AoT primarily utilizes ICL to teach algorithms to LLMs, **the introduction of specialized instances enables new capabilities like reflection**. This makes AoT more akin to internal slow-thinking approaches, whereas our current focus is on external slow-thinking methods. 2) **Reflection Mechanism:** However, **we are actively planning to incorporate reflection mechanisms in subsequent work**, as mentioned in later sections of this response. We believe this extension will substantially enhance the impact of our future research. ## Typos in Proposition 4.3 After careful consideration, we believe Proposition 4.3 is mathematically sound. We would like to highlight Definition 4.1, where the left-hand side of Proposition 4.3 represents the probability of generating a ***correct*** reasoning step, which contrasts with Theorem 3.3 and the statements in line 213 (the probability of errors). We will improve the clarity of this presentation in our revised manuscript. ## Extension for More Test-Time Scaling Methods Our primary focus remains on external search methods, where we believe similar width-expansion strategies can be readily extended. Additionally, we have identified two other noteworthy test-time scaling approaches: 1) **Internal Slow-Thinking**. As introduced in our paper (Line 32), these methods employ training and parameter updates to enable long-CoT capabilities (e.g., DeepSeek-R1). **The key differentiator appears to be reflection mechanisms, allowing models to detect and correct reasoning errors.** By formalizing reflection, our framework could extend to understanding internal slow-thinking methods. One possible formulation can be built on the fact that ***reflection can reset the snowball error and thus obtain better results, at the cost of some trade-offs in detecting potential reasoning errors***. This extension could significantly enhance our findings' broader implications. 2) **Implicit Reasoning**. 
This emerging paradigm [4-6] scales inference through looped computations rather than additional token generation. Our results may generalize to illustrate implicit reasoning benefits by formalizing error propagation in these looped structures. [4] Tack, J., et al. (2025). LLM Pretraining with Continuous Concepts. arXiv:2502.08524. [5] Chen, Y., et al. (2025). Inner thinking transformer: Leveraging dynamic depth scaling to foster adaptive internal thinking. arXiv:2502.13842. [6] Yu, Q., et al. (2025). Enhancing Auto-regressive Chain-of-Thought through Loop-Aligned Reasoning. arXiv:2502.08482. We believe these responses adequately address all concerns raised. We greatly appreciate your time and constructive feedback.
Summary: This paper aims to provide rationales for the effectiveness of inference-time compute scaling, also known as slow thinking, particularly from an information-theoretic perspective. First, it argues that as the length of a reasoning path increases, the probability of encountering an error along the path also grows, potentially at a rate exceeding linear scaling. Second, it posits that slow-thinking methods enhance reasoning by expanding the breadth of the reasoning space, thereby increasing the likelihood of generating a correct response. However, their effectiveness depends on the quality of the selection module, which is responsible for identifying the correct response from among the candidates. All of these claims are substantiated and analyzed through mathematical frameworks and empirical experiments, at least according to the authors. Claims And Evidence: - Claim 1 (Section 3): The probability of a reasoning error is lower-bounded by a certain threshold, derived from the concept of snowball errors introduced in Section 2. - Evidence 1: The mathematical proofs presented in Sections 2 and 3 appear reasonable, though I am not entirely certain. However, they rely on a strong assumption that each reasoning step follows a single gold-standard thought process. This assumption contradicts the widely accepted notion that multiple reasoning paths can lead to the same answer. Additionally, it is somewhat trivial to observe that as the length of a model-generated sequence increases, errors occurring earlier in the sequence are more likely to have a greater impact on later parts. *** - Claim 2 (Section 4): The probability of correct reasoning in recent external slow-thinking methods depends on the combined process of generating multiple answer candidates and selecting the correct one from the pool. 
- Evidence 2: This argument is reasonable; however, it is not entirely convincing to assume that there exists only a single gold-standard reasoning path, as stated in Section 4 (r_l^*). Methods And Evaluation Criteria: The paper primarily focuses on investigating the inner workings of slow thinking rather than proposing a new solution or method for further improvement. As a result, it does not discuss the novelty or effectiveness of any proposed approach. Nevertheless, the study would have been more insightful if it had incorporated a broader range of related methods beyond BoN and MCTS in its experiments. Theoretical Claims: This paper presents a series of definitions, proofs, and lemmas, most of which appear reasonable. However, I have some concerns, as mentioned above. Additionally, I acknowledge the possibility of mathematical errors in the provided proofs that I may not have detected. Experimental Designs Or Analyses: The empirical experiments are limited in terms of both the target tasks and the models used. Specifically, only 8B-scale models are evaluated, and the study focuses on just two tasks (GSM8K and ProntoQA). This raises concerns about the generalizability of the findings to models of different sizes and a broader range of tasks. Supplementary Material: I skimmed through the supplementary material but did not thoroughly verify the correctness of the claims and proofs step by step. Relation To Broader Scientific Literature: Inference-time compute scaling, also known as “slow thinking,” is a prominent topic in the machine learning and NLP communities. Consequently, this research is expected to make a meaningful contribution to the literature. Essential References Not Discussed: Maybe not essential but related: How Language Model Hallucinations Can Snowball (ICML 2024) Other Strengths And Weaknesses: Please see the above comments. Other Comments Or Suggestions: Please see the above comments. Questions For Authors: Please see the above comments. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the thorough and constructive reviews. Below we address each concern raised by the reviewers. ## Concerns about the Assumptions We note your thoughtful concerns regarding our methodological assumptions, specifically concerning: **(1) the coverage of multiple valid reasoning paths, (2) the impact of error propagation, and (3) the single gold-standard path assumption ($r_l^{*}$) presented in Section 4.** We address each point systematically. **Response to (1):** We would like to clarify our conceptual framework (and apologize for any lack of clarity in the current manuscript). Our model treats LLM reasoning as a two-phase process: *planning and execution*. Given a question, the planning phase generates implicit thought sequences ($t$ in Fig. 1), while the execution phase produces the observable responses ($r$). While we acknowledge multiple paths may lead to correct solutions, our analysis intentionally focuses on error probabilities within individual execution paths. More specifically, though multiple reasoning paths can lead to the answer, **we focus on one specific path and discuss the errors and probabilities on this exact path.** **Response to (2):** We completely agree that earlier errors typically have greater cumulative impacts. However, our theoretical contribution (Theorem 3.3) specifically examines **the initial error occurrence probability rather than its propagation effects.** This focused analysis provides fundamental insights into the first-error statistics during reasoning chains. **Response to (3):** Within our execution-phase modeling framework, where reasoning sub-tasks are pre-determined during planning, the gold-standard response assumption ($r_l^{*}$) remains both theoretically sound and practically meaningful for our analytical purposes. 
Another possible concern about our setting could be the explanation of the "reflection" mechanism in many modern LLMs. A detailed discussion of its relationship to MI decay appears in our response to Reviewer NREt (Section "The General Linkage between MI Decay and Reasoning Errors"), which we respectfully refer you to for complementary analysis. ## More Experiments We appreciate the reviewer's suggestions for expanded experimental validation and are pleased to provide additional empirical evidence or explanations as follows. 1) **Extra Benchmark Task: Game of 24**. We conducted verification experiments using the Game of 24 benchmark ([1]), which requires solving arithmetic puzzles to obtain the number 24. Our results demonstrate patterns consistent with Figure 3:

| $N$ | 2 | 4 | 6 | 8 | 10 | 12 | 14 | 16 | 18 | 20 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ORM-Vote | 17.96% | 29.01% | 38.67% | 46.13% | 50.83% | 53.87% | 59.94% | 63.81% | 66.85% | 67.68% |
| ORM-Max | 17.96% | 29.01% | 38.67% | 46.13% | 50.83% | 53.87% | 59.94% | 63.81% | 67.13% | 68.23% |

**where $N_{res}=6.08$, $N_{call}=18.24$, and the baseline MCTS obtains an accuracy of 64.80%.** This result illustrates a conclusion similar to Fig. 3: when the total reasoning cost is comparable, BoN can achieve comparable or even better performance than MCTS. 2) **Larger LLMs**. Beyond the 7B/8B models in Figure 2, we include additional validation on Qwen2.5-14B-Instruct and Qwen2.5-32B-Instruct. Results are available in our anonymous repository: https://anonymous.4open.science/r/extra-experiments-0E75/README.md, the ***Larger LLMs*** section. 3) **Alternative Search Algorithms**. We categorize existing approaches into: - **Non-tree-based methods** (such as AoT): These methods typically incorporate reflection mechanisms that currently **fall outside our theoretical framework focused on width-expansion limitations** in tree-based methods. 
Alternatively, we discuss potential extensions to model reflection in our response to Reviewer Qdus (section "Extension for more Test-Time Scaling Methods"). We believe this will be an interesting extension of our future work. Thanks for your valuable advice!

- **Tree-based methods** (BFS, DFS, ToT, etc.): As demonstrated in [1,2], since MCTS is designed as a series of operations comprising "Selection, Expansion, Simulation and Backpropagation", it has a higher probability of selecting more valuable intermediate thoughts in practice. Thus, for a given total reasoning cost, MCTS usually outperforms naive tree-based methods, as shown in the results of [2]. In Fig. 3, other existing tree-based methods would therefore form a baseline below that of MCTS, leading to trivial results. **Hence, we choose MCTS as the state-of-the-art baseline among tree-based methods.**

[1] Tree of thoughts: Deliberate problem solving with large language models (NeurIPS 2023).

[2] AlphaZero-like tree-search can guide large language model decoding and training (ICML 2024).

## Essential References Not Discussed

We appreciate your suggested references and will incorporate them in our revision.
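As a side note on the BoN numbers in the Game of 24 table above: under an idealized i.i.d. model (a simplification, not the paper's actual setting) in which each sampled response is correct with probability p and the verifier is a perfect oracle, best-of-N accuracy follows 1 - (1 - p)^N, which reproduces the qualitative saturation seen as N grows. A minimal sketch, with p a purely hypothetical per-sample success rate:

```python
def best_of_n_accuracy(p: float, n: int) -> float:
    """Probability that at least one of n i.i.d. samples is correct,
    assuming a perfect (oracle) verifier picks any correct sample."""
    return 1.0 - (1.0 - p) ** n

# Hypothetical per-sample success rate; note the diminishing returns as N grows.
p = 0.1
accs = [best_of_n_accuracy(p, n) for n in (2, 4, 8, 16, 32)]
# Accuracy is monotonically increasing in N and saturates toward 1.
```

In practice the curve flattens earlier than this oracle model predicts, since real reward-model selectors (ORM-Vote/ORM-Max above) are imperfect.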
Summary: The paper analyzes the mathematical mechanism behind the slow thinking of large language models. First, the authors define snowball errors using information theory. Then, they derive a lower bound on the probability of reasoning errors based on snowball errors. This bound indicates that the probability of errors increases as snowball errors accumulate. Their further derivation shows that the probability of generating a correct response decreases exponentially with reasoning length. They then prove that expanding the reasoning space through slow thinking can increase the probability of correct reasoning. Finally, they compare two slow-thinking methods, BoN and MCTS, finding that the key factors influencing the results are the capability of the reward function and the total reasoning cost. Claims And Evidence: Mostly good. All the statements have been theoretically proven. The authors provide empirical verification for the existence of snowball errors and for the comparison between BoN and MCTS. However, there is no empirical verification for the statement that "the probability of generating a correct response decreases exponentially with reasoning length." Methods And Evaluation Criteria: The selected benchmarks are suitable. Theoretical Claims: Yes, all are correct. Experimental Designs Or Analyses: The selected LLMs are all small (< 10B). The verification would be more robust if models of different sizes were selected. Supplementary Material: Yes, I have reviewed the appendices, specifically the proofs of lemmas. Relation To Broader Scientific Literature: The paper provides a theoretical perspective to analyze the slow-thinking mechanism. It proves that slow-thinking methods are effective in improving the reasoning correctness of LLMs. Essential References Not Discussed: As far as I know, no more papers need to be discussed or cited. Other Strengths And Weaknesses: Strengths: This paper provides a systematic theoretical study of the mechanism of slow-thinking.
Weaknesses: There is no empirical verification for the claim that "the probability of generating a correct response decreases exponentially with the reasoning length L." I believe this verification can be conducted by annotating the reasoning length of each question in a dataset (e.g., GSM8K) and then analyzing the relationship between accuracy and length to determine if accuracy indeed decreases exponentially with the reasoning length. Other Comments Or Suggestions: - line 238: leangth -> length. - Figure 3: set the tick numbers on the x-axis to integers. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable feedback. Below are our point-by-point responses:

## Lack of Empirical Verification for Proposition 4.3

Proposition 4.3 was designed to facilitate subsequent analyses (Sec 4.2, line 214) while maintaining readability. The negative exponential form provides a simple yet effective characterization of accuracy decay. Below, we address this concern comprehensively by demonstrating **(1) the theoretical robustness of our results under a relaxed assumption** and **(2) empirical validation of the key relationship between accuracy and length**.

**(1) First, we would like to prove that our main results remain valid under a weaker assumption**, namely when $\operatorname{Pr}\left[|\phi(r_l)-\phi(r_l^*)| \leq \tau\right]$ decreases monotonically with $l$. To address your concern, we relax the original assumption and demonstrate that our core results hold under a more general condition:

> **Relaxed Proposition 4.3:**
>
> Instead of requiring $\operatorname{Pr}\left[|\phi(r_l)-\phi(r_l^*)| \leq \tau\right] = \operatorname{min} \left(\lambda_\tau e^{-l},1 \right)$, we now assume only that the left-hand side *decreases monotonically with $l$ and converges to $0$*, i.e.,
>
> $ \operatorname{Pr}\left[|\phi(r_l)-\phi(r_l^*)| \leq \tau\right] = \operatorname{min} \left( \xi(l,\tau),1 \right),$
>
> where $\xi(l,\tau) \geq 0$ decreases monotonically with $l$ and converges to $0$.

Under this weaker condition, we derive revised bounds presented in the bullet list below:

- **Lemma 4.4:** $\xi^{L}(l,\tau)$,
- **Lemma 4.5:** $\prod_{l=1}^{L}\epsilon_b \left[ 1-\left( 1-\xi(l,\tau) \right)^k \right]$,
- **Theorem 4.6:** $\epsilon_b^L k^L \xi^{L}(l,\tau)$,
- **Lemma 5.1:** $\epsilon_N N^L \xi^{L}(l,\tau)$,
- **Lemma 5.2:** $\epsilon_b^L b^L \xi^{L}(l,\tau)$,
- **Lemma 5.3:** $b^{\frac{L(L+1)}{2}}\xi^{L}(l,\tau)\prod_{l=1}^{L}\epsilon_{b^l}$,

where $\xi^{L}(l,\tau) := \prod_{l=1}^{L}\xi(l,\tau)$.
These modifications preserve the validity of Corollary 5.4, Corollary 5.5, and Table 1, confirming that ***our theoretical insights are robust even without the original exponential form in Proposition 4.3***. **(2) Second, we have conducted extra empirical verifications of the relationship between the accuracy and the length.** We provide additional experimental verification (anonymous repository: https://anonymous.4open.science/r/extra-experiments-0E75/README.md, see ***Analysis by Difficulty*** section). The results confirm that **accuracy generally decreases with $l$ within the same difficulty level, exhibiting faster decay in early stages that gradually stabilizes**. We also kindly refer you to our discussion with Reviewer Qdus (section "The Snowball Error and the Length"), which provides additional context on this verification. ## LLMs in Different Sizes We appreciate your suggestion regarding additional experiments with varying model sizes. In response to this valuable feedback, we have conducted extensive analyses on larger language models (Qwen2.5-14B-Instruct and Qwen2.5-32B-Instruct) in addition to the 7B/8B models presented in Figure 2. The complete experimental results are also available in our anonymous repository: https://anonymous.4open.science/r/extra-experiments-0E75/README.md, we kindly refer you to the ***Larger LLMs*** section. We maintained identical experimental settings and workflow as described in Figure 2 to ensure methodological consistency. Our key findings from these additional experiments demonstrate that: - The MI (Mutual Information) decay pattern remains consistent across larger model sizes, exhibiting similar behavior to smaller models. - Response quality continues to show a negative correlation with output length, as observed in our original experiments. 
These significant findings will be systematically incorporated into our revised manuscript, including comprehensive analysis for the extra results and corresponding updates to Figures/Tables. ## Typos and Figure Ticks We thank the reviewer for these careful observations. All typos will be corrected and Fig.3's tick marks will be adjusted to integer values in the final version. We believe these responses adequately address all concerns raised. We greatly appreciate your time and constructive feedback. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed reply, I will raise my score to 4. --- Reply to Comment 1.1.1: Comment: We are thankful for your generous scoring decision and insightful comments, which helped improve our paper.
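To make the intuition behind Proposition 4.3 concrete: if each of the $L$ execution steps independently stays within tolerance with probability $q < 1$, the whole-chain success probability is $\prod q = q^L$, i.e., exponential decay in $L$; the relaxed assumption only requires these per-step probabilities to decrease monotonically. A toy sketch with hypothetical per-step probabilities (not fitted values from the paper):

```python
def chain_success_prob(per_step_probs):
    """Probability that every reasoning step stays within tolerance,
    assuming (for illustration only) independent steps."""
    prob = 1.0
    for q in per_step_probs:
        prob *= q
    return prob

q = 0.9  # hypothetical constant per-step success probability
decay = {L: chain_success_prob([q] * L) for L in (1, 5, 10, 20)}
# q**L = exp(-L * ln(1/q)): the chain success probability roughly halves
# every ln(2)/ln(1/q) ~ 6.6 steps for q = 0.9.
```

With non-constant but monotonically decreasing per-step probabilities, the product decays at least this fast, matching the relaxed bounds above.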
Parameters vs FLOPs: Scaling Laws for Optimal Sparsity for Mixture-of-Experts Language Models
Accept (poster)
Summary: The paper investigates the optimal trade-off between the total number of parameters and FLOPs per example in Mixture-of-Experts (MoE) models under a fixed pretraining FLOPs budget. Experimental results show that increasing total model parameters (i.e., increasing sparsity and reducing active parameters per input) leads to lower pretraining loss for a given compute budget. The paper also studies the related impact on downstream tasks. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No proofs in this paper. Experimental Designs Or Analyses: Experimental designs are sound, but there are a few limitations:

- One limitation is that the experiments focus primarily on theoretical FLOP estimates, without a thorough quantitative analysis of memory and communication overheads. Those factors are critical for MoE models with large numbers of parameters.
- The paper focuses on a specific MoE implementation, which may limit the generalizability of the findings. Comparing different routing mechanisms or expert configurations would provide valuable insights into whether the observed relationships between sparsity, parameters, and performance hold across different MoE variants. That being said, I'm fine with the current paper using one specific MoE implementation as an initial study.

Supplementary Material: I have reviewed Appendix E.3 because of my curiosity and interest in this part of the study. Relation To Broader Scientific Literature: This paper aligns well with important research topics in both scaling-law research and MoE model development, targeting an important open question about optimal sparsity in large-scale language models. The contributions are valuable for understanding the scaling of MoE models. Essential References Not Discussed: I'm not aware of such related works. Other Strengths And Weaknesses:

## Strengths

- Timely empirical study on an important research topic.
- Systematic experiments on a wide range of settings.
- I particularly like the analysis of optimal sparsity for a given model size. In my opinion, it represents one of the most practically valuable contributions, since it takes real-world constraints like memory and communication requirements into account. The paper would be strengthened by expanding this section and highlighting these real-world implementation considerations more prominently.

## Weaknesses

My overall impression is that while the paper presents extensive experimental results, these findings could have benefited from a more structured presentation with clearer takeaway messages. To elaborate:

- The stated research goal is to investigate the optimal trade-off between total parameter count and FLOPs per sample, but the scaling law primarily focuses on total parameters and sparsity. Although sparsity and FLOPs per sample are related, this **inconsistency** can confuse the reader.
- The paper's findings suggest a monotonic relationship where increasing sparsity, increasing total parameters, and decreasing active parameters all lead to better performance. This raises questions about **whether this represents a true "trade-off"** as described in the paper's title. The practical implications of this finding could be more thoroughly discussed: is the recommendation to use increasingly large, extremely sparse models? What are the practical limits to this approach?
- The paper does not provide sufficient discussion of the relationship between pre-training loss and downstream performance in either the main paper or the appendix. Figure 10 suggests this relationship varies significantly across tasks and sparsity levels. A more thorough analysis of when and why sparse models transfer differently to downstream tasks compared to dense models would enhance the paper's contributions.
- MoE models are known to present unique training challenges compared to dense models.
The paper may benefit from including analysis of how the optimal sparsity levels might be influenced by considerations of training robustness and convergence. I emphasize that the paper has the potential to be a very strong paper given the systematic empirical study. I am looking forward to an improved version of the paper. Other Comments Or Suggestions: ## Presentation issues: - Line 45 (right): missing a space between the reference and "of". - Figure 1b: I recommend to use $N_a$ to denote the number of active parameters in this figure so that it looks consistent to others. - Line 81 (right) perform worse o. $\to$ perform worse. - Line 325 (left): demonstrates $\to$ demonstrate - Line 403 (left): supports $\to$ support Questions For Authors: - Line 137 (left): _"$K$ is the number of selected experts per token"_: To clarify, does the model always use a fixed $K$ for all tokens? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time reviewing our paper. We find it encouraging that the reviewer finds our work valuable and relevant as we systematically study optimal model size and FLOPs-per-token in MoEs. We address the concerns below. Please note that we partially quote or paraphrase the reviewer's comments due to space constraints:

> One limitation is that the experiments focus primarily on theoretical FLOP estimates, without a thorough quantitative analysis of memory and communication overheads ...

We have discussed the limitations of using theoretical FLOPs estimates in Section 6.1. We agree that it would be very valuable to benchmark different hardware profiles and leave that as future work.

> The paper focuses on a specific MoE implementation, which may limit the generalizability of the findings ...

We chose the most typical setup for MoEs in our work, so we opted to use token-based routing. Examining this choice in detail would indeed be very valuable, which we leave as future work.

> The stated research goal is to investigate the optimal trade-off between total parameter count and FLOPs per sample, but the scaling law primarily focuses on total parameters and sparsity ...

This is a fair point, and it deserves an explanation. While our goal is to understand the relationship between model size and FLOPs-per-example, we study this using a surrogate control knob: increasing the sparsity in MoEs decreases the number of active parameters, which in turn decreases the FLOPs-per-example under settings where compute can take advantage of sparsity. We will make this clear in the paper.

> The paper's findings suggest a monotonic relationship where increasing sparsity, increasing total parameters, and decreasing active parameters all lead to better performance. This raises questions about whether this represents a true "trade-off" as described in the paper's title ...

This is an excellent point raised by the reviewer.
Our study finds that as we scale MoE model size, we need to scale up the sparsity value as well. This finding does suggest a tradeoff: we can increase model size but need to reduce the number of active parameters (via sparsity) to gain benefit. We hope that our work encourages practitioners to invest in building infrastructure for training very large-scale MoEs and provides some guidelines on what sparsity value and model sizes to use for a given compute budget. However, given the scale used in our study, we acknowledge that limitations may be uncovered when running models at very large scale, i.e., when extrapolating to model sizes significantly larger than those used in our study, where we have not been able to verify how scaling behaves due to compute limitations.

> The paper does not provide sufficient discussions on relationship between pre-training loss and downstream performance in both the main paper and the appendix. Figure 10 suggests this relationship varies significantly across tasks and sparsity levels. A more thorough analysis ...

This is another excellent point raised by the reviewer. We conducted some downstream analyses, the results of which suggest that some task types transfer better than others, which is an intriguing finding in our paper, and we offered an initial hypothesis that the increased test-time compute demands of some tasks may conflict with increasing sparsity (which otherwise improves pre-training loss). A thorough investigation, as well as studying test-time interventions, would be very interesting but is left as future work.

> MoE models are known to present unique training challenges ...

This is a good point raised by the reviewer. We will add to the discussion in our paper that special care needs to be taken during MoE training to ensure that sub-optimal results are not due to poor optimization.
We followed established best practices to train MoEs, which included carefully searching over important hyperparameters like learning rate, weight decay, and warm-up schedule. Furthermore, we used a load-balancing loss, a router-Z loss, and QK-normalization to stabilize training. All of these details are noted in the appendix of our paper. We acknowledge that training a model at a scale (size and sparsity) larger than what is shown in the paper is an involved task.

> Presentation issues

We will fix these in the next revision of our paper.

> Line 137 (left): ... To clarify, does the model always use a fixed K for all tokens?

Yes. We use dropless token-based routing.

> I emphasize that the paper has the potential to be a very strong paper given the systematic empirical study. I am looking forward to an improved version of the paper.

We appreciate the reviewer's comment. We hope our rebuttal above addresses the reviewer's concerns, and we promise to update the paper to reflect the reviewer's thoughtful feedback. If this is satisfactory, we ask the reviewer to consider raising their score.

---

Rebuttal Comment 1.1: Comment: Thank you for the response and clarifications. I raise my rating to 3. It would be nice if the authors could also include those clarifications in the updated paper.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for their time and are grateful for their support. We commit to clarifying all the questions raised during the review process in the updated/final version of our paper.
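The "surrogate control knob" described in the rebuttal above can be made concrete with a simplified per-layer FFN count (hypothetical sizes; the paper's exact FLOP accounting may differ). For a top-K token-choice MoE with E experts, sparsity is S = 1 - K/E; total parameters grow with E, while FLOPs per token track only the K active experts (roughly 2 FLOPs per active parameter in the forward pass, under the common estimate):

```python
def moe_ffn_stats(num_experts: int, top_k: int, params_per_expert: int):
    """Simplified FFN-only accounting for one top-K token-choice MoE layer.

    Returns (sparsity, total_params, active_params). Forward FLOPs per token
    are approximately 2 * active_params under the usual 2-FLOPs-per-parameter
    estimate (router cost ignored).
    """
    sparsity = 1.0 - top_k / num_experts
    total_params = num_experts * params_per_expert
    active_params = top_k * params_per_expert
    return sparsity, total_params, active_params

# Hypothetical layer: 64 experts, top-2 routing, 1M parameters per expert.
s, total, active = moe_ffn_stats(64, 2, 1_000_000)
# Raising num_experts at fixed top_k increases sparsity and total_params
# while leaving active_params (and hence FLOPs per token) unchanged.
```

This is why increasing sparsity decouples model capacity from per-example compute: E scales the memory/parameter budget, while K alone sets the FLOPs per token.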
Summary: This paper explores parameter-FLOP trade-offs in sparse MoE LLMs. The author finds that: 1. Increasing sparsity during pretraining improves efficiency and performance under a fixed compute budget. 2. More parameters benefit pretraining, while FLOPs are crucial for inference, especially for reasoning tasks. 3. Optimal sparsity increases with size and compute, approaching full sparsity for large models. 4. Downstream performance correlates with pretraining loss, but denser models excel in tasks like reading comprehension. 5. The authors design a new scaling law incorporating MoE sparsity, guiding efficient design. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing empirical results. Methods And Evaluation Criteria: Yes, the evaluation setup and results are standard for developing new scaling laws. Theoretical Claims: The paper is mainly empirical. The derivation of scaling laws was presented clearly. Experimental Designs Or Analyses: Yes, I checked the experiment designs. Supplementary Material: I checked the experiments in the supplementary material. Relation To Broader Scientific Literature: This work extends existing scaling laws to MoE settings. Essential References Not Discussed: It would make the paper stronger if the authors could connect to ideas to improve deployment efficiency beyond FLOPs. Other Strengths And Weaknesses: Strengths: 1. The authors provide a comprehensive empirical analysis spanning multiple compute budgets and tasks, which provides new insights for MoE design. 2. The empirical analysis offers clear visualizations that effectively convey the trade-offs between total parameters, active parameters, and compute. Weaknesses: 1. Sparsity improves deployment efficiency in production. It would make the paper stronger to discuss how the sparsity results can lead to real-world benefits beyond FLOPs where memory and communication costs matter. 
Also, it would make the paper stronger if connections could be made to other methods introducing sparsity to LLMs, such as activation sparsity. Other Comments Or Suggestions: Line 82: Sentence is unfinished. Line 217: Duplicated "and". Questions For Authors: See the above section. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and thorough review and for their support. We thank the reviewer for pointing out the comprehensive nature of our empirical work and presentation, and we respond to the question(s) raised below:

### Response to Weaknesses

> Sparsity improves deployment efficiency in production. It would make the paper stronger to discuss how the sparsity results can lead to real-world benefits beyond FLOPs where memory and communication costs matter. Also, it would make the paper stronger if connections could be made to other methods introducing sparsity to LLMs, such as activation sparsity.

We thank the reviewer for bringing up activation sparsity. This is an interesting question, as activation sparsity in language modeling is an active area of research [1, 2, 3]. We chose to focus on the most typical setup for inducing sparsity in MoEs, focusing mostly on model size and compute cost, and deferred studying other forms of sparsity that may have their own tradeoffs to future work. We once again thank the reviewer for bringing this interesting question to our attention.

[1] Mirzadeh et al. ReLU Strikes Back: Exploiting Activation Sparsity in Large Language Models. 2023.

[2] Szatkowski et al. Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion. 2024.

[3] Liu et al. Training-Free Activation Sparsity in Large Language Models. 2025.

### Other comments

> Line 82: Sentence is unfinished.
> Line 217: Duplicated "and".

We thank the reviewer for spotting these typos in our draft. We will fix these errors and carefully proofread to ensure we catch and fix other errors in the final version of our paper. We once again thank the reviewer for their review and support. We look forward to engaging further with the reviewer if there are any additional questions.
Summary: This paper investigates the relationship between the number of model parameters and the compute per example, measured in Floating Point Operations (FLOPs), in the context of sparse Mixture-of-Experts (MoE) language models. The authors aim to understand how varying the sparsity level (defined as the fraction of inactive experts) affects model performance during pretraining and downstream tasks. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No proofs in this paper Experimental Designs Or Analyses: Yes Supplementary Material: Yes, experiment settings Relation To Broader Scientific Literature: This paper further explores the scaling of MoE-structured models. Essential References Not Discussed: One possible addition is a discussion of expert-level sparsity from ACL 2024: [1] Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models. Other Strengths And Weaknesses: 1. The paper is well written; I really enjoyed reading it. This paper is interesting and important, and its findings are important for future MoE LLM training and architecture designs. 2. The experiments are comprehensive and inspiring. Other Comments Or Suggestions: No Questions For Authors: 1. Apart from the training loss, are there any scaling laws for benchmark performance? 2. Is there any difference between training an MoE from scratch and continuing pretraining from a smaller dense model? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and thorough review and for their support.

> Essential References Not Discussed

The reference pointed to by the reviewer discusses techniques to sparsify MoEs after training, whereas we discuss optimal sparsity during pretraining and its implications on downstream tasks. We will discuss the reference mentioned by the reviewer [1] in the related works section in the updated version of our paper, as that can help the reader navigate the vast literature on MoEs. We thank the reviewer for bringing this reference to our attention.

### Response to Questions (Q)

> Apart from the training loss, any scaling laws about some benchmark performance?

Q1: We thank the reviewer for this question. Our experiments suggest that transferring performance from pretraining to few-shot downstream (DS) tasks depends on the nature of the task. We find this observation intriguing and believe that there is more work to be done to uncover the reasons behind this behavior, offering one hypothesis that tasks that can benefit strongly from more inference-time compute can be hampered by increasing sparsity naively. More generally, our trained models may not be suited for additional downstream evaluations since we do not post-train (RLHF, instruction finetuning) our models. We agree with the reviewer that the question of how sparsity affects DS evaluations is interesting, but it is out of the scope of this paper and is an excellent topic for future work.

> Is there any difference between training an MoE from scratch and continuing pretraining from a smaller dense model?

Q2: We chose the most typical setup to study sparsity behavior in MoEs, which uses multiple experts in the feed-forward network (FFN) layer. Understanding the behavior of pretraining starting from a smaller dense checkpoint, e.g., MoEfication [1, 2], while interesting, is outside the scope of this work. We can discuss this as future work though.

[1] Shang et al.
MoEfication: Transformer Feed-forward Layers are Mixtures of Experts. 2022.

[2] Szatkowski et al. Exploiting Activation Sparsity with Dense to Dynamic-k Mixture-of-Experts Conversion. 2024.

We once again thank the reviewer for their review and support. We look forward to engaging further with the reviewer if there are any additional questions.
Summary: The paper provides empirical scaling laws for MoE-based LLMs. The experimental setup is simply training MoE-LLMs on the RedPajama dataset and then evaluating by comparing the eval loss. With this setup the authors find: 1. For every sparsity level and FLOPs budget, there seems to be a unique optimal model size (fig. 1 and 2) 2. The optimal model sparsity increases as a function of total model size (fig. 4) 3. Downstream performance scale reliably w.r.t. validation loss irrespective of the model sparsity (fig. 5) ## update after rebuttal I will keep the current score. Claims And Evidence: The claims themselves are reasonably well supported, but I'm not sure they are very novel. Methods And Evaluation Criteria: Yes, the methodology to evaluate the LLMs is sound. The scaling laws fits are evaluated by MSE which is not a good metric. It's better to use a scale-invariant metric like R^2. Theoretical Claims: Na Experimental Designs Or Analyses: Checked the experimental setup, seems reasonable. Supplementary Material: Checked experimental setup, seems reasonable. Relation To Broader Scientific Literature: na Essential References Not Discussed: No references missing AFAIK, but more details needed. Other Strengths And Weaknesses: Pros: 1. The paper is well-written. 2. Scaling laws are impactful. Cons: 1. The novelty is lacking. There have already been many scaling papers for MoEs and it's not clear what is new here. The paper shows that for a given FLOPs budget and sparsity there is an optimal model size. This has been known for dense models for a long time, and one would expect it to hold for MoE models too. 2. The evaluation of the fits is lacking. The authors evaluate that with MSE, it's better to use a scale-invariant metric like R^2. 3. It is not clear if there are any new lessons for practitioners. Everyone knows that MoEs work well and that balancing training duration and model size is needed. Other Comments Or Suggestions: 1. 
Please give some more details on the experimental setup in the main paper. Currently it's hidden in the appendix. Questions For Authors: 1. Can you provide R^2 metrics for the fits and evaluate them on extrapolation? 2. Why do you use “scale-free Adam optimizer”? It is non-standard. 3. Regarding related work you write “However, these studies typically assume a fixed configuration for other critical variables influencing FLOPs per token, such as the number of active experts per input.” -- could you give more details on exactly what previous studies cover, and how it differs from what you cover? 4. Fig 1 says “These results indicate that for a fixed compute budget, increasing model sparsity leads to a reduction in pretraining loss” — this seems to not be true e.g. in figure 4. Please clarify this. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal:

### Response to Weaknesses (W)

> The novelty is lacking. There have already been many scaling papers for MoEs and it's not clear what is new here. The paper shows that for a given FLOPs budget and sparsity there is an optimal model size. This has been known for dense models for a long time, and one would expect it to hold for MoE models too.

W1: We thank the reviewer for raising this point about prior work on dense models that relates optimal model size to a given FLOPs budget. Our intention with this work is to shed light on how model size and sparsity jointly interact for compute-optimal models. While the reviewer correctly points out that one could hypothesize, based on prior studies, that MoEs should scale analogously to dense models with sparsity set to a fixed value, it is unclear how one might set this sparsity value optimally. We further study the practical case of fixed model size, which can be particularly important in on-device deployment scenarios, and suggest a parametric form for scaling laws, which the reviewer highlighted as a strength of the paper. In general, our study suggests that when designing a new architecture or hardware, we should keep in mind that allowing the model to have more parameters rather than more FLOPs per example would be more efficient, as long as the goal is to do well on the pre-training task and on DS tasks that are well correlated with the pre-training performance of the model.

> The evaluation of the fits is lacking. The authors evaluate that with MSE, it's better to use a scale-invariant metric like R^2.

W2: The R^2 values are 99% on the fitting data and 68% on the held-out extrapolation dataset (sparsity = 98%). We thank the reviewer for raising this point. We will update our paper with the above results.

> It is not clear if there are any new lessons for practitioners. Everyone knows that MoEs work well and that balancing training duration and model size is needed.
W3: The findings from this study suggest that MoEs don't always work well unless sparsity values are set carefully. Concretely, we find that optimal sparsity values grow with model size. Additionally, we find that MoEs' transfer performance depends on the nature of the downstream task (knowledge vs reasoning), which is consistent with the observations made in concurrent work by Jelassi et al. [1]. These are valuable findings for both practitioners and researchers who want to build efficient MoEs.

[1] Jelassi et al. Mixture of Parrots: Experts improve memorization more than reasoning. October 2024 / ICLR 2025.

### Response to Questions (Q)

> Can you provide R^2 metrics for the fits and evaluate them on extrapolation?

Q1: We provide R^2 metrics in our response to W2 above.

> Why do you use "scale-free Adam optimizer"? It is non-standard.

Q2: We thank the reviewer for carefully reading our paper and appendix. We used AdamW as described in Loshchilov and Hutter (https://arxiv.org/abs/1711.05101). We will clarify this detail in the final version.

> Regarding related work you write "However, these studies typically assume a fixed configuration for other critical variables influencing FLOPs per token, such as the number of active experts per input." — could you give more details on exactly what previous studies cover, and how it differs from what you cover?

Q3: We highlight the important papers to answer the reviewer's question:

- Clark et al. assume a fixed-size dataset of 130 billion tokens to derive scaling laws.
- Ludziejewski & Krajewski et al. conduct their experiments at a much smaller scale compared to our work and focus on varying granularity, while we also consider the number of active experts per input; they use expert-choice routing while we use token-choice routing.

We have discussed related work more extensively in Appendix A. We will gladly discuss additional works that the reviewer may want us to include.
We thank the reviewer for asking for this clarification and are happy to provide further information.

> Fig 1 says “These results indicate that for a fixed compute budget, increasing model sparsity leads to a reduction in pretraining loss” — this seems to not be true e.g. in figure 4. Please clarify this.

Q4: Figure 4 studies the case where the model size is constrained and shows how the optimal sparsity value changes for a given model size. If there is no bound on the total number of parameters, the optimal sparsity level approaches 1. On the other hand, Figure 1 shows that for a given compute budget, optimal models with higher sparsity have a larger total parameter count and a smaller active parameter count. We thank the reviewer for this question and will adjust the captions to make this clearer in our final version.

We thank the reviewer for carefully reading our paper and the appendix! We will follow this suggestion and update the draft in the final version of our paper. If our rebuttal is satisfactory, we ask the reviewer to consider raising their score.
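As an illustration of the scale-invariant fit metric discussed in W2/Q1, the sketch below computes R^2 for a scaling-law fit; the loss values are hypothetical placeholders, not the paper's actual fitting data:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    Scale-invariant, unlike raw MSE."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical observed vs. scaling-law-predicted pretraining losses.
observed = [3.10, 2.85, 2.60, 2.41, 2.25]
predicted = [3.05, 2.88, 2.62, 2.38, 2.27]
fit_quality = r_squared(observed, predicted)  # close to 1 for a good fit
```

An R^2 near 1 on the fitting data but substantially lower on a held-out sparsity level (as with the 99% vs. 68% figures above) indicates a fit that interpolates well but extrapolates less reliably.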
Self-Discriminative Modeling for Anomalous Graph Detection
Accept (poster)
Summary: The paper presents a self-discriminative modeling method for graph-level anomaly detection. By generating pseudo-anomalous graphs that interpolate between normal and anomalous samples, the method constructs a reliable decision boundary based solely on normal data. The claims are well supported by corresponding theory and analyses, and comparison experiments on various graph benchmarks validate the effectiveness of the proposed method. Claims And Evidence: This paper asserts that an accurate decision boundary for normal data can be determined by generating anomalous data that closely resembles normal data. This claim is supported by both theoretical analysis and simulation experiments. Methods And Evaluation Criteria: The proposed methods are effective and novel, and the evaluations are justified. Theoretical Claims: The theoretical claim is that the proposed model can effectively distinguish normal graphs from the generated pseudo-anomalous graphs, which serve as intermediates between normal and real anomalies. Additionally, the method for generating pseudo-anomalous graphs is well justified. Experimental Designs Or Analyses: The experiments in this paper are comprehensive and the experimental comparison is fair. The results are consistent with the related analyses. Supplementary Material: No supplementary material provided for review. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: No other references need to be cited/discussed. Other Strengths And Weaknesses: - Strengths: 1. This paper presents a clear and logical structure, making it easy to understand. The proposed approach appears promising for graph-level anomaly detection. 2. The motivation is well-articulated and reinforced through both theoretical analysis and experimental validation. 3. The study includes extensive comparative experiments, providing strong evidence of the method's effectiveness. - Weaknesses: 1.
How do the three proposed methods control the gap between pseudo-anomalous graphs and normal graphs without prior knowledge of real anomalous data? Does this introduce difficulties for the classifier in distinguishing them? 2. In Figure 3, there are significant overlaps between pseudo-anomalous and normal data. If these overlapping pseudo data points were removed, would the proposed method perform better? 3. Could the authors provide visualizations of the generated anomalous and normal data? This would greatly enhance the credibility of the approach. Other Comments Or Suggestions: Please refer to Weaknesses. Questions For Authors: Please refer to the Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **To W1:** Thank you for the insightful question. The adversarial training of SDM-ATI and SDM-ATII utilizes a generator to produce pseudo-anomalous graphs, with a discriminator balancing this via an adversarial loss (Eq. 9). Therefore, the numbers of training epochs for the generator and discriminator inherently control the gap between normal and pseudo-anomalous graphs. In practice, we employed a balanced setting of training epochs for both the generator and discriminator, and achieved superior performance. As the t-SNE plots (Figure 7) illustrate visually, although generated anomalies stay close to normal data, they remain sufficiently distinct. SDM-NAT controls the gap through $\lambda$: a larger $\lambda$ implies greater similarity between the generated pseudo-anomalous graphs and the normal graphs, but we can observe that the performance is robust across a wide range of $\lambda$ (see Figure 9).

We want to clarify that this setting does not introduce difficulties in training the classifier, as the adversarial dynamic and the joint learning in our methods can adaptively balance the discrimination capability of the classifier and the generation quality of the generator. The performance comparison (Tables 1, 3, and 6) across multiple datasets is good evidence to support this claim, and the embedding separation (e.g., Figure 7) also confirms the discrimination capability of the classifier. Besides, we further provided a statistical analysis to demonstrate that pseudo-anomalous graphs are sufficiently different from normal graphs, so that they can truly benefit decision boundary learning (see response to Reviewer 1ZeV's W1).

**To W2:** Thank you for your valuable concern. We tested the performance by removing the overlapping generated pseudo graph anomalies but gained only a slight improvement (usually $\leq 1\%$, sometimes even a drop) in performance.
Actually, our method doesn't require explicitly distinguishing between normal and pseudo-anomalous graphs. Our results (e.g., Figure 6) show that despite overlaps observed between normal and pseudo-anomalies during training, the SDM variants still maintain superior generalization performance in the test phase and achieve robust separation of normal and anomalous graphs.

**To W3:** We appreciate the suggestion. We have in fact included relevant visualization results in Appendix E.2. For example, in Figure 7, we not only visualize the normal and real anomalous graphs for our methods (as for the other baselines), but also visualize the generated pseudo-anomalous graphs for comparison. The data points marked in yellow denote the generated pseudo-anomalous graphs in our methods. We observe that SDM effectively separates normal and anomalous graphs, with pseudo-anomalous graphs interpolated between normal and real anomalous graphs (refer to Figure 7 (h), (i), and (j)). This serves as good evidence of the benefit of these pseudo-anomalous graphs for learning a more robust decision boundary compared to the other baseline methods.

**Again, we thank the reviewer for recognizing our work, and we hope that our responses can resolve your concerns.**

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. The explanations and additional results have resolved my previous questions. I also briefly reviewed the authors' responses to other reviewers. Overall, the paper is well-executed and makes a meaningful contribution to the anomaly detection community. I am therefore inclined to raise my rating.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer L2j6: Thank you for taking the time to review our responses and additional results. We are pleased that our clarifications help to address your concerns, and appreciate your recognition of our work. Best regards, Authors
Summary: This paper proposes a novel GLAD framework named Self-Discriminative Modeling (SDM). The key idea of SDM is to generate pseudo-anomalous graphs from normal graphs and train a classifier/discriminator to distinguish them. The generative model and discriminative model are jointly trained to learn a more robust decision boundary. Moreover, the authors further introduce a non-adversarial variant of SDM. Experiments on 12 benchmark datasets demonstrate that the SDM variants achieve superior performance compared to state-of-the-art GLAD baselines. Claims And Evidence: The main claim of this paper is that SDM can effectively detect anomalous graphs by leveraging the generated pseudo-anomalous graphs to refine a more reliable decision boundary. This claim is well supported by experimental results. Methods And Evaluation Criteria: The proposed SDM methods make sense for the GLAD problem in real-world scenarios. The benchmark datasets are selected from diverse real-world domains, including molecular, biological, and social graphs with both balanced and imbalanced settings. The evaluation metrics (AUC, F1-Score) are standard and appropriate for evaluating anomaly detection performance. Theoretical Claims: The analysis in Sec. 2.1 theoretically describes the motivation for generating pseudo graph anomalies to refine a more robust decision boundary, and the simulation results in Sec. 3.2 well support this claim. Experimental Designs Or Analyses: I have checked the soundness/validity of the experimental designs and analysis. The experiments in this paper are comprehensive, covering diverse real-world GLAD scenarios. The data split strategy is consistent for each competitive method, the comparison is fair, and the experimental analysis is thorough and in-depth. Supplementary Material: No supplementary material was provided in this paper.
Relation To Broader Scientific Literature: Compared to recent GLAD approaches (e.g., [1, 2]) which impose strict assumptions (e.g., hypersphere) on the latent space, SDM offers a more flexible solution by refining the decision boundaries through pseudo-anomaly generation. Moreover, the unsupervised scheme of SDM also makes it more practical compared to supervised approaches [3]. [1] Raising the bar in graph-level anomaly detection, IJCAI, 2022. [2] Deep orthogonal hypersphere compression for anomaly detection. ICLR, 2024. [3] Dual-discriminative graph neural network for imbalanced graph-level anomaly detection, NeurIPS, 2022. Essential References Not Discussed: No other references need to be cited/discussed. The authors have comprehensively reviewed the latest GLAD literature. They have broadly discussed the connection of the proposed framework with existing GLAD methods and GAN-based methods. Other Strengths And Weaknesses: **Strengths:** 1. This paper is well-written. The idea of generating pseudo-anomalous graphs as an anomaly proxy for robust decision boundary learning is innovative and meaningful in addressing several key challenges in existing GLAD approaches. 2. The motivation of the proposed SDM framework is clearly illustrated, and it is well supported by the theoretical analysis (Sec. 2.1) and simulation experiment (Sec. 3.2). Moreover, three variants of SDM are designed to focus on different challenges. 3. The comparison experiments are extensive and demonstrate the effectiveness of SDM over state-of-the-art GLAD baselines in various scenarios such as one-class, multi-class, and large-scale imbalanced GLAD. **Weaknesses:** 1. As the pseudo graph anomaly generation is the core of the proposed SDM, the major concern is whether the pseudo-anomalous graphs are sufficiently different from normal graphs so that they can truly benefit decision boundary learning. 2.
The proposed framework generates pseudo-anomalous graphs by perturbing normal graphs. However, how does the method ensure that it captures diverse potential anomalies rather than a narrow subset? 3. In Fig. 6, it can be observed that the pseudo graph anomalies generated by SDM-ATII overlap significantly with the distribution of normal graphs (top row). The authors should further discuss this observation and its impact on model training. 4. This paper does not include a robustness analysis of SDM under data contamination, which is common in real-world scenarios. Other Comments Or Suggestions: 1. Please harmonize the formatting of “i.e.” and “e.g.”, such as unifying them as italics or not. 2. The authors should also list in Table 4 the ratio of samples in each category. Questions For Authors: Please refer to the Weaknesses part above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We thank the reviewer for recognizing our work. Below are our responses to your concerns:**

**To W1:** We would like to address your concern from the following two aspects:

1. From the embedding visualizations (e.g., Figure 7), the embedding distributions of the pseudo-anomalous graphs are mostly separated from those of the normal data. As an illustrative example, Figure 7(j) shows that the pseudo-anomalous graphs successfully interpolate between the real anomalous and normal samples, which demonstrates the effect of the generated pseudo graph anomalies as auxiliary signals that benefit decision boundary learning.
2. To more explicitly address your concern, we evaluated SDM-NAT on AIDS (class 1) by employing the **normalized $k$-nearest-neighbor distance** to quantify the discrepancy. Specifically, we computed the average pair-wise one-nearest-neighbor distance between:
   * **Normal graphs vs. Normal graphs**, which is $0.0127$.
   * **Real anomalous graphs vs. Normal graphs**, which is $0.1056$.
   * **Pseudo-anomalous graphs vs. Normal graphs**, which is $0.0542$.

The results confirm that the pseudo-anomalous graphs are sufficiently different from normal graphs, supporting their role in enhancing decision boundary learning.

**To W2:** We would like to clarify how our approach ensures the generation of diverse pseudo-anomalous graphs:

1. Our method does not generate pseudo-anomalous graphs as a one-time, static process. Instead, the generation is dynamic and evolves continuously throughout training. Early in the process, the pseudo-anomalous graphs exhibit significant deviations from normal graphs, posing a relatively straightforward GLAD task. As training progresses, the generator produces pseudo-anomalous graphs that increasingly resemble normal graphs, thereby escalating the difficulty of the task.
This progressive shift aligns with the principles of curriculum learning [1], where the model begins with simpler examples and gradually tackles more challenging ones. As a result, the model is exposed to a broad and diverse spectrum of pseudo-anomalous samples, rather than just a narrow subset.

2. In SDM-ATI and SDM-ATII, the adversarial interplay between the generator and discriminator also drives diversity. The generator needs to vary its perturbation strategies to challenge the discriminator, preventing repetitive patterns and promoting diversity in the generated pseudo-anomalous graphs. In SDM-NAT, pseudo-anomalous graphs are generated by sampling from the latent distribution of normal data. Therefore, the generated pseudo-anomalies are expected to enclose the normal samples in the latent space, which naturally enhances the diversity. This also aligns well with our motivation illustrated in Figure 1.

```
[1] Yoshua Bengio, et al. Curriculum Learning, ICML, 2009.
```

**To W3:** The overlap observed in Figure 6 (top row) during training is a natural outcome of the adversarial design of SDM-ATII, where the generator produces pseudo-anomalous graphs resembling normal graphs to challenge the discriminator. While some overlap is observed, the majority of normal and anomalous graphs remain separable, which helps the model learn an effective decision boundary. The results in the bottom row of Figure 6 fully validate this claim, where SDM-ATII shows strong generalization by successfully distinguishing real anomalous graphs in the test phase. We will add this discussion to the paper.

**To W4:** We agree with the reviewer that robustness evaluation under data contamination is important. In response, we conducted an experiment on MUTAG (class 0) by injecting anomalous data into the training set at varying contamination levels, defined as percentages of the normal data.
The experimental results shown below indicate that the three SDM variants maintain stable performance across contamination levels and still outperform the SOTA baseline DOH2SC (94.72\% AUC without contamination) at the 10\% contamination level. This demonstrates the robustness and real-world applicability of our approach.

| Method | Metric | 0\% | 10\% | 20\% | 30\% |
| --- | --- | --- | --- | --- | --- |
| SDM-ATI | AUC | **95.83 (0.00)** | 94.77 (0.79) | 93.85 (0.04) | 94.23 (1.23) |
| SDM-ATI | F1-Score | **83.33 (0.00)** | 82.61 (0.46) | 80.24 (0.00) | 81.27 (1.96) |
| SDM-ATII | AUC | **99.31 (1.42)** | 97.58 (0.48) | 97.25 (0.02) | 97.04 (0.15) |
| SDM-ATII | F1-Score | **99.13 (1.74)** | 94.12 (0.00) | 88.24 (0.00) | 89.22 (0.98) |
| SDM-NAT | AUC | **100.00 (0.00)** | 97.67 (0.73) | 96.44 (0.21) | 95.71 (2.17) |
| SDM-NAT | F1-Score | **100.00 (0.00)** | 92.16 (1.96) | 88.24 (0.00) | 87.70 (3.39) |

**To Other Comments:** We will (1) double-check and harmonize the formatting of "$\textit{i.e.,}$" and "$\textit{e.g.},$", and (2) supplement the ratio of samples in each category for all datasets.

---

Rebuttal Comment 1.1: Comment: The authors' responses have addressed my concerns. They provided convincing evidence regarding the diversity and effectiveness of the pseudo-anomalous graphs, clarified the distributional overlap in Fig. 6, and included results under data contamination. These additions further improve the clarity and credibility of this paper. Accordingly, I have decided to raise the overall score.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer 1ZeV: Thank you for the positive feedback. We greatly appreciate your constructive comments to help us improve the paper and are glad that our responses address your concerns. Best regards, Authors
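The average one-nearest-neighbor distance used in the response to W1 can be sketched as below. This is an illustrative reconstruction under stated assumptions: random vectors stand in for the learned graph embeddings, `avg_1nn_distance` is a hypothetical helper name, and the normalization applied in the rebuttal is omitted:

```python
import numpy as np

def avg_1nn_distance(a, b):
    """Average distance from each point in `a` to its nearest neighbor in `b`.
    When `a` and `b` are the same array, zero self-distances are excluded."""
    dists = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    if a is b:
        np.fill_diagonal(dists, np.inf)  # ignore each point's match to itself
    return dists.min(axis=1).mean()

# Random clusters stand in for learned graph embeddings: pseudo-anomalies
# placed between the normal cluster and the real anomalies.
rng = np.random.default_rng(0)
normal = rng.normal(0.00, 0.1, size=(50, 8))
pseudo = rng.normal(0.25, 0.1, size=(50, 8))
real = rng.normal(0.60, 0.1, size=(50, 8))

d_nn = avg_1nn_distance(normal, normal)  # normal vs. normal (smallest)
d_pn = avg_1nn_distance(pseudo, normal)  # pseudo-anomalous vs. normal
d_rn = avg_1nn_distance(real, normal)    # real anomalous vs. normal (largest)
```

With clusters placed this way, the distances reproduce the qualitative ordering reported above: normal-to-normal smallest, pseudo-anomalous in between, real anomalous largest.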
Summary: The paper introduces a new framework called Self-Discriminative Modeling (SDM) for detecting anomalous graphs. The proposed method trains a deep neural network using only normal graphs, without access to real anomalous examples. To achieve this, the authors generate pseudo-anomalous graphs from normal graphs. These pseudo-anomalous graphs help the model learn an effective decision boundary for identifying real anomalies. The authors propose three versions of their method: two based on adversarial training (SDM-ATI and SDM-ATII) and one based on a simpler, non-adversarial approach (SDM-NAT). Experiments conducted on 12 benchmark datasets show that all three versions of SDM outperform existing methods, with the non-adversarial version (SDM-NAT) achieving the most stable and highest performance. Claims And Evidence: Overall, the claims in the paper are supported by the provided experiments. The authors show that the proposed Self-Discriminative Modeling (SDM) methods outperform existing techniques across multiple datasets. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the anomalous graph detection task. Generating pseudo-anomalous graphs to learn decision boundaries when real anomalies are unavailable is reasonable. The use of adversarial (SDM-ATI, SDM-ATII) and non-adversarial (SDM-NAT) approaches for generating pseudo-anomalous data also makes sense. The evaluation metrics (AUC and F1-score) and chosen benchmark datasets are suitable, and commonly used in many previous works. Theoretical Claims: No theorems are provided in the paper, but the mathematical motivations behind the training objective make sense. Experimental Designs Or Analyses: I checked the experimental setup described in the Experiments section and the appendix. The authors include experiments in multiple settings, which I think is good. The baselines used by the authors are also sufficiently numerous.
No major concerns about the experimental design and analysis. Supplementary Material: Yes. I briefly checked the supplementary materials, particularly the additional experiment sections. Relation To Broader Scientific Literature: The paper builds clearly on existing methods like DeepSVDD-based approaches and GAN-based anomaly detection techniques. It addresses known issues of strong assumptions on graph embedding distributions in previous works by generating pseudo-anomalous graphs using various techniques. I think the contributions of the paper fill some of the gaps in the previous literature. Essential References Not Discussed: Nothing as far as I know. Other Strengths And Weaknesses: Nothing additional. Other Comments Or Suggestions: Some of the statements in the paper need clarification: - ‘no overlap between D and \tilde{D}’. Could you clarify this? As the distribution of graphs is continuous, there may be some overlap in the supports of the distributions. - ‘Note that this is an unsupervised learning problem, of which the training data do not contain any anomalous graphs’. In general, an unsupervised learning problem only means that labels are unavailable. However, the unlabeled data could also contain anomalous samples that a model needs to discover. Please clarify the setup. Questions For Authors: Please address the concerns above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **We thank the reviewer for the positive comments. Below are our responses to your concerns:**

**To Q1:** The statement "no overlap between $\mathscr{D}$ and $\tilde{\mathscr{D}}$" is intended to facilitate a more precise problem definition for our paper, as we would not be able to identify a data point in the overlapping region between $\mathscr{D}$ and $\tilde{\mathscr{D}}$ as normal or abnormal. This is also a common assumption in the anomaly detection literature [1], where normal and anomalous instances are modeled as originating from distinct distributions to establish a clear separation. However, we recognize the reviewer's valid concern: in a continuous graph space, distributions may exhibit overlapping supports. We would like to address this question from the following two aspects:

1. The "no overlap" assumption refers to the idealized setup where $\mathscr{D}$ and $\tilde{\mathscr{D}}$ are generated from non-overlapping processes. This abstraction facilitates the design of our anomaly detection method by providing a clean delineation between normal and anomalous behaviors.
2. In practice, real-world graph distributions may not be perfectly separable, which can be caused by outliers or the nature of the data. Our framework, however, is designed to handle such scenarios effectively. For instance, we generate pseudo-anomalous graphs to simulate $\tilde{\mathscr{D}}$, which may lie close to normal graphs in the feature space. This is evident in our t-SNE visualizations (e.g., Figures 7 and 8), where the proximity of these distributions tests the model's ability to detect subtle anomalies. Our simulation experiment (Sec. 3.2) and empirical results (Secs. 3.3-3.5) on benchmark datasets demonstrate that the method performs robustly even when the theoretical assumption of non-overlapping supports is relaxed.

```
[1] Lukas Ruff, et al. Deep One-Class Classification, ICML, 2018.
```

**To Q2:** In our paper, we describe our approach as an unsupervised learning problem where "the training data do not contain any anomalous graphs." This means that during the training phase, the model is exposed exclusively to normal graphs from the distribution $\mathscr{D}$, with no anomalous graphs from $\tilde{\mathscr{D}}$ present. This setup aligns with the standard "one-class" anomaly detection paradigm [2], a well-established approach under unsupervised settings. Specifically:

1. Training Phase: The training dataset consists solely of normal graphs (from $\mathscr{D}$), and no labels are provided, consistent with unsupervised learning.
2. Testing Phase: The model is evaluated on a separate test set that includes both normal graphs (from $\mathscr{D}$) and anomalous graphs (from $\tilde{\mathscr{D}}$). The task is to distinguish anomalies from normal instances, despite not encountering anomalies during training.

The reviewer is correct that, in a broader unsupervised learning context, the training data could contain a mix of normal and anomalous instances without labels, requiring the model to discover anomalies implicitly. We therefore also conducted an experiment to evaluate our performance in the data contamination case (refer to the response to Reviewer 1ZeV's W4), where part of the anomalies are included in the training dataset. We observe that our method also works when the training data contain some unlabeled anomalies.

```
[2] Bernhard Schölkopf, et al. Estimating the support of a high-dimensional distribution, Neural Computation, 2001.
```
Training High Performance Spiking Neural Network by Temporal Model Calibration
Accept (poster)
Summary: This paper systematically summarizes previous logit gradient calculation schemes, including SDT and TET, and then proposes a new temporal gradient rescaling method to enhance the learning capability of SNNs. Claims And Evidence: Yes Methods And Evaluation Criteria: I tend to think the proposed method can provide a new perspective for the SNN community to a certain extent. Theoretical Claims: I have read the theoretical claims in Sec. 3.2-3.3 and the Appendix. Experimental Designs Or Analyses: I have checked the experimental designs mentioned in Section 4. Supplementary Material: I have read the code submitted in the supplementary materials. Relation To Broader Scientific Literature: This work relates to BPTT training methods based on surrogate gradients in the SNN community. Essential References Not Discussed: Not found yet Other Strengths And Weaknesses: 1. The research perspective of Proposition 3.3 seems interesting. The authors conduct research on temporal heterogeneity and gradient allocation problems for different time-steps. 2. As shown in Tab.2 and Tab.4, the proposed method seems to be more effective for neuromorphic datasets (e.g. DVS-CIFAR10), but the performance improvement on ImageNet-1k (large-scale dataset) and SNN Transformer architecture appears to be relatively limited (<1%). For the CIFAR-100 dataset, the reported accuracy of this work lags behind TCL and ETC. Other Comments Or Suggestions: This work can be seen as a further exploration of TET [1], proposing a new loss calculation scheme around the gradient allocation problem at different time-steps. [1] Deng, S., Li, Y., Zhang, S., and Gu, S. Temporal efficient training of spiking neural network via gradient reweighting. ICLR 2022. Questions For Authors: See Strengths And Weaknesses Section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Rebuttal Appendix: https://anonymous.4open.science/r/TMC-0262**

**1. I tend to think the proposed method can provide a new perspective for the SNN community to a certain extent. The research perspective of Proposition 3.3 seems interesting. The authors conduct research on temporal heterogeneity and gradient allocation problems for different time steps.**

We sincerely appreciate your feedback and are greatly encouraged by it.

**2. As shown in Tab.2 and Tab.4, the proposed method seems to be more effective for neuromorphic datasets (e.g. DVS-CIFAR10), but the performance improvement on ImageNet-1k (large-scale dataset) and SNN Transformer architecture appears to be relatively limited (<1%). For the CIFAR-100 dataset, the reported accuracy of this work lags behind TCL and ETC.**

Here, we highlight the performance advantages of our TMC method on neuromorphic datasets, ImageNet, and CIFAR-100. Furthermore, we evaluate TMC on real-world applications, and the results demonstrate that TMC **achieves SOTA performance**.

**1) Neuromorphic Datasets**

+ **Dataset Feature Description:** These datasets capture the dynamic changes in pixel intensity and record the resulting spike events using dynamic vision sensors (DVS). Compared to static or frame-based datasets, they contain **rich spatio-temporal components**, as spatial and temporal information interact, and they follow an event-driven processing fashion triggered by binary spikes.
+ **TMC Advantages:** Our method employs temporal heterogeneous learning via a temporal logit gradient rescaling strategy. This is beneficial for **capturing more task-relevant spatio-temporal dynamic features** compared to existing temporal homogeneous methods, such as SDT and TET. Therefore, TMC exhibits significant advantages on neuromorphic datasets.

**2) ImageNet Dataset**

+ **Dataset Feature Description:** ImageNet is known for its high-resolution and complex scenes.
Current efforts to enhance performance on ImageNet mainly concentrate on refining the model architecture, often resulting in the **introduction of a large number of additional parameters** to achieve performance gains.
+ **TMC Advantages:** Notably, our proposed method TMC is **plug-and-play**. When TMC is integrated into the Hierarchical Spiking Transformer (the current SOTA model) in our paper, it enhances both classification and calibration performance on ImageNet, further demonstrating the effectiveness of our method.

**3) CIFAR-100 Dataset**

+ **Dataset Feature Description:** CIFAR-100 is characterized by its low-resolution images, which are often blurry and contain relatively little information.
+ **TMC Advantages:** With a small number of time steps, i.e., **T=2**, our method, which employs a temporal heterogeneous learning approach, is capable of capturing a richer set of features. Specifically, when T=2, TMC achieves a classification accuracy of 76.35%, outperforming other methods such as TEBN (75.86%), RMP-Loss (74.66%), and ETC (75.96%).
+ **Limitation of the Dataset:** However, when **the number of time steps is increased**, our method may capture noisy features due to the blurriness of the images, which is not beneficial to performance improvement. This suggests that our method has more advantages when handling dynamic and complex tasks.

**4) Real-World Applications**

Moreover, we evaluate our method on real-world applications, including text sequential classification and dynamic action sequential recognition tasks, where **TMC achieves state-of-the-art (SOTA) performance**.

+ **Text Sequential Classification Task.** We apply our method in the direct training phase of the SpikingBert model proposed in [R1] on the widely used Quora Question Pairs (QQP) dataset. We compare the classification performance of TMC with SDT (as used in the original paper) and TET. TMC achieves a higher accuracy (**87.86\%**) than SDT (86.82\%) and TET (87.03\%).
+ **Dynamic Action Sequential Recognition Task.** We train VGGSNN with T=10 on DVS-Gesture and compare the performance of our method with current works in Table R1 of the Rebuttal Appendix. Results show that TMC achieves SOTA performance with an accuracy of 99.12%.

**3. We further evaluate the training process of TMC with VGGSNN on DVS-CIFAR10 with T=10.**

**1) Gradient Norm Evaluation:** In Figure R3 in the appendix, we evaluate the trend of gradient norms for model parameter updates during the training of TMC, SDT, and TET. TMC exhibits the highest norm, indicating **faster convergence**.

**2) Training Stability Evaluation:** We visualize the training loss during the training of TMC, SDT, and TET in Figure R4 in the appendix. The results demonstrate TMC's **stable training, driving the loss to minimal values**, while SDT and TET suffer from overconfidence-induced oscillations.

[R1] Spikingbert: Distilling bert to train spiking language models using implicit differentiation.
Summary: This paper proposes a temporal confidence calibration method for SNNs, improving both the model's performance and heterogeneity, as demonstrated across several static classification tasks. Claims And Evidence: The proposed method is simple yet efficient, with a clear motivation of increasing the model's temporal heterogeneity. Comprehensive experiments on static datasets clearly demonstrate its superiority over SDT and TET. The SOTA results on CIFAR10-DVS and ImageNet are compelling. Methods And Evaluation Criteria: 1. The presented comparative and ablation experiments make sense for evaluating the method's effectiveness. 2. Evaluating only static datasets is fair yet a little weak for a method focusing on temporal heterogeneity. I hope the authors can provide more results on dynamic datasets to make the method more convincing, such as action recognition datasets including DVS-Gesture, SL-Animals, etc. Theoretical Claims: In Definition 3.1, is $\hat{P}_t$ the maximum value of softmax on the time-averaged logits? If so, the accuracies of the SNN at all time steps are equal. Then how does a temporally perfectly calibrated model trained with gt increase its temporal confidence monotonically to match Definition 3.1? (lines 199-202) Experimental Designs Or Analyses: The experiments are conducted fairly and reasonably. Supplementary Material: I reviewed the appendix and briefly checked the proofs. I didn't check the code. Relation To Broader Scientific Literature: It could benefit the brain-inspired learning area a lot, especially the training of classification tasks that leverage cross-entropy loss on spiking neural networks. Essential References Not Discussed: No Other Strengths And Weaknesses: 1. Some sentences in section 3.1 are false. [1] Calculate Loss on the output spikes. TET calculates the loss on the output values of an MLP followed by a spike layer. 2. The definition of the confidence score is quite weird, not bounded in [0, 1].
Leveraging it to punish overconfidence seems to deviate far from the confidence calibration motivation. 3. Following the above, I didn't see much relationship between the proposed method and Definition 3.1, nor a relationship between Definitions 3.1 and 3.2. The paper needs careful revision regarding the writing. 4. The result on N-Caltech101 is not SOTA. See [3]. [1] Spatio-Temporal Backpropagation for Training High-Performance Spiking Neural Networks [2] Temporal efficient training of spiking neural network via gradient re-weighting [3] Rethinking the membrane dynamics and optimization objectives of spiking neural networks Other Comments Or Suggestions: No Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Rebuttal Appendix: https://anonymous.4open.science/r/TMC-0262** **1. Provide more results on dynamic datasets to make the method more convincing, like action recognition datasets including DVS-Gesture, SL-Animals, etc.** **1) DVS-Gesture**: We train VGGSNN with T=10 on DVS-Gesture and compare the performance of TMC with current works in Table R1 in the appendix. TMC achieves **SOTA performance with an accuracy of 99.12%**. **2) SL-Animals-DVS**: VGGSNN (instead of a larger model, due to time constraints) is trained using TMC with T=16. Comparison with SDT and TET shows TMC achieving a **higher accuracy of 70.05%** against SDT (66.75%) and TET (68.34%), demonstrating the superiority of TMC. **2. Is $\hat{P}_t$ the maximum value of softmax on the time-averaged logits? Then how does a temporally perfectly calibrated model trained with $g_t$ increase its temporal confidence monotonically to match Definition 3.1? I didn't see much relationship between the proposed method and Definition 3.1, nor a relationship between Definitions 3.1 and 3.2.** **Clarification of Definition 3.1 and Our Mechanism:** **1) Element Definition (lines 165-169 left)** + Temporal Predicted Confidence: $\hat{P}_t=\max(\mathrm{Softmax}(\overline{z_t}))$. + Accumulated Logit Outputs at the $t$-th time step: $\overline{z_t}=\frac{1}{t}\sum_{i=1}^{t}z_i$. + $t \in \{1,2,\ldots,T\}$. + **$\hat{P}_t$ differs across time steps**, because $\overline{z_t}$ varies with $t$. **2) Definition 3.1: Temporally Perfectly Calibrated SNN (lines 169-176 left)** + We introduce model calibration into the SNN's time dimension, where a temporally perfectly calibrated SNN satisfies **test accuracy equals confidence at each time step**: $\hat{P}_t = \mathbb{P}(\hat{y}=y \mid \hat{P}_t)$.
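For concreteness, the temporal predicted confidence defined above can be sketched in a few lines of Python. This is an illustrative computation of $\hat{P}_t$ from per-timestep logits, not our released code; the toy logits in the usage note are made up.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def temporal_confidence(logits_per_step):
    """P_hat_t = max(Softmax(z_bar_t)), with z_bar_t the running mean
    of the logits z_1..z_t, for every timestep t = 1..T."""
    C = len(logits_per_step[0])
    running = [0.0] * C
    conf = []
    for t, z_t in enumerate(logits_per_step, start=1):
        running = [running[c] + z_t[c] for c in range(C)]
        z_bar = [v / t for v in running]
        conf.append(max(softmax(z_bar)))
    return conf
```

For example, with toy logits whose target-class evidence grows over time, such as `[[1,0,0],[2,0,0],[3,0,0]]`, the returned confidences increase monotonically across timesteps, matching the behavior a temporally well-calibrated rate-coding SNN should exhibit.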
**3) Definition 3.2: Temporal Gradient Scaling Factor (lines 177-176 left)** In ANN training, model calibration is realized using a gradient rescaling factor that rescales the gradient of the cross-entropy (CE) loss. To realize temporal model calibration in SNNs, we propose the temporal gradient scaling factor $g_t$ (lines 177-180 left) to rescale the gradient of the CE loss at each time step, i.e., the TET loss, in Equation 6. **4) Effect of $g_t$** + Note that Definition 3.1 establishes the concept of temporally perfect model calibration in the field of SNNs. Our work focuses on rate-coding SNNs, and we provide a further instantiation specific to rate-coding SNNs (lines 196-202 left). Specifically, current work suggests that the accuracy of rate-coding SNNs increases monotonically with time steps. Thus, for a temporally perfectly calibrated rate-coding SNN, confidence should increase monotonically with time steps. + To match Definition 3.1, $g_t$ should satisfy: at earlier time steps, it should shrink the effect of the CE loss logit gradient ($\Delta Z_t^{TET}$) to reduce the confidence; conversely, at later time steps, $g_t$ should enhance the effect of $\Delta Z_t^{TET}$ to increase the confidence. **5) Method Design (Section 3.3)** In Section 3.3, we propose the TMC loss function with a new regularization term to realize the effect of $g_t$. Through a theoretical analysis of the temporal gradients (lines 244-274 left and 220-235 right), we derive the temporal gradient rescaling factor $g_t^{TMC}$ of our method, which optimizes the rate-coding SNN to increase its confidence monotonically with time steps. **3. Some sentences in Section 3.1 are false.** + Thank you for your correction. We follow the loss function definition of SDT in [R13], and the reference [1] in line 139 left should be changed to [R13]. [R13] defines the loss function of SDT by calculating the cross-entropy loss between the average pre-synaptic input of the output layer and the true label.
+ TET calculates the cross-entropy loss between the pre-synaptic inputs of the output layer and the true labels at each time step. To be more accurate, we should change "the membrane potential of the last layer" in our paper to "pre-synaptic input of the output layer". **4. The definition of the confidence score is quite weird, not bounded in [0, 1].** We would like to provide clarification regarding the "confidence score" in our paper. + Before Equation 9, our analysis focuses on absolute confidence values, that is, the maximum predicted probability, which is inherently bounded within the interval [0, 1]. + For Equation 9, to enhance the loss function's sensitivity to overconfidence, we adopt the ratio confidence/(1 - confidence) as $\theta_t$. This essentially serves as a more effective regularization of confidence. **5. Result on N-Caltech101 is not SOTA.** On N-Caltech101, our method achieves 86.03% accuracy at T=10, while [3] achieves 87.86% accuracy at T=16. **Re-evaluating our method at T=16, we achieve 88.24% accuracy, surpassing [3]**. [R13] Deng, S., Li, Y., Zhang, S., and Gu, S. Temporal efficient training of spiking neural network via gradient re-weighting. arXiv preprint arXiv:2202.11946, 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. It greatly helps me understand the paper more thoroughly. I am satisfied with the good results on both object recognition and action recognition. I have one last question: could you provide any theoretical or experimental analysis on whether the temporal confidence of a model trained with TMC converges to its accuracy, consistent with Definition 3.1? --- Reply to Comment 1.1.1: Comment: Thank you for your review and positive feedback on our results in object recognition and action recognition. Regarding your new question, we provide both theoretical and experimental analysis as follows: **1. Theoretical Analysis** **1.1.
Definition 3.1 requires a rate-coding SNN to satisfy the following two properties:** + **Property 1:** $\hat{P}_t = \mathbb{P}(\hat{y}=y \mid \hat{P}_t)$. + **Property 2:** $\hat{P}_t < \hat{P}_{t+1}$. Here, $\hat{P}_t=\max(\mathrm{Softmax}(\overline{z_t}))$ and $\overline{z_t}=\frac{1}{t}\sum_{i=1}^{t}z_i$. **1.2. Convert the realization of these two properties into the optimization objective of TMC:** + **Note:** Since the predicted outputs of a trained SNN typically assign the highest probability to the target class across time steps, $\hat{P}_t$ can be expressed as $\hat{P}_t=\overline{P}_t^k$ (lines 165-189 right). Here, $\overline{P}_t^k$ is the probability of the target class $k$ in the distribution $\mathrm{Softmax}(\overline{z_t})$. + **Objective 1:** During training, the realization of Property 1 can be converted to optimizing $|\overline{P}_t^k - \mathbb{P}(\hat{y}=y \mid \overline{P}_t^k)| < \epsilon$. This can be achieved by introducing a confidence regularization term, $\theta_t$ (lines 198-211 right), to penalize the under-confidence issue $\overline{P}_t^k < \mathbb{P}(\hat{y}=y \mid \overline{P}_t^k)$ and, especially, the over-confidence issue $\overline{P}_t^k > \mathbb{P}(\hat{y}=y \mid \overline{P}_t^k)$ with high sensitivity. + **Objective 2:** During training, the realization of Property 2, $\hat{P}_t < \hat{P}_{t+1}$, can be converted to $\overline{P}_t^k < \overline{P}_{t+1}^k$. This can be achieved by introducing a linearly decreasing exponent, $\lambda_t$ (lines 213-219 right and lines 220-223 left), into $\theta_t$ to optimize $z_t^k < z_{t+1}^k$, as described in Proposition 3.3 (lines 189-193 right). **1.3. Theoretical analysis of TMC's gradient rescaling factors:** + With the loss function of TMC, the rescaling factor for the target class $k$ is generated to optimize confidence (lines 243-254 left).
Specifically, \begin{equation}g_t^k=\frac{\Delta Z_t^{TMC}}{\Delta Z_t^{TET}}=1-f(t) \cdot h(t),~~~f(t)=\frac{\lambda_t\theta_t^{\lambda_t}}{t},~~~h(t)=\frac{1}{1-P_t^k}.\tag{R5}\end{equation} Here, $f(t)$ decreases with time steps. + **At the initial training phase**, $h(t)$ follows a random uniform distribution, and $g_t^k$ increases with time steps to optimize $z_t^k < z_{t+1}^k$, thereby meeting **Objective 2**. + **During training**, at time step $t$, if $z_t^k$ is high, the probability of the target class $k$ in the distribution $\mathrm{Softmax}(z_t)$, denoted $P_t^k$, may indicate overconfidence. In this situation, $h(t)$ increases, causing $g_t^k$ to decrease, potentially even to a negative value, to penalize the overconfidence issue. Conversely, underconfidence occurs when $z_t^k$ is low, and $g_t^k$ increases to address this issue, thereby meeting **Objective 1**. + **At the end of training**, the $g_t^k$ values for different samples converge to an interval. We have visualized the distribution of $g_t^k$ values for 500 samples of a trained SNN in **Figure 1 (detailed analysis can be found in lines 246-272 right)**. Notably, for most samples with reasonable confidence, the $g_t$ values are centered within an interval that shifts closer to 1 over time. This indicates the achievement of **Objective 2**. It can also be seen that, across time steps, some samples' $g_t^k$ values are close to 1 or negative. This is consistent with **Objective 1**, which aims to penalize particularly underconfident and overconfident samples. **Overall, TMC realizes these two optimization objectives and thereby Definition 3.1**. **2. Experimental Evaluation** **2.1. Evaluation Metrics:** Whether the temporal confidence of a model trained with TMC converges to its accuracy can be quantified by its calibration performance.
Specifically, we evaluate the **calibration errors** (differences between the model's predicted confidence and its actual accuracy) of a VGGSNN (T=10) trained by TMC on DVSCIFAR10 across all time steps, using the standard metrics ECE and AdaECE (detailed definitions are in Appendix A.2 of our paper). **2.2. Experimental Results:** We compare the performance of TMC with SDT and TET in Table 1 in the appendix (https://anonymous.4open.science/r/TMC-0262/Table1_Calibration_Performance_Results.pdf). TMC exhibits the lowest calibration errors across time steps, indicating that **the model's predicted confidence is optimized to converge to its accuracy.** **2.3. Experiment in Paper:** We compare overall calibration performance on different datasets in **Table 2 (lines 330-340 left)**. TMC achieves the lowest calibration errors. **If there are any questions, please let us know. We would also greatly appreciate any consideration of a minor adjustment in the rating.**
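As a reference point, the equal-width-bin ECE used in the evaluation above can be computed as follows. This is a generic sketch of the standard metric, not our evaluation script; AdaECE (which uses equal-mass bins instead) is omitted for brevity.

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """Standard ECE: bin samples by predicted confidence into equal-width
    bins and average the |accuracy - confidence| gap, weighted by bin size.

    `confidences` are max predicted probabilities in [0, 1];
    `correct` are 0/1 indicators of whether each prediction was right.
    """
    N = len(confidences)
    bins = [[] for _ in range(n_bins)]
    for p, c in zip(confidences, correct):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 to last bin
        bins[idx].append((p, c))
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(p for p, _ in b) / len(b)
        acc = sum(c for _, c in b) / len(b)
        ece += (len(b) / N) * abs(acc - avg_conf)
    return ece
```

To obtain the per-timestep calibration curves reported above, the same computation is simply repeated at each timestep using that timestep's accumulated confidences $\hat{P}_t$.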
Summary: This paper introduces a new training method for spiking neural networks called temporal model calibration (TMC). The goal is to improve the performance of SNNs by increasing their temporal heterogeneity, which is how much their outputs vary over time. The authors argue that existing training methods, like direct training using BPTT, do not fully utilize temporal heterogeneity because the loss gradients remain too similar across time steps. Their main idea is to rescale the loss gradients at each time step to encourage more diversity in the network's responses over time. They do this by modifying the cross-entropy loss function with a new gradient scaling factor based on confidence calibration techniques used in deep learning. The method is tested on several datasets, including ImageNet, DVSCIFAR10, and N-Caltech101, and achieves state-of-the-art accuracy in some cases. Overall, the paper presents a novel way to enhance the learning dynamics of SNNs by focusing on how gradients evolve over time. ## Update after rebuttal Thanks to the authors for a very detailed and thoughtful response. I appreciate the new experiments, the visualizations of temporal heterogeneity, and the extra analysis around computational cost and training stability — it's clear a lot of effort went into the rebuttal. These additions definitely helped clarify several points I had raised. That said, my overall opinion on the paper hasn't changed much. While the new results are nice to see, they don't fully address my bigger concerns about the lack of strong theoretical backing and the limited discussion on scaling to larger models and broader applications. The paper is solid and the method works well empirically, but I still feel it falls a bit short on the novelty and depth needed for acceptance. So I'm keeping my original score of Weak Reject.
Claims And Evidence: The main claim of the paper is that current direct training methods for SNNs do not make full use of temporal heterogeneity because their loss gradients remain relatively uniform across time steps. The authors support this claim by analyzing the gradients in standard training methods and showing that they lack diversity. They further claim that their proposed TMC method improves both temporal heterogeneity and accuracy by rescaling these gradients. The experimental results provide reasonable support for this, as the TMC-trained models outperform existing methods on several benchmark datasets. However, the claim that the method enhances temporal heterogeneity is mostly based on indirect evidence (such as improved accuracy) rather than direct visualization or mathematical proof of increased heterogeneity. A more in-depth analysis of how TMC affects neuron activations over time would strengthen this claim. Methods And Evaluation Criteria: The proposed method is well-designed for the problem it addresses, as it directly targets the temporal structure of SNNs. The authors evaluate their approach using standard benchmark datasets, including ImageNet and neuromorphic datasets like DVSCIFAR10 and N-Caltech101. These datasets are appropriate for testing the effectiveness of an SNN training method. The evaluation primarily focuses on accuracy, expected calibration error (ECE), and adaptive ECE, which are relevant metrics for both classification performance and model calibration. However, the paper does not provide much discussion of computational efficiency—how much additional training time or memory TMC requires compared to existing methods. Since gradient rescaling could introduce extra computation, it would be useful to see an analysis of the trade-offs between performance and efficiency. Theoretical Claims: The paper provides a detailed mathematical formulation of the proposed method, including how the gradient rescaling factor is computed.
However, there is no formal proof that TMC leads to a more stable or optimal training process. The method is motivated by intuition and empirical results rather than rigorous theoretical guarantees. For example, while the authors argue that their gradient rescaling improves learning dynamics, they do not analyze whether it guarantees faster convergence or prevents issues like vanishing or exploding gradients. A theoretical analysis of the convergence properties of TMC would make the claims more robust. Experimental Designs Or Analyses: The experimental design is solid in terms of dataset selection and performance metrics. The authors compare their method against several baselines, including standard direct training (SDT) and temporal efficient training (TET), which are well-known methods in the field. They show consistent improvements in accuracy and calibration errors across multiple datasets. One strength of the experiments is that they include both static and neuromorphic datasets, showing that TMC is broadly applicable. However, there are some missing analyses. For example, the paper does not include an ablation study to determine which components of TMC contribute most to the performance gains. It would be useful to see experiments testing different variations of the gradient rescaling factor to understand its specific effects. Also, the paper does not analyze how sensitive TMC is to hyperparameter choices, which is important for practical use. Supplementary Material: The supplementary material includes additional experimental results and mathematical derivations. The appendix contains further details on the datasets, calibration metrics, and loss function derivations. While these additions are helpful, they do not fully address the gaps mentioned earlier, such as efficiency analysis or direct visualization of temporal heterogeneity.
Relation To Broader Scientific Literature: This paper builds on prior work in spiking neural networks, particularly methods that use backpropagation through time for training. It connects to research on temporal heterogeneity in SNNs, which has been studied in both computational neuroscience and machine learning. The work is also related to research on model calibration in deep learning, as TMC is inspired by techniques like label smoothing and confidence regularization. While the paper does a good job of citing relevant works, it does not compare TMC to alternative approaches that also modify loss gradients for better learning dynamics, such as entropy-based loss functions. Discussing these related methods could provide a clearer context for how TMC fits within existing strategies. Essential References Not Discussed: The paper includes most of the key references in SNN training and model calibration but does not discuss some recent works on improving training stability through adaptive gradient methods. For example, methods that dynamically adjust learning rates based on gradient variance might have some similarities to TMC. Comparing TMC to these approaches could provide additional insights into its uniqueness and potential limitations. Other Strengths And Weaknesses: One of the main strengths of this paper is that it introduces a relatively simple yet effective modification to the loss function that improves performance across multiple datasets. The empirical results are strong, showing state-of-the-art accuracy in some cases. Another strength is that the paper addresses an important issue in SNN training—the lack of sufficient temporal heterogeneity—by introducing a novel way to enhance it. However, there are some weaknesses. The paper does not provide a theoretical justification for why TMC should always improve training, and there is little discussion of potential downsides, such as increased training time or instability.
Also, while the results are impressive, the authors do not explore the scalability of TMC to much larger models or real-world applications. Other Comments Or Suggestions: It would be helpful to include a computational cost analysis comparing the training time of TMC with other methods. A more detailed discussion of hyperparameter sensitivity would also improve the paper. The writing is mostly clear, but some sections could be better structured to make the key ideas easier to follow. Questions For Authors: 1. How much additional computational overhead does TMC introduce compared to standard direct training methods? Does it significantly increase training time? 2. Have you tested whether TMC works well with much deeper networks or more complex architectures? 3. Do you have any direct visualizations of how TMC changes the temporal activity of SNN neurons over time? This could provide stronger evidence that it increases temporal heterogeneity. 4. Is there any risk that rescaling the gradients could introduce instability in training, such as oscillations or divergence? If so, how is this mitigated? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Rebuttal Appendix: https://anonymous.4open.science/r/TMC-0262** **1. Visualization or mathematical proof of increased heterogeneity. In-depth analysis of how TMC affects neuron activations.** **1) Temporal Heterogeneity Visualization** We compare VGGSNNs trained by TMC and TET on DVSCIFAR10, visualizing the cosine similarity of layer features across time steps in Figure R1. TMC exhibits higher temporal heterogeneity at all layer levels. **2) TMC Effect** The neuron model: \begin{equation} {u_{t+1}^i}=\lambda(u_t^i-V_{th}s_t^i)+\sum_j\mathbf{W_{ij}}s_{t+1}^j.\tag{R4}\end{equation} With TMC, $\mathbf{W}$'s separability for varied inputs is enhanced, boosting the neurons' temporal heterogeneity in response to diverse temporal inputs. **2. Discussion of computational efficiency.** We analyze the time and space complexity of TMC's logit gradient calculation relative to SDT and TET. (T: time steps, B: batch size, C: number of classes) **1) SDT**: time complexity $O(T \cdot B \cdot C)$, space complexity $O(B \cdot C)$. **2) TET**: time complexity $O(T \cdot B \cdot C)$, space complexity $O(T \cdot B \cdot C)$. **3) TMC**: Based on TET, it introduces the additional calculation of the regularization term $\theta_t^{\lambda_t}$. At each time step, generating $\theta_t$ needs $O(B \cdot C)$ time complexity and $O(1)$ space complexity. Calculating $\theta_t^{\lambda_t}$ needs $O(1)$ for both time and space complexity. Overall, TMC's logit gradient calculation has time complexity $O(T \cdot B \cdot C)+O(T \cdot B \cdot C)+O(T \cdot B \cdot C)+O(T)$ and space complexity $O(T \cdot B \cdot C)+O(T \cdot B \cdot C)+O(T)+O(T)$. TMC's total gradient computation, including hidden layer and logit gradients, is **close to SDT and TET in computational complexity and training time**. **3. No formal proof that TMC leads to a more stable or optimal training. Theoretically analyze the convergence properties of TMC.** **1) Firing Rate Evaluation:** Firing rates impact SNN gradients.
As shown in Figure R2, compared with SDT and TET, the TMC-trained VGGSNN on DVS-CIFAR10 exhibits the lowest firing rates in shallow layers, rising with depth and peaking in deeper layers. This indicates TMC's sensitive response to global features in deeper layers. Higher firing rates also **mitigate gradient vanishing**. **2) Gradient Norm Evaluation:** Figure R3 further compares the model parameter update gradient norms of TMC, SDT, and TET. TMC shows the highest norm, suggesting **faster convergence**. **3) Training Stability Evaluation:** We theoretically analyze TMC's temporal rescaling factor $g_t$ (lines 224-274 left). $g_t$ effectively rescales the logit gradients, enhancing optimization over SDT and TET. Figure R4 demonstrates TMC's **stable training**, driving the loss to minimal values, unlike SDT and TET, which suffer from overconfidence-induced oscillations. **4. The paper does not include an ablation study to determine which components of TMC contribute most to the performance gains. Analyze the hyperparameter sensitivity of TMC.** 1) In lines 358-384 left and 330-336 right, **we conducted ablation studies** on the regularization term's components—the base $\theta_t$ and the exponent $\lambda_t$. Both enhance performance, but their combined effect is optimal. 2) TMC **does not introduce additional hyperparameters**, ensuring flexibility. **5. It does not compare TMC to alternative approaches. It does not discuss some recent works on improving training stability through adaptive gradient methods.** **1) Comparison to Alternative Approaches:** In the Related Work section and lines 149-155 left, we discussed entropy-based loss function methods and highlighted their homogeneous training effect. Notably, TMC has a different training effect, introducing temporally heterogeneous training. As the code for these methods is not available, we can only compare TMC's classification accuracy with them in Section 4.4.
**2) Comparison to Adaptive Gradient Methods:** Recent studies [R10-R12] have introduced adaptive gradient methods, primarily focusing on adjusting hidden layer gradients while using the SDT loss to calculate the logit gradient. In contrast, TMC modifies the logit gradient calculation and offers plug-and-play compatibility with various hidden layer gradient mechanisms. **6. Explore TMC on much larger models, real-world applications, deeper networks, or more complex architectures.** **1) Larger Model Evaluation:** The Hierarchical Spiking Transformer used in our paper, with 64.96M parameters, is a large model. **2) Deeper Model Evaluation:** We verify TMC with ResNet101 on the ImageNet dataset at T=4 and compare it with SDT and TET. TMC achieves a higher accuracy of **70.52%** than SDT (68.74%) and TET (67.98%). **3) Real-World Applications:** We evaluate TMC on text sequential classification and dynamic action sequential recognition tasks, and TMC achieves **SOTA performance**. More detailed results can be seen in Response 3 to Reviewer ksZk. [R10] Learnable Surrogate Gradient for Direct Training Spiking Neural Networks [R11] Sparse Spiking Gradient Descent [R12] Gradient Descent for Spiking Neural Network
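To make the per-timestep cost of the regularization term concrete, here is a rough, illustrative sketch of the rescaling factor $g_t^k = 1 - f(t)\,h(t)$ with $\theta_t = P/(1-P)$ and a linearly decreasing exponent $\lambda_t$, following the factorization given in Eq. R5 of our reply to the first reviewer. The schedule endpoints `lam_start`/`lam_end` and the use of per-step (rather than accumulated) confidence are simplifying assumptions of this sketch, not the paper's exact formulation.

```python
import math

def softmax(z):
    """Numerically stable softmax over a list of logits."""
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def rescaling_factors(logits_per_step, target, lam_start=1.0, lam_end=0.1):
    """g_t^k = 1 - f(t)*h(t), with f(t) = lam_t * theta_t**lam_t / t and
    h(t) = 1 / (1 - P_t^k); lam_t decreases linearly from lam_start to
    lam_end (illustrative endpoints)."""
    T = len(logits_per_step)
    g = []
    for t in range(1, T + 1):
        p_k = softmax(logits_per_step[t - 1])[target]  # target-class probability
        theta = p_k / (1.0 - p_k)                      # confidence ratio (Eq. 9)
        lam = lam_start + (lam_end - lam_start) * (t - 1) / max(T - 1, 1)
        f = lam * theta ** lam / t
        h = 1.0 / (1.0 - p_k)
        g.append(1.0 - f * h)
    return g
```

Consistent with the analysis above, an overconfident sample (target logit far above the rest) yields a much smaller, typically negative $g_t^k$ than a moderately confident one, which is the penalization effect; the extra work per timestep is a softmax and a few scalar operations, matching the $O(B \cdot C)$ per-step overhead stated above.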
Summary: This work finds that the logit gradients have insufficient diversity in the temporal dimension during SNN training. The authors then rescale the gradient in each time step to improve diversity, resulting in SOTA performance for image classification tasks. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: The technical detail is limited: 1. It is not clear whether the so-called "logit gradient" should be diverse across time steps. The surrogate gradient + firing rate-based loss framework (both SDT and TET) implicitly follows a rate coding scheme. Consequently, it might be that uniformity in behavior over time steps serves as a more reliable indicator of confidence. The relationship between temporal heterogeneity and the performance of SNNs remains unclear and needs a formal theoretical explanation. Experimental Designs Or Analyses: checked Supplementary Material: I checked the additional experiments in the supp. Relation To Broader Scientific Literature: This work improves TET by increasing logit diversity in the temporal dimension. TET: Temporal efficient training of spiking neural network via gradient reweighting Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. The method is in a plug-and-play fashion and can be merged with most SOTA models and algorithms. 2. It achieves SOTA performance on common image classification tasks. Weaknesses: 1. Lack of novelty. The method can be regarded as a minor modification of TET: it is like a cumsum version of TET. Even the regularization term also exhibits a comparable effect to the $L_{MSE}$ regularization in TET. 2. The logic is not convincing. While marginal modifications can indeed constitute valuable contributions, this paper fails to show the significance of such marginal changes clearly. It is not clear why diversity across time should be improved. And the regularization to improve diversity is not straightforward. 
Other Comments Or Suggestions: Suggestions: 1. Given the focus on temporal heterogeneity, I believe that sequential tasks would serve as more appropriate benchmarks to be considered. 2. The importance of temporal heterogeneity should be clarified. Questions For Authors: Interestingly, the proposed method works better on neuromorphic datasets than static images. Do the authors consider this a common phenomenon or not? Could the authors explain the possible reason? Are there fundamental insights within this observation? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Rebuttal Appendix: https://anonymous.4open.science/r/TMC-0262** **1. The relationship between temporal heterogeneity and the performance of SNNs remains unclear. It is not clear whether the so-called logit gradient should be diverse.** **1) Relationship between Temporal Heterogeneity and Performance** In SNNs, the neuronal dynamics of the membrane potential $u_t$ can be formulated as: \begin{equation} {\tau\frac{du_t}{dt}}=-(u_t-u_{reset})+I_t.\tag{R1}\end{equation} When $u_t$ reaches the threshold $V_{th}$, $u_t$ is reset to $u_{reset}$ and the neuron fires a spike: \begin{equation} s_t=\sum_{t_f}\delta(t-t_f).\tag{R2}\end{equation} The temporal heterogeneity of SNNs implies $|\frac{du_t}{dt}|>0$, yielding the following benefits: + **Reduction of Dead Neurons**: ensuring neurons spike when necessary. + **Sensitive Response to Temporal Dependency and Input Information**: capturing spatio-temporal dynamics effectively. + **Diverse Spike Firing Frequency**: an adjustable $\Delta t$ in $s_t$ allows for diverse behaviors, such as burst spikes. Overall, accumulating temporally heterogeneous neuron responses to varied temporal inputs over time steps **enhances predictive confidence and performance**. **2) Temporal Heterogeneity Improvement** Direct training of SNNs involves the discrete model expression: \begin{equation} {u_{t+1}^i}=\lambda(u_t^i-V_{th}s_t^i)+\sum_j\mathbf{W_{ij}}s_{t+1}^j.\tag{R3}\end{equation} Enhancing temporal heterogeneity can be achieved by improving either the temporal dependency dynamics $\lambda(u_t^i-V_{th}s_t^i)$ or the input mapping dynamics $\sum_j\mathbf{W_{ij}}s_{t+1}^j$ via training $\mathbf{W}$. **We concentrate on the latter, which is less explored.** **3) Logit Gradient Diversity Rationality** Enhancing linear separability, that is, increasing the rank of $\mathbf{W}$, is crucial for dynamic input mapping. Increasing temporal gradient diversity helps explore the solution space and increase the rank of $\mathbf{W}$.
However, existing methods lack exploration of logit gradient diversity. **2. The method is a minor modification of TET. This paper fails to show the significance of such marginal changes clearly. The regularization to improve diversity is not straightforward.** From a technical perspective, TMC is a regularization term modification based on TET. However, from the temporal gradient descent perspective, **TMC is a critical improvement over TET**. The significance of TMC is highlighted as follows: **1) Rethinking the Temporal Logit Gradient Calculation** + TET focuses on increasing the momentum of temporal gradient descent but ignores appropriately assigning magnitude and direction to that momentum. This leads to an overconfidence issue. + Introducing model calibration into SNNs, we rethink the temporal gradient calculation properties that SNNs should satisfy and propose a temporal gradient rescaling mechanism. This mechanism assigns an appropriate magnitude and direction to the gradient descent momentum across time steps. **2) Effective and Adaptive Regularization Term** + TET introduces the MSE regularization to prevent the occurrence of "particular outliers" but lacks a deeper analysis of the issue. The regularization term appears to be an intuitive design. Moreover, the regularization term remains the same and fixed at each time step, relying on hyperparameters. + During training, TMC with its new regularization term rescales the logit gradient effects of TET, adaptively responding to input data, training phases, and time steps without hyperparameter dependence. + There exist straightforward model calibration techniques like label smoothing, but these methods have been found to be suboptimal, as they rely on prior information and hyperparameters. Moreover, regularization-based calibration techniques achieve SOTA performance.
**3. Sequential tasks are more appropriate benchmarks to be considered.** + **Text Sequential Classification Task.** TMC is applied during the direct training phase of the SpikingBert [R1] model on the Quora Question Pairs dataset. Compared to SDT (86.82%) and TET (87.03%), TMC achieves a higher accuracy of 87.86%. + **Dynamic Action Recognition Task.** We train VGGSNN with T=10 on DVS-Gesture and compare the performance of TMC with current works in Table R1 in the appendix. TMC achieves SOTA performance with an accuracy of 99.12%. **4. TMC works better on neuromorphic datasets than on static images. Provide explanation and insights.** **TMC indeed exhibits more significant advantages on neuromorphic datasets.** + Compared to static datasets, neuromorphic datasets have rich spatio-temporal components arising from the interaction of spatial and temporal information. + TMC employs temporally heterogeneous training and has the advantage of capturing more task-relevant spatio-temporal dynamic features, outperforming existing methods, which employ temporally homogeneous training. [R1] Spikingbert: Distilling BERT to train spiking language models using implicit differentiation. --- Rebuttal Comment 1.1: Comment: 1. Still not convinced about the rationality of temporal heterogeneity. 2. DVS-Gesture is too simple and can be trained well with only static information. However, due to the great effort the authors put into the rebuttal, the additional experiments, and the good overall performance of the proposed trick, I will raise my score. --- Reply to Comment 1.1.1: Comment: We would like to express our sincere gratitude for your consideration and the score increase. Here, we provide a detailed analysis of the rationality of temporal heterogeneity and more convincing experimental results. **1. Rationality of Temporal Heterogeneity** The behavior of an SNN is determined by the input data and the basic neuronal processing units.
**1) Definitions:**
+ **Input Data: Temporally Dynamical Sequence Data.** It includes two types: (1) Neuromorphic data, which naturally possesses temporal heterogeneity. (2) Static data, which exhibits temporal heterogeneity after being processed by a spiking encoding layer.
+ **Basic Neuronal Processing Units: Vanilla LIF Neurons.** The expression of the LIF neuron over continuous time is as follows:
\begin{equation} \tau\frac{du_t}{dt}=-(u_t-u_{reset})+I_t.\tag{R1}\end{equation}
When $u_t$ reaches the threshold $V_{th}$, $u_t$ is set to $u_{reset}$ and the neuron fires a spike:
\begin{equation} s_t=\sum_{t_f}\delta(t-t_f).\tag{R2}\end{equation}

**2) Neuronal Temporal Heterogeneity Analysis**

Firstly, the change in neuronal membrane potential is quantified by $\frac{du_t}{dt}$, which is determined by two components:
+ **Decay of Membrane Potential $-(u_t-u_{reset})$.** This term indicates that the membrane potential naturally decays towards the resting potential $u_{reset}$. If $u_t > u_{reset}$, this term is negative, indicating a decrease in membrane potential. If $u_t < u_{reset}$, this term is positive, indicating an increase in membrane potential.
+ **Input Current $I_t$.** It represents the effect of external current on the membrane potential. If $I_t$ is positive, it drives the membrane potential up. If $I_t$ is negative, it pushes the membrane potential down.

Secondly, we investigate the temporal dynamic changes of $\frac{du_t}{dt}$:
+ **When $\frac{du_t}{dt} > 0$**, the membrane potential $u_t$ is increasing. This could be due to an input current $I_t$ that is sufficiently large to overcome the natural decay of the membrane potential, or because the effect of the natural decay is relatively small at the current moment.
+ **When $\frac{du_t}{dt} < 0$**, the membrane potential $u_t$ is decreasing. This could be because the input current $I_t$ is small or negative, not enough to counteract the decay of the membrane potential, or because the effect of the natural decay is relatively large at the current moment.
+ **Note that $\frac{du_t}{dt} = 0$** is highly unlikely to occur unless specific conditions are met.
+ **Thus, $|\frac{du_t}{dt}| > 0$** indicates that the membrane potential is continuously changing, either increasing or decreasing. This change is a direct reflection of the neuron's response to temporal dependency and input signals and is **a key piece of evidence of temporal heterogeneity.**

**3) Neuronal Temporal Heterogeneity Improvement**

To establish the computational link along the spatial-temporal dimension, the discrete vanilla LIF model can be formulated as:
\begin{equation} u_{t+1}^i=\lambda(u_t^i-V_{th}s_t^i)+\sum_j\mathbf{W_{ij}}s_{t+1}^j.\tag{R3}\end{equation}
Neuronal dynamics are reflected by $\lambda(u_t^i-V_{th}s_t^i)$ and $\sum_j\mathbf{W_{ij}}s_{t+1}^j$. Existing research shows that **vanilla LIF neurons' responses to complex temporal sequence tasks are insufficient**. To enhance neuronal heterogeneity and boost SNN performance, two perspectives can be considered:
+ **Temporal Dependency Dynamic Enhancement.** Many works focus on this area, such as the proposed parametric spiking neurons.
+ **Input Response Dynamic Enhancement.** For complex tasks, effectively capturing the highly dynamical temporal features of input data is crucial for enhancing model performance.

**4) Our Contribution**
+ The mapping function $W$ is key to capturing temporal features, and its dynamic response to varying inputs is critical for performance. Updating $W$ via logit gradient backpropagation through hidden layers should enhance its separability and sensitivity.
+ However, existing methods focus on improving hidden-layer gradient backpropagation, while the logit gradient remains underexplored. Its limited diversity can hinder SNN performance.
We aim to address this by **enhancing logit gradient diversity across time steps**, boosting $W$'s dynamic response, and capturing more dynamic information to improve performance.

**2. More Convincing Experimental Results**

**1) Deeper Model Evaluation:** We verify TMC with ResNet101 on the ImageNet dataset with T=4 and compare it with SDT and TET. TMC achieves **a higher accuracy of 70.52%** than SDT (68.74%) and TET (67.98%).

**2) Evaluation on SL-Animals-DVS:** VGGSNN (instead of a larger model, due to time constraints) is trained with TMC with T=16. Comparison with SDT and TET shows TMC achieving **a higher accuracy of 70.05%** against SDT (66.75%) and TET (68.34%), demonstrating the superiority of TMC.
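To make the membrane-potential dynamics above concrete, here is a minimal single-neuron sketch of the discrete LIF update in Eq. (R3). The synaptic sum $\sum_j\mathbf{W_{ij}}s_{t+1}^j$ is collapsed into a scalar input current, and the decay factor, threshold, and input values are illustrative choices, not settings used in the paper:

```python
# Minimal single-neuron sketch of the discrete LIF update in Eq. (R3):
#   u_{t+1} = lam * (u_t - V_th * s_t) + I_{t+1}
# The synaptic sum over W_ij is collapsed into a scalar input current;
# lam, v_th, and the inputs are illustrative values, not paper settings.

def simulate_lif(inputs, lam=0.5, v_th=1.0):
    """Return the spike train and membrane-potential trace for one LIF neuron."""
    u, s = 0.0, 0
    spikes, trace = [], []
    for i in inputs:
        # soft reset: subtract V_th if a spike was fired at the previous step
        u = lam * (u - v_th * s) + i
        s = 1 if u >= v_th else 0
        spikes.append(s)
        trace.append(u)
    return spikes, trace

spikes, trace = simulate_lif([0.6, 0.6, 0.6, 0.0, 0.6])
print(spikes)  # -> [0, 0, 1, 0, 0]: the neuron integrates, fires, then resets
```

The trace makes the two components discussed above visible: the leak term pulls $u_t$ down each step, while the input current pushes it up until the threshold is crossed.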
EvoPress: Accurate Dynamic Model Compression via Evolutionary Search
Accept (poster)
Summary: The paper proposes EvoPress, a novel pruning approach for the dynamic compression of LLMs based on evolutionary computation. The authors identify a critical issue in current compression algorithm approaches: error monotonicity does not apply to LLM compression. Aiming to resolve such drawbacks, they propose a (1 + λ)-evolutionary algorithm for compressing LLMs, derived from the theory that the dynamic compression of LLMs has a linear fitness function. Hence, (1 + λ) is a well-suited algorithm due to its hill-climbing properties, which are established to be optimal for linear fitness functions. The main novelty is that EvoPress can be applied to different compression techniques: structured and unstructured pruning as well as quantization. It also modifies the standard (1 + λ) by integrating level-switch mutation (to maintain a fixed sparsity ratio) and multi-step selection to overcome the high evaluation time required for the offspring. The experimental results show how EvoPress outperforms the selected baselines in terms of perplexity on the Language Modeling task and accuracy on zero-shot tasks.

## Update after rebuttal

The authors included in the rebuttal the requested pruning runtime comparison. I stand by my final comment that the comparison should be done with the same calibration data size to be fair. All my other concerns were addressed. Hence, I changed my score from 2 to 4 and support the acceptance of the paper. However, from my perspective, all the new results provided in the rebuttal should be included in the main paper. Also, a clear statement about the pruning runtime complexity should be highlighted in the main paper w.r.t. some more "pruning-time"-efficient baselines.

Claims And Evidence: Partially. The motivation for the linear landscape in dynamic compression is supported. Also, the numerical results on the tested tasks support the claims of the paper with respect to the selected baselines.
A claim that, in my opinion, is not supported is "iteration efficient." While I agree with the authors that the multi-step selection improves the overall runtime of EvoPress, with respect to the baselines, the method is not efficient in terms of runtime and resources required to obtain the final sparse model. While this may not be considered a major problem, I personally don’t think that, in its current state, the paper properly highlights such limitations. Another aspect that does not completely support the claims is the selection of baselines. While it is true that the paper tackles three different compression schemes, the baselines selected for each compression level do not provide a clear picture with respect to the recent literature for each compression scheme. (See Relation To Broader Scientific Literature) Methods And Evaluation Criteria: Yes, the selected models are in line with the latest public LLMs. The datasets and tasks are consistent with recent works on LLM compression. Theoretical Claims: I checked the Theorem 3.1 in the main text, not the full analysis in the Appendix. Experimental Designs Or Analyses: Yes. My main concern is the limited baselines for each compression scheme (see below) Supplementary Material: I checked Section B.2 concerning multi-stage selection and Section C. Experimental Setup. Relation To Broader Scientific Literature: The paper focuses on dynamic compression of both structured and unstructured pruning as well as quantization. It tackles the problem of monotonic error in current pruning approaches, and at the same time focuses on dynamic compression, which is a crucial topic at the moment in the pruning community. Essential References Not Discussed: The paper does not discuss several works. Regarding structured pruning, it does not mention or use [1,2] as baselines. For unstructured pruning, they apply Evopress to SparseGPT without mentioning [3,4]. 
On the quantization side, no recent weight-only quantization algorithms, such as [5,6,7], have been discussed or included in the baselines.

[1] Zhong, Longguang, et al. "Blockpruner: Fine-grained pruning for large language models." arXiv preprint arXiv:2406.10594 (2024).
[2] Song, Jiwon, et al. "SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks." Forty-first International Conference on Machine Learning.
[3] Zhang, Yingtao, et al. "Plug-and-play: An efficient post-training pruning method for large language models." The Twelfth International Conference on Learning Representations. 2024.
[4] Sun, Mingjie, et al. "A Simple and Effective Pruning Approach for Large Language Models." The Twelfth International Conference on Learning Representations. 2024.
[5] Lin, Ji, et al. "Awq: Activation-aware weight quantization for on-device llm compression and acceleration." Proceedings of Machine Learning and Systems 6 (2024): 87-100.
[6] Huang, Wei, et al. "BiLLM: Pushing the Limit of Post-Training Quantization for LLMs." Forty-first International Conference on Machine Learning.
[7] Shao, Wenqi, et al. "OmniQuant: Omnidirectionally Calibrated Quantization for Large Language Models." The Twelfth International Conference on Learning Representations.

Other Strengths And Weaknesses:
## Strengths
- The investigation of the linear landscape for dynamic compression is novel and well-supported.
- Applying (1 + λ) for model compression in a linear landscape is novel and effective. The proposed level-switch and multi-step selection modifications allow (1 + λ) to be effective in compressing LLMs.
- Using KL-divergence instead of perplexity, as in [1,2], provides a smoother evaluation of the sparse model.
- The application of a single framework for different compression schemes is the main novelty and a key strength of the paper.
- The results across the selected baselines show that EvoPress achieves better performance.
## Major Weaknesses
- The main weakness, in my opinion, is the time required to obtain the sparse model. Figures 3 and 8 clearly show that EvoPress requires hours to complete its evolutionary process. In particular, Figure 3 (left) indicates that 13 hours are required. On the other hand, approaches like OWL only require a single forward pass of the calibration data through the model to obtain outlier values from which to compute block-wise sparsity. Similarly, structured pruning methods like ShortGPT and Sliding Window Cosine Similarity are time-efficient in computing block scores. While this may not be a reason to reject the paper, I believe a fairer comparison in terms of pruning runtime, along with clearer details on the time and resources needed to run EvoPress, is necessary.
- While I highly appreciate EvoPress for its ability to apply dynamic compression to different compression schemes, the evaluation of each scheme (in terms of baselines) is too limited.
- Regarding structured pruning, the paper claims, "All four previous methods rely on human-crafted," however, Shortened Llama and [1,2] also rely on the perplexity metric to evaluate the sparse model, similar to the proposed KL-divergence approach.

Other Comments Or Suggestions: I appreciated the idea of EvoPress and agree with the authors that it is the first framework that can be used across different compression schemes. I think the idea of using (1 + λ) as hill-climbing over a linear landscape, such as that of dynamic compression, is novel. However, I would like to see a fairer comparison in terms of pruning runtime and resources required w.r.t. the baselines in order to increase my score. Even though the advantage of obtaining a sparse model compensates for the time required to obtain it, I think it is fair to report the actual runtime (seconds) and resources (possibly FLOPs) required (as numerical values and not as vertical lines inside a plot), especially w.r.t. the baselines.
Also, provide a detailed explanation of why no other baselines are included, especially in the quantization experiments. Also, provide information about train and validation data; see the question below.

Questions For Authors:
- In unstructured pruning, why apply EvoPress to SparseGPT instead of Wanda, which would have been faster, given that SparseGPT requires weight reconstruction differently for each level in the database?
- Is there a specific reason why there are no other baselines for quantization besides DP-pruning?
- Regarding the runtime plots, the authors state, "We observe convergence close to optimum in 5-6h" (Figure 3). However, when applying EvoPress, do you still run for all the generations specified in Table 8? I also have some doubts about the plot: does the train accuracy refer to the KL of the offspring at generation t (computed as an average across all individuals)? Is the test accuracy computed on the same calibration data used for KL-divergence? Please provide more information about it.
- Since (1 + λ) retains only one individual after evaluation, if the optimum is reached at generation t << num_generations, shouldn't the evolutionary process stop? As you mention, EvoPress usually converges close to the optimum before reaching num_generations, why not implement a stopping condition?

Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1:
Rebuttal: We thank the reviewer for a thorough and detailed review with many insightful comments and suggestions. The concerns are addressed below:

**Stopping Criterion**

We ran the search for more generations than necessary to gain insight into the convergence behavior. We agree that in practice one should determine the number of generations dynamically, and as suggested, we have implemented a stopping criterion: every 20 generations the perplexity on the training set is computed, and if it hasn't improved, the search terminates. We present timings and results of EvoPress including early stopping below.

**Runtime vs. baselines**

Determining meaningful runtime results for baselines is challenging for several reasons:
* For block dropping, 3 out of 4 baseline methods neither shared their code nor specified data usage, making runtime comparisons difficult. Generally, scoring methods are faster than EvoPress. We compare the runtime of EvoPress with baseline methods below.
* For quantization and unstructured pruning, runtime depends on the specific method, while our approach works independently of the layerwise compression method. OWL, for instance, requires an extensive hyperparameter sweep (20 configurations) and a larger layer database than EvoPress, leading to similar runtime constraints in our SparseGPT setup, dominated by layer-database generation.
* Additionally, for pruning, the sweep in OWL is not provided by the authors, and it is not clear how the "best" model is chosen (in particular, over how much data). We decided (in their favor) to evaluate all 20 models on the entire evaluation data and choose the model with the lowest perplexities. We will make this more transparent in the revised version.

To address the reviewer's concern, we have included runtimes under common parameter values below.
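The stopping rule the authors describe can be sketched as follows (a rough illustration only; `evolve_one_generation` and `train_perplexity` are hypothetical stand-ins for the actual EvoPress search step and calibration-set evaluation):

```python
# Sketch of the early-stopping rule: every CHECK_EVERY generations, measure
# training-set perplexity and terminate the search if it has not improved.
# The two callables are hypothetical stand-ins, not the real EvoPress API.

CHECK_EVERY = 20

def search_with_early_stopping(parent, evolve_one_generation, train_perplexity,
                               max_generations=1000):
    """Run a (1 + lambda)-style search, stopping once perplexity plateaus."""
    best_ppl = train_perplexity(parent)
    for gen in range(1, max_generations + 1):
        parent = evolve_one_generation(parent)
        if gen % CHECK_EVERY == 0:
            ppl = train_perplexity(parent)
            if ppl >= best_ppl:  # no improvement over the last window: stop
                break
            best_ppl = ppl
    return parent
```

With any fitness that eventually plateaus, the loop terminates well before `max_generations`, which is the effect reported in the timing results below.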
**Comparison with baselines**

*Block dropping*

We provide comparisons with SLEB and BlockPruner in [Table](https://anonymous.4open.science/r/EvoPress-Data-DF25/tables/Comparison_depth_pruning.pdf). We also included the runtimes of other baseline methods, where we used 64K calibration tokens of Fineweb-Edu consistently. One can observe that EvoPress outperforms these additional baselines, especially at higher compression, while being faster. We report the results of EvoPress with early stopping as outlined above.

*Unstructured sparsity*

Plug-and-Play and Wanda are relevant to unstructured pruning, and we'll include them in the new revision. Our work primarily focuses on finding the optimal sparsification profile on top of a layerwise compression method. We adopted SparseGPT due to its popularity and good performance. We demonstrate that EvoPress also functions effectively on top of a layer database generated using Wanda in [Table](https://anonymous.4open.science/r/EvoPress-Data-DF25/tables/Besa_wanda_table.pdf), where we also report runtime measurements. As requested by Reviewer **eQfQ**, the table contains results for BESA.

*Quantization*

AWQ, BiLLM, and OmniQuant will be discussed in our updated version, acknowledging their relevance. To test EvoPress's compatibility with various quantization methods, we applied it to a database generated by AWQ. Due to AWQ's constraints, the search space requires the same bitwidth within a transformer block. Despite this, EvoPress achieved an improved compression profile (see [Table](https://anonymous.4open.science/r/EvoPress-Data-DF25/tables/EvoPress-AWQ.pdf)).

**Human-Crafted Scoring**

We wanted to emphasize the fact that the baseline methods do not evaluate the fully pruned model; instead, they make strong assumptions on how the fully pruned model performs based on another metric, which is inherent to scoring methods. In contrast, EvoPress (and [1,2] in a more restricted setting) evaluate the fully pruned models directly.
We will rephrase this in the revised version.

**Questions**
* *Use of SparseGPT*. Regarding SparseGPT, we used it for its reliable performance at high compression, superior to Wanda at 60%-70% sparsity, which are our targets. Wanda, however, allows re-use of weight configurations across sparsities with separate masks.
* *Mixed-quantization baselines*. At the time of submission, we were not aware of any baselines that address the problem of mixed-precision quantization in a comparable setting. Reviewer **eQfQ** suggested the Slim-LLM paper, which we compared but found to be complementary due to its channel-wise approach. OWL's layerwise non-uniform quantization performed worse than uniform quantization, so we did not include it.
* *Convergence Plots of EvoPress*. The train accuracy refers to the KL-Divergence of the survivor of the selection process, i.e. the parent of the subsequent iteration. It is measured on a random subset of the full training data (Fineweb-Edu), which is why the line is noisy. The test accuracy is measured on the full set of hold-out Fineweb-Edu data and does not impact the search process. We will clarify this in the revision.

---
Rebuttal Comment 1.1:
Comment: Dear Authors,

Your rebuttal addressed all of my concerns. Just a couple of minor comments:
- Comparison_depth_pruning.pdf: It would be better to evaluate performance and runtime using the same number of tokens for the forward pass. This would make the comparison clearer; otherwise, it could be noisy.
- Nice to hear that the stopping condition can improve the runtime performance of your algorithm. I suggest including it in the main paper and the official codebase.

I suggest the authors include all the additional results and discussion used during the rebuttal in the main paper. I will change my score accordingly and support the acceptance of this work.
Summary: This paper proposes EvoPress, a general framework for LLM compression. The authors observe that error monotonicity does not hold for LLMs and propose an evolutionary search approach to improve performance. Experimental results demonstrate that on three compression approaches, depth pruning, unstructured sparsity, and quantization, the proposed method achieves state-of-the-art performance for dynamic compression.

Claims And Evidence: Yes, the evidence provided by the authors supports the claims.
Methods And Evaluation Criteria: Yes, I think it makes sense.
Theoretical Claims: The mathematical symbols, variables, and equations are generally well defined and mathematically correct.
Experimental Designs Or Analyses: I think the experimental designs in this paper are reasonable.
Supplementary Material: Yes, I checked the provided code in the supplementary.
Relation To Broader Scientific Literature: EvoPress greatly expands the literature by combining evolutionary optimization with dynamic LLM compression, challenging long-held heuristic assumptions, and providing a flexible framework.
Essential References Not Discussed: I'm not familiar with this area, so I'm not sure.

Other Strengths And Weaknesses:
Strengths: The proposed method is applicable to depth pruning, unstructured sparsity, and quantization while achieving strong performance.
Weaknesses:
1. The theoretical proof does not fully explain its effectiveness in nonlinear LLM compression. It is unclear whether EvoPress maintains optimization capability in nonlinear regions.
2. The proposed strategy may get trapped in local optima in high-dimensional and multimodal compressed spaces. Its performance in highly constrained or multi-objective settings remains uncertain.

Other Comments Or Suggestions: N/A
Questions For Authors: See the weaknesses.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. Below, we address the two weaknesses:

> The theoretical proof does not fully explain its effectiveness in nonlinear LLM compression. It is unclear whether EvoPress maintains optimization capability in nonlinear regions.

Fundamentally this is true – we cannot expect the linearity assumption to fully hold in practice. Notably, assuming linearity is standard in prior work, such as OWL, ShortGPT, and DP-based methods such as SPDY. In contrast, while our theoretical results rely on this assumption, EvoPress can optimize a much larger class of fitness environments. In this sense, EvoPress operates under significantly weaker assumptions than prior approaches. That said, our main contribution is empirical, with EvoPress showing strong performance across various settings. Additionally, we conducted experiments to verify the capabilities of EvoPress:
* In Figure 1, we consider the problem of removing twelve transformer blocks of Llama-3-8B, with the additional constraint that only pairs of consecutive blocks can be removed. We brute-forced the entire search space (16 choose 6 = 8008 configurations) using significant compute in order to identify the global optimum. Then, we ran our evolutionary approach on this problem and found that it detects the global optimum within 6 generations.
* We performed a robustness analysis for pruning Llama-3-8B to 70% sparsity. We found that over 16 independent runs the resulting perplexity on hold-out data has very little variance (Figure 7), and the final configurations show high similarity (Figure 6). This indicates that local optima do not pose a problem for the search process.

> The proposed strategy may get trapped in local optima in high-dimensional and multimodal compressed spaces. Its performance in highly constrained or multi-objective settings remains uncertain.
We would like to point out that the search space is relatively high-dimensional: the dimensionality is several dozen for the case of block dropping and several hundred for quantization and unstructured sparsity. For a high enough compression ratio the loss landscape is nonlinear (as the example in Table 1 shows), yet the evolutionary search procedure successfully converges to a good optimum. EvoPress is compatible with multiple compression techniques simultaneously. Specifically, we conducted an experiment with joint depth pruning (with 25% of blocks dropped) and quantization (with 4 bits on average, given 2, 3, 4, 5, 6 bit-width options) (see details in the response to reviewer **Q9Fo**).
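As a minimal illustration of how such a high-dimensional search can mutate a configuration without violating the global compression constraint, here is a sketch of a level-switch-style move on integer per-unit compression levels (the actual EvoPress operator works over a per-layer database and differs in detail; the function name and integer levels are simplifying assumptions):

```python
import random

# Level-switch-style mutation sketch: raise one unit's compression level and
# lower another's by the same amount, so the total level budget (and hence
# the overall compression ratio) is preserved. Integer levels are a
# simplifying assumption for illustration.

def level_switch_mutation(levels, num_levels, rng=random):
    """Return a mutated copy of `levels` whose sum equals sum(levels)."""
    child = list(levels)
    while True:
        i, j = rng.sample(range(len(child)), 2)
        if child[i] + 1 < num_levels and child[j] > 0:
            child[i] += 1
            child[j] -= 1
            return child

random.seed(0)
parent = [2, 2, 2, 2]
child = level_switch_mutation(parent, num_levels=5)
print(sum(child) == sum(parent))  # -> True: the global budget is unchanged
```

Because every mutation conserves the budget, the hill-climbing search only ever evaluates candidates that already satisfy the compression constraint.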
Summary: This paper targets LLM compression and is motivated by the observation that further depth pruning of an LLM may improve performance. An evolutionary search algorithm is applied to find a pruned model under compressed-size and performance constraints. It is also applied to layer/block-wise non-uniform unstructured pruning and quantization. The results show improvements for all three compression methods.

Claims And Evidence: The paper claims SOTA performance on depth pruning, unstructured pruning, and quantization, but there are some related works not compared in this paper.
Methods And Evaluation Criteria: Yes.
Theoretical Claims: Yes. The evolutionary search algorithm is checked.
Experimental Designs Or Analyses: Only application 2 (unstructured sparsity) presents the running time for the proposed method; running times for depth pruning and quantization are not clear. In Figure 3, the "super-fast" version does not seem to reach the same model performance as the normal version.
Supplementary Material: Yes. All of it.
Relation To Broader Scientific Literature: Layer dropping, pruning, and quantization are all efficient methods for the LLM memory problem, as the related works referenced in the paper show.

Essential References Not Discussed:
- For depth pruning, SLEB [1] and SliceGPT [2] should be compared.
- For unstructured pruning, BESA [3] should be compared.
- For quantization, other mixed-precision quantization works should be discussed and compared, such as SliM-LLM [4] and CWMQ [5].

[1] SLEB: streamlining llms through redundancy verification and elimination of transformer blocks.
[2] Slicegpt: Compress large language models by deleting rows and columns.
[3] BESA: Pruning Large Language Models with Blockwise Parameter-Efficient Sparsity Allocation
[4] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models
[5] Channel-Wise Mixed-Precision Quantization for Large Language Models

Other Strengths And Weaknesses: N/A
Other Comments Or Suggestions: N/A
Questions For Authors: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We thank the reviewer for the feedback. The concerns are addressed below:

**Experimental design and analyses**

*Runtime of the method*

As for unstructured pruning in the main text, we have indicated the runtime of EvoPress for block dropping and quantization within convergence plots (see Appendix Figure 8 for block dropping, and Appendix Figure 14 for quantization). We provide runtime measurements in the additional baseline comparisons further down.

You correctly observed that the super-fast version depicted in Figure 3 does not fully reach the model performance of the more heavy-weight search. However, the hit in performance is rather small, and we wanted to demonstrate that it can be obtained for a fraction of the search time. This also suggests a practical strategy: one could first run the faster search to quickly reach good performance, and only then switch to the more heavy-weight search for best results.

**Comparison with Baselines**

We acknowledge your point; below, we provide additional comparisons with existing methods.

1. Depth pruning

We acknowledge that SLEB is a relevant baseline and will include it as related work in the revision. Below, we have included a comparison with EvoPress on Llama2-7B, where we also included BlockPruner as a baseline, following the suggestion of Reviewer **zC2u**. Both SLEB and BlockPruner iteratively remove blocks and can thus be interpreted as highly restricted search procedures. In such methods, once a block is removed, it cannot be recovered in later iterations, which explains the worse performance compared to EvoPress. We report the runtime of EvoPress when using a simple stopping criterion that terminates the search whenever the perplexity on the training data has not improved within the last 20 generations. The comparison of different depth pruning methods is detailed in [Table](https://anonymous.4open.science/r/EvoPress-Data-DF25/tables/Comparison_depth_pruning.pdf).

2. Unstructured sparsity

Indeed, BESA is a relevant work and will be added to the related work section in the revised edition. For a fair comparison, we ran the fast version of EvoPress on a Wanda layer database (thus, the results are slightly worse than presented in the paper, where we used SparseGPT). Still, EvoPress finds notably improved layerwise sparsity allocations. For completeness, we also included the row-wise version of BESA, which adapts the Wanda mask within each layer. We show that this is complementary to EvoPress by using the average per-block sparsity found by EvoPress as the per-block sparsity for BESA (EvoPress+BESA), which mostly yields better perplexities than BESA and EvoPress individually. However, this is still very much experimental, and there might be better methods to merge both approaches, like producing a layer database using BESA with row-wise non-uniform sparsity allocation, and then searching this database with EvoPress. The experimental results with BESA in the original setup, as well as EvoPress applied on top of BESA and Wanda, are provided in [Table](https://anonymous.4open.science/r/EvoPress-Data-DF25/tables/Besa_wanda_table.pdf).

3. Quantization

Slim-LLM and CWMQ perform mixed-precision quantization at a per-channel level, compared to the per-layer precision allocation adopted in our method. Therefore, we could adopt Slim-LLM or CWMQ as an alternative to GPTQ and run EvoPress on top of this method. A direct comparison between EvoPress and these two works is not possible. However, we believe that EvoPress would synergize very well with such intra-layer mixed-precision quantization methods, as it allows producing layers with non-integer bitwidths, which is beneficial for the search procedure. According to your suggestion, we will reference these two works in the updated revision.

*Slim-LLM*

We conducted Slim-LLM experiments using the open-sourced implementation on GitHub.
We considered two setups:
- Original calibration data (Wikitext2, 128k samples of length 2048)
- Our calibration data (FinewebEdu, 8M tokens)

EvoPress shows significant improvement over Slim-LLM in the paper's original setup in terms of 0-shot and MMLU evaluations. When comparing methods with the same calibration data, both methods show similar performance. However, the runtime of Slim-LLM increases dramatically with more data (~50 hours on a single RTX 3090 GPU compared to ~10 hours for EvoPress). That said, EvoPress and Slim-LLM are complementary, and hence running EvoPress on top of Slim-LLM will lead to further gains. We present results in [Table](https://anonymous.4open.science/r/EvoPress-Data-DF25/tables/Slim-LLM_vs_EvoPress.pdf).

*CWMQ*

The algorithm implementation of CWMQ is not open-sourced, making it difficult to provide a comparison in a comparable setup.
Summary: The paper introduces EvoPress for dynamic LLM compression, which optimizes compression levels across different model components to minimize accuracy loss while satisfying a global compression threshold. By formulating dynamic compression as an optimization problem, EvoPress efficiently determines optimal compression profiles. Experiments on multiple models demonstrate SOTA results for LLM compression.

Claims And Evidence: Yes. Experiments demonstrate that EvoPress has SOTA performance for dynamic compression on various models.

Methods And Evaluation Criteria: The proposed methods and evaluation make sense. Here are some questions on the proposed method:
(1) To quantify model degradation, KL divergence was used in Section 3. KL divergence is a popular choice, but there are other options, for example the max absolute value, or one could even use a small calibration dataset to measure perplexity, etc. It'll be great if the authors could explain this design choice.
(2) The paper has a high-level problem definition. However, the proposed approach was only applied to one compression approach at a time (pruning, sparsity, or quantization). It would be great if the authors could show or explain the results of applying EvoPress to multiple compression approaches simultaneously.

Theoretical Claims: Briefly read the proofs in Section 3 and the appendix.
Experimental Designs Or Analyses: The overall experiments are valid. Various models and different compression methods are tested.
Supplementary Material: Briefly read the supplementary material.
Relation To Broader Scientific Literature: LLM model compression under a threshold is very important. This paper proposes an efficient method to solve this problem and has the potential to inspire future work on dynamic model compression.
Essential References Not Discussed: Most related works are discussed in the paper.
Other Strengths And Weaknesses: Please see the comments above.
Other Comments Or Suggestions: Please see the comments above.
Questions For Authors: Please see the comments above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. The questions are addressed below:

> To quantify model degradation, KL divergence was used in Section 3. KL divergence is popular to do it, but there are other ways, for example max absolute value, or we can even use a small calibration dataset to measure perplexity, etc. It'll be great if authors could explain this design choice.

In our work we considered both KL-Divergence and Perplexity on the calibration set as fitness functions in the evolutionary search. We provide an ablation on these choices in Appendix B.3. Both choices show similar performance, but we decided to stick to KL-Divergence as it performed slightly better. The comparison between these two options is provided in **Table 1**.

**Table 1. Comparison between KL-Divergence and Perplexity as the fitness function**

| Model | # Bits | Method | Wiki2↓ | C4↓ | FW↓ |
|----------------|--------|----------------|----------|-----------|----------|
| Llama-3-8B | 3 | Uniform | 12.19 | 15.76 | 11.47 |
| | | EvoPress (PPL) | 8.17 | 12.15 | 9.64 |
| | | EvoPress (KL) | **7.49** | **12.03** | **9.56** |
| Llama-2-7B | 3 | Uniform | 6.16 | 7.96 | 6.86 |
| | | EvoPress (PPL) | 5.74 | 7.90 | 6.79 |
| | | EvoPress (KL) | **5.70** | **7.87** | **6.76** |
| Mistral-7B-v0.3 | 3 | Uniform | 5.54 | 8.57 | 6.96 |
| | | EvoPress (PPL) | 5.23 | 8.45 | 6.87 |
| | | EvoPress (KL) | **5.21** | **8.42** | **6.86** |

We believe this is because KL-Divergence measures relative degradation to the base model, which is more informative than a pure next-token loss. This is especially important when using as little calibration data as we do in our multistep selection process. We decided to adopt KL-Divergence to quantify the distance between the two predictive distributions, as it is a well-established metric with a grounding in information theory.

> The paper has a high level problem definition.
> However, the proposed approach was only applied to a single compression approach at a time (pruning or sparsity or quantization). It will be great if authors could show or explain the results of applying EvoPress on multiple compression approaches simultaneously.

EvoPress is compatible with multiple compression approaches simultaneously. We conducted an experiment with joint depth pruning (with 25% of blocks dropped) and quantization (with 4 bits on average, given 2, 3, 4, 5, 6 bit-width options). Specifically, on each step of the evolutionary algorithm we perform an alternating optimization between block dropping and quantization. First, we select the optimal depth-pruning configuration, and then exchange quantization bit-widths between "alive" blocks. Our experimental results (see [Wikitext-2 perplexity](https://anonymous.4open.science/r/EvoPress-Data-DF25/figures/multimodal_search_wikitext2.pdf) and [C4 perplexity](https://anonymous.4open.science/r/EvoPress-Data-DF25/figures/multimodal_search_c4.pdf)) suggest that EvoPress manages to find a better solution given some starting point and exhibits relatively stable convergence.
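As a rough illustration of the two rebuttal answers above, here is a minimal sketch of a KL-divergence fitness on calibration logits and one alternating mutation step over a joint depth-pruning + quantization configuration. The names (`kl_fitness`, `mutate_joint`, the dict-based `config`) are illustrative assumptions, not the authors' EvoPress code:

```python
import random
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_fitness(base_logits, compressed_logits, eps=1e-12):
    """Mean per-token KL(base || compressed): relative degradation to the
    base model, rather than a pure next-token loss. Lower is better."""
    p = softmax(base_logits)
    q = softmax(compressed_logits)
    return float((p * (np.log(p + eps) - np.log(q + eps))).sum(-1).mean())

def mutate_joint(config, fitness, rng):
    """One alternating step. `config` maps block index -> bit-width,
    or None if the block is depth-pruned (dropped)."""
    cand = dict(config)
    # Phase 1: move the "dropped" flag between a random dropped/alive pair,
    # keeping the number of dropped blocks fixed.
    dropped = [b for b, v in cand.items() if v is None]
    alive = [b for b, v in cand.items() if v is not None]
    if dropped and alive:
        d, a = rng.choice(dropped), rng.choice(alive)
        cand[d], cand[a] = cand[a], None
    # Phase 2: exchange bit-widths between two alive blocks, preserving the
    # average bit-width (the global compression budget).
    alive = [b for b, v in cand.items() if v is not None]
    if len(alive) >= 2:
        a, b = rng.sample(alive, 2)
        cand[a], cand[b] = cand[b], cand[a]
    # Greedy selection: keep the mutation only if fitness does not worsen.
    return cand if fitness(cand) <= fitness(config) else config
```

Both mutation phases preserve the compression budget by construction (same number of dropped blocks, same multiset of bit-widths), so the search only ever moves between feasible configurations.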
Limitations of measure-first protocols in quantum machine learning
Accept (poster)
Summary: This work is motivated by randomized measurement protocols to analyze the separation in quantum machine learning when processing quantum data using fully quantum operations versus measuring the input data and utilizing classical information. It highlights the limitations of measure-first protocols and provides examples demonstrating learning separations. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, it makes sense. Theoretical Claims: The proofs are rigorous and address a fundamental question in machine learning. Experimental Designs Or Analyses: There are no experimental designs or analyses. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: It contributes to the field of machine learning by providing guidance on using classical shadows to learn specific properties of quantum systems. Essential References Not Discussed: I think all related works are cited. Other Strengths And Weaknesses: Strengths: 1. I believe the authors address a fundamental problem in quantum machine learning by comparing two main approaches: the measure-first method and the fully quantum approach. While it is intuitively expected that the fully quantum approach would perform better, the authors provide a rigorous proof to support this claim. Weaknesses: 1. The paper discusses the impact of noise on the task but lacks numerical validation. 2. It would be highly valuable if the authors could provide further insights into selecting between the fully quantum approach and the measure-first approach for different applications. Currently, many methods employ classical shadow techniques combined with machine learning to infer properties such as expectation values with respect to observables, i.e., Tr[Oρ]. These methods have demonstrated promising results, achieving relatively high accuracy. 
It would be beneficial for the authors to comment on such cases and offer explanations on when to choose specific approaches beyond the HM discussed in this work. Other Comments Or Suggestions: No other comments. Questions For Authors: 1. Why do you not provide numerical validations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **The paper discusses the impact of noise on the task but lacks numerical validation. Why do you not provide numerical validations?** We appreciate the reviewer’s suggestion regarding numerical simulations and experimental validation. We would like to mention that we are already collaborating with an experimental team on this front. However, there are significant challenges in conducting such experiments, even in ideal simulations. Moreover, we would like to clarify that this is a theoretical paper, where we provide rigorous guarantees on the robustness of our results to noise. While exploring the effects of various noise models on our protocols is certainly an interesting direction, we believe it falls outside the scope of this work. Our primary focus is on establishing a fundamental separation between MF and fully quantum protocols, rather than on noise analysis itself. Investigating additional aspects would introduce a different set of challenges and convey a different message, which is beyond the intended scope of this paper. Additionally, we would like to refer to our response to a similar comment from Reviewer G1ds: “It would be better to have discussions on implementation for demonstrating this separation.” In that response, we discuss the practical challenges involved in demonstrating or validating our separation, particularly the difficulties in demonstrating/validating that no single measure-first protocol can match the performance of the fully quantum protocol. Given these challenges, we believe that a comprehensive exploration of this issue would require a dedicated project, which falls beyond the scope of the current work. **It would be highly valuable if the authors could provide further insights into selecting between the fully quantum approach and the measurement-first approach for different applications. 
Currently, many methods employ classical shadow techniques combined with machine learning to infer properties such as expectation values with respect to observables, i.e., Tr[Oρ]. These methods have demonstrated promising results, achieving relatively high accuracy. It would be beneficial for the authors to comment on such cases and offer explanations on when to choose specific approaches beyond the Hamiltonian model (HM) discussed in this work.** We appreciate the reviewer’s insightful question. Our results provide a useful indicator of when MF protocols may be insufficient, particularly concerning the complexity of the quantum state family and the nature of the learning task. Specifically, our findings suggest that machine learning problems related to hard problems in (F)BQP/qpoly inherently require fully quantum protocols. An important result in quantum complexity theory [1] shows that any problem solvable with polynomial-sized quantum state advice can also be addressed using the ground state of a local Hamiltonian. This suggests that learning problems where the input consists of sufficiently complex local Hamiltonian ground states could exhibit a separation between MF and fully quantum protocols. While rigorously proving such separations remains a challenging open problem, this connection points to a promising research direction for understanding the limitations of MF protocols in physically relevant settings. The reviewer also raises an interesting point about classical shadow techniques, which have proven effective in estimating expectation values. While these methods can achieve high accuracy for specific tasks, they rely on structured measurement strategies and focus on extracting partial information from quantum states. In contrast, our results suggest that when the underlying learning problem fundamentally relies on the exponential storage capacity of quantum states, classical shadow techniques and other MF approaches may be insufficient. 
Overall, while MF methods, including classical shadows, are useful in many practical scenarios, our work identifies cases where a fully quantum approach is necessary. Investigating these cases further—particularly in the context of learning with local Hamiltonian ground states—is an exciting avenue for future research that likely requires a dedicated study. [1] Scott Aaronson and Andrew Drucker. A full characterization of quantum advice. Proceedings of the forty-second ACM Symposium on Theory of Computing.
Summary: This paper establishes a theoretical framework contrasting two quantum machine learning approaches: "fully-quantum" protocols that adaptively measure quantum data versus "measure-first" protocols restricted to fixed initial measurements. The authors prove that certain learning problems can be efficiently solved using fully-quantum methods but remain impossible for measure-first approaches, even when limited to efficiently preparable quantum states. This separation demonstrates that some learning tasks fundamentally require processing unmeasured quantum data, necessitating the exponential nature of quantum states. The work suggests ground states of complex local Hamiltonians as promising candidates for demonstrating this separation, with potential alternative constructions using computationally indistinguishable states. The work highlights the importance of fully quantum machine learning processing. Claims And Evidence: The claims and evidence are mainly from the theoretical perspective and are basically fine. Methods And Evaluation Criteria: The methods and evaluation criteria are reasonable for a theory paper. Theoretical Claims: Via a rough check of the proofs, they look correct to me. Experimental Designs Or Analyses: The experimental design is not needed since this is a pure theory paper. Supplementary Material: The supplementary material has been read. Relation To Broader Scientific Literature: Reasonable literature review. Essential References Not Discussed: The reference is reasonable in this paper. Other Strengths And Weaknesses: The strengths: 1. Rigorous proof of the case when fully quantum learners are more powerful than measure-and-learn protocols. 2. The work delivers insights for other topics such as ground states of complex Hamiltonians and computationally indistinguishable states. The weaknesses: 1. The description of the task could be more accessible for the machine learning community. 2. 
It needs more discussion about the practical relevance of the main results to machine learning. 3. It would be better to have discussions on implementation for demonstrating this separation. Other Comments Or Suggestions: 1. Describe the classical analog of the machine learning problem consider in this paper and provide the literature. 2. Make task descriptions more accessible to ML researchers. 3. Elaborate on practical ML applications of the findings. 4. Include implementation strategies to demonstrate the separation. Questions For Authors: 1. Could you revise the description of your learning task to make it more accessible to researchers from the machine learning community? 2. How do your theoretical results on quantum-classical separation translate to practical applications in machine learning? 3. Are there other learning problems (more operational) that would exhibit similar performance gaps between fully-quantum and measure-first approaches? 4. What implementation strategies would you suggest for experimentally demonstrating the separation between fully-quantum and measure-first protocols? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **The description of the task could be more accessible for the machine learning community.** We regret that the reviewer finds the description insufficiently accessible to the ML audience and take their criticism to heart. We have made a concerted effort to present a clear explanation in Section 1.1, but the interdisciplinary nature of our work and the page limits of ICML make further simplification challenging without introducing excessive quantum computing jargon. Given these constraints, we believe we have provided the most accessible presentation possible. **It needs more discussion about the practical relevance of the main results to machine learning.** While our work is primarily theoretical, we have discussed its practical relevance in the conclusion section of the paper. To summarize, our findings underscore the importance of processing unmeasured quantum data in machine learning, revealing scenarios where quantum advantages naturally arise. Specifically, we identify learning tasks that fundamentally require the exponential capacity of quantum states. While our proof relies on separations in one-way communication complexity and pseudorandom states, it suggests broader implications: quantum advice states that cannot be classically simulated with polynomial overhead -- such as the ground states of sufficiently complex local Hamiltonians -- could also demonstrate similar separations. Additionally, complexity-theoretic constructions like computationally indistinguishable states provide alternative routes to exhibiting gaps between measure-first and fully quantum protocols. **It would be better to have discussions on implementation for demonstrating this separation.** We appreciate the suggestion. We are actively collaborating with an experimental team to explore potential implementations. However, demonstrating this separation in practice poses significant challenges, even in idealized simulations. 
A key difficulty is determining the optimal measure-first protocol for the given task -- essential for rigorously establishing a separation. While we have ideas on tackling this, a full exploration requires a dedicated project and thus is beyond the goals we had for this work. Nevertheless, we emphasize that our proof of separation results for efficiently preparable quantum states brings our work closer to practical experimental realization. **Describe the classical analog of the machine learning problem considered in this paper and provide the relevant literature.** The problem we study is inherently quantum, making a direct classical analog difficult to define. Instead, classical analogs emerge in the learning strategies rather than the problem itself, giving rise to our distinction between measure-first and fully quantum protocols. In principle, the closest classical counterpart would be supervised learning with noisy or probabilistic labels, where the labeling function does not assign a deterministic label y to an input x, but instead samples y from a probability distribution p(y∣x). However, in our case, the crucial distinction is that the data x consists of quantum states. This fundamental difference -- i.e., how quantum data is processed -- underpins the separation between measure-first and fully quantum protocols, making standard results from supervised learning with noisy or probabilistic labels not directly applicable to our setting. **Are there other learning problems (more operational) that would exhibit similar performance gaps between fully quantum and measure-first approaches?** Thank you for the insightful question! Our results suggest that learning problems linked to hard problems in (F)BQP/qpoly are prime candidates for requiring fully quantum protocols. A key result by Aaronson and Drucker [1] shows that any problem solvable with polynomial-sized quantum state advice can also be solved using the ground state of a local Hamiltonian as advice. 
This implies that learning problems involving the ground states of sufficiently complex local Hamiltonians could also exhibit a separation between measure-first and fully quantum protocols. While rigorously proving such separations remains a challenge, studying the learning properties of these quantum states is a promising avenue for future research. We discuss this in the Conclusion section of the paper. [1] Scott Aaronson and Andrew Drucker. "A full characterization of quantum advice." Proceedings of the Forty-Second ACM Symposium on Theory of Computing.
Summary: The paper compares two quantum learning paradigms: (i) measure-first protocols, where the learner uses a priori determined measurements to obtain some classical information about the training samples (shadow tomography); (ii) a fully quantum learner that is allowed to make measurements that depend on the outcomes of previously seen states. The authors show the existence of a setup where the measure-first approach is provably worse than the fully quantum protocols. Claims And Evidence: Yes Methods And Evaluation Criteria: N/A Theoretical Claims: I looked at the results and they make sense, but I did not check the detailed proof arguments. Experimental Designs Or Analyses: n/a Supplementary Material: no Relation To Broader Scientific Literature: The findings are broadly relevant to the areas at the intersection of shadow tomography and quantum machine learning. Essential References Not Discussed: Some recent key works in shadow tomography were not cited:
- Triply efficient shadow tomography, Robbie King, David Gosset, Robin Kothari, Ryan Babbush
- Optimal tradeoffs for estimating Pauli observables, Sitan Chen, Weiyuan Gong, Qi Ye
- Adaptivity can help exponentially for shadow tomography, Sitan Chen, Weiyuan Gong, Zhihan Zhang

Especially the last paper seems to be very much related to the findings of this work. Other Strengths And Weaknesses: The results of the paper, in the context of recent works, are not surprising but useful to know. The paper is well written and rigorous. However, I did not find the QML model convincing! The training samples in the model are the quantum states together with the post-measurement outcomes. This model is rather artificial: because of the state collapse of quantum measurements, it is not typically feasible to have both a quantum state before the measurement and the measurement outcome after it. Unless I am missing some parts of the paper/concept, I am not sure if the presented model makes sense. 
Other Comments Or Suggestions: no Questions For Authors: Can you justify the proposed model of QML? Isn't it more natural to consider post-measured quantum states with the measurement labels? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We naturally find it disappointing that our work was evaluated with 1 out of 5. We are hopeful that the majority of this evaluation stems from misunderstandings, probably caused by our wording, which we can dispel and clarify. We will do our best to address the reviewer’s concerns and demonstrate why we believe our paper is deserving of publication at ICML. **Some recent key works in shadow tomography were not cited** Thank you for pointing out these references! We added these papers as references exhibiting the advantages that have been identified when measure-first protocols are allowed coherent measurements of multiple copies of a given quantum state (as discussed in the introduction in the manuscript). These references thus further elucidate the inherent power of measure-first protocols and further motivate the question of whether advantages are possible for fully-quantum protocols. **The results of the paper, in the context of recent works, are not surprising but useful to know.** In discussions with experts, we often encounter two opposing intuitions: some expect a distinction between fully quantum and measure-first schemes, while others—citing the success of shadow tomography—believe no such difference exists. The reviewer appears to align with the first view. To illustrate why some believe no such separation exists, we offer additional context. A significant body of evidence suggests measure-first protocols can be surprisingly powerful, making our result less obvious than it may seem. For instance, a recent milestone [2] demonstrated that a measure-first protocol (classical shadows of ground states [1]) could accurately predict many observables of gapped Hamiltonians. Similarly, a study by field experts [3] showed that for many quantum learning tasks, classical CNNs using classical shadows performed comparably to quantum neural networks. 
Notably, the authors questioned whether any task could conclusively separate classical and quantum approaches. The works cited by the reviewer further support the surprising strength of measure-first protocols. In the quantum ML setting, if labels corresponded to expectation values of local observables rather than measurement outcomes, measure-first and fully quantum protocols would be equivalent due to classical shadows' provable guarantees [1]. Given this, demonstrating a clear separation is nontrivial and far from obvious. Of course, the reviewer may still find the result unsurprising, but even so, we believe rigorously proving it remains a valuable contribution. Moreover, the proof is nontrivial, and we see no simpler way to establish this fact. We hope this clarifies our perspective and underscores the broader significance of our work. [1] Huang et al. "Provably Eff. ML for Quantum Many-Body Problems." Science [2] Huang, et al. "Quantum Adv. in Learning from Experiments." Science [3] Bermejo, Pablo, et al. arXiv:2408.12739. **Can you justify the proposed model of QML? Isn't it more natural to consider post-measured quantum states with the measurement labels?** We appreciate the reviewer’s concern. From their comments, we understand that “QML model” refers to supervised learning with quantum states as data and classical labels. This model is well-established in the literature as a natural generalization of classical supervised learning. The study of learning with quantum data dates back to the early 2000s [1,2], with subsequent works generalizing the framework [3,4]. More broadly, learning from quantum states is a central theme in quantum information, akin to classical supervised learning, where the goal is to learn a labeling function. Fundamental results [5] have shown exponential advantages using quantum memory, while classical shadows further motivate our approach. A key practical motivation is its relevance to experimental quantum systems. 
An experimenter can repeatedly prepare a quantum state ρ, measure different observables, and obtain datasets consisting of copies of ρ alongside measurement outcomes. This setup aligns with standard approaches in quantum state and shadow tomography [1,2] and formal frameworks for learning with quantum datasets [4]. Finally, if the dataset were modified to include post-measurement states with their labels, it would precisely correspond to the information accessible to a measure-first protocol, which receives the same data generated by a quantum state’s measurement process, along with the classical label. Thus, we believe our model is well-motivated and widely accepted in quantum machine learning. [1] Aaronson "The learnability of quantum states." [2] Huang, et al.. "Predicting many prop. of a quantum system from very few measurements." Nature Physics [3] Gambs, S. arXiv:0809.0444 [4] Guță, M., et al. "Quantum learning: asymptotically optimal classification of qubit states." New Journal of Physics [5] Chen, Sitan, et al. "Exp. separations between learning with and without quantum memory." FOCS
Summary: This paper details the difference between two approaches to the task of quantum state or quantum distribution identification. The first is the measure-first approach, where, from a given set of quantum samples, a randomized measurement is performed and then a classical representation is constructed. The second approach, fully quantum, takes the quantum data and from it directly constructs the classical distribution, or classical image. Claims And Evidence: The paper supports the claims with proper evidence. The proofs are correct up to my understanding. Methods And Evaluation Criteria: The authors support their claims with a set of previously proven results, in particular the results on the Hidden Matching problem, which state that given a quantum state it is not possible to reconstruct it efficiently in polynomial time. Based on this proof, the authors claim that the function is also not learnable by a measure-first protocol. Theoretical Claims: Yes, the proofs seem to be correct. Experimental Designs Or Analyses: My main concern in this work is that of concrete advancement. Considering that a) quantum computing performs unitary transforms up to the measurement, b) the HM problem has been proven, and c) the definition of the learning problem in Def 2.2 and 2.5, is the novelty in this paper not a direct extension of the previous results by throwing them into a slightly different context? As such, is the novelty defensible? Supplementary Material: The supplementary material seems correct. Relation To Broader Scientific Literature: The paper discusses the main papers supporting the work. Just a comment: Huang 202a and Huang 202b are the same paper. Perhaps the authors were considering the work Huang 2022, Learning quantum states from their classical shadows, Nature. Essential References Not Discussed: I believe the authors mentioned most of the literature relevant to their work. Other Strengths And Weaknesses: - The paper is clear enough, but I had trouble understanding it, as in my opinion the definitions and flow would benefit from a clearer description. - Definitions 2.2 and 3.1 are identical (in particular, equations 2 and 7). - The fully quantum protocol can learn the mapping $\pi_x \rightarrow \vert \psi_f\rangle\rightarrow \mathcal{U}_x\vert\psi_f\rangle\rightarrow \mathcal{M} \rightarrow y\in\mathcal{U}(\mathcal{R}_f(x))$ under the described conditions without any difficulties for the learning mechanism (implementation not considered). However, the authors did not elaborate, beyond the quote "we can introduce various levels of learning", on what types of learning would actually be considered real, or more substantial, learning tasks. Other Comments Or Suggestions: I would suggest, instead of looking at the proposed problem as a simple extension of the previous work, examining how ML can actually change this problem. Is quantum machine learning truly helpful when applied to this problem? Can the problem be restated in a different representation, or can additional information change the findings? Questions For Authors: How would the approach change if I have a distribution of states from a generator and a copy of the same state from an ensemble? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **My main concern in this work is that of concrete advancement. ... As such, is the novelty defensible?** We appreciate the reviewer’s comment and would like to clarify that our contribution is both novel and nontrivial. First, our work is motivated by practical ML scenarios, focusing on quantum states efficiently preparable on quantum hardware. The HM problem, by contrast, originates in communication complexity, where lower bounds do not directly apply when states can be efficiently prepared. In such cases, the preparation circuit itself can act as a message, potentially bypassing standard lower bounds. To address this, we rigorously prove that learning remains classically intractable even for efficiently preparable states. We achieve this by leveraging pseudorandom states and imposing a time efficiency constraint on the learner—a key consideration in ML. This step is crucial: without it, the separation would lack practical significance. With it, however, we extend the separation beyond the HM setting, showing the advantage persists as long as the states are sufficiently "rich." Second, unlike the traditional HM problem, we focus on average-case correctness and robustness to noise—both critical in ML but absent in HM. In summary, our contribution is both highly relevant to ML and nontrivial (e.g., it requires invoking results on pseudorandom states and Yao’s principle for average-case hardness). We hope this clarifies the novelty of our work. Additionally, while one might expect a fully quantum protocol to be inherently more powerful than measure-first protocols -- given that, as you note in (b), quantum computing enables unitary transformations before measurement -- measure-first protocols are surprisingly powerful. This makes our results far less trivial than they may seem. 
For further details, we refer the reviewer to our response to Reviewer StyH regarding their comment: “The results of the paper, in the context of recent works, are not surprising but useful to know”. **Just a comment: Huang 202a and Huang 202b are the same paper.** Thank you for catching this! We will correct the references in the updated manuscript. **Definitions 2.2 and 3.1 are identical (in particular, Equations 2 and 7).** While Definitions 2.2 and 3.1 share similarities, they serve different purposes. Definition 2.2 is a general formulation of the learning problem without specifying a particular distribution, while Definition 3.1 introduces the distribution \pi_x(f) that exhibits the separation between measure-first and fully quantum protocols. **However, the authors did not elaborate on what is meant by "we can introduce various levels of learning"—what types of learning would actually be considered real or meaningful learning tasks?** By "real/more of a learning task," we refer to scenarios where the value of x (that is hidden in the dataset) cannot be directly extracted from a single datapoint but instead requires a learning algorithm that processes multiple datapoints to infer x. Specific examples of such learning tasks are detailed in Appendix A. **I would suggest exploring how quantum machine learning could fundamentally reshape this problem rather than viewing it as a straightforward extension of prior work. Is quantum machine learning genuinely useful for this problem? Could the problem be reformulated in a different representation, or could additional information change the findings?** Thank you for the thoughtful suggestion! Our primary motivation is not to explore how machine learning changes this problem but rather to highlight how machine learning itself is affected by the nature of the data it receives. 
Our work underscores the crucial role of processing unmeasured quantum data in learning tasks, presenting a scenario where quantum advantages naturally arise. In particular, our results suggest that certain learning problems inherently require the "exponential capacity" of quantum states, a feature distinct from classical data representations. **How would the approach change if the states were generated from a distribution rather than given as copies from an ensemble?** We must admit that we do not fully understand the reviewer’s question. In our work, the quantum states are indeed "drawn/generated" from a distribution—specifically, the uniform distribution over multiple copies of pseudorandom phase states. If your question is whether the separation still holds when receiving only a single copy at a time, the answer is yes. The fully quantum protocol can already learn the problem with just a single copy at a time, whereas providing multiple copies simultaneously can only benefit the measure-first protocol. In other words, by giving multiple copies at once, we are actually favoring the measure-first protocol, making our separation result even stronger.
Does Low Rank Adaptation Lead to Lower Robustness against Training-Time Attacks?
Accept (poster)
Summary: This paper analyzes the training-time robustness of Low-Rank Adaptation (LoRA) when fine-tuning large language models, focusing specifically on (1) untargeted data poisoning attacks (e.g., label flips) and (2) backdoor attacks (trigger insertion). The core claim is that LoRA exhibits lower robustness than standard full fine-tuning (FF) against untargeted poisoning but higher robustness against backdoor attacks. The paper’s key contributions include: - A theoretical framework that uses the neural tangent kernel (NTK) and information geometry (Fisher information, Rényi entropy) to study the “smoothness” of LoRA’s training manifold vs. that of FF. - A finding that the rank of LoRA’s low-rank updates and the initialization variance of LoRA’s submatrices are key factors. A smaller rank improves backdoor resistance (due to reduced parameter space for triggers to exploit) but hurts untargeted-poisoning robustness. - Conversely, a larger rank improves standard poisoning resilience but makes LoRA more vulnerable to backdoor triggers. **update after review**: I increased the score from 2 to 3 since my concerns were addressed. Claims And Evidence: 1. The experimental results indicate that 'LoRA is more vulnerable than full fine-tuning to untargeted poisoning attacks but demonstrates greater robustness against backdoor attacks.' Would this finding contradict the theoretical results, especially Theorem 3.6? Or do these seemingly 'negative' empirical results further demonstrate that Theorem 3.6 only considers a simplification of the problem? I think more justification of the connections between the theoretical parts and the experimental parts needs to be provided. 2. The experimental results indicate that 'Specifically, a smaller initialization variance leads to relatively higher performance under backdoor attacks and lower standard deviation of results, which also aligns with our theoretical analysis'. 
However, for the experiments on SST-2 and QNLI in Figure 5, the results might not be so significant and even contradict the patterns found in the latter two figures. Could you provide more explanation of this? Methods And Evaluation Criteria: 1. The authors mainly evaluate the method on the classification task with the Bert-Large model as the backbone. The evaluated datasets are from multiple sources, e.g., SST-2, QNLI, and COLA, which are very comprehensive. 2. The authors mainly use Accuracy when evaluating the performance. Would you consider using more popular metrics (terms) such as Attack Success Rate for evaluation? 3. The authors consider a poisoning rate of 0.3 for the UPA setting -- Would that be too large in practice? 4. Just to confirm if this is a typo -- the backdoor poisoning rate is 0.15% or 15%? Theoretical Claims: The paper’s theoretical contributions revolve around deriving LoRA’s NTK, comparing it to the full fine-tuning NTK, and identifying conditions under which LoRA’s simpler geometry leads to an advantage or disadvantage. The derivations appear internally consistent, following standard infinite-width assumptions from prior NTK literature, though it is also possible that I have not fully understood the derivations. However, I have a concern about Assumption 3.2 (OOLD): Would that be too strong in practice? Experimental Designs Or Analyses: 1. For Figure 3, could you also provide the color bar showing the range of the heatmap values? 2. For the attack baselines, the authors only choose one attack for the UPA setting and one attack for the BPA setting, which is incomplete. I suggest the authors consider some additional attack methods, e.g., [1, 2]. [1] Rethinking Stealthiness of Backdoor Attack against NLP Models. [2] Hidden Trigger Backdoor Attack on NLP Models via Linguistic Style Manipulation Supplementary Material: The appendix (referenced as Appendix A, B, C, etc.) 
expands the formal proofs, addresses more details on the NTK derivations, and clarifies the extension to Transformer architectures. Relation To Broader Scientific Literature: Backdoor attacks and data poisoning on large language models have been given much attention recently. However, few works directly compare full fine-tuning vs. parameter-efficient approaches in terms of training-time robustness. To the best of my knowledge, this is the first paper that touches on this perspective. This paper might be interesting for the readers in this community. Essential References Not Discussed: 1. Comparison with other parameter-efficient methods: The theoretical results on LoRA are good. Would it also be possible to extend the results to other parameter-efficient tuning methods such as prefix tuning and prompt tuning? 2. Adaptive or stealthy data poisoning: There is a rapidly growing body of work (e.g., more advanced data corruption that systematically modifies text distributions) that might deserve mention for thoroughness. See "Experimental Designs Or Analyses" for more details. Other Strengths And Weaknesses: Strength: 1. The paper’s theoretical perspective is novel. Besides, the authors’ focus on training-time robustness is timely and important. 2. The writing is clear and well-structured. Weakness: My main concerns about this paper are: 1. The 'seemingly' discrepancy between the theoretical results and the empirical findings 2. The insufficient experimental designs, including the considered attack baselines and the evaluation metrics. Please see above for more detailed comments. Based on the above, I would give a score of 2, but I am willing to raise the score if the questions are well addressed. Other Comments Or Suggestions: 1. I suggest the authors consider the implications of these findings. For example, what are the implications of these findings, especially for the model trainer in practice? 
More specifically, are there recommended “best practice” rank and variance settings for users who want to reduce the risk of a specific type of training-time attack (e.g., prefer backdoor defense vs. prefer strong resilience to random data corruption)? Questions For Authors: 1. Multiple Trigger Variants: Have you tried other stealthy or adaptive triggers (like synonyms, paraphrase triggers, or style modifications)? If so, do the results still show LoRA as more robust than FF? 2. Comparison With Other PEFT: Would you consider analyzing how Prompt/Prefix Tuning or Adapters compare to LoRA for training-time robustness? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your enlightening review and your potential willingness to raise the score. Below, we present a point-by-point response to address your concerns. # Clarification of Misunderstandings ## "Seemingly" Discrepancy: "Would this finding contradict the theoretical results (Theorem 3.6)? Or... a simplification...?" The answer is no. Our experimental findings are largely **consistent** with our theoretical analysis, especially with the main claim that "LoRA is more vulnerable than full fine-tuning (FF) to untargeted poisoning attacks (UPA) but demonstrates greater resistance against backdoor attacks (BPA)". In Sec. 3.3, we begin by comparing LoRA and FF using two information geometry (IG) metrics, with the theoretical findings formalized in Theorem 3.6. Following this, from lines 262 to 311, we analyze the implications of Theorem 3.6, culminating in the above conclusion. Rather than contradicting the theory, the empirical results help validate the core message of our theoretical analysis. ## "The contradiction in Fig. 5...especially the latter two figures" We would like to clarify that Fig. 5 is **not in contradiction** with the statement in our paper. As illustrated in the latter two figures, smaller initialization variance results in better performance under BPA, which coincides with both the theoretical analysis and the empirical statement made in the paper. ## "Would a 0.3 poisoning rate of UPA be too large in practice?" Yes, a 0.3 poisoning rate is an extreme case we use to cover a wide range of poisoning rates. The actual poisoning rates for UPA vary from 0.05 to 0.35, as shown in Fig. 1. ## "0.15% or 15% on Backdoor Poisoning?" It is not a typo: The poisoning rate you referred to for BPA is indeed 0.15% (i.e., 0.0015). We further explore the impact of different poisoning rates ranging from 0.001 to 0.0045, as shown in Fig. 2. ## Assumption 3.2 (OOLD): "Would that be too strong in practice?" 
Yes, we acknowledge that Assumption 3.2 (OOLD) is a strong and idealized assumption. For this reason, in Section 3.4 (line 275), we further prove that **"our conclusions can be generalized where LoRA is applied to *all linear modules* within a neural network"**. # Supplemental Experiments ## Addition of Color Bar to Fig. 3 Fig. 3 with a color bar can be found at https://a.imgno.de/67e9f93ecc7fc.png , and it will also be included in the final version. ## Additional Attack Methods & Evaluation Metrics Please find the corresponding experiments in our response to Reviewer Dcgq. Thanks! # Response to Questions ## "Would it also be possible to extend the results to other PEFT methods?" Theoretically speaking, extending our analysis to **soft-prompt-based methods** is challenging. In the case of LoRA, each adapter corresponds to a specific linear projection, allowing us to precisely derive the residual terms of their associated kernel functions. This enables a tractable theoretical comparison between LoRA and FF. However, the NTK behavior of soft-prompt-based methods (e.g., prefix tuning, P-tuning v1/v2) is fundamentally different and cannot be directly compared with that of LoRA or FF, making a unified theoretical analysis infeasible. Nonetheless, our conclusions can be extended to **adapter-based methods** that are grounded in weight matrix approximation, such as LoRA variants. ## "Implications of This Study: What is the best practice when using LoRA?" We have summarized best practices for safely using LoRA in Sec. 4.5 (Line 404), *"Summary of Findings and Defenses"*. Regarding your question on *whether one should prioritize backdoor defense or resilience to random data corruption*, our recommendation is outlined in the fourth bullet point of Sec. 4.5. Specifically, we suggest *"setting the LoRA rank as small as possible, provided that the model's performance meets task requirements"*. This suggestion can be regarded as a preference toward backdoor defense. 
Our reasoning is that the robustness against UPA (or random data corruption) can be **explicitly reflected** by the standard performance evaluation, whereas backdoor vulnerabilities are **stealthy and typically unknown** to model owners. Besides, as noted in the fifth bullet point of Sec. 4.5, we recommend setting *"a small initialization variance"*, such as $0.1\times 1/n_{l-1}$ where $n_{l-1}$ is the input dimension of the $l$-th layer. This setting has been shown in Fig. 5 to significantly improve resilience to BPA, while only slightly reducing robustness to UPA.
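To make these two knobs concrete, here is a minimal NumPy sketch of a LoRA update (an illustration of our own, not code from the paper), showing where the rank $r$ and the initialization variance of the down-projection enter:

```python
import numpy as np

# Minimal LoRA sketch (illustration only, not the paper's code).
# n_in plays the role of n_{l-1}; r is the LoRA rank; the down-projection A
# uses the small initialization variance 0.1 * 1/n_{l-1} recommended above.
rng = np.random.default_rng(0)
n_in, n_out, r = 64, 64, 8

A = rng.normal(0.0, np.sqrt(0.1 / n_in), size=(r, n_in))  # down-projection
B = np.zeros((n_out, r))                                  # up-projection (zero-init)

delta_W = B @ A                    # the adapter's additive weight update
assert np.allclose(delta_W, 0.0)   # no change to the base weights at initialization

# After fine-tuning, B becomes nonzero, but the update remains rank-limited:
B_trained = rng.normal(size=(n_out, r))
assert np.linalg.matrix_rank(B_trained @ A) <= r
```

A smaller `r` shrinks the subspace in which the update (and hence a potential backdoor trigger) can live, while the variance of `A` sets the initial scale of the adapter's contribution.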
Summary: This paper investigates the impact of Low-Rank Adaptation (LoRA) on the training-time robustness (TTR) of LLMs, focusing on their resilience to data poisoning and backdoor attacks. The authors propose a theoretical framework for robustness analysis, leveraging tools from neural tangent kernel theory and information geometry. Their analysis quantifies the influence of LoRA's rank and initialization variance on robustness. Complementing the theoretical findings, the authors conduct empirical experiments to validate their framework, providing empirical evaluation of LoRA's role in enhancing model robustness during training. ## update after rebuttal The authors have addressed my concerns on the technical details and promised to revise the paper, primarily to improve the structure, which I believe can benefit a broader audience. Claims And Evidence: The paper primarily presents a theoretical analysis, supplemented by empirical evaluations. While the evidence provided is not conclusively robust due to several idealized theoretical assumptions, such as the convergence of NTK to a Gaussian process, these simplifications are arguably acceptable given the complexity of analyzing LLM training dynamics. In the empirical evaluation, the authors also point out that the variance impact on poisoning deviates from the theoretical analysis. Methods And Evaluation Criteria: Yes. Theoretical Claims: After reviewing the proof section in the appendix, I did not observe any apparent issues with the algebraic derivations. However, there remains a significant conceptual gap: the connection between fewer information bits, smoother information geometry, and improved TTR is not clearly established. Despite multiple readings of Section 3.3, especially the paragraph "double-edged sword of LoRA's TTR", the presentation of this argument remains unconvincing. I strongly encourage the authors to elaborate on this critical aspect of their work. 
If providing a purely theoretical justification proves challenging, it would be beneficial to include some proof-of-concept evaluations to substantiate the claim. Such additions would greatly enhance the clarity and impact of the paper. Experimental Designs Or Analyses: The experimental evaluations presented in this paper appear to be appropriate and well-designed for the scope of the research. Supplementary Material: I went through the supplementary without checking all the details. Relation To Broader Scientific Literature: This paper explores the training-time robustness in LoRA, a topic of relevance to the LLM community. The discussion is timely and pertinent. Essential References Not Discussed: I am unaware of missing essential references. Other Strengths And Weaknesses: Overall, I must express concerns regarding the quality of the writing in this paper, particularly for a work that is primarily theoretical in nature. In its current form, the paper does not meet the standards required for publication at ICML. However, I am open to revisiting my evaluation based on a revised version of the manuscript. I strongly encourage the authors to undertake a major revision during the rebuttal phase to address these issues. Detailed suggestions for improvement are provided in the following section. Other Comments Or Suggestions: **Structural and Content Improvements** A research paper should adopt a top-down approach, clearly guiding the reader from the central topic to the technical details. The primary focus is on understanding how LoRA impacts TTR. In the introduction, please provide a high-level sketch of your proof, explicitly connecting TTR with information geometry, and subsequently linking information geometry to NTK. Additionally, clarify at which stage LoRA-specific factors (e.g., rank, variance) become relevant in this analysis. Subsequent sections should also follow similar top-down structures. 
The connection between equation (5), which involves the norm of parameter differences, and robustness is unclear. Since this equation is not utilized further in the paper, its inclusion seems arbitrary. Ensure that all elements of the paper are directly tied to the central goal of analyzing TTR. Avoid introducing concepts or equations that do not contribute meaningfully to the narrative. Definitions of key concepts, such as Fisher information, should be included in the main body of the paper rather than relegated to the appendix. This is particularly important for tools that play a central role in your analysis. For major theorems and proofs, consider adding brief intuitive explanations or proof sketches in the main text. This will help readers understand the underlying reasoning without delving into the technical details immediately. It might be beneficial to provide a separate discussion on assumptions made for theoretical analysis, and justify them with citations or empirical evidence. **Figure and Caption Clarity** Ensure that all figure captions are self-contained and informative. For instance, in Figures 2, 4 and 5, the captions should not only describe what is being plotted but also highlight key observations and explain how these observations support the paper’s claims. This will make the figures more accessible. **Notational Consistency and Clarity** 1. Line 157: What is K? I assume it is $K_{ntk}$? 2. From the definition of $H_{\alpha}$ in equation (10), $H_1$ does not make sense mathematically. 3. The symbol $\tilde{D}$ in section 3.3 is reintroduced after not being used for an extended period. Given the theoretical nature of the paper and the density of notations, it would be helpful to remind readers of its meaning when it reappears. 4. Line 322 vs. Figure 4: There is a discrepancy regarding the rank settings for LoRA. The text states that the rank is 8 for all LoRA-specific settings, but this is inconsistent with the data presented in Figure 4. 
Please verify and correct this inconsistency. Questions For Authors: The empirical results regarding the impact of variance on poisoning appear to deviate from the theoretical analysis. Could the authors provide additional empirical studies to investigate and reconcile this discrepancy? If further investigation is not feasible, I recommend removing the discussion on variance's impact on TTR from the paper, as the current empirical evidence does not align with the theoretical claims. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful feedback and critical review of our work, especially the constructive suggestions regarding the organization and clarity of the paper. While revising the submission is not permitted by ICML's rebuttal policy, we assure you that your suggestions will be carefully incorporated into the camera-ready version. Below, we address your specific concerns, and welcome any further follow-up questions. # Clarification of Concerns ## "Conceptual Gap" between Information Geometry (IG) and Robustness We discussed the connection between IG, especially Fisher information, and robustness in Sec. 2.4 (line 125, second column). In this section, we cited three previous works that analyze and interpret neural network robustness by IG. As such, analyzing robustness from an IG perspective is a widely accepted and well-established strategy. The key distinction in our work is that while previous studies focus on understanding how a model's output changes during inference, we aim to analyze how model parameters change in response to perturbed training samples, which extends the robustness analysis to the **training phase**. With that said, there is no conceptual gap in using IG for this purpose. ## Notations + $K$: Yes, it should be denoted as $K_{ntk}$. We will correct this accordingly. + $\tilde{\mathcal{D}}$: We will explicitly revisit its meaning in Sec. 3.3. + Line 322 vs. Fig. 4 -- a discrepancy in LoRA's rank settings: We state in line 322 that "For LoRA-specific settings, we use a rank of 8 and set the scaling parameter $\alpha$ to 16 as default values." However, Fig. 4 presents experiments where the rank is *intentionally* varied, with the rank values shown on the x-axis. This is further detailed in the experimental setup in Section 4.4.1 (line 361). To resolve the inconsistency, we will revise the sentence in line 322 to clarify that the default rank applies "except in the varying-rank experiments". 
## "The definition of $H_1$ from $H_\alpha$... does not make sense mathematically." In the standard definition of Rényi entropy, $H_\alpha=\frac{1}{1-\alpha}\log(\sum_{i=1}^{n_L}{P_i^\alpha})$, where $0\leq P_i\leq 1$ and $\sum_{i=1}^{n_L}{P_i}=1$. When $\alpha=1$, this expression becomes indeterminate (of the form $\frac{0}{0}$). However, in this case, the limit of $H_\alpha$ as $\alpha\rightarrow 1$ yields the Shannon entropy. Below is a brief derivation using L'Hopital's Rule: $\frac{d}{d\alpha}\log(\sum_{i=1}^{n_L}{P_i^\alpha})=\frac{\sum_{i=1}^{n_L}{P_i^\alpha\log{P_i}}}{\sum_{i=1}^{n_L}{P_i^\alpha}}, \quad \frac{d}{d\alpha}(1-\alpha)=-1.$ Therefore, $\lim_{\alpha\rightarrow 1}{H_\alpha} = \lim_{\alpha\rightarrow 1}{\frac{\sum_{i=1}^{n_L}{P_i^\alpha\log{P_i}}}{\sum_{i=1}^{n_L}{P_i^\alpha}}}\cdot \frac{1}{-1}=-\sum_{i=1}^{n_L}{P_i\log{P_i}}$. We actually utilize the Shannon entropy formula to produce Figure 3. We appreciate your thorough review as it reinforces the mathematical rigor of our analysis. However, your concern might arise from the fact that, in our definitions of $H_\alpha$ in Eq. (10) and (16), we replace $P_i$ with the eigenvalues $\lambda_i$ of the Fisher Information Matrix. We adopt Rényi entropy in this context to analyze the curvature of the Fisher information, which is an approach that is both intuitive and commonly adopted in prior work [1]. We will clarify this point to avoid further confusion. [1] Information Geometry and Its Applications, 2016. ## "Impact of Initialization Variance on Poisoning Appears to Deviate from the Theoretical Analysis" As discussed in Sec. 4.4.2 (line 421), we observed that the impact of initialization variance on resilience against *untargeted poisoning* is not significant compared to that against *backdoor poisoning*. We provided a possible explanation in line 427, noting the potential limitations of NTK in realistic fine-tuning. Nevertheless, the results on the QNLI dataset remain statistically significant. 
Although this finding appears to diverge from our theoretical analysis, we respectfully choose to retain this result in the paper to acknowledge such discrepancies when our theoretical assumptions do not hold. Moreover, the last two subfigures in Fig. 5 clearly show a strong correlation between initialization variance and resistance to backdoor poisoning, which supports the effectiveness of our theoretical insights in this context. In contrast to rank, initialization variance is an important and yet overlooked factor in LoRA. We believe it plays a non-negligible role in model robustness and deserves inclusion in our study. Therefore, we would like to keep this component in the paper. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your response. The mathematics is clear. However, a conceptual gap remains unresolved: upon reviewing the three references cited in line 125, I note they focus on adversarial attacks, which are inherently test-time robustness. My primary concern pertains to the paragraph titled “Double-Edged Sword of LoRA’s TTR”, and I seek clarification on the following point: Qualitatively, LoRA constitutes a restricted subset of full-rank fine-tuning, inherently limiting its expressiveness. This constraint complicates convergence during fine-tuning, leading to smoother optimization geometry (or at least a restricted parameter space). These points are relatively straightforward to understand qualitatively. However, what remains unclear is why smoother geometry facilitates easier attacks with intentionally poisoned data but not backdoor triggers. Is this because, in this context, the goal of intentionally poisoned data is to degrade performance rather than execute a targeted poisoning attack? This distinction was not explicitly clarified prior to the evaluation section. 
If my interpretation is correct, I strongly recommend that the authors clearly specify this point before delving into the discussion of the "Double-Edged Sword of LoRA’s TTR." Even if so, I feel it is also easy to argue the other way: smoother geometry can make untargeted poisoning harder because a small amount of untargeted poisoning data does not significantly change the fine-tuning dynamics when the optimization geometry is smoother. In general, I am inclined to recommend acceptance of this paper, provided the authors address these concerns through careful revision. In its current form, the manuscript does not consistently adopt a top-down organizational structure, and key assumptions or definitions are occasionally introduced without sufficient prior explanation. I encourage the authors to explicitly state foundational concepts and their implications before delving into analysis, rather than assuming reader familiarity. Such revisions would enhance accessibility for a broader audience and amplify the work’s impact. --- Reply to Comment 1.1.1: Comment: Dear reviewer, Thank you for your thoughtful and detailed feedback. We sincerely appreciate your reevaluation of our paper. As we promised, your suggestions will be carefully incorporated into the revised version. Based on your latest response, we understand that your main concern lies in the reasoning behind two conclusions: **how smoother information geometry (IG) leads to reduced robustness against untargeted data poisoning attacks (UPA), and how it contributes to increased resilience against backdoor poisoning attacks (BPA)**. While the main paper primarily focused on a natural language description, we would like to present a more formal and theoretically grounded explanation: # Smoother IG $\Rightarrow$ Higher Resilience to BPA Consider two training samples: a clean input $x_{c}$ and its backdoored input $x_{t}$. 
The optimization target under these two samples can be represented as minimizing the following formula: $|{\nabla_{\theta}{\mathcal{L}(x_c,\theta)}^T\cdot\nabla_{\theta}{\mathcal{L}(x_t,\theta)}}|,$ i.e., the adversary aims to ensure that the optimization processes driven by $\nabla_{\theta}{\mathcal{L}(x_c,\theta)}$ and $\nabla_{\theta}{\mathcal{L}(x_t,\theta)}$ occur simultaneously and **both** significantly influence the training, which aligns with the target of BPA to *maintain performance on most inputs while producing significantly altered predictions only when a specific trigger is present*. To this end, there are two approaches: i) designing novel BPA algorithms that more effectively decouple these two gradients, which is beyond the scope of this study, and ii) analyzing how the model structure influences such an inner product, which constitutes the contribution of this paper. This inner product (though slightly different in formulation) is closely related to both the NTK and the Fisher Information, as introduced in Eq. (6) and (8), respectively. Motivated by this connection, we have analyzed the intrinsic properties of the kernel matrix and introduced indicators for LoRA. Our indicators provide key insights showing that LoRA provides *"a smaller search space for the existence of backdoor triggers"* due to i) its ($n_{l-1}-r$) zero eigenvalues and ii) smaller variances in the remaining $r$ dimensions' parameter updates (i.e., smaller angles between gradients), both of which intuitively manifest as "smoother IG". In other words, **LoRA’s constrained parameter subspace and limited parameter updates make such decoupling more difficult compared to FF**. # Oversimplified IG $\Rightarrow$ Lower Robustness against UPA We also provide a complementary explanation of **why a model with smoother IG tends to be more sensitive to perturbations**. 
Consider a clean training input $x_c$ and its perturbed version $x_u$, where $x_u$ is assigned a different label for the purpose of untargeted poisoning. The target of UPA is to maximize: $|{\nabla_{\theta}{\mathcal{L}(x_c,\theta)}^T\cdot\nabla_{\theta}{\mathcal{L}(x_u,\theta)}}|,$ i.e., as adversaries, we aim to align the optimization direction of the poisoned sample $x_u$ **as closely as possible** with that of the clean training objective, because we aim to *maximally influence the model’s predictions while injecting only a small fraction of poisoned data*. This objective directly **contrasts with the BPA** case, where we instead aim to decouple the optimization directions. Therefore, we draw the opposite conclusion for UPA. Note that what we emphasize in the paper is that "the **over**simplification of the manifold may make LoRA more susceptible", i.e., the empirical phenomenon that LoRA is more vulnerable when facing UPA (or noise) may not be obvious if the model is severely overparameterized compared to the task. We sincerely hope that the above analysis addresses your concerns. You are very welcome to raise any further questions or suggestions related to this problem or any other aspects of our work. While multi-turn discussions are not allowed during the ICML rebuttal phase, we would be glad to continue the conversation once the anonymous review period is over. Thank you :) # Notations + $\mathcal{L}$ denotes the loss function. $\theta$ represents the parameters. + $n_{l-1}$ and $r$ respectively denote the input dimension of the $l$-th layer and the rank of LoRA.
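The opposing sign structure of the two objectives can be checked numerically. Below is a toy sketch of our own (a single logistic unit, not the paper's experiments) showing that a label-flipped copy of a clean input produces a gradient exactly anti-parallel to the clean one, so the UPA alignment term $|\nabla_{\theta}{\mathcal{L}(x_c,\theta)}^T\cdot\nabla_{\theta}{\mathcal{L}(x_u,\theta)}|$ is maximal for that input direction:

```python
import numpy as np

# Toy check (our own construction, not the paper's experiments):
# gradient of binary cross-entropy for a single logistic unit.
def grad(theta, x, y):
    p = 1.0 / (1.0 + np.exp(-x @ theta))  # sigmoid prediction
    return (p - y) * x                    # d(BCE)/d(theta)

rng = np.random.default_rng(0)
d = 50
theta = rng.normal(size=d) / np.sqrt(d)
x_c = rng.normal(size=d)
p = 1.0 / (1.0 + np.exp(-x_c @ theta))

g_c = grad(theta, x_c, 1.0)  # clean sample, label 1
g_u = grad(theta, x_c, 0.0)  # UPA-style poison: same input, flipped label

# g_c = (p-1) x_c and g_u = p x_c are anti-parallel, so the alignment term
# |g_c^T g_u| = p(1-p) * ||x_c||^2 is as large as this input direction allows.
overlap = g_c @ g_u
assert np.isclose(overlap, p * (p - 1.0) * (x_c @ x_c))
assert overlap < 0.0  # the poisoned gradient directly opposes the clean one
```

A backdoor sample, by contrast, pairs a *different* (triggered) input with its target label, so its gradient need not lie along the clean gradient at all, which is the decoupling discussed above.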
Summary: This paper makes a theoretical investigation into the security implications of LoRA’s low-rank structure during fine-tuning in the context of robustness against data poisoning attacks. The authors' theoretical analysis shows that LoRA presents greater robustness against backdoor attacks compared to full-parameter fine-tuning (FFT), but also that it is more vulnerable to untargeted data poisoning attacks. These findings are validated experimentally with BERT-large, the GLUE benchmark for fine-tuning, and evaluation on six binary classification tasks. The three main contributions are: 1. A novel theoretical framework for analysing the security of LoRA 2. Identifying key factors influencing the security of LoRA and explaining the extent of theoretical equivalence between LoRA and FFT 3. A theoretical and empirical evaluation of LoRA and FFT under poisoning and backdoor attacks Claims And Evidence: The claims made in the paper are supported by clear and convincing evidence. Claim 1: LoRA exhibits better robustness against backdoor attacks than FFT. Neural tangent kernel (NTK) and information geometry (IG) are used to show that LoRA exhibits fewer information bits and smoother IG than FFT (hence a smaller search space for backdoor triggers), resulting in higher robustness to backdoor attacks (Theorem 3.6). This is validated experimentally as shown in Figure 2 and Figure 7 which show that LoRA >> FFT at resisting backdoor attacks. Claim 2: LoRA is more vulnerable to untargeted data poisoning attacks. Using the same analysis framework the authors identify that LoRA is more susceptible to noisy or intentionally poisoned data (i.e., untargeted poisoning attacks) because of the smoother IG. Figures 1 and 6 support the theoretical claims empirically, showing that LoRA does indeed suffer worse accuracy under untargeted poisoning attacks (UPA) than FFT. The authors also show how initialisation variance and rank impact LoRA’s robustness in Figures 5 and 4, respectively. 
Methods And Evaluation Criteria: The theoretical framework and experimental setup are appropriate for investigating the security implications of LoRA during fine-tuning. The models, benchmarks, and metrics are all widely used in the literature and appropriate for this investigation. Theoretical Claims: No Experimental Designs Or Analyses: The experiments appear appropriate for the purposes of validating the theoretical findings. Supplementary Material: No Relation To Broader Scientific Literature: This paper extends the theoretical understanding of LoRA beyond, e.g., expressive capacity (Zeng & Lee, 2024) and the impact of initialisation (Hayou et al., 2024) to consider, for the first time, the security risks associated with backdoor and poisoning attacks. Essential References Not Discussed: No Other Strengths And Weaknesses: This is a well-written paper that makes an important contribution to the field. I think there are likely real-world implications e.g., informing the choice of rank based on specific threat models. Other Comments Or Suggestions: ~306 “the oversimplification of the manifold may make LoRA more susceptible to noisy or intentionally poisoned data, causing higher vulnerability to data poisoning attacks.” – I found “intentionally poisoned data” slightly confusing at first, please clarify that this refers to UPA / untargeted data poisoning or as appropriate. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your commendation on our work! Regarding your suggestion on line 306, we acknowledge that the current phrasing lacks clarity. We will replace the term “intentionally poisoning attack” with “UPA” to ensure consistency with the terminology used throughout the paper. Thank you again for your valuable suggestion :)
Summary: This paper presents an extensive theoretical analysis of data poisoning attacks in the low-rank adaptation phase, using neural tangent kernels as well as information theory to establish a link between LoRA's structure and vulnerability to training-time attacks. The authors find that LoRA is more robust to backdoor attacks than direct fine-tuning, but more vulnerable to untargeted poisoning attacks. The authors also provide experiments to validate the findings. Claims And Evidence: On the theoretical side, the authors define training-time robustness and simplify the modeling of training by employing a neural tangent kernel. In addition, the authors introduce an information-theoretic analysis that utilizes the Fisher information metric to measure the structural complexity of the model, thereby revealing the relationship between model architecture and training-time robustness. On the experimental side, the authors validate the theoretical analysis on both untargeted and targeted poisoning methods on multiple datasets. Ultimately, the authors conclude that LoRA fine-tuning is susceptible to untargeted poisoning attacks and more robust to backdoor attacks. Methods And Evaluation Criteria: The authors validate the conclusions on an untargeted poisoning attack and a backdoor attack method, respectively, and additionally conduct experiments on the GLUE benchmark, which supports the conclusions presented in the article to some extent. Theoretical Claims: I don't think there's an obvious error. Experimental Designs Or Analyses: Although the authors validated the conclusions on the GLUE benchmark and two attack methods, the overall set of methods and datasets is limited. Supplementary Material: The theoretical proofs in the supplementary material are clear. 
Relation To Broader Scientific Literature: The authors investigate a completely new problem, namely the data poisoning problem that exists in the training phase of LoRA, and provide a comprehensive theoretical analysis by skillfully combining existing theories. Essential References Not Discussed: NA. Other Strengths And Weaknesses: Strengths: 1. The problem studied by the authors is very practical, and LoRA fine-tuning has been widely used for LLMs; 2. Combining neural tangent kernel and information-theoretic analysis to relate LoRA to training-time robustness is novel. Weaknesses: 1. The key findings and conclusions are not clear. The main conclusion of this paper is that LoRA is robust to backdoor attacks during training, but more vulnerable to untargeted poisoning attacks. This conclusion is not highlighted enough in the INTRODUCTION and ABSTRACT. These conclusions are difficult for the reader to discover directly amid the complex derivations and experimental details. Therefore, I think the organization of this paper needs improvement. 2. Too few attack methods are studied to support the theory. The poisoning attack methods the authors use for comparison are few and very simple, which I think does not support the conclusion that LoRA is susceptible to untargeted poisoning attacks and robust to backdoor attacks. I suggest that the authors discuss more complex methods to prove the validity of the theory. 3. Some of the conclusions are not rigorous. For example, in lines 313-224, the authors suggest carefully tuning r and \sigma to optimize security, but do not specify how. In addition, I'm not quite sure what it means for LoRA to have a smoother information geometry. What is “smoother”? How is it measured? 4. The initialization method is too simple. Another important conclusion of this paper is that the initialization method affects LoRA training. 
However, this paper only tested the Kaiming initialization method. In large-scale models, Xavier and normal-distribution initialization are also commonly used, and the authors should discuss the effects of these methods on LoRA, which would be more general. 5. The discussion of dataset experiments is still insufficient. Most of the authors' experiments were performed on the GLUE tasks, but in fact LoRA is applicable to many tasks, including the popular generation tasks. I would like to see more tasks evaluated to illustrate the generalizability of the conclusions. 6. Poisoning rate settings are low. The poisoning rates used in the article are generally low, which makes the attacks seem stealthy but not representative. To provide a more comprehensive picture of the effectiveness of the attack or the difficulty of defense, consideration should be given to providing comparative results at higher poisoning rates. Other Comments Or Suggestions: No. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful feedback and constructive suggestions for improving our work. Below, we provide a point-by-point response to address your concerns:

# Response to Questions

1. *Clarifying Key Findings.* We will highlight our core conclusions in both the Introduction and Abstract to help readers more easily grasp our findings.
2. *Specification of Hyper-parameters.* While we provide a dedicated subsection (Sec. 4.5), we will further enhance the "Quantifying the Impact..." part (in Sec. 3.3) with more intuitive insights.
3. *Smoother Information Geometry (IG) of LoRA.* The smoothness of the IG can be measured by $H_{\alpha}$, where a lower value indicates smoother parameter changes across different dimensions of the neural network. We provided an explanation in Sec. 2.4, and we will revise it to improve its readability.
4. *Higher Poisoning Rates.* Fig. 1 and Fig. 2 are evaluated under different poisoning rates, with the highest rates set to 35\% and 0.45\% for untargeted poisoning and backdoor poisoning, respectively. These levels are in line with previous research[1]. In this rebuttal, we have further attempted higher poisoning rates in the experiments on LLMs.

# Experiments

We provide the following experiments to further respond to your concerns.

## Evaluation with Additional Attack Strategies

We introduce four additional backdoor poisoning attacks in the NLP setting: a clean-label backdoor poisoning attack[1] (CL-BPA), an instruction-level backdoor poisoning attack[2] (IL-BPA), a multi-triggered stealthy backdoor attack[3] (MT), and a style-based backdoor poisoning attack[4] (S-BPA). The last two attack strategies were suggested by Reviewer V5bG. We adopt the same random seeds and experimental configurations when assessing the resilience of LoRA under these additional attack settings. The results are summarized below. 
|Model|**Acc.**|Pre.|Rec.|F1.|
|-|-|-|-|-|
|MT(FF)|82.91±6.77|75.96±7.71|98.64±0.98|85.66±4.75|
|MT(LoRA)|**89.14**±1.86|84.44±3.24|96.62±1.03|90.08±1.42|
|CL-BPA(FF)|91.78±0.47|89.47±0.91|95.04±0.39|92.17±0.41|
|CL-BPA(LoRA)|**92.39**±0.28|89.87±0.99|95.87±0.72|92.77±0.21|
|IL-BPA(FF)|51.37±0.11|51.15±0.05|100.00±0.00|67.68±0.05|
|IL-BPA(LoRA)|**53.13**±2.35|52.09±1.27|100.00±0.00|68.49±1.09|
|S-BPA(FF)|75.34±0.93|67.59±0.83|99.09±0.39|80.36±0.61|
|S-BPA(LoRA)|**85.51**±1.79|79.01±2.33|97.52±0.22|87.28±1.34|

The experimental results indicate that LoRA demonstrates stronger robustness than full fine-tuning (FF) against a wide range of mainstream backdoor attacks. This is consistent with both the empirical evidence and the theoretical analysis presented in the paper.

## Evaluation on Other Initialization Strategies

Besides the default and most commonly used initialization strategy in LoRA (Kaiming Uniform), we evaluate two additional initialization methods to examine the impact of their variances on LoRA's TTR. The strategies are Xavier normal distribution-based initialization (XNI) and Gaussian distribution-based initialization (GI). The experimental results are presented in https://a.imgno.de/67e9fada72bf2.png . The experimental results are generally consistent with those obtained using the Kaiming Uniform initialization.

## LoRA's TTR on Generative Language Models

Inspired by the BackdoorLLM[5] benchmark, we evaluate the TTR of LoRA against three backdoor poisoning attacks under two distinct attack scenarios. The backdoor attacks include BadNet[6], Sleeper Agent[7] (SA), and VPI[8]. The attack scenario is LLM jailbreaking, where a backdoored LLM is expected to bypass safety filters (jailbreaking) to answer certain queries when the input contains the corresponding triggers. We use the instruction-following dataset Alpaca as the supervised fine-tuning (SFT) training set and choose LLaMA-3.2-3B as the model backbone. 
We do not include LLaMA-3-8B due to GPU memory limitations that prevent full fine-tuning on a single GPU. These experiments are conducted on an Nvidia H100 GPU. The poisoning rate is set to 2%. The experimental results are shown below.

|Backdoor Method|IsLoRA|ASR|
|-|-|-|
|BadNet|FF|90.91|
|BadNet|LoRA|84.85|
|SA|FF|92.93|
|SA|LoRA|88.89|
|VPI|FF|86.87|
|VPI|LoRA|84.85|

We observe that the conclusions drawn from generative language models are consistent with those from NLU models.

[1] Poisoning Language Models During Instruction Tuning
[2] Instructions as Backdoors: Backdoor Vulnerabilities of Instruction Tuning for Large Language Models
[3] Rethinking Stealthiness of Backdoor Attack against NLP Models
[4] Hidden Trigger Backdoor Attack on NLP Models via Linguistic Style Manipulation
[5] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models
[6] BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
[7] Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training
[8] Backdooring Instruction-Tuned Large Language Models with Virtual Prompt Injection
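As a minimal sketch of the untargeted (label-flip) poisoning setup evaluated above: the dataset format, the `poison_labels` helper, and the flip logic below are illustrative assumptions, not the paper's exact pipeline. The 35% rate mirrors the highest untargeted poisoning rate reported in the rebuttal.

```python
import random

def poison_labels(dataset, poison_rate, num_classes=2, seed=0):
    """Untargeted data poisoning sketch: flip the labels of a random
    fraction of training examples. Illustrative stand-in for the UPA
    setting discussed above, not the authors' exact protocol."""
    rng = random.Random(seed)
    n_poison = int(len(dataset) * poison_rate)
    idx = rng.sample(range(len(dataset)), n_poison)
    poisoned = [dict(ex) for ex in dataset]  # copy so the clean set is kept
    for i in idx:
        # Replace the true label with a different, randomly chosen class.
        wrong = [c for c in range(num_classes) if c != poisoned[i]["label"]]
        poisoned[i]["label"] = rng.choice(wrong)
    return poisoned, set(idx)

# Toy usage: 1,000 binary examples, 35% poisoning rate.
clean = [{"text": f"example {i}", "label": i % 2} for i in range(1000)]
poisoned, flipped = poison_labels(clean, poison_rate=0.35)
n_changed = sum(p["label"] != c["label"] for p, c in zip(poisoned, clean))
```

The poisoned copy would then be fed to either FFT or LoRA fine-tuning, and clean-test accuracy compared across the two.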
LDMol: A Text-to-Molecule Diffusion Model with Structurally Informative Latent Space Surpasses AR Models
Accept (poster)
Summary: The authors introduce LDMol, a latent diffusion model for generating molecules based on text inputs. By integrating a chemically informed autoencoder and utilizing contrastive learning, their proposed diffusion model ensures structural consistency in the latent space, addressing the problem of multiple SMILES representations for the same molecule. Experimental results show that LDMol outperforms existing models in terms of SMILES validity, structural accuracy, and adherence to conditions. In addition, LDMol is versatile, supporting tasks such as molecule-to-text retrieval and text-guided molecule editing, and its structurally aware diffusion approach provides a compelling alternative to autoregressive models in molecular generation. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: Yes Essential References Not Discussed: Yes Other Strengths And Weaknesses: Strengths: 1) The proposed model shows its effectiveness in text-guided molecule generation, highlighting its potential applications across different areas of chemical and biomedical research. 2) The source code is available, and the paper is well-written. 3) The experiments are evaluated using a variety of measures, and the results are compared with several baseline models. Weaknesses: 1) In addition to the existing metrics, it would be valuable to include other statistical measures, such as uniqueness and novelty, to provide a more comprehensive evaluation of the model's performance and its ability to generate innovative and diverse results. 2) The model demonstrates strong performance on the datasets, but its ability to generalize to other types of chemical data or a wider range of text inputs warrants further investigation. 
Expanding the model's testing to diverse datasets and inputs would provide a clearer understanding of its versatility and robustness across different chemical and textual domains. 3) The generation of molecules from text descriptions has been explored in numerous other studies. It would be valuable to investigate whether the model can be extended to handle other types of data, such as molecular graphs or motifs. Expanding the model's capabilities to incorporate these additional data formats could significantly enhance its versatility and enable more comprehensive molecular design, potentially improving its performance in a wider range of applications. Other Comments Or Suggestions: See above Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate Reviewer mgY4 for the review and thoughtful feedback. Below, we provide detailed point-by-point responses to address your remaining concerns.

**[mgY4] asked for additional metrics on text-to-molecule generation.**

As the reviewer suggested, we measured the uniqueness, novelty, and prompt alignment score for the prompts we tested, using 1,000 samples. Validity is the proportion of generated SMILES that are valid. Uniqueness is the proportion of valid SMILES that are unique. The “align” score is the proportion of unique SMILES that match the given prompt. Novelty is the proportion of unique SMILES that are not included in the training dataset. The alignment score was measured via pattern matching against the substructure described by the prompt. We observed that even with stochastic sampling enabled, AR models struggled to generate diverse samples from a single prompt. LDMol can generate molecules that align better with various hand-written prompts. Furthermore, its outputs were much more diverse than those of the previous AR models. 
| | Models | Validity(V) | Uniqueness(U) | Align(A) | VxUxA | Novelty |
|---:|---:|:---:|:---:|:---:|:---:|:---:|
|Case (a)|molT5|**0.996**|0.006|**1.000**|0.006|N/A|
| |bioT5+|0.846|0.028|0.625|0.015|N/A|
| |LDMol|0.910|**0.951**|**1.000**|**0.865**|0.988|
|Case (b)|molT5|0.927|0.012|0.818|0.009|N/A|
| |bioT5+|**1.000**|0.573|0.782|0.448|N/A|
| |LDMol|0.989|**0.960**|**0.906**|**0.860**|0.958|
|Case (c)|molT5|0.783|0.072|0.643|0.036|N/A|
| |bioT5+|**1.000**|0.160|**0.750**|0.120|N/A|
| |LDMol|0.955|**0.861**|0.688|**0.566**|0.780|
|Case (d)|molT5|0.995|0.002|0.500|0.001|N/A|
| |bioT5+|**1.000**|0.015|**0.733**|0.011|N/A|
| |LDMol|0.956|**0.849**|0.703|**0.571**|0.842|
|Case (e)|molT5|0.956|0.015|0.571|0.008|N/A|
| |bioT5+|**1.000**|0.035|0.086|0.003|N/A|
| |LDMol|0.996|**0.187**|**0.595**|**0.111**|0.667|

___

**[mgY4] suggested the potential expansion toward different types of data and conditions.**

We appreciate the reviewer's interest in the potential applicability of our approach to other types of data, and we likewise regard extending carefully designed latent diffusion models as promising future work. As the reviewer suggested, this could apply both to different target chemical data domains and to various biochemical conditions.
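The validity / uniqueness / alignment / novelty chain used for the table above can be sketched as below; `parse_smiles` is a toy placeholder for a real SMILES parser (e.g. RDKit's `Chem.MolFromSmiles`), and the prompt matcher is likewise an illustrative assumption.

```python
def parse_smiles(s):
    """Toy validity check standing in for a real SMILES parser: here we
    only require a non-empty string with balanced parentheses."""
    if not s:
        return None
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
        if depth < 0:
            return None
    return s if depth == 0 else None  # a real parser would return a Mol object

def evaluate(samples, train_set, matches_prompt):
    """Each metric is a proportion of the set surviving the previous filter:
    validity over all samples, uniqueness over valid, align/novelty over unique."""
    valid = [s for s in samples if parse_smiles(s) is not None]
    unique = list(dict.fromkeys(valid))  # order-preserving deduplication
    aligned = [s for s in unique if matches_prompt(s)]
    novel = [s for s in unique if s not in train_set]
    return {
        "validity": len(valid) / len(samples),
        "uniqueness": len(unique) / len(valid) if valid else 0.0,
        "align": len(aligned) / len(unique) if unique else 0.0,
        "novelty": len(novel) / len(unique) if unique else 0.0,
    }

# Toy usage: 4 samples with one invalid string, one duplicate,
# and one molecule already present in the training set.
scores = evaluate(
    samples=["CCO", "CCO", "c1ccccc1", "C(C"],
    train_set={"CCO"},
    matches_prompt=lambda s: "c1ccccc1" in s,  # e.g. "contains a benzene ring"
)
```

In the actual evaluation, canonicalization of SMILES before deduplication and substructure matching via a cheminformatics toolkit would replace the string-level stand-ins.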
Summary: This paper proposes a text-conditioned molecule (SMILES) generation model based on latent diffusion. LDMol learns a structurally informative latent space through contrastive learning, and surpasses AR models. The authors claim LDMol is the first diffusion model that outperforms AR models in text-to-mol generation. Claims And Evidence: 1. LDMol verifies the potential of latent diffusion in molecule generation 2. Through contrastive learning, LDMol learns a chemically informative latent space, which is important for a molecular LDM. Methods And Evaluation Criteria: See experiment review Theoretical Claims: N/A Experimental Designs Or Analyses: 1. Structure-aware SMILES latent space with SMILES contrastive learning. Structure-awareness seems an overclaim. The authors can visualize the learned latent representations and see if there are meaningful clusters reflecting certain chemical structures. 2. Text-conditioned molecule generation. LDMol's validity is worse than bioT5, which is not discussed. BLEU score is not a good metric for molecule SMILES generation, and the exact match scores are still low. The conclusions associated with Fig 4 seem overclaimed if there is no further quantitative evidence. 3. Molecule-to-text retrieval is not useful. The reviewer is concerned whether LDMol can truly align narrative descriptions and chemical structures. Text-to-molecule retrieval seems more helpful in real-world applications. Can the authors try text-to-molecule retrieval settings? Supplementary Material: The reviewer mainly reviews the additional results in the appendix. Relation To Broader Scientific Literature: In real-world applications, structure- or function-conditioned molecule generation/retrieval is more helpful, especially in drug discovery or de novo design. For example, virtual screening, affinity prediction, drug design. Gao, Bowen, et al. "Drugclip: Contrastive protein-molecule representation learning for virtual screening." 
Advances in Neural Information Processing Systems 36 (2023): 44595-44614. Wang, Renxiao, et al. "The PDBbind database: methodologies and updates." Journal of medicinal chemistry 48.12 (2005): 4111-4119. Luo S, Guan J, Ma J, et al. A 3D generative model for structure-based drug design[J]. Advances in Neural Information Processing Systems, 2021, 34: 6229-6239. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weakness: 1. Is text-to-SMILES generation really useful? Can you provide some real world cases? 2. See experiment review. Other Comments Or Suggestions: No Questions For Authors: Pairwise translation seems easier to hack than retrieval. How to guarantee the model truly aligns the narrative and chemical semantics instead of memorizing some prompts and SMILES tokens? Why not text-2-mol retrieval, which is harder but more useful than text-2-mol generation. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and for considering our work; we appreciate your time and evaluation. Below we provide point-by-point responses to your questions and concerns.

**[qAD8] asked for visualization of the latent space and its structural information.**

To visualize the structural information encoded in the latent space of our encoder, we prepared 10 molecular clusters of 100 molecules each, all sharing a common Murcko scaffold. Then, we obtained their latent vectors from the LDMol encoder and visualized them in 2D via UMAP[1]. We also plotted the latent vectors of 5,000 randomly sampled molecules. As shown in the following Figure 1 [[LINK]](https://docs.google.com/presentation/d/1UV4vMpHT4hrGa1I9PdswLgZ9PJ7vlPH22xbgFKpfo1U/edit#slide=id.p), the molecules sharing a Murcko scaffold[2] form clusters in the latent vector space.

___

**[qAD8] expressed a concern on the text-to-molecule generation performance.**

We kindly note that we had discussed in the manuscript (lines 307~310, right column) that MolXPT and bioT5 achieve higher validity than ours, yet they fell short on every metric measuring similarity to the ground truth. We maintain that the primary role of a text-to-molecule model is to generate a molecule that meets the user prompt; thus, similarity metrics should be prioritized when evaluating models. We agree that character-wise metrics like BLEU and Levenshtein distance are not appropriate for SMILES generation tasks, and we included them as a convention followed by many studies on this benchmark. LDMol still showed a major performance improvement over the previous AR models on chemical similarities, including a 6.7~15.2%p increase in fingerprint similarities and a state-of-the-art exact match ratio of 0.530 on the ChEBI-20 benchmark. For the case study in Figure 4, we measured the uniqueness, novelty, and the prompt alignment score using 1,000 samples. 
The alignment score was measured with pattern matching against the structure described by the prompt. Please refer to the table in the response to the reviewer `mgY4` below. It is shown that LDMol can generate molecules that match various hand-written prompts, and its output is more diverse than that of the previous AR models.

___

**[qAD8] questioned the usefulness of molecule-to-text retrieval and suggests text-to-molecule retrieval.**

We want to emphasize that the primary contribution of LDMol is in text-to-molecule generation, and the downstream tasks were done to demonstrate its applicability as a diffusion model compared to AR-based generative models. That said, we performed a paragraph-level text-to-molecule retrieval task on the PCdes test set and the MoMu dataset, measuring 64-way accuracy. Here, we had to estimate the ELBO of each query molecule with the given text input using the noise prediction error across different noise levels, and the query with the highest likelihood was selected. Although the exact calculation of the ELBO would take 1,000 noise predictions per pair, LDMol already showed comparable performance with the baseline representation models with a rough estimation of NFE=25, while being a successful generative model at the same time.

| | mol-to-text(PCdes) | mol-to-text(MoMu) | text-to-mol(PCdes) | text-to-mol(MoMu) |
|---:|:---:|:---:|:---:|:---:|
|SciBERT|62.6|1.38|61.8|1.56|
|KV-PLM|77.9|1.51|77.0|1.60|
|MoMu-S|80.6|45.7|80.2|46.0|
|MoMu-K|81.1|46.2|81.4|45.7|
|MoleculeSTM|81.4|67.6|78.9|64.1|
|MolCA|86.4|73.4|**84.8**|72.8|
|LDMol(n=25)|**90.3**|**87.1**|83.3|**74.0**|

___

**[qAD8] questioned the usefulness of text-to-molecule generation.**

As demonstrated by the applications of LLMs to various data domains[3][4], natural text is a modality that enables the incorporation of various conditions like molecular properties, interactions, etc. 
This makes a generative model utilizing text conditions more applicable and extensible than models trained with specific condition types. While its practical utility still needs to be improved for real-world usage, text-conditioned molecule generation has received growing interest, as we noted in Related Works, and LDMol achieved state-of-the-art performance over the baselines.

___

[1] UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, arxiv 2018.
[2] The Properties of Known Drugs. 1. Molecular Frameworks, Journal of Medicinal Chemistry 1996.
[3] TableLLM: Enabling Tabular Data Manipulation by LLMs in Real Office Usage Scenarios, arxiv 2024.
[4] Can Language Models Solve Graph Problems in Natural Language?, NeurIPS 2023.

---

Rebuttal Comment 1.1: Comment: 1. I recognize the learned representation is structure-aware. Thanks for the visualization results. 2. The fingerprint similarity looks good. I agree the model can generate molecules satisfying the prompts. 3. The case studies in Fig. 4 are too simple. Can you try other prompts involving drug-likeness, e.g., QED and SA, and report the mean/median scores accordingly? 4. Thanks for providing text-to-molecule retrieval results. The results look helpful. Can you explain why the scores are much lower on MoMu? 5. Can LLMs really learn molecule/protein-molecule interactions? Can you provide some more concrete references to related work? If so, LLMs will be helpful for real-world applications, e.g., drug design and enzyme design. Thanks for the authors' response; many of my concerns are addressed. From my point of view, SMILES is outdated and less useful compared to graph or 3D representations. I encourage you to extend your work to structure-based applications. I will raise my score.

---

Reply to Comment 1.1.1: Comment: Thank you for raising the score. We are pleased that our rebuttal successfully addressed your previous concerns. 
Below, we include responses to your additional comments:

- We found that most of the available molecule-text training data describe structures and functional groups, so the prompts in Figure 4 were designed to test LDMol's ability across various ranges of structural descriptions. In turn, the current LDMol often struggles with statements rare in the training data, such as druggability, as we stated in the Conclusion. That said, we list the QED / SAscore of the correspondingly generated molecules from LDMol and several baselines below. Although the QED still has room for improvement for real-life applicability, LDMol showed better alignment than molT5 and bioT5+. We believe the model performance can be enhanced further with the emergence of richer text-molecule pair data.

| Prompts | metric | molT5 | bioT5+ | LDMol(ours) |
| :-: | :-: | :-: | :-: | :-: |
| “This molecule is a drug-like molecule.” | QED[↑] | 0.544 | 0.480 | **0.619** |
| “This molecule is synthesizable.” | SAscore[↓] | 2.852 | 3.165 | **2.767** |

- Although there could be various reasons for this phenomenon, we observed that the MoMu test set contains more difficult text descriptions that require database knowledge rather than understanding of the statement (e.g., ~15% of the text descriptions have the format "XX is a natural product found in YY."), which can be one reason why all models' retrieval performance decreases on the MoMu test set.

- Although connecting biochemical information to LLMs is beyond the scope of our work, recent works[1] aim to solve prediction[2] or interaction[3] tasks under the control of natural language by incorporating molecules or proteins into LLMs, via explicit text-like data formats (e.g., SMILES, AA sequences, etc.) or appropriate encoders.

___

[1] Artificial intelligence enabled ChatGPT and large language models in drug target discovery, drug discovery, and development, Molecular Therapy-Nucleic Acids, 2023. 
[2] Can Large Language Models Empower Molecular Property Prediction?, arxiv 2023. [3] ProLLM: Protein Chain-of-Thoughts Enhanced LLM for Protein-Protein Interaction Prediction, COLM 2024.
Summary: The paper proposes a latent diffusion model for text-conditioned molecule generation. The authors claim that their primary innovation lies in introducing a contrastive learning approach to capture molecular structural features from SMILES sequences. In tasks such as molecule-to-text retrieval and text-guided molecule editing, this method demonstrates certain improvements compared to autoregressive-based approaches. ## update after rebuttal I have read all the feedback and raised my score. Claims And Evidence: Some of the claims in this paper are far-fetched. I have outlined specific comments in the weaknesses section. Methods And Evaluation Criteria: Yes. Theoretical Claims: This paper has no theoretical proofs. Experimental Designs Or Analyses: Some baselines lack comparative analysis. I have detailed specific comments in the weaknesses section. Supplementary Material: Appendix. Relation To Broader Scientific Literature: The key contribution of this paper stems from the latent diffusion framework in the field of image generation. Essential References Not Discussed: Several similar text-conditioned generative models based on diffusion frameworks have already been proposed. I have listed the relevant works in the weaknesses part. Other Strengths And Weaknesses: **Strengths:** 1. The authors’ attempt to address text-conditioned molecule generation is a worthwhile and exploratory field. 2. The authors’ use of different SMILES sequences of the same molecule for contrastive learning is a novel approach. **Weaknesses:** 1. Some claims in the paper are far-fetched: a) The authors claim that the SMILES encoder can learn molecule structure information, but the proposed contrastive learning strategy relies more on whether the SMILES sequences originate from the same molecule. Structural differences are not explicitly modeled or learned. 
b) The authors claim, "By preparing an encoder to provide a chemically useful and interpretable feature space, our model can more easily connect the molecule data with the highly complicated condition of natural texts." How does the encoder trained with the proposed contrastive learning strategy achieve easier alignment with the text space? Additionally, the term "interpretable" lacks detailed explanation and validation. c) The authors claim they proposed "the first diffusion model that successfully surpassed autoregressive models in textual data generation." Such models are not novel, as many diffusion models have already been proposed for text-guided molecule generation, such as: [1] Periodic Materials Generation using Text-Guided Joint Diffusion Model. ICLR 2025. [2] Text-Guided Molecule Generation with Diffusion Language Model. AAAI 2024. [3] Hierarchical Graph Latent Diffusion Model for Conditional Molecule Generation. CIKM 2024. [4] Text-guided small molecule generation via diffusion model. iScience, 2024, 27(11). 2. The experimental results are insufficient: a) The authors only compared autoregressive models. Text-guided diffusion models should also be included as baselines. b) An ablation study on the contrastive learning component is necessary. Other Comments Or Suggestions: 1. The examples in Figure 4 only present the validity of the conditionally generated molecules. Whether these molecules meet the expectations of the given conditions should also be showcased. 2. The description of the text-guided molecule editing task is not sufficiently clear. The authors only refer to the DDS method. A brief outline of the steps should be included in the paper. 3. The model architecture of the encoder should be presented more clearly. At least, an architectural framework should be provided in the appendix. Questions For Authors: 1. 
How to understand the statement "we suggest a novel contrastive encoder learning strategy by minimizing mutual information between positive SMILES pairs"? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's detailed comments and valuable suggestions. Below, we provide thorough point-by-point responses to address your concerns.

**[bRG5] asked how the proposed contrastive learning can learn the structural differences.**

We would like to remind the reviewer that, in order to minimize the proposed contrastive loss, the cosine similarity between the features of different molecules in the batch needs to be minimized, which enables the encoder to reflect structural differences in its latent output. Please refer to the UMAP[1] visualization of LDMol's latent space in the following Figure 1 [[LINK]](https://docs.google.com/presentation/d/1UV4vMpHT4hrGa1I9PdswLgZ9PJ7vlPH22xbgFKpfo1U/edit#slide=id.p), where molecules with a shared Murcko molecular scaffold[2] lie closer together in the latent space.

___

**[bRG5] asked how our latent space construction helps the alignment with the text conditions.**

Compared to raw SMILES tokens or the structure-unaware latent of a $\beta$-VAE (Figure 3-(b)), our encoder provides a latent space where proximity is more structurally meaningful, hence more “interpretable” for the downstream diffusion model that needs to link the text information. Indeed, fingerprint similarities and FCD on the ChEBI-20 dataset were improved by 7~15%p and 40% over the previous baselines, meaning that LDMol made a reasonable guess even when it was incorrect.

___

**[bRG5] expressed concern about the claim on the model performance.**

While there have been many “text-conditioned” chemical diffusion models, we clarify that the term “textual data” generation refers to the text-like generation target of SMILES; most successful chemical diffusion models have targeted graph or point-cloud data, and diffusion models that generate SMILES[3] showed suboptimal performance compared to AR models. We maintain that we have shown the potential of improving diffusion model performance on text-like data with a carefully designed latent space. 
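The contrastive objective described above (pull features of SMILES variants of the same molecule together, push different molecules in the batch apart) can be sketched as an InfoNCE-style loss; the cosine-similarity formulation and the temperature value below are illustrative assumptions, not the paper's exact loss.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(feats_a, feats_b, temperature=0.1):
    """InfoNCE-style sketch: feats_a[i] and feats_b[i] are encoder features
    of two enumerated SMILES of molecule i (the positive pair); all other
    molecules in the batch serve as negatives."""
    n = len(feats_a)
    loss = 0.0
    for i in range(n):
        logits = [cosine(feats_a[i], feats_b[j]) / temperature for j in range(n)]
        log_denom = math.log(sum(math.exp(l) for l in logits))
        loss += -(logits[i] - log_denom)  # -log softmax at the positive index
    return loss / n

# Toy check: aligned embeddings (positives identical) should score a much
# lower loss than mismatched ones.
aligned = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mismatched = [[0.0, 1.0], [1.0, 1.0], [1.0, 0.0]]
low = contrastive_loss(aligned, aligned)
high = contrastive_loss(aligned, mismatched)
```

Minimizing such a loss forces features of different molecules apart while keeping features of enumerated SMILES of the same molecule close, which is the mechanism the visualization above illustrates.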
___ **Text-guided diffusion models as baselines.** Please note that TGM-DLM[4] is included as a text-guided molecule diffusion model baseline (line 340). TGM-DLM trained the diffusion model by naively treating the token index as a continuous real value, which led to a severe performance drop compared to LDMol. ___ **Ablation study on contrastive learning.** Please refer to the revised ablation studies that we included in the response to the reviewer `YcAV` above. To replace our suggested latent space construction, we tested two common strategies: naive reconstruction and a KL-regularized autoencoder. The latent space without any regularization could not be learned by the diffusion model, and the latent space from the $\beta$-VAE had suboptimal performance with notably low validity. ___ **[bRG5] asked about the prompt alignment score in the case studies.** Please refer to the table in the response to the reviewer `mgY4` below, where we measured the prompt alignment score for the prompts we tested using 1,000 samples. The result demonstrates that LDMol showed better and more consistent alignment across various hand-written prompts compared to the AR baselines. ___ **[bRG5] asked for a clearer description of the molecule editing process.** Our text-guided molecule editing is done similarly to Delta Denoising Score (DDS), where the input data is optimized to match the target prompt by minimizing the difference between the model-predicted noise for the (source_prompt, source_data) pair and for the (target_prompt, target_data) pair. Please note that Supplementary material A.3 contains a detailed description and pseudocode of our DDS-based text-guided molecule editing. ___ **The architecture of the encoder.** The encoder we used consists of 12 bidirectional transformer layers of BERT$_{base}$, with a feature size of 1024 and 16 attention heads. We thank the reviewer for the suggestion, and we will include this information in Supplementary material A.2.
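The DDS-style editing loop described above can be sketched as follows. Everything here is a hypothetical stand-in: `toy_eps` and the prompt `anchors` replace the trained conditional noise predictor, so this only illustrates the update rule (subtracting paired noise predictions so prompt-independent error terms cancel), not the actual LDMol/DDS implementation:

```python
import random

def dds_edit(x_src, eps_model, src_prompt, tgt_prompt,
             steps=200, lr=0.05, seed=0):
    """DDS-style editing sketch: update the target sample with the
    difference between the noise predicted for (target_prompt, target_data)
    and for (source_prompt, source_data). eps_model(x, t, prompt) is a
    hypothetical stand-in for a trained conditional noise predictor."""
    rng = random.Random(seed)
    x_tgt = list(x_src)
    for _ in range(steps):
        t = rng.random()  # random diffusion timestep in (0, 1)
        grad = [e_t - e_s for e_t, e_s in zip(
            eps_model(x_tgt, t, tgt_prompt),
            eps_model(x_src, t, src_prompt))]
        x_tgt = [v - lr * g for v, g in zip(x_tgt, grad)]
    return x_tgt

# Toy predictor: "noise" points from x toward a prompt-specific anchor.
anchors = {"soluble": [1.0, 0.0], "insoluble": [-1.0, 0.0]}
def toy_eps(x, t, prompt):
    a = anchors[prompt]
    return [xi - ai for xi, ai in zip(x, a)]

edited = dds_edit([-1.0, 0.0], toy_eps, "insoluble", "soluble")
# edited converges to approximately [1.0, 0.0], the target-prompt anchor
```

In the toy setup the update contracts geometrically toward the target anchor, mirroring how the real procedure moves the latent toward samples consistent with the target prompt while keeping source-shared structure fixed.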
___ **[bRG5] questions the statement about minimizing mutual information.** The mentioned statement describes our contrastive latent space construction with SMILES enumeration, in contrast to previously suggested molecular contrastive learning methods[4]. Since SMILES enumeration provides all possible variations under the SMILES grammar, the enumerated SMILES pair shares minimal mutual information (i.e., only the connectivity of atoms and bonds) compared to simple or local augmentations. As a result, while most contrastive learning focuses on “extracting” certain desired features, our contrastive learning can “fully preserve” the structural information as the augmentation-invariant signal, effectively becoming an autoencoder. ___ [1] UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, arXiv 2018. [2] The Properties of Known Drugs. 1. Molecular Frameworks, Journal of Medicinal Chemistry 1996. [3] Text-Guided Molecule Generation with Diffusion Language Model, AAAI 2024. [4] Graph contrastive learning with augmentations, NeurIPS 2020. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their responses. I appreciate the additional experiments on prompt alignment scores for text-guided conditional generation as well as the ablation study on contrastive learning. These efforts help to strengthen the empirical support of the work, and accordingly, I will raise my score. That said, I still find that some of the claims in the paper may be overstated and would encourage the authors to revise the corresponding language for greater precision and clarity. Specifically: a) The statement “a latent where the proximity is more structurally meaningful, hence more interpretable for linking the text information” lacks a well-substantiated connection between structural properties and text semantics. It remains unclear how the notion of "structural meaningfulness" in the latent space translates into improved interpretability with respect to textual descriptions.
Additional analysis or justification would be helpful to substantiate this claim. b) While the additional clustering results are appreciated, they do not convincingly support the assertion that “the proposed method learns structural information.” In the absence of explicit modeling of structural features, it is difficult to determine what aspect of the data the model is actually leveraging. It remains a plausible alternative that the model is simply grouping molecules based on elemental composition rather than higher-level structural similarity. To more convincingly support the claim of structural learning, it would be helpful to demonstrate that the model can cluster molecules that are structurally similar despite differing in atomic composition. I encourage the authors to refine these claims in the final version to more accurately reflect what is supported by the current evidence. --- Reply to Comment 1.1.1: Comment: Thank you for raising the score towards acceptance; we are pleased that our rebuttal successfully addressed your previous concerns. Due to the time constraint, we include responses to your additional comments below, and we assure you that additional analyses with more refined statements will be included in the final draft. - Assuming that most of the controllable conditions (e.g., functional groups, internal properties) have an unavoidable correlation with the molecular structure, we maintain that a data domain retaining structural information would benefit the generative model, regardless of the modality of the condition (e.g., natural text). For instance, training a conditional diffusion model would be easier if the molecules satisfying the condition form certain manifolds or clusters. - We assure the reviewer that each group of molecules clustered in the UMAP latent space shares only a scaffold as a high-level molecular structure, and the atomic-level composition of each molecule varies through its side chains and functional groups.
From the clustering of molecules with similar overall molecular structures, we concluded that the LDMol latent space retains structural information, in contrast to naive reconstruction-based latents. We thank the reviewer again for the constructive feedback and for raising the score.
Summary: - This paper proposes a SMILES-based latent diffusion method for the text-driven molecule generation task. At the pretraining stage, it augments the data with enumerated SMILES and aligns them via a traditional contrastive learning approach, pushing the model to learn invariant features from SMILES. With the pretrained molecule encoder, a decoder that can successfully reconstruct the molecule, and a frozen text encoder, this paper trains a latent diffusion model that shows competitive performance with other auto-regressive SMILES-based models. Claims And Evidence: - The assumption that previous methods treated enumerated SMILES differently is well supported by Figure 3 (b), which shows the distance diminishes with contrastive learning. - The statistics of the enumerated SMILES need to be discussed, along with some introduction to the original and augmented datasets. For instance, what are the sizes of the molecules? What are the elements and functional groups of the molecules? What are the lengths and contents of the informative text descriptions? How are they aligned, and how many enumerated SMILES pairs are there on average? - The assumption that contrastive learning would help the model generate better molecules should be further discussed and needs an experiment on the final downstream tasks. Even if Figure 3 (b) shows that the distance between enumerated SMILES was large without contrastive learning, it may still be necessary to show the performance under the original settings. If the main contribution is introducing contrastive pre-training, then the authors may need to try the proposed method on those auto-regressive models to show better performance. It is hard to conclude whether the competitive result comes from the pre-training, the compression layers, or the diffusion models. Methods And Evaluation Criteria: Yes, the authors follow previous works such as MolT5. They compared the performance among auto-regressive models under the same metrics. Theoretical Claims: Yes, I have checked the theoretical part.
The major theoretical claim is that contrastive learning helps the model learn invariant features from SMILES, and it is well supported by Figure 3. Experimental Designs Or Analyses: I have checked the experiment design. Related concerns: The paper should include a comprehensive evaluation of the model's final performance on downstream tasks within the experimental section. Simply claiming that contrastive learning benefits the final-stage performance without providing empirical evidence is insufficient. A quantitative comparison with baseline methods is necessary to demonstrate the advantages of the proposed approach. - The current ablation study lacks clarity and depth. While the compression layer is not presented as the primary contribution of the paper, it evidently plays a crucial role in downstream task performance. The authors should conduct a more thorough investigation into the function of this layer by analyzing the latent space representations. Providing insights into how the compression layer influences model performance would significantly strengthen the paper. - The dataset augmentation should be explicitly examined in the experiments. Since longer SMILES strings naturally generate more enumerated pairs, is it necessary to use all of them? If not, what is the optimal amount of augmented data required for effective training? Specifically, the authors should investigate whether varying the amount of augmented SMILES pairs lets the model abstract away from explicit SMILES grammar constraints, or whether traces of the grammar remain in the model. Supplementary Material: Yes. All the Appendix part. Relation To Broader Scientific Literature: - The major contribution of this paper is proposing a pretraining stage with an augmented dataset and contrastive learning to help the model learn invariant features from SMILES.
- The second contribution is that, as one of the early latent diffusion methods for text-driven molecule generation, it shows comparable performance to AR-based methods. Essential References Not Discussed: - As one of the early latent diffusion methods for text-driven molecule generation, the related references are included. Other Strengths And Weaknesses: __Strengths:__ - Clearly written and easy to follow; the figures are very clear and helpful for understanding. - Originality: the ideas of data augmentation and contrastive learning are simple and have been used in unconditional molecule generation before, but the originality of this paper is sufficient for text-driven molecule generation at this early stage. __Weakness:__ please check __Experimental Designs Or Analyses__ and __Claims And Evidence__. Other Comments Or Suggestions: N/A Questions For Authors: Please check __Experimental Designs Or Analyses__ and __Claims And Evidence__. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer YcAV for the constructive feedback. Below, we provide point-by-point responses to your questions and concerns. **[YcAV] asked for the statistics of the enumerated SMILES.** Please note that the SMILES enumeration we introduced yields different SMILES constructions of the same molecule and does not modify the given molecule. Therefore, the molecule size, atoms, bonds, etc., are the same as in the initial dataset, PubChem, which contains a wide range of general molecules. We used 10 million randomly sampled molecules from PubChem to construct the contrastive latent space, where SMILES longer than 500 characters were removed. No paired text descriptions were used in this training phase. ___ **[YcAV] requested the analysis and ablation study on the effect of contrastive learning on the generation task.** Please refer to the revised ablation studies below. The first two rows, (a) and (b), show the model performance when the contrastive-learning-based latent space is replaced with two common latent space construction strategies: naive reconstruction and a KL-regularized autoencoder. The latent space without any regularization could not be learned by the diffusion model, and the latent space from the $\beta$-VAE had suboptimal performance with notably low validity. (Blank cells below inherit the value from the row above.)

| Models | Latent space construction | Stereoisomer hard-negatives | Latent space compression | AE recon. acc. [↑] | ChEBI-20 Validity [↑] | ChEBI-20 Match [↑] | ChEBI-20 FCD [↓] |
|---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| LDMol | Contrastive loss | O | Linear | 0.983 | **0.941** | **0.530** | **0.20** |
| (a) | None | (N/A) | | **1.000** | 0.019 | 0.000 | 58.6 |
| (b) | KL regularization | (N/A) | | 0.999 | 0.847 | 0.492 | 0.34 |
| (c) | | X | | 0.891 | 0.939 | 0.278 | 0.24 |
| (d) | | | None | 0.964 | 0.022 | 0.000 | 67.9 |
| (e) | | | Transformer layers | 0.986 | 0.565 | 0.084 | 2.19 |

Although we appreciate the reviewer’s suggestion, our latent construction via contrastive learning is currently inapplicable to AR models, since they do not utilize any latent domain. It might be possible to apply a contrastive loss to an intermediate feature space similarly to [2], but we believe this is beyond the scope of our work. ___ **[YcAV] asked for further analysis on the compression layer.** Since the latent space compression is done by a single linear layer, its role is to reduce the dimension for diffusion modeling rather than to perform extra curation of the features. Please refer to the UMAP[1] latent visualization in the following Figure 1 [[LINK]](https://docs.google.com/presentation/d/1UV4vMpHT4hrGa1I9PdswLgZ9PJ7vlPH22xbgFKpfo1U/edit#slide=id.p), where the feature space is structurally informative both before and after the compression layer. Also, we consistently observed that the role of the compression layer should be minimized, as we had to leverage the smooth and regulated latent space from the contrastive learning. Please note that the model performance degraded as the diffusion model’s target domain deviated from the contrastive latent space, even when the compression layer capacity was increased (see row (e) of the table above). ___ **[YcAV] suggested further clarification and examination of the SMILES enumeration.** Longer SMILES have more enumerated SMILES pairs, as the reviewer noted, and we randomly selected one possible enumeration for each pre-training epoch.
Therefore, the number of enumerated pairs the model encountered for each training sample is mostly equal, except for molecules that are extremely small. Even fixing a single enumeration for each training sample was enough for the encoder to learn the enumeration-invariant features, as shown by the histogram of enumerated SMILES pair distances in the following Figure 2 [[LINK]](https://docs.google.com/presentation/d/1UV4vMpHT4hrGa1I9PdswLgZ9PJ7vlPH22xbgFKpfo1U/edit#slide=id.g3473ce19713_0_1). ___ [1] UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction, arXiv 2018. [2] Representation Alignment for Generation: Training Diffusion Transformers Is Easier Than You Think, ICLR 2025.
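The per-epoch enumeration sampling described above can be sketched as follows. The SMILES strings and seeding scheme here are illustrative placeholders; real enumerations of a molecule would be generated with a cheminformatics toolkit such as RDKit (e.g., `Chem.MolToSmiles(mol, doRandom=True)`):

```python
import random

# Placeholder enumerations: alternative SMILES strings of the same molecule.
ENUMS = {
    0: ["CCO", "OCC", "C(O)C"],        # ethanol
    1: ["c1ccccc1C", "Cc1ccccc1"],     # toluene
}

def sample_enumeration(epoch, mol_id, enums=ENUMS):
    """Pick one SMILES enumeration per molecule per pre-training epoch.
    Re-seeding per (epoch, molecule) means each epoch shows the encoder a
    different variant of the same molecule, pushing it toward
    enumeration-invariant features."""
    rng = random.Random(epoch * 1_000_003 + mol_id)
    return rng.choice(enums[mol_id])

# Over many epochs, a molecule is seen under more than one enumeration.
variants_seen = {sample_enumeration(e, 0) for e in range(50)}
assert len(variants_seen) >= 2
```

The seed mixing of epoch and molecule id is one simple way to keep the per-epoch choice deterministic and reproducible; any per-epoch reshuffle would serve the same purpose.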
Scalable Non-Equivariant 3D Molecule Generation via Rotational Alignment
Accept (poster)
Summary: This paper proposes the Aligned Latent Diffusion Model (ALDM) to improve 3D molecule generation by relaxing SE(3) equivariance constraints in diffusion models. Instead of enforcing equivariance, ALDM learns a sample-dependent SO(3) transformation using an autoencoder to align molecular representations, enabling the use of non-equivariant diffusion models such as vanilla GNNs and transformers. This approach significantly improves scalability and efficiency while maintaining state-of-the-art sample quality, outperforming previous non-equivariant models and matching equivariant baselines in molecule stability and validity. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method is evaluated on appropriate datasets against other baselines. Theoretical Claims: The algorithms in this paper are correct. Experimental Designs Or Analyses: Yes, the proposed method is evaluated on appropriate datasets against other baselines. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: 1. Improved Efficiency & Scalability – By removing SE(3) equivariance constraints, ALDM enables the use of simpler, more scalable architectures (e.g., GNNs, transformers), reducing computational costs and improving sampling speed. 2. Flexible Model Design – Unlike equivariant models that require specialized architectures (e.g., EGNNs), ALDM allows standard non-equivariant GNNs and transformers, making implementation and optimization more accessible. 3. Well-Motivated Theoretical Insight – The paper challenges the necessity of equivariance in 3D molecule generation, providing a principled alignment-based approach that removes redundancy while maintaining symmetry-aware representations. Weaknesses & Questions: 1. Performance compromises.
ALDM does not achieve SOTA performance in every column on QM9 in Table 1. 2. What’s the difference between Eq. 17 and regular message-passing GNNs? EGNN mainly converts positions to distances to achieve equivariance. Without this design, Eq. 17 becomes a regular GNN. 3. Alignment Quality Uncertainty – The autoencoder-based alignment is learned in an unsupervised manner, making it unclear whether the learned SO(3) transformations always provide an optimal latent representation for downstream diffusion modeling. 4. Limited Theoretical Justification on Performance Trade-offs – While the paper argues against the necessity of equivariance, it does not provide a rigorous theoretical analysis explaining why non-equivariant models can fully match or surpass equivariant ones under all conditions. 5. No code available. No appendix with more information, e.g., the dataset information, the detailed experimental settings, the optimal hyperparameters, etc. Other Comments Or Suggestions: NA Questions For Authors: See Weaknesses & Questions Code Of Conduct: Affirmed. Overall Recommendation: 3
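Question 2 above turns on the fact that distance-based message inputs (as in EGNN) are rotation-invariant, while concatenated raw coordinates (as in Eq. 17-style plain GNNs) are not. A minimal stdlib-only numerical check; the toy points and feature choices are illustrative, not the paper's actual layers:

```python
import math

def rotate_z(p, theta):
    """Rotate a 3D point about the z-axis."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Toy "molecule": two atoms, before and after a rotation of the whole frame.
p1, p2 = (1.0, 0.0, 0.0), (0.0, 2.0, 1.0)
q1, q2 = rotate_z(p1, 0.7), rotate_z(p2, 0.7)

# EGNN-style message input (pairwise distance): unchanged by rotation.
assert abs(dist(p1, p2) - dist(q1, q2)) < 1e-12

# Plain-GNN-style input (concatenated raw coordinates): changes.
assert (p1 + p2) != (q1 + q2)
```

This is exactly why a plain GNN over raw coordinates is non-equivariant, and why ALDM instead aligns the inputs (via learned rotations) before handing them to such a network.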
Rebuttal 1: Rebuttal: We thank Reviewer bfzT for taking the time to review our paper and provide valuable feedback. We respond to the concerns as follows. **Not SOTA on QM9**: Our paper mainly aims to improve non-equivariant diffusion models. On QM9, compared with the best non-equivariant baseline GraphLDM-aug, all three variants of our model improve the performance drastically, which validates the main claims of our paper. In fact, our best model ALDM$\_{\text{DiT-B}}$ achieves the highest Validity (94.2%) on QM9 across all (equivariant and non-equivariant) baselines and is only negligibly worse than GeoLDM on other metrics. We kindly request the reviewer to also consider the efficiency improvement shown in Table 3. Overall, our model achieves the best performance-efficiency trade-off on QM9. Additionally, on GEOM-DRUGS, which is much larger and more challenging than QM9, our model achieves the best performance. **Difference between Eq. 17 and regular GNN**: Eq. 17 is indeed a regular GNN. We use it as a basic non-equivariant architecture (for the decoder and the noise prediction network of the diffusion model). Unlike EGNN, it simply concatenates atom coordinates and types and is thus non-equivariant. In Table 1, both ALDM$\_{\text{GNN}}$ and the baseline GraphLDM (from the GeoLDM paper) use Eq.17 as the noise prediction network, except that learned rotations are applied to the latent space of ALDM$\_{\text{GNN}}$. The performance gain validates that the aligned latent space can improve the training of non-equivariant diffusion models. **Alignment Quality Uncertainty**: The lack of supervision is an inherent challenge for generative modeling. Our approach manages to learn rotations in such an unsupervised setting, which we believe is part of our contribution rather than a weakness of our model. 
Furthermore, the theme of our paper is to show that “alignment” is better than “no alignment” for non-equivariant diffusion models, which is already supported by our experiments. We agree with the reviewer that investigating the “optimality” of the aligned representations would be interesting, but we think it is currently beyond the scope of this paper. We’d love to explore this in future work. **Limited Theoretical Justification on Performance Trade-offs**: In theory [1], diffusion models are able to learn the dataset distribution as long as the score function is learned well, without imposing any constraints (e.g., equivariance) on the target distribution. Therefore, non-equivariant diffusion models, in principle, have the full potential to learn the dataset distribution. On the contrary, previous equivariant diffusion models took equivariance for granted and didn’t justify why equivariance is necessary. Additionally, a few recent works (e.g., [2] (AlphaFold 3), [3]) also demonstrate the superior performance of non-equivariant diffusion models, though the specific problems they address are different from ours. **Code and detailed experimental settings**: We already detailed the experimental setup in the main body of the paper (as mentioned by Reviewer t7B8: “The authors provided sufficient details about their experiment settings”). Specifically, we describe the dataset information in Section 5.1. We follow exactly the dataset preprocessing and splits of the baselines EDM and GeoLDM and point to their public repositories. In Section 5.2, we introduce the hyper-parameters (learning rate, number of layers, hidden dimension, batch size, etc.) that we use before presenting the experimental results. We will organize this information in a better format (e.g., using tables) in the next version. Our code will be released after the paper is accepted.
[1] https://arxiv.org/abs/2011.13456 \ [2] https://www.nature.com/articles/s41586-024-07487-w \ [3] https://proceedings.mlr.press/v235/wang24q.html We kindly request Reviewer bfzT to consider raising their score if they think our response can address the concerns. We are more than happy to answer any further questions! --- Rebuttal Comment 1.1: Comment: Thanks for the authors' responses. I have raised my score. --- Reply to Comment 1.1.1: Comment: We thank Reviewer bfzT for acknowledging our rebuttal and are pleased that our responses addressed the concerns.
Summary: This paper proposes learning a roto-aligned latent space for 3D molecule generation. The alignment is achieved by learning a sample-wise rotation with an autoencoder. More specifically, ALDM adopts an equivariant encoder and a non-equivariant decoder, largely alleviating the constraints on architecture design and making Transformers a viable option for large-scale pretraining on small molecules. Experiments on QM9 and GEOM-DRUGS show that ALDM performs better than non-equivariant baselines, and on par with SOTA equivariant models. Claims And Evidence: Equivariance constraints may not be necessary for molecule generation. Successfully supported. Methods And Evaluation Criteria: Yes Theoretical Claims: N/A Experimental Designs Or Analyses: 1. Unconditional generation: The ablation of the decoder choice is interesting, and it shows DiT's advantage over a simple EGNN. The overall comparison is decent; ALDM achieves competitive results with GeoLDM. 2. Conditional generation: The results look normal, even though property-conditioned generation is not so useful. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: There are some stronger baselines, yet the reviewer believes the results are enough to support the main claims. Other Strengths And Weaknesses: Strengths 1. The idea is simple, but elegant and effective, making non-equivariant Transformers more applicable to large-scale pretraining on molecules. The scaling laws and emergent abilities of Transformers have already been demonstrated in other domains. 2. Generally good results, making the claim convincing. Weaknesses 1. Lack of stronger baselines. 2. Lack of analysis of the learned alignment (the sample-wise rotation) and the latent representation. The reviewer is quite curious about: 1. Does the learned rotation carry some physical or chemical insight? Is there a natural canonical orientation for 3D molecules? 2. Does the VAE really learn chemical/structural semantics in the latent space?
More analysis or visualization would be insightful. Other Comments Or Suggestions: No Questions For Authors: See weakness. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer Biv1 for taking the time to review our paper and provide valuable feedback. We are glad to see that they found our approach interesting and our results good enough to support our claims. We respond to the concerns as follows: **Lack of stronger baselines**: We thank the reviewer for pointing this out! We will add more recent baselines (e.g., [1]) in an updated version of our paper. The performance of our non-equivariant models is still highly competitive compared with stronger equivariant baselines, and as the reviewer said, we believe the existing experimental results are sufficient to validate the effectiveness of our approach. [1] https://arxiv.org/abs/2403.15441 **Lack of analysis of learned alignment and latent representation**. We provide more analysis below. First, for a single molecule, its canonical form can be obtained (e.g., by applying PCA to atom coordinates). However, the challenge is that such canonicalization may not have consistent effects for different molecules. For example, the directions (or signs) of the principal components obtained by PCA are ambiguous. Moreover, applying PCA to atom coordinates ignores the dependencies between coordinates and atom types, which should be generated together. Therefore, we believe it makes more sense to learn canonical forms across the training set by maximizing the overall reconstruction quality. As an ablation, we report the results based on PCA: PCA$\_{\text{GNN}}$ achieves 82.8% molecule stability on QM9 and 81.1% molecule stability on DRUGS. Its performance is better than the best non-equivariant baseline GraphLDM but still significantly worse than ALDM$\_{\text{GNN}}$ (87.4% on QM9 and 83.0% on DRUGS). To analyze the rotations learned by the autoencoder, we randomly sample a mini-batch of molecules from the training set and compare their raw positions and the positions after rotations. 
We provide some visualization at the following anonymous link: https://docs.google.com/document/d/e/2PACX-1vQeBYa4zljEoIM_h-4cvlOH-unSQFJhDgwVuq0FxtISIXZJ-T9T8q53OoWOKtoMR0d4O1SA7Vp12xq3/pub We indeed found that common structural semantics (e.g., rings) of some molecules are easier to recognize after the alignment. Since our encoder is equivariant, such rotations are applied to the latent representations in the same way and thus make them more aligned. We hope our response can address the reviewer's concerns. We are happy to answer any further questions. --- Rebuttal Comment 1.1: Comment: 1. In your response, I think 81.1% should be atom stability on DRUGS, right? 2. It will be helpful if you can extract some molecule-level latent representations and conduct analysis/visualization based on them. 3. What is PCA_{GNN}? Why is it so good? Actually, I'm interested in your learned representations rather than the coordinates. 4. What do you mean by "rings are easier to recognize after alignment"? I can see that, after alignment, the molecules tend to arrange the ring along a very similar central axis, i.e., perpendicular to the plane of the paper. Does this reflect some natural canonical orientation for most of the molecules? Why does the model learn this kind of orientation? --- Reply to Comment 1.1.1: Comment: We thank Reviewer Biv1 for reading our rebuttal. 1. Yes. Thanks for catching the typo. 2. We add the visualization of latent representations in the following link: https://docs.google.com/document/d/e/2PACX-1vQIraK3xLJadm1U3iONPV4u6ZW52EbTofu4027WMFAhJylIplEZhv-BkgkJZWMu99yb75F88yrCAfgF/pub The first row shows molecule samples from the training set. The second row applies the learned rotations to the original molecules. The third row exhibits the latent representations generated by the encoder for the corresponding molecules.
Specifically, the encoder (an EGNN) generates a 4-dimensional latent representation for each atom, and the molecule-level latent representation is a set of atom representations. The first 3 dimensions of the atom latent representation are equivariant features (i.e., the $\mathbf{x}$ part of Eq (15)), and the last dimension encodes the invariant feature (i.e., the $\mathbf{h}$ part of Eq (15)). To plot latent representations, we treat the first 3 dimensions as coordinates and map the continuous values of the last dimension to colors (using colormaps of Matplotlib). As we can observe from the visualization, the latent representations very well preserve the relative positions between atoms, and the colors are well separable (indicating that atom types can be easily decoded). 3. PCA$\_{\text{GNN}}$ is a baseline that applies preprocessing (i.e., PCA) to find a canonical form of a molecule. PCA finds the three directions with the largest variances in atom coordinates (via calculating eigenvectors of the covariance matrix) and makes them the new axes. PCA$\_{\text{GNN}}$ then runs a non-equivariant diffusion model (with a basic GNN denoising network Eq (17)) over the new coordinates. The good performance of PCA$\_{\text{GNN}}$ supports the benefit of alignment. However, as we explained in the rebuttal, the directions of the axes found by PCA are ambiguous (because eigenvectors are still valid if signs are flipped), which can make the canonicalization inconsistent. Furthermore, PCA ignores the dependencies between atom coordinates and other atom features (e.g., types) that should be generated together. The performance of PCA$\_{\text{GNN}}$ is significantly worse than our model ALDM$\_{\text{GNN}}$, which further supports our approach that uses neural networks to learn alignment. 4. By "rings are easier to recognize after alignment" we meant that after alignment molecules tend to arrange rings in similar orientations. 
We speculate that the model learns such orientation because a large fraction of molecules with rings in the training set already have (or are not far away from) this orientation (e.g., the 3rd, 4th and 5th of the visualization), so it is easier for the model to align the remaining molecules with this orientation. In general, we believe the specific canonical form learned by the model depends on the dataset and the stochastic training process and may not be pre-determined.
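The PCA sign ambiguity discussed in point 3 above — an eigenvector $v$ and its negation $-v$ are equally valid — can be checked directly on a 2×2 covariance matrix. A stdlib-only sketch of the 2D case; the point set and closed-form solver are illustrative, not from the paper:

```python
import math

def principal_axis(points):
    """Leading eigenvector of the 2x2 covariance of 2D points, in closed
    form. The result is only defined up to sign: v and -v are equally
    valid, which is the canonicalization ambiguity discussed above."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    cxx = sum((p[0] - mx) ** 2 for p in points) / n
    cyy = sum((p[1] - my) ** 2 for p in points) / n
    cxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Larger root of the characteristic polynomial of [[cxx, cxy], [cxy, cyy]].
    lam = 0.5 * (cxx + cyy) + math.sqrt(0.25 * (cxx - cyy) ** 2 + cxy ** 2)
    vx, vy = lam - cyy, cxy  # unnormalized eigenvector (assumes cxy != 0)
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm), (cxx, cxy, cyy), lam

pts = [(0.0, 0.0), (1.0, 1.1), (2.0, 1.9), (3.0, 3.0)]
v, (cxx, cxy, cyy), lam = principal_axis(pts)

# Both v and -v satisfy C u = lam * u, so PCA alone cannot fix the sign.
for u in (v, (-v[0], -v[1])):
    Cu = (cxx * u[0] + cxy * u[1], cxy * u[0] + cyy * u[1])
    assert abs(Cu[0] - lam * u[0]) < 1e-9
    assert abs(Cu[1] - lam * u[1]) < 1e-9
```

This is the inconsistency the rebuttal points to: a PCA-based canonicalization can flip axes arbitrarily between molecules, whereas a learned alignment can choose signs and orientations consistently across the training set.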
Summary: The paper explores a non-equivariant alternative for 3D molecule generation. It uses an explicit rotation network to rotate zero-centered molecule coordinates and builds a latent space on top of the rotated coordinates. The latent space is learned via a VAE objective and encodes aligned features. This allows one to learn a non-equivariant diffusion model on the latent space, as motivated by models used in the aligned 3D point-cloud literature. The method achieves generation performance comparable to equivariant counterparts while achieving faster sampling speed. ## Update after rebuttal The authors have adequately addressed my concerns regarding the non-equivariant decoder and scaling experiments. I increased my score accordingly. Claims And Evidence: - The authors claim that the non-equivariant diffusion model allows more "scalability" while no experiments are done to validate that this model "scales". - The authors claim that a non-equivariant decoder is necessary to learn the aligned latent representation. Why is this the case? The paper does not further elucidate the intuition and does not design experiments to validate the claim. - The design of the encoder makes it such that the latent representation is supposedly aligned, which is the core of the method. However, the authors do not conduct experiments to investigate whether the learned latent spaces are actually aligned. Some simple PCA or other analysis could be done to investigate this further. Methods And Evaluation Criteria: The paper mostly follows previous papers in terms of datasets and in terms of evaluating generation quality. Theoretical Claims: The method does not contain theory. Experimental Designs Or Analyses: Pros: - The paper validates that the non-equivariant diffusion is faster due to the parallelizable transformer network by showing sampling wall-clock time. It similarly shows by experiments that the generation performance is comparable.
Cons: - One simple baseline the paper is missing is simply to train a fully non-equivariant autoencoder and a non-equivariant latent diffusion model with the same architecture. This would validate the claim that the supposedly aligned latent features and the newly introduced rotation network are actually useful. It is not clear whether the retained performance comes from the aligned latent space or the new transformer itself. - Following the above point, it is also not clear whether the latent space is actually aligned or whether it could be optimized away by some non-equivariant encoder. Supplementary Material: No supplementary materials are presented. Relation To Broader Scientific Literature: The paper aims to advance core algorithms of drug generation and increase efficiency of generation while retaining quality. Essential References Not Discussed: Related works are properly discussed. Other Strengths And Weaknesses: Strength: - The writing is very clear and easy to follow. Weaknesses: - The proposal is incremental and novelty is limited. - The results show marginal improvement in terms of quality and some improvement in sampling speed, only due to a change of architecture. The results do not strongly justify the reason for adopting this method. - The paper claims the aligned feature helps, but does not actually conduct experiments to check that the latent space is aligned and that the improvements are due to this design rather than other factors such as changing to a different transformer architecture. Other Comments Or Suggestions: As mentioned above. Questions For Authors: I have listed all my concerns in previous sections and wish the authors to address them. I can consider increasing my score if they are properly addressed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer uMCt for taking the time to review our paper and provide valuable feedback. We respond to the concerns as follows: **Scalability of non-equivariant diffusion models**: In fact, we did experiments to show the scalability of our model. We kindly refer the reviewer to Table 3 on page 8, where we show the model sizes. We tested non-equivariant denoising networks of three different sizes for the diffusion model: GNN is from Eq (17), DiT-S is the small model from the diffusion transformer paper, and DiT-B is the base version with a larger hidden dimension (4 times the \#params of DiT-S and more than 10 times the \#params of GeoLDM). Our largest model achieves the best performance on the DRUGS dataset and still maintains higher training/sampling efficiency than existing equivariant diffusion models. **Why is a non-equivariant decoder necessary to learn alignment?**: Let $x$ denote the original atom coordinates and $x' = \mathcal{D}(\mathcal{E}(x))$ denote the reconstructed coordinates. The reconstruction error reduces to the L2 loss $\Vert x' - x \Vert^2$ for continuous coordinates. Now suppose $x$ is rotated by $R$, i.e., $Rx$. If both the encoder and decoder are equivariant, the reconstruction would become $Rx'$, and the L2 loss between $Rx$ and $Rx'$ would be the same as $\Vert x' - x\Vert^2$ (because $R$ is orthogonal), making the rotation network unable to receive any supervision signals. Therefore, the encoder and decoder should not both be equivariant. We chose to make the decoder non-equivariant and keep the encoder equivariant because we wanted to compare directly with the baseline GeoLDM. GeoLDM uses equivariant networks for both encoder and decoder. We use the same equivariant encoder as GeoLDM to keep the architectural factors affecting the generation of latent representations the same, except that our model applies the learned rotations. **Ablations on the architecture and latent space**. 
Our experiments already contain ablations. We kindly refer the reviewer to the performance of GraphLDM and ALDM$\_{\text{GNN}}$ in Table 1. GraphLDM is a non-equivariant diffusion model baseline from the GeoLDM paper that replaces the original equivariant denoising network of the diffusion model with a non-equivariant network (Eq 17). ALDM$\_{\text{GNN}}$ also uses Eq (17) as the denoising network for diffusion and the same hyper-parameters (noise schedule, diffusion steps, etc.). Therefore, the difference between ALDM$\_{\text{GNN}}$ and GraphLDM lies in that the latent space of ALDM$\_{\text{GNN}}$ is aligned by rotations. The experimental results show a significant performance gain of ALDM$\_{\text{GNN}}$ compared with GraphLDM, which validates that the aligned representations improve the learning of non-equivariant diffusion models. We also tried to use DiT for GraphLDM, but the gap was still significant. Additionally, we randomly sample a mini-batch of molecules from the training set and compare their original and rotated positions. The visualization is contained in the following anonymous link: https://docs.google.com/document/d/e/2PACX-1vQeBYa4zljEoIM_h-4cvlOH-unSQFJhDgwVuq0FxtISIXZJ-T9T8q53OoWOKtoMR0d4O1SA7Vp12xq3/pub We indeed found that common structural semantics (e.g., rings) are easier to recognize after rotation. Since our encoder is equivariant, these rotations are applied to the latent representations in the same way and thus make them more aligned. **Novelty of the proposed method**. While the evaluation of novelty depends on one's own perspective, we want to emphasize that our paper is not simply “a change of architecture”. In fact, in this paper, we switch from equivariant models to non-equivariant models and propose an approach to close the gap between them. 
As we show in the experiments (explained above), the improvement brought by our approach is significant even if we use the same non-equivariant architecture (i.e., a basic GNN) for the diffusion model. Besides, our model not only improves the sampling speed, but also significantly saves training cost. On QM9, training the baseline equivariant diffusion model (e.g., GeoLDM) to convergence takes nearly 3000 epochs, which needs 4 days to finish on a single 4090 GPU (in Table 3 on page 8 we show the average training time per epoch). As a comparison, the training of our largest model ALDM$\_{\text{DiT-B}}$ takes 3 days on a 4090 GPU. We use a larger batch size and thus more epochs for ALDM$\_{\text{DiT-B}}$, but the total training cost is still reduced significantly. We believe this provides more support for our model. We hope our response can provide a clearer explanation of the key points of our work and kindly request Reviewer uMCt to consider revising their score if they think our response can address their concerns. We are happy to answer any further questions.
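The rotation-invariance argument from the rebuttal above (if both encoder and decoder are equivariant, an L2 reconstruction loss is unchanged by rotation, so the rotation network receives no supervision) is easy to verify numerically. The following is a toy sketch with synthetic coordinates, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "atom coordinates" and an imperfect reconstruction of them.
x = rng.normal(size=(10, 3))
x_rec = x + 0.1 * rng.normal(size=x.shape)

# A random proper rotation built from a QR decomposition.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(q) < 0:
    q[:, 0] *= -1  # flip one axis so det(q) = +1

loss = np.sum((x - x_rec) ** 2)
loss_rotated = np.sum((x @ q.T - x_rec @ q.T) ** 2)
# Because q is orthogonal, rotating both the input and the reconstruction
# leaves the L2 loss unchanged, so a rotation applied equivariantly by both
# encoder and decoder yields no gradient signal for the rotation network.
```

Running this, `loss` and `loss_rotated` agree up to floating-point error, which is exactly why the decoder must break equivariance for the rotation network to learn anything.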
Summary: The paper suggests learning a sample-dependent rotational transformation during molecule generation. This approach aligns the molecules to specific directions, eliminating the necessity of employing equivariant models. The proposed method demonstrates promising performance and efficiency on benchmark datasets. Claims And Evidence: The claims presented in this paper are substantiated by references and detailed experiments. Methods And Evaluation Criteria: First and foremost, I would like to emphasize that the authors’ concept of “aligning” molecules is known as *canonicalization* [1], and the resulting molecules are referred to as “canonical forms”. The authors inadvertently overlooked an extensive body of research on canonicalization. Notably, their method of learning a rotation matrix using a network is a weaker version of *learned canonicalization* [2,3], where the authors do not strictly enforce rotational equivariance on the canonicalization network. Since the authors’ approach is essentially canonicalization, it inherits the drawbacks of canonicalization methods. For instance, it imposes an additional computational burden on the network to learn canonicalization and is unlikely to be more sample-efficient than equivariant networks. Furthermore, it lacks smoothness, meaning that a slight perturbation of the input molecule can result in a significant change in its canonical form. The authors propose employing a non-equivariant network to learn the rotation matrix, so the learned “canonical form” $\mathbf{R}\_\theta\mathbf{x}$ is not guaranteed to be invariant to rotations. In contrast, the authors could consider using an equivariant network for this purpose as well, which would guarantee that $\mathbf{R}_\theta\mathbf{x}$ remains strictly invariant. However, using an equivariant network may be computationally inefficient, so the authors should also explore the use of a deterministic canonicalization algorithm. 
For instance, the simplest approach to “align” a molecule in 3D space would be applying Principal Component Analysis (PCA). This method involves identifying the top 3 directions with the highest variance in atom positions and aligning them with the standard basis (the $x,y,z$ axes). This method is even more efficient than the authors’ approach. Furthermore, the authors assert that equivariant models are not essential for molecule generation since we only care about the overall probability of all possible positions. This argument holds true. However, in Section 4.1, it appears that the authors are still using an equivariant network for the encoder. The authors should provide justification for this choice through ablation experiments. Additionally, if we employ an equivariant canonicalization network or a deterministic canonicalization algorithm, the use of an equivariant encoder becomes unnecessary, as the aligned molecules will be guaranteed to be invariant. --- [1] Ma, G., Wang, Y., Lim, D., Jegelka, S., & Wang, Y. A Canonicalization Perspective on Invariant and Equivariant Learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. [2] Kaba, S. O., Mondal, A. K., Zhang, Y., Bengio, Y., & Ravanbakhsh, S. (2023, July). Equivariance with learned canonicalization functions. In International Conference on Machine Learning (pp. 15546-15566). PMLR. [3] Sareen, K., Levy, D., Mondal, A. K., Kaba, S. O., Akhound-Sadegh, T., & Ravanbakhsh, S. (2025). Symmetry-Aware Generative Modeling through Learned Canonicalization. arXiv preprint arXiv:2501.07773. --- **Update: I had previously overlooked that [3] is a concurrent work released on January 15 of this year. In light of this, I have updated my score accordingly.** **I would like to clarify my earlier rebuttal comment. 
When I stated that the paper "lacks empirical evidence demonstrating that canonicalization reduces the variance of latent representations across different input orientations," I meant that the authors should explicitly measure and report the variance of the learned latent representations, and compare this to appropriate baselines. This would provide stronger support than the current anecdotal examples where certain molecular ring structures appear similarly aligned. Such qualitative examples are insufficient to substantiate the claim that performance gains arise from improved alignment in latent space due to canonicalization.** **I appreciate the additional ablation studies provided in the rebuttal and follow-up comment, and I recommend that the authors include these results in the paper, as they are essential for its completeness. Additionally, prior work on canonicalization should be acknowledged and discussed in the camera-ready version, which may require substantial revisions to the current writing.** Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The authors provided sufficient details about their experiment settings, and the performance and efficiency of their method on real-world tasks seem promising. However, as mentioned, this paper lacks several ablation experiments. 1. Comparing the use of a non-equivariant encoder or an equivariant decoder would help justify the authors’ architectural decisions. 2. Comparing their method with an equivariant canonicalization network or a canonicalization algorithm which ensure the rotational invariance of the aligned molecules. 3. Additionally, the authors claim that the aligned latent space reduces variations caused by Euclidean symmetries, but this assertion lacks empirical evidence. The authors could validate this claim through experiments, demonstrating a smaller variance in the latent space compared to non-equivariant baselines. 
Supplementary Material: There is no supplementary material provided. Relation To Broader Scientific Literature: The proposed method enhances the performance and efficiency of 3D molecule generation, benefiting the broader AI for science community. Essential References Not Discussed: The paper lacks references of the research on canonicalization. This should encompass, but is not limited to, the following: - Ma, G., Wang, Y., Lim, D., Jegelka, S., & Wang, Y. A Canonicalization Perspective on Invariant and Equivariant Learning. In The Thirty-eighth Annual Conference on Neural Information Processing Systems. - Tahmasebi, B., & Jegelka, S. Generalization Bounds for Canonicalization: A Comparative Study with Group Averaging. In The Thirteenth International Conference on Learning Representations. - Dym, N., Lawrence, H., & Siegel, J. W. (2024, July). Equivariant Frames and the Impossibility of Continuous Canonicalization. In International Conference on Machine Learning (pp. 12228-12267). PMLR. - Ma, G., Wang, Y., & Wang, Y. (2023). Laplacian canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding. Advances in Neural Information Processing Systems, 36, 11296-11337. - Kaba, S. O., Mondal, A. K., Zhang, Y., Bengio, Y., & Ravanbakhsh, S. (2023, July). Equivariance with learned canonicalization functions. In International Conference on Machine Learning (pp. 15546-15566). PMLR. Other Strengths And Weaknesses: The strengths and weaknesses are adequately discussed in the preceding points. Other Comments Or Suggestions: It would be more readable to bold the best performance in Tables 1 and 2. Questions For Authors: I have no additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
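For concreteness, the PCA canonicalization suggested in the review above can be sketched in a few lines. This is a generic illustration (ours, not from the paper), and it exhibits the sign ambiguity of the principal axes that any PCA-based alignment must resolve, which the authors also raise in their rebuttal:

```python
import numpy as np

def pca_align(coords):
    """Rotate a zero-centered 3D point cloud onto its principal axes.

    The top-variance directions are mapped to the standard x, y, z axes.
    Note the sign of each principal axis is ambiguous: flipping any row
    of `vt` yields another equally valid alignment.
    """
    centered = coords - coords.mean(axis=0)
    # Right singular vectors = principal directions of the point cloud.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt.T

rng = np.random.default_rng(1)
# Anisotropic toy point cloud standing in for atom coordinates.
pts = rng.normal(size=(20, 3)) * np.array([5.0, 2.0, 0.5])
aligned = pca_align(pts)

# After alignment the covariance is (numerically) diagonal, with
# variance decreasing along the x, y, z axes.
cov = np.cov(aligned.T)
```

Because `aligned` equals `U @ diag(s)` from the SVD, its covariance is diagonal by construction, which is what "aligning the highest-variance directions with the standard basis" means in practice.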
Rebuttal 1: Rebuttal: We thank Reviewer t7B8 for taking the time to review our paper and provide valuable feedback. We respond to the concerns as follows. **Relation to canonicalization**: We thank the reviewer for pointing this out! We admit that we overlooked the literature on canonicalization, and we will add them to the related work. Furthermore, we want to highlight some differences between our paper and existing works on canonicalization: 1. Existing canonicalization works mostly consider the supervised learning setting, while our paper studies a generative modeling problem. 2. The main computational cost of equivariant baselines is the training of diffusion model (e.g., ~3000 epochs on QM9). Our approach uses an autoencoder to learn canonicalization in a few epochs before training the diffusion model. The additional overhead from learning canonicalization is marginal compared with the great speedup of training diffusion models (Table 3). 3. While theoretical works have analyzed the sample complexity gain of equivariance under the supervised learning setting, their conclusions may not be trivially extendable to generative modeling. Empirically, recent works (e.g., notable AlphaFold 3) also show the good performance of non-equivariant generative models. We believe this problem is worth further investigation, and our work is one step in this direction. 4. Regarding the smoothness of canonicalization, the rotation estimation by Eq (16) is smooth except when det(M)=0, which doesn’t notably impede training in practice [1]. Besides, we conjecture that the discontinuity issue of canonicalization is less of a concern in our setting, because we don't test canonicalization on unseen data and only use it to reshape the training data of which we have full control. **Ablations on architectural choices**: Our experiments already contain ablations. Specifically, our baseline GeoLDM uses equivariant networks for both encoder and decoder. 
We use the same equivariant encoder as GeoLDM to keep the architectural factors affecting the generation of latent representations the same, except that our model applies the learned rotations. Next, to learn rotations, our decoder has to be non-equivariant; otherwise, the reconstruction loss for atom coordinates (L2 loss) would be invariant to rotations and prevent the rotation network from learning anything. Note that although we have an equivariant encoder, it is only used to generate latent representations, and the probability distribution is learned by a non-equivariant diffusion model. This doesn't violate the claim of the paper. For the ablation results, we kindly refer the reviewer to the performance of GraphLDM and ALDM$\_{\text{GNN}}$ in Table 1. GraphLDM is a non-equivariant baseline from the GeoLDM paper that replaces the original equivariant denoising network of the diffusion model with a non-equivariant network (Eq 17). ALDM$\_{\text{GNN}}$ also uses Eq (17) as the denoising network for diffusion and the same hyper-parameters (noise schedule, diffusion steps, etc.). As a result, the significant performance gain of ALDM$\_{\text{GNN}}$ comes from the rotations applied to the latent space, which validates the effectiveness of our proposal. We also tried to use DiT for GraphLDM, but the gap was still significant. **Use of a canonicalization algorithm (PCA)**: We considered PCA in our preliminary experiments. There are two issues with PCA. First, the directions (or signs) of the principal axes are ambiguous. For prediction tasks, frame averaging can address this ambiguity, but it's not applicable to generative modeling. Second, applying PCA to atom coordinates ignores the dependencies between coordinates and atom types, which should be generated together. To further address the reviewer’s concern, we report here the results based on PCA: PCA$\_{\text{GNN}}$ achieves 82.8% molecule stability on QM9 and 81.1% molecule stability on DRUGS. 
Its performance is better than the best non-equivariant baseline GraphLDM but still significantly worse than ALDM$\_{\text{GNN}}$ (87.4% on QM9 and 83.0% on DRUGS). **Use of an equivariant canonicalization network**: We agree that using an equivariant canonicalization network (the rotation network in the paper) makes the canonical form of a molecule rotation invariant. However, to learn the specific canonical forms for different molecules that facilitate the reconstruction of the training set, the canonicalization network still needs to be trained jointly with the VAE (and is probably less efficient). We'll try it in the updated version. Since our paper mainly aims to improve non-equivariant diffusion models, we believe the existing results are sufficient to support our claim. [1] https://arxiv.org/abs/2006.14616 We hope our response can provide a clearer explanation of the key points of our paper and kindly request Reviewer t7B8 to consider revising their score if they think our response can address their concerns. We are happy to answer any further questions. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. However, several of my concerns remain unaddressed. For instance, the proposed method appears to be a special case of learned canonicalization, and the relationship between these two methods is not adequately discussed. Furthermore, the authors assert that the performance gain of their method stems from aligning molecules to a canonicalized orientation, thereby reducing the variance in the latent space. However, this claim lacks sufficient evidence. The authors failed to provide empirical evidence demonstrating that canonicalization reduces the variance of latent representations across different input orientations. Additionally, they did not compare their approach to existing canonicalization methods that are exactly rotation invariant. 
Moreover, the authors neglected to consider previous works on canonicalization, resulting in an unorganized presentation of the writing and experiments. Therefore, I am inclined to retain my current score. I encourage the authors to address these concerns in future revisions of the paper. --- Reply to Comment 1.1.1: Comment: We thank Reviewer t7B8 for their comment. We would like to point out that, in our first-round rebuttal, we have already responded to the concerns restated in the reviewer's new comment. We expand on them in the following. > For instance, the proposed method appears to be a special case of learned canonicalization, and the relationship between these two methods is not adequately discussed. As we explained in the rebuttal ("Relation to canonicalization"), the biggest difference between our paper and existing works on canonicalization is that we study canonicalization in the context of diffusion models (generative modeling), while existing canonicalization works focus on supervised learning. To be more specific: - We learn canonical forms of molecules in an unsupervised manner (through a non-equivariant VAE), without access to ground truth labels. - We utilize the aligned latent representations learned by the VAE to train non-equivariant diffusion models, yielding sample quality on par with equivariant diffusion models and significantly higher efficiency. While we agree with the reviewer that our method is related to existing works on learned canonicalization and we are happy to add them to related work, we believe that given the above differences, our work should not be simply viewed as "a special case" of learned canonicalization. The only exception that considers canonicalization and diffusion models is [1], which is a workshop paper that went online after 14 Jan 2025, so we believe it belongs to concurrent work. Notably, our method is also different from [1]. 
[1] merges an equivariant canonicalization network with the denoising network of the diffusion model and therefore inherits the inefficiency of training equivariant diffusion models. In contrast, our method learns canonicalization using a lightweight non-equivariant VAE and then trains non-equivariant diffusion models in the aligned latent space. The performance of [1] (84.6% molecule stability on QM9) is significantly worse than ours (87.4%) with the same diffusion backbone. [1] https://arxiv.org/abs/2501.07773 > Furthermore, the authors assert that the performance gain of their method stems from aligning molecules to a canonicalized orientation, thereby reducing the variance in the latent space. However, this claim lacks sufficient evidence. The authors failed to provide empirical evidence demonstrating that canonicalization reduces the variance of latent representations across different input orientations. In fact, we already provided empirical evidence to support this claim: - We visualize the learned rotations and latent representations in the following link (due to character limit, the link was put in our responses to Reviewer uMCt and Reviewer Biv1, and we paste it here): https://docs.google.com/document/d/e/2PACX-1vQIraK3xLJadm1U3iONPV4u6ZW52EbTofu4027WMFAhJylIplEZhv-BkgkJZWMu99yb75F88yrCAfgF/pub We indeed observe that the learned rotations enable latent representations to arrange common structural semantics (e.g., rings) in similar orientations. - Besides, as we noted in our rebuttal, our experiments already included ablations (e.g., ALDM$\_{\text{GNN}}$ vs GraphLDM) that validated the significant improvement resulting from the aligned latent space, under the same encoder architecture and same diffusion backbone. 
> Additionally, they did not compare their approach to existing canonicalization methods that are exactly rotation invariant As we explained in our rebuttal, replacing our rotation network (a regular GNN) with an equivariant neural network is just a variation of our method and won't affect the claim or contributions of our paper. To further address the reviewer's concern, we experiment with it and report the result here: Specifically, we replace our rotation network with an EGNN and let it generate the rotation matrix row by row. This makes the canonical form of a molecule rotation-invariant as the reviewer suggested. Under the same non-equivariant diffusion model (with a regular GNN (Eq 17) as the denoising network), the performance of this method (*98.6% atom stability and 87.8% molecule stability* on QM9) is negligibly different from our current performance (*98.7% atom stability and 87.4% molecule stability*). > Moreover, the authors neglected to consider previous works on canonicalization, resulting in an unorganized presentation of the writing and experiments While we understand the reviewer's concerns and acknowledge that our submission in its initial status was not comprehensive enough, we sincerely request the reviewer to consider our rebuttal, in which we responded to every raised concern, as a complement and improvement to our initial submission. We hope this can change the reviewer's initial impression.
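The smooth rotation estimation referred to in this thread (Eq 16 in the paper, per the cited arXiv:2006.14616) is the SVD-based projection of an arbitrary 3x3 matrix onto SO(3). A minimal sketch, with a function name of our own choosing:

```python
import numpy as np

def project_to_so3(m):
    """Project an arbitrary 3x3 matrix onto SO(3) via SVD.

    This is the "special orthogonalization" of the cited reference:
    smooth everywhere except where det(m) = 0, which is what makes a
    raw 3x3 network output usable as a rotation head.
    """
    u, _, vt = np.linalg.svd(m)
    # Flip the last singular direction if needed so that det = +1.
    d = np.sign(np.linalg.det(u @ vt))
    return u @ np.diag([1.0, 1.0, d]) @ vt

rng = np.random.default_rng(2)
m = rng.normal(size=(3, 3))  # stand-in for a rotation network's raw output
r = project_to_so3(m)
# r is a proper rotation: orthogonal with determinant +1.
```

The determinant correction is what distinguishes this from plain orthogonalization: without it the projection may return a reflection rather than a rotation.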
Curvature Enhanced Data Augmentation for Regression
Accept (poster)
Summary: The paper proposed a new data augmentation approach called Curvature-Enhanced Manifold Sampling (CEMS) specifically for the regression task. It moves one step further by utilizing a second-order representation instead of a first-order approximation of the data manifold. Experiments are conducted on both in-distribution and out-of-distribution scenarios. Overall, the paper is well-written and interesting. However, several important issues seem to remain unresolved. Claims And Evidence: The contributions (theoretical and empirical) seem incremental and limited - changing from first-order to second-order manifold approximations. And it is not consistently better than the first-order method (see Table 1). When and why is the second-order approximation better (or worse) than the first-order one? Methods And Evaluation Criteria: It would be useful to show some augmented images for the image datasets so that we can have a straightforward and visual understanding of the generation. Furthermore, it would be useful to show some generated images for some widely used benchmarking datasets (e.g., MNIST) - although they are not originally for regression. For instance, on the MNIST data, there could be (at least) two ways to construct a regression dataset: <1> transforming the digits 0, 1, 2, ..., to continuous values, and <2> predicting the number of digits in one image, similar to https://github.com/shaohua0116/MultiDigitMNIST. Since Figure 1 is only a simple 1D example, in the above ways, it is possible to show the generated images from different DA approaches on real-world data, which should be useful for understanding different DA methods. Theoretical Claims: For regression, the choice of loss functions is important for real-world datasets; some losses beyond RMSE, such as the Huber loss and the quantile loss, can be particularly useful. Will the choice of loss functions affect the result (theoretically or empirically)? 
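The robust losses named above are standard; for concreteness, a minimal NumPy sketch of their definitions (an illustration only, unrelated to the paper's code):

```python
import numpy as np

def huber(residual, delta=1.0):
    """Huber loss: quadratic near zero, linear in the tails (robust to outliers)."""
    a = np.abs(residual)
    return np.where(a <= delta, 0.5 * a ** 2, delta * (a - 0.5 * delta))

def pinball(residual, q=0.9):
    """Quantile (pinball) loss for the q-th quantile, with residual = y_true - y_pred."""
    return np.maximum(q * residual, (q - 1.0) * residual)

residuals = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
h = huber(residuals)    # large residuals are penalized linearly, not quadratically
p = pinball(residuals)  # asymmetric: under-prediction costs more at q = 0.9
```

Because CEMS-style augmentation is independent of the training objective, either loss could in principle replace RMSE without changing the augmentation step itself.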
Experimental Designs Or Analyses: Furthermore, the model architectures seem too simple - simply a three-layer MLP. There are many aspects that can affect the result of a regression model. Hence, is DA always crucial, even when more advanced models/losses are used (e.g., TabTransformer and TabR)? Essentially, the generated data is fake, and it may potentially harm training if the data augmentation is not authentic enough. Supplementary Material: Although 3 different batch selection methods are discussed in Supplementary Material B, the results reported in the main text are based on kNN, so k here should be an important hyper-parameter. Is there any discussion of k? Relation To Broader Scientific Literature: The prior works are cited. Essential References Not Discussed: The prior works have been cited. Other Strengths And Weaknesses: For data augmentation, it is important to determine how many augmented samples to generate. From Algorithm 1, it seems a new point is generated for each sample. Since the generated samples should contain more noise w.r.t. the original data, I think it is important to discuss how many generated samples to include for training. Moreover, not all generated samples are created equal, so it may be better to consider sampling probabilities as in C-Mixup. Other Comments Or Suggestions: In Table 2, why is it "5.11" for DTI, which differs from Table 3? Questions For Authors: Since the generation process is based on a single sample z and its neighborhood, I was wondering how the method is motivated specifically for the regression problem rather than as a general-purpose data augmentation approach. That is to say, what specific characteristics/constraints of regression make this method suitable only for regression, and not for other tasks such as classification, segmentation, and more? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful feedback. We’re glad the paper was found clear and the core contribution appreciated. Your comments on method design and evaluation helped improve the revised manuscript. Below, we address each point. [Additional tables](https://jmp.sh/GDqh987h) 1. **The contributions (theoretical and empirical) seem incremental and limited.** We appreciate the reviewer raising this critical point, and we welcome the opportunity to clarify the significance, context, and complexity of our contributions compared to first-order methods, such as FOMA. - **Theoretical Significance and Motivation of Second-Order Approximation:** Although the shift from first-order (FOMA) to second-order (CEMS) approximations might appear incremental, the explicit incorporation of curvature information significantly enriches geometric modeling. First-order methods like FOMA rely solely on linear approximations (via SVD), assuming local linearity and potentially overlooking key structure in curved regions. In contrast, our method introduces differential-geometric elements (e.g., tangent spaces, normal coordinates, Hessian estimates) to better capture nonlinear local geometry. - **Empirical Results and Conditions for Improvement:** While second-order methods may not always outperform first-order ones, CEMS performs better than FOMA on all but one dataset in Table 1 and shows consistent gains across architectures in Table 2. Second-order methods are especially effective on data with pronounced curvature, while first-order baselines suffice in flatter regions. - **It will be useful to show some augmented images for the image datasets.** We thank the reviewer for this suggestion. While Figure 1 (1D sine wave) offers the clearest geometric illustration, we now include visualizations from RCF-MNIST [Figure a](https://imgur.com/a/0PXdS8P) and [Figure b](https://imgur.com/a/LY5fjwn), showing original, augmented, and difference images. 
These confirm that CEMS introduces smooth, semantically consistent perturbations. 2. **For regression, the choice of loss functions is important for real-world datasets.** We agree. While we used RMSE to match standard benchmarks, CEMS is agnostic to the loss function. The augmentation process is independent of the training objective, and alternative losses like Huber or quantile loss could interact with the data differently. Evaluating these is a promising direction for future work. 3. **The model architectures seem too simple.** - **Model Architecture Choice:** We followed C-Mixup, ADA and FOMA to ensure fair comparisons. This isolates the contribution of the DA method itself. - **Applicability to Advanced Models:** CEMS is model-agnostic and compatible with advanced models like TabTransformer or TabR. We now mention this as future work. - **Is DA Always Crucial?** Geometry-aware DA is particularly valuable in low-data or noisy regimes. Gains may vary with model complexity. - **Authenticity of Augmented Data:** CEMS constrains augmentation using second-order geometry, ensuring samples remain close to the true manifold. 4. **Although 3 different batch selection methods are discussed...** - In **CEMS**, $k$ is determined by the mini-batch size and not tuned separately. We added a sensitivity analysis on batch size in the appendix. Table 4 in the additional tables file shows that our method is not overly sensitive to this parameter. - In **CEMS_p**, neighborhoods are constructed from the full dataset and $k$ is selected via cross-validation. - Optimal $k$ varies by dataset. In sparse regions, too large a $k$ may hurt performance, so we recommend validation-based tuning. 5. **For data augmentation, it is important to determine how many augmented samples to generate.** - **Number of Augmented Samples:** We generate one sample per point for consistency with baselines, but our method supports generating more via resampling. 
- **Noise and Sample Quality:** Augmented samples are constrained by local curvature, but not all are equally useful. - **Relation to C-Mixup:** Incorporating sampling probabilities based on geometric uncertainty could improve robustness—an avenue for future work. 6. **In Table 2, why is it "5.11" for DTI which is different from Table 3?** Thank you for catching this. We corrected the value to 0.511 in the revised manuscript. 7. **Since the generation process is based on a single sample z and its neighborhood...** CEMS was developed for regression, where DA methods from classification (e.g., mixup) do not translate directly due to continuous targets. We model a joint manifold over $(X, Y)$, enabling smooth label-aware augmentation. However, CEMS is not limited to regression; it can be applied to classification or segmentation by operating on $X$ only.
Summary: This paper targets the problem of data augmentation for regression tasks where data has some intrinsic manifold structure. Specifically, the goal is to capture this manifold structure and generate new data on this manifold. Local neighborhoods are formed through nearest neighbor algorithms. The tangent space is formed by taking SVD within the local neighborhood, and the local chart functions are quadratically approximated, with gradient and Hessian empirically estimated by linear systems. New samples are obtained by drawing from normal distributions over the tangent space and transforming back to ambient space. Simulation examples and numerical applications are conducted. Claims And Evidence: The method is straightforward and all makes sense. It would be great if methods for determining the intrinsic dimension of the manifold could be discussed. Methods And Evaluation Criteria: Out-of-distribution evaluation is included, which is great. Theoretical Claims: No theoretical claims are present. Experimental Designs Or Analyses: The numerical experiments all make sense. However, they all seem to have relatively low intrinsic dimensions and ambient dimensions. Also, for the real-world applications, are the intrinsic dimensions considered known, or are they determined by some preprocessing techniques such as the elbows of cumulative singular values? Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: There are many recent works on manifold learning, especially from the statistical side, that utilize more sophisticated modeling for the local charts using e.g. Gaussian processes, spherelets, etc. Essential References Not Discussed: Here are a few examples mentioned above: Faigenbaum-Golovin, Shira, and David Levin. "Manifold Reconstruction and Denoising from Scattered Data in High Dimension via a Generalization of L1-Median." arXiv preprint arXiv:2012.12546 (2020). Dunson, David B., and Nan Wu. 
"Inferring manifolds from noisy data using gaussian processes." arXiv preprint arXiv:2110.07478 (2021). Li, Didong, Minerva Mukhopadhyay, and David B. Dunson. "Efficient manifold approximation with spherelets." Journal of the Royal Statistical Society Series B: Statistical Methodology 84.4 (2022): 1129-1149. Other Strengths And Weaknesses: Overall the paper is well-written and clear. The method makes sense and is straightforward, but may lack novelty/originality compared to the state-of-the-art methods in this field. Other Comments Or Suggestions: N/A. Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 4
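The sampling procedure summarized in this review (k-NN neighborhood, SVD tangent space, quadratic chart fit, Gaussian step in tangent coordinates, map back to ambient space) can be sketched as follows. This is an illustrative reconstruction with made-up names (`cems_sample`, `sigma`), not the authors' implementation:

```python
import numpy as np

def cems_sample(Z, i, k=10, d=2, sigma=0.1, rng=None):
    """One synthetic sample around Z[i], following the steps above:
    k-NN neighborhood -> SVD tangent/normal basis -> quadratic chart
    fit by least squares -> Gaussian step in tangent space -> map back.

    Z : (N, D) array of concatenated [x, y] points in ambient space.
    """
    rng = np.random.default_rng() if rng is None else rng
    # 1. k-nearest neighborhood of Z[i] (includes Z[i] itself).
    dists = np.linalg.norm(Z - Z[i], axis=1)
    nbrs = Z[np.argsort(dists)[:k + 1]]
    mu = nbrs.mean(axis=0)
    # 2. Tangent / normal basis from SVD of the centered neighborhood.
    _, _, Vt = np.linalg.svd(nbrs - mu, full_matrices=False)
    T, Nrm = Vt[:d].T, Vt[d:].T        # tangent (D, d), normal (D, D-d)
    # 3. Fit a quadratic chart g: tangent -> normal coordinates.
    U = (nbrs - mu) @ T                # tangent coordinates of neighbors
    G = (nbrs - mu) @ Nrm              # normal coordinates of neighbors
    iu, ju = np.triu_indices(d)        # upper-triangular quadratic terms
    Phi = np.hstack([np.ones((len(U), 1)), U, U[:, iu] * U[:, ju]])
    A, *_ = np.linalg.lstsq(Phi, G, rcond=None)
    # 4. Gaussian step in the tangent space around Z[i]'s coordinates.
    u_new = (Z[i] - mu) @ T + sigma * rng.standard_normal(d)
    # 5. Map back to the ambient space through the fitted chart.
    phi = np.concatenate([[1.0], u_new, u_new[iu] * u_new[ju]])
    return mu + T @ u_new + Nrm @ (phi @ A)
```

On data lying exactly on a smooth surface, the generated point stays close to that surface, which is the behavior the method relies on.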
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback. We are glad that the core methodology and experimental evaluations were found to be clear and well-executed. We appreciate the suggestions to improve the discussion of intrinsic dimension estimation and to better situate our work within the broader landscape of manifold learning literature, particularly recent advances from the statistical perspective. In the revised manuscript, we have addressed these points through additional discussion, new references, and clarification of our experimental setup. We respond in detail to each comment below. 1. **It would be great if methods for determining the intrinsic dimension of manifold can be discussed.** We thank the reviewer for the suggestion. A discussion has been added to the appendix, reviewing classical and modern intrinsic dimension (ID) estimation methods, including both statistical and geometric approaches. Due to space constraints and the inability to upload additional text, we are unable to include it here. 2. **The numerical experiments all make sense. However, they all seem to have relatively low intrinsic dimensions and ambient dimensions.** While some datasets in our experiments such as Airfoil and NO2 have low ambient and intrinsic dimensions (6/3 and 8/6, respectively), others involve significantly higher dimensions. For instance, Exchange-Rate has an ambient dimension of 1352 and intrinsic dimension of 234, while Electricity reaches 54,249 ambient and 20 intrinsic dimensions. This range highlights the diversity of our benchmarks and demonstrates the scalability and robustness of our method across both low- and high-dimensional regimes, supporting the manifold hypothesis. 3. 
**Also, for the real world applications, are the intrinsic dimensions considered known, or are they determined by some preprocessing techniques such as the elbows of cumulative singular values?** In real-world applications, the intrinsic dimension is generally not known a priori and must be estimated. In our experiments, we employ the TwoNN estimator based on the method of Facco et al. (2017), which leverages minimal neighborhood statistics to robustly estimate local intrinsic dimensionality. This approach is parameter-free and can be applied consistently across datasets. While alternative techniques such as analyzing the elbow of cumulative singular values can also be used, we chose a method that aligns well with our second-order manifold framework and scales effectively across diverse data regimes. 4. **There are many latest work on manifold learning, especially from the statistical side, that utilizes more sophisticated modeling for the local charts using e.g. Gaussian processes, spherelets, etc.** We thank the reviewer for their thoughtful observation. Our approach is general and can incorporate any module for estimating the local chart. In this work, we used a simple and widely accepted method based on PCA to demonstrate the core ideas. Importantly, other, more advanced tools can be used. We have added a discussion in the Related Work section to acknowledge this flexibility and to better situate our method within the broader literature. 5. **Overall the paper is well-written and clear. The method makes sense and is straightforward, but may lack novelty/originality compared to the state-of-the-art methods in this field.** We appreciate the reviewer's overall positive assessment and the opportunity to clarify the novelty of our contribution. 
While second-order approximations of manifolds have indeed been explored previously, our primary contributions are as follows: - **Differentiable Second-Order Augmentation:** To our knowledge, ours is the first method explicitly designed to provide a differentiable, second-order manifold approximation tailored specifically toward data augmentation for regression tasks. Previous second-order methods typically focus on manifold embedding and dimensionality reduction rather than augmentation aimed at improved neural network generalization. - **Practical and Efficient Implementation:** Our method introduces a practical mini-batch-based strategy for second-order local manifold approximation, significantly improving computational efficiency and scalability. This practicality is crucial for widespread adoption in neural network training contexts, distinguishing our work from more computationally demanding prior methods. - **Empirical Demonstration of Effectiveness:** We empirically demonstrate the significant benefits of explicitly incorporating curvature information into data augmentation, showing substantial improvements over state-of-the-art methods in several regression settings.
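As a concrete reference for the TwoNN estimator discussed in this rebuttal, here is a minimal sketch. It uses the plain maximum-likelihood form of the estimator (Facco et al. instead fit the empirical CDF and discard the largest ratios); the function name is illustrative:

```python
import numpy as np

def twonn_id(X):
    """Minimal TwoNN-style intrinsic-dimension estimate (after Facco
    et al., 2017). For each point, mu = r2/r1 is the ratio of its two
    nearest-neighbor distances; under the TwoNN model, log(mu) is
    exponentially distributed with rate d, so the MLE of the dimension
    is N / sum(log mu)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)        # exclude each point from its own kNN
    D.sort(axis=1)
    mu = D[:, 1] / D[:, 0]             # r2 / r1 per point
    return len(mu) / np.log(mu).sum()
```

Note the estimator is parameter-free: only the two nearest neighbors of each point are used, regardless of the ambient dimension.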
Summary: This paper proposes a data augmentation method tailored for regression problems, leveraging the manifold hypothesis in the joint input-output space. The method approximates the data manifold up to the second order and samples new data points that adhere to this approximation. The effectiveness of the approach is demonstrated across multiple in-distribution and out-of-distribution benchmarks, where it achieves comparable or superior performance to existing state-of-the-art methods. Claims And Evidence: The authors substantiate their claims with extensive empirical evaluations on several real-world datasets, comparing their method against state-of-the-art techniques. Methods And Evaluation Criteria: The proposed methods and evaluation criteria appear appropriate for addressing data augmentation in regression. However, key hyperparameters such as the number of neighbors $k$, the intrinsic manifold dimensionality $d$, the choice of $\sigma$, the batch size, and the choice of space (data or latent) would influence performance, but their robustness is not systematically analyzed. Further discussion is needed to assess the sensitivity of the method to these hyperparameters. Theoretical Claims: This paper does not introduce novel theoretical contributions beyond the second-order approximation framework for data augmentation. Experimental Designs Or Analyses: The experimental design is mostly sound and well-structured. However, some aspects require further clarification, particularly regarding the robustness of hyperparameter choices and the applicability of the method in different settings (e.g., data space vs. latent space). Supplementary Material: No supplementary material is provided with this submission. Relation To Broader Scientific Literature: Effective data augmentation techniques are essential for improving regression model performance across various applications. This work aligns with broader research in manifold learning and data augmentation. 
Essential References Not Discussed: The Hessian eigenmaps paper **[a]** explores similar second-order manifold approximations and should be referenced in the related work section. **[a]** Donoho, D. L., & Grimes, C. (2003). "Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data." Proceedings of the National Academy of Sciences, 100(10), 5591-5596. Other Strengths And Weaknesses: **Strengths:** - The paper introduces a novel data augmentation method rooted in manifold learning. - Demonstrates strong empirical performance across multiple datasets, often surpassing existing methods. **Weaknesses:** - Lack of clarity in hyperparameter selection: - The paper does not provide a comprehensive discussion on determining optimal values for key hyperparameters. - Sensitivity analysis is missing, leaving open questions about the method’s robustness to parameter choices. - Application to different data representations: the method appears applicable in both data space and latent space, but no explicit discussion clarifies the trade-offs or performance implications. - Lack of clarity in the role of differentiability: - The paper briefly mentions differentiability but does not explain its necessity or its impact on performance. - Why is differentiability necessary for the data augmentation module, and how does it impact performance? How does the performance change if gradient information is ignored and the generated data is only used for augmentation during standard training? - Lack of clarity in the batch vs. mini-batch implementation: - The description of the batch-wise and mini-batch implementations is unclear. - If using mini-batch training, is the neighborhood constructed solely from the mini-batch? What is $z_0$? - Since $CEMS$ (rather than $CEMS_p$) appears to be the primary approach, a clearer explanation of $CEMS$ in the main text would be beneficial. 
Other Comments Or Suggestions: **Minor Comments:** - The dimensions of vectors and matrices should be explicitly stated in the text, e.g., for $B_u$ (line 199, col. 1, page 4), $x_i, y_i$ (line 168, col. 2, page 4), $\Psi$ and $G$ (line 271, col. 1, page 5), to name a few. - Inconsistencies in notation should be addressed: - Line 218, col. 1, page 4: $[u, g(u)]$ -> $[u^\top, g(u)^\top]^\top$? - Line 175, col. 2, page 4: Is only $Y$ normalized? - Line 355, col. 1, page 7: Yao et al. (2022) includes the Echo dataset, which is not used here—why? - Figure 2: The meaning of the red arrow is unclear. - Transpose notation is inconsistently represented as $T$ and $\top$. **Typos:** - Algorithm 1, Line 5: Solve $\Psi A = G$ for $A$? - Table 2: CEMS performance for DTI should be checked. Questions For Authors: Please refer to the questions in **Other strengths and weaknesses** section. **Other questions:** 1. Why is the linear system solved per point (line 258, col. 2, page 5)? 2. What types of datasets benefit most from this method, and in which scenarios might it be less effective? 3. How does CEMS compare when applied in data space vs. latent space, and what are the trade-offs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. We're glad the method’s empirical strength and relevance were recognized. We addressed the comments on hyperparameters, differentiability, and implementation through added clarifications and revisions. Below are our detailed responses. [Additional tables](https://jmp.sh/GDqh987h) 1. **Essential References Not Discussed.** We added the reference and discussion to the Related Work section in the revised manuscript. 2. **The paper does not provide a comprehensive discussion for key hyperparameters.** While we did not include a detailed discussion in the paper, we selected key hyperparameters via standard cross-validation, following the approach used in prior works (e.g., Yao et al., 2022; Schneider et al., 2023). 3. **Sensitivity analysis is missing.** As shown in the additional tables uploaded, our method shows robustness to hyperparameter choices. Perturbing the intrinsic dimension estimated by TwoNN yields stable performance, with the baseline or nearby values often achieving the best results. Sensitivity analyses on the neighborhood size parameter $B$ and the noise scale $\sigma$ also show consistent performance across a broad range of values. These findings demonstrate that our method is not overly sensitive to intrinsic dimension, neighborhood size, or noise level. 4. **Application to different data representations.** Our method is compatible with both data and latent space representations. We added a discussion in the appendix covering the trade-offs and practical considerations of each setting, including when gradient flow through the augmentation module is beneficial. 5. **Differentiability and its impact.** Differentiability is relevant for latent-space augmentation, enabling backpropagation through the augmentation module and potentially improving representations. For input-space augmentation, differentiability is unnecessary. 6. 
**Implications of ignoring gradients.** Even if latent-space augmentation is non-differentiable, it can still diversify training data. However, the lack of gradient flow may reduce the benefits of adaptively optimizing the augmentation process. 7. **Lack of clarity in the batch vs. mini-batch implementation** We clarify the distinction between the two variants: `CEMS` and `CEMS_p`. - `CEMS_p` (point-wise) samples mini-batches randomly. For each point in the batch, a neighborhood of size $k$ is drawn from the full dataset to estimate a local tangent space and Hessian, producing one synthetic sample. - `CEMS` (batch-wise) samples a point $z_0$ and builds a batch $N_z$ of $B$ nearby points. A shared, mean-centered tangent space is computed, and each point uses it to estimate its own Hessian and generate one sample. We have revised the manuscript to explicitly define $B$ in `CEMS_p`, clarify neighborhood construction, and highlight that both variants generate one sample per point using different geometric setups. 8. **The dimensions of vectors and matrices should be explicitly stated in the text.** We added the dimensions in the revised manuscript. 9. **Inconsistencies in notation should be addressed** - **$[u, g(u)] \rightarrow [u^{\top}, g(u)^{\top}]^{\top}$?** Clarified that $[\cdot, \cdot]$ denotes column-wise concatenation. - **Is only $Y$ normalized?** Yes. Since $X \in [0, 1]$, we normalize $Y$ to balance the concatenation. - **Why not use the Echo dataset?** Due to time constraints and its size, we did not complete evaluation on Echo. We plan to include it in future work. - **Figure 2: Red arrow meaning is unclear.** We clarified that it represents un-projecting $\eta$ from the tangent space to $\mathbb{R}^D$ via $f$. - **Transpose notation inconsistencies.** We revised the manuscript to ensure consistent use of transpose notation. - **Equation: Solve $A \Psi = G$.** The correct equation is $\Psi A = G$. This has been corrected. 
- **Table 2: DTI performance seems off.** The correct value is "0.511", now fixed in the revision. 10. **Why is the linear system solved per point?** To estimate the gradient and Hessian at each point $u$, we fit a second-order Taylor expansion using neighboring points $u_j$. Since this expansion is specific to $u$, we solve a separate linear system per point, allowing us to capture local geometric variations across the manifold. 11. **What types of datasets benefit most from this method, and in which scenarios might it be less effective?** Our method performs best on datasets where input-output pairs lie near a smooth, low-dimensional manifold with meaningful curvature, common in structured data like images and audio signals. It is especially effective in sparse or low-data regimes. However, it may be less effective when the manifold assumption breaks down, such as with high-dimensional noise or unstructured data. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' responses, which address most of my concerns. Although I have not had the opportunity to review the revised manuscript, I trust that the authors will incorporate the improvements outlined in the rebuttal to enhance the clarity of the paper. Accordingly, I am raising my score. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s thoughtful consideration of our responses and the recognition of our efforts to address the concerns raised. We are committed to incorporating the improvements outlined in the rebuttal to enhance the clarity and quality of the paper. We are grateful for your trust and for the revised evaluation.
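The per-point system $\Psi A = G$ discussed in item 10 can be written out concretely. The sketch below fits a second-order Taylor expansion of one chart component around a query point, recovering its value, gradient, and Hessian by least squares; all names are illustrative, not the authors' code:

```python
import numpy as np

def local_gradient_hessian(u0, U, g):
    """Fit g(u0 + delta) ~ g(u0) + grad.delta + 0.5 delta' H delta by
    least squares, i.e. solve one linear system Psi a = g per point.

    u0 : (d,) tangent coordinates of the query point.
    U  : (k, d) tangent coordinates of its neighbors.
    g  : (k,) values of one chart (normal-coordinate) component.
    Returns (value at u0, gradient (d,), Hessian (d, d)).
    """
    d = len(u0)
    delta = U - u0                                  # offsets from the query point
    iu, ju = np.triu_indices(d)
    # monomials: constant, linear offsets, upper-triangular quadratics
    Psi = np.hstack([np.ones((len(U), 1)), delta,
                     delta[:, iu] * delta[:, ju]])
    a, *_ = np.linalg.lstsq(Psi, g, rcond=None)
    val, grad, quad = a[0], a[1:1 + d], a[1 + d:]
    C = np.zeros((d, d))
    C[iu, ju] = quad    # quad holds 0.5*H_ii on the diagonal, H_ij off it
    H = C + C.T         # doubling the diagonal recovers H exactly
    return val, grad, H
```

Since the expansion is taken around each specific point, a separate system is solved per point, which is exactly the rationale given in the rebuttal.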
Summary: Presents a data-augmentation method for regression problems, taking advantage of the manifold structure of the data. Defined on the concatenation of data and labels, the local neighbourhood of all points defines the assumed manifold structure, and the two first moments (mean and Hessian) are used to sample points for data-augmentation. The resulting method is evaluated on several datasets and performs favourably against SOTA. Claims And Evidence: The claims made in the submission seem to be supported by clear and convincing evidence. The application of manifold learning techniques to data regression seems both straightforward and novel. Methods And Evaluation Criteria: Both in-distribution and out-of-distribution problems are evaluated, and strong baselines are used. Theoretical Claims: No theoretical claims are made. The theoretical justifications and derivations for the regression and the resulting error bounds (appendix A1) look good to me. Experimental Designs Or Analyses: The experimental design is sound and exceptionally detailed in terms of practical considerations, such as batch size and intrinsic dimension estimation. Supplementary Material: Yes! Thanks for a short and readable material (8 pages). Relation To Broader Scientific Literature: The manuscript is candid in presenting both the regression problem literature and the manifold learning literature, and the contribution to previous studies is very clear. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: Strengths * beautiful synthesis of manifold learning tools and the regression problem * detailed discussion of practical considerations * great evaluation on various regression datasets, good results Weaknesses * The method is well-justified when the Hessian is evaluated around each point, but this seems computationally expensive, so unjustified approximations are made (sharing the Hessian across a local neighbourhood). Other Comments Or Suggestions: None. 
Questions For Authors: * Where did the idea of creating a manifold by concatenating data x and label y come from? If it is your contribution, clarify it. If it is well-known, cite previous literature. * Are local neighborhoods disjoint sets or overlapping sets? If the Hessian is calculated once per neighborhood (as in "re-using neighborhoods and basis computations"), what happens if those overlap? * Is the intrinsic dimension of the manifold an important hyperparameter ("CEMS is governed by the intrinsic dimension of the manifold") or an inferred value which is part of the algorithm ("While the intrinsic dimension d can be viewed as a hyper-parameter of CEMS, we estimate it in practice using a robust estimator")? How dependent is the performance on the validity of this estimator? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and encouraging feedback. We appreciate the recognition of our method’s integration of manifold learning and regression, as well as the clarity of our experiments and supplementary material. Below, we address the comments and describe the corresponding revisions. [Additional tables](https://jmp.sh/GDqh987h) 1. **The method is well-justified when the Hessian is evaluated around each point...** Thank you for this observation. In our method, the Hessian itself is not shared. Instead, we reuse the local neighborhood to compute a shared orthonormal basis, which each point then uses to estimate its Hessian independently. Specifically: - **Shared Neighborhood:** A *k*-nearest neighbor set is constructed for each mini-batch point and reused for nearby points, based on the manifold hypothesis assuming local smoothness. - **Shared Local Basis:** A basis for the tangent and normal spaces is computed via Singular Value Decomposition (SVD) from the shared neighborhood. - **Point-wise Hessian Estimation:** Each point solves a linear system in the shared basis to obtain its own Hessian. Thus, only the basis is shared; curvature information remains point-specific. We compared this method (CEMS) with a fully point-wise variant (CEMS_p) and found negligible performance differences (see Table 5), confirming that the shared-neighborhood approximation is both efficient and accurate. 2. **Where did the idea of creating a manifold by concatenating data x and label y come from?** We thank the reviewer for highlighting this point. The idea of creating a manifold by concatenating input data *x* with its corresponding label *y* is not unique to our paper. Relevant prior work has been added and cited in the revised manuscript. 3. **Are local neighborhoods disjoint sets or overlapping sets?** We thank the reviewer for raising this important point. The local neighborhoods we construct are overlapping sets rather than disjoint sets. 
Specifically, each point forms its own neighborhood based on its *k*-nearest neighbors. Therefore, it is natural that neighborhoods of close points may partially overlap. To clarify, the Hessian itself is not computed only once per neighborhood; rather, it is computed once per point. What we re-use across overlapping neighborhoods is only the local orthonormal basis, which is computed from the SVD of points in the shared neighborhood. Once this basis is established, each individual point solves its own linear system separately to estimate its Hessian within that shared basis. Hence, overlap between neighborhoods is not problematic in our setting. On the contrary, overlap can be seen as beneficial, as it can encourage consistent local geometry estimates across adjacent regions of the manifold, ensuring smooth transitions and coherent geometric structure. 4. **Is the intrinsic dimension of the manifold an important hyperparameter?** We thank the reviewer for bringing up this important clarification. In principle, the intrinsic dimension *d* of the manifold could be treated as an important hyperparameter. However, in practice, we opt for an automatic estimation using a robust intrinsic dimension estimator TwoNN (Facco et al., 2017) to reduce the complexity of hyperparameter tuning. Thus, the intrinsic dimension is inferred from the data rather than manually set. To address the reviewer’s concern regarding the robustness to intrinsic dimension, we performed a sensitivity analysis by perturbing the TwoNN estimated dimension by ±1 and ±2. As shown in Table 2 in the anonymous pdf file, our method exhibits stable performance across all datasets, with the best or second best results frequently aligning with the baseline value. Additionally, in Table 1 we compare TwoNN with alternative estimators [a, b] and observe strong agreement, e.g., all three methods estimate the same dimension for Crimes. 
These results support both the robustness of our method to this hyperparameter and the empirical reliability of using `TwoNN` as the default. **[a]** Levina, E., & Bickel, P. (2004). Maximum likelihood estimation of intrinsic dimension. *Advances in Neural Information Processing Systems*, 17. **[b]** Birdal, T., Lou, A., Guibas, L. J., & Simsekli, U. (2021). Intrinsic dimension, persistent homology and generalization in neural networks. *Advances in Neural Information Processing Systems*, 34. --- Rebuttal Comment 1.1: Comment: After reading the author's rebuttal and the other reviewers' comments, I stand by my original (favorable) rating. --- Reply to Comment 1.1.1: Comment: We would like to express our sincere gratitude to the reviewer for their careful consideration and for upholding a positive evaluation of our submission. We greatly value your constructive feedback and continued support.
VersaPRM: Multi-Domain Process Reward Model via Synthetic Reasoning Data
Accept (oral)
Summary: The paper introduces VersaPRM, a multi-domain Process Reward Model (PRM) designed to improve reasoning abilities across diverse domains beyond mathematics. Traditional PRMs have been primarily trained on mathematical reasoning tasks and fail to generalize effectively to other disciplines such as Law, Philosophy, and Biology. To address this issue, the authors propose a synthetic data generation and annotation pipeline that produces step-wise labeled reasoning data across multiple domains. VersaPRM is trained on this synthetic multi-domain dataset, leading to improved generalization and performance across various test-time inference strategies (e.g., Weighted Majority Voting (WMV), Best-of-N (BoN), Beam Search, and Monte Carlo Tree Search (MCTS)). Empirical evaluations on MMLU-Pro-CoT-Eval demonstrate that VersaPRM outperforms existing math-focused PRMs in non-mathematical domains, with a notable 7.9% improvement in Law compared to a baseline. Claims And Evidence: Overall, the paper presents strong empirical support for its claims, but there are some areas where additional clarification could be beneficial. 1. PRMs trained only on mathematical data fail to generalize to other domains. The results in the tables clearly demonstrate that existing math-trained PRMs (e.g., Math-Shepherd, Qwen-2.5-Math-PRM) perform poorly in non-math domains, supporting this claim. 2. VersaPRM improves reasoning performance across multiple domains through synthetic multi-domain training. The authors provide multiple ablation studies showing that PRMs trained on diverse reasoning data consistently outperform math-trained PRMs across Law, Philosophy, and Biology. 3. The synthetic data generation process produces high-quality multi-domain reasoning labels. This claim is not fully supported, as the precision is around 70-75%, lower than the 80% claimed by OpenAI. Methods And Evaluation Criteria: The proposed methodology and evaluation criteria are aligned with the research goals. 1. 
Benchmark Datasets: The authors evaluate VersaPRM on MMLU-Pro-CoT-Eval, which covers 14 diverse domains (Math, Physics, Law, Biology, Philosophy, etc.), ensuring a comprehensive evaluation. 2. Evaluation Metrics: The use of WMV, BoN, and search-based methods (MCTS, Beam Search) effectively captures both accuracy and inference-time performance. 3. Baselines & Comparisons: The comparisons with four open-source math PRMs (Math-Shepherd, Qwen-2.5-Math-PRM, etc.) establish a strong benchmark. Theoretical Claims: The paper does not include formal theoretical proofs but provides empirical justifications for its findings. Experimental Designs Or Analyses: Yes, the experimental design is robust, but a few areas could be strengthened. Strengths: 1. The synthetic data generation pipeline is clearly described, using Llama-3.1-8B to generate Chain-of-Thought (CoT) reasoning and Llama-3.1-70B as an auto-labeler. 2. Multiple reranking methods (WMV, BoN) and search strategies (Beam Search, MCTS) ensure that results are not limited to a single inference-time approach. 3. The ablation studies confirm that performance gains come from multi-domain data rather than additional training data volume. Potential Weaknesses: 1. The impact of synthetic data noise is not fully explored. Although manual evaluation suggests 75% accuracy, additional breakdowns on how mislabels affect model performance would be beneficial. 2. The choice of backbone LLMs (Llama vs. Qwen) is not fully explored—for example, the effect of using even larger models (e.g., GPT-4 or DeepSeek-R1) remains an open question. Supplementary Material: Yes, the Appendices A, B, and C were reviewed: Appendix A (Synthetic Data Generation Details): 1. Provides detailed prompts used for CoT generation and auto-labeling. 2. The counterfactual augmentation strategy to generate incorrect reasoning steps is well-described. Appendix B (Search Algorithm Details): 1. 
Includes pseudocode for Beam Search and MCTS, which are used to guide test-time inference. Appendix C (Training Details): 1. Specifies hyperparameters and fine-tuning strategies, including LoRA vs. full fine-tuning comparisons. Would be improved by including: 1. More examples of mislabeled CoTs to analyze failure cases. 2. Impact of larger-scale LLMs on PRM performance (e.g., GPT-4 vs. Llama-3). Relation To Broader Scientific Literature: The paper builds upon and extends prior research in reward modeling and reasoning in LLMs, particularly in the following areas: Process Reward Models (PRMs) vs. Outcome Reward Models (ORMs): 1. Prior works (e.g., Uesato et al., 2022; Lightman et al., 2024) established that PRMs outperform ORMs in math reasoning. 2. VersaPRM expands PRM utility beyond math, demonstrating improved performance in law, biology, and philosophy. Test-Time Compute and Self-Improvement Methods: 1. The paper aligns with test-time inference strategies like Tree of Thoughts (ToT) (Yao et al., 2024) but enhances them with PRM-guided reranking. 2. Demonstrates that process reward models remain useful even for strong reasoning models (e.g., DeepSeek-R1). Synthetic Data for PRM Training: 1. Prior works (e.g., Wang et al., 2024; Zheng et al., 2024) used synthetic process data for math reasoning. 2. VersaPRM generalizes this approach to non-math domains, demonstrating broader applicability. Essential References Not Discussed: no Other Strengths And Weaknesses: Weaknesses: 1. Auto-Labeling Reliability: The auto-labeling method for reasoning steps, while cost-effective, introduces potential noise. The manual evaluation (75% accuracy) suggests that a portion of the training data may contain incorrect annotations, which could impact PRM reliability. Strengths: 1. Novelty & Relevance: The paper addresses an important gap in PRM research by extending process supervision beyond mathematical domains. 
The synthetic data generation approach, which includes automated reasoning step labeling, is an innovative method to scale PRM training. 2. Empirical Contributions: The study provides thorough comparisons between math-focused PRMs and VersaPRM, showing clear performance improvements across multiple domains. The ablation studies and test-time inference analyses validate the generalization capacity of VersaPRM. 3. Reproducibility & Open Science: The authors commit to open-sourcing all datasets, model checkpoints, and code, facilitating reproducibility and further research. 4. Strong Experimental Setup: The evaluation includes rigorous baselines, including open-source math PRMs and multiple test-time inference techniques (e.g., majority voting, beam search, Monte Carlo Tree Search). The inclusion of large-scale reasoning models such as DeepSeek-R1 strengthens the credibility of the findings. Other Comments Or Suggestions: There are a few typos and repeated words. Please check carefully. Questions For Authors: no Code Of Conduct: Affirmed. Overall Recommendation: 4
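The review above notes that Appendix B includes pseudocode for Beam Search and MCTS. As a rough illustration only (not the paper's actual algorithm), PRM-guided beam search over reasoning steps can be sketched as follows; `propose_steps` and `prm_score` are hypothetical stand-ins for the CoT sampler and the PRM:

```python
# Generic sketch of PRM-guided beam search over reasoning steps.
# Assumptions: `propose_steps(question, partial_chain)` samples candidate next
# steps from an LLM, and `prm_score(question, partial_chain)` returns a scalar
# PRM score for a partial chain. Neither is the paper's actual interface.
def beam_search(question, propose_steps, prm_score, beam_width=4, max_steps=8):
    beams = [[]]  # each beam is a list of reasoning steps so far
    for _ in range(max_steps):
        expansions = [beam + [step]
                      for beam in beams
                      for step in propose_steps(question, beam)]
        if not expansions:
            break  # no chain can be extended further
        # Keep the beam_width partial chains the PRM scores highest.
        expansions.sort(key=lambda b: prm_score(question, b), reverse=True)
        beams = expansions[:beam_width]
    return beams[0]
```

Each round expands every partial chain, reranks by PRM score, and keeps the top `beam_width`; MCTS differs by growing a tree via rollouts instead of maintaining a fixed frontier.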
Rebuttal 1: Rebuttal: We thank the reviewer for the meaningful feedback and for recognizing that (i) our paper presents strong empirical results and robust experiments, (ii) it addresses an important gap in PRM research, (iii) it provides thorough comparisons between math PRMs and VersaPRM, and (iv) it facilitates open science with open-sourcing. > The synthetic data generation process produces high-quality multi-domain reasoning labels. This claim is not fully supported, as the precision is around 70-75%, lower than the 80% claimed by OpenAI. Our synthetic labeling achieves ~90% of the accuracy of human-labeled PRM800K (the target benchmark), despite requiring no manual effort. The slightly lower precision (70-75%) reflects the cost-effectiveness trade-off: OpenAI’s 80% required expensive human annotation, while our method is fully automated. > The impact of synthetic data noise is not fully explored. Although manual evaluation suggests 75% accuracy, additional breakdowns on how mislabels affect model performance would be beneficial. > Auto-Labeling Reliability The auto-labeling method for reasoning steps, while cost-effective, introduces potential noise. The manual evaluation (75% accuracy) suggests that a portion of the training data may contain incorrect annotations, which could impact PRM reliability. We did not attempt to further decrease the noise in the training dataset, as we found 75% label accuracy sufficient for VersaPRM to generalize robustly across domains and inference methods. From a preliminary analysis, we observe that many mislabels stem from a failure to identify faulty reasoning due to incorrect factual information. Further noise reduction (e.g., via stronger labeling models) is promising but deferred to future work. > The choice of backbone LLMs (Llama vs. Qwen) is not fully explored—for example, the effect of using even larger models (e.g., GPT-4 or DeepSeek-R1) remains an open question. 
> Impact of larger-scale LLMs on PRM performance (e.g., GPT-4 vs. Llama-3). In Section 6.4, we provide initial results with DeepSeek-R1 (700B parameters) in the law domain. In addition, we are currently testing this approach in the biology domain and will include these results. Broader exploration of large models for PRM initialization/auto-labeling remains future work. > More examples of mislabeled CoTs to analyze failure cases. We provide example CoTs mislabeled by math PRMs but not by VersaPRM in [this link](https://drive.google.com/file/d/1IWVXEgDCpdhDHZ_zixL3ZHtbhq9xWGQq/view?usp=sharing). We will include more diverse examples of errors that VersaPRM also makes. > There are a few typos and repeated words. Please check carefully. Thank you for noting this—we will thoroughly proofread the manuscript. **Final note:** Thank you again for the comments, and we appreciate the thorough review!
Summary: This paper introduces VersaPRM, a multi-domain Process Reward Model (PRM) designed to enhance reasoning capabilities across diverse domains beyond mathematics. The authors identify that current PRMs are predominantly trained on mathematical data and demonstrate poor generalization to non-mathematical domains. To address this limitation, they develop a synthetic data generation and annotation pipeline that uses Llama-3.1-8B-Instruct to generate Chain-of-Thought (CoT) reasoning steps and Llama-3.1-70B-Instruct to automatically label these steps. Using this synthetic data, they train VersaPRM, which shows performance gains across multiple domains of MMLU-Pro compared to math PRMs. The authors contribute their multi-domain PRM, the MMLU-Pro-CoT-Train (Labeled) dataset, and open-source their implementation. Claims And Evidence: While the paper makes several reasonable claims, there are some gaps in the evidence presented. The core claim that VersaPRM generalizes better across domains (VersaPRM outperforms math PRMs) is supported by comparative evaluations, but a fundamental question remains unanswered: Is the improvement primarily from the distillation of a stronger model's knowledge (Llama-70B) into a smaller model, rather than from the PRM architecture or training methodology itself? The authors do not adequately control for this variable. Methods And Evaluation Criteria: The evaluation using MMLU-Pro is appropriate and provides a solid testbed with reasoning problems across 14 domains. The comparison across multiple test-time computation methods (MV, WMV, BoN, Beam Search, MCTS) shows the robustness of their approach across different inference-time techniques. However, the authors do not adequately control for computational costs across these methods, making efficiency comparisons difficult. 
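A minimal sketch may help fix ideas about the aggregation methods named here (WMV and BoN reranking with a PRM). The min-over-steps trajectory score and the toy candidates below are illustrative assumptions, not the paper's exact scoring rule:

```python
# Illustrative sketch of weighted majority voting (WMV) and best-of-N (BoN)
# reranking with a PRM. `candidates` pairs each sampled final answer with the
# PRM's per-step scores for that chain of thought.
from collections import defaultdict

def trajectory_score(step_scores):
    # One common convention (an assumption here): a chain is only as
    # trustworthy as its weakest step.
    return min(step_scores)

def weighted_majority_vote(candidates):
    # Sum PRM scores per final answer; return the answer with the most mass.
    weights = defaultdict(float)
    for answer, step_scores in candidates:
        weights[answer] += trajectory_score(step_scores)
    return max(weights, key=weights.get)

def best_of_n(candidates):
    # Return the final answer of the single highest-scoring chain.
    best_answer, _ = max(candidates, key=lambda c: trajectory_score(c[1]))
    return best_answer

candidates = [
    ("A", [0.9, 0.8, 0.7]),    # two sampled chains answer "A"
    ("B", [0.99, 0.95, 0.9]),  # one strong chain answers "B"
    ("A", [0.6, 0.5, 0.4]),
]
print(weighted_majority_vote(candidates))  # "A": total weight 0.7 + 0.4 = 1.1 > 0.9
print(best_of_n(candidates))               # "B": the single best chain
```

The contrast shows why the two methods can disagree: WMV rewards answers supported by many decent chains, while BoN trusts the one chain the PRM likes most.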
The evaluation of auto-labeling quality through manual inspection provides a reasonable sanity check, though a larger sample could strengthen confidence in the dataset quality. For a dataset spanning multiple domains with thousands of examples, 30 examples is insufficient to establish confidence in labeling quality. However, what's fundamentally unclear is whether the paper's approach is simply knowledge distillation from Llama-70B to a smaller model. The authors do not include a crucial baseline: using Llama-70B directly as an evaluator at test time. This would help determine whether training a separate PRM provides any advantage over directly using the stronger model's judgments. Theoretical Claims: The paper does not make significant theoretical claims requiring proof verification. The formal definitions provided in Section 3 for PRMs and aggregation methods are straightforward and align with established concepts in the literature. Experimental Designs Or Analyses: The experimental design is comprehensive and sound: 1. The authors use appropriate baselines, including multiple open-source math PRMs and majority voting. 2. The evaluation across all domains of MMLU-Pro with consistent metrics allows for fair comparison. 3. The experiments with DeepSeek-R1 (Figure 7) provide preliminary evidence (limited to the law subset) that the approach benefits even stronger reasoning models. However, Several issues exist in the experimental design: 1. Lack of a proper baseline using Llama-70B directly as an evaluator at test time, which would help isolate whether the benefit comes from distilling Llama-70B's knowledge or from the PRM training approach. 2. The experiments do not adequately assess the cost-effectiveness of the approach. Using a 70B model to generate training data is expensive, and it's unclear if this cost is justified by the performance gains. 3. 
The validation of auto-labeling quality uses a small sample size (30 questions), which could be expanded for more robust validation of the dataset quality. Supplementary Material: I reviewed the supplementary materials which provide extensive details on: - The synthetic data generation pipeline including prompt templates (Appendix A) - Search algorithm implementation details (Appendix B) - PRM training configuration details (Appendix C) - Additional evaluation results across all domains and methods (Appendix D) Relation To Broader Scientific Literature: This work relates to: 1. It extends work on Process Reward Models (Lightman et al., 2024; Wang et al., 2024b; Luo et al., 2024) by addressing their domain generalization limitations. 2. It builds on test-time computation techniques for LLM reasoning (Snell et al., 2024; Yao et al., 2024; Wan et al., 2024) by proposing a practical method for constructing efficient PRM evaluators. Essential References Not Discussed: The paper covers parts of the relevant prior work comprehensively (areas covered in related work section). However, the paper lacks discussion of several important related areas: 1. Knowledge distillation literature, which is essentially what the authors are doing when using Llama-70B to generate training data for a smaller model. 2. Literature on model alignment through AI feedback (e.g., Constitutional AI approaches), which uses similar techniques of having larger models provide feedback to smaller models. 3. Recent work comparing the cost-effectiveness of using larger models at test time versus training specialized models, which would provide important context for evaluating their approach. 4. Work on zero-shot evaluation capabilities of large models, which would be relevant for understanding the baseline capability of Llama-70B as a direct evaluator. 
Other Strengths And Weaknesses: **Strengths:** - The problem addressed is significant and practical, as it enables more effective use of test-time computation across diverse domains - The synthetic data generation pipeline is effective - The comprehensive ablations provide valuable insights about what factors matter for multi-domain PRM effectiveness - The open-sourcing of data, code and models will benefit the research community **Weaknesses:** - Fundamental ambiguity about whether improvements come from knowledge distillation or PRM training - The manual evaluation of auto-labeling quality uses a relatively small sample - Limited discussion of potential biases inherited from the generator and labeler LLMs - The improvements in some domains (e.g., History) remain modest compared to other domains, but reasons for this variability aren't deeply analyzed - No ablation study about the synthetic data generation pipeline. This appears to be the main contribution of the paper but the process is not dissected well. Other Comments Or Suggestions: - A more detailed error analysis showing specific examples where VersaPRM succeeds but math PRMs fail would enhance understanding of the model's strengths - Visualizing what constitutes "good" versus "bad" reasoning steps across different domains could provide interesting insights - Exploring more efficient inference techniques that require fewer candidate solutions would enhance practical applicability - Investigating whether the approach could be extended to open-ended generation tasks beyond multiple-choice questions would be valuable Questions For Authors: 1. Have you evaluated using Llama-70B directly as an evaluator at test time compared to your VersaPRM? This would help isolate whether the benefit comes from distilling Llama-70B's knowledge or from your specific PRM training methodology. 2. 
Have you analyzed the specific types of reasoning errors that VersaPRM is better at identifying compared to math PRMs across different domains? This could help explain the varying degrees of improvement observed across domains. 3. Have you explored using VersaPRM for iterative refinement of reasoning (rather than just reranking), where feedback from the PRM guides the generation process itself? This could potentially lead to higher quality reasoning with fewer total generations. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback. We've addressed your concerns below. > Multiple test-time computation methods (MV, WMV, BoN, Beam Search, MCTS) but computational costs across these methods are not controlled We clarify the computational costs of test-time methods: WMV and BoN scale with the number of generated CoT solutions $N$. Beam Search’s number of beams $B$ is equivalent to $N$, and MCTS scales with the branching factor times the number of iterations. Figure 6 compares MCTS and Beam Search in terms of computational cost for an equivalent number of generated CoT solutions. However, our goal is not to compare inference-time methods against each other, but to show that VersaPRM consistently outperforms math PRMs across all these methods. > 30 examples is insufficient to establish confidence in labeling quality We expanded our sample to 60 questions. The revised analysis shows that 78% of responses marked correct by the autolabeler were accurate (95% CI: 0.68–0.89), and 70% of those marked incorrect were accurate (95% CI: 0.62–0.78). Based on this, we estimate 74% of CoT responses in the training set are correctly labeled. We will label more examples for the revision. > Using Llama-70B directly as an evaluator at test time compared to VersaPRM Thank you for your valuable suggestion. We provide an [additional experiment](https://drive.google.com/file/d/1rwev0yBT0scwyMj1Tgc6nXaUGy1EsFad/view?usp=sharing) comparing VersaPRM against Llama-70B directly as a judge on MMLU-CoT-Eval. VersaPRM outperforms Llama-70B across all values of $N$. Recall the training data for VersaPRM is labeled by Llama-70B with access to the correct answers during prompting, ensuring high-quality step-wise labels (Section 5.3). At inference, however, Llama-70B lacks prior knowledge of the correct answer, limiting its effectiveness as a judge. Thus, the gains stem from our PRM methodology, not merely Llama-70B’s knowledge. 
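As a rough sanity check on binomial confidence intervals like those quoted in this rebuttal, a normal-approximation (Wald) interval can be computed as below. The rebuttal does not state which interval method was used, and the 47/60 count is purely illustrative:

```python
# Normal-approximation (Wald) confidence interval for a binomial proportion.
# Illustrative only; the rebuttal's exact interval method is not specified.
import math

def wald_ci(successes, n, z=1.96):
    """Return (point estimate, lower, upper) for a ~95% Wald interval."""
    p = successes / n
    half = z * math.sqrt(p * (1.0 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical audit: 47 of 60 sampled labels judged accurate.
p, lo, hi = wald_ci(47, 60)
print(f"p = {p:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")  # p = 0.78, 95% CI = (0.68, 0.89)
```

For samples this small, an exact (Clopper-Pearson) or Wilson interval would be preferable to the Wald approximation, but the orders of magnitude agree.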
> The experiments do not adequately assess the cost-effectiveness of the approach; Using a 70B model to generate training data is expensive We used Llama-70B via AWS Bedrock batch inference to label ~85K CoTs in our training data, costing ~$50. This is far cheaper than manual labeling or stepwise rollouts. We will clarify this in the final paper. > Lack discussion of several important related areas Thank you for suggesting potential discussion on important related areas. We will add in detailed discussion of these areas in the final paper. > Limited discussion of potential biases inherited from the generator and labeler LLMs Thank you for raising this issue. Since both generator and labeler LLMs are Llama models, VersaPRM may perform better on Llama-based inference. A detailed study of the bias would be important, especially for alignment. We will add this discussion in the revision. > The reason behind moderate improvements in some domains (e.g., History) compared to other domains Manual inspection of questions in domains with moderate improvements (e.g., history and health) reveals that these primarily test factual recall over reasoning. We posit that aggregation methods like BoN and WMV are less effective in these domains, as success depends largely on the LLM’s factual knowledge–not reasoning quality–and whether the PRM can recognize the correct fact. Therefore, we suspect PRMs struggle in these domains due to weaker pretrained knowledge of health/history. > No ablation study about the synthetic data generation pipeline. Section 5.3 presents an ablation on auto-labeling prompts (Lines 275–282). We also tested removing ground-truth answers, which dropped agreement rates from 83% to 60% for CoT originally autolabeled as correct and 70% to 40% for those autolabeled as incorrect. We are conducting further end-to-end PRM evaluations using ablated training data and will update results accordingly. 
> A more detailed error analysis showing specific examples where VersaPRM succeeds but math PRMs fail; Visualizing what constitutes "good" versus "bad" reasoning steps across different domains We provide mislabeled CoTs from math PRMs that VersaPRM correctly labels [here](https://drive.google.com/file/d/1IWVXEgDCpdhDHZ_zixL3ZHtbhq9xWGQq/view?usp=sharing). We will include examples of VersaPRM errors. > VersaPRM for iterative refinement of reasoning (rather than just reranking) Section 6.3 tests MCTS and Beam Search, which achieve strong results with significantly fewer computations. > Open-ended generation tasks extension We include an [experiment on open-ended law questions](https://drive.google.com/file/d/1wYJO1daWMX-hngTnNg0hnrXX79Ggnhvt/view?usp=sharing). **Final Note**: We appreciate the in-depth feedback and hope our comments and additional experiments address the questions. If our responses have addressed your concerns, please kindly consider raising the score. --- Rebuttal Comment 1.1: Comment: Hi, Thanks for the detailed reply. I appreciate 1/ Experiment on Llama-70B directly as an evaluator at test time compared to VersaPRM 2/ expanded labelling analysis to 60 questions which makes their claim more grounded 3/ providing the additional discussions based on my comments. 4/ Expanding their study to open-ended law questions. I think some of my comments were not clear enough, mainly > VersaPRM for iterative refinement of reasoning (rather than just reranking) > > Section 6.3 tests MCTS and Beam Search, which achieve strong results with significantly fewer computations. What I meant here is that the authors should consider a process where VersaPRM is used to flag appropriate and inappropriate answers with both fed back to the inference LLM to refine its answer. Not only using VersaPRM to select the best one. I understand this might be out of scope though. > No ablation study about the synthetic data generation pipeline. 
>> Section 5.3 presents an ablation on auto-labeling prompts (Lines 275–282). We also tested removing ground-truth answers, which dropped agreement rates from 83% to 60% for CoT originally autolabeled as correct and 70% to 40% for those autolabeled as incorrect. We are conducting further end-to-end PRM evaluations using ablated training data and will update results accordingly. This is definitely a good starting point but I think that more can be done in this direction. There is quite rich literature on automated labelling such as [https://arxiv.org/abs/2308.08998, https://arxiv.org/abs/2408.04614] and this angle is not explored well. ** With that in mind, I think the additions the authors promised to add to the paper merit a score increase**. --- Reply to Comment 1.1.1: Comment: Thank you for your added clarification and for recognizing our new experiments. We will ensure all promised additions are included in the final paper. > What I meant here is that the authors should consider a process where VersaPRM is used to flag appropriate and inappropriate answers with both fed back to inference LLM to refine its answer. Not only using VersaPRM to select the best one. I understand this might be out of scope though. We appreciate the clarification and agree that using VersaPRM not just for selection but also as feedback to refine the LLM's answer is an interesting idea. Prior work has explored iterative refinement using natural language critiques or scalar/binary scores over entire responses as the form of external feedback. We find this idea promising and will include an experiment in the final version of the paper that tests this. Specifically, after the LLM generates an initial response, we will provide it with the step-level correctness scores assigned by the PRM and prompt it to revise its answer based on that feedback. We will conduct this experiment using both VersaPRM and baseline math PRMs to evaluate if VersaPRM leads to more effective refinement. 
> This is definitely a good starting point but I think that more can be done in this direction. There is quite rich literature on automated labelling such as [https://arxiv.org/abs/2308.08998, https://arxiv.org/abs/2408.04614] and this angle is not explored well. Thank you for highlighting these additional references. We agree that approaches like response-rewriting and self-training could further enhance synthetic data generation. Thus, in addition to our initial studies with counterfactual augmentation (Appendix A.3) which we ablated due to limited gains for the final trained PRM, we will extend our experiments to include the methods you suggested. In particular, we will experiment with 1) augmenting the training CoT data by using an LLM to generate alternate phrasings of existing steps that preserve their original meaning and 2) applying self-training by labeling additional CoT examples directly by VersaPRM, heuristically filtering them for correctness, and using them to further train VersaPRM. We will report the results of these experiments in the updated version of the paper. --- Thank you again for your time and constructive engagement—we’re grateful for your support in strengthening the paper!
Summary: Process Reward Models (PRMs) have been effective in improving mathematical reasoning for Large Language Models (LLMs) by utilizing increased inference-time computation. However, their generalizability to non-mathematical domains remains unproven. This work demonstrates that current PRMs perform poorly in non-mathematical domains. To address this, VersaPRM is introduced—a multi-domain PRM trained on synthetic reasoning data generated through a novel data generation and annotation method. VersaPRM achieves consistent performance gains across diverse domains. For example, in the MMLU-Pro Law category, VersaPRM improves performance by 7.9% using weighted majority voting, significantly outperforming Qwen2.5-Math-PRM's 1.3% gain. Additionally, all data, code, and models for VersaPRM are open-sourced for community use. Claims And Evidence: Yes Methods And Evaluation Criteria: - Advantages 1. A new method called VersaPRM (Multi-domain Process Reward Model) is proposed. By training on synthetic reasoning data, the reasoning ability of large language models in non-mathematical fields is effectively improved, filling the gap in the application of existing process reward models (PRMs) in multiple fields. 2. A novel synthetic data generation and automatic annotation method is designed. Chain-of-thought (CoT) reasoning is generated and automatically annotated using an LLM, which avoids the high cost and low efficiency of manual annotation, while ensuring data quality and diversity, providing rich materials for model training. Comprehensive evaluation criteria: A variety of evaluation methods are used, including weighted majority voting (WMV), best-of-N selection (BoN), beam search (Beam Search) and Monte Carlo tree search (MCTS), etc., to verify the performance of the model from different angles, which can fully reflect the performance of the model in different fields and different reasoning strategies. 3. 
All data, code and models are open source, which facilitates subsequent research and promotes further exploration and application of this field in academia and industry. - Disadvantages 1. Although the automatic labeling method is used, its accuracy still has room for improvement (about 75%), and there may be some incorrectly labeled data, which may interfere with model training and affect the further improvement of model performance. 2. Although a variety of evaluation methods are used, these methods are mainly based on existing reasoning strategies and data sets, and may not fully cover all possible reasoning scenarios and fields. For some more complex or challenging reasoning tasks, the performance of the model may need further verification. Theoretical Claims: N/A Experimental Designs Or Analyses: Although a comparative analysis with a variety of open source mathematical PRMs was conducted, the comparative analysis was mainly based on existing models and datasets. For some more advanced models or methods that have not yet appeared, the comparative advantages of the model may need to be re-evaluated. [Exploring Mathematical Extrapolation of Large Language Models with Synthetic Data](https://aclanthology.org/2024.findings-acl.55/) (Li et al., Findings 2024) Supplementary Material: Appendices A–D were reviewed. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback, and for acknowledging that (i) our method fills a gap in the existing literature on PRMs, (ii) we provide comprehensive evaluation criteria for our model, and (iii) our open-sourced code facilitates future exploration of this field. > Although the automatic labeling method is used, its accuracy still has room for improvement (about 75%), and there may be some incorrectly labeled data, which may interfere with model training and affect the further improvement of model performance. While the automatic labeling method introduces some noise (~75% accuracy), our PRM remains robust to moderate noise levels. This is evident from VersaPRM’s strong performance gains over previous math PRMs, despite being trained on noisy data. In this paper, we used Llama-70B due to budget constraints and as it was sufficient to create a model that outperforms previous math PRMs. That said, to further increase labeling accuracy, we can employ stronger models, such as GPT-4o, or reasoning models such as DeepSeek-R1 or OpenAI o1. This would give us higher accuracy labeling. Given time constraints for this response, we will aim to include an improved model with more accurately labeled data in the final version of the paper. > Although a variety of evaluation methods are used, these methods are mainly based on existing reasoning strategies and data sets, and may not fully cover all possible reasoning scenarios and fields. For some more complex or challenging reasoning tasks, the performance of the model may need further verification. We have evaluated VersaPRM across diverse test-time methods. We acknowledge the importance of broader validation and will discuss and explore additional reasoning scenarios in the final version. > Although a comparative analysis with a variety of open source mathematical PRMs was conducted, the comparative analysis was mainly based on existing models and datasets. 
For some more advanced models or methods that have not yet appeared, the comparative advantages of the model may need to be re-evaluated. Exploring Mathematical Extrapolation of Large Language Models with Synthetic Data (Li et al., Findings 2024) We will cite “Exploring Mathematical Extrapolation of Large Language Models with Synthetic Data” in the revised version. This paper introduces a novel arithmetical puzzle task and demonstrates how fine-tuning on large-scale synthetic examples enables precise multi-step mathematical reasoning. The fine-tuned model not only solves the puzzles but also generalizes to harder, out-of-distribution problems involving larger numbers and the composing components of the arithmetical puzzle problem. While this approach is effective for math, we note its applicability to non-math domains remains unclear, as defining comparable structured tasks in those areas presents challenges. **Final note:** Thank you again for the comments. If you have any remaining questions, please do not hesitate to let us know. --- Rebuttal Comment 1.1: Comment: I think your rebuttal is more effective. In the future, you can try to introduce your research into chess[1] and other research directions. It may be an interesting attempt. All in all I think this is a good approach. I've changed the rating. Good luck [1] Zhang, Y., Han, X., Li, H., Chen, K., & Lin, S. (2025). Complete Chess Games Enable LLM Become A Chess Master. arXiv preprint arXiv:2501.17186. --- Reply to Comment 1.1.1: Comment: Thank you for the updated review! We will mention even more challenging domains like game playing in the discussion section of the final paper.
Summary: This paper describes an automated pipeline for annotating chain-of-thought rationales with stepwise correctness labels. The authors generate a set of these annotated CoTs for MMLU-Pro, then train a model to predict these correctness labels. The result is VersaPRM, a process reward model for reasoning beyond math domains. The authors evaluate VersaPRM as a reranker against several math-specific PRMs using weighted majority voting, as well as plain majority voting without a reward model; they additionally evaluate VersaPRM as a scoring function for MCTS and beam search. Results on held-out MMLU-Pro questions show that weighted majority voting with math-specific PRMs fails to improve on simple majority voting in non-math-adjacent domains, whereas WMV with VersaPRM provides clear improvements in all considered domains. The authors also demonstrate that VersaPRM works well as a scoring function for search, providing better accuracy scaling with inference budget than majority voting or search with a math-specific PRM. Claims And Evidence: The authors' claims about their method are basically well-supported - see "Experimental Designs or Analyses" for a nitpick. Methods And Evaluation Criteria: MMLU-Pro makes sense as a benchmark to evaluate this approach. Theoretical Claims: N/A Experimental Designs Or Analyses: The comparisons the authors carry out in their main experiments are valid. However, to truly demonstrate that VersaPRM is "domain-general" I would have preferred to see some evaluation beyond just MMLU-Pro, especially since the training traces are gathered from it. Supplementary Material: I did not download and test the model checkpoint myself (linked anonymously in the paper), but I appreciate that the authors made it available. 
Relation To Broader Scientific Literature: There have been a handful of previous process reward modeling efforts in the literature, but as the authors note, these have mostly focused on the math domain, as this is where most step-by-step reasoning with LMs has been developed and evaluated due to easy answer verification. This is the first work I have seen attempt to train a domain-general reasoning PRM, but I may have missed one. Essential References Not Discussed: I am not aware of any essential references that the authors missed (with the exception of work currently in submission). "To CoT or Not to CoT" (Sprague et al. '24) could help contextualize the authors' finding that math PRMs are ineffective outside math domains - and the authors' demonstration that WMV with VersaPRM outperforms MV on non-math domains is extra surprising (positively) in light of the conclusion from that work that CoT primarily supports math performance and not other domains. Other Strengths And Weaknesses: I'm glad to see folks working to broaden the applicability of CoT and test-time search beyond math, these are encouraging results. The MCTS performance under VersaPRM is especially cool - looks like performance doesn't saturate with increasing budget nearly as quickly as it does with the other setups. Other Comments Or Suggestions: On line 306 there is a missing space between "PRM" and "via". Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback on the paper and for recognizing that (i) its claims are well supported and (ii) this work is the first attempt to broaden PRMs beyond the math domain. We will incorporate your suggested revisions into our final paper. > The comparisons the authors carry out in their main experiments are valid. However, to truly demonstrate that VersaPRM is “domain-general” I would have preferred to see some evaluation beyond just MMLU-Pro, especially since the training traces are gathered from it. While we do not currently have results for VersaPRM on datasets beyond MMLU-Pro, we have conducted additional hold-one-out evaluations, which we will include in the final paper. Specifically, we exclude one domain category (law, biology, or CS) from training and assess whether the resulting PRM generalizes to the held-out domain. Results are available [here](https://drive.google.com/drive/folders/1n1bMkNSQ9u923b9P_IVW1nl3ENmSJCec?usp=sharing). As shown, the WMV generalization performance of VersaPRM *trained with one domain held out* is comparable to that of the full model. This suggests that its generalization ability is not merely due to broader coverage of the training data, but rather indicates genuine domain-general reasoning capabilities. > I am not aware of any essential references that the authors missed (with the exception of work currently in submission). "To CoT or Not to CoT" (Sprague et al. '24) could help contextualize the authors' finding that math PRMs are ineffective outside math domains - and the authors' demonstration that WMV with VersaPRM outperforms MV on non-math domains is extra surprising (positively) in light of the conclusion from that work that CoT primarily supports math performance and not other domains. We appreciate this reference and will include it in Section 2. The following sentence will be added: *“According to the work of Sprague et al. 
(2024), most of the reported advantages using CoT stem from math or math-related tasks.”* > I'm glad to see folks working to broaden the applicability of CoT and test-time search beyond math, these are encouraging results. The MCTS performance under VersaPRM is especially cool - looks like performance doesn't saturate with increasing budget nearly as quickly as it does with the other setups. Thank you for this positive feedback. We will emphasize it in the final version. > On line 306 there is a missing space between "PRM" and "via". Thanks for catching this–we will fix it. **Final Note:** We appreciate the reviewer’s insights and suggestions and are encouraged by the positive assessment of our work on broadening test-time scaling methods to domains beyond math. --- Rebuttal Comment 1.1: Comment: I like the hold-one-out evaluation idea - that's a great experiment to include. Thanks for addressing my comments! --- Reply to Comment 1.1.1: Comment: Thank you for the follow-up. We are glad you like the hold-one-out evaluation, and appreciate the thoughtful feedback!
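As a concrete illustration of the MV-versus-WMV distinction discussed in this thread, here is a minimal sketch; the sampled answers and PRM scores below are made-up placeholders, not VersaPRM outputs.

```python
# Illustrative sketch (not the paper's code): majority voting (MV) picks the
# most frequent final answer among sampled chains of thought, while
# PRM-weighted majority voting (WMV) weights each sampled answer by its PRM score.
from collections import Counter, defaultdict

def majority_vote(answers):
    """Pick the most frequent final answer."""
    return Counter(answers).most_common(1)[0][0]

def weighted_majority_vote(answers, scores):
    """Pick the answer with the largest total PRM score."""
    totals = defaultdict(float)
    for ans, score in zip(answers, scores):
        totals[ans] += score
    return max(totals, key=totals.get)

# Five hypothetical sampled CoTs: "B" is more frequent, but the PRM assigns
# much higher scores to the chains that conclude "A".
samples = ["B", "B", "B", "A", "A"]
prm_scores = [0.2, 0.3, 0.2, 0.9, 0.8]
mv = majority_vote(samples)                       # "B" (3 votes vs. 2)
wmv = weighted_majority_vote(samples, prm_scores)  # "A" (1.7 vs. 0.7 total score)
```

This is why WMV can outperform MV when the PRM is well calibrated: a minority answer backed by high-quality reasoning traces can outweigh a noisy majority.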
A Reductions Approach to Risk-Sensitive Reinforcement Learning with Optimized Certainty Equivalents
Accept (poster)
Summary: The paper proposes two meta-algorithms for solving static risk-sensitive MDPs by using the augmented formulation for OCE objectives. The first one uses model-based oracles and the other uses policy-optimization-based oracles. Regret bounds are analyzed with the assumption that the oracles are reasonably well designed. Claims And Evidence: The claims are clearly presented with theoretical results (regret bounds) and some computational results. Methods And Evaluation Criteria: The methods and evaluations are suitable. Theoretical Claims: Proofs in the main section are checked. Experimental Designs Or Analyses: The experiments are reasonable and interesting to see. Supplementary Material: Supplementary material is reviewed. Relation To Broader Scientific Literature: RSRL is known to be hard to solve. This paper provides a good summary of existing methods for static OCE objectives, and demonstrates that efficient implementations are possible. Essential References Not Discussed: The related works are relatively complete. Other Strengths And Weaknesses: The paper is well written. The introduction is clear and the relationship with prior works is well discussed. The augmented approach is practical due to the objective being intuitive. The presented meta-algorithms cover a wide range of applications and problem setups. The biggest weakness is that the results are hardly surprising as the augmented MDP approach is well known and has been studied extensively. Plus, given a known oracle satisfying Assumption/Definition 3.1 & 4.1, all regret bounds seem to be easy to derive. Other Comments Or Suggestions: I am a little confused by the notation for the value function with all the sub/sup scripts. What's the relationship between: $ \hat V_{1,k} $, $ V_1^{\pi^k} $, $ V_{\text{aug}}^{\pi^k} $? Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 74co, Thank you for your encouraging review and strong support! We truly appreciate the time and effort that have been invested in providing constructive comments. Regarding your question about notation on the value function: * $\hat V_{1,k}$ is an estimated augmented value function from the Optimistic Oracle (Def 3.1). The subscript $1,k$ denotes it's at step $h=1$ and is from $k$-th round of the meta-algorithm (Alg 1). * $V^{\pi^k}\_1$ and $V^{\pi^k}\_{\text{aug}}$ are both denoting the true augmented value function of policy $\pi^k$ at step $h=1$. Thanks for catching this duplicated notation, we will consolidate these two notations into one. Regarding your concern about the novelty of our work: While the AugMDP has been studied for specific risk measures like CVaR, our work makes several novel contributions beyond straightforward application of this approach: 1. We establish the first risk-sensitive PAC bounds for exogenous block MDPs (Thm 3.6 in Sec 3.1) – not captured by low-rank MDPs (Zhao et al. 2024) and requires novel techniques to handle the interplay between coverability and the augmented MDP. 2. We derive the first risk-sensitive bounds for policy-gradient algorithms and prove a novel local improvement property in risk-sensitive RL (Thms 4.2 and 4.4 in Sec 4). This bridges the gap in the literature where policy gradient methods lacked theoretical guarantees in risk-sensitive settings. 3. By abstracting out oracles, we provide a unifying framework for studying the large class of OCE risk measures that were previously disconnected in the risk-sensitive RL literature (e.g., Bastani et al. 2022, Wang et al. 2023, Zhao et al. 2024). Our framework not only simplifies analysis but also enables new algorithmic design (leading to the above two contributions), applicable across multiple risk measures. 
Our Related Works section (Sec 1.1) provides a more comprehensive discussion that positions our novel contributions within the RL literature. We are happy to answer any questions on our technical contributions. Please let us know if you have any other questions!
Summary: In this paper, the authors study the risk-sensitive RL problem to optimize a risk measure of cumulative rewards. A family of risks called Optimized Certainty Equivalents (OCEs) is considered, and this includes popular risk measures such as CVaR and entropic risk. Two algorithms have been proposed in the paper, based on the optimism principle and policy gradient, respectively. By forming an augmented Markov Decision Process (MDP), the formalism developed for risk-neutral MDPs has been utilized in the paper. In the first algorithm, bounds that generalize prior results in CVaR RL have been established. For the policy-gradient-based algorithm, monotone improvement and global convergence guarantees have been established under some assumptions. Empirically, it has been verified that the proposed algorithms learn the optimal history-dependent policy and Markovian policies fail to achieve optimality. ## update after rebuttal The response addresses my major questions. I have slightly raised my score to 3 (as the generalization of AugMDP to the OCE setting does not seem to be very novel and is highly dependent on Rep-UCB in the literature). Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: The proofs seem to be okay. Experimental Designs Or Analyses: Experimental designs seem to be fine. Supplementary Material: Yes, I have reviewed the supplementary material, however, not very rigorously. Relation To Broader Scientific Literature: A family of risks called Optimized Certainty Equivalents (OCEs) is considered, and this includes popular risk measures such as CVaR and entropic risk. Two algorithms have been proposed in the paper, based on the optimism principle and policy gradient, respectively. In the first algorithm, bounds that generalize prior results in CVaR RL have been established. For the policy-gradient-based algorithm, monotone improvement and global convergence guarantees have been established under some assumptions. 
Essential References Not Discussed: No Other Strengths And Weaknesses: The paper is well-written, though difficult to follow in places. It introduces many concepts but some of the claims are largely based on existing works. There are some assumptions that require further justification. Other Comments Or Suggestions: Not applicable Questions For Authors: 1. Why do we need to assume that $Z(\pi)\in[0,1]$? Since this is a summation of rewards, this may create a dependence between per-stage reward and horizon length. What happens when this condition is violated? This assumption seems far from reality. 2. The augmented MDP for OCE has a state space which has a much higher dimensionality than the original state space. The approach for constructing the augmented MDP does not seem to be very efficient. 3. The optimistic oracle outputs an optimistic value function if two conditions, viz., optimism and bounded regret, are satisfied. How does one guarantee that these conditions will be satisfied? Also, the second condition states that the regret needs to be sublinear, i.e., $O(\sqrt{K})$. Is it possible to achieve logarithmic regret in the second condition? What will be the implication of the result on the overall convergence? 4. More intuition needs to be provided while defining $V_u^{\max}$. Please quantify this in detail. 5. The basic concepts behind the exogenous block MDP need to be elaborated. 6. Theorem 4.2 guarantees global convergence if POA_LG has small value estimation error. How does one guarantee this small value estimation error? 7. Overall, the paper introduces many notions towards a single objective. This impacts the readability of the paper, requiring more intuitions to be provided throughout the paper. Novelty of the proposed methodologies is limited, e.g., the generalization of AugMDP to the OCE setting does not seem to be very novel and is highly dependent on Rep-UCB in the literature. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer t1og, Thanks very much for providing helpful feedback. We truly appreciate the time and effort that have been invested in providing constructive comments. Please find our responses below. 1. Our assumption to normalize cumulative rewards is actually *more general* than normalizing per-step reward, as it allows for reward values that drastically vary across actions and states. For example, the reward sequence $0, 0, ..., 0, H$ is allowed under our framework (after scaling our normalization & bounds by $H$). This is in contrast to assuming that per-step rewards are in $[0,1]$ and cumulative rewards are in $[0,H]$, the arguably more popular assumption in RL theory, which only allows for dense rewards and precludes the above sparse reward example. A helpful resource that helped us understand the benefit of normalizing cumulative rewards is the paper http://proceedings.mlr.press/v75/jiang18a/jiang18a.pdf (which we cite in line 145) and their COLT 2018 talk https://youtu.be/If63ZSpEiSs at the 5:10 mark. 2. We note that the AugMDP for OCE only has one extra dimension compared to the original MDP, since the augmented state $b_h$ is a scalar. Moreover, the AugMDP is very easy to simulate, since $b_h$ has a known and deterministic transition function $b_{h+1}=b_h-r_h$, which adds essentially no overhead to the original MDP. Please see "Remark: the AugMDP is easy to simulate" in Lines 184-191 for more details. 3. Thanks for the question. Regarding how to satisfy the two conditions (optimism and bounded regret) in Def 3.1, we note these conditions are satisfied by many optimistic algorithms in RL even when applied to the AugMDP. For example, UCB-VI satisfies these conditions in tabular MDPs, Rep-UCB in low-rank MDPs, and GOLF in exogenous block MDPs. We refer the reviewer to the paragraphs after Thm 3.2 and the text in 3.1 for several ways to satisfy these two conditions. 
Regarding logarithmic regret in the second condition, this is certainly captured by our framework; the second condition only states that regret should be sub-linear, which certainly includes $O(\log K)$ regret as well. The implication on the OCE convergence is stated in Thm 3.2, which shows that the OCE regret is bounded by the oracle's regret multiplied by a constant $V^{\max}_u$ that doesn't change the rate in $K$. Thus, an oracle with logarithmic regret would yield an OCE algorithm with logarithmic regret. 4. Thanks for the suggestion, we'll certainly add more intuition about $V^{\max}_u$: this constant intuitively captures the statistical complexity of learning the OCE with utility $u$; for example, it is $1/\tau$ for CVaR at level $\tau$ and this is intuitive because smaller levels of $\tau$ measure a smaller sub-population and thus require more samples to learn. We will list and discuss more examples of $V^{\max}_u$ in the main text (they are currently in Table 4 of the appendix due to page limit). 5. We will also elaborate more about the exogenous block MDP in the revision. 6. Thanks for the question. In theory, small value estimation error can be ensured by online supervised learning or regression, which is fairly standard (Agarwal et al. 2021). In practice, policy optimization algorithms such as REINFORCE with value baseline or PPO maintain a separate value network and interleave gradient updates to the value network by minimizing squared loss (Schulman et al. 2017). We will add more discussions in the revision. 7. While we concur with the reviewer that our approaches are inspired by existing algorithms, we believe our work nonetheless contributes several novel and insightful results to risk-sensitive RL: 1. We establish the first risk-sensitive PAC bounds for exogenous block MDPs (Thm 3.6 in Sec 3.1) – not captured by low-rank MDPs (Zhao et al. 2024) and requires novel techniques to handle the interplay between coverability and the augmented MDP. 
2. We derive the first risk-sensitive bounds for policy-gradient algorithms and prove a novel local improvement property in risk-sensitive RL (Thms 4.2 and 4.4 in Sec 4). This bridges the gap in the literature where policy gradient methods lacked theoretical guarantees in risk-sensitive settings. 3. By abstracting out oracles, we provide a unifying framework for studying the large class of OCE risk measures that were previously disconnected in the risk-sensitive RL literature (e.g., Bastani et al. 2022, Wang et al. 2023, Zhao et al. 2024). Our framework not only simplifies analysis but also enables new algorithmic design (leading to the above two contributions), applicable across multiple risk measures. Our Related Works section (Sec 1.1) provides a more comprehensive discussion that positions our novel contributions within the RL literature. We are happy to answer any questions on our technical contributions. Please let us know if you have any other questions! --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. However, responses towards questions 3 and 6 need more clarity. While authors have provided some example algorithms which satisfy these conditions (optimism, bounded regret and small value estimation error), it is not clear how these conditions will be satisfied. More intuitions need to be provided regarding these. --- Reply to Comment 1.1.1: Comment: Dear Reviewer t1og, Thanks so much for your follow-up comments. We truly appreciate the opportunity to elaborate on our responses to questions 3 and 6. **Q3: How to satisfy optimism & bounded regret in Def 3.1** In the following, we describe separately how these two conditions are satisfied by model-based and model-free oracles in the augmented MDP: * Model-based oracles estimate the MDP's transition kernel $P_h$. For example, optimistic model-based oracles include UCB-VI for tabular MDPs and Rep-UCB for low-rank MDPs. 
To learn $P_h(s'\mid s,a)$, UCB-VI computes the maximum likelihood estimate by counting visitations of $(s,a,s')$, which is feasible since state and action spaces are finite in tabular MDPs. Rep-UCB generalizes this idea to low-rank MDPs by maximizing log-likelihood with a linear function class. Importantly, these methods only need to estimate transitions for the original states $s_h$, and *not* for the augmented budgets $b_h$, because the budget's transition is a known function: $b_{h+1}=b_h-r_h$. *This means that our approach introduces no extra statistical or computational complexity for learning transitions in the augmented MDP.* Both algorithms compute optimistic functions $\hat Q_{h,k},\hat V_{h,k}$ by planning in the learned model with an exploration bonus, which is large enough to ensure optimism but small enough to ensure sub-linear regret when the bonuses are summed across episodes. In sum, UCB-VI and Rep-UCB satisfy optimism and bounded regret via the same standard argument from, respectively, (Azar et al., 2017) and (Uehara et al., 2022), since learning the transition in the AugMDP doesn't add extra complexity. * Model-free oracles directly learn $Q,V$ without first estimating the transition kernel. The model-free algorithm we focus on is GOLF, which maintains a version space $\mathcal{F}\_k$ that contains functions with nearly-minimal Bellman error (across steps $h$) on the aggregated data at iteration $k$. The threshold for "nearly-minimal" is set at $\beta=\Theta(\log(KH|\mathcal{F}|/\delta))$ scale so that two properties hold under Bellman completeness (a standard condition for model-free algs that we also posit): (1) the optimal $Q^{\star}\_{aug}$ is an element of $\mathcal{F}\_k$, and (2) the total Bellman residuals over $K$ rounds is $O(\sqrt{K})$. To ensure optimism, GOLF selects the element in $\mathcal{F}\_k$ with the maximum value at $h=1$, which is optimistic by the above Property (1). 
Moreover, the performance difference lemma and Property (2) can be used to bound the regret by $O(\sqrt{K})$. Specifically, we invoke the result from (Xie et al., 2023) which bounds GOLF's regret by $\tilde{O}(H\sqrt{\text{Cov}\cdot K\log(|\mathcal{F}|/\delta)})$, where $\text{Cov}$ is the coverability of the MDP; notably, the coverability is small in the challenging exogenous block MDP (Ex-BMDP). Since GOLF is applied in the AugMDP, we then bound the AugMDP's coverability by the original MDP's coverability in Lemma D.7. Putting this together, we conclude that GOLF has bounded regret in augmented MDPs with low coverability, including the challenging Ex-BMDP; we note this is the first risk-sensitive bound for Ex-BMDPs. **Q6: How to satisfy small value estimation error in Def 4.1** This condition can be satisfied by reducing to standard policy evaluation, which is well-studied in RL. A simple approach is on-policy evaluation: roll out $\pi^k$ for each initial state to obtain an unbiased estimate of the value, which can be used for Monte Carlo estimation or regression. In risk-sensitive settings, we may however want to avoid rollouts for policy evaluation and instead apply off-policy evaluation (OPE) using the available data from prior rounds $1, 2, ..., k-1$, i.e., the replay buffer in practice. A simple and practical OPE algorithm is fitted-Q evaluation (FQE), which simply minimizes the policy TD error. Under Bellman completeness, FQE's estimation error is bounded by $\tilde{O}(\sqrt{\frac{C^{\pi^k}\log(1/\delta)}{n}})$, where $C^{\pi^k}$ is the density ratio between $\pi^k$'s visitation distribution and the data distribution [1,2]. There are also more sophisticated OPE methods that relax assumptions and obtain faster convergence [3,4]. In conclusion, multiple established methods in policy evaluation can effectively satisfy the small value estimation error in Def 4.1. Thanks again for your time and effort in providing valuable questions and comments! 
We'll certainly address and discuss all of the above in our revision. **Citations for Policy Evaluation** [1] Wang et al, "A Fine-grained Analysis of Fitted Q-evaluation: Beyond Parametric Models", ICML 2024. [2] Chang et al, "Learning Bellman complete representations for offline policy evaluation", ICML 2022. [3] Kallus and Uehara, "Double Reinforcement Learning for Efficient Off-Policy Evaluation in Markov Decision Processes", ICML 2020. [4] Yang et al, "Off-Policy Evaluation via the Regularized Lagrangian", NeurIPS 2020.
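The bonus-driven optimism described in this thread (a bonus large enough to ensure optimism, small enough that its sum over episodes stays sublinear) can be illustrated with a toy sketch. This is not UCB-VI itself; the one-state model, counts, and constants below are invented purely for illustration.

```python
# Illustrative sketch (our own toy, not the paper's algorithm): one optimistic
# backward value-iteration pass on an *estimated* tabular model, using a
# count-based exploration bonus ~ c * sqrt(log(1/delta) / N(s, a)) and
# clipping values at v_max so the optimism stays bounded.
import math

def optimistic_values(P_hat, R, counts, H, c=1.0, delta=0.1, v_max=None):
    """P_hat[h][s][a]: estimated next-state distribution {s': prob};
    R[h][s][a]: reward estimate; counts[h][s][a]: visit counts for the bonus."""
    states = list(P_hat[0].keys())
    v_max = H if v_max is None else v_max
    V = {s: 0.0 for s in states}                     # V_{H+1} = 0
    for h in reversed(range(H)):
        V_next, V = V, {}
        for s in states:
            q_vals = []
            for a in P_hat[h][s]:
                bonus = c * math.sqrt(math.log(1.0 / delta) / max(counts[h][s][a], 1))
                q = R[h][s][a] + bonus + sum(p * V_next[sp] for sp, p in P_hat[h][s][a].items())
                q_vals.append(min(q, v_max))         # clip to keep values in range
            V[s] = max(q_vals)
    return V

# Toy 1-state, 2-action, horizon-2 model: the rarely visited action "try" gets
# a much larger bonus than the well-explored "stay", inflating its optimistic
# value and thereby encouraging exploration of under-visited actions.
P_hat = [{"s": {"stay": {"s": 1.0}, "try": {"s": 1.0}}}] * 2
R = [{"s": {"stay": 0.5, "try": 0.4}}] * 2
counts = [{"s": {"stay": 100, "try": 1}}] * 2
V1 = optimistic_values(P_hat, R, counts, H=2)               # clipped at v_max = H
V_unclipped = optimistic_values(P_hat, R, counts, H=2, v_max=10.0)
```

With clipping at `v_max = H = 2` the optimistic value saturates at 2, while the unclipped variant shows the bonus of the rarely tried action dominating the backup.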
Summary: This work studies risk-sensitive reinforcement learning, where the target is to maximize $\max_{\pi} \max_{b} \\{b+E_{\pi}[u(\sum_{h=1}^H r_h - b)]\\}$ where $u$ is some utility function. For this problem, the authors prove that by augmentation, there exists a Markovian policy which attains optimality. Based on UCB-VI, the authors designed an algorithm with a provable regret bound. Then the authors combine their formulation with several other frameworks to generalize the convergence rate. Claims And Evidence: The theoretical claims are well supported by the regret bounds, while the empirical results are too minor to support the claim that the augmentation method is powerful. Methods And Evaluation Criteria: NA Theoretical Claims: As far as I see, the proofs are correct. Experimental Designs Or Analyses: Only a few experiments were conducted on a toy example, which does not significantly influence the evaluation of this work. Supplementary Material: No. Relation To Broader Scientific Literature: This work focuses on risk-sensitive RL with OCE reward, which provides insights for exploring the risk-sensitive RL problem. Essential References Not Discussed: The related works are well discussed. Other Strengths And Weaknesses: Strength: The authors designed a meta-algorithm which could be applied to many different OCE-RL settings, including the challenging exogenous block MDP. Weakness: My major concern is that the method in this work is very limited in both technical novelty and conceptual insight. Firstly, by the definition of OCE, it is very natural to formulate the problem with an augmented state space, which is roughly a combination of traditional RL (to learn $P$) with dynamic programming (to learn $b$). Secondly, the proofs in this work are merely straightforward applications of existing algorithms, which significantly limits the technical novelty. Other Comments Or Suggestions: I do not have any other comments. Questions For Authors: 1. 
Why do you consider the assumption that $Z(\pi)\in [0,1]$, instead of the classical assumption that each $r_h \in [0,1]$? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer ZPJ1, Thanks very much for providing helpful feedback. We truly appreciate the time and effort that have been invested in providing constructive comments. Please find our responses below. **Reviewer: The empirical results are too minor to support the claim that the augmentation method is powerful.** **Authors' reply:** Yes, we concur that our empirical results are only a proof-of-concept, while our main contribution is theoretical. Our empirical simulation is specifically to show that, in a minimal MDP, our methods indeed learn the optimal risk-sensitive policy while previous algorithms with Markov policies have bounded performance (i.e. they fail to learn the optimal policy), which motivates further research to step away from Markov policies and to use the AugMDP. **Reviewer: My major concern is that the method in this work is very limited in both technical novelty and conceptual insight. Firstly, by the definition of OCE, it is very nature to formulate the problem with an augmented state space, which is roughly a combination of traditional RL (to learn $P$) with dynamic programming (to learn $b$). Secondly, the proofs in this work are merely straightforward applications of existing algorithms, which significantly limits the technical novelty.** **Authors' reply:** While we concur with the reviewer that our approaches are inspired by existing algorithms, we believe our work nonetheless contributes several novel and insightful results to risk-sensitive RL: 1. We establish the first risk-sensitive PAC bounds for exogenous block MDPs (Thm 3.6 in Sec 3.1) – not captured by low-rank MDPs (Zhao et al. 2024) and requires novel techniques to handle the interplay between coverability and the augmented MDP. 2. We derive the first risk-sensitive bounds for policy-gradient algorithms and prove a novel local improvement property in risk-sensitive RL (Thms 4.2 and 4.4 in Sec 4). 
This bridges the gap in the literature where policy gradient methods lacked theoretical guarantees in risk-sensitive settings. 3. By abstracting out oracles, we provide a unifying framework for studying the large class of OCE risk measures that were previously disconnected in the risk-sensitive RL literature (e.g., Bastani et al. 2022, Wang et al. 2023, Zhao et al. 2024). Our framework not only simplifies analysis but also enables new algorithmic design (leading to the above two contributions), applicable across multiple risk measures. Our Related Works section (Sec 1.1) provides a more comprehensive discussion that positions our novel contributions within the RL literature. **Reviewer: Why do you consider the assumption that $Z(\pi)\in[0,1]$, instead of the classical assumption that each $r_h\in[0,1]$?** **Authors' reply:** Our assumption to normalize cumulative rewards is actually *more general* than normalizing per-step reward as it allows for reward values that drastically vary across actions and states. For example, the reward sequence $0, 0, ..., 0, H$ is allowed under our framework (after scaling our normalization by $H$). This is in contrast to the arguably more popular assumption in the RL theory community that per-step rewards are in $[0,1]$ and cumulative rewards are in $[0,H]$, which only allows for dense rewards and precludes the above sparse reward example. A helpful resource that helped us understand the benefit of normalizing cumulative rewards is the paper http://proceedings.mlr.press/v75/jiang18a/jiang18a.pdf (which we cite at line 145) and their COLT 2018 talk https://youtu.be/If63ZSpEiSs at the 5:10 mark. Please let us know if you have any other questions!
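The claim above, that Markov policies can be bounded away from the optimal static-risk value, can be checked on a toy two-step example. This is our own construction for illustration, not the paper's experiment: a lucky first-step reward should be followed by a safe second step, and an unlucky one by a gamble, a choice a Markov policy cannot make when the step-2 state does not record the realized reward.

```python
# Toy check (own construction): for static CVaR at level tau = 0.5, the best
# history-dependent policy strictly beats every Markov policy.
# Step 1: reward r1 is 0 or 1 with probability 1/2 each (single action).
# Step 2: choose SAFE (0.9 for sure) or RISKY (0 or 2 with probability 1/2).

def cvar(atoms, tau):
    """Exact CVaR (mean of the worst tau-fraction) of a discrete distribution."""
    total, mass = 0.0, 0.0
    for value, prob in sorted(atoms):
        take = min(prob, tau - mass)
        total += value * take
        mass += take
        if mass >= tau - 1e-12:
            break
    return total / tau

SAFE = [(0.9, 1.0)]                  # deterministic second-step reward
RISKY = [(0.0, 0.5), (2.0, 0.5)]     # fair gamble

def return_dist(policy):
    """policy maps the realized first-step reward r1 to a second-step action."""
    atoms = []
    for r1 in (0.0, 1.0):            # r1 = 0 or 1, each with probability 1/2
        for r2, p in policy[r1]:
            atoms.append((r1 + r2, 0.5 * p))
    return atoms

tau = 0.5
markov = max(cvar(return_dist({0.0: a, 1.0: a}), tau) for a in (SAFE, RISKY))
history = max(cvar(return_dist({0.0: a0, 1.0: a1}), tau)
              for a0 in (SAFE, RISKY) for a1 in (SAFE, RISKY))
# Best Markov policy (always SAFE) attains CVaR 0.9; the history-dependent
# policy "gamble when behind, play safe when ahead" attains 0.95.
```

Enumerating all four deterministic history-dependent policies shows the gap: the Markov optimum is 0.9, while conditioning the second-step action on $r_1$ (equivalently, on the augmented budget $b_h$) yields 0.95.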
Summary: **I am very unfamiliar with this topic. I will maintain the lowest confidence level.** This paper studies risk-sensitive RL, which is formulated through OCE. Two meta-algorithms are proposed with further analysis. Claims And Evidence: The paper proposes an augmented MDP for the OCE problem, which bypasses the challenges. The theorems indeed support the claims that the augmentation can help solve OCE. Methods And Evaluation Criteria: The experiments seem to only contain a simple MDP, which could benefit from more evaluations. Theoretical Claims: I checked the proof of THM 2.1. Although the proof is not very easy to follow, it seems to be correct. Experimental Designs Or Analyses: The experiment only contains one simple MDP, although neural policies and advanced algorithms like PPO are considered. Supplementary Material: n/a Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: The paper could benefit from more explanation. For example, in eq (1) defining OCE, it is unclear to me what this definition means. Some examples could help. In def 3.1, an oracle is introduced. Is this oracle used in later algorithm designs? And how do you implement it? Other Comments Or Suggestions: n/a Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 5fhe, Thanks for providing your helpful feedback. We truly appreciate the time and effort that have been invested in providing constructive comments. Please find our responses below. **Reviewer: The paper could benefit from more explanation. For example, in eq (1) defining OCE, it is unclear to me what this definition means. Some examples could help.** **Authors' reply:** Thanks for the suggestion. Eq (1) is the risk-sensitive RL objective defined using the static OCE risk measure and is the key objective we aim to optimize in the paper. Right after Eq (1), we provide a few examples of OCE risk measures, such as CVaR when u is the hinge utility and Markowitz's mean-variance when u is the quadratic utility. We will elaborate more on these examples in the revision by moving some content from Appendix B to the main text. **Reviewer: In def 3.1, an oracle is introduced. Is this oracle used in later algorithm designs? And how do you implement it?** **Authors' reply:** Thanks for the question! Yes, the optimistic oracle in Def 3.1 is used in the meta-algorithm Alg 1, and similarly the policy gradient oracle in Def 4.1 is used in the meta-algorithm Alg 2. In our paper, we discuss several examples of how to implement these oracles for different settings. For example, for the optimistic oracle in Def 3.1, we discuss three ways to implement it using (1) UCB-VI, (2) Rep-UCB and (3) GOLF. Moreover, for our policy gradient oracle in Def 4.1, we discuss natural policy gradients (NPG) as an example instantiation. By employing different oracles, we are able to derive guarantees in several types of MDPs, e.g. UCB-VI is used in tabular MDPs, Rep-UCB in low-rank MDPs, GOLF in exogenous block MDPs, NPG in MDPs with good initial state distribution, where the latter two are novel to risk-sensitive RL. If you have any further questions about how these oracles are implemented, we would be happy to answer them!
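The hinge-utility example mentioned in this rebuttal can also be verified numerically. The following is our own small check (not code from the paper) that the OCE objective $\max_b\\{b + E[u(Z-b)]\\}$ with the hinge utility $u(t) = -\frac{1}{\tau}\max(-t, 0)$ recovers the empirical CVaR at level $\tau$.

```python
# Numeric sanity check (own sketch): OCE with the hinge utility equals CVaR.
# OCE_u(Z) = max_b { b - (1/tau) * E[(b - Z)_+] }  (Rockafellar-Uryasev form),
# which should match the mean of the worst tau-fraction of outcomes.

def oce_hinge(samples, tau, b_grid):
    """Evaluate max_b { b - (1/tau) * E[(b - Z)_+] } over a grid of budgets b."""
    n = len(samples)
    def objective(b):
        return b - sum(max(b - z, 0.0) for z in samples) / (tau * n)
    return max(objective(b) for b in b_grid)

def empirical_cvar(samples, tau):
    """Average of the lowest tau-fraction of outcomes."""
    k = round(tau * len(samples))
    worst = sorted(samples)[:k]
    return sum(worst) / len(worst)

Z = [float(z) for z in range(100)]   # uniform over {0, ..., 99}
tau = 0.2
oce_val = oce_hinge(Z, tau, b_grid=[float(b) for b in range(101)])
cvar_val = empirical_cvar(Z, tau)    # mean of {0, ..., 19} = 9.5
```

Both quantities come out to 9.5, and the maximizing budget $b$ sits at the $\tau$-quantile, which is exactly the role the augmented budget variable plays in the AugMDP construction.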
Summary: This paper studies risk-sensitive reinforcement learning (RSRL) with the goal of learning a history-dependent policy that optimizes Optimized Certainty Equivalents (OCE) risk measures of cumulative rewards. The authors propose two meta-algorithms, one based on optimism and another on policy gradients, that reduce the RSRL problem to risk-neutral RL in an augmented Markov Decision Process (MDP). Theoretical guarantees, including regret bounds and convergence properties, are established, and empirical results demonstrate the effectiveness of the proposed methods in learning optimal history-dependent policies. Claims And Evidence: The claims are supported by rigorous mathematical proofs and experiments. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate. Theoretical Claims: I have checked the correctness of the main theoretical claims. The proofs appear to be logically sound and technically correct. However, the constant $V_{\max}^u$ is not defined formally, which is confusing. Experimental Designs Or Analyses: I have checked the soundness of the experimental design. Supplementary Material: I have checked the correctness of the main proofs. Relation To Broader Scientific Literature: n/a Essential References Not Discussed: See weakness part. Other Strengths And Weaknesses: ### Strength Strengths include the originality of the reductions approach to OCE RL, the significance of providing the first risk-sensitive bounds for exogenous block MDPs, and the clarity of the theoretical and empirical results. ### Weakness My main concern is that the author has not referenced the studies on the Lipschitz risk measure in [1, 2]. Although not previously discussed in related works, I believe that the OCE is encompassed within the Lipschitz risk measure. The factor $V_{\max}^u$ closely resembles the Lipschitz constant described in [1, 2], which might give a formal way to determine this factor. 
Since [2] has already proposed an optimism-based algorithm grounded in distribution learning for the Lipschitz risk, I suggest that the author offer a comprehensive comparison with [2]. This would help to clearly elucidate the novelty of the first contribution of this paper. [1] Liang, H., & Luo, Z. (2024, April). Regret bounds for risk-sensitive reinforcement learning with Lipschitz dynamic risk measures. In International Conference on Artificial Intelligence and Statistics (pp. 1774-1782). PMLR. [2] Chen, Y., Zhang, X., Wang, S., & Huang, L. (2024, July). Provable Risk-Sensitive Distributional Reinforcement Learning with General Function Approximation. In International Conference on Machine Learning (pp. 7748-7791). PMLR. Other Comments Or Suggestions: I recommend that the authors delve deeper into the computational efficiency of the proposed algorithms. For instance, they could explore and discuss what the major challenges are in developing a computationally and statistically efficient algorithm for OCE RL. If my concerns and comments are adequately addressed, I will consider raising my score. Questions For Authors: Given that discretizing the total reward in Risk-Sensitive Reinforcement Learning (RSRL) is a prevalent technique, as demonstrated in [3, 4], discretization appears to be a rational approach to address the discrete reward assumption (Assump. 3.5). Will this method work in OCE RL? [3] Bastani, O., Ma, J. Y., Shen, E., & Xu, W. (2022). Regret bounds for risk-sensitive reinforcement learning. Advances in Neural Information Processing Systems, 35, 36259-36269. [4] Chen, Y., Du, Y., Hu, P., Wang, S., & Huang, L. Provably Efficient Iterated CVaR Reinforcement Learning with Function Approximation and Human Feedback. In The Twelfth International Conference on Learning Representations. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 1jGh, Thanks very much for providing helpful feedback. We truly appreciate the time and effort that have been invested in providing constructive comments. Please find our responses below. **Reviewer: the constant $V^{\max}_u$ is not defined formally** **Authors' reply:** We kindly refer the reviewer to line 130 where $V^{\max}_u$ is defined. We will make sure to highlight this definition in the revision with examples and connection to Lipschitz risk measures (see next reply). **Reviewer: My main concern is that the author has not referenced the studies on the Lipschitz risk measure in [1, 2]. Although not previously discussed in related works, I believe that the OCE is encompassed within the Lipschitz risk measure. The factor $V^{\max}_u$ closely resembles the Lipschitz constant described in [1, 2], which might give a formal way to determine this factor. Since [2] has already proposed an optimism-based algorithm grounded in distribution learning for the Lipschitz risk, I suggest that the author offer a comprehensive comparison with [2]. This would help to clearly elucidate the novelty of the first contribution of this paper.** **Authors' reply:** Thanks very much for pointing out Lipschitz risk measures. You're right that they indeed encompass the OCE with Lipschitz constant at most $V^{\max}_u$ and we'll cite and discuss this as another way to interpret $V^{\max}_u$. Our work is distinguished from prior Lipschitz risk RL [1,2] as follows: * [1] studies RL with dynamic risk objective (a.k.a. iterated risk) $\rho(r_1+\rho(r_2+\rho(r_3+...)))$ which measures per-step risk and is different from our static risk objective $\rho(r_1+r_2+r_3+...)$ which measures trajectory-wise risk. Both objectives are important and we discuss dynamic risks in Lines 110-128 of Related Works, to which we'll add a citation to [1]. * [2] derives regret bounds for static Lipschitz risks under function approximation which is more related to our setting.
However, a key technical assumption made in [2] is bounded Bellman eluder dimension or witness rank, which are insufficient to capture the challenging exogenous block MDP setting. Indeed, the Bellman eluder dimension can grow with the size of the exogenous state space which is exponentially large or infinite (Xie et al. 2023). In contrast, we employ a coverability argument in the augmented MDP to obtain the first PAC bound for exogenous block MDP in risk-sensitive RL. Thus, our first contribution is distinct from [2] since our framework can deal with the challenging exogenous block MDP while [2] cannot. Moreover, our work includes other contributions such as studying policy gradient methods that are complementary to optimistic algorithms. **Reviewer: I recommend that the authors delve deeper into the computational efficiency of the proposed algorithms. For instance, they could explore and discuss what the major challenges are in developing a computationally and statistically efficient algorithm for OCE RL.** **Authors' reply:** Thanks for this suggestion, we'll certainly add more discussion in the revision. In short, our work shows that the computational and statistical complexity of OCE RL algorithms can largely be reduced to standard risk-neutral RL in the AugMDP. In terms of computation, the main costs come from 1) optimizing over $b$, which is efficient under reward discretization, and 2) querying the underlying risk-neutral RL oracle. In terms of sample efficiency, our main theorems (Thm 3.2 & 4.2) ensure that the OCE regret is bounded by the underlying oracle's regret up to constant factors. Thus, in both cases, the major challenge is devising a risk-neutral oracle that is efficient within the AugMDP. We will make sure to incorporate these discussions in the revision. 
**Reviewer: Given that discretizing the total reward in Risk-Sensitive Reinforcement Learning (RSRL) is a prevalent technique, as demonstrated in [3, 4], discretization appears to be a rational approach to address the discrete reward assumption (Assump. 3.5). Will this method work in OCE RL?** **Authors' reply:** Thanks for this question! Yes, discretizing rewards as in [3, 4] can indeed be used to address the discrete reward assumption (Assump 3.5), at the cost of a slower regret rate. Specifically, using a discretization width of $\epsilon$ introduces additional regret of $O(K\epsilon)$ and creates $|\mathcal{B}|=1/\epsilon$ atoms. Thus, our regret bound in Thm 3.6 would become $O(\sqrt{K/\epsilon} + K\epsilon)$. The $\epsilon$ which minimizes this is $\epsilon = \Theta(1/K^{1/3})$, which leads to a regret bound of $O(K^{2/3})$ which is sub-linear and meaningful. In other words, we can remove Assump 3.5 by discretizing rewards with bin width of $1/K^{1/3}$, and our regret bound in Thm 3.6 becomes $O(V^{\max}_uHK^{2/3} \sqrt{ Z^{en}A\log(|\mathcal{F}|/\delta) })$. We'll make sure to discuss this in the revision. Please let us know if you have any other questions! --- Rebuttal Comment 1.1: Comment: I appreciate the response, as it addresses most of my concerns. I have raised my score to 4.
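The $\epsilon$-balancing arithmetic in the discretization reply above, that $\epsilon = \Theta(1/K^{1/3})$ equates the $\sqrt{K/\epsilon}$ and $K\epsilon$ terms at order $K^{2/3}$, can be checked numerically. A minimal sketch (not part of the original thread; the episode count K is an arbitrary illustrative value):

```python
# Numeric check of the trade-off O(sqrt(K/eps) + K*eps) from the rebuttal:
# the balancing choice eps = K**(-1/3) makes both terms equal, at K**(2/3).
K = 10 ** 6
eps = K ** (-1 / 3)

term_estimation = (K / eps) ** 0.5   # sqrt(K/eps) part of the bound
term_discretization = K * eps        # K*eps discretization error

assert abs(term_estimation - term_discretization) < 1e-6 * term_estimation
assert abs(term_estimation - K ** (2 / 3)) < 1e-6 * term_estimation
```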
Correlated Errors in Large Language Models
Accept (poster)
Summary: This paper investigates the extent of correlation across large language models (LLMs) and its implications for systemic bias and multi-model collaboration. Key factors influencing this correlation include shared base model architecture and development organization. The study highlights that larger, more accurate models are highly correlated, even when both are incorrect. The findings emphasize the risks of "algorithmic monoculture" and its impact on decision-making. ## update after rebuttal Thanks for your response about my concerns. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper does not contain explicit formal mathematical proofs for the theoretical claims but relies on empirical analyses to substantiate its conclusions. Experimental Designs Or Analyses: The authors analyze model correlation by comparing the responses of 170 LLMs on 12,032 questions from the HuggingFace leaderboard and 71 models on 14,042 questions from the Stanford Helm leaderboard, aiming to calculate question-level agreement between pairs of models, particularly focusing on cases where both models are wrong. However, leaderboard metrics might not fully represent real-world performance, as the models may be finetuned for specific tasks, and the question sets may not reflect the distribution of tasks models will face in other applications. Supplementary Material: Yes, appendix A. Relation To Broader Scientific Literature: This paper highlights that using correlated models as "judges" leads to inflated performance estimates and potentially incorrect rankings. The authors show that accuracy inflation occurs when one model is used to proxy ground truth for other models, which can skew rankings and lead to overestimation of model quality. Essential References Not Discussed: No. Other Strengths And Weaknesses: No. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful and positive review. Given that you didn’t have any questions, we will be very brief in our response. One thing you point out is that leaderboard metrics do not fully represent real-world performance. We appreciate this comment, and will make note of it more clearly in the main text. This was also one of our motivations for including the resume / hiring analysis in the paper. To more directly show that our main findings generalize beyond MMLU tasks, we’ve extended the regression analysis on the Resumes dataset. We hand labeled 450 resume-job description pairs which serve as ground truth. We can then measure the correlation among models in their residuals (subtracting out ground truth label from model evaluations). We find that the same results hold (more accurate models and models from the same company are more correlated in their errors). This is shown in Table 1 in the following anonymized link (https://docs.google.com/document/d/e/2PACX-1vSlxmq6qMRG55uCdpwSuNqt9PbrEvi2hi2MEIQU-oZY8bw17lBo2W7Z8NFhKeDv4s23ZT42zJrStEeP/pub).
Summary: This paper investigates the correlated errors of different LLMs, which I find interesting and novel. The authors conduct extensive experiments analyzing the performance of different LLMs, revealing a high correlation in model errors. The experimental results largely support the core claims of the paper, and the study concludes with a discussion on the potential implications of this correlation for multi-model collaboration. Claims And Evidence: This paper empirically reveals a high correlation between model errors. The experiments cover a variety of tasks, and the experimental setup is reasonable, supporting the main claims of the paper. However, a deeper analysis of the specific causes of error correlation is somewhat lacking. Methods And Evaluation Criteria: Yes. Theoretical Claims: In my opinion, this paper is primarily an experimental analysis, with relatively little in-depth theoretical analysis. Experimental Designs Or Analyses: The experimental setup and analysis in this paper are reasonable. The experiments cover multiple tasks, the evaluation metrics are appropriate, and the complete experimental results are provided in the appendix. However, the downstream analysis is only evaluated on the labor market task, which may limit its generalizability. Supplementary Material: All. Relation To Broader Scientific Literature: This paper investigates the correlated errors among multiple LLMs, providing new insights into multi-LLM collaboration and evaluation. I find the paper interesting and believe it is highly relevant to the existing LLM research. Essential References Not Discussed: In my understanding, there are none. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: Is there any deeper theoretical insight regarding the correlated errors between LLMs? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful review and comments. We're glad that you found the experiments to be extensive, and supportive of our primary claims, and that you found the paper "interesting" and "highly relevant to the existing LLM research." We appreciate the comments you gave, which we discuss below. **Regarding the causes of error correlation:** One way that we tried to understand this is through the regression analysis, in which we determined that shared model architecture and developer were predictive of higher error correlation. We also found that more accurate models tend to be correlated. (Section 2.2) We agree, however, that an underlying theoretical explanation is an interesting direction for future work. **Regarding generalizability of downstream tasks:** We focus on the hiring setting since it has been a focus of the algorithmic monoculture literature (e.g., [1-4]). One other downstream task we consider in the paper is in the LLM as a judge setting. We have extended our experiments on this task, finding that the homogeneity of models can inflate scores (especially of models with shared components) for models that are less accurate than the judge. See the anonymized link below for plots showing the amount by which model accuracies are inflated (https://docs.google.com/document/d/e/2PACX-1vSlxmq6qMRG55uCdpwSuNqt9PbrEvi2hi2MEIQU-oZY8bw17lBo2W7Z8NFhKeDv4s23ZT42zJrStEeP/pub) **Regarding the theoretical insights of our work:** While we agree that our analysis is empirical, it is motivated directly by theoretical work: First, it tests the "component-sharing hypothesis" (i.e., that models that share components are more correlated) proposed by [1]. Our experiments support this hypothesis. However, it adds nuance to the hypothesis by showing that having independent components does not ensure independence: in fact, more accurate models (regardless of shared components) tend to be more homogeneous.
Second, it tests hypotheses of theoretical work in algorithmic monoculture, which has identified potential downstream outcomes of more homogeneous models (especially in labor market settings) (e.g., [4]). Our results test these predictions empirically. We very much appreciate your comments, and will make sure to highlight the connections of our work to existing theory. [1] Bommasani et al. Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? NeurIPS (2022). [2] Kleinberg and Raghavan. Algorithmic Monoculture and Social Welfare. PNAS (2021). [3] Creel and Hellman. The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Canadian Journal of Philosophy (2022). [4] Peng and Garg. Monoculture in Matching Markets. NeurIPS (2024).
Summary: This paper studies how correlated the mistakes of large language models (LLMs) are across two public leaderboards. This study considers a large range of LLMs and a large set of questions. The findings are that there is substantial correlation in model errors. A regression analysis suggests that high agreement is correlated with two models being from the same company, having the same base architecture, and having similar parameter counts. They show an implication of these results: under the LLM-as-judge paradigm, LLM accuracy is inflated by about 7%. The paper concludes with a case study where a labor market is simulated wherein firms use LLMs to screen job applications. The authors find similar error correlation here, and the implications are larger-than-random levels of systemic exclusion. Claims And Evidence: The paper's claims are generally well-supported. The authors provide extensive analysis across a large number of models to demonstrate the main point: a high degree of error correlation in LLMs. The result in Figure 2 is particularly clever and compelling. The finding that pairs of models with higher individual accuracy have more correlated errors is interesting, surprising, and could benefit from further exploration (see "Other Strengths and Weaknesses"). Methods And Evaluation Criteria: The methods and evaluation criteria are generally sound. The labor market analysis seems especially careful. Moreover, focusing on conditioning on incorrect answers makes sense and is intuitive. The main weakness is that the main analysis is limited to MMLU questions, which represent a narrow slice of possible LLM applications (see "Other Strengths and Weaknesses" for more discussion). Theoretical Claims: The paper doesn't make substantial theoretical claims requiring proofs. Experimental Designs Or Analyses: The main analysis and the labor market case study are both sound and carefully executed.
The only comment I had was regarding the regression analysis (see "Other Strengths and Weaknesses"). Supplementary Material: I looked over the tables in the supplementary material. Relation To Broader Scientific Literature: The paper makes a substantial contribution to our understanding of LLM behavior and the effects of monoculture. I would say its main contribution is taking analysis that is generally theoretical and providing substantial empirical evidence. Essential References Not Discussed: No Other Strengths And Weaknesses: - The result in Figure 2 is a very nice and clever analysis. It is convincing of the real-world implication of the findings of the paper -- that correlated errors in LLMs mean they can't be used in the LLM-as-judge paradigm to rate the performance of other LLMs. - It's a really interesting result that the two seemingly unrelated models -- google/text-unicorn and writer/palmyra-x-v3 -- are almost perfectly correlated when they make errors. - The result that pairs of models that are more accurate individually also have more correlated errors is interesting and surprising. It would've been nice to explore this more. For example, one possibility this brings up is that we're not controlling sufficiently for variables. Have you considered running more sophisticated methods for measuring these effects, e.g. using nonlinear outcome models (like in the double ML literature) to estimate these effects? - The model evaluation is extensive across models. However, all the questions in the main analysis are limited to MMLU questions (either on HuggingFace or Helm). These multiple-choice exam-like questions are a very narrow slice of the possible questions and benchmarks we may ask of LLMs, and so correlated errors on this dataset may not extend to other uses of LLMs.
It's interesting that in the second half of the paper the resume setting is considered -- which is indeed quite different from MMLU -- but this analysis is more limited, considering only 8 models. - The reason for conditioning on both model answers to be incorrect makes sense and is clear to me. Still it would be nice to have overall agreement statistics (perhaps in the appendix) for comparison. For example, it would be interesting to see in the google/text-unicorn and writer/palmyra-x-v3 pair whether the overall agreement rate extends to whether each question is answered correctly or incorrectly. - I liked the case study of LLMs in labor markets. One minor comment I had was that the external validity of studying systemic exclusion here was unclear. I fully understand that systemic exclusion is problematic when qualified candidates are being screened out due to monoculture, but the way that resumes/jobs are sampled in this simulation seems to allow the possibility that some candidates aren't well-suited for the chosen job, in which case having higher-than-random systemic exclusion rates is not a problem. Other Comments Or Suggestions: - The OLS tables in the appendix should be formatted more nicely (e.g. we don't need to know the date they were executed) - Footnote 3 (line 218, left column) cuts off early. Questions For Authors: No questions outside of the "Other Strengths And Weaknesses" section ===================== POST-REBUTTAL UPDATE ===================== After reading the rebuttal I will maintain my original (positive) review of the paper Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thoughtful and detailed review. We are happy to see that you found the paper to be a significant contribution to the LLM behavior and monoculture literature, and that you found the analyses to be interesting and thorough. We now discuss several of the points you raise, which we sought to address: **Generalizability beyond MMLU / limited # of models on hiring task.** We extended the resumes analysis to consider 20 models, adding: Meta models (Llama 3 70B Instruct, Llama 3.3 70B Instruct); Mistral AI models (Mistral Large 24.02, Mistral 7B Instruct); Amazon models (Nova Pro, Nova Micro, Nova Lite); Anthropic models (Claude 3.5 Sonnet, Claude 3.5 Haiku, Claude 3 Haiku); and OpenAI models (GPT-4o-mini, GPT-3.5-turbo, o1-mini) to our experiments. We also extended the regression analysis on this updated resumes dataset. We hand labeled 450 resume-job description pairs which serve as ground truth. We can then measure the correlation among models in their residuals (subtracting out ground truth label from model evaluations). We find that the same results hold (more accurate models and models from the same company are more correlated in their errors). This is shown in Table 1 in the following anonymized link (https://docs.google.com/document/d/e/2PACX-1vSlxmq6qMRG55uCdpwSuNqt9PbrEvi2hi2MEIQU-oZY8bw17lBo2W7Z8NFhKeDv4s23ZT42zJrStEeP/pub).
**More sophisticated models of the relationship between accuracy and homogeneity.** We agree that it would be interesting to consider nonlinear models, and will explore that in the future. **Extension to overall agreement.** Thanks for this suggestion! We’ll include this analysis in the appendix – overall agreement as well as agreement when either model errs. The high level findings remain the same with these metrics. (We do also find that google/text-unicorn and writer/palmyra-x-v3 in fact agree overall as well—they basically have the same answers everywhere.) Table 2 in this anonymous link (under "Extensions of Table 1") gives the main analyses for overall agreement (https://docs.google.com/document/d/e/2PACX-1vSlxmq6qMRG55uCdpwSuNqt9PbrEvi2hi2MEIQU-oZY8bw17lBo2W7Z8NFhKeDv4s23ZT42zJrStEeP/pub) Thanks also for the formatting notes and suggestions (on the regression tables and footnote 3). We’ll make those changes.
Summary: The paper studies agreement of LLMs on samples they make mistakes on, showing model pairs have well-above-chance error correlation. It shows that models from the same developer, and of higher accuracies, make more correlated errors. It then measures effects on candidates when LLMs are used to score CVs, simulating various scenarios such as using the same LLM, latest LLM, same company, etc., showing how systematic exclusion rates can differ in trends from average applicant ranks. Claims And Evidence: Most claims are well substantiated. I have only two issues: 1. How would figure 2 look if we replaced qwen 2.5 72b with other llms? The claim of models over-rating other model outputs currently seems cherry-picked. It just shows that qwen is maybe a liberal judge (perhaps also of human responses). 2. For the result showing models with the same architecture are more likely to agree with each other than chance, what architectures are considered in your data, and how many models of each type are available? Methods And Evaluation Criteria: Yes, more or less. I appreciate the use of HELM for error correlation (wide task coverage) and also resume datasets. However, one main concern is that measuring error correlation by counting agreement when both models make an error leaves out information about whether models also make errors on similar samples. Such information is accounted for in the model similarity metric proposed in parallel work [1], so it may be useful to use that instead. [1] Great Models Think Alike and this Undermines AI Oversight, Shashwat Goel, Joschka Struber, Ilze Amanda Auzina, Karuna K Chandra, Ponnurangam Kumaraguru, Douwe Kiela, Ameya Prabhu, Matthias Bethge, Jonas Geiping, ArXiv 2025. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes, I carefully checked experiment design and have two main concerns: 1.
Since scores are used to measure LLM correlation in the hiring setting instead of errors, it's unclear whether the observed results are undesirable or not. In particular, one might hope that different human reviewers too would arrive at similar scores for candidates, and the LLMs would provide similar scores to humans (leading to capable LLMs having similar scores). Overall, it's unclear what to take away from the resume/market analysis section. 2. Figure 1 is a bit hard to process. Are there any alternative visualisations that can convey the conclusions? Supplementary Material: Not provided. Relation To Broader Scientific Literature: Prior work has shown that models prefer their own outputs (which explains some of the within-family results shown in this paper). Parallel work [1] shows very similar results: models are becoming more similar with increasing capabilities, and LLM judges favour more similar models. It may be useful to use their similarity metric, as it takes into account what samples models make errors on, the likelihoods provided by the models, etc. Essential References Not Discussed: LLM Evaluators Recognize and Favor Their Own Generations, Arjun Panickssery, Samuel R. Bowman, Shi Feng, (NeurIPS 2024) -- paper showing models favour their own generations should probably be cited in Section 2. Other Strengths And Weaknesses: **Strengths**: 1. The paper shows that different models have correlated errors on popular leaderboards. 2. The paper demonstrates effects on a downstream application of LLMs in hiring under realistic modelling assumptions. **Weaknesses**: Highlighted above. Particularly, it's unclear why the metric used is optimal (there is very little discussion around this), and what to take away from the downstream study where score correlation is measured instead of error correlation. Further, Figures 1 and 2 and Table 1 can be improved for clarity, presentation, and informativeness. Other Comments Or Suggestions: None.
Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful review. We're glad that you found our claims well substantiated. We appreciated the questions you raised, which we aim to address below: **"How would figure 2 look if we replaced qwen 2.5 72b with other llms?"** Thanks for this question. We extended this analysis to other models (the top performing model of each architecture/company in HuggingFace/Helm), and find that the same high-level result holds for the high-performing models (which are most likely to be adopted as a judge). An interesting subtlety is that low-performing models tend to underinflate model performance of better-performing models. We will describe these additional experiments in the main text and add the experiments to the appendix. We plot the amount of accuracy inflation across different judges in the following anonymous doc ("Extension of Figure 2: LLM as judge": https://docs.google.com/document/d/e/2PACX-1vSlxmq6qMRG55uCdpwSuNqt9PbrEvi2hi2MEIQU-oZY8bw17lBo2W7Z8NFhKeDv4s23ZT42zJrStEeP/pub) **"what architectures are considered in your data, and how many models of each type is available?"** We considered 13 architectures. We will make note of this in the main text. Qwen2ForCausalLM (156), LlamaForCausalLM (108), Gemma2ForCausalLM (35), MistralForCausalLM (20), Phi3ForCausalLM (12), MixtralForCausalLM (4), ExaoneForCausalLM (2), CohereForCausalLM (2), InternLM2ForCausalLM (2), Qwen2VLForConditionalGeneration (2), FalconMambaForCausalLM (1), AriaForConditionalGeneration (1), Qwen2MoeForCausalLM (1) **Alternate metric of correlation on errors proposed in parallel work.** Thank you for sharing the reference to the parallel work. We enjoyed reading the paper (first made public after the ICML deadline). The paper develops a novel metric for measuring similarity that adjusts for model accuracy, considers different wrong predictions as a disagreement, and uses the probability distribution over answer choices.
So while our primary similarity metric (do models agree on an answer when both err) satisfies the first two criteria, it does not incorporate model probabilities or information about which questions the models erred on. Both metrics account for the fact that more accurate models are more correlated simply because they agree on correct answers. (We also note that the papers focus on distinct downstream outcomes, with our focus being on the algorithmic monoculture and hiring literature.) We will also add a discussion of the parallel work in our related work, as well as of the limitations of our metric. **Incorporating measures of job candidate quality.** You noted that homogeneity may be desirable in the hiring setting (e.g., if they all accurately identify the best candidates). This is a great point. While there is significant literature in algorithmic monoculture that focuses on measures like systemic exclusion that do not consider candidate quality, there are other works that do. To this end, we hand labeled 450 resume-job pairs in the dataset as “ground truth” for the resume’s quality. These labels allow us to assess the quality of matched applicants in stable matching. The experiment demonstrates that adopting the latest LLMs (that are more homogeneous and more accurate) improves the overall quality of selected candidates (higher quality candidates are more likely to be matched), illustrating tensions between different desiderata in the hiring process. This plot is given in the bottom of the anonymous doc (https://docs.google.com/document/d/e/2PACX-1vSlxmq6qMRG55uCdpwSuNqt9PbrEvi2hi2MEIQU-oZY8bw17lBo2W7Z8NFhKeDv4s23ZT42zJrStEeP/pub)) **Improvements to Figures** Thank you for the comments on figure clarity. In particular, we agree that Figure 1 (the heatmap of error correlation) is not currently very parsable. We’ve created a new version, in which models are ordered by their accuracy.
This makes a primary finding clear: that more accurate models are more correlated when they err. We’ve also made some edits to Figure 2 and Table 1. For Figure 2, we directly plot the **accuracy inflation** to convey the main point: that judges can inflate the accuracy due to correlated errors. (We do this for the set of models described above.) For Table 1, we incorporated the analogs for Helm and Resumes to increase the informativeness. These updated versions are given in the anonymous doc (https://docs.google.com/document/d/e/2PACX-1vSlxmq6qMRG55uCdpwSuNqt9PbrEvi2hi2MEIQU-oZY8bw17lBo2W7Z8NFhKeDv4s23ZT42zJrStEeP/pub) Thank you also for pointing us to the LLM evaluation paper, which we will add to our related work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I think while the paper makes interesting preliminary observations, the generalisable insights and takeaways are not very clear. For example in the new judge experiments, it is unclear why the high-accuracy judges are inflating scores but low-accuracy judges are not. It is also not clear how sensitive the result is to prompting. Similarly from the experiment on resumes, if improving capability should lead to more similar outcomes, it is not clear what the observation of higher similarity implies, and whether something needs to be fixed. The "13 different architectures" are mostly standard transformer models with minor variations, with a few Mamba/MoEs added that would make a more significant change, but are not separately analysed. It is thus also unclear what the effect of architecture on error correlation really is. While I appreciate that the authors tried to modify the figures, they remain hard to parse, and I imagine this would be worse for a more general audience. I think the paper should be accepted because it does some interesting analysis. But I could also see it being rejected as it seems inconclusive. Thus, I maintain my recommendation.
--- Reply to Comment 1.1.1: Comment: Thank you for the reply. We appreciate your comments, and will incorporate the feedback! (For example, regarding the judge results, we'll note that there are competing forces: lower accuracy models underinflate the accuracy of higher accuracy models since they do not identify the harder questions the higher accuracy models get correct; on the other hand, models generally overinflate other models' accuracies since they are correlated in how they are wrong.) We’re also happy to consider any additional feedback you may have regarding the figures or elsewhere (though, due to ICML response rules we won't be able to respond again).
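The similarity metric at the center of this exchange, agreement between two models conditioned on both answering incorrectly, can be sketched concretely. The function and the toy multiple-choice data below are hypothetical illustrations, not the paper's code or data:

```python
# Hypothetical sketch of the metric discussed in this thread: the agreement
# rate between two models, conditioned on BOTH answering incorrectly.
def error_agreement(answers_a, answers_b, truth):
    both_wrong = [(a, b) for a, b, t in zip(answers_a, answers_b, truth)
                  if a != t and b != t]
    if not both_wrong:
        return float("nan")  # undefined when the pair is never jointly wrong
    return sum(a == b for a, b in both_wrong) / len(both_wrong)

# Toy data (invented for illustration), 0-indexed questions.
truth   = ["A", "B", "C", "D", "A", "B"]
model_a = ["A", "C", "C", "A", "B", "B"]  # wrong on questions 1, 3, 4
model_b = ["A", "C", "D", "A", "C", "B"]  # wrong on questions 1, 2, 3, 4
# Both wrong on questions 1, 3, 4; they agree on 1 and 3, so the rate is 2/3.
print(error_agreement(model_a, model_b, truth))  # prints 0.6666666666666666
```

Note that agreeing on correct answers never counts toward the metric, which is how it avoids rewarding pairs of models simply for both being accurate.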
Explicit Discovery of Nonlinear Symmetries from Dynamic Data
Accept (poster)
Summary: The paper proposes a method for detecting non-linear symmetries in data. In a nutshell, the authors solve this by first defining a basis for the Lie algebra space (which is the space of infinitesimal generators), then solving the matrix equation of which linear combinations of these generators align with the "derivative of the data". The system the authors need to solve remains linear because they are working at the level of the infinitesimal generators. Claims And Evidence: The authors claim that LieNLSD is, to the best of their knowledge, "... the first method capable of determining the number of infinitesimal generators with nonlinear terms and their explicit expressions." I am not familiar enough with the literature to determine whether this is true or not. Methods And Evaluation Criteria: The experiments the authors propose are interesting, but I would be interested to understand if the method can work in other cases. See the Question section below. Theoretical Claims: I think the theoretical claims are well-defined and, to the degree I checked the math, correct. I would be interested in seeing, however, a complete (toy?) example, where all the steps can be followed "by hand". This is very helpful when discussing implementation of theoretical constructs. I see that the authors have some parts of it in the appendix (SO(2)). Maybe for that case the authors can provide a whole setting and show how their framework applies explicitly? Experimental Designs Or Analyses: The experiments are interesting, but I am not sure how standard they are in the field. See the Question section below. Supplementary Material: I reviewed some parts, regarding the examples and the further results for the experiments, without checking math details.
Relation To Broader Scientific Literature: If my interpretation of the claim of the paper is correct, namely "a method for detecting non-linear symmetries in data", I see very interesting potential applications of the method to generative modelling. There is a recent body of work that designs score-based generative modeling using Lie algebra (these works and others could also be included in the related work section): - Bertolini et al, "Generative Modeling on Lie Groups via Euclidean Generalized Score Matching" - Zhu et al, "Trivialized Momentum Facilitates Diffusion Generative Modeling on Lie Groups" - Kim et al, "Maximum likelihood training of implicit nonlinear diffusion model" Essential References Not Discussed: See above Other Strengths And Weaknesses: See below. Other Comments Or Suggestions: See below. Questions For Authors: My main question is related to the applicability domain of the method. The experiments look very specifically designed to me. So, to make things concrete, could you apply your method to the following scenarios: - SO(3) in R^3: I take data on S^2 presented in Cartesian coordinates (using the embedding S^2 \subset R^3). Can you discover the (local) generator of SO(3) just from the data? - If you take the rotated MNIST dataset, where each MNIST digit is randomly rotated (SO(2) symmetry). Can you discover SO(2) just from the data? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful reading and valuable feedback! Below we will address each of your concerns point by point. **Theoretical Claims** Taking the discovery of linear symmetries in the Heat equation $u_t=u_{xx}$ as an example. We specify the function library as $\Theta=[t,x,u]^T\in R^{3\times1}$. Then, the infinitesimal group action takes the form $$ v=W\Theta\cdot\nabla=\Theta^T W_1\partial_t+\Theta^T W_2\partial_x+\Theta^T W_3\partial_u, $$ where $W=[W_1,W_2,W_3]\in R^{3\times 3}$. Our ultimate goal is to solve for W. The algorithm first constructs $\Theta_n$. From Equations (7) and (8), we obtain $pr^{(2)}v=\phi^t\frac{\partial}{\partial u_t}+\phi^{xx}\frac{\partial}{\partial u_{xx}}+\dots$ (here, for simplicity, we omit irrelevant terms), where $\phi^t=-u_tD_t\Theta^T W_1-u_xD_t\Theta^T W_2+D_t\Theta^T W_3,\phi^{xx}=-(u_tD_{xx}\Theta^T+2u_{tx}D_x\Theta^T)W_1-(u_xD_{xx}\Theta^T+2u_{xx}D_x\Theta^T)W_2+D_{xx}\Theta^T W_3$. We rewrite it as $$ pr^{(2)}v= \begin{bmatrix} -u_tD_t\Theta^T&-u_xD_t\Theta^T&D_t\Theta^T\\\\ -(u_tD_{xx}\Theta^T+2u_{tx}D_x\Theta^T)&-(u_xD_{xx}\Theta^T+2u_{xx}D_x\Theta^T)&D_{xx}\Theta^T \end{bmatrix} vec(W)\cdot\nabla=\Theta_2vec(W)\cdot\nabla $$ Therefore, we get $\Theta_2$ in Equation (9). Substituting $D_t\Theta=[1,0,u_t]^T,D_x\Theta=[0,1,u_x]^T,D_{xx}\Theta=[0,0,u_{xx}]^T,u_t=u_{xx}$, we have $$ \Theta_2= \begin{bmatrix} -u_{xx}&0&-u_{xx}^2&-u_x&0&-u_xu_{xx}&1&0&u_{xx}\\\\ 0&-2u_{tx}&-(u_{xx}^2+2u_{tx}u_x)&0&-2u_{xx}&-3u_xu_{xx}&0&0&u_{xx} \end{bmatrix} $$ The PDE can be expressed as $F=u_{xx}-u_t=0$, so its Jacobian matrix with respect to $u_t,u_{xx}$ is $J_F=[-1,1]$. Substituting into Equation (12), we have $[u_{xx},-2u_{tx},-2u_{tx}u_x,u_x,-2u_{xx},-2u_xu_{xx},-1,0,0]vec(W)=0$. Comparing the coefficients, we obtain the basis of vec(W) as $[2,0,0,0,1,0,0,0,0]^T,[0,0,0,0,0,0,0,1,0]^T,[0,0,0,0,0,0,0,0,1]^T$. 
Substituting into $v=W\Theta\cdot\nabla$, we get the infinitesimal generators $v_1=2t\partial_t+x\partial_x,v_2=x\partial_u,v_3=u\partial_u$. These correspond to the linear generators $v_2,v_4,v_5$ shown in Table 6 (in Appendix I). **Relation To Broader Scientific Literature** Thank you for your addition! We will include the following content in the Related Work section. A series of recent works have made outstanding contributions to extending diffusion models from Euclidean space to Lie groups [1,2,3]. Since our method can directly discover the infinitesimal generators of Lie groups from data, we look forward to future integration with these works, eliminating the need for prior knowledge of the group to guide their processes. **Questions For Authors** Both of these scenarios involve discovering linear symmetries from static systems. By setting the prolongation order n=0 and specifying the function library $\Theta(x)=x$ as linear terms, LieNLSD can cover them (see Appendix F for details). The core issue is how to define F in Algorithm 1 and estimate $J_F$, which we will discuss in detail below. (1) For a two-dimensional manifold in $R^3$, its implicit equation can be written as $F(x)=0,x\in R^3$, but it may not have an explicit equation (e.g., $S^2$). Therefore, we use $k$-nearest neighbors to estimate the Jacobian matrix $J_F$. Specifically, we fit a tangent plane $J_F(x-x_0)=0$ to the manifold using the $k$ neighboring points around the given point $x_0$, and assign a Gaussian kernel weight $w_i=\exp(-\\|x_i-x_0\\|^2/2)$ to each point to reduce the influence of more distant points. In practice, we set the number of neighboring points to $k=5$. The experimental results are shown below. We successfully discover the generators of SO(3). 
Singular values: $[57.4,36.9,36.6,36.5,36.2,36.1,5.1,4.6,4.2]$ The Lie algebra basis (W) corresponding to the 3 smallest singular values: $$ \begin{bmatrix} 7.58\times10^{-4}&5.58\times10^{-4}&-0.706\\\\ 1.10\times10^{-3}&9.63\times10^{-4}&8.15\times 10^{-4}\\\\ 0.708&-1.07\times10^{-4}&-7.22\times10^{-4} \end{bmatrix}, \begin{bmatrix} -1.28\times10^{-3}&0.706&-1.89\times10^{-5}\\\\ -0.709&5.09\times10^{-4}&-1.71\times10^{-3}\\\\ 5.31\times10^{-4}&2.08\times10^{-3}&3.47\times10^{-4} \end{bmatrix}, \begin{bmatrix} -1.18\times10^{-3}&2.27\times10^{-4}&-5.77\times10^{-5}\\\\ -3.56\times10^{-3}&1.17\times10^{-4}&0.704\\\\ -9.67\times10^{-4}&-0.710&1.21\times10^{-3} \end{bmatrix} $$ (2) In this case, the group transformation acts on the coordinates $x\in R^2$, so we need to compute the derivative of the classifier with respect to the coordinates $D_xF$. We train a CNN on MNIST, which takes grayscale images $I(x)$ as input. Automatic differentiation yields $D_IF$. We then estimate $D_xI$ using central differences on the image and apply the chain rule to obtain $D_xF=D_IF\cdot D_xI$. The experimental results are shown below, which demonstrate that we successfully discover the SO(2) symmetry. Singular values: $[48.8,46.8,41.2,22.0]$ The Lie algebra basis (W) corresponding to the smallest singular value: $$ \begin{bmatrix} -2.90\times10^{-3}&-0.680\\\\ 0.733&-1.05\times10^{-2} \end{bmatrix} $$
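A minimal sketch of the SVD-based nullspace step exercised in these experiments, solving $C\,vec(W)=0$ and reading the Lie algebra dimension off the near-zero singular values. The toy invariant $F(x,y)=x^2+y^2$ and the linear library $\Theta=[x,y]^T$ are chosen purely for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: F(x, y) = x^2 + y^2 is rotation-invariant, and the linear
# function library is Theta(x, y) = [x, y]^T, so vec(W) has 4 entries.
points = rng.normal(size=(50, 2))

def constraint_rows(points):
    """One row of C per sample: J_F(p) @ diag[Theta^T, Theta^T], so that
    C @ vec(W) = 0 encodes the infinitesimal invariance criterion."""
    rows = []
    for x, y in points:
        jac = np.array([2.0 * x, 2.0 * y])        # J_F at this point
        theta = np.array([[x, y, 0.0, 0.0],       # diag[Theta^T, Theta^T]
                          [0.0, 0.0, x, y]])
        rows.append(jac @ theta)
    return np.array(rows)

C = constraint_rows(points)
_, s, vt = np.linalg.svd(C)                       # s is sorted descending
null_dim = int(np.sum(s < 1e-8 * s[0]))           # number of near-zero values
basis = vt[len(s) - null_dim:]                    # nullspace basis of vec(W)
```

Here `null_dim` comes out as 1 and `basis[0]` is proportional to $(0,-1,1,0)$, i.e. $W_1=[0,-1]^T$ and $W_2=[1,0]^T$, recovering the rotation generator $-y\partial_x+x\partial_y$ without fixing the dimension in advance.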
Summary: This paper introduces LieNLSD, a novel method for discovering nonlinear symmetries from trajectory data. It addresses the limitations of previous methods that primarily focus on linear symmetries. LieNLSD aims to determine the number of infinitesimal generators and their explicit expressions. The method involves specifying a function library for coefficients of the Lie algebra, showing its prolongation formula's linearity with respect to the library coefficient matrix, and solving a system of linear equations derived from the infinitesimal criterion using SVD. The authors also apply LADMAP for sparsification of infinitesimal generators. The method demonstrates strong results on various dynamical systems, and improves the accuracy of neural PDE solvers when using discovered symmetries for data augmentation. ## update after rebuttal My concerns were addressed and with the new ablations, I raised my score. Claims And Evidence: Most of the claims made by the paper appear to be substantiated by empirical evidence, while theoretical results appear to be well explained. However, limitations of the method remain completely unexplored and it is not clear whether the results were cherry-picked. Methods And Evaluation Criteria: The problems considered in this paper appear to be relevant and good examples for the problem considered. However, the settings considered here appear to be quite limited. To be precise, while the method here appears to be quite good, SVD- and library-based methods for symbolic discovery have been known to be susceptible to a multitude of issues, and in particular the following questions need to be addressed (at least through ablations): * How sensitive is the method to noise in the observations? * Currently, Table 3 presents the discovered bases, but not the true bases - making it unclear if they are accurate. * What happens when there is a dictionary (library) misspecification? 
* What is the effect of the number of samples of the trajectory on the discoverability? Theoretical Claims: The theoretical claims appear to intuitively be correct, but I have not looked at the exact proofs. Experimental Designs Or Analyses: Apart from the issues listed above, the analysis appears to be sound. Supplementary Material: n/a Relation To Broader Scientific Literature: There appear to be a few recent missed references on the usage of lie symmetries derived through the algebra in the context of neural PDE solvers, beyond augmentation. I list these below: https://arxiv.org/abs/2310.17053 https://www.mdpi.com/2673-9984/5/1/13 https://arxiv.org/pdf/2410.02698 https://arxiv.org/pdf/2212.04100v1 https://arxiv.org/abs/2206.09299 https://arxiv.org/pdf/2502.00373 https://arxiv.org/abs/2311.04293 Essential References Not Discussed: n/a Other Strengths And Weaknesses: see above. Other Comments Or Suggestions: see below. Questions For Authors: * Line 23 column 2 - Please provide a reference for why that is the case. * Line 55 column 1 - both W and \Theta are introduced for the first time. It would be beneficial for the paper to provide a short description of the method at the beginning of the paper, where these can be introduced. * Line 107 - this makes it seem that it is always possible to represent a group element as an exponential, but the exponential map may not be surjective. * Where does the notation for eqns 1 and 2 come from? This seems to be a bit different to Olver. * Line 227 "can be solved *for*." * Line 236 column 2 - the mapping f is meant to be F? * Why is KdV not included in Table 3 if it is used for point symmetry augmentation? * How are the networks F trained? * Line 402 column 2 - you claim that LPSDA is only applicable to 1-dim cases, but that does not appear to be the case in the paper considered. Could you elaborate on what you mean by this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful reading and valuable feedback! Below we will address each of your concerns point by point. **Methods And Evaluation Criteria** (1) See point (1) in Methods And Evaluation Criteria section of Rebuttal to Reviewer 1vcx. (2) According to the procedure in Section 2.4 of the textbook [1], given the expression of the PDE, we can mathematically derive its true infinitesimal generators: Burgers: $\partial_x,\partial_t,\partial_u,x\partial_x+2t\partial_t,2t\partial_x-x\partial_u,4tx\partial_x+4t^2\partial_t-(x^2+2t)\partial_u,\alpha(x,t)e^{-u}\partial_u$, where $\alpha_t=\alpha_{xx}$ Wave: $v_1=\partial_x,v_2=\partial_y,v_3=\partial_t,v_4=-y\partial_x+x\partial_y,v_5=t\partial_x+x\partial_t,v_6=t\partial_y+y\partial_t,v_7=x\partial_x+y\partial_y+t\partial_t,v_8=(x^2-y^2+t^2)\partial_x+2xy\partial_y+2xt\partial_t-xu\partial_u,$ $v_9=2xy\partial_x+(y^2-x^2+t^2)\partial_y+2yt\partial_t-yu\partial_u,v_{10}=2xt\partial_x+2yt\partial_y+(x^2+y^2+t^2)\partial_t-tu\partial_u,v_{11}=u\partial_u,v_\alpha=\alpha(x,y,t)\partial_u$, where $\alpha_{tt}=\alpha_{xx}+\alpha_{yy}$ Schrodinger: $\partial_x,\partial_y,\partial_t,-y\partial_x+x\partial_y,-v\partial_u+u\partial_v,-2t\partial_t-x\partial_x-y\partial_y+u\partial_u+v\partial_v$ Then, it is easy to verify that the infinitesimal generators discovered by LieNLSD, as shown in Table 3, are all correct. (3) See point (2) in Methods And Evaluation Criteria section of Rebuttal to Reviewer pziR. (4) See point (2) in Methods And Evaluation Criteria section of Rebuttal to Reviewer 1vcx. **Relation To Broader Scientific Literature** Thank you for your addition! We will include the following content in the Related Work section: Some recent works have introduced symmetry into PINNs [2,3,4,7,8,9] or performed data augmentation based on PDE symmetries [5,6], significantly improving the performance of neural PDE solvers. 
We expect to combine our data-driven symmetry discovery approach with these works in the future to explore methods for PDE solving without requiring prior knowledge of symmetries. **Questions For Authors** (1) True symmetries of the wave equation are provided in point (2) in Methods And Evaluation Criteria section of the Rebuttal (calculation process see [1]). As mentioned in Related Work section (Lines 128-130, Column 2), existing works on linear symmetry discovery use Lie algebra representations to characterize symmetries. Then, according to Eqn (2), they can only discover linear generators of the form $\phi(v)=d\rho_X(v)x\cdot\nabla$. For the wave equation, $v_1,v_2,v_3$ (translation symmetry) and $v_8,v_9,v_{10}$ (special conformal symmetry) do not conform to this form, meaning these symmetries cannot, in principle, be discovered by such methods. (2) We will revise the sentence to "Then the infinitesimal group action can be expressed in the form of $W\Theta$. Our ultimate goal ...". (3) We will revise the sentence to "... with a basis $\\{v_1,v_2,\dots,v_r\\}$, in the neighborhood of the identity element, we have ...". (4) In Chapter 1.4 of the textbook [1], Eqn (1.46) defines the infinitesimal group action as $\psi(v)=\frac{d}{d\epsilon}|_{\epsilon=0}\Psi(\exp(\epsilon v),x)$. However, $\psi(v)$ is usually written in the form $-y\partial_x+x\partial_y=(-y,x)\cdot\nabla$ rather than (-y,x). In Chapter 1.3 of the textbook, the paragraph immediately following Eqn (1.4) also mentions that readers can regard $\partial_x$ as "place holders" or a special "basis". To avoid confusion, we add the partial differential operator $\nabla$ to the infinitesimal group action, as shown in Eqns (1) and (2) of our paper. (5) We will make the revision. Thank you! (6) For a 1st-order dynamic system, the PDE can be written as $F(x,u^{(n)})=f(u')-u_t=0$ (Line 238, column 1). 
By treating u' as the input and $u_t$ as the output, we can train a neural network to approximate f, thereby obtaining F. (7) Due to page limitations, the experimental results of the KdV equation are provided in Table 6 of Appendix I rather than in the main text. (8) As mentioned in the response to Question 6, we indirectly obtain F by training a neural network to fit f. The training details are provided in Appendix H. (9) In terms of method, the original paper [6] only uses the 1D KdV equation as a worked example to discuss the implementation process of data augmentation. The trigonometric interpolation they employ becomes more complex in higher-dimensional cases. In terms of experiments, they only test on 1D evolution equations as mentioned in Section 4.1 of their paper. Moreover, their code implementation is also limited to 1D cases. **Reference** [1] Olver, Peter J. Applications of Lie groups to differential equations. [2] arxiv.org/abs/2310.17053 [3] www.mdpi.com/2673-9984/5/1/13 [4] arxiv.org/pdf/2410.02698 [5] arxiv.org/pdf/2212.04100v1 [6] proceedings.mlr.press/v162/brandstetter22a.html [7] arxiv.org/abs/2206.09299 [8] arxiv.org/pdf/2502.00373 [9] arxiv.org/abs/2311.04293 --- Rebuttal Comment 1.1: Comment: I want to thank the authors for their detailed responses. With the new ablations, I am happy to raise my score. --- Reply to Comment 1.1.1: Comment: We are delighted that our response has addressed your concerns. We sincerely appreciate your valuable suggestions and further recognition of our work!
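Points (6) and (8) above describe obtaining F indirectly by fitting f and then differentiating it. A hedged sketch of this idea, with a least-squares fit standing in for the neural network (the paper itself trains an MLP, see Appendix H), on synthetic heat-equation data $u_t=u_{xx}$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic heat-equation samples: the dynamics are u_t = u_xx exactly.
u_xx = rng.normal(size=200)
u_t = u_xx.copy()

# Stand-in for the neural surrogate f: least-squares fit u_t ~ a * u_xx.
a = float(np.linalg.lstsq(u_xx[:, None], u_t, rcond=None)[0][0])

# F(x, u^{(n)}) = f(u_xx) - u_t, so its Jacobian w.r.t. (u_t, u_xx) is [-1, a],
# matching the hand-derived J_F = [-1, 1] from the worked heat-equation example.
J_F = np.array([-1.0, a])
```

In the actual method the fit is a neural network and the Jacobian comes from automatic differentiation rather than a closed-form coefficient; the stand-in only illustrates the role the surrogate plays.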
Summary: The paper introduces a novel data-driven PDE symmetry discovery method. In contrast to prior approaches based on end-to-end symmetry discovery (Ko et al., Yang et al.), the proposed method applies a post-hoc nullspace analysis on the learned PDE operator to discover the subspace of infinitesimal symmetry generators that correspond to the Lie algebra basis. This strategy offers significant benefits that were impossible before: 1. The dimension of the symmetry is not a predefined hyperparameter but a discovered quantity 2. The generator's explicit differential operator representation could be discovered. Furthermore, while not highlighted by the authors, the training is much simpler and easier as the symmetry discovery is separated from the governing equation learning. This is a clear advantage over prior approaches that leveraged an unstable adversarial objective (Yang et al.) or several hyperparameter-sensitive regularizers (Ko et al.). Claims And Evidence: The paper's core claim is that the explicit representation of the PDE symmetry generators could be discovered with a post-hoc nullspace analysis on the Jacobians of the prolonged PDE operator as in Eq.13. This claim is theoretically and empirically well-supported. One major concern is the potential confusion or overstatement regarding the term 'nonlinear symmetry.' While the symmetry operators are nonlinear with regard to the coordinate variables, the PDE and the symmetry operators are linear w.r.t. the field. The proposed method **cannot discover generators that act nonlinearly** on the field as LaLiGAN does. The authors should clarify this in Table 1 to avoid overstatement. Methods And Evaluation Criteria: The method has been tested on four scientific PDEs. The method could discover the correct infinitesimal generators in their explicit form. The accuracy of the discovered subspace is measured with the Grassmann distance, which is also an important contribution. 
The accuracy of the discovered generator subspace is significantly better than LieGAN's, which is natural because the proposed method employs exact SVD-based subspace discovery instead of inaccurate feed-forward discovery. However, it is unclear how the proposed approach would behave when the sample points are not enough or the measurements are noisy. As the proposed method relies on higher order derivatives, it is expected that the model is vulnerable to noise or low-resolution measurements. Since LieGAN does not rely on higher order derivatives, the authors should also compare the proposed method against LieGAN with **noisy and/or insufficient samples**. Theoretical Claims: The theoretical claims appear to be valid. Experimental Designs Or Analyses: This has already been discussed in *Methods And Evaluation Criteria*. Supplementary Material: I have not thoroughly reviewed the correctness of every claim in the supplementary material. Relation To Broader Scientific Literature: As mentioned in the paper, the discovered symmetry could be used to augment the data for neural PDE solvers. It could also be used to prune the redundant generators that are discovered by approaches that require a predefined number of generators, like LieGAN. Essential References Not Discussed: Relevant works are appropriately cited. Other Strengths And Weaknesses: 1. The proposed method requires a predefined function library. 2. The proposed method cannot be used for arbitrarily high order prolongations. 3. Higher order derivatives may not be available or accurate in real-world scenarios. Other Comments Or Suggestions: As mentioned earlier, the statement regarding nonlinear symmetry needs further clarification, because both the PDE and the group action are linear. Questions For Authors: I am willing to raise my score once my two major concerns are addressed. 1. Clarification regarding the term *nonlinear symmetry.* 2. Validation under noisy or insufficient number of sample points. 
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful reading and valuable feedback! Below we will address each of your concerns point by point. **Claims And Evidence** Note that in our paper, the symmetry is defined on $X\times U$ (see the "Symmetries of differential equations" paragraph in Section 2, lines 82-88, column 2). Therefore, the symmetries we find can be nonlinear with respect to both the coordinates $X$ and the field $U$. However, from an implementation perspective, the types of PDE symmetries discovered by LieNLSD (ours) and LaLiGAN differ (perhaps this is what you are concerned about?). Taking $X=R^2,U=R$ as an example (e.g. $u(x,y)$ represents a planar image), the symmetries found by LieNLSD act pointwise on $X\times U=R^3$. Such symmetries are commonly referred to in the literature as "Lie point symmetries." On the other hand, the symmetries found by LaLiGAN are defined over the entire discretized field. Specifically, if $u$ is a field on a $100\times100$ grid, then LaLiGAN's symmetries act on $R^{100\times100}$ (see the Reaction-diffusion dataset in the original paper of LaLiGAN for details). For PDEs, the setting of Lie point symmetries (which act pointwise on both coordinates and the field) is more common, as the search space is significantly reduced compared to symmetries defined over the entire discretized field, and the physical interpretation is more intuitive. We will add the above comparison of the two types of symmetries to the related work section. Thank you! **Methods And Evaluation Criteria** (1) Noisy samples We add multiplicative noise $u'=u(1+\epsilon)$ to the variables in the training dataset to evaluate the robustness of LieNLSD against noise, where $\epsilon\sim N(0,\sigma^2)$. The quantitative comparison results (Grassmann distance) between LieNLSD and LieGAN for different noise levels $\sigma$ are as follows. 
Although the performance of LieNLSD declines as the noise level increases, its accuracy remains higher than that of LieGAN, which shows robustness to noise. |Dataset|LieNLSD|LieGAN| |-|-|-| |Top tagging ($\sigma=0$)|$\mathbf{(9.20\pm1.83)\times10^{-2}}$|$(2.51\pm0.41)\times10^{-1}$| |Top tagging ($\sigma=0.05$)|$\mathbf{(3.75\pm0.40)\times10^{-1}}$|$2.27\pm0.01$| |Top tagging ($\sigma=0.1$)|$\mathbf{(8.61\pm0.55)\times10^{-1}}$|$2.39\pm0.02$| |Burgers eqn ($\sigma=0$)|$\mathbf{(1.26\pm0.20)\times10^{-2}}$|$1.58\pm0.05$| |Burgers eqn ($\sigma=0.05$)|$\mathbf{(1.54\pm0.25)\times10^{-1}}$|$1.34\pm0.45$| |Burgers eqn ($\sigma=0.1$)|$\mathbf{(3.30\pm0.73)\times10^{-1}}$|$1.66\pm0.02$| |Wave eqn ($\sigma=0$)|$\mathbf{(1.40\pm0.01)\times10^{-2}}$|$2.36\pm0.15$| |Wave eqn ($\sigma=0.05$)|$\mathbf{(3.80\pm0.26)\times10^{-2}}$|$2.24\pm0.05$| |Wave eqn ($\sigma=0.1$)|$\mathbf{(1.55\pm0.64)\times10^{-1}}$|$2.24\pm0.26$| |Schrodinger eqn ($\sigma=0$)|$\mathbf{(8.62\pm1.31)\times10^{-2}}$|$2.22\pm0.05$| |Schrodinger eqn ($\sigma=0.05$)|$\mathbf{1.89\pm0.20}$|$2.48\pm0.28$| |Schrodinger eqn ($\sigma=0.1$)|$\mathbf{2.14\pm0.03}$|$2.46\pm0.16$| (2) Insufficient samples To evaluate the error impact of estimating high-order derivatives using central differences on low-resolution grid points (insufficient samples), we downsample the discrete points in the training dataset with a sampling interval of $s$. For the low-resolution dataset, we employ higher-precision central differences to estimate the derivatives. For example, we replace the first-order derivative calculation $u_x=[u(x+h)-u(x-h)]/(2h)$ (with a truncation error of $O(h^2)$) with $u_x=[-u(x+2h)+8u(x+h)-8u(x-h)+u(x-2h)]/(12h)$ (with a truncation error of $O(h^4)$). The results of the ablation study (Grassmann distance) are presented below. 
|Dataset|LieNLSD|LieGAN| |-|-|-| |Burgers eqn (s=1)|$\mathbf{(1.26\pm0.20)\times10^{-2}}$|$1.58\pm0.05$| |Burgers eqn (s=3)|$\mathbf{(2.56\pm0.48)\times10^{-1}}$|$1.31\pm0.45$| |Burgers eqn (s=5)|$\mathbf{(6.84\pm0.86)\times10^{-1}}$|$1.70\pm0.07$| |Wave eqn (s=1)|$\mathbf{(1.40\pm0.01)\times10^{-2}}$|$2.36\pm0.15$| |Wave eqn (s=3)|$\mathbf{(1.26\pm0.01)\times10^{-2}}$|$2.39\pm0.05$| |Wave eqn (s=5)|$\mathbf{(1.95\pm0.02)\times10^{-2}}$|$2.37\pm0.12$| |Schrodinger eqn (s=1)|$\mathbf{(8.62\pm1.31)\times10^{-2}}$|$2.22\pm0.05$| |Schrodinger eqn (s=3)|$\mathbf{2.08\pm0.07}$|$2.55\pm0.19$| |Schrodinger eqn (s=5)|$\mathbf{2.18\pm0.01}$|$2.52\pm0.21$| As the resolution decreases, LieNLSD still performs better than LieGAN. In fact, works on PDE symmetry discovery can hardly bypass the limitation that numerical derivatives on low-resolution data tend to have large errors. For example, when calculating the validity score, Ko et al. [1] also needed to compute numerical derivatives for the transformed data to evaluate whether it satisfies the PDE (see Section 4.2 of the original paper [1] for details). However, higher-precision difference methods can be introduced to mitigate the impact of low resolution. **Reference** [1] Ko et al. "Learning Infinitesimal Generators of Continuous Symmetries from Data." --- Rebuttal Comment 1.1: Comment: Dear authors, I appreciate your response and for conducting additional experiments with noisy observations. My concerns have been addressed, and I would like to keep my accept opinion. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your high-quality review comments and your further support for our work! We are glad that our response has addressed your concerns. In light of this, we were wondering if you might consider raising the score accordingly? Thank you once again!
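The higher-precision stencil quoted in this rebuttal, with truncation error $O(h^4)$ instead of $O(h^2)$, is easy to check numerically; a minimal sketch using sin as a stand-in test function:

```python
import numpy as np

def d1_central_o2(u, x, h):
    """Second-order central difference, truncation error O(h^2)."""
    return (u(x + h) - u(x - h)) / (2.0 * h)

def d1_central_o4(u, x, h):
    """Fourth-order central difference, truncation error O(h^4)."""
    return (-u(x + 2 * h) + 8 * u(x + h) - 8 * u(x - h) + u(x - 2 * h)) / (12.0 * h)

# Compare both stencils against the exact derivative of sin at x = 1.
x0, h = 1.0, 0.1
err_o2 = abs(d1_central_o2(np.sin, x0, h) - np.cos(x0))
err_o4 = abs(d1_central_o4(np.sin, x0, h) - np.cos(x0))
```

With $h=0.1$ the fourth-order stencil is roughly three orders of magnitude more accurate here, which is the mitigation invoked above for low-resolution grids.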
Summary: In this paper, the authors proposed LieNLSD, a method for explicitly discovering nonlinear Lie group symmetries that are not represented by $\mathrm{GL}(n)$, from data governed by a PDE $ F(x, u^{(n)}) = 0$. The proposed method parameterizes a nonlinear infinitesimal group action $\mathbf{v}$ acting on $(x, u)$ using a symbolic regression technique $\mathbf{v} = W \Theta(x, u) \cdot \nabla$, where $\Theta(x, u)$ is a predefined symbolic library and $W$ is a learnable matrix. The method then constructs the prolonged version of this group action, which satisfies $\mathrm{pr}^{(n)} \mathbf{v} = \Theta_n(x, u^{(n)})\mathrm{vec}(W) \cdot \nabla$. Utilizing the fact that the prolonged group action must satisfy $ \mathrm{pr}^{(n)} \mathbf{v} [F(x, u^{(n)})] = 0$ whenever the PDE $F(x, u^{(n)}) = 0$ exhibits symmetry under this group, the authors derive a natural learning criterion $J_F(x, u^{(n)}) \Theta_n (x, u^{(n)}) \mathrm{vec}(W) = 0$. To find Jacobians $J_F$ from data pairs $(x[i], u[i])$ originating from an unknown PDE, the authors first construct derivatives $u^{(n)}[i]$ using the central difference method, then approximate the PDE through a neural network $f$, assuming a form of $u_t$ (or $u_{tt}$ or $0$) $= f(u') \implies F(x, u^{(n)}) = f(u') - u_t = 0$. Using the Jacobian approximated via automatic differentiation of the neural network $f$, the authors solve $J_F(x, u^{(n)}) \Theta_n (x, u^{(n)}) \mathrm{vec}(W) = C \mathrm{vec}(W) = 0$ through SVD of $C$ (which means that the method automatically identifies the Lie algebra dimension based on the number of zero singular values), with additional sparsifications. The authors benchmark their proposed approach with top quark tagging and PDE datasets, and showcase the use-case of the extracted symmetries to enhance the performance of neural PDE solvers. Claims And Evidence: The authors' claims regarding the advancements of their proposed method are clearly summarized in Table 1. 
The algorithm design is carefully structured to support the authors' objectives. Additionally, the claims are empirically validated through experimental results. For instance, the proposed method accurately identifies nonlinear symmetries from data across three different PDEs. Furthermore, its capability to automatically estimate the dimension of the Lie algebra is demonstrated using the top quark dataset, in contrast to LieGAN, which treats the Lie algebra dimension as a hyperparameter. However, there are some concerns that need to be addressed to further support the authors' claims, which will be elaborated in the Method and Experimental Designs sections. Methods And Evaluation Criteria: This paper presents a rigorously justified method based on a solid mathematical formulation for nonlinear symmetry discovery. Leveraging Lie group theory, the proposed method is technically sound and principled. However, there are two potential concerns regarding the proposed method: - The Jacobian $J_F$ is approximated by training a neural network surrogate to learn the PDE directly from observed data. Consequently, the accuracy of the identified symmetry depends on the performance of this neural surrogate. Therefore, the robustness of this approach is uncertain due to two primary sources of noise: (1) numerical derivative estimation based on the central difference scheme, and (2) approximation errors inherent in the neural network itself. - The symbolic library $\Theta$ should be predefined based on prior knowledge of the problem. The automatic selection of the Lie algebra dimension and the explicit construction of the Lie algebra basis come at the cost of manually selecting the symbolic library. A library that is too limited may fail to capture accurate symmetries, while an excessively large library increases both the search space and computational complexity. It can be particularly problematic when there is no information about the governing equation at all. 
Therefore, I believe the authors should discuss these aspects and conduct ablation studies to assess the robustness of the proposed method, particularly its sensitivity to noise and dependence on library selection. For evaluation, the authors employ the Grassmann distance to measure similarity between the ground truth and the discovered Lie algebra subspace. While this metric is theoretically principled, its use inherently limits comparison with other nonlinear symmetry discovery methods that do not explicitly produce a subspace representation. Theoretical Claims: This paper is not a theoretical study but primarily presents an algorithm for discovering nonlinear symmetry groups. The algorithm is constructed based on two theorems: Theorem 4.1 and Theorem 4.2. While Theorem 4.2 is drawn from a textbook, the proof of Theorem 4.1 is provided in the paper. I briefly reviewed the proof of Theorem 4.1 and found no significant flaws. Experimental Designs Or Analyses: The authors benchmark the proposed method using a top quark dataset and three PDEs, comparing it against the linear symmetry discovery model (LieGAN). Additionally, the nonlinear symmetry extracted by the proposed method is utilized to enhance the accuracy of the neural Fourier operator (FNO) by augmenting its training data with Lie point symmetries. The results demonstrate that the discovered symmetry can significantly improve the performance of FNO, achieving accuracy nearly comparable to augmentation with ground-truth symmetries. This finding is highly promising. However, there is a concern that although the proposed method and benchmarks emphasize nonlinear symmetry, the authors do not provide comparisons with other nonlinear approaches, such as LaLiGAN and the method by Ko et al. The authors state that this omission is due to their choice of Grassmann distance as the evaluation metric, but this reasoning alone does not fully justify the absence of comparisons with these established nonlinear methods. 
If possible, I strongly recommend that the authors perform additional comparisons with these existing approaches using alternative, suitable metrics. Supplementary Material: The supplementary material provides a comprehensive introduction to Lie group theory, detailed proofs, experimental setups, and visualizations of the experimental results. I found this supplementary material highly beneficial for understanding the paper and supporting its completeness. Relation To Broader Scientific Literature: This paper is highly relevant for identifying symmetries in PDEs, a fundamental task across various scientific fields. Essential References Not Discussed: The authors provide a comprehensive list of references and discuss related work appropriately. Other Strengths And Weaknesses: The paper is well-motivated, and the proposed method is technically sound. Furthermore, it is clearly written and easy to follow, particularly given the depth of the mathematical content. However, I believe the authors should address the concerns I raised in the Method and Experimental Design sections for further clarity. Other Comments Or Suggestions: A small typo in line 199: $\mathrm{diag}[\Theta(x, u)^{T}, \dots, \Theta(x, u)^{T}] \in \mathbb{R}^{((p+q)\cdot r) \times ((p+q)\cdot r)}$. Questions For Authors: Please refer to the Method and Experimental Designs sections for major questions. Additionally, a minor question to consider: Consider a real-world challenging scenario in which the governing PDE is entirely unknown, introducing uncertainties in both Jacobian estimation and library selection. Under such circumstances, is there an auxiliary metric or criterion available to evaluate the physical plausibility of the estimated symmetry? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your careful reading and valuable feedback! Below we will address each of your concerns point by point. **Methods And Evaluation Criteria** (1) See the Methods And Evaluation Criteria section of the Rebuttal to Reviewer 1vcx. (2) We conduct ablation experiments using a limited $\Theta$ (including only linear and constant terms) and a large $\Theta$ (additionally including sine and exponential terms) to evaluate the robustness of LieNLSD with respect to the choice of function library. The quantitative comparison results (Grassmann distance) with LieGAN are presented below. |Dataset|LieNLSD (limited $\Theta$)|LieNLSD|LieNLSD (large $\Theta$)|LieGAN| |-|-|-|-|-| |Burgers' equation|$(1.30\pm0.23)\times10^{-2}$|$(1.26\pm0.20)\times10^{-2}$|$(1.06\pm0.14)\times10^{-2}$|$1.58\pm0.05$| |Wave equation|$(1.41\pm0.01)\times10^{-2}$|$(1.40\pm0.01)\times10^{-2}$|$(1.43\pm0.03)\times10^{-2}$|$2.36\pm0.15$| |Schrodinger equation|$(3.13\pm0.91)\times10^{-2}$|$(8.62\pm1.31)\times10^{-2}$|$(8.74\pm0.65)\times10^{-2}$|$2.22\pm0.05$| |Heat equation|$(3.08\pm1.29)\times10^{-3}$|$(7.39\pm1.04)\times10^{-4}$|$(1.31\pm0.51)\times10^{-3}$|$2.59\pm0.04$| |KdV equation|$(9.66\pm3.39)\times10^{-3}$|$(5.24\pm1.55)\times10^{-3}$|$(1.12\pm0.41)\times10^{-2}$|$1.56\pm0.00$| |Reaction-diffusion system|$(8.23\pm1.13)\times10^{-2}$|$(1.24\pm0.18)\times10^{-1}$|$(1.24\pm0.13)\times10^{-1}$|$1.11\pm0.11$| For large $\Theta$, although we misspecify several terms, LieNLSD can still obtain accurate results within a large search space. For limited $\Theta$, LieNLSD may indeed miss some generators, but for all generators whose terms are fully included in $\Theta$, it can still accurately discover them (as shown in the table above, LieNLSD with $\Theta$ containing only linear and constant terms can correctly identify all linear symmetries). 
**Experimental Designs Or Analyses** Although the Grassmann distance is not applicable to the method of implicit symmetry discovery, we additionally compare the long rollout test NMSE of FNO with LPSDA based on LieNLSD and FNO with LPSDA based on Ko et al. [1]. For Ko et al., while they cannot obtain explicit expressions for the infinitesimal generators, feeding a given point into their trained MLP still yields the specific values of the infinitesimal generators at that point, thereby guiding data augmentation. Note that for PDEs, the types of symmetries found by LaLiGAN and our method differ (details can be found in the Claims and Evidence section of the Rebuttal to Reviewer 1vcx), making it impossible to establish a unified metric. The experimental results are presented below. Ko et al. can improve the accuracy of FNO, but not as significantly as our method. In addition to quantitative advantages, Ko et al. require the explicit expression of the PDE to compute the validity score (see Section 4.2 of their original paper), whereas our LieNLSD does not rely on this prior knowledge. |Dataset|FNO+$\emptyset$|FNO+LieNLSD|FNO+GT|FNO+Ko et al.| |-|-|-|-|-| |Burgers' equation|$(2.33\pm1.07)\times10^{-4}$|$(1.80\pm0.56)\times10^{-4}$|$\mathbf{(1.75\pm0.37)\times10^{-4}}$|$(1.83\pm0.25)\times10^{-4}$| |Heat equation|$(1.07\pm0.10)\times10^{-1}$|$\mathbf{(5.99\pm0.04)\times10^{-2}}$|$(6.01\pm0.20)\times10^{-2}$|$(1.04\pm0.14)\times10^{-1}$| |KdV equation|$(1.74\pm0.10)\times10^{-1}$|$\mathbf{(1.42\pm0.16)\times10^{-1}}$|$(1.47\pm0.02)\times10^{-1}$|$(1.58\pm0.14)\times10^{-1}$| **Other Comments Or Suggestions** $\Theta(x,u)\in\mathbb{R}^{r\times 1}$, then $\Theta(x,u)^\top\in\mathbb{R}^{1\times r}$. By repeating it $(p+q)$ times and arranging them diagonally, we obtain $diag[\Theta(x,u)^\top,\dots,\Theta(x,u)^\top]\in \mathbb{R}^{(p+q)\times((p+q) \cdot r)}$. We have double-checked that it is not a typo, but thank you for your reminder! 
**Questions For Authors** As mentioned in the Experimental Designs Or Analyses section of the Rebuttal, the accuracy of FNO with LPSDA can serve as one metric. In fact, the accuracy of many recent symmetry-informed neural PDE solvers (detailed in the Relation To Broader Scientific Literature section of the Rebuttal to Reviewer LaJV) can also be used to measure the quality of symmetry. Essentially, these use the performance on downstream tasks after applying symmetry as a metric. Additionally, following the approach of LieGAN and LaLiGAN, a discriminator can be employed to quantify the distributional differences between the original and transformed data. These metrics do require some training overhead, but we believe this is nearly unavoidable in scenarios where the PDE is entirely unknown, as evaluation can only be conducted based on the dataset. **Reference** [1] Ko, Gyeonghoon, Hyunsu Kim, and Juho Lee. "Learning Infinitesimal Generators of Continuous Symmetries from Data." arXiv preprint arXiv:2410.21853 (2024). --- Rebuttal Comment 1.1: Comment: Thank you very much for the authors’ detailed responses and the additional experiments. The robustness tests, the ablation study on the basis set size, and the downstream performance comparison with Ko et al. notably enhance the practical significance of the paper. I am happy to raise my review score accordingly. --- Reply to Comment 1.1.1: Comment: We are delighted that our response has addressed your concerns. We sincerely appreciate your valuable suggestions and further recognition of our work!
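As background for the Grassmann distance used as the evaluation metric throughout this thread: for one-dimensional subspaces it reduces to the principal angle between spanning vectors. A minimal pure-Python sketch (illustrative only; the function name and test vectors are ours, not the paper's evaluation code):

```python
import math

def grassmann_distance_1d(a, b):
    """Grassmann distance between the lines spanned by vectors a and b:
    the principal angle between them (sign and scale of the vectors are
    irrelevant, since a line is unchanged by rescaling)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    cos_angle = min(1.0, abs(dot) / (norm_a * norm_b))  # clip rounding error
    return math.acos(cos_angle)

# The same line gives distance 0; orthogonal lines give pi/2.
assert grassmann_distance_1d((1, 0), (2, 0)) == 0.0
assert abs(grassmann_distance_1d((1, 0), (0, 3)) - math.pi / 2) < 1e-12
```

For k-dimensional subspaces the same idea generalizes: orthonormalize both bases, take the singular values of their cross-product to get principal angles, and return the square root of the sum of squared angles.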
A Generic Family of Graphical Models: Diversity, Efficiency, and Heterogeneity
Accept (poster)
Summary: To infer high dimensional graphical models with a variety of data types, this paper introduces a marginally recoverable parametric family. The family is very flexible, and it includes Gaussian, PLN, and latent Gaussian copula for count and binary data. Within this family, the joint distribution can be characterized by all pairwise marginal distributions, and hence the dimensionality is reduced significantly. To further capture heterogeneous structures, they extend the method to mixture models. The proposed methods are evaluated via simulations and single-cell RNA sequencing data. Claims And Evidence: Yes, the claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the authors study their methods by both simulations and single-cell RNA data. They evaluate the model performance using diverse criteria for different data types, e.g., AUPR for count data and 3 estimation methods for binary data. All these methods and criteria are standard and valid. Theoretical Claims: No, I didn't, since these are well-established statistical methods and I believe the authors did it correctly. Experimental Designs Or Analyses: Yes. All the simulations and the application to single-cell RNA data are solid. For the application to single-cell RNA data, they further provide a biological perspective with supporting biological literature, which makes the analyses more meaningful. Supplementary Material: No Relation To Broader Scientific Literature: The family defined in this paper is general, and it contains many widely-used distributions. Therefore, we can apply the technique proposed here to models (not limited to graphical models) within this family, to deal with the high-dimensionality issue. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strength: the writing is clear and easy to follow. The application part provides biological insights. Weakness: The major contribution of this paper is proposing a new marginally recoverable family. 
The author(s) write the family in a very general way, but the examples are more or less all Gaussian related. Therefore, there are some concerns about novelty, as the defined family may be an extension of the Gaussian, for which existing results can be used directly. For the mixture part, overlaying the MMLE with EM is standard. Other Comments Or Suggestions: Figure 1 is blank. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your time in reviewing our paper and your insightful comments. In the following response, we would like to address your major concern and provide additional clarification. For weakness: > The author(s) write the family in a very general way, but the examples are more or less all Gaussian related. Therefore, there are some concerns about novelty, as the defined family may be an extension of the Gaussian, for which existing results can be used directly. Thanks for your comment. First, since the focus of this work is primarily on graphical network inference, with the application scenario being gene regulatory network inference using biological genomics data, the examples provided here are mainly based on Gaussian distribution models in hierarchical settings. However, the new concept introduced—the marginally recoverable family—includes distributions beyond just Gaussian-related distributions. For instance, the multinomial distribution belongs to this family but is not Gaussian-related. Let $\mathbf{X}=\left(X_{1}, \ldots, X_{p}\right)^\top \sim Multinomial(n,\boldsymbol{\theta})$, where $\boldsymbol{\theta}=(\theta_1,\dots,\theta_p)^\top$. Then, for $1 \leq j < k \leq p $, we have $X_j \sim Multinomial(n, \theta_{j}) $ and $\left(X_j , X_k \right)^\top \sim Multinomial(n, (\theta_j, \theta_k)^\top) $. This demonstrates that the multinomial distribution satisfies the definition of the family. Secondly, our defined family is not an extension of the Gaussian distribution that can be addressed by existing methods. Even when the latent layer is modeled using a Gaussian distribution, distributions with hierarchical structures still pose significant challenges for parameter estimation. For example, as mentioned in the paper, there are few methods developed for estimating the precision matrix of the MPLN model in high-dimensional settings. 
Compared to Gaussian mixture models, the MPLN model presents a greater challenge during the parameter estimation process using the EM algorithm, as its expected complete log-likelihood involves high-dimensional integrals that are difficult to compute. Therefore, for such models, we cannot directly apply existing methods or results. For other comments or suggestions: > Figure 1 is blank. Thank you for your comment. We appreciate your attention to Figure 1. We would like to clarify that Figure 1 is not blank. It is possible that the white areas in the figure were confused with the background, which might have led to the impression that the figure is empty. To avoid this misunderstanding, we will add borders around each subplot in Figure 1 in the final version. We would like to clarify that in subplot (a), the white areas represent positions where the true network does not have edges. In subplots (b) to (e), the white areas at position $(i,j)$ indicate that the frequency of false positives (i.e., incorrectly predicting an edge where there is none) for that position is zero. We hope this explanation clarifies the meaning of the white areas.
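The multinomial example in the rebuttal above can also be checked by simulation: the $j$-th component of a $Multinomial(n,\boldsymbol{\theta})$ vector is marginally $Binomial(n,\theta_j)$, so the empirical component means recover the marginal parameters $n\theta_j$. A minimal sketch, with a hand-rolled sampler and parameter values of our own choosing (illustrative only):

```python
import random
from collections import Counter

def multinomial_sample(n, theta, rng):
    """Draw one Multinomial(n, theta) sample as a tuple of category counts."""
    cats = rng.choices(range(len(theta)), weights=theta, k=n)
    counts = Counter(cats)
    return tuple(counts.get(j, 0) for j in range(len(theta)))

# Marginal recoverability in action: component j is Binomial(n, theta_j),
# so its empirical mean should be close to n * theta_j.
rng = random.Random(0)
n, theta = 10, (0.2, 0.3, 0.5)
draws = [multinomial_sample(n, theta, rng) for _ in range(20000)]
for j, t in enumerate(theta):
    mean_j = sum(d[j] for d in draws) / len(draws)
    assert abs(mean_j - n * t) < 0.1  # matches the Binomial(n, theta_j) mean
```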
Summary: The paper introduces a novel family of graphical models, termed the marginally recoverable parametric family, to address diversity, efficiency, and heterogeneity in high-dimensional network inference. The paper proposes a Maximum Marginal Likelihood Estimator (MMLE) for efficient parameter estimation and extends it to EM-MMLE to handle heterogeneous membership via mixture modelling. The approach is demonstrated through theoretical consistency, simulations, and application to real single-cell RNA sequencing data. Claims And Evidence: Most of the claims are theoretical statements and are verified via proofs. Methods And Evaluation Criteria: The proposed marginally recoverable parametric family is well-motivated and mathematically rigorous. It is evaluated via theoretical consistency, simulation benchmarks (e.g. AUPRC) and application to real RNA-seq datasets. Theoretical Claims: The consistency proofs for MMLE (Theorem 4.3) and EM-MMLE (Theorem 4.4) appear mathematically sound. The proofs rely on Conditions 4.1 and 4.2, which ensure proper convergence. Experimental Designs Or Analyses: The simulation studies are well-structured, covering different aspects of experiment settings and data heterogeneity. The real-world application (scRNA-seq) is relevant and well-motivated. Supplementary Material: I did not carefully check the supplementary material. Relation To Broader Scientific Literature: Graphical modelling for network analysis is widely used in scientific research, especially in genetics and epidemiology. Developing flexible and consistent methods is significant for exploratory analysis. Essential References Not Discussed: Not sure. Other Strengths And Weaknesses: **Strength**: - This paper introduces a flexible class of graphical models with rigorous mathematical justification. - The advantage of this class is to avoid high-dimensional integration via marginal likelihood estimation. 
- The experiments show the proposal outperforms existing methods on benchmark datasets. **Weakness**: - Marginal recoverability may not extend to distributions with higher moments. Other Comments Or Suggestions: - Notation $d$ is used for both dimension and Hellinger distance in Conditions 4.1 and 4.2. - It is very hard to interpret the sample size requirement in Theorem 4.4. Some discussion of sample complexity depending on the dimension / sparsity will be appreciated. - One motivation of this paper is mixed-type data analysis. It would be helpful to provide an example with both continuous and discrete variables with an explicit modelling formula. Questions For Authors: - How do we justify Conditions 4.1 and 4.2 intuitively? I can find parametric examples in Lines 275-278. - Is there any theoretical guarantee for the consistency of membership assignment in the mixture modelling? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your time in reviewing our paper and your positive feedback. Here we address your comments as follows. For weakness: >Marginal recoverability may not extend to distributions with higher moments. Thanks for your comment. To facilitate an intuitive network representation, we impose certain restrictions on the parameters of the family, typically related to the first and second moments. However, motivated by the core ideas behind this family, we can provide a more general definition that extends to distributions determined by higher moments. Specifically, we introduce the definition of a $d$-marginally recoverable family. Let $\mathbf{X}$ be a $p$-dimensional random variable, and let $d$ be a constant such that $d<p$. We say that $\mathbf{X}$ is $d$-marginally recoverable if any $d$-dimensional marginal distribution of $\mathbf{X}$ belongs to the same distribution family, and the parameters of all $d$-dimensional marginal distributions collectively characterize the parameters of the full distribution. We will introduce this general definition in the Discussion section of our revision and explore additional distribution examples that align with it in our future work. For suggestions: >1. Notation $d$ is used for both dimension and Hellinger distance in Condition 4.1 and 4.2. Thank you for your helpful suggestions. We have corrected the duplicate symbols in the manuscript by replacing $d$ (used for dimension in Conditions 4.1 and 4.2) with $k$. >2. It is very hard to interpret the sample size requirement in Theorem 4.4. Some discussion of sample complexity depending on the dimension / sparsity will be appreciated. For some $\eta >2$, the result in Theorem 4.4 holds under the sample size condition $n>O(\eta s^{1/2}\log p)$, where $n$ is the sample size, $s$ denotes the sparsity level of the network (i.e., the number of nonzero entries), and $p$ denotes the dimension. 
Since $\widehat{\Theta}_g$ recovers all the zeros and nonzeros in $\Theta_g$ with probability $1-p^{2-\eta}$, increasing $\eta$ raises this probability towards 1, but requires a larger sample size. >3. It would be helpful to provide an example with both continuous and discrete variables with explicit modeling formula. Here, we provide a specific example involving both continuous and discrete variables, with an explicit modeling formula. Assume that $\mathbf{X}=(\mathbf{X}_1,\mathbf{X}_2)$, where $\mathbf{X}_1$ is a random $a$-vector and $\mathbf{X}_2$ is a random $b$-vector. Suppose that there is a random vector $\mathbf{Z}_1=(Z_1,...,Z_a)$ such that $(\mathbf{Z}_1,\mathbf{X}_2)\sim N(\mu,\Sigma)$ and $X_j=I(Z_j>C_j)$ for all $j=1, ..., a$, where $C_j$ is a constant. We can easily verify that $\mathbf{X}$ is marginally recoverable. While this paper does not primarily focus on mixed-type data, our approach can be applied to parameter estimation as long as the variables meet the marginal recoverability definition. For questions: >1. How do we justify Condition 4.1 and 4.2 intuitively? Conditions 4.1 and 4.2 establish a relationship between the parameter distance and the distance between the marginal density functions. Intuitively, for any two parameters within the parameter space, if their Euclidean distance is small, the corresponding Hellinger distance between their marginal density functions will also be small. When verifying that specific distributions satisfy Conditions 4.1 and 4.2, we found that this is equivalent to verifying the following conditions: (A1) The Fisher information matrix is positive definite and continuous with respect to the parameters. (A2) Both the score function and the Hessian matrix are dominated by integrable functions. (A3) The parameter space is compact, and the model is identifiable. Among these, (A1)-(A3) are standard regularity conditions. However, expressing the original conditions as (A1)-(A3) would appear somewhat redundant. 
Conditions 4.1 and 4.2, on the other hand, provide a more intuitive link between the distances of densities and parameters, making the formulation clearer and more accessible. Therefore, we use these boundedness conditions to present the core conditions of our theorem instead of (A1)-(A3). >2. Is there any theoretical guarantee for the consistency of membership assignment in the mixture modeling? For the Gaussian mixture model, Chen and Zhang [1] derive minimax lower bounds for clustering with respect to the misclustering error rate. To the best of our knowledge, establishing theoretical guarantees for membership assignment remains a challenging problem. Due to time constraints, we have not addressed the consistency of membership assignment in general mixture models in this paper, but this could be explored in future work. Reference [1] Chen, X. and Zhang, A. Y. Achieving optimal clustering in Gaussian mixture models with anisotropic covariance structures. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. The response is helpful for my understanding of the paper. I would keep my score unchanged. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and support. We will add the rebuttal contents to the main paper in the final version following your valuable suggestions.
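The mixed continuous/binary construction described in this rebuttal (binary components obtained by thresholding a latent Gaussian, $X_j=I(Z_j>C_j)$) can be simulated directly in two dimensions. A minimal sketch, with hypothetical values for the latent correlation and threshold (illustrative only, not the authors' code):

```python
import math
import random

def sample_mixed(rho, c, rng):
    """One draw from a 2-d latent Gaussian (Z1, Z2) with correlation rho:
    Z1 is thresholded to a binary X1 = I(Z1 > c), Z2 is observed directly."""
    g1, g2 = rng.gauss(0, 1), rng.gauss(0, 1)
    z1 = g1
    z2 = rho * g1 + math.sqrt(1 - rho * rho) * g2  # Cholesky by hand in 2-d
    return int(z1 > c), z2

rng = random.Random(1)
rho, c = 0.6, 0.5           # hypothetical parameter values
draws = [sample_mixed(rho, c, rng) for _ in range(50000)]
p_hat = sum(x for x, _ in draws) / len(draws)
p_true = 0.5 * (1.0 - math.erf(c / math.sqrt(2)))  # P(Z1 > c) = 1 - Phi(c)
assert abs(p_hat - p_true) < 0.015
```

The marginal of the binary component depends only on the one-dimensional parameters ($c$ here), and the binary/continuous dependence only on the pairwise ones, which is the marginal recoverability property being exercised.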
Summary: This article introduces a new class of graphical models, referred to as the marginally recoverable parametric family, which aims to tackle challenges related to efficiency and heterogeneity in high-dimensional network inference tasks. The proposed family is rather flexible, with the joint distribution characterized by pairwise marginal distributions. To facilitate efficient parameter estimation, the paper presents a Maximum Marginal Likelihood Estimator, which is further extended to handle heterogeneous scenarios through mixture modeling. The method’s effectiveness is supported by theoretical consistency, simulations, and applications to real-world sequencing data. Claims And Evidence: The theoretical results are well-supported, though not particularly surprising, as they follow directly from well-established theorems. The evidence provided is both clear and convincing. While the marginally recoverable parametric family introduced is intriguing, I remain somewhat skeptical that it encompasses distributions beyond elliptical copulas and simple hierarchical compositions that do not alter the dependency structure. Methods And Evaluation Criteria: On the experimental side, the model's performance is assessed through various simulation benchmarks and large-scale real-world applications. All evaluation procedures appear to be appropriate and well-executed. Theoretical Claims: The consistency proofs are mathematically sound and based on widely recognized statistical methods. However, I believe that neither Theorem 4.2 nor Theorem 4.3 should be labeled as "Theorems". Instead, I would suggest calling them "Lemmas," as they directly follow from other established theorems. Experimental Designs Or Analyses: Experimental results show that the proposed method performs well on benchmark datasets. The simulation studies are carefully designed, addressing diverse experimental settings and data heterogeneity. 
The real-world applications are both relevant and well-motivated, with analyses supported by biological literature. Supplementary Material: Yes, all proofs. Relation To Broader Scientific Literature: The family introduced in this paper is versatile, making the proposed approach applicable to several problems related to graphical models. However, it is also relatively limited in its ability to model tail dependencies, which might be problematic in some applications. Essential References Not Discussed: No. Other Strengths And Weaknesses: The primary contribution of the paper lies in the introduction of the marginally recoverable family. While the family is presented in a general form, it appears to only include elliptical copula distributions (and simple compositions thereof). This raises some concerns about the novelty of the approach, as elliptical copula families are already well-established in the literature. Additionally, elliptical copulas are quite limited in their expressive power, particularly when it comes to modeling tail dependencies, which either vanish asymptotically or are entirely absent in the special case of the Gaussian copula. In many applications of copulas, this limitation has led to the development of more complex copula families. Therefore, the claim on page 2 that "The marginally recoverable parametric family includes many of the most common distributions" might be somewhat misleading. Other Comments Or Suggestions: No. Questions For Authors: Please comment on my concerns regarding the "real" flexibility of the model beyond elliptical copulas. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your time in reviewing our paper and your insightful comments. In the following response, we would like to address your major concern and provide additional clarification. For theoretical claims: > I believe that neither Theorem 4.2 nor Theorem 4.3 should be labeled as "Theorems". Instead, I would suggest calling them "Lemmas," as they directly follow from other established theorems. Thank you for your valuable suggestion. Since these two theorems are derived from established results, we acknowledge that referring to them as "Lemmas" is more appropriate. Nevertheless, we would like to emphasize that these results establish that once a connection between the parameter distance and the density distance is made (as specified in Conditions 4.1 and 4.2), we can obtain tail probability control for high-dimensional parameters. We believe that this provides new insights and a novel proof strategy. Furthermore, verifying that a specific model satisfies the necessary conditions (Conditions 4.1 and 4.2) is nontrivial, especially for complex models such as MPLN. In fact, a substantial portion of the appendix (see Appendix A.3) is dedicated to rigorously demonstrating that the MPLN model satisfies these conditions. For weaknesses: > While the family is presented in a general form, it appears to only include elliptical copula distributions (and simple compositions thereof). This raises some concerns about the novelty of the approach, as elliptical copula families are already well-established in the literature. Additionally, elliptical copulas are quite limited in their expressive power, particularly when it comes to modeling tail dependencies, which either vanish asymptotically or are entirely absent in the special case of the Gaussian copula. In many applications of copulas, this limitation has led to the development of more complex copula families. 
Therefore, the claim on page 2 that "The marginally recoverable parametric family includes many of the most common distributions" might be somewhat misleading. Thanks for your comment. In fact, the family includes distributions beyond elliptical copula distributions (and simple compositions thereof). For instance, the multinomial distribution belongs to the family but does not fall within the class of elliptical copula distributions. Specifically, let $\mathbf{X}=\left(X_{1}, \ldots, X_{p}\right)^\top \sim Multinomial(n,\boldsymbol{\theta})$, where $\boldsymbol{\theta}=(\theta_1,\dots,\theta_p)^\top$. Then, for $1 \leq j < k \leq p $, we have $X_j \sim Multinomial(n, \theta_{j}) $ and $\left(X_j, X_k\right)^\top \sim Multinomial(n, (\theta_j,\theta_k)^\top ) $. This demonstrates that the multinomial distribution satisfies the definition of the family. Furthermore, the concept of the marginally recoverable family can be extended to a more general framework. We introduce the definition of a $d$-marginally recoverable family as follows: Let $\mathbf{X}$ be a $p$-dimensional random variable, and let $d$ be a constant such that $d<p$. We say that $\mathbf{X}$ is $d$-marginally recoverable if any $d$-dimensional marginal distribution of $\mathbf{X}$ belongs to the same distribution family, and the parameters of all $d$-dimensional marginal distributions collectively characterize the parameters of the full distribution. In this way, the marginally recoverable family becomes more flexible. Under this definition, the multinomial distribution mentioned earlier is actually 1-marginally recoverable, as only the parameters of the 1-dimensional marginal distributions are required to recover all the parameters of the full distribution. We will introduce this general definition in the Discussion section of a future revision and explore additional examples that align with this broader definition in our future work. 
For modeling tail dependencies, we acknowledge that elliptical copulas do have limitations in this regard. That said, this concern is primarily relevant in financial applications. Our work, on the other hand, focuses on network inference and its applications to biological genomics data, where the elliptical copula family is commonly used, and tail dependencies are generally not a central issue. Exploring tail dependencies within our framework is an interesting direction for future research. The claim on page 2 could be revised to: "The marginally recoverable parametric family includes distributions commonly used in the field of bioinformatics," which would be a more precise statement. We sincerely appreciate your constructive suggestion.
Summary: The paper proposes a new, unified class of graphical models—called the marginally recoverable parametric family—designed to handle diverse data types (e.g., Gaussian, Poisson log‑normal, and latent Gaussian copula models) and heterogeneous structures via mixture modeling. The authors introduce an efficient maximum marginal likelihood estimator (MMLE) that avoids full high‑dimensional integration by leveraging low‑dimensional marginal computations, and extend it to EM‑MMLE for mixture contexts. The work is supported by extensive theory, simulations, and an application to single‑cell RNA‑seq gene regulatory network inference. Claims And Evidence: ### Main Claims: - The proposed framework unifies and generalizes a range of graphical models while reducing computational complexity. - MMLE (and its EM extension) achieves consistent parameter estimation and sign recovery with nearly optimal convergence rates. ### Major Concern: The theoretical guarantees hinge on two stringent conditions (the lower‑boundedness and upper‑boundedness conditions in Conditions 4.1 and 4.2). Their restrictive nature may limit applicability in practical scenarios where the data do not closely conform to these idealized requirements. Methods And Evaluation Criteria: The estimation strategy first maximizes one‑dimensional marginal likelihoods to estimate location and scale parameters, then refines covariance estimation by using two‑dimensional marginal likelihoods. This decomposition reduces the burden of high‑dimensional integration. The extension to mixture models is handled via an EM algorithm updating the MMLE. Performance is measured using standard network inference metrics (e.g., AUPR, Frobenius norm error) and clustering accuracy (Adjusted Rand Index). The study covers various simulation settings—including different network structures, dimensions, mixing levels, and zero proportions—to assess robustness. 
Theoretical Claims: The paper provides detailed proofs—offered in the supplementary material—for key results such as convergence rates (Theorem 4.3) and sign consistency (Theorem 4.4). Concern: The reliance on Conditions 4.1 and 4.2 is a potential weakness. These assumptions, pivotal to the derived guarantees, appear rather strong and may not hold in many real‑world applications. Experimental Designs Or Analyses: ### Simulation Studies: Simulations are performed on mixed count and binary data generated from realistic distributions (e.g., MPLN for count data and latent Gaussian copula for binary data). ### Real‑Data Application: The method is applied to scRNA‑seq data to infer gene regulatory networks, compared against silver standards from public databases. Supplementary Material: I have not reviewed the supplementary. Relation To Broader Scientific Literature: The work builds on several key strands of graphical model research—extending classical Gaussian approaches to handle count and binary data—and connects with recent work on latent Gaussian copula models. Essential References Not Discussed: N/A Other Strengths And Weaknesses: ### Strengths: - Presents a versatile, unified framework that handles multiple data types and heterogeneous populations. - Develops an innovative estimation strategy (MMLE/EM‑MMLE) that significantly reduces computational complexity. - Offers comprehensive theory and extensive simulations alongside a compelling real‑data application in genomics. ### Weaknesses: - The two core conditions (Conditions 4.1 and 4.2) are quite strong, potentially restricting the method’s applicability. - More experiments showing robustness under deviations from the assumed conditions would be valuable. Other Comments Or Suggestions: N/A Questions For Authors: See my previous comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your time in reviewing our paper and your insightful comments. In the following response, we would like to address your major concern and provide additional clarification. For your major concern in the claims and evidence section, as well as the similar concern raised in the theoretical claims and weaknesses sections:

> The theoretical guarantees hinge on two stringent conditions (the lower-boundedness and upper-boundedness conditions in Conditions 4.1 and 4.2). Their restrictive nature may limit applicability in practical scenarios where the data do not closely conform to these idealized requirements.

Thank you for your comment. We appreciate your concern about the potential restrictiveness of Conditions 4.1 and 4.2 and their applicability to real-world data. We would like to clarify that Conditions 4.1 and 4.2 are, in fact, relatively mild. As demonstrated in Appendix A.3.4 and Appendix A.4, both the MPLN distribution and the latent Gaussian copula model for binary data satisfy these conditions. More specifically, our proof establishes that a distribution satisfies Conditions 4.1 and 4.2 as long as the following assumptions hold:
(A1) The Fisher information matrix is positive definite and continuous with respect to the parameters.
(A2) Both the score function and the Hessian matrix are dominated by integrable functions.
(A3) The parameter space is compact, and the model is identifiable.
Among these, (A1)-(A3) are standard regularity conditions, which are widely used to ensure the asymptotic efficiency of the maximum likelihood estimator, as shown in Shao's book [1].

For the other weakness:

> More experiments showing robustness under deviations from the assumed conditions would be valuable.

Thank you for your comment. Due to rebuttal time constraints and the fact that we have not identified any marginally recoverable distributions that do not satisfy the conditions, we have not conducted additional experiments to demonstrate robustness under deviations from the assumed conditions at this time.

Reference

[1] Shao, J. Mathematical statistics. Springer Science & Business Media, 2008.
Aligning Spoken Dialogue Models from User Interactions
Accept (poster)
Summary: This paper introduces an alignment framework for full-duplex spoken dialogue models (like Moshi). The authors construct preference data from real user interaction data and then fine-tune a spoken dialogue model (Moshi) using direct preference optimization (DPO). Preference pairs are constructed by using a model to first find a wrong answer and then revise it into a correct answer. The proposed alignment method can improve the model's performance on question answering tasks and model safety. The authors also conduct human evaluation to assess the model's coherence, engagement and helpfulness.
Claims And Evidence: The proposed method can improve the model's performance on question answering tasks and model safety. The authors conduct comprehensive experiments on their aligned models, including benchmark results and human evaluations. The authors also conduct detailed ablation studies to explore the effectiveness of different implementation settings.
Methods And Evaluation Criteria:
1. The Methods section introduces too much background knowledge, such as the details of Moshi and DPO.
2. The proposed method has limited innovation, with its main contribution focused on how to construct preference data pairs.
3. Although the paper is about aligning full-duplex spoken dialogue models, the evaluation mainly focuses on question answering, safety and multi-turn ability (like consistency and engagement), ignoring the features of full-duplex models.
Theoretical Claims: The article contains relatively little theoretical proof.
Experimental Designs Or Analyses: Overall, the experimental design is relatively solid. The writing in the experimental section is somewhat disorganized. For example, Section 4.1 mentions several data ratios, but I couldn't find the corresponding experiments in the results section.
Supplementary Material: There is no supplementary material.
Relation To Broader Scientific Literature: The paper primarily relies on Moshi as the base model and DPO as the preference learning algorithm. It specifically designs a method for constructing preference data and related human evaluation methods. Essential References Not Discussed: No. Other Strengths And Weaknesses: See Above Other Comments Or Suggestions: Line 246: No corresponding content in Appendix C. Equation 1 and Equation 3 are the same equation. Questions For Authors: In Equation (2), the context includes the speech generated by the model, while the numerator of the probability corresponds to the text portion of the output. Why is it designed this way? It seems somewhat counterintuitive, as the speech and text generated by the model should be parallel sequences without a sequential relationship. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are thankful for your time and careful feedback. We answer specific points below:

## Question

> In Equation (2), [...], a sequential relationship.

The model's text and audio must have some dependency pattern, otherwise the two streams would quickly have different content. Other patterns have been explored in the literature [1], such as first generating all text tokens, then all audio tokens; however, that limits the possibility to interrupt at any time (if the model gets interrupted in the middle of generating the audio, we would need to backtrack to erase the text that was never voiced), and increases latency. Co-generating both audio and text tokens in an interleaved and auto-regressive fashion has no such limitations. While we only penalize the text contribution to the likelihood of a trajectory in the DPO loss, the model is trained to be fed with both the text and audio parts of a trajectory, and would be largely out of domain if only fed with text.

## Concerns on Methods & Evaluation

> The Methods section introduces [...] Moshi and DPO.

We will move the content on Moshi to a different section, and reorganize the DPO part to focus more on our adaptation to the multistream setting. We hope this revision will make the methods section clearer, while keeping the paper self-contained.

> The proposed method has limited innovation [...] preference data pairs.

We believe that the key observations are more nuanced:
- To the best of our knowledge, this is the first work studying how to enhance full-duplex spoken dialogue models with large-scale interaction data.
- How to construct and mix preference pairs from multi-turn dialogues is itself not straightforward, especially given that spoken conversations present unique characteristics (e.g. overlaps and silences) and have a different distribution than written text (e.g., more concise and simple sentences), with a much higher number of turns. We believe that our designs and reported results can be interesting for the community and practitioners.
- Through extensive experiments, we show that using synthetic voice data can still significantly improve model performance. This offers several practical advantages, such as preserving privacy and mitigating the challenges of collecting extensive human speech corrections in real-world applications (much harder than for text).

> Although the paper is about aligning full-duplex [...] models.

We agree that full-duplex spoken dialogue models exhibit a wide range of interesting phenomena, from the semantic content to the temporal dynamics, at turn-level and dialogue-level. As assessing conversational dialogue models is a complex task [2], our work mainly aims to improve the quality of the content (in particular, the turn-level factual correctness, safety, the high-frequency timing problems, and the multi-turn dialogue-level quality) in the spoken dialogue, which we disentangled from other specific phenomena that require more fine-grained and targeted assessments. Evaluation of full-duplex models is also a very nascent area. We are aware of some very recent work on evaluating turn-taking behaviour [3, 4], but they are not fully open-sourced yet. We agree that the evaluation and improvement of temporal dynamics for conversational models is an important and very timely direction for future work.

## Experimental Designs & Other Comments

> Experimental Designs Or Analyses

Thank you for the feedback. We will refine the flow of the experimental section to better link with the results. For the data ratios, we will clearly update them in Table 2. In particular:
- Type-A corresponds to the 20% with content-only issues.
- Type B+C corresponds to the 57% with timing-only issues.
- Within the timing-only issues:
  - Type-B: 18% is the model cutting the user.
  - Type-C: 72% is the model not answering within appropriate time.
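As a quick sanity check on the breakdown above, the nested percentages can be combined into overall shares of flagged replies. This is just arithmetic on the reported numbers, under the assumption that the Type-B/Type-C figures are fractions of the timing-only subset:

```python
# Combining the nested issue-type percentages reported above.
# Assumption: Type-B and Type-C shares are fractions of the timing-only subset.
content_only = 0.20  # Type-A: content-only issues
timing_only = 0.57   # Type B+C: timing-only issues

type_b = timing_only * 0.18  # model cutting the user
type_c = timing_only * 0.72  # model not answering within appropriate time

print(f"Type-B overall share: {type_b:.1%}")  # about 10.3% of flagged replies
print(f"Type-C overall share: {type_c:.1%}")  # about 41.0% of flagged replies
```

So, under that reading, roughly half of all flagged replies are timing failures where the model fails to answer in time or cuts the user off.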
> Line 246: No corresponding content in Appendix C

The reference refers to Appendix C of the cited paper [5] for details on TTS (on page 56). We will revise to make the point clearer.

> Equation 1 and Equation 3

Thank you for the feedback. We wanted to emphasize it after changing the notation of the policy, but we agree that the equation is redundant and will remove it.

Please let us know if the answers addressed your questions, and if we can address further questions you might have. Thanks again!

[1] Nachmani, E., et al. "Spoken question answering and speech continuation using spectrogram-powered llm." ICLR, 2024.
[2] See, A., et al. "What makes a good conversation? how controllable attributes affect human judgments." ACL, 2019.
[3] Arora, S., et al. "Talking Turns: Benchmarking Audio Foundation Models on Turn-Taking Dynamics." ICLR, 2025.
[4] Lin, G., et al. "Full-Duplex-Bench: A Benchmark to Evaluate Full-duplex Spoken Dialogue Models on Turn-taking Capabilities." arXiv, 2025.
[5] Défossez, A., et al. "Moshi: a speech-text foundation model for real-time dialogue." arXiv, 2024.
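For context on the objective discussed in this exchange, the standard DPO loss (Rafailov et al., 2023) that the rebuttal refers to has the following form; the multistream adaptation the authors describe computes the policy log-likelihoods over the text tokens only, while still conditioning on both the text and audio streams. This is a sketch of the standard formulation, not the authors' exact implementation:

```latex
\mathcal{L}_{\mathrm{DPO}}(\theta)
  = -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
    \left[\log \sigma\!\left(
      \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)}
      - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}
    \right)\right]
```

where $y_w$ and $y_l$ are the preferred and dispreferred responses, $x$ is the (multistream) conversation context, $\pi_{\mathrm{ref}}$ is the frozen reference model, and, in the adaptation described here, each likelihood factorizes over the response's text tokens only.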
Summary: This paper integrates DPO and its variants into Moshi, a full-duplex voice interaction model, to enhance several aspects such as content and timing. To accomplish this, the authors collected a dataset and used it for training and evaluation. Notably, aside from concurrent research, all published studies on end-to-end pipeline-based voice interaction models have relied on supervised fine-tuning, underscoring the significance of this approach.

## update after rebuttal

I have considered the authors' rebuttal. The additional evidence and clarifications provided resolved the questions I previously raised. Consequently, my positive opinion on this submission is unchanged.

Claims And Evidence: Based on what is presented in the paper, there do not appear to be any overclaims or similar issues. However, there are some additional questions I would like to raise, which are listed below.
Methods And Evaluation Criteria: From what I can see, there are no issues in evaluating the aspects they aim to measure—such as safety, context, and timing.
Theoretical Claims: They directly applied formulas from the standard DPO methodology and its variants without any apparent problems.
Experimental Designs Or Analyses: The experimental design and analysis seem sound.
Supplementary Material: I checked the Appendix to see how their data was constructed.
Relation To Broader Scientific Literature: Aside from a single study released around the same time, this is the first instance I've seen of applying an RLHF-like approach to the content of a voice interaction model, which I believe is novel. Otherwise, there don't appear to be any further concerns.
Essential References Not Discussed: The authors have cited previous work appropriately.
Other Strengths And Weaknesses:
**Strengths**
- Their trial on data creation, the design of each evaluation, and their first-time incorporation of RLHF into the model are definite strengths.
- Additionally, their explanation of Moshi, the backbone of their model, is clear and easy to follow.
Any weaknesses have been noted in the "Questions For Authors."
Other Comments Or Suggestions: Listed in Questions For Authors.
Questions For Authors:
**1. Loss of context-based prosody**
- In the data generation process (synthesizing speech from the user's utterances and the LLM's answers), don't the prosody and naturalness that depend on the conversation context get lost? Doesn't this weaken Moshi's strength? My understanding is that Moshi synthesizes entire spoken dialogs for data augmentation, not a single utterance, which helps maintain the naturalness of spoken dialog between two speakers. If this paper is meant to demonstrate or verify the possibility of applying DPO, then I believe it should test one or two additional backbone models to demonstrate the method's general applicability. On the other hand, if the goal is to propose a new model, then there should be experiments or demos focusing on acoustic aspects—one of the backbone model's strengths and a key element of a voice interaction model.
**2. Effect of reinforcement learning on text-to-speech**
- If reinforcement learning is only applied to the text, does it affect text-to-speech performance at all? For example, does pronunciation accuracy get worse? I'm curious about any difference in pronunciation accuracy before and after DPO.
**3. Methodological novelty**
- Apart from applying the DPO family of methods to a voice interaction model, there does not seem to be much additional methodological novelty—though I recognize it may be the first approach of its kind.
- The motivation for collecting training data directly from people seems a bit weak. In the end, the data collected from humans is resynthesized through TTS (for privacy). Recently, many works build spoken dialog datasets using a TTS model. Is the main benefit of your approach that you can preserve the timing of human interventions in conversations? Could we simply use an LLM to create written dialog data (including the timing for interventions), and then apply TTS to produce a synthetic dataset?
**4. Plans for open release**
- I'm curious if there are plans to release the data and model.
**5. Lack of acoustic metrics and demos**
- I would like to see an audio demo, or at least some metrics that evaluate the acoustic aspects. I'm curious to see how the reinforcement learning process affects the acoustic quality.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough feedback, and for acknowledging that our work presents a novel contribution for the alignment of voice interaction models.

## Questions & Concerns

> Loss of context-based prosody

We agree that discarding the original audio (for privacy) loses some audio information, however:
- we want to clarify that the TTS is done for the user's audio, and for Moshi we used its own generated tokens (except for the generated preferred response, also synthesized);
- as the resynthesized speech retains the original timestamps, some aspects of the prosody are kept, such as the rhythm and speech rate;
- for the axes we focused on in the work, the original audio is not absolutely necessary. Also, the original Moshi paper [1] uses synthetic generation for the instruction stage.

Given that in a number of regions in the world, strong protection laws prevent or limit the recording or storage of private and biometric attributes such as a person's voice, our method presents important applications for improving speech systems in a privacy-preserving manner, while respecting local legislations. The model choice is because Moshi was the only available open-source full-duplex speech-to-speech model when we conducted the project, and for cost/feasibility reasons. We agree that more work is required to further verify the extension to other models, and we will discuss this point in the limitations. We note that the method for building the dataset mixes from multi-turn dialogues is model-independent.

> Effect of reinforcement learning on text-to-speech

We computed the WER (lower is better) between each model's text tokens and Whisper's transcriptions, before and after alignment, on our human evaluation dataset. For Moshi (the "matched" voice), after alignment, we observed a slight improvement. For M-Alt-Vox (the "mismatched" voice, i.e. a model with a voice different from our synthesized preference data), WER rose a bit, suggesting that adapting to a voice with different characteristics may have a mixed effect.

| Model | WER (%) |
|-|-|
| Moshi-Instruct | 5.70 |
| Moshi-Aligned | 4.89 |
| M-Alt-Vox-Instruct | 3.78 |
| M-Alt-Vox-Aligned | 5.88 |

Note that because of how the conversations are conducted, they are not constrained to be the same, so the numbers indicate an aggregated trend, which confirms that pronunciation accuracy is overall preserved during the alignment process.

> Methodological novelty

For the method, we want to clarify a few aspects:
- How to construct and mix preference pairs from multi-turn dialogues is not a straightforward adaptation of the textual approach, given that **spoken conversations present unique characteristics** (e.g. overlaps and silences, more concise, more turns). We believe that our designs and results can be interesting for the community and practitioners.
- Our approach shows that using synthetic voices can be effective, offering practical advantages such as preserving privacy and mitigating the challenge of scarcity for speech (feedback) data.

There are several motivations for leveraging live interaction data, instead of directly synthesizing from LLM-written dialog scripts:
- While synthetic data could approximate "typical" timing, **real interactions yield more diverse and realistic phenomena**: e.g. mid-sentence clarifications and stoppings, hesitations, silences when thinking, abrupt topic shifts, etc. In practice, we observed that LLMs struggle at providing or assessing realistic timings, e.g., for overlapping speech. By preserving this information, we could keep some aspects of the prosody, speech rate and rhythm.
- We observe that current LLMs are better at generating written text than content resembling spoken conversations, which is more concise, with more turns, potentially overlapping. This echoes observations in recent work [2]. We observe it's possible to prompt the model to imitate this to some extent, but it's harder to generate long, coherent, diverse and speech-like dialogues.
- Crafting discussion topics and timings can induce biases from the researchers and/or the LLM used. Also, available LLMs will usually refuse to generate synthetic training scripts that would cover adversarial or unsafe topics.

We note that the original Moshi model was trained with LLM-written dialog data, synthesized to speech [1]; however, our organically collected usage data shows a number of failure points. We show that leveraging this data for alignment provides additional gains over the original synthetic instruct approach.

> Release plan

Because of privacy concerns, we don't plan to release the materials at this point.

> Acoustic aspects

We provide demo samples [in an anonymous link](https://tinyurl.com/3p5tu5za) for the acoustic aspects.

Please let us know if we addressed your questions. Thanks!

[1] Défossez, A., et al. "Moshi: a speech-text foundation model for real-time dialogue." arXiv, 2024.
[2] Cho, H., et al. "Speechworthy instruction-tuned language models." EMNLP, 2024.
Summary: This work introduces a framework for aligning real-time, full-duplex spoken dialogue systems using users' spoken interactions (building on the Moshi system from Kyutai). Unlike existing preference learning methods focused on text-based models, this approach addresses the complexities of dialog speech, such as interruptions. The authors create a large dataset of 150,000+ preference pairs from multi-turn speech conversations, annotated with AI feedback. Using offline alignment methods (DPO & co, adapted to their multimodal case), they fine-tune an autoregressive speech-to-speech model (Moshi, in practice). Experiments show that their approach improves factual accuracy, safety, and alignment of spoken dialogue systems.
Claims And Evidence: Yes, there are.
Methods And Evaluation Criteria: Sure, they make sense.
Theoretical Claims: This is mainly an experimental paper, no strong theoretical claims here.
Experimental Designs Or Analyses: Yes, I checked and they sound very reasonable to me.
Supplementary Material: No, I did not go over the supplementary material, to be honest.
Relation To Broader Scientific Literature: The paper is very well positioned relative to the literature.
Essential References Not Discussed: No.
Other Strengths And Weaknesses:
Reasons to accept:
- This work might be the first to enhance speech-to-speech dialogue models using large-scale live interaction data.
- The dataset building methodology also has some value: the authors build on the Moshi spoken language model, focusing on generating preference data from raw dialogues. They Whisper-transcribed audio interactions and used LLMs to annotate data, flagging problematic replies (20% content-related, 57% timing-related, 23% both).
Reasons to reject:
- Potential reasons for rejection could include a lack of clarity on user participation and data collection. Specifically, it is unclear who the users were—whether they were general Moshi users or specifically recruited participants—and whether they knew their conversations were being recorded (I will however not flag the paper for Ethical Review, but would like to hear from the authors about that during the rebuttal). Additionally, questions remain about the distribution of the preference dataset (283,740 pairs with overlapping contexts) and whether it will be made publicly available.
Other Comments Or Suggestions: No.
Questions For Authors:
- Section 3.3: you clearly explain how you annotate problematic Moshi replies, but how do you derive preference data from this? Specifically, from a given context, you identify problematic replies—but where does the preferred (better) response come from? This part remains unclear to me.
- Tab. 1: incorporating audio tokens for DPO does not really help alignment; this could be commented on/discussed more.
- Section 5, about RQ #3, "As it is expensive to acquire new preference data, can we leverage data from off-policy model to optimize models with different voices?": Why would this be problematic? Especially since incorporating audio tokens for DPO does not really help alignment, there is something I don't quite get here; please explain more why using a voice with significantly different characteristics may cause transfer to be problematic.
Ethical Review Concerns: N/A
Code Of Conduct: Affirmed.
Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for the positive assessment and appreciate the insightful and careful feedback. We respond to the points below.

## Questions

> section 3.3: [...] how do you derive preference data from this? [...] unclear to me.

Thank you for the question. For **content-related** problems: we feed the conversation history context, the problematic reply, the LLM judge's feedback and the instructions for proposing a response into Mistral Large 2, which generates the improved reply that becomes the preferred response. The conversation history context starts from the beginning of the conversation, up to and including the user's last response before the model's problematic reply. The LLM judge's feedback includes identified issues along the axes specified in Subsection 3.3, paragraph "Problematic reply identification". For **timing-related** issues: if the problem is the model interrupting the user, then the preferred response will be given after the user finishes their utterance. If the semantic content of the initial response is adequate, we keep the same response; otherwise, the model needs to propose a response. If the problem is the model not answering the user, we similarly generate a proper reply, which we put directly after the user's request. We will add more clarifications on this part in the paper.

> tab.1. incorporating audio tokens for DPO does not really help [...] discussed

The preferences we curated mainly concern semantic aspects and the model's temporal behaviour (e.g., not answering the user). The actual acoustic contents are not labeled as "preferred/dispreferred". We conjecture that forcing the alignment objective to consider the full audio token probability could introduce noise, harming the model's performance. Focusing on text tokens alone was more stable.

> section 5. About RQ #3 [...] Why would this be problematic? Especially if incorporating audio tokens for DPO does not really help alignment [...] to be problematic

The preference data uses TTS to resynthesize the user's voice, and for the model side, we use the audio tokens of Moshi itself (except for the preferred response: because we have no existing audio tokens there, we also resynthesized it using the model's voice). We use this same dataset and didn't resynthesize the audio data when aligning M-Alt-Vox. During the alignment stage, even if the final objective optimizes the text tokens, the model is still fed with the audio tokens for context. A different voice would create a shift in the context distribution. In practice, if we fully resynthesized the data with M-Alt-Vox's voice, then we conjecture that we wouldn't see this issue. But this implies a heavier pipeline, and the transfer experiment was to test the extent to which we can reuse the already synthesized data, and reduce the cost of transferring across voices.

## Concerns

> Potential reasons for rejection [...] publicly available.

We thank the reviewer for inquiring about data collection. We collected aggregated data in a privacy-preserving and minimalist manner from organic users of our deployed system (not from specially recruited participants) over a two-week period, following a standard user agreement. This is similar to protocols in past work [1, 2]. This approach yields more authentic, diverse data that can better reflect users' interests and feedback. To protect privacy, we asked users not to share personal data, and in practice, no personally identifying details were retained. Sensitive voice information was never accessed, and no human listens to the recorded conversations. Researchers don't have access to any sensitive and personally identifiable information, including vocal attributes. We promptly discarded the original audio after transcribing it into text and allowing a download period for the user. We may lose paralinguistic information, but for the purpose of this work, we choose to keep only the minimal amount of information needed. Our data practices are, for instance, GDPR-compliant and adhere to principles of minimal, privacy-preserving collection. Note that the examples in the Appendix are from our human evaluation study, explicitly consented to.

As for the preference pairs, they include multiple flagged points from the original multi-turn dialogues. We detect problematic responses in the dialogue, keep the initial problematic response, and sample from the other problematic responses if there are any. We will add more information about the distribution in the Appendix (e.g. number of turns). Because of privacy concerns, we don't plan to release the data at this point.

Please let us know if the answers addressed your questions, and if we can address further questions you might have. Thanks again!

[1] Ram, A., et al. "Conversational ai: The science behind the alexa prize." arXiv, 2018.
[2] Shuster, K., et al. "Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage." arXiv, 2022.

---

Rebuttal Comment 1.1: Comment: I thank the authors for having addressed my questions and concerns. I have nothing to add here, and overall this confirms my positive feedback on the paper.
Summary: The paper describes an approach to aligning the Moshi spoken dialog model to preference data automatically derived from human-model interactions. The preferences are elicited from transcripts of the spoken interactions via a textual LLM (Mistral Large 2). Context and responses are (re)synthesized via TTS and used for aligning the spoken language model. The effectiveness of alignment is evaluated with regard to factuality, safety and perceived quality of interaction.
Claims And Evidence: Generally the claims are supported by the experiments carried out. A minor issue is that the evidence is limited to a single model (Moshi) while the framing refers to spoken dialog models in general.
Methods And Evaluation Criteria: The methods are appropriate for the research questions.
Theoretical Claims: NA
Experimental Designs Or Analyses: Broadly, the experimental design is solid. The main points of weakness concern the extensive reliance on transcriptions and TTS output rather than genuine spoken data, in the following ways:
- The authentic user interaction data is automatically transcribed and discarded. The paper claims this is due to unspecified privacy issues, but this is unconvincing without further elaboration. At minimum, the impact of such a drastic transformation of the interaction data on the outcome should be assessed in some way.
- Due to the above, context and preference data for alignment need to be re-synthesized via TTS. This likewise likely degrades the usefulness of this data.
If I understand correctly, human evaluation also seems to rely entirely on transcriptions rather than actual spoken interactions. This is especially problematic, as many important features of spoken dialog are very hard to decode from a textual transcription. There is no good reason for this design. Additionally, the preference data is automatically generated using an LLM, which is a practical choice, but it would be good to validate this procedure with regard to quality. It is important to note that despite the above limitations, the paper makes a valuable contribution nevertheless.
Supplementary Material: I skimmed the complete supplementary material but did not review it in detail.
Relation To Broader Scientific Literature: The key contribution is the application of offline alignment from interaction data to a full-duplex spoken language model.
Essential References Not Discussed: None identified.
Other Strengths And Weaknesses: NA
Other Comments Or Suggestions: NA
Questions For Authors: 376: It wasn't clear to me why the preference data doesn't fully transfer between different voices, since the data is transcribed and resynthesized. It would be good to explain which part of the data collection and processing pipeline leads to this problem.
Code Of Conduct: Affirmed.
Overall Recommendation: 5
Rebuttal 1: Rebuttal: We appreciate that the reviewer finds our work valuable, and thank the reviewer for the constructive remarks. We answer specific points below. ## Question > 376: It wasn't clear to me why the preference data doesn't fully transfer between different voices [...] this problem. The preference data uses TTS to resynthesize the user's voice, and for the model side, we use the audio tokens of Moshi itself (except for the preferred response, because we have no existing audio tokens, we also resynthesized using the model's voice). We use this same dataset and didn't resynthesize the audio data when aligning M-Alt-Vox. During the alignment stage, even if the final objective optimizes the text tokens, the model is still fed with the audio tokens for context. A different voice would create a shift in the context distribution. In practice, if we fully resynthesize the data with M-Alt-Vox's voice, then we conjecture that we won't see this issue. But this implies a heavier pipeline, and the transfer experiment was to test to the extent to which we can reuse the already synthesized data, and reduce the cost of transferring across voices. ## Concerns > A minor issue is that the evidence is limited to a single model (Moshi) [...] in general. The model choice is because Moshi was the only available open-source full-duplex speech-to-speech model when we conducted the project, and for cost/feasibility reasons. We agree that more work is required to further verify the extension to other models, and we will discuss this point in the limitations. We note that the method for building the dataset mixes from multi-turn dialogues is model-independent. > The main points of weakness concern the extensive reliance on transcriptions and TTS [...] This likewise likely degrades the usefulness of this data. 
We agree that discarding the original audio loses some audio information, however: - we want to clarify that the TTS is done for the user's audio, and for Moshi we used its own generated tokens (except for the generated preferred response, also synthesized); - as the resynthesized speech retains the original timestamps, some aspects of the prosody are kept, such as the rhythm and speech rate; - for the axes we are focused on in this work, the original audio is not absolutely necessary. Given that in a number of regions in the world, strong protection laws prevent or limit the recording or storage of private and biometric attributes such as a person's voice, our method presents important applications for improving speech systems in a privacy-preserving manner, while respecting local legislations. For more details on data protection, we kindly refer the reviewer to our reply to reviewer 2ifX. We also want to note that according to the original Moshi paper [1], the instruction stage also uses synthetic generation. > If I understand correctly, human evaluation seems to also rely entirely on transcriptions [...] for this design. We agree that relying on transcriptions with timestamps for the human evaluation has limitations, and this was mostly a logistical choice, for: - (1) disentangling and reducing the cognitive load of the annotators; - (2) focusing on the dialogue-level conversation content. For (1), having evaluators or speakers score conversations after listening to the full audio (that can be up to 2min+) can introduce confounding factors, such as the memory ability to correctly recall details after the fact. By using transcriptions, we ensured annotators could consistently review entire multi-turn dialogues after the conversation concluded. For (2), our primary alignment goal was to improve the dialogue content, but the automatic metrics only provide assessment for single-turn conversations. 
With human evaluation, we wanted to evaluate the dialogue-level quality across multiple turns, including aspects such as consistency and transitions across turns. We will emphasize this as a limitation, and agree that enriching human evaluation with, for instance, direct listening tests or hybrid assessments could capture a broader spectrum of spoken dialogue characteristics.

> Additionally, the preference data is automatically generated using an LLM, which is a practical choice, [...] regards to quality.

For the validation of the LLM-generated preference data, we manually curated a small held-out validation set over the axes we defined to assess the model's responses (e.g., helpfulness, safety, factuality), and inspected the automatically generated preference responses. As a practical choice, the design followed a heuristic approach trying to cover diverse failure modes. We agree that it would be interesting for future work to conduct a more fine-grained investigation.

Please let us know if the answers addressed your questions, and if we can address further questions you might have. Thanks again!

[1] Défossez, A., et al. "Moshi: a speech-text foundation model for real-time dialogue." arXiv, 2024.

---

Rebuttal Comment 1.1: Comment:

> Given that in a number of regions in the world, strong protection laws prevent or limit the recording or storage of private and biometric attributes such as a person's voice, our method presents important applications for improving speech systems in a privacy-preserving manner, while respecting local legislation.

I would find this aspect more useful and convincing if the paper made an effort to quantify the impact of discarding the audio and replacing it with transcriptions and re-synthesized audio. As it is, it's not clear to the reader how serious of a problem this approach is, and if the potential advantage from the point of view of privacy is worth the tradeoff.
---

Reply to Comment 1.1.1: Comment:

> I would find this aspect more useful and convincing if the paper made an effort to quantify the impact of discarding the audio and replacing it with transcriptions and re-synthesized audio. As it is, it's not clear to the reader how serious of a problem this approach is, and if the potential advantage from the point of view of privacy is worth the tradeoff.

We thank the reviewer for emphasizing the importance of quantifying the impact of synthetic vs. real audio. Because we don't have access to the raw audio of the training data used, but only a much smaller set used for human evaluation, it's difficult to estimate the influence using the same training pipeline. Instead, we discarded the "user's voice" from the human evaluation audio, resynthesized the user stream using the TTS pipeline, and computed the corpus WER between both transcriptions with Whisper for the user side. This is more of a proxy, as we don't have the ground-truth transcriptions. The WER (lower is better) suggests that there can be a moderate discrepancy introduced by synthesizing the audio. We also listened to 20 audio clips and manually checked the errors introduced. The most significant difference is the altered voice attributes. WER errors can include backchannel transcriptions (e.g. "Can you tell me, um, can you give me some recommendations of books to read?" vs. "Can you tell me, can you give me some recommendations of books to read?").

| Model | Corpus WER (%) |
|------------------------|----------------|
| Moshi-Instruct | 6.27% |
| M-Alt-Vox-Instruct | 6.75% |

Given that certain regions have strict data protection laws regarding biometric privacy, our approach shows that it's already possible to improve alignment with the minimal degree of information needed, using synthesized user audio. When the trade-off is present, the choice might be more case-dependent, as the original audio could keep some more diverse characteristics, but also bring data biases.
We agree that for future work, a more systematic and quantitative comparison of the influence of real vs. synthetic audio can help further clarify the necessity and impact of different vocal attributes.
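For reference, a corpus-level WER like the one reported in the table above can be computed along these lines (a minimal self-contained sketch using a word-level Levenshtein distance; the actual evaluation compared Whisper transcriptions of real vs. resynthesized audio, which are not reproduced here):

```python
def edit_distance(ref_words, hyp_words):
    # Word-level Levenshtein distance (insertions, deletions, substitutions),
    # computed with a rolling DP row.
    prev = list(range(len(hyp_words) + 1))
    for i, rw in enumerate(ref_words, 1):
        cur = [i]
        for j, hw in enumerate(hyp_words, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (rw != hw)))   # substitution
        prev = cur
    return prev[-1]

def corpus_wer(references, hypotheses):
    # Corpus WER: total word-level edits divided by total reference words,
    # pooled over all utterance pairs (not an average of per-utterance WERs).
    errors = sum(edit_distance(r.split(), h.split())
                 for r, h in zip(references, hypotheses))
    total = sum(len(r.split()) for r in references)
    return errors / total
```

For instance, the backchannel example quoted above (an inserted "um") counts as a single insertion error against the reference.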
On the Role of Label Noise in the Feature Learning Process
Accept (poster)
Summary: This paper provides a theoretical analysis of how label noise impacts the feature learning process in deep neural networks. The authors prove a two-stage learning dynamic for networks trained with label noise:

1. Stage I – The model first learns from clean samples while ignoring noisy ones, leading to good generalization.
2. Stage II – As training continues, the network starts overfitting to noisy labels, resulting in performance degradation.

To support their theoretical framework, the authors analyze the training dynamics of a two-layer convolutional neural network (CNN) on uniform label noise. Their findings highlight the risks of prolonged training with noisy labels and provide a theoretical justification for two commonly used strategies to mitigate label noise: early stopping and sample selection. The paper further validates its claims through synthetic experiments and real-world experiments on CIFAR-10. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I haven't carefully checked the proof, but the proof sketch looks reasonable. Experimental Designs Or Analyses: The authors clearly show two-stage phases during training with label noise via synthetic experiments and real-world experiments on CIFAR-10. Supplementary Material: I did not check the proofs in the supplementary material. Relation To Broader Scientific Literature: The paper provides theoretical support for learning with label noise, especially methods focusing on early stopping. Essential References Not Discussed: Related works have been well discussed. Other Strengths And Weaknesses: The early stopping technique is widely used in learning with label noise, yet its theoretical justification remains incomplete. This paper provides valuable theoretical support for the phenomenon.
However, its impact could be further strengthened if the authors offered a more in-depth explanation, based on their theory, of why the model tends to prioritize learning from clean-labeled samples before incorporating noisy labels. Other Comments Or Suggestions: See above. Questions For Authors: According to Theorem 4.1, $T_1$ marks the transition between the first and second stages. Could $T_1$ be leveraged in practical applications to determine the optimal point for early stopping, even for the synthetic experiment? Code Of Conduct: Affirmed. Overall Recommendation: 4
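The signal-noise data model described in the summary above (a shared signal shared across samples, a per-sample noise patch, and uniform label flipping) can be sketched roughly as follows. The exact distributional parameters below (signal direction, noise scale) are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def make_noisy_dataset(n, d, flip_rate, signal_strength=1.0, seed=0):
    """Sketch of a signal-noise data model with uniform label noise.
    Each sample carries a shared signal patch y * mu and an independent
    Gaussian noise patch xi; a flip_rate fraction of labels is flipped."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(d)
    mu[0] = signal_strength                    # shared signal direction
    y = rng.choice([-1, 1], size=n)            # clean binary labels
    signal = y[:, None] * mu                   # signal patch, shared up to sign
    noise = rng.normal(scale=1.0 / np.sqrt(d), size=(n, d))  # per-sample noise
    flipped = rng.random(n) < flip_rate
    y_observed = np.where(flipped, -y, y)      # uniform label noise
    return signal, noise, y, y_observed
```

Because the signal patch is identical (up to sign) across samples while the noise patch varies per sample, this setup makes the clean-first learning dynamic plausible: the shared direction accumulates gradient signal much faster than any individual noise vector.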
Rebuttal 1: Rebuttal: Thank you for your great efforts on the review of this paper and for recognizing the value of our contributions. We will try our best to address your questions. **Q1: Suggestions for more explanations of why the model tends to prioritize learning from clean-labeled samples. “However, its impact could be further strengthened if the authors offered a more in-depth explanation, based on their theory, of why the model tends to prioritize learning from clean-labeled samples before incorporating noisy labels.”** **A1**: Thank you for the suggestion. We will include a more in-depth explanation in the proof sketch. Intuitively, the signal $\mu$ is shared across all samples, and thus it is easier to be captured by the model. In contrast, noise is more complicated and varies across samples, so learning the noisy samples requires the neural network to spend more time exploring a refined structure to memorize the noise for each sample. **Q2: Questions for applying $T_1$ as the early stopping point in practice. “According to Theorem 4.1, $T_1$ marks the transition between the first and second stages. Could $T_1$ be leveraged in practical applications to determine the optimal point for early stopping, even for the synthetic experiment?”** **A2**: Thank you for the question. As also discussed in *Reviewer P2ej A4*, we acknowledge that directly computing $T_1$ in real-world scenarios is not feasible due to the complexity of the training dynamics and unknown data distribution. However, our theory suggests that validation accuracy can serve as a practical surrogate for identifying $T_1$. Based on Theorem 4.1 and Theorem 4.2, $T_1$ marks the transition beyond which generalization performance begins to degrade. Therefore, early stopping at the point of maximum validation accuracy aligns with our theoretical insights and the common practice of early stopping.
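The surrogate described in A2 above — stop at the point of maximum validation accuracy — can be written as a small patience-based loop. This is an illustrative sketch only; `train_step` and `validate` are placeholder callables supplied by the user, not functions from the paper:

```python
def train_with_early_stopping(train_step, validate, max_steps, patience=5):
    """Minimal early-stopping loop: stop once validation accuracy has not
    improved for `patience` consecutive evaluations. The step of the best
    validation accuracy acts as a practical surrogate for the theoretical
    transition point T_1 between the two learning stages."""
    best_acc, best_step, stale = -1.0, 0, 0
    for step in range(1, max_steps + 1):
        train_step()
        acc = validate()
        if acc > best_acc:
            best_acc, best_step, stale = acc, step, 0
        else:
            stale += 1
            if stale >= patience:
                break                 # accuracy has been degrading: Stage II
    return best_step, best_acc
```

Under the paper's two-stage dynamic, validation accuracy rises while clean samples are fitted and falls once the model starts memorizing noisy labels, so the loop halts shortly after the transition.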
Summary: The paper is concerned with a theoretical analysis of gradient descent when the training samples contain label noise as well as features uncorrelated to the correct class. The chosen methodology is feature learning theory, where highly simplified learning problems are introduced in which the features carrying signal are clearly separated from features representing noise, and a simple network architecture is trained. In this setup, the convergence of the loss can be formally analyzed in great detail as a function of the type of the example. The paper presents theorems that suggest that in the setup they study signal features are picked up by the network much faster than noise features, which creates an initial stage with generalizable learning and a second stage with overfitting. Claims And Evidence: The theoretical claims in the specific simplified setup are supported by thorough proofs. I have not checked these proofs in detail but the formulation is of high quality. In the experimental section the CIFAR10 experiments are not well described (training setup, etc) and the results there have very little overlap with the presented theory in general (data model, network architecture, training setup, specific assumptions, etc) so they provide only very indirect support for the usefulness of the theory, they basically reiterate well known empirical behavior. Methods And Evaluation Criteria: The theoretical methods are appropriate. Theoretical Claims: I did not check the proofs. The formalism is well designed and it is possible to follow, although not too easily, eg the two-layer CNN is described extremely briefly, the supplementary could have expanded this a bit. The rest of the presentation is similarly dense, it requires a lot of effort to follow. 
Experimental Designs Or Analyses: As mentioned above the CIFAR10 experiments are not described in enough detail and present observations that have been rather well known from research into memorization, as also mentioned in the paper itself (eg that early stopping works for label noise). So overall, the theory does not have a really strong experimental support, but in general it is in line with empirical observations (clean samples converge first). Supplementary Material: The supplementary contains about 30 pages of proofs, I did not check those. Relation To Broader Scientific Literature: The paper can be positioned as a purely theoretical work in the area of feature learning theory, a specific methodology to study the convergence during training. Its contributions lie in studying label noise in this framework and also applying a number of technical innovations in the analysis. Essential References Not Discussed: No suggestions here that are essential. Other Strengths And Weaknesses: As for strengths, the paper targets an interesting problem and offers a theoretical approach in a novel framework (feature learning theory) that captures some of the empirically well-known observations such as that clean samples converge first, and early stopping is a good way of filtering noisy labels. As for weaknesses, the theoretical results of the paper are in a very simplistic setup in which signal and noise are very clearly separated not only in the input, but also within the network architecture. The learning algorithm is GD with a constant learning rate. This makes the analysis possible but the usual question remains: does this approach have any explanatory power? Other Comments Or Suggestions: Typo: line 146, I guess F_{-1}(W_{-1}… Questions For Authors: About the early stopping result (prop. 4.3). 
I was wondering whether it is really early stopping in the sense that, although stopping at T_1 is good, T_1 is not necessarily an early stopping point in the usual sense of a local maximum of accuracy (or minimum of loss) on a validation set. This is actually a rather crucial question, because finding T_1 directly is not possible in practice. Do you have any thoughts on that? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your great efforts on the review of this paper and for appreciating our novelty and contributions! We will try our best to address your questions. **Q1: Concerns about the presentation. “The formalism is well designed and it is possible to follow, although not too easily, eg the two-layer CNN is described extremely briefly.”** **A1**: Thank you for your feedback. We will provide a more detailed and clearer explanation of the two-layer CNN in our revision. **Q2: Concerns about the real-world experiments. “the CIFAR10 experiments are not well described (training setup, etc) and the results there have very little overlap with the presented theory in general (data model, network architecture, ...) so they provide only very indirect support for the usefulness of the theory, they basically reiterate well known empirical behavior.”** **A2**: Thanks. We would like to clarify that the primary purpose of the CIFAR-10 experiment is to support and illustrate the theoretical findings, rather than to introduce new empirical discoveries. As the reviewer noted, there are differences between the theoretical setup and the CIFAR-10 training setup. However, the observed results align with our theoretical insights and help demonstrate the broader applicability of our theory beyond the simplified setting. Additionally, in response to the reviewer’s suggestion, we will include more details on the CIFAR-10 experimental setup in the appendix to improve clarity and transparency. **Q3: Concerns about the simplified theoretical setup. “the theoretical results of the paper are in a very simplistic setup in which signal and noise are very clearly separated not only in the input, but also within the network architecture. The learning algorithm is GD with a constant learning rate. 
This makes the analysis possible but the usual question remains: does this approach have any explanatory power?”** **A3**: We sincerely appreciate the reviewer's thoughtful critique regarding our theoretical setup. We address these concerns from three perspectives: * **Well-Established in the Community**. Our simplified framework follows standard practice in the deep learning theory community. Similar settings (e.g., GD with constant LR, architecturally separated signal/noise) have been adopted in many influential works [1, 2, 3, 4] to gain fundamental insights into neural network behavior. These simplifications are essential for isolating and rigorously analyzing core learning dynamics. * **Empirically Validated**. Our theoretical results are not merely abstract; they align closely with our real-world experiments (see Section 5). This consistency underscores the validity of our framework and demonstrates its explanatory power in practical scenarios. * **Challenges of Extension**. While we agree that extending the analysis to more complex settings (e.g., deeper model, realistic dataset) would be valuable, it remains highly challenging due to the nonlinear, nonconvex nature of neural network training. Many existing approaches rely on unrealistic assumptions—such as infinite-width networks [5, 6] or layer-wise training [7]—which can obscure the phenomena our work aims to clarify. From a broader view, simplified models are foundational across many scientific disciplines, from idealized models in physics to mean-field theories in neuroscience. Thus, we believe our theory provides valuable insights and substantial explanatory power, offering a solid foundation for further research in more realistic settings. **Q4: “Typo: line 146, I guess F_{-1}(W_{-1}…”** **A4**: Thanks. We will correct this typo. **Q5: Questions for applying $T_1$ as the early stopping point in practice. “I was wondering ... 
T_1 is not necessarily an early stopping point (in the usual sense of having a local maximum accuracy (or min loss) on a validation set), ..., because finding T_1 is not possible directly in practice. Do you have any thoughts on that?”** **A5**: Insightful question. We agree that directly computing $T_1$ in real-world scenarios is not feasible due to the complexity of the training dynamics and unknown data distribution. However, our theory suggests the existence of a point $T_1$, beyond which further training may degrade generalization performance. This implies that validation accuracy can serve as a practical surrogate for identifying $T_1$, which aligns with the common practice of early stopping at the point of maximum validation accuracy, as mentioned by the reviewer. Indeed, our theory explains the effectiveness of the common practice, which further strengthens the applicability of our theory. We will clarify this point in the revised paper.

### Reference

Due to space limit, we defer the reference list to the Reference part in *Rebuttal to Reviewer Q9d8*.
Summary: This paper analyzes the training dynamics of a (custom) two-layer ConvNet under binary class-conditional label noise. The main result is a two-stage characterization of "first fitting all clean samples, then overfitting to noisy samples". Claims And Evidence: Yes, the claims were proved. Methods And Evaluation Criteria: Overall makes sense to me. Theoretical Claims: I've skimmed through the proofs, but have not checked the details. Insights and proof sketches were provided in the main text, which is good for readers. Experimental Designs Or Analyses: Yes. Supplementary Material: I've skimmed through the proofs; the authors may need to emphasize Appendix A because it's crucial when evaluating the novelty of this work. Relation To Broader Scientific Literature: This paper analyzes the training dynamics of NNs under label noise beyond the "lazy training" regime, which is a good contribution to the literature. Essential References Not Discussed: Essential references are included. The setup is built upon Kou et al. (2023); it would be good to indicate that in the main text. Other Strengths And Weaknesses: My understanding is that the paper uses the setup and tools from Kou et al. (2023) to analyze the training dynamics under label noise. The paper is well written and the results are clear to me. Though novelty is a bit limited, I'm glad to see a new analysis of label noise learning. Other Comments Or Suggestions: The data distribution and Two-Layer ReLU CNN setup seem to follow the ones in Kou et al. (2023) and Cao et al. (2022), please indicate that. Questions For Authors: 1) line 142: The "Two-Layer ReLU CNN":

$$
F_j(W_j, x) = \frac{1}{m} \sum_{r=1}^{m} \left( \sigma(\langle w_{j,r}, y\mu \rangle) + \sigma(\langle w_{j,r}, \xi \rangle) \right)
$$

seems to correspond to a CNN with stride $d$ and then global average pooling. I understand that this is the setup considered in Kou et al.
(2023), but I am still curious about whether it's possible to consider a more general form that resembles a "standard" CNN. 2) line 376, synthetic experiments: why choose a high-dimensional case $d \gg n$? Will the same result hold for $d < n$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your great efforts on the review of this paper and for recognizing the value of our contributions. We will try our best to address your questions.

**Q1: Suggestion for acknowledgement in setup part. “The setup is built upon Kou et al. (2023), it would be good to indicate that in the main text.”**

**A1**: Thank you for the suggestion. We will include additional commentary in the Preliminary section to explicitly acknowledge that we built our theoretical setup upon [1] and [2].

**Q2: Question on the possibility of extending to a standard CNN. “I understand that this is the setup considered in Kou et al. (2023), but I am still curious about whether it's possible to consider a more general form that resembles a "standard" CNN.”**

**A2**: Good question. We would like to clarify and address the points as follows:

* **Justification of the Two-Layer CNN Model**. First, we clarify that the two-layer ReLU CNN architecture, while simplified, is a well-established model in deep learning theory for analyzing training dynamics [1, 2, 3, 4]. This choice strikes a balance between capturing essential learning phenomena and maintaining mathematical tractability.
* **Clarification on Global Average Pooling**. We respectfully point out that our theoretical analysis does not consider global average pooling. In fact, we can rewrite the two-layer ReLU CNN model as

$$
f(\mathbf W, \mathbf x) = \frac{1}{2m} \sum_{r = 1}^{2m} a_r \left(\sigma(\langle \mathbf w_{r}, y\mu\rangle) +\sigma(\langle \mathbf w_{r}, \xi\rangle) \right),
$$

where $a_r$ represents the second-layer weights, which are fixed as $a_r = -1$ for $r \leq m$ and $a_r = +1$ for $r > m$. This is different from global average pooling, where $a_r = 1$ for all $r$. We will clarify this point in the revised paper.
* **Extension to "Standard" CNN**. We have also carefully considered the extension to a more “standard” CNN.
Based on the review’s context, we interpret the term “standard” in two aspects:

1. *Flexible stride in the convolutional layer*. We acknowledge that variable strides are commonly used in practice. Extending the data distribution to a multi-patch case allows us to use a larger stride (increasing the size of the receptive field), which is more aligned with standard CNN architectures.
2. *Multi-layer architectures*. We recognize the importance of deeper networks. However, the analysis of training dynamics beyond two-layer CNNs remains an open challenge and often requires unrealistic assumptions, e.g., infinite width [5, 6] or layer-wise training [7].

Overall, while we acknowledge the value of extending our analysis to a more complex setup, this lies beyond the scope of our paper and is deferred to future work.

**Q3: Concerns about the synthetic experiments. “line 376, synthetic experiments: why choose a high-dimensional case $d \gg n$? Will the same result hold for $d < n$?”**

**A3**: Good question. First, we clarify that we choose $d \gg n$ according to our theoretical setting (Condition 4.1), where we analyze the over-parameterization regime. Nevertheless, we followed your suggestion to conduct a new experiment with $n > d$. Specifically, we choose $n = 200$, $d = 180$. Results are presented at [Figure (please click this link)](https://anonymous.4open.science/r/ICML2025-label-noise-26D3). Consistent with our main paper, we observe a two-stage behaviour, where the model initially fits the clean samples and then overfits to the noisy ones.

**Q4: Suggestions for emphasizing Appendix A. “the authors may need to emphasize Appendix A because it's crucial when evaluating novelty of this work.”**

**A4**: Thank you for the suggestion. We will follow your suggestion to include a “technical novelty” paragraph to highlight Appendix A in our revised paper.

### Reference

[1] Cao et al. Benign overfitting in two-layer convolutional neural networks. NeurIPS 2022.
[2] Kou et al.
Benign overfitting in two-layer relu convolutional neural networks. ICML 2023.
[3] Zou et al. The benefits of mixup for feature learning. ICML 2023.
[4] Chen et al. Understanding and improving feature learning for out-of-distribution generalization. NeurIPS 2023.
[5] Du et al. Gradient descent finds global minima of deep neural networks. ICML 2019.
[6] Allen-Zhu et al. A convergence theory for deep learning via over-parameterization. ICML 2019.
[7] Chen et al. Which layer is learning faster? a systematic exploration of layer-wise convergence rate for deep neural networks. ICLR 2023.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed response.
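For concreteness, the two-layer ReLU CNN written out in A2 above can be sketched in numpy. The dimensions below are illustrative, and this is a direct transcription of the rebuttal's formula, not the authors' experimental code:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def two_layer_cnn(W, mu, xi, y):
    """f(W, x) = (1/2m) * sum_{r=1}^{2m} a_r * (relu(<w_r, y*mu>) + relu(<w_r, xi>)),
    with second-layer weights fixed to a_r = -1 for r <= m and a_r = +1 for r > m,
    applied to an input x made of a signal patch y*mu and a noise patch xi."""
    two_m = W.shape[0]                     # W stacks the 2m first-layer filters
    a = np.concatenate([-np.ones(two_m // 2), np.ones(two_m // 2)])
    activations = relu(W @ (y * mu)) + relu(W @ xi)
    return float(a @ activations) / two_m
```

Note the fixed alternating signs of $a_r$, which is what distinguishes this model from global average pooling (where all $a_r = 1$), as the rebuttal points out.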
LV-XAttn: Distributed Cross-Attention for Long Visual Inputs in Multimodal Large Language Models
Accept (poster)
Summary:

1. The paper presents LV-XAttn, a distributed cross-attention mechanism designed to address the high memory demands and communication overheads in multimodal large language models (MLLMs) when processing large visual inputs.
2. LV-XAttn reduces communication volume by keeping large key-value blocks locally on each GPU and exchanging smaller query blocks across GPUs, while also introducing an efficient activation recomputation technique to support longer visual context.
3. Evaluations demonstrate that LV-XAttn achieves significant speedups, up to 5.58× end-to-end speedup compared to existing approaches like Ring Attention, making it a more efficient solution for distributed training and inference of MLLMs.

Claims And Evidence:

1. Theoretical Analysis: They present a detailed theoretical analysis of the communication benefits of LV-XAttn compared to existing methods like Ring Attention. This includes mathematical formulations of communication volumes and runtime analysis for different scenarios.
2. Empirical Evaluations: The paper includes comprehensive empirical evaluations across multiple models (mPLUG-Owl3 and OpenFlamingo) and cluster configurations (16 A100 GPUs, 8 A30 GPUs, etc.). These evaluations demonstrate significant speedups for both cross-attention operations and overall model iteration time.
3. Ablation Studies: The authors conduct ablation studies to isolate the effects of specific techniques, such as the activation recomputation method. These studies show that the proposed methods achieve their claimed benefits with minimal overhead.
4. Comparison with Baselines: The paper compares LV-XAttn with existing distributed attention mechanisms like Ring Attention and DeepSpeed-Ulysses, demonstrating superior performance in terms of speed and memory efficiency.

Methods And Evaluation Criteria: 1.
The core idea of LV-XAttn—keeping large key-value blocks locally while exchanging smaller query blocks across GPUs—makes sense given the observation that query blocks are typically much smaller than key-value blocks in MLLMs. This approach directly addresses the communication bottleneck identified in existing distributed attention mechanisms.
2. The activation recomputation technique specifically designed for MLLMs is logical, as it leverages the fact that visual features remain unchanged across cross-attention layers, allowing for memory savings without significant recomputation overhead.
3. The theoretical analysis of communication benefits provides a clear rationale for why LV-XAttn should outperform existing methods like Ring Attention, especially in scenarios with large visual inputs.
4. The use of mPLUG-Owl3 and OpenFlamingo models as test cases is appropriate, as these are representative MLLMs that incorporate cross-attention mechanisms and are known to have challenges with large visual inputs.
5. The evaluation across different cluster configurations (16 A100 GPUs, 8 A30 GPUs, etc.) helps demonstrate the method's effectiveness in various distributed settings and resource constraints.
6. The focus on both cross-attention operation speedup and overall model iteration time speedup provides a comprehensive view of the practical benefits, showing how improvements in one component translate to overall system performance.

Theoretical Claims: The paper presents several theoretical claims about the communication benefits and speedups of LV-XAttn compared to existing methods like Ring Attention. These claims are supported by mathematical formulations of communication volumes and runtime analysis. The theoretical speedup analysis in Figure 4 and the accompanying equations seem reasonable and align with the intuition that reducing communication volume would lead to significant speedups in distributed settings, especially when dealing with large key-value blocks.
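The intuition behind the communication savings can be made concrete with a back-of-the-envelope comparison. The formulas below are rough assumptions for illustration, not the paper's exact expressions: Ring Attention circulates the local K and V blocks among the N workers, while an LV-XAttn-style scheme circulates only the much smaller query block (plus a same-sized partial output on the return path):

```python
def comm_volumes(s_q, s_kv, n_workers, d, bytes_per_elem=2):
    """Rough per-worker communication volume in bytes for one cross-attention
    pass (fp16 assumed). These formulas are illustrative assumptions only."""
    steps = n_workers - 1
    # Ring-Attention-style: each step forwards the local K and V blocks.
    ring = steps * 2 * (s_kv // n_workers) * d * bytes_per_elem
    # LV-XAttn-style: each step sends the local query block onward and
    # receives a partial output of the same size; K and V never move.
    lvx = steps * 2 * (s_q // n_workers) * d * bytes_per_elem
    return ring, lvx
```

Under these assumptions, with a query sequence of 1,024 text tokens against 65,536 visual key-value tokens, the traffic ratio is simply s_kv / s_q = 64×, which matches the qualitative claim that the benefit grows with the size of the visual input.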
Experimental Designs Or Analyses:

1. Main Evaluations Comparing LV-XAttn with Ring Attention

1.1 Design: The authors evaluated LV-XAttn on multiple models (mPLUG-Owl3 and OpenFlamingo) across different cluster configurations (16 A100 GPUs, 8 A30 GPUs, etc.). They measured both cross-attention operation speedup and overall model iteration time speedup.

1.2 Validity: This design is appropriate for demonstrating the practical benefits of LV-XAttn in real-world distributed settings. The use of multiple models and cluster configurations helps establish generalizability.

1.3 Issues: The paper doesn't provide statistical significance testing for the results, which would strengthen the claims. Additionally, while the results are impressive, more detailed analysis of how different parameters (like batch size or sequence length) affect performance could be beneficial.

2. Ablation Studies

2.1 Design: The authors conducted ablation studies to examine the effects of overlapping computation/communication and activation recomputation.

2.2 Validity: These studies effectively isolate specific components of the proposed method and demonstrate their individual contributions. The results (Figures 5 and 6) clearly show the benefits of these techniques.

2.3 Issues: The ablation studies are somewhat limited in scope and could benefit from more detailed analysis of how different parameters affect performance. For example, varying the number of workers or the size of visual inputs could provide additional insights.

Supplementary Material: A. Comparison of LV-XAttn and Ring Attention for General Use Case

Relation To Broader Scientific Literature: 1. Cross-Attention in Multimodal Models: Cross-attention has been widely adopted in multimodal large language models (MLLMs) for integrating visual information into language backbones.
The proposed LV-XAttn mechanism advances this area by specifically addressing the challenges of distributed computation for cross-attention with large visual inputs, which is a critical bottleneck for efficient training and inference.
2. Distributed Attention Mechanisms: Existing distributed attention approaches, such as head-parallelism methods (e.g., Deepspeed-Ulysses) and sequence-parallelism methods (e.g., Ring Attention), face significant communication overheads. LV-XAttn introduces a novel distributed cross-attention mechanism that minimizes communication overhead by keeping large key-value blocks locally on each GPU and exchanging smaller query blocks. This approach is a significant improvement over previous methods, especially for applications with large visual inputs where the size of the query block is much smaller than that of the key-value blocks.
3. Memory-Efficient Attention: The paper's introduction of an efficient activation recomputation technique is related to broader efforts in developing memory-efficient attention mechanisms. Techniques like Flash Attention aim to reduce the memory footprint of attention operations. The activation recomputation method in LV-XAttn specifically targets the memory pressures introduced by storing large key and value tensors for every cross-attention layer in MLLMs, allowing for the processing of longer visual contexts with minimal overhead.

Essential References Not Discussed: No

Other Strengths And Weaknesses:

1. Limited Model Scope: The evaluations primarily focus on mPLUG-Owl3 and OpenFlamingo models. While these are representative MLLMs, the approach could benefit from validation on a wider range of multimodal models to demonstrate broader applicability.
2. Lack of Statistical Significance Testing: While the results show substantial speedups, statistical significance testing isn't provided. This could strengthen the claims by quantifying the reliability of the observed performance improvements. 3.
Limited Discussion of Trade-offs: The paper could benefit from a more detailed discussion of potential trade-offs, such as increased computational complexity from activation recomputation or limitations in scenarios with very small visual inputs. 4. No Comparison with Other Distributed Cross-Attention Methods: The paper primarily compares with Ring Attention and DeepSpeed-Ulysses but doesn't evaluate against other potential distributed cross-attention mechanisms that might exist in the literature. Other Comments Or Suggestions: 1. Limited Model Scope: The evaluations primarily focus on mPLUG-Owl3 and OpenFlamingo models. While these are representative MLLMs, the approach could benefit from validation on a wider range of multimodal models to demonstrate broader applicability. 2. Lack of Statistical Significance Testing: While the results show substantial speedups, statistical significance testing isn't provided. This could strengthen the claims by quantifying the reliability of the observed performance improvements. 3. Limited Discussion of Trade-offs: The paper could benefit from a more detailed discussion of potential trade-offs, such as increased computational complexity from activation recomputation or limitations in scenarios with very small visual inputs. 4. No Comparison with Other Distributed Cross-Attention Methods: The paper primarily compares with Ring Attention and DeepSpeed-Ulysses but doesn't evaluate against other potential distributed cross-attention mechanisms that might exist in the literature. Questions For Authors: 1. Limited Model Scope: The evaluations primarily focus on mPLUG-Owl3 and OpenFlamingo models. While these are representative MLLMs, the approach could benefit from validation on a wider range of multimodal models to demonstrate broader applicability. 2. Lack of Statistical Significance Testing: While the results show substantial speedups, statistical significance testing isn't provided. 
This could strengthen the claims by quantifying the reliability of the observed performance improvements. 3. Limited Discussion of Trade-offs: The paper could benefit from a more detailed discussion of potential trade-offs, such as increased computational complexity from activation recomputation or limitations in scenarios with very small visual inputs. 4. No Comparison with Other Distributed Cross-Attention Methods: The paper primarily compares with Ring Attention and DeepSpeed-Ulysses but doesn't evaluate against other potential distributed cross-attention mechanisms that might exist in the literature. Code Of Conduct: Affirmed. Overall Recommendation: 3
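The communication argument in the review above (exchange the small query blocks instead of rotating the large key-value blocks) can be checked with a back-of-the-envelope model. This is an illustrative sketch, not the paper's implementation: the ring-rotation cost model, worker count, hidden size, and byte width are all assumed values.

```python
# Toy per-iteration communication volume for sequence-parallel attention
# across P workers, assuming each block is rotated around the ring P-1 times.
def ring_kv_volume(s_kv, p, bytes_per_token):
    # Ring Attention-style: both key and value blocks travel around the ring.
    return 2 * (s_kv // p) * bytes_per_token * (p - 1)

def query_exchange_volume(s_q, p, bytes_per_token):
    # LV-XAttn-style: only the much smaller query blocks are exchanged.
    return (s_q // p) * bytes_per_token * (p - 1)

# Assumed setting: S_Q = 4K text tokens, S_KV = 1,458K visual tokens,
# 8 workers, fp16 activations with hidden size 4096.
bpt = 2 * 4096
ratio = ring_kv_volume(1_458_000, 8, bpt) / query_exchange_volume(4_000, 8, bpt)
print(ratio)  # → 729.0, i.e. ~729x less traffic for the query-exchange scheme
```

Under this model the advantage is simply $2 \cdot S_{KV}/S_Q$ (the factor 2 covers sending both K and V), which is why the gap grows with frame count.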
Rebuttal 1: Rebuttal: Thank you for your time and feedback. **Q1: Limited model scope.** A1: Please refer to Q2 of Reviewer zbCn. **Q2: Lack of statistical significance testing.** A2: All runtime results are averaged over five iterations, excluding the first two warm-up iterations. **Q3: Limited discussion of trade-offs.** A3: In Sections 3.2 and 4.4, we discussed and demonstrated how LV-XAttn’s activation recomputation trades off minimal runtime for reduced memory pressure. For smaller visual inputs, as noted in the Appendix (and discussed in Reviewer EkKR Q3), we expect LV-XAttn to perform similarly to Ring Attention. **Q4: No comparison with other distributed cross-attention methods.** A4: To the best of our knowledge, existing distributed attention mechanisms either use sequence-parallelism or head-parallelism. Ring Attention and DeepSpeed-Ulysses are two widely adopted and representative mechanisms for each approach, respectively. We believe comparing LV-XAttn to these two methods provides a comprehensive evaluation, as they cover the primary paradigms in distributed cross-attention.
Summary: Cross-attention layers consume significant memory in applications involving large visual inputs, such as long video understanding, making scaling difficult due to high memory requirements. To address this, the authors present LV-XAttn, a distributed and exact cross-attention mechanism that leverages sequence-parallelism with minimal communication overhead. LV-XAttn partitions large key and value blocks across workers while transmitting only small query blocks, enabling blockwise attention computation. This approach significantly reduces communication volume compared to Ring Attention, improving scalability for large-scale visual tasks.

## update after rebuttal
Thank you to the authors for their responses. While some concerns have been addressed, the contributions appear limited to cross-attention operations, making the scenario less compelling. Therefore, I will maintain my current score.

Claims And Evidence: The claims sound reasonable to me. However, I have some concerns and questions about the experiments, as described below.

Methods And Evaluation Criteria:
**Strengths:**
- The authors evaluated LV-XAttn on various MLLM models and cluster setups.

**Weaknesses and Questions:**
- Which datasets were used for training and evaluation?
- Although the authors demonstrate a significant speedup, there is no discussion of how LV-XAttn affects performance (e.g., accuracy).

Theoretical Claims: The theoretical claims in Section 3 sound reasonable to me.

Experimental Designs Or Analyses:
**Weaknesses and Questions:**
- Why is the proposed method implemented with Ring Attention for the LM blocks? Is LV-XAttn not very efficient for that? Section A in the Appendix provides further analysis (Lines 632–634), but a more detailed investigation is needed to clarify the reasons.
- In the result tables, "CA" indicates that the proposed model achieves a significant speedup in cross-attention operations.
Nevertheless, the total speedup is much smaller than that of CA. Where does the proposed model spend more time? Supplementary Material: I checked the Appendix. Relation To Broader Scientific Literature: The experimental results may be useful for running models in the specific scenario described in the paper. Essential References Not Discussed: I believe that the paper covers most of the essential references. Other Strengths And Weaknesses: Please check my comments above. Other Comments Or Suggestions: The manuscript requires proofreading. For instance, Figure 2 needs additional clarification on the legends, as they are written entirely in abbreviations, such as ``Fwd Non-CA``. Questions For Authors: Please answer my questions above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your detailed questions and useful feedback! **Q1: Which datasets were used for training and evaluation?** A1: For our experiments, we do not pretrain or finetune the models. Instead, we use the checkpoints from pretrained models and replace their cross-attention operations with LV-XAttn and other distributed attention baselines. Since LV-XAttn does not affect accuracy, our evaluation focuses on its runtime. We follow the same benchmarking methodology used in Flash Attention and DistFlashAttn to measure the runtime with randomly generated inputs of specific sizes, validating its effectiveness across different scales. It is important to note that our performance benefits are independent of the datasets used. **Q2: How does LV-XAttn affect the model’s performance (e.g. accuracy)?** A2: LV-XAttn is an exact distributed attention mechanism and does not change the output of the cross-attention operator. Hence LV-XAttn does not affect the accuracy. In addition, we have validated the correctness of LV-XAttn by confirming that its output matches that of the PyTorch attention implementation. These tests are the same ones used by Flash Attention and DistFlashAttn. **Q3: Why is Ring Attention used for the LM blocks?** A3: For self-attention in LM blocks, the short text length and the equal size of query blocks and key-value blocks result in similar performance for LV-XAttn and Ring Attention, making the choice between them less impactful. To confirm this, we conducted additional experiments with mPLUG-Owl3-1b, maintaining a fixed frame count and varying the text length (since LM blocks are only affected by text length). The experiments were performed on a 6-GPU cluster, with each node equipped with three A100 40GB GPUs. The GPUs were interconnected via a 64GB/s PCIe connection, with a cross-node bandwidth of 25GB/s. The results are presented in the table below. 
| Text Length | Frame Count | $S_Q$ | $S_{KV}$ | Ring LM + Ring CA (s) | LV-XAttn LM + Ring CA (s) | Ring LM + LV-XAttn CA (s) | LV-XAttn LM + LV-XAttn CA (s) |
|-------------|-------------|-------|----------|-------------------|-----------------------|-----------------------|---------------------------|
| 24K | 1152 | 24K | 820K | 20.34 | 19.81 | 16.70 | 16.06 |
| 18K | 1152 | 18K | 820K | 17.64 | 16.81 | 12.91 | 12.86 |
| 12K | 1152 | 12K | 820K | 14.37 | 14.35 | 10.11 | 9.87 |
| 6K | 1152 | 6K | 820K | 13.08 | 12.81 | 6.68 | 6.47 |

Here, “X LM + Y CA” indicates that the LM blocks are implemented with X, and the cross-attention layers are implemented with Y. As shown in the table, there is minimal difference between LM blocks implemented with Ring Attention or LV-XAttn. The speedup observed is primarily due to how the cross-attention layers are implemented.

**Q4: Why is the end-to-end speedup much smaller than the speedup of cross-attention?**

A4: The total iteration time includes the time spent on both cross-attention and non-cross-attention operations within the MLLM. LV-XAttn significantly accelerates the cross-attention layers, but the time spent on the remaining components of the MLLM remains unchanged. This is illustrated in Figure 2, where LV-XAttn reduces both the forward and backward cross-attention times (FWD CA and BWD CA), while the forward and backward times for vision and non-cross-attention layers (FWD Vision, FWD Non-CA, and BWD Non-CA) stay the same. As a result, the overall end-to-end speedup is limited by the time spent on non-cross-attention layers, meaning the speedup is smaller than that observed in the cross-attention layers. This is particularly true for mPLUG-Owl3 models, which only have 4 cross-attention layers out of a total of 24 or 28 LM blocks.

**Q5: Figure 2 needs additional clarification on the legends.**

A5: The abbreviations used in the legend for Figure 2 are as follows:

* FWD CA: Forward pass through the cross-attention layers.
* BWD CA: Backward pass through the cross-attention layers.
* FWD Vision: Forward pass through the vision encoder.
* FWD Non-CA: Forward pass through the non-cross-attention layers.
* BWD Non-CA: Backward pass through the non-cross-attention layers.

We will ensure these clarifications are included in our revision.
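The gap between cross-attention speedup and end-to-end speedup discussed in A4 is essentially Amdahl's law: only the cross-attention fraction of the iteration is accelerated. A small sketch with hypothetical fractions (not measured values from the paper):

```python
def end_to_end_speedup(f_ca, ca_speedup):
    """Amdahl's law: overall speedup when only the fraction f_ca of
    iteration time (here, cross-attention) is accelerated by ca_speedup."""
    return 1.0 / ((1.0 - f_ca) + f_ca / ca_speedup)

# With a 50x cross-attention speedup, the overall gain is capped by the
# untouched layers: ~2.4x if CA is 60% of the iteration, ~1.1x if only 10%.
print(round(end_to_end_speedup(0.6, 50), 2))  # → 2.43
print(round(end_to_end_speedup(0.1, 50), 2))  # → 1.11
```

This is consistent with the rebuttal's point that mPLUG-Owl3, with only 4 cross-attention layers out of 24 or 28 LM blocks, sees a much smaller total speedup than its CA-only speedup.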
Summary: The paper introduces LV-XAttn, a distributed cross-attention mechanism designed to handle large visual inputs in Multimodal Large Language Models (MLLMs) with minimal communication overhead. Cross-attention is commonly used in MLLMs to integrate visual information into the language backbone, but processing large visual inputs (e.g., long videos) leads to high memory demands and significant communication costs in distributed setups. Existing distributed attention mechanisms, such as head-parallelism and sequence-parallelism, face scalability and efficiency challenges, particularly with large key-value blocks. The key contributions of the paper are: 1. The proposed mechanism keeps large key-value blocks locally on each GPU and exchanges smaller query blocks across GPUs, significantly reducing communication volume. This approach leverages the observation that in applications with large visual inputs, the query block size is typically much smaller than the key-value blocks. 2. To further reduce memory pressure, LV-XAttn introduces a technique where visual tokens are shared across cross-attention layers, and activations are recomputed during the backward pass. This allows processing longer visual inputs with minimal overhead (less than 8%).

Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence, as the paper provides theoretical analysis, empirical evaluations, and comparisons with existing methods.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-designed and appropriate for the problem at hand. LV-XAttn effectively addresses the bottlenecks of cross-attention in MLLMs, and the evaluation demonstrates its advantages in terms of efficiency, scalability, and practicality for real-world applications.

Theoretical Claims: The paper presents theoretical claims and analyses, particularly regarding the communication benefits and runtime efficiency of LV-XAttn compared to Ring Attention. I have checked the runtime analysis and found no problems, and the empirical results support the claims.

Experimental Designs Or Analyses: The paper provides a detailed algorithm (Algorithm 1) for the forward pass of LV-XAttn, which is helpful for reproducibility. However, it would be beneficial to include more details on the implementation, such as the specific libraries and frameworks used, to facilitate replication of the results.

Supplementary Material: Yes

Relation To Broader Scientific Literature: None

Essential References Not Discussed: None

Other Strengths And Weaknesses: Weakness: Limited Applicability to Certain MLLM Architectures: The proposed method is specifically designed for MLLMs that utilize cross-attention mechanisms to process visual tokens. Consequently, it may not be directly applicable to other MLLM architectures, such as LLaVA, which employ different approaches for integrating visual information. This limitation restricts the generalizability of LV-XAttn to a broader range of multimodal models. Lack of Analysis on Single Image Processing: In real-world deployment scenarios, MLLMs are often required to process not only long videos but also individual images. The paper does not provide an analysis of the runtime performance or potential trade-offs when applying LV-XAttn to single-image inputs. This omission raises concerns about whether the proposed method might introduce inefficiencies or negative effects in such cases, which are critical for practical applications. A more comprehensive evaluation encompassing both long videos and single images would strengthen the paper's relevance to real-world use cases.

Other Comments Or Suggestions: None

Questions For Authors: None

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful questions and feedback! **Q1: Details on implementation.** A1: LV-XAttn is implemented using PyTorch and Triton. It uses `torch.distributed` for distributed communication, while the modified FlashAttention kernels to account for rescaling operations are implemented with Triton. We will add these details to the paper. We plan to open-source LV-XAttn. **Q2: Limited applicability to certain MLLM architectures.** A2: Please refer to Q2 of Reviewer zbCn. **Q3: Lack of analysis on single image processing.** A3: LV-XAttn addresses the communication overhead caused by the large number of visual tokens. While this issue is common in long video inputs due to the high frame count, it can also arise in single-image inputs. This can be due to either a large image size (for example, in mPLUG-Owl3, processing a single 3,840×3,840 high-resolution image is equivalent to processing a 100-frame video of standard frame size) or a high number of visual tokens per image (for instance, Llama-3V encodes each image into 6,404 visual tokens, whereas OpenFlamingo uses only 64, making a single image in Llama-3V equivalent to a 100-frame video in OpenFlamingo). In both video and single-image inputs, these scenarios result in large key-value blocks, which are the primary source of communication overhead in existing distributed attention mechanisms. LV-XAttn effectively addresses this challenge. For single images with small image size or low visual token count, we note that distributed attention may not be necessary. Even if LV-XAttn is applied in such scenarios, we expect its performance to be comparable to Ring Attention, as discussed in Appendix A.
Summary: This paper presents LV-XAttn, a distributed cross-attention mechanism with low communication overhead that can significantly reduce the inference time of MLLMs adopting the cross-attention strategy. The authors also introduce an activation recomputation technique that enables support for longer visual inputs. The proposed LV-XAttn is more efficient than Ring Attention.

Claims And Evidence: The claims are clear. For example, in Figure 2 the comparison of the overheads is well illustrated and evidenced.

Methods And Evaluation Criteria: The evaluation adopted in this paper is sensible.

Theoretical Claims: The theoretical claims presented in this paper are well reasoned, for example Figure 4, which depicts the theoretical speedup of LV-XAttn over Ring Attention.

Experimental Designs Or Analyses: I have checked the soundness and validity of the experimental designs, for example the Model Setup (i.e., using mPLUG-Owl3, OpenFlamingo), the Cluster Setup (i.e., using A100 80G GPUs), and the Baselines (i.e., Ring Attention and DeepSpeed-Ulysses). There are no issues.

Supplementary Material: A. Comparison of LV-XAttn and Ring Attention for General Use Case.

Relation To Broader Scientific Literature: The technique proposed in this paper, which speeds up MLLMs on long-video inputs, is useful.

Essential References Not Discussed: The references are sufficient. However, I noticed that Llama-3V, a typical MLLM that also adopts the cross-attention paradigm to perceive visual inputs, is not discussed in this paper.

Other Strengths And Weaknesses: Strengths - The strength of this paper is very clear: the proposed cross-attention mechanism can reduce the communication overhead for MLLMs leveraging the cross-attention paradigm in video scenarios.

Weakness - I think the proposed method is an extended version of Ring Attention for video scenarios, which is specifically effective for MLLMs that adopt the cross-attention paradigm to perceive visual inputs. However, the mainstream architecture adopted in current MLLMs is the concatenation paradigm, which prepends the visual tokens to the prompt tokens directly and feeds them to the LLM layers. Therefore, I think the applicable scenarios for LV-XAttn are limited.

Other Comments Or Suggestions: Suggestions - The statistics of the data used in the evaluation of Figure 2 should be included in the caption.

Questions For Authors: Could the proposed LV-XAttn also improve efficiency for MLLMs using the concatenation paradigm?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the constructive feedback and questions! **Q1: Llama-3V is not discussed in the paper.** A1: Thank you for pointing this out! Llama-3V [1] also utilizes a cross-attention architecture, allowing LV-XAttn to be applied to it for large visual inputs. We conducted additional experiments on Llama-3V (`Llama-3.2-11B-Vision-Instruct`), comparing LV-XAttn and Ring Attention, following a similar setup to Section 4.2. The experiments were performed on a 6-GPU cluster, with each node equipped with three A100 40GB GPUs. The GPUs inside a node were interconnected via a 64GB/s PCIe connection, with a cross-node bandwidth of 25GB/s. The results are presented in the table below.

| Model | Text Length | Frame Count | $S_Q$ | $S_{KV}$ | Ring Attention CA (s) | Ring Attention Total (s) | LV-XAttn CA (s) | LV-XAttn Total (s) | Speedup CA | Speedup Total |
|--------------------|------------|-------------|-----|-------|-----------------------|-------------------------|-----------------|-------------------|------------|---------------|
| Llama-3V | 384 | 120 | 384 | 750K | 23.30 | 37.54 | 0.43 | 14.45 | 54.19$\times$ | 2.60$\times$ |
| Llama-3V | 192 | 120 | 192 | 750K | 23.01 | 37.21 | 0.40 | 14.40 | 57.53$\times$ | 2.58$\times$ |
| Llama-3V | 192 | 60 | 192 | 375K | 11.69 | 22.92 | 0.22 | 11.61 | 53.14$\times$ | 1.97$\times$ |

For Llama-3V, each frame is encoded into 6,404 visual tokens. In comparison, OpenFlamingo and mPLUG-Owl3 models encode each frame into 64 and 729 visual tokens, respectively. This results in an even greater imbalance between query block size and key-value block size for Llama-3V. Since LV-XAttn transmits the smaller query blocks while Ring Attention transmits the larger key-value blocks, LV-XAttn achieves a significant speedup over Ring Attention.
**Q2: Applicability to MLLMs with alternative architectures.** A2: Recent MLLMs adopt two main classes of architecture: cross-attention based, where visual information is fused with text information through cross-attention, and concatenation based, where visual tokens are directly concatenated with text tokens and fed into the LLM backbone as inputs. While LV-XAttn is designed to address the communication overhead of the former class of MLLMs, it might still be useful for concatenation based MLLMs for the following reasons. First, to generate visual tokens, concatenation based MLLMs often use a visual adapter to align visual features generated by the vision encoder with the textual prompt. Cross-attention is widely adopted in these visual adapters (e.g. BLIP-2 [2], Qwen-VL [3], Video-LLaMA [4], InternVL [5]). LV-XAttn can be used in the visual adapters when processing large visual inputs. Second, with large amounts of visual tokens fed into the LLM backbones, concatenation based MLLMs also require distributing the attention operation. As discussed in Appendix A, with large context size, LV-XAttn can be used for distributed attention without slowdown when compared to Ring Attention. Finally, we would also like to note that cross-attention-based architectures are widely used in recent models, such as Llama-3V [1] and NVLM-H [6], due to their computational efficiency during training and inference, and due to their ability to prevent degradation for text-only tasks [1, 6]. As a result, addressing the communication bottleneck in these models with LV-XAttn is important for improving their overall efficiency. **Q3: Figure 2 setup clarification.** A3: For Figure 2, mPLUG-Owl3-7b was run with a text length of 4K and a frame count of 2K ($S_Q=4K$, $S_{KV}=1458K$), while OpenFlamingo-3b was run with a text length and frame count of 32K ($S_Q=32K$, $S_{KV}=2048K$). These results come from the same experiment presented in Table 3.
We will ensure these clarifications are included in the caption in our revision.

[1] The Llama 3 Herd of Models: https://arxiv.org/abs/2407.21783
[2] BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models: https://proceedings.mlr.press/v202/li23q/li23q.pdf
[3] Qwen-VL: A Versatile Vision-Language Model for Understanding, Localization, Text Reading, and Beyond: https://arxiv.org/pdf/2308.12966
[4] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding: https://aclanthology.org/2023.emnlp-demo.49/
[5] InternVL: Scaling up Vision Foundation Models and Aligning for Generic Visual-Linguistic Tasks: https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_InternVL_Scaling_up_Vision_Foundation_Models_and_Aligning_for_Generic_CVPR_2024_paper.pdf
[6] NVLM: Open Frontier-Class Multimodal LLM: https://arxiv.org/pdf/2409.11402
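The $S_{KV}$ values in the table above follow directly from frame count × per-frame visual tokens; a quick sanity check using the per-frame token counts quoted in the rebuttal (the dictionary keys are labels chosen here for illustration):

```python
# Per-frame visual token counts quoted in the rebuttal.
tokens_per_frame = {"Llama-3V": 6404, "mPLUG-Owl3": 729, "OpenFlamingo": 64}

def s_kv(model, frames):
    """Key-value sequence length contributed by the visual input."""
    return frames * tokens_per_frame[model]

print(s_kv("Llama-3V", 120))  # → 768480, i.e. the ~750K row in the table
# One Llama-3V frame carries about as many tokens as a 100-frame
# OpenFlamingo video, as argued in the rebuttal (6404 / 64 ≈ 100).
print(tokens_per_frame["Llama-3V"] / tokens_per_frame["OpenFlamingo"])  # → 100.0625
```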
Rethinking Chain-of-Thought from the Perspective of Self-Training
Accept (poster)
Summary: This paper investigates the CoT approach and proposes a novel CoT framework, inspired by its similarity to self-training, to enhance reasoning performance. The framework consists of two core modules: a task-specific prompt module that optimizes the initial reasoning process and an adaptive reasoning iteration module that dynamically adjusts the reasoning procedure to address issues in existing CoT methods, such as over-reasoning and high similarity across consecutive reasoning iterations. Experimental results demonstrate that the proposed approach achieves significant improvements in both performance and computational efficiency.

Claims And Evidence: The key claims of this paper include: (1) CoT reasoning and self-training share the same core objective; (2) reasoning performance can be improved through task-specific prompting and adaptive reasoning iteration; (3) the proposed approach outperforms traditional CoT methods in computational efficiency. Experimental results support these claims, demonstrating improvements in both performance and computational efficiency.

Methods And Evaluation Criteria: The proposed approach effectively integrates task-specific prompt optimization and adaptive reasoning iteration, enhancing reasoning capabilities while reducing computational overhead. The evaluation criteria employed in this paper align with those of traditional CoT methods, wherein key segments of the generated output are extracted and compared against the ground truth.

Theoretical Claims: The theoretical contribution of the paper primarily focuses on the analysis of entropy variation in self-training. The claims are intuitively reasonable, and the proofs provided in the Appendix appear correct.

Experimental Designs Or Analyses: The paper validates the effectiveness of the proposed method through extensive experiments, focusing primarily on the improvements in reasoning performance and computational efficiency.
Supplementary Material: I have reviewed Supplementary Material, primarily including the proof of entropy variation in self-training in Section 2 and the reasoning examples of adaptive iteration. Relation To Broader Scientific Literature: This study builds upon CoT reasoning and self-training methods, proposing a new framework that combines the advantages of both. Compared to existing CoT reasoning methods, the proposed approach introduces task-specific prompt optimization and adaptive reasoning iteration, thereby reducing issues of over-reasoning and high similarity. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1) This paper is grounded in solid theoretical foundations, providing an in-depth theoretical analysis of entropy variation. Additionally, the experimental results are significant, demonstrating that the proposed framework outperforms baseline methods in both performance and computational efficiency, particularly excelling in arithmetic reasoning tasks. 2) In Section 2, this paper provides an in-depth theoretical discussion on the uncertainty measure (information entropy) in self-training, analyzing the mechanism by which the iterative generation of pseudo-labels in semi-supervised self-training leads to progressively improved prediction accuracy. Subsequently, the authors apply this "entropy reduction" perspective to CoT reasoning, and in the following experiments, they conduct a quantitative comparison using semantic entropy, supported by both theoretical analysis and extensive experimental validation. 3) The Task-Specific Prompt module proposed by the authors automatically searches for or constructs prompt statements from task samples, rather than using the generic "Let's think step by step." In scenarios that require more precise guidance, this approach may yield better results. The authors' experiments also confirm that targeted prompts can significantly improve the results of the first-round reasoning. 
4) The Adaptive Reasoning Iteration module proposed by the authors incorporates a semantic entropy-based exit mechanism, preventing further reasoning when the model is already "highly confident," thus avoiding the introduction of additional errors. Additionally, to reduce excessive repetition of reasoning across different iterations and control reasoning similarity, the module introduces new prompts. This encourages LLMs to explore more diverse reasoning paths.

5) The overall logic is clear, and the writing is smooth.

Weaknesses:
1) There is a lack of discussion of negative results. While the paper primarily demonstrates the advantages of the method, it does not provide an explanation for the model's poor performance on certain datasets, such as why the improvement on the "commonsense" task is limited.
2) The main text lacks a Related Work section, and the Task-Specific Prompt part of framework Figure 4 does not clearly describe the specific details, making it difficult for readers to understand how to obtain the candidate prompts and select the optimal one.

Other Comments Or Suggestions:
1) Is it better to have a larger maximum number of iterations in the proposed ARI module?
2) This paper should include an intuitive analysis of the prompts selected by the TSP, such as whether they provide stronger guidance or how they structurally differ from manually crafted prompts.

Questions For Authors: Does the number of self-consistency samples have any particular impact on the proposed method?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable feedback and encouraging comments. We are motivated by the suggestions and have addressed each concern in detail as follows:

> **Q1.** There is a lack of discussion on negative results. While the paper primarily demonstrates the advantages of the method, it does not provide an explanation for the model's poor performance on certain datasets, such as why the improvement in the "commonsense" task is limited.

**A1.** We acknowledge that the performance gain is relatively limited on commonsense tasks. This is mainly because such tasks rely more on pretrained world knowledge and co-occurrence patterns, rather than explicit logical steps. In contrast, arithmetic tasks better align with our iterative reasoning framework. For open-ended tasks, additional reasoning may introduce noise rather than benefit. We will explore better adaptations to such tasks in future work.

> **Q2.** The main text lacks a Related Work section, and the Task-Specific Prompt part of Framework Figure 4 does not clearly describe the specific details, making it difficult for readers to understand how to obtain the candidate prompts and select the optimal one.

**A2.** In the final version, we will add a “Related Work” section to cover relevant studies on Self-Training, Chain-of-Thought reasoning, and semantic entropy. We will also refine Figure 4 and clarify that the task-specific prompt module generates candidate prompts based on several example questions, evaluates them using average semantic entropy on a validation set, and selects the one with the lowest entropy as the final prompt.

> **Q3.** Is it better to have a larger maximum number of iterations in the proposed ARI module?

**A3.** We have added a sensitivity analysis of the maximum number of iterations $T$, with the experimental results shown in the table below. Overall, increasing the maximum number of iterations does not always lead to better performance.
Taking AQuA and AddSub as examples, the model already achieves a relatively high accuracy at $T=3$, while further increasing $T$ to 4 or 5 leads to saturated or slightly fluctuating performance, along with a significant increase in computational cost. At the same time, for more challenging tasks, a larger maximum iteration count may offer a broader reasoning space and thus yield further performance improvements. Therefore, the optimal value of $T$ should be determined based on task difficulty: for simpler tasks, a smaller $T$ is more efficient; whereas for more complex tasks, allowing a larger $T$ may better support sufficient reasoning.

| Dataset | $T=1$ | $T=2$ | $T=3$ | $T=4$ | $T=5$ |
|-|-|-|-|-|-|
| AQuA | 59.5% | 70.8% | 70.1% | 71.7% | 71.3% |
| AddSub | 80.5% | 86.1% | 88.4% | 88.8% | 88.6% |

> **Q4.** This paper should include an intuitive analysis of the prompts selected by the TSP, such as whether they provide stronger guidance or how they structurally differ from manually crafted prompts.

**A4.** Our proposed TSP module is designed to automatically generate initial reasoning prompts that better align with the task’s semantic distribution, serving as a replacement for generic prompts. While generic prompts possess a certain degree of generality, their expressions tend to be overly abstract and fail to effectively guide the model to focus on the key reasoning elements within a task. In contrast, TSP leverages actual task samples to generate multiple candidate prompts and selects the most effective one based on average semantic entropy, thereby improving the quality of initial reasoning and reducing the number of iterations. Structurally, prompts generated by TSP typically extend the generic prompt template with additional task-specific semantic cues. For example, for the AQuA dataset, TSP produces a prompt: "Let's think step by step, how to break down the mathematical operations involved in each problem and identify the key concepts to solve them accurately".
> **Q5.** Does the number of self-consistency samples have any particular impact on the proposed method?

**A5.** We conducted additional experiments on four datasets to evaluate the impact of the sampling number $N$ on model performance and robustness, as shown in the table below. The results show that as $N$ increases, the overall model accuracy improves, with more pronounced gains on challenging tasks such as AMC2023 and Gaokao. Meanwhile, performance tends to saturate after $N=3$, indicating that even a relatively small $N$ can yield stable semantic entropy estimates. This balances performance with computational efficiency and demonstrates the method’s strong robustness with respect to the parameter $N$.

| Dataset | $N=1$ | $N=2$ | $N=3$ | $N=4$ | $N=5$ |
|-|-|-|-|-|-|
| AQuA | 53.9% | 70.9% | 70.1% | 71.3% | 72.4% |
| AddSub | 73.9% | 85.1% | 88.4% | 88.9% | 90.4% |
| AMC2023 | 12.5% | 20.0% | 25.0% | 27.5% | 30.0% |
| Gaokao | 32.8% | 41.3% | 44.2% | 44.7% | 45.3% |
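The adaptive exit rule described above (sample $N$ reasoning chains per round, stop once semantic entropy falls below a threshold $\delta$, otherwise continue up to $T$ rounds) can be sketched as follows. This is a hypothetical illustration: the paper's semantic clustering uses a real semantic-equivalence model, which is stubbed out here by exact string match, and `delta` is an assumed value.

```python
import math
from collections import Counter

def semantic_entropy(answers):
    """Entropy over clusters of semantically equivalent answers.
    Exact string match stands in for a real equivalence model."""
    counts = Counter(answers)
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def adaptive_reasoning(sample_fn, delta=0.5, max_iters=3):
    """Re-sample N reasoning chains per round; exit early once the
    model is confident (entropy below delta)."""
    for t in range(max_iters):
        answers = sample_fn(t)                 # N sampled answers at round t
        if semantic_entropy(answers) < delta:  # confident enough -> stop
            return answers, t + 1
    return answers, max_iters

# Toy run: round 0 is uncertain (two distinct answers), round 1 collapses
# onto a single answer, so the loop exits after 2 of the 3 allowed rounds.
fake_rounds = [["12", "15", "12"], ["12", "12", "12"], ["12", "12", "12"]]
answers, rounds_used = adaptive_reasoning(lambda t: fake_rounds[t])
print(rounds_used)  # → 2
```

The early exit is what limits over-reasoning: once the sampled answers agree, further iterations can only add cost or noise.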
Summary: Inspired by self-training, this paper designs a CoT framework to improve reasoning performance. It contains two core elements: a task-specific prompt and an adaptive reasoning iteration. The paper conducts experiments on 10 reasoning datasets and achieves improved results.

Claims And Evidence: I think the discussion of semantic entropy for LLM reasoning based on self-training is interesting, but it lacks theoretical proof and experimental verification in more scenarios.

Methods And Evaluation Criteria: I think the main experiment needs to compare not only the effects but also the token costs incurred by the comparison methods.

Theoretical Claims: Lack of sufficient theoretical claims on LLM semantic entropy.

Experimental Designs Or Analyses: See details in Methods And Evaluation Criteria.

Supplementary Material: Yes. I read most of it.

Relation To Broader Scientific Literature: The optimization and design of chain of thought may be helpful for the automated design of agents and the generalization research of ML.

Essential References Not Discussed: Overall complete.

Other Strengths And Weaknesses: I think the design of the idea from self-training to semantic entropy is interesting. However, I am concerned that this method may bring additional token overhead, and the effect in the experiments does not seem to be very significant. In addition, the effectiveness of semantic entropy is mainly verified from experimental phenomena, and I have concerns about its adaptability and generalization ability in more scenarios. Furthermore, the assumption of a mixture of Gaussians in the proof seems overly ideal.

Other Comments Or Suggestions: There are some minor typos in the writing of the paper; for example, the first letter of "we" in line 105 should be capitalized.

Questions For Authors: 1. How to determine the optimal predefined threshold $\delta$? 2. Does this pipeline have a certain degree of generalization ability?
For example, is the optimized CoT adaptable to new tasks? Or does each task need to be optimized in a specific way?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the valuable feedback and encouraging comments. We are motivated by the suggestions and have addressed each concern in detail as follows:

> **Q1.** Main experiments should compare both effectiveness and token overhead of each method.

**A1.** We have added token consumption to the main experiments to assess performance-cost trade-offs. Our method outperforms others while using fewer tokens than the SOTA method (e.g., Nash), especially on challenging tasks like AMC2023 and Gaokao, showing better efficiency in complex reasoning.

| Method | AQuA | Token | AddSub | Token | AMC2023 | Token | Gaokao | Token |
|-|-|-|-|-|-|-|-|-|
| Zero | 39.8% | 614 | 57.5% | 117 | 2.5% | 841 | 31.3% | 508 |
| SC | 63.0% | 1,875 | 81.5% | 743 | 15.0% | 2,950 | 35.9% | 3,357 |
| Active | 66.1% | 1,649 | 86.3% | 542 | 12.5% | 2,421 | 40.5% | 3,409 |
| Nash | 68.9% | 6,598 | 88.9% | 4,431 | 25.0% | 12,940 | 43.0% | 8,705 |
| Our | 70.1% | 2,884 | 89.1% | 1,028 | 27.5% | 7,604 | 44.2% | 4,069 |

> **Q2.** Lack of sufficient theoretical claims on LLM semantic entropy.

**A2.** The highly nonlinear nature of LLMs makes it difficult to model their output distribution using traditional probabilistic methods (e.g., GMM), posing challenges for establishing a rigorous theoretical framework for semantic entropy. This paper focuses on proposing a practical, scenario-driven CoT optimization framework, using semantic entropy as a heuristic metric to capture answer distribution dynamics during reasoning. While its theoretical foundation is not fully established, experiments demonstrate its effectiveness, and we will further investigate its theoretical basis in future work.

> **Q3.** I have concerns about its adaptability and generalization ability in more scenarios.

**A3.** To evaluate the generalization of the semantic entropy mechanism across tasks, we added experiments on two distinct datasets: Race (an English reading comprehension task) and Anli (an adversarial natural language inference task), each with 300 test samples.
As shown in the table below, our method achieved clear performance gains on both, demonstrating strong robustness across language understanding tasks. We plan to explore its application to vision and cross-modal tasks in future work.

| Method | Race | Token | Anli | Token |
|-|-|-|-|-|
| Zero | 80.0% | 508 | 45.0% | 268 |
| SC | 84.5% | 2,530 | 56.0% | 1,456 |
| Our | 88.5% | 3,467 | 64.5% | 2,245 |

> **Q4.** The assumption of a mixture of Gaussians in the proof seems overly ideal.

**A4.** We adopt the GMM as the modeling assumption for two reasons: (1) the GMM is a widely used and interpretable model that helps illustrate how pseudo-labels reduce entropy; (2) our results are a direct corollary of [1], which provides a general derivation for the sub-exponential family, of which the GMM is a special case. To balance rigor and clarity, we use the GMM as a heuristic example, while the conclusions also hold under broader distributions given certain conditions.

[1] Frei, S., et al. Self-training converts weak learners to strong learners in mixture models.

> **Q5.** How to determine the optimal predefined threshold $\delta$?

**A5.** The predefined threshold $\delta$ is determined empirically based on the task type and the number of samples $N$ in self-consistency (SC). Specifically, we allow up to $k$ predictions to deviate from the majority class and compute the corresponding entropy threshold using the ratio $(N - k)/N$. For example, in the more challenging reasoning task AQuA, when $N = 4$ and $k = 1$ (i.e., allowing at most 1 out of 4 predictions to differ from the majority), the corresponding threshold is calculated as: $\delta = -\left( \frac{N-k}{N} \log_2 \frac{N-k}{N} + \frac{k}{N} \log_2 \frac{k}{N} \right) \approx 0.811$. As shown in the table below, setting the threshold too high may cause the LLM to terminate early before identifying the correct answer, thus harming accuracy. Conversely, setting it too low may lead to excessive reasoning and the introduction of noise, which can also degrade performance.
Therefore, we typically set $k$ between 0 and 2, and use it to compute the semantic entropy decision threshold $\delta$. As $N$ increases, we appropriately raise $k$ to enhance the robustness of the decision-making process.

| Allowed inconsistencies $k$ / SC samples $N$ | $N=2$ | $N=3$ | $N=4$ | $N=5$ |
|-|-|-|-|-|
| $k=0$ | 70.9% | 70.1% | 71.3% | 72.4% |
| $k=1$ | -- | 68.2% | 72.8% | 73.6% |
| $k=2$ | -- | -- | 68.7% | 71.7% |
| $k=3$ | -- | -- | -- | 69.3% |

> **Q6.** Does the pipeline generalize across tasks, or must the optimized CoT be re-tuned for each one?

**A6.** Our framework includes a reasoning pipeline composed of the TSP and ARI modules. While ARI is task-agnostic and requires no adaptation, TSP involves optimization on a single source task. Once optimized, the entire pipeline can be directly applied to other tasks of a similar nature without re-tuning. This demonstrates the generalizability of our pipeline across tasks.

| Source / Target | AQuA | AddSub | AMC2023 | Gaokao |
|-|-|-|-|-|
| AQuA | 70.1% | 85.6% | 20.0% | 40.2% |
| AddSub | 67.7% | 88.4% | 20.0% | 41.3% |
| AMC2023 | 68.1% | 86.3% | 25.0% | 41.9% |
| Gaokao | 66.9% | 84.3% | 17.5% | 44.2% |
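The threshold rule in A5 above can be written out as a small sketch. We assume the base-2 logarithm, which reproduces the quoted $\delta \approx 0.811$ for $N=4$, $k=1$; the function name is ours.

```python
import math

def entropy_threshold(n, k):
    """Decision threshold delta for n self-consistency samples when up
    to k predictions may deviate from the majority class."""
    if k == 0:
        return 0.0  # unanimity required: stop only at zero entropy
    p_major, p_minor = (n - k) / n, k / n
    return -(p_major * math.log2(p_major) + p_minor * math.log2(p_minor))

print(round(entropy_threshold(4, 1), 3))  # 0.811, as in the AQuA example
```

As the table above suggests, a larger $k$ yields a looser (higher-entropy) threshold, so raising $k$ with $N$ keeps the stopping rule's strictness roughly comparable.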
Summary: This paper explores the conceptual similarity between CoT reasoning and self-training, highlighting their shared goal of minimizing predictive uncertainty by iteratively leveraging model-generated information. Based on this insight, the authors propose a novel CoT framework integrating a Task-Specific Prompt module and an Adaptive Reasoning Iteration module. Experiments on 10 reasoning datasets demonstrate that the proposed approach significantly outperforms baseline CoT methods in both reasoning performance and computational efficiency, with particularly strong results on arithmetic datasets. The main contributions include establishing a connection between CoT reasoning and self-training through entropy minimization, introducing the TSP and ARI modules, and achieving notable improvements in complex reasoning tasks with strong generalization and efficiency.

Claims And Evidence: The proposed method is generally clear and compelling. The authors support their main contributions with experimental results on three types of reasoning tasks and corresponding theoretical analysis.

Methods And Evaluation Criteria: The proposed method is well-founded and suitable for the research problem. The evaluation criteria align with existing benchmarks, and the experimental setup is reasonable.

Theoretical Claims: The theoretical arguments presented in the paper have been carefully examined and are generally correct.

Experimental Designs Or Analyses: The experimental design effectively evaluates the proposed method. The results show that the method achieves strong performance on multiple benchmark datasets, and both experiments with fixed iteration counts and comparison experiments support the authors' main conclusions.

Supplementary Material: The supplementary materials have been reviewed, which include theoretical proofs, additional experimental examples, and execution code.
Relation To Broader Scientific Literature: The paper is consistent with research in the related field and provides new insights into the essence of chain-of-thought.

Essential References Not Discussed: None

Other Strengths And Weaknesses:

Strengths:
1. The paper presents an interesting and insightful perspective by drawing an analogy between CoT reasoning and self-training, especially the pseudo-labeling strategy, highlighting their common goal of "iteratively reducing predictive uncertainty."
2. The two modules proposed in the paper (TSP and ARI) can serve as plug-and-play enhancement components.
3. The writing is clear. The authors provide theoretical proofs, algorithm pseudocode, and examples of reasoning processes in the appendix.
4. The paper selects datasets that cover various types of reasoning, demonstrating the method's versatility.

Weaknesses:
1. Although the authors propose a method for automatically searching for the "optimal prompt," they do not provide specific examples in the experiments, nor do they analyze why it outperforms general prompts (e.g., "let's think step by step").
2. Some technical details lack contextual background. For instance, the "Jaccard Index" is mentioned directly in the method section, but its role and the rationale for choosing this metric are not explained.
3. There is no sensitivity analysis of the maximum number of iterations.

Other Comments Or Suggestions: The symbol definitions in the formulas are not clear.

Questions For Authors: Why does the proposed method perform better on arithmetic datasets than on commonsense datasets?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the valuable feedback and encouraging comments. We are motivated by the suggestions and have addressed each concern in detail as follows:

> **Q1.** Although the authors propose a method for automatically searching for the "optimal prompt," they do not provide specific examples in the experiments, nor do they analyze why it outperforms general prompts (e.g., "let's think step by step").

**A1.** Our proposed TSP aims to automatically generate initial reasoning prompts that better align with the specific task’s semantic distribution, serving as a replacement for generic prompts. Although generic prompts exhibit a certain degree of adaptability across most tasks, their expressions are relatively generalized and often fail to sufficiently guide LLMs to focus on the critical reasoning paths required by the task. The TSP module leverages real task samples to guide the LLM in generating multiple candidate prompts, then selects the optimal one based on the evaluation metric of average semantic entropy. This enhances the quality of initial reasoning and reduces the number of iterations. Below are examples of optimal prompts automatically searched on different datasets, showing how they further integrate task-specific features on top of the generic template:

- AQuA: "Let's think step by step, how to break down the mathematical operations involved in each problem and identify the key concepts to solve them accurately."
- AddSub: "Let's think step by step, how to efficiently solve these word problems involving basic arithmetic operations and simple logic."

> **Q2.** Some technical details lack contextual background. For instance, the "Jaccard Index" is mentioned directly in the method section, but its role or the rationale for choosing this metric is not explained.
**A2.** We introduce the Jaccard Index in the method section to measure the lexical-level similarity between reasoning results of adjacent iterations, aiming to determine whether the current iteration has generated sufficiently diverse reasoning paths, thereby avoiding repetitive or redundant reasoning by the model. The Jaccard Index is chosen for its simplicity and efficiency in evaluating the overlap between two sets, making it particularly suitable for quickly assessing word-level differences between texts and effectively capturing trends in reasoning diversity. We will supplement the relevant background explanation in the paper to enhance readability and completeness.

> **Q3.** There is no sensitivity analysis of the maximum number of iterations.

**A3.** We have added a sensitivity analysis of the maximum number of iterations $T$, with the experimental results shown in the table below. It can be observed that the model achieves a significant accuracy improvement at $T=3$, and the performance tends to saturate or even fluctuate slightly after $T=4$, indicating that a relatively small number of iterations is sufficient to achieve near-optimal reasoning performance. Taking the AQuA and AddSub datasets as examples, the accuracy improvement slows down notably after $T=3$, suggesting limited gains from further iterations. Combined with our proposed semantic entropy-based early stopping mechanism, the system can dynamically decide whether to continue reasoning for each sample, effectively reducing unnecessary computational overhead while maintaining performance.

| Dataset | $T=1$ | $T=2$ | $T=3$ | $T=4$ | $T=5$ |
|-|-|-|-|-|-|
| AQuA | 59.5% | 70.8% | 70.1% | 71.7% | 71.3% |
| AddSub | 80.5% | 86.1% | 88.4% | 88.8% | 88.6% |

> **Q4.** The symbol definitions in the formulas are not clear.

**A4.** Thank you for the comment. We have carefully double-checked the paper and revised the notations.
In the final version, we will ensure that all symbols are clearly defined at first use.

> **Q5.** Why does the proposed method perform better on arithmetic datasets than on commonsense datasets?

**A5.** We believe this difference primarily stems from the varying degrees to which different task types rely on the capabilities of language models. Arithmetic datasets emphasize strict logical steps and step-by-step calculation processes, which closely align with our method’s design that involves multi-turn reasoning and explicit intermediate steps. The model can iteratively approach the correct answer, thus benefiting significantly. In contrast, commonsense tasks rely more on the world knowledge and language co-occurrence patterns acquired during pretraining. These tasks have a more open reasoning space, more diverse answer forms, and often lack a clear logical chain. As a result, multi-turn reasoning offers limited gains for such tasks and may even introduce unnecessary redundant information that affects the final judgment.
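The Jaccard Index described in A2 of this rebuttal can be sketched at the word level as follows. The whitespace tokenization and function name are our simplifying assumptions, not the authors' implementation.

```python
def jaccard_index(text_a: str, text_b: str) -> float:
    """Jaccard similarity |A ∩ B| / |A ∪ B| between the word sets of
    two reasoning outputs from adjacent iterations."""
    a = set(text_a.lower().split())
    b = set(text_b.lower().split())
    if not a and not b:
        return 1.0  # two empty texts are treated as identical
    return len(a & b) / len(a | b)

prev_step = "add 3 and 5 to get 8"
curr_step = "add 3 and 5 then subtract 2 to get 6"
print(round(jaccard_index(prev_step, curr_step), 2))  # 0.55
```

A value close to 1 signals near-duplicate reasoning between consecutive iterations, which is exactly the redundancy the ARI module's exploratory prompt is meant to break.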
Summary: This paper discusses the similarities between Chain-of-Thought (CoT) reasoning and self-training, both of which iteratively leverage model-generated information to reduce predictive uncertainty. In particular, this paper introduces a novel CoT framework with two main ingredients: (i) a task-specific prompt module designed to optimize the initial reasoning process by searching for prompts that minimize uncertainty, and (ii) an adaptive reasoning iteration module that dynamically refines the reasoning process to address over-reasoning and high similarity between consecutive iterations. Based on theoretical results from self-training, the authors link entropy variations and semantic entropy to CoT. Experimental results demonstrate the effectiveness of the proposed framework in improving reasoning performance and computational efficiency across several datasets.

Claims And Evidence: The paper claims that CoT reasoning shares the core objective of uncertainty reduction with self-training and that their proposed framework, inspired by this analogy, improves CoT performance. While the empirical evidence presented in the experiments generally supports the performance improvements of the proposed framework, the theoretical link between self-training's entropy variation and CoT reasoning is more conceptual and analogical than rigorously derived. The claim that semantic entropy is an effective metric for guiding CoT iterations and prompt selection is supported by experimental results, but lacks direct comparative evidence against more conventional prompt selection methods. The justification for the specific design choices, such as the exploratory prompt and the adaptive iteration strategy, relies more on intuitive reasoning and analogy to self-training principles than on strong theoretical backing specific to CoT.
Methods And Evaluation Criteria: The proposed methods, including the task-specific prompt module using semantic entropy for prompt selection and the adaptive reasoning iteration module with entropy-based stopping and exploratory prompts, are novel and relevant to addressing limitations in traditional CoT approaches. The use of semantic entropy as a metric for uncertainty in CoT reasoning is an interesting approach. The evaluation criteria, using standard benchmark datasets across arithmetic, commonsense, and symbolic reasoning, are generally appropriate for evaluating CoT methods. However, the benchmarks used are not the most challenging and cutting-edge datasets in these domains.

Theoretical Claims: The paper includes theoretical claims regarding entropy variation in self-training (Lemma 2.1 and Theorem 2.2). While I did not thoroughly check the correctness of the proofs, the paper primarily relies on referencing existing theoretical results in self-training. It's important to note that these theoretical claims directly apply to self-training and are used to motivate and inspire the design of the CoT framework, rather than providing a rigorous theoretical foundation for the proposed CoT method itself. The link between self-training's entropy theorems and the proposed CoT framework is thus analogical and motivational, not deductively derived.

Experimental Designs Or Analyses: The experimental designs and analyses are generally sound and follow standard practices for evaluating CoT methods. The ablation studies effectively demonstrate the contribution of the task-specific prompt and adaptive reasoning iteration modules. However, a notable weakness is the lack of ablation studies on the number of self-consistency samples (N), which is a crucial parameter influencing semantic entropy calculation and the overall framework's effectiveness.
Furthermore, the experimental section would be strengthened by including a direct comparison of the proposed semantic entropy-based prompt selection method with more conventional prompt selection techniques to justify its added complexity and computational cost.

Supplementary Material: I reviewed the main PDF paper, including the algorithm descriptions, experimental results, and the appendix containing theoretical details, but did not fully check the whole proof of the theoretical results. I did not specifically review separate supplementary material beyond what was included in the PDF.

Relation To Broader Scientific Literature: The paper builds upon the established literature of chain-of-thought reasoning and connects it to the well-studied field of self-training. The key contribution lies in applying the concept of uncertainty reduction, inspired by self-training, to enhance CoT reasoning through semantic entropy-guided prompt selection and adaptive iteration. The use of semantic entropy as a metric for uncertainty in CoT and the proposed adaptive iteration strategy are novel contributions to the CoT literature.

Essential References Not Discussed: To my best knowledge, this paper includes most of the essential references in the Introduction section. However, in order to improve the readability, the paper should improve the discussion of essential references. The lack of a "Related Work" section leads to insufficient discussion of the proposed work in the context of existing related ones. For example, the paper should discuss and ideally compare against more conventional prompt selection methods used in the CoT literature, such as those relying on simpler heuristics or ensemble-based prompt selection. Additionally, while the paper cites some popular CoT methods as baselines, including more recent and stronger CoT baselines in the experimental comparison would provide a more comprehensive evaluation of the proposed framework's advancements.
Other Strengths And Weaknesses:

Strengths:
* Originality: The paper presents a novel perspective on CoT reasoning by drawing inspiration from self-training and introducing semantic entropy as a guiding metric.
* Significance: The proposed framework offers a potentially effective approach to improve CoT performance and address limitations like over-reasoning and lack of diversity in reasoning paths.
* Clarity: The paper is generally well-written and clearly explains the proposed framework and experimental setup.
* Empirical Validation: The experimental results demonstrate promising performance improvements across various datasets.

Weaknesses:
* Conceptual Link Strength: The analogy between self-training and CoT, while interesting, could be theoretically strengthened. The justification for transferring entropy-based principles from self-training to CoT reasoning is not fully rigorous.
* Computational Cost: The computational cost of task-specific prompt selection using semantic entropy and the per-iteration cost of adaptive reasoning could be a concern, and needs more thorough justification and comparison to simpler alternatives.
* Limited Baselines/Benchmarks: The choice of baselines and benchmarks could be more comprehensive and challenging to fully showcase the advantages of the proposed framework.
* Theoretical Justification for Exploratory Prompt: The theoretical grounding for the specific exploratory prompt design in adaptive iteration could be more explicitly linked to the identified limitations of CoT.
* Unclear Time Efficiency Benefit: The claimed time efficiency gains of adaptive iteration compared to fixed iteration are not entirely clear from the provided analysis.

Other Comments Or Suggestions:
* Consider strengthening the theoretical justification for applying self-training-inspired entropy principles to CoT reasoning.
* Include experiments comparing the semantic entropy-based prompt selection with simpler prompt selection methods.
* Perform ablation studies on the number of self-consistency samples (N) to assess its impact.
* Evaluate the framework on more challenging and cutting-edge benchmarks, especially in mathematical reasoning.
* Expand the baseline comparison to include more recent and stronger CoT methods.
* Clarify the time cost analysis and the specific mechanisms through which adaptive iteration achieves time efficiency gains.

Questions For Authors:

1. Theoretical Justification of Self-Training Analogy: While the analogy to self-training is interesting, could the authors elaborate on the theoretical justification for directly applying entropy-based principles from self-training (which involves parameter updates) to CoT reasoning (which manipulates inputs without changing model weights)? How can Theorem 2.2, derived for self-training, be rigorously argued to motivate the design choices in the proposed CoT framework? A more detailed explanation of this theoretical bridge would strengthen the paper.
2. Comparison to Conventional Prompt Selection: The task-specific prompt module relies on computationally expensive semantic entropy calculations. Could the authors include experiments comparing this approach to more conventional and computationally cheaper prompt selection methods (e.g., simpler heuristics, frequency-based methods) in terms of both performance and computational efficiency? Demonstrating a clear advantage over simpler methods would better justify the added complexity.
3. Theoretical Basis for Exploratory Prompt: The exploratory prompt "reevaluate from alternative perspectives" is designed to encourage diversity. Could the authors provide a more explicit theoretical justification for why this specific prompt design is expected to effectively promote exploration and reduce semantic entropy in subsequent CoT iterations, especially in relation to the issue of high similarity between consecutive iterations identified in Section 2.2?
4. Ablation Study on Self-Consistency Sampling (N): The number of self-consistency samples (N) is a crucial parameter for semantic entropy estimation. Could the authors include ablation studies varying N to analyze its impact on the framework's performance and robustness? Understanding the sensitivity to N is important for practical application and evaluating the reliability of the semantic entropy metric.
5. Rationale for Baseline Choices and Benchmark Complexity: What was the rationale behind choosing Contrastive-CoT and RE2 as the primary baselines? Were more recent and potentially stronger CoT methods considered? Furthermore, while the benchmarks are standard, could the authors discuss why these benchmarks were chosen and whether evaluating on more challenging, cutting-edge datasets, particularly in mathematical reasoning, could further validate the framework's capabilities and significance?
6. Clarification of Time Efficiency in Adaptive Iteration: Figure 5b suggests time efficiency gains with adaptive iteration. Could the authors clarify how the adaptive iteration method achieves this efficiency gain, given that each iteration involves two LLM calls? Is the time reduction primarily due to the early stopping mechanism when semantic entropy is low, or are there other factors contributing to the improved time efficiency compared to fixed iteration? A clearer explanation of the trade-offs between per-iteration cost and total iterations in adaptive reasoning would be valuable.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for the valuable feedback and encouraging comments. We are motivated by the suggestions and have addressed each concern in detail as follows:

> **Q1.** The paper lacks sensitivity analysis for the sampling number $N$.

**A1.** We evaluated the impact of $N$ on four datasets. Accuracy improves with larger $N$, especially on harder tasks, but saturates after $N=3$, suggesting small $N$ is sufficient for stable semantic entropy estimation.

| Dataset | $N=1$ | $N=2$ | $N=3$ | $N=4$ | $N=5$ |
|-|-|-|-|-|-|
| AQuA | 53.9% | 70.9% | 70.1% | 71.3% | 72.4% |
| AddSub | 73.9% | 85.1% | 88.4% | 88.9% | 90.4% |
| AMC2023 | 12.5% | 20.0% | 25.0% | 27.5% | 30.0% |
| Gaokao | 32.8% | 41.3% | 44.2% | 44.7% | 45.3% |

> **Q2.** TSP incurs extra cost. Can the authors empirically show its advantage in both performance and efficiency over traditional prompt selection?

**A2.** We compared prompt selection methods, including traditional ensemble prompting (generates multiple prompts and aggregates outputs) and iterative search (automatically generates and filters prompts using Monte Carlo search). These were compared with the TSP module based on semantic entropy. TSP achieves the highest accuracy across all datasets while significantly reducing total token consumption (prompt search $S_{\text{Token}}$ + inference $T_{\text{Token}}$). Compared to the search method, TSP balances performance and efficiency better.

| Method | AQuA | $S_{\text{Token}}$+$T_{\text{Token}}$ | AddSub | $S_{\text{Token}}$+$T_{\text{Token}}$ | AMC2023 | $S_{\text{Token}}$+$T_{\text{Token}}$ | Gaokao | $S_{\text{Token}}$+$T_{\text{Token}}$ |
|-|-|-|-|-|-|-|-|-|
| Ensemble | 59.7% | 1,703+5,957 | 80.4% | 790+2,196 | 12.5% | 2,089+7,749 | 33.6% | 2,234+7,310 |
| Search | 64.2% | 1,932+53,581 | 84.4% | 918+30,732 | 12.5% | 2,672+67,642 | 37.6% | 2,906+81,109 |
| TSP | 66.1% | 2,050+21,570 | 85.8% | 1,005+10,281 | 17.5% | 2,261+24,506 | 41.9% | 2,554+23,716 |

> **Q3.** How can the entropy principle from self-training, which involves parameter updates, be applied to CoT that only modifies inputs?
**A3.** We attempt to draw an analogy between the principle of self-training and CoT. Although they differ in operational mechanisms, with self-training relying on parameter updates and CoT guiding reasoning through input manipulation, we argue that both exhibit structural consistency in terms of information compression and entropy convergence. In CoT, while model parameters remain unchanged, the iterative optimization of the reasoning path set $R_t$ gradually leads to semantic-level distributional convergence over the reasoning space $\mathcal{R}$. As iterations progress, the proportion of reasoning paths containing correct subpaths $R'$ increases, and these paths tend to converge toward the correct answer cluster. This process results in a gradual increase in the posterior probability of the correct answer, reflecting a reduction in semantic entropy that parallels the entropy minimization objective in self-training.

> **Q4.** How does "re-evaluate from another perspective" reduce reasoning similarity and entropy?

**A4.** The exploratory prompt encourages the LLM to break away from the current local reasoning trajectory and actively explore subspaces within the generation space $\mathcal{R} \times \mathcal{A}$ that are semantically orthogonal to the original path. Such orthogonality is reinforced through key semantic constraints embedded within the prompt ("alternative perspectives"), prompting substantial strategic changes in the reasoning process. This process can further converge the answer space, thereby compressing the entropy of the overall candidate answer set.

> **Q5.** Include stronger CoT baselines, harder datasets, and overhead comparison.

**A5.** We have newly incorporated two recent and stronger CoT methods, Active [1] and Nash [2], along with more challenging datasets, AMC2023 and Gaokao. Our method achieves higher accuracy while maintaining reasonable token consumption.
| Method | AQuA | Token | AddSub | Token | AMC2023 | Token | Gaokao | Token |
|-|-|-|-|-|-|-|-|-|
| Zero | 39.8% | 614 | 57.5% | 117 | 2.5% | 841 | 31.3% | 508 |
| SC | 63.0% | 1,875 | 81.5% | 743 | 15.0% | 2,950 | 35.9% | 3,357 |
| Active | 66.1% | 1,649 | 86.3% | 542 | 12.5% | 2,421 | 40.5% | 3,409 |
| Nash | 68.9% | 6,598 | 88.9% | 4,431 | 25.0% | 12,940 | 43.0% | 8,705 |
| Our | 70.1% | 2,884 | 89.1% | 1,028 | 27.5% | 7,604 | 44.2% | 4,069 |

[1] Diao, S., et al. Active Prompting with Chain-of-Thought for LLMs.
[2] Zhang, Z., et al. Multi-Path Inference with Preference Equilibrium.

> **Q6.** Adaptive iteration improves efficiency, but the underlying mechanism remains unclear.

**A6.** Adaptive iteration enhances efficiency via early stopping based on semantic entropy. When the LLM shows high confidence, it dynamically halts to save computation. As shown in the table below (each column header gives the maximum $T$, the resulting average number of iterations, and the fraction of samples stopped early; entries are token counts), our method averages 2.1 iterations (vs. a fixed 3), with 55.9% of samples stopped early, reducing token usage while maintaining performance.

| Max $T$ / avg. iters (early-stop rate) | 1 / 1.0 (0.0%) | 2 / 1.6 (40.2%) | 3 / 2.1 (55.9%) | 4 / 2.5 (60.1%) | 5 / 3.1 (66.5%) |
|-|-|-|-|-|-|
| Fixed Iteration | 2,154 | 4,629 | 6,509 | 8,717 | 11,232 |
| Adaptive Iteration | 2,154 | 3,779 | 4,591 | 5,458 | 6,609 |

---

Rebuttal Comment 1.1:

Comment: Thanks for the authors' responses. This clarifies most of my concerns and is very helpful. I'll maintain my scores, leaning towards accept.
Putnam-AXIOM: A Functional & Static Benchmark for Measuring Higher Level Mathematical Reasoning in LLMs
Accept (poster)
Summary: This paper presents Putnam-AXIOM, a benchmark of 522 problems from the Putnam competition along with their ground-truth solutions. The paper also proposes manual modifications to the original problems to make evaluation less ambiguous, as well as manual variable and constant substitutions for 100 problems (Putnam-AXIOM Variation). The paper evaluates both open-source and proprietary LLMs on the original and varied problems and shows that LLMs achieve higher accuracy on the original problems, suggesting that data contamination exists.

Claims And Evidence: Some claims are inaccurate or exaggerated:

1. > On Lines 103-105 of Related Work: "Our Putnam-AXIOM dataset addresses these limitations by offering challenging Putnam problems with fully-written solutions and easily evaluable answers."

   I am under the impression that Putnam-AXIOM did not address the costly human-evaluation issue for symbolic and proof-based questions. Instead, it tries to avoid this problem by keeping only the problems that are not proof-based.

2. > On Lines 323-324 of Section 4.1: "Looking at the numbers highlights significant accuracy declines across models: DeepSeek-R1-Qwen-32B shows the steepest drop at 37.5%, followed by GPT-4o at 36% and o1-preview at 17%."

   These numbers are misleading; it seems that the authors are subtracting the minimum of the lower accuracy type from the maximum of the higher accuracy type. From Table 2, it is clear that the accuracy drop is much smaller than what is reported in the main text.

Methods And Evaluation Criteria: The methods and evaluation criteria are generally sound.

Theoretical Claims: N/A.

Experimental Designs Or Analyses: The experimental designs and analyses are generally sound.

Supplementary Material: Yes, I checked the tables that are referenced in the main text.

Relation To Broader Scientific Literature: The findings of the paper are interesting. However, the dataset proposed here has limited novelty due to existing work on PutnamBench.
Essential References Not Discussed: N/A.

Other Strengths And Weaknesses:

**Strengths**

1. The paper motivates the problem well and provides a comprehensive evaluation of the benchmark with many models at different tiers of capability. The analysis also makes sense.
2. The modified boxing approach is very interesting, and I believe this can be a promising way to sanitize benchmarks.

**Weaknesses**

1. The approach to creating the modified boxing and variation dataset, especially the constant changes, seems to be largely a manual effort. The authors also mention that not all problems are suitable for such variation. This makes the approach less generally applicable and less novel.
2. Although TFA seems to correlate well with boxed accuracy, from Table 1 and Figure 4 it is clear that it cannot be used as a proxy for model selection. That is, the model with the highest TFA is intuitively not a strong model (Gemma-7B-Base), and boxed accuracy correlates better with general perception. This makes TFA less appealing for practical use.
3. The paper has many presentation flaws that significantly affect understanding of the results. See questions below.

Overall, I believe the main novelty of the paper lies in the strategies used for creating the variation dataset and the alternative evaluation metric. However, the strategies appear to be manual (not very scalable without introducing errors), and the evaluation metric seems to be noisy.

Other Comments Or Suggestions: N/A.

Questions For Authors:

1. The sentence below is confusing and seems to come from nowhere. Can the authors explain what it means? Also, does this modification affect the correlation performance of TFA?

   > Further, we standardized the placement of boxed answers by relocating them to the end of each solution string to minimize unintended emergent behaviors leading to evaluations that are less "harsh" or prone to penalizing the model for formatting deviations rather than actual comprehension.

2.
In Table 1, NuminaMath-7B-Base has an accuracy of 10.34%. However, in Figure 4, it is shown to be less than 5% with the pentagon symbol. Can the authors explain the inconsistency?

3. PutnamBench has 640 problems whereas this dataset has 522. What is causing the discrepancy, and is it because proof-based problems were excluded?

4. In Figure 3, it seems that the original accuracy is generally lower than the variation accuracy. However, the text suggests original accuracy is higher. Am I missing something?

## Update After Rebuttal

After reading the authors' rebuttal and the other reviews, I have decided to increase my rating. As noted by several reviewers, there are still issues with the writing and presentation, including errors and occasional misrepresentations. However, I believe that the paper's contributions outweigh these weaknesses.

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for their thoughtful and detailed response. We deeply appreciate the time and effort you invested in providing such valuable feedback.

> I am under the impression … not proof-based.

We'd like to clarify that Putnam-AXIOM explicitly addresses the challenge of evaluating symbolic and proof-based questions through several carefully designed methodological choices. We retained numerous proof-based questions by applying our Modified Boxing technique (detailed in Section 3.1), which transforms these questions to produce unambiguous final answers while preserving their mathematical complexity. This approach allows for automatic evaluation without sacrificing the deep reasoning required in the original problems. Our methodology supports a wide range of answer formats through our equivalence function, similar to the approach used in MATH [1]. For Putnam problems that involve complicated final answers, we added minimal next steps that maintain "the same mathematical insights and problem-solving approaches required by the original problems" while ensuring solutions converge to single, automatically evaluable answers. We deliberately excluded binary-answer questions (~10% of Putnam problems) to eliminate success through random guessing.

[1] Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., ... & Steinhardt, J. (2021). Measuring mathematical problem solving with the MATH dataset. arXiv preprint arXiv:2103.03874.

> These numbers are misleading … main text.

Thank you for pointing out that mistake; the corrected values should be that "DeepSeek-R1-Qwen-32B shows the steepest drop at 27.3%, followed by GPT-4o at 26.7% and o1-preview at 15.6%." This correction will be reflected in the final version of our paper. However, we emphasize that our variations are highly effective at mitigating contamination.
During the rebuttal, we fine-tuned DeepSeek-R1 Distill-Qwen-7B (chosen as the best-performing model in its size class) on the full Putnam-AXIOM Original dataset for 5 epochs using full-parameter supervised fine-tuning (SFT), intended to mimic or exaggerate the effects of data contamination. The baseline model achieved 12% accuracy on variations versus 23% accuracy on the corresponding originals. After fine-tuning, accuracy increased to 33% on variations versus 80% on the corresponding originals. This demonstrates that while contamination allows near-saturation on original problems (+57% accuracy), variations remain challenging (+21% accuracy), confirming their effectiveness against contamination. We plan to extend these experiments to additional models.

> However, the dataset proposed here has limited novelty due to existing work on PutnamBench.

Thank you for pointing out this possible confusion. As mentioned in our introduction, Putnam-AXIOM focuses on natural-language problems with final-answer verification, functional variations to address contamination, and proxy metrics for reasoning. PutnamBench translates Putnam questions into formal statements for theorem-proving languages. Our datasets serve distinct purposes despite being sourced from the same competition.

> The approach to create … less novel.

Thank you for pointing out this concern. We agree that manual effort is required to create variations. While we created a script using GPT-4o to assist in variation creation, significant manual verification is still required. We note that most difficult benchmarks (like MATH) also require substantial manual effort.

> Although TFA seems … practical use.

Thank you for pointing this out. We acknowledge that boxed accuracy correlates better with general perception. However, per Goodhart's Law, optimizing too much for any proxy diminishes its value. We aim to create a fast, inexpensive proxy for reasoning, not necessarily the optimal metric.
> The below sentence is confusing and seems to come from nowhere … actual comprehension.

We standardized the placement of boxed answers at the end of solution strings based on findings from Chain-of-Thought prompting, where models provide more coherent solutions when reasoning through the entire problem before giving the final answer. For TFA, this means models are more penalized for formatting deviations and less penalized for solutions with reasoning similar to benchmark solutions.

> PutnamBench has 640 problems … were excluded?

This discrepancy is due to the fact that Putnam-AXIOM ultimately serves a different purpose than PutnamBench: we selected problems from the Putnam based on their ability to contribute a final, boxed answer for our automatic evaluation process. PutnamBench also pruned ~60% of Putnam problems. We excluded binary-answer questions but included many proof-based questions by rewording them to produce evaluable final answers while preserving difficulty.

> In Figure 3 … missing something?

Thank you for pointing this out. The labels for Figure 3 were indeed mistakenly swapped. We will correct this in the final version.
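As an aside, automated boxed-answer grading of the kind discussed in this rebuttal can be sketched in a few lines. This is a hypothetical minimal matcher of our own, not the paper's actual equivalence function; a real grader would normalize far more LaTeX variants:

```python
import re

def extract_boxed(solution):
    """Return the contents of the last \\boxed{...} in a solution
    string, tracking brace depth so nested braces are kept."""
    contents = []
    for m in re.finditer(r"\\boxed\{", solution):
        depth, i = 1, m.end()
        while i < len(solution) and depth > 0:
            depth += {"{": 1, "}": -1}.get(solution[i], 0)
            i += 1
        contents.append(solution[m.end():i - 1])
    return contents[-1] if contents else None

def answers_match(pred, gold):
    """Crude equivalence check: compare after stripping whitespace
    and purely cosmetic LaTeX sizing commands."""
    norm = lambda s: re.sub(r"\s+|\\left|\\right", "", s)
    return norm(pred) == norm(gold)
```

Taking the last box (rather than the first) reflects the standardization of placing the boxed answer at the end of the solution string.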
Summary: This paper introduces a new mathematical benchmark made of 522 questions from the William Lowell Putnam math competition from 1938 to 2023, among which 100 can be infinitely modified by changing variable names (100 of them) and constant values (37/100 of them) – called the Variation split. The paper also presents teacher-forced accuracy (TFA) as an additional evaluation metric, to give a more complete assessment of LLMs' reasoning abilities. Experiments show that current state-of-the-art models struggle to achieve good performance, with OpenAI o1-preview achieving ±42% accuracy. Experiments also show that models evaluated on the Variation split are weaker than when evaluated on the corresponding 100 original questions (suggesting some data was present during pre-training).

## update after rebuttal

Claims And Evidence: Claims are somewhat supported by evidence. (1) The claim that TFA provides a more complete assessment of LLMs' reasoning abilities compared to final-answer accuracy makes sense intuitively but is not supported by strong evidence (see explanation in the Experimental Designs Or Analyses section). (2) The claim that changing variable names, constant values, and turns of phrase will tackle data contamination is not convincing (see explanation in the Other Strengths And Weaknesses section).

Methods And Evaluation Criteria: Pseudo-metrics (teacher-forced, and ROSCOE) are evaluated based on their correlation with final-answer accuracy. They should be evaluated based on other metrics that directly inspect the reasoning ability of LLMs (either human or LLM-as-a-judge labels).

Theoretical Claims: No theoretical claims found.

Experimental Designs Or Analyses:

1. The motivation for using pseudo-metrics like Teacher Forced & ROSCOE is to evaluate the models on their reasoning skills, because the final \boxed{} answer may not represent this well enough.
However, when evaluating these metrics, correlation with final-answer accuracy is used to select the top pseudo-metric. This is not the correct approach, as you already have a metric that correlates 100% of the time with accuracy: accuracy itself. The whole point of using proxy metrics is to evaluate **something else** than accuracy, i.e. the reasoning-chain logic. The correlation of these pseudo-metrics should be evaluated against human labels or a strong LLM-as-a-judge such as GPT-4o, not against accuracy.

2. The numbers in Table 1 do not match the scatter plot in Figure 4. For instance:
- Gemma-7B-Base in Table 1: (0.046 acc, **0.784 TFA**) – in Figure 4: (±0.043 acc, **<0.74 TFA**)
- DeepSeek-Math-7B-Base in Table 1: (**0.0402 acc**, 0.779 TFA) – in Figure 4: (±**0.06 acc**, ±0.772 TFA)
- Qwen2-Math-7B-Base in Table 1: (**0.0957 acc**, 0.770 TFA) – in Figure 4: (**<0.06 acc**, ±0.765 TFA)

In particular, the numbers in Table 1 do not seem to correlate as strongly as in Figure 4.

3. Section 4.3 says that the "_chosen metrics_" are reported in Table 3 (in the appendix), but Table 4 (in the appendix) shows that "Common Sense Error" correlates more strongly (0.368) than the chosen "Perplexity Step" (0.225). Why was "Perplexity Step" chosen instead of "Common Sense Error"? In addition, what does it mean to be "chosen" for a pseudo-metric if, at the end of Section 4.3, only the top one (TFA) is anyway "_selected_" to evaluate models (Table 1 & Figure 4)?

4. Experimental results should report models' performance on some of the most interesting proxy metrics. Note that "interesting" here should not mean high correlation with final-answer accuracy, but rather high correlation with human or LLM judgment of the reasoning-chain logic.
Intuitively, "Common Sense Error", "Hallucination", and "Repetition" or "Redundancy" would be things to avoid in a reasoning chain, and thus point towards interesting pseudo-metrics (to be validated by LLM- or human-as-a-judge experiments).

Supplementary Material: Yes, most sections. In particular, Table 3 is a pure subset of Table 4 without any additional insight or information. It is therefore unclear what message it provides compared to Table 4. Table 4 is probably enough.

Relation To Broader Scientific Literature: This is a dataset paper. It provides another math benchmark to evaluate LLMs. This is similar to MATH and GSM8k, but the problems are more challenging. In addition, this paper proposes a subset of questions that can be randomly altered to always provide brand-new evaluation questions.

Essential References Not Discussed: Nothing essential is missing.

Other Strengths And Weaknesses:

**Strengths:** This paper is well written and organized. It presents a novel set of problems to evaluate LLMs and, importantly, a mechanism to randomly change 100 questions so that they are always different, thus limiting data contamination during large pre-training runs.

**Weakness:** Modifying variables in the Variation split does not change the reasoning or final answer of a problem, thus having minimal impact on data contamination. Changing constants, on the other hand, can lead to different steps in the reasoning chain and different answers. Nonetheless, the strategy/logic used to solve the problem will be very similar if not the same. As such, the impact on data contamination is limited, as LLMs can memorize the "strategy" to solve these exact problems. It is better than LLMs memorizing the raw values and numbers of each variable, but still not as good as novel questions requiring new reasoning strategies. This is of course hard to achieve programmatically, but that is the only way to truly avoid data contamination during fine-tuning.
Out of the 100 problems selected for Variation, only 37 have different constants.

Other Comments Or Suggestions: The caption in Figure 3 may be inverted: right now it shows that the accuracies on the Original questions are **weaker** than on the Variation split.

Questions For Authors: What is the distribution of problem difficulty (1-6) in the original & variation splits?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for their thoughtful and detailed response.

> "This is not the correct approach as you already have a metric that will correlate 100% of the time with accuracy, it is accuracy itself. The whole point of using proxy metrics is to evaluate something else than accuracy, i.e. the reasoning chain logic. The correlation between these pseudo metrics should be evaluated against human labels or a strong LLM as a judge such as GPT4o, not against accuracy."

We thank the reviewer for highlighting the need to evaluate chain-of-thought independently from final-answer accuracy. While we chose a metric that correlates with accuracy (as better reasoning should yield correct answers), we recognize that teacher-forced accuracy (TFA) serves primarily as a fine-grained measure of how closely a model follows valid derivations token by token. We acknowledge that TFA doesn't necessarily equal reasoning quality without correlation to gold-standard step-by-step evaluations. We will supplement our analysis with human or GPT-4 judgments on intermediate steps to verify whether TFA truly tracks logical consistency rather than just final correctness. These results will be updated during the rebuttal period.

> "The claim that changing variable names, constant values, and turn of phrase will tackle data contamination is not convincing (see explanation in Other Strengths And Weaknesses section)."

We acknowledge that our functional variations don't completely solve data contamination but rather mitigate its effects. Our experimental results show variations consistently decrease accuracy scores across SOTA models, providing strong evidence for mitigation. During the rebuttal, we fine-tuned DeepSeek-R1 Distill-Qwen-7B (chosen as the best-performing model in its size class) on the full Putnam-AXIOM Original dataset for 5 epochs using full-parameter supervised fine-tuning (SFT), intended to mimic or exaggerate the effects of data contamination.
The baseline model achieved 12% accuracy on variations versus 23% accuracy on the corresponding originals. After fine-tuning, accuracy increased to 33% on variations versus 80% on the corresponding originals. This demonstrates that while contamination allows near-saturation on original problems (+57% accuracy), variations remain challenging (+21% accuracy), confirming their effectiveness against contamination. We plan to extend these experiments to additional models.

> The numbers in Table 1 do not match the scatter plot in Figure 4.

Thank you for pointing out this discrepancy. The TFA values in Table 1 and Figure 4 are from a previous dataset version (236 problems), while the Table 1 model accuracies are from the expanded dataset (522 problems). For proper comparison, we plotted TFA scores against accuracies using the 236-problem dataset. Due to resource constraints, we haven't yet run TFA on the expanded dataset, but we are working on this and will provide updated results during the rebuttal phase.

> Why was "Perplexity Step" chosen instead of "Common Sense Error"? In addition, what does it mean to be "chosen" for a pseudo metric if anyway at the end of section 4.3 only the top one (TFA) is "selected" to evaluate models (table 1 & figure 4).

We chose ROSCOE's Perplexity Step for comparison with our own perplexity proxy metric; however, we didn't make this selection clear in the paper. The experiments backing our rationale for choosing TFA over the baselines are detailed in Section 4.3 of our paper. We ran all the experiments to demonstrate our rationale for choosing TFA versus the *baselines*. TFA had the highest correlation and performed best on MATH, which is why we selected it.

> In particular, Table 3 is a pure subset of Table 4 without any other additional insight or information. It is therefore unclear what message it provides compared to Table 4. Table 4 is probably enough.
We appreciate your suggestion and acknowledge that Table 3 provides no additional insights beyond Table 4. While it was initially included to highlight the highest-scoring metrics, we are amenable to removing this redundant table in the revised version.

> The caption in Figure 3 may be inverted: right now it shows that the accuracies on the Original questions are weaker than on the Variation split.

Thank you for pointing this out; you are correct that the labels for Figure 3 were mistakenly swapped. We will be sure to include the corrected version of the figure in the finalized version.

> What is the distribution of problem difficulty (1-6) in the original & variation splits?

Average difficulty is comparable across the Original (2.46) and Variation (2.48) datasets, which strengthens our claim that accuracy differences stem from data contamination rather than problem difficulty. The distribution over difficulty levels 1-6 is:

|Difficulty|1|2|3|4|5|6|
|-|-|-|-|-|-|-|
|Original|68|109|97|93|75|80|
|Variation|8|27|24|11|10|20|
Summary: The paper introduces a new dataset comprised of mathematical problems that have appeared at the Putnam contest for university students. To enable automatic evaluation, the problems are selected or have been rephrased such that the solution can be checked automatically as a boxed answer. This rephrasing of the problems need not be semantics-preserving, but nonetheless aims to capture the main logical intuition of how the original problem is to be solved. Further, to combat contamination, the authors introduce a functional variation dataset where problems can be generated as (very) similar variants of the original problem, for instance by changing constants in the original problem. The paper evaluates a number of open and some proprietary models on these datasets, both in terms of scores and via other proxy metrics, in the process demonstrating that current models do not perform well on these tests.

Claims And Evidence: Yes, the paper substantiates the claims made in the introduction/abstract, but some questions remain open, which I elaborate on below.

Methods And Evaluation Criteria: The evaluation makes sense; however, one concern I have, which the authors can hopefully clarify, is: how exactly are the Putnam problems selected? Given the low scores of the models on problems that are very likely in the training data, this is particularly important. For instance, are some easy problems omitted? Or problems for which it is not trivial to convert them to one with a boxed answer? It is important to understand whether a systematic process is being followed here. Similarly for the functional variation dataset.

Theoretical Claims: This is an experimental paper which introduces a dataset; there are no theoretical claims.

Experimental Designs Or Analyses: As mentioned earlier, the selection of the problems and how that is done is a bit unclear, and it is important.
Further, I would have hoped that the authors had done a bit more on the contamination part, that is, checking for contamination of the selected Putnam problems.

Supplementary Material: Yes, I read the entire material.

Relation To Broader Scientific Literature: Obtaining meaningful mathematical benchmarks, ones on which models do not yet work well, is important. The work done to produce this dataset is valuable, and the dataset of Putnam problems can be useful for developing better models. Similarly for the functional variation dataset.

Essential References Not Discussed: Given the mentions of contamination, and to better motivate the functional variation dataset, I was hoping to see some experimental evaluation of contamination of the selected Putnam problems. For instance, using [1].

[1] ConStat: Performance-Based Contamination Detection in Large Language Models, NeurIPS 2024.

Other Strengths And Weaknesses: The datasets introduced by the paper are useful, and the paper is well written. The main concern I have, aside from the ones mentioned already, is the lack of technical depth. For instance, the rephrasings to get a boxed answer are quite straightforward; similarly for the functional dataset. It would have been interesting to see how other semantically equivalent text paraphrasings of the problems (there are many ways to do so) would have affected the results, and similarly for the contamination analysis I mentioned already. Ultimately, the fact that the models are not scoring well on (slight variants of) problems that are very likely in the dataset is interesting, and one would have hoped to see more evaluation here. Still, I believe the datasets can serve as a useful benchmark for future research.

Other Comments Or Suggestions:

- It was unclear early on whether the rephrasings to get a boxed answer are semantically equivalent; they are not, which may be good to state.
- Throughout the paper it was unclear how problems are selected, but this was mentioned already above.
- Line 313: it is unclear what "to generate evaluation data" means.
- Maybe I am misreading Fig. 3, but I would think Original should score higher in accuracy than Variation?

Questions For Authors: Already mentioned in the review.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank the reviewer for their thoughtful and detailed response. We deeply appreciate the time and effort you invested in providing such valuable feedback.

> ***how are the Putnam problems selected exactly?***

As outlined in Section 3.1, we selected problems based on two primary criteria:

1. Each problem must yield a definitive final answer that can be enclosed in a \boxed{} format, enabling fully automated evaluation without human grading. These adjustments preserve the original mathematical insights while eliminating the need for human evaluation.
2. In constructing the dataset, we deliberately excluded binary-answer questions (e.g., true/false or yes/no), as these allow models to succeed via random guessing. These accounted for around 10% of the Putnam problems.

This filtering reflects our effort to ensure both rigor and a meaningful signal in model performance. Our datasets excluded questions that were not conducive to these standards, similar to the PutnamBench paper. We curated problems to ensure broad coverage across topics (Algebra, Combinatorics, etc.) and difficulty levels (indicated by 1-6). Based on Putnam's own difficulty rating, we had 205 easy, 182 medium, and 135 hard problems. The Variation set is a curated subset of the main dataset, where expert mathematicians systematically alter surface features—like constants or variable names—without changing the problem's core reasoning. This helps prevent models from relying on memorization.

> ***"experimental evaluation on contamination of the selected Putnam problems."***

During the rebuttal, we fine-tuned DeepSeek-R1 Distill-Qwen-7B (chosen as the best-performing model in its size class) on the full Putnam-AXIOM Original dataset for 5 epochs using full-parameter supervised fine-tuning (SFT), intended to mimic or exaggerate the effects of data contamination. The baseline model achieved 12% accuracy on variations versus 23% accuracy on the corresponding originals.
After fine-tuning, accuracy increased to 33% on variations versus 80% on the corresponding originals. This demonstrates that while contamination allows near-saturation on original problems (+57% accuracy), variations remain challenging (+21% accuracy), confirming their effectiveness against contamination. We plan to extend these experiments to additional models.

> ***lack of technical depth...***

Our goal was not to complicate the problem statements themselves but rather to preserve the intrinsic difficulty of the Putnam examination while making the benchmark automatically gradable at scale. The "boxed-answer" approach, though seemingly straightforward, ensures that the dataset is both substantial and tractable for automated evaluation—an approach shared by similar math benchmarks.

> ***"how other semantically-equivalent text paraphrasings..."***

Thank you for pointing out this area of exploration. We tried paraphrasing the variation questions with the assistance of LLMs, meticulously verified by humans. This resulted in minimal changes in accuracy, so for simplicity we didn't report paraphrasing results. We will clarify this point in our revised submission.

> ***the fact that the models are not scoring well on...***

We compared model responses between the Original and Variation datasets. Both sets of responses adopted a "step by step" narrative style. However, original responses tend to follow a systematic, clear approach, while variation responses often reveal internal doubts, resulting in incomplete algebraic details or solutions. One such example is 1947 B4, where the question is:

> Given $P(z) = z^2 + az + b$, a quadratic polynomial of the complex variable $z$...Find the values of $a$ and $b$ and give their sum.

Original Solution (shortened for readability):

> Let's think step by step. Since $|P(z)| = 1$ for all $|z| = 1$, then for $z$ on the unit circle, $|z^2 + az + b| = 1$.
> If $z \neq 0$, then $|1 + a/z + b/z^2| = 1$, so $|Q(w)| = 1$ where $w = 1/z$ and $Q(w) = w^2 + a w + b$...Therefore, the sum $a + b = b - 1/b$. But since $|b|=1$, we have $b = e^{i\phi}$ for some angle $\phi$, so $a + b = e^{i\phi} - e^{-i\phi} = 2i \sin \phi$.

Variation Solution (shortened):

> Let's think step by step. Since $P(p)$ is a quadratic polynomial and $|P(p)| = 88$ for all $|p|=88$...Okay, so I've got this problem here where I need to find the angle A in triangle ABC. The conditions given are a bit specific: angle A is less than angle C...So, $180 - \alpha/2 > 90$, which gives, $\alpha/2$

The original solution directly addresses the problem, while the variation's solution "hallucinates" a triangle ABC scenario and omits key steps. We see three main causes:

1. LLMs often appear to recall answer patterns rather than genuinely solve problems.
2. Small changes to variable names or constants (e.g., from *x* to *p*) can disrupt these memorized patterns, suggesting a lack of true variable abstraction.
3. Unusual symbol choices or number pairings can misalign the prompt, triggering irrelevant or hallucinatory reasoning steps.
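To make the variation mechanism concrete, here is a minimal illustrative sketch of renaming a variable and perturbing a constant while leaving the solution logic untouched. The template, helper name, and perturbation range are entirely hypothetical (the actual pipeline was GPT-4o-assisted with expert verification):

```python
import random
import string

def make_variation(template, constants, rng):
    """Instantiate a problem template with a fresh variable name and
    perturbed constant values; the reasoning strategy is unchanged."""
    # Avoid easily confused or already-used letters.
    var = rng.choice([c for c in string.ascii_lowercase if c not in "eloiz"])
    values = {name: base + rng.randint(1, 99) for name, base in constants.items()}
    return template.format(var=var, **values)

template = ("Given P({var}) = {var}^2 + a{var} + b, a quadratic polynomial of "
            "the complex variable {var}, with |P({var})| = {r} for all "
            "|{var}| = {r}. Find the values of a and b and give their sum.")
print(make_variation(template, {"r": 1}, random.Random(0)))
```

Each seed yields a surface-distinct problem (e.g., $z \to p$, $1 \to 88$, as in the 1947 B4 example above) whose memorized surface form no longer matches the training data.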
Summary: Putnam-AXIOM is a new benchmark designed to assess higher-level mathematical reasoning in large language models (LLMs), using 522 challenging problems from the William Lowell Putnam Mathematical Competition. To address data contamination, the authors introduce functional variations of 100 problems by altering variables, constants, and phrasing, enabling infinitely many novel yet equally difficult problems. Evaluation shows that even top models like GPT-4o and o1-preview perform significantly worse on these variations, highlighting the limitations of current models. The benchmark also proposes new reasoning metrics like Teacher-Forced Accuracy (TFA) to go beyond final boxed answers and better capture reasoning depth.

Claims And Evidence: Yes. This paper mainly has two claims: 1. variations of the Putnam benchmark are likely to cause trouble for existing models (due to issues like data contamination); 2. the proposed teacher-forced accuracy (TFA) is better correlated with boxed accuracy than existing metrics like ROSCOE. To me, both claims have been empirically validated.

Methods And Evaluation Criteria: Yes. Putnam is a solid source of data for measuring mathematical reasoning. The effectiveness of the newly proposed TFA metric has also been validated on the widely used MATH dataset.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experimental designs look good to me. I especially appreciate the comprehensive ablation study in the appendix.

Supplementary Material: I checked the appendix but didn't manage to run the code.

Relation To Broader Scientific Literature: Both the new dataset and the automatic proxy metric offer valuable contributions to mathematical reasoning research, especially as many existing benchmarks have become saturated.
Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

- I'd love to see some discussion of the extent to which TFA—or any of the other proposed metrics—can address the issue of correct answers with flawed reasoning, which is notably common in leading models like o1.

Other Comments Or Suggestions:

- Figure 3 may have the labels for Original and Variation swapped. Based on my understanding and the data in Table 2, the blue intervals should correspond to the Variation set.

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and detailed review, particularly for recognizing that our work addresses an important gap by introducing a benchmark capable of measuring higher-level mathematical reasoning, and appreciating the thoroughness of our experimental designs and comprehensive ablation studies. Your positive feedback about the validity and utility of our claims and metrics is greatly valued. > ***Figure 3 may have the labels for Original and Variation swapped*** Thank you for pointing this out; you are correct that the labels for Figure 3 were mistakenly swapped. We will be sure to include the corrected version of the figure in our revised submission. > ***discussion on to what extent can TFA—or any of the other proposed metrics—address the issue of correct answers with flawed reasoning, which is notably common in leading models like o1*** We appreciate the reviewer’s interest in how Teacher-Forced Accuracy (TFA) handles scenarios in which the model’s final answer is correct despite a flawed chain of thought. TFA differs from standard auto-regressive (AR) generation because each predicted token is conditioned on the gold-standard solution, thus focusing on whether the model *recognizes* a correct reasoning path rather than whether it can generate one autonomously. Put differently, it uses the training-style teacher-forced forward pass rather than inference-time generation, and then computes per-token accuracy against the gold reference string. Because TFA never lets the model produce its own possibly incorrect steps, it does not directly reveal flawed intermediate reasoning in AR generation. This is one reason we were motivated to rigorously verify TFA’s validity, by analyzing its correlation with boxed-answer accuracy, ensuring that it reliably measures deeper competence across multiple datasets (MATH, Putnam-AXIOM), fourteen models, and six model families.
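To make the distinction concrete, the per-token matching step underlying TFA can be sketched as follows. This is a minimal illustration, not the paper's exact implementation; `pred_tokens` is assumed to hold the model's argmax prediction at each position of a single teacher-forced forward pass, so every prediction is conditioned on the gold prefix, never on the model's own earlier outputs:

```python
def teacher_forced_accuracy(pred_tokens, gold_tokens):
    """Per-token exact match against the gold reference.

    pred_tokens: the model's argmax prediction at each position of a
        teacher-forced forward pass over the gold solution.
    gold_tokens: the tokenized gold-standard solution.
    Returns the fraction of positions where the prediction matches.
    """
    assert len(pred_tokens) == len(gold_tokens)
    matches = sum(p == g for p, g in zip(pred_tokens, gold_tokens))
    return matches / len(gold_tokens)
```

For example, `teacher_forced_accuracy([5, 9, 2, 7], [5, 9, 4, 7])` returns 0.75: one mismatched token out of four. The key point is that a single wrong prediction does not derail the remaining positions, unlike AR generation.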
Indeed, one could explore hybrid approaches—for instance, partially teacher-forcing the solution, then auto-regressively generating the remainder—but these methods are often computationally expensive or require custom hardware code. In line with Occam’s Razor, we opted for TFA’s simpler, scalable approach first. We will clarify these points in the revised manuscript, adding discussion in the TFA section and appendix to make explicit the difference between TFA and AR generation. # Evaluating the Effectiveness of Variations on Contamination During the rebuttal phase, we have performed our own fine-tuning experiment with DeepSeek-R1 Distill-Qwen-7B (we chose this model because it was the best-performing model of its parameter size in Table 2). On the Putnam-AXIOM Variation dataset, this baseline model received 12% accuracy on the variation questions and 23% accuracy on the corresponding original questions. We performed full-parameter SFT (intended to mimic or exaggerate the effects of data contamination) on the full Putnam-AXIOM Original dataset (with all 522 questions), running for 5 epochs. We then evaluated the fine-tuned model on Putnam-AXIOM Variations (with 100 variation questions and 100 corresponding original questions). After fine-tuning, the model received 33% accuracy on the variations and 80% accuracy on the corresponding original questions. Clearly, the full-parameter SFT successfully contaminated the model such that it was able to attain saturation on the corresponding original problems with a 57-percentage-point increase in accuracy; however, the variations still proved challenging for the model to solve, given a mere 21-percentage-point increase in accuracy. Our experiment shows that our variations are a useful tool in combating data contamination.
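For clarity, the accuracy deltas reported above can be checked with simple arithmetic. The `contamination_gap` name below is a hypothetical summary statistic introduced only for this illustration, not a metric from the paper:

```python
def gain(before, after):
    """Accuracy gain in percentage points (inputs as fractions)."""
    return round((after - before) * 100)

orig_gain = gain(0.23, 0.80)  # corresponding original questions: 23% -> 80%
var_gain = gain(0.12, 0.33)   # variation questions: 12% -> 33%
# How much memorization of the originals outpaces transfer to variations:
contamination_gap = orig_gain - var_gain
```

Here `orig_gain` is 57 points and `var_gain` is 21 points, matching the figures above; the 36-point gap is what the variations are designed to expose.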
Explainable Concept Generation through Vision-Language Preference Learning for Understanding Neural Networks' Internal Representations
Accept (poster)
Summary: The paper addresses a critical challenge in concept-based explanation methodology—specifically, the generation of "concepts" for explanations. Traditionally, this required practitioners to manually guess and collect various candidate concept image sets. The paper introduces a novel approach, utilizing reinforcement learning-based preference optimization (RLPO), to guide the Stable Diffusion model in generating concepts that are significant to the neural network's internal representations. Both qualitative and quantitative results are presented to demonstrate the effectiveness of RLPO in uncovering these representations. Claims And Evidence: In my view, the claims made in the paper are generally well-supported. However, I have reservations about the qualitative results of the method, as detailed below. Methods And Evaluation Criteria: RLPO's approach to encouraging diffusion models to generate truly meaningful concepts is technically sound and innovative. The authors tried to address the critical limitations of previous concept-based explanation methods. However, I have concerns about the seed prompt acquisition. While Appendix C.3 details the prompts used to probe image patches, these seem primarily focused on low-level visual concepts. The method's ability to capture higher-level semantic concepts (e.g., gender, age) remains unclear. Additionally, it is uncertain whether the Stable Diffusion model can generate meaningful representations for such abstract concepts. I would appreciate the authors' perspective on this limitation. Regarding evaluation, I have reservations about the generalizability of the results. The paper presents selected examples, predominantly featuring specific cases (zebras, tigers). While user studies were conducted, they also relied on these selected examples rather than randomly sampled cases. A more robust evaluation would involve testing with random examples across diverse scenarios.
Further evidence is needed to demonstrate the method's effectiveness across broader applications. Additionally, the paper would benefit from including and analyzing failure cases to better understand the method's limitations. For instance, the concepts produced by RLPO in Figure 4, while diverse, don't appear particularly meaningful. Theoretical Claims: Regarding Figure 1 and its associated theorem (including proofs), the underlying assumptions should be more explicitly stated. In particular, I would like to see clearer justification for the assumption that "$C_H\subseteq C_G$ and $C_R\subseteq C_G$". Experimental Designs Or Analyses: Please see above for my concerns regarding the user study and the limited scope of the qualitative results. Supplementary Material: I went over the supplementary material quickly, but have not examined it in detail. Relation To Broader Scientific Literature: The relation to broader scientific literature was discussed in Sections 1 and 2 of the paper. Essential References Not Discussed: The relation to broader scientific literature was discussed in Sections 1 and 2 of the paper. Other Strengths And Weaknesses: In general, I find the method proposed in the paper technically sensible, and I agree that issue tracking is crucial in concept-based explanations. However, I have reservations about the assumptions made in the paper. More importantly, I hold reservations about the qualitative results of the method in broader cases. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for recognizing the technical soundness and novelty of our framework. We appreciate your critical insights and provide detailed clarifications below. Please refer to this **[Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-9577/Rqpq/readme.md)** where we have compiled detailed explanations for better understanding. ## Q1: Concerns about seed prompt acquisition. We agree with the reviewer’s observation that our current setup is primarily focused on low-level visual concepts. However, our current setup can easily be modified to capture higher-level semantic concepts (e.g., gender, age) by adding task-specific questions like, “What is the gender of the person in the image?” or “Which age category does the person in the image belong to (young/old)?”. To verify the usability of the proposed methodology with higher-level semantic concepts like gender, we trained a ResNet18 classifier on the CelebA dataset to classify images as “Blonde” and “Not Blonde”. This dataset is known for having a spurious correlation between the class “Blonde” and females. With our method, we were able to find the same correlation: concepts generated for female faces were more important than those for male faces. As a side note, when we train RLPO to capture higher-level semantic concepts, it starts combining one or more low-level features. As shown in the examples on the **[anonymized GitHub](https://anonymous.4open.science/r/RLPO-9577/Rqpq/readme.md)**, the generated samples start developing long, blonde hair for both male and female concepts. ## Q2: Concerns regarding limited evaluation. We would like to clarify that, while the main text includes examples like “zebra” and “tiger” for clarity and interpretability, RLPO was evaluated on a broader range of randomly selected samples from ImageNet (see Section 4.3, Appendix D.3, and Table 4).
RLPO has also been tested on multiple pretrained models such as GoogleNet and InceptionV3, indicating that RLPO is not tied to a specific architecture. Additionally, as mentioned in Appendix D.9, rather than predominantly featuring specific cases, while conducting the human survey we considered 10 unique classes and randomly selected examples. Apart from that, we also demonstrate how RLPO generalizes to non-visual domains like sentiment analysis (see Section 4.6 and Fig. 8). ## Q3: Justification for the assumption that $\text{C}_H \subseteq \text{C}_G$ and $\text{C}_R \subseteq \text{C}_G$. Intuitively, generative models like Stable Diffusion are trained to learn the distribution of real-world data (which contains human-defined and retrieved concepts). By leveraging this learned distribution, they can create existing or new data which by design can represent human-defined or retrieved concepts (or neither). That being said, to solidify this assumption we plotted the CLIP embeddings of the generated, retrieved, and human-defined “stripes” concepts. For retrieval-based concepts we used the stripes collected by CRAFT for the zebra class, for human-defined concepts we used the concepts collected by the TCAV authors for the zebra class, and for generated concepts we used pre-trained Stable Diffusion 1.5 to generate random stripes images. As shown in the plot on the **[anonymized GitHub](https://anonymous.4open.science/r/RLPO-9577/Rqpq/readme.md)**, the generated stripes encapsulate both human-defined and retrieved concepts, hence supporting the assumption $\text{C}_H \subseteq \text{C}_G$ and $\text{C}_R \subseteq \text{C}_G$. --- Rebuttal Comment 1.1: Comment: After a detailed review of the authors' rebuttals and other peer comments, I have decided to increase my rating to 3. I would like to highlight that I still have reservations regarding the generalizability of the proposed method. The limitations primarily stem from the seed prompts and biases inherent in the diffusion model being used.
While the authors' rebuttal has addressed some of my concerns, not all have been fully resolved. For example, the case presented in the rebuttal depends on human knowledge to craft task-specific questions for improving seed prompt acquisition. Additionally, the justification for the underlying assumptions is based on examples from only a specific class ('zebra') within the dataset. Nevertheless, I appreciate the novel approach of employing RL and generative models to generate dataset-dependent concepts for use in concept-based explanation methods. To the best of my knowledge, this is the first such approach, and it addresses a significant challenge in concept-based explanation methods: identifying concept sets that effectively reflect model behaviors. I did not want to dismiss a contribution that leverages recent advances in generative models to tackle the challenge, as it shows promising potential, in my view. I would suggest that the authors acknowledge these limitations in the final version of the paper, if it is accepted. Future work—by the authors or the broader XAI community—could further enhance the method’s generalizability by refining the seed prompting process, leveraging more advanced generative models, and exploring additional improvements. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their vote of confidence in our paper! Following the reviewer’s suggestion, we will include a discussion in the revised manuscript highlighting the reliance of our method on seed prompts and include future work in that direction. We sincerely appreciate the reviewer’s constructive feedback and recognition of our method’s potential.
Summary: This work introduces an RL-based method to construct a vision-language concept-level preference dataset purely from synthesized images by taking the TCAV score as the reward. It first prompts the trained MLLM to ground the common concepts, represented as language phrases, in the images. Then, it generates preference datasets with TCAV scores of ImageNet pre-trained models. Finally, RLPO is applied to finetune the diffusion models. The results demonstrate the proposed method can guide generative models to generate more clustered concept images. Claims And Evidence: I think the claim about understanding neural networks' internal representations is overstated, since the paper does not propose a novel explainability method to unveil the concepts of learned vision networks. Also refer to **Relation to Broader Scientific Literature.** Methods And Evaluation Criteria: The proposed method or framework is technically novel. Theoretical Claims: None Experimental Designs Or Analyses: See weaknesses. Supplementary Material: None Relation To Broader Scientific Literature: I am a little concerned about the implication of this work: it essentially attempts to refine the vision-language alignment of generative models pre-trained on noisy image-text pairs. It cannot generalize to abstract visual concepts that cannot be described by language. Therefore, the ultimate result achieved is more related to aligning the generative model at the word or phrase level. Essential References Not Discussed: It seems to miss a line of related work on concept learning with diffusion models.
For example, ConceptLab: Creative Concept Generation using VLM-Guided Diffusion Prior Constraints. Other Strengths And Weaknesses: Weaknesses: - Not well-organized: The current version of the presentation does not meet the ICML standards; therefore, I would suggest the authors revise their writing to clarify and reorganize the method and experimental sections. For instance, the evaluation metrics and the motivations behind them are not clarified. Some expressions are not academic and rigorous enough. - Missing technical details: to my understanding, the original C-deletion is performed in the pixel space, but this method is performed in the textual space. - Potential semantic leakage in diffusion alignment since the vision models are already trained on ImageNet? We need more details on the concept vocabulary: how many semantic names fall outside the ImageNet vocabulary? How many classes can it generate? Can it generate novel categories/objects? - The action space is not scaled up; it contains only a few words. Besides, most prompts are single phrases, and thus cannot scale up to diverse compositions? - What if the generated images in both groups are garbage? Will it still update the SD? It is necessary to include additional quantitative metrics to distinguish between diversity and quality. This may need a human study or auxiliary scores, e.g., training an additional network to compute classification accuracy. Other Comments Or Suggestions: I would suggest that the authors polish this manuscript entirely for another round. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Please refer to this **[Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-9577/ADMa/readme.md)** where we have compiled detailed explanations for better understanding. We are thankful to the reviewer for the feedback. While we appreciate the reviewer's recognition of the proposed framework, we believe there are some key misunderstandings regarding the claims and the experiments conducted in the paper. In this work, we propose a novel approach to “generate” concepts that truly matter to the neural network by utilizing reinforcement learning-based preference optimization (RLPO) on diffusion models. We show qualitatively and quantitatively that the generated concepts matter to the neural network, as noted by Reviewers 1, 2, and 4. With regards to the experiments, please see our detailed answers below. ## Q1: Generalizability beyond visual concepts. RLPO does not solely depend on direct linguistic grounding. Instead, it uses reinforcement learning to evolve and refine the generative model via XAI feedback (from the model under test), indirectly capturing internal representations that may not have exact language equivalents. The use of language (seed prompts) serves primarily to narrow the search space and provide a reasonable initialization for the concept generation. While the generated concepts may initially align with the seed prompts, RLPO iteratively optimizes the generation process via TCAV-based rewards, allowing for drift and refinement toward more model-relevant abstractions—even beyond the original prompt scope. Additionally, to highlight the generalizability of our approach beyond visual concepts, in Section 4.6 (Fig. 8) we demonstrate how RLPO generalizes to non-visual domains like sentiment analysis, using textual input and preference. ## Q2: Clarification on C-deletion. We agree with the reviewer that the original C-deletion is performed in the pixel space.
However, we would like to clarify that the C-deletion we show in the paper is also performed in the pixel space and **not in the textual space**. As highlighted in Q1, the use of language is to narrow down the search space. Once the training is completed, we generate concepts via the trained diffusion model and map those generated concepts back to the input space using CLIPSeg (as shown in Figure 5). The C-deletion graphs shown in the paper are obtained by deleting the most relevant to least relevant target concepts from the input images. We have provided more elaborate examples in Figs. 21 and 22 of the Appendix. ## Q3: Potential semantic leakage in diffusion alignment since the vision models are already trained on ImageNet? We need more details on the concept vocabulary: how many semantic names are outside the ImageNet vocabulary? How many classes can it generate? Can it generate novel categories/objects? We understand the reviewer’s concern about potential semantic leakage in diffusion alignment since these models have already been trained on ImageNet data, but we disagree with the reviewer because the concepts we generate don't come from class data. As shown in Table 4, the concepts generated by RLPO are farthest from the class data. On the contrary, this semantic leakage occurs in retrieval-based methods where the concepts are collected from class data. This is the exact problem we are trying to resolve with our method. ## Q4: The action space is not scaled, only a few words. Besides, most prompts are single phrases, and thus cannot scale up to diverse compositions? Our current action space consists of 20 seed prompts, preprocessed and extracted using VQA (see Appendix C.3). We would like to emphasize that our approach does not rely on a direct mapping from seed prompts to the final generated outputs (concepts).
While the generated concepts may initially align with the seed prompts, RLPO iteratively optimizes the generation process via TCAV-based rewards, allowing for drift and refinement toward more model-relevant abstractions—even beyond the original prompt scope. Consequently, the final generated concepts capture more diverse and model-relevant abstractions that extend beyond the limitations of the initial single phrases. ## Q5: What if generated images in both groups are both garbage? Will it still update the SD? If both image sets yield low TCAV scores (i.e., do not activate the model meaningfully), our method does not update the diffusion model. Only when a concept has the potential to move toward explainable states do we update the diffusion model. We will clarify this further in the revised manuscript. --- Rebuttal Comment 1.1: Comment: I appreciate the efforts made by the authors for the rebuttal. My questions have largely been addressed. For Q3 and Q4, I intended to ask for more results on an expanded vocabulary and evaluation with more words outside the ImageNet vocabulary as initial seeds. For Q5, I intended to ask for additional analysis of the failure patterns of the generative models in this loop. Raised score by one, considering that some challenges of this work can be addressed in future extensions.
Summary: The paper reframes concept set creation as a concept generation problem. It proposes a generative-model-based method to generate concept images, aiming to reliably produce diverse concepts that are challenging to craft manually. The process involves various components, including reinforcement learning-based preference optimization, Stable Diffusion models, Testing with Concept Activation Vectors (TCAV) scores, and BLIP models. ## update after rebuttal I increased my score to 3 after the rebuttal from the authors. They were actively engaged in addressing my concerns, and most of them have been resolved. However, I still have some reservations regarding the reliance on the seed prompt and the variability of the CAV. While the authors provided some results related to this issue, they are limited to only a few examples and do not fully address my concerns. Claims And Evidence: - The paper claims that RLPO generates new concepts that explain model behavior. However, it also acknowledges that results depend heavily on the text prompts used. If concept discovery relies significantly on the seed prompt, the method may not be truly generating new concepts but rather refining existing ones based on prior knowledge. The generated images appear to closely follow the provided seed prompts. The authors argue that explaining concepts through images is more intuitive, citing “A picture is worth a thousand words, but words flow easier than paint.” However, they do not fully justify why image-based explanations are inherently superior, especially given the reliance on text prompts for concept generation. - The method relies on TCAV, which has known limitations. One key issue is the choice of hyperparameters, such as selecting which layers to focus on, which can significantly impact results. Additionally, unless many samples are used for training the classifier and obtaining Concept Activation Vectors (CAVs), the CAVs show large variance across runs.
The paper does not fully address these concerns. - The method is highly complex, involving multiple components such as reinforcement learning, preference optimization, and generative modeling. This complexity itself is a drawback for model explanation methods, which should ideally be simple and interpretable. Additionally, RLPO depends on many pretrained models such as Stable Diffusion and BLIP, which introduces external biases that are not accounted for in the paper. Methods And Evaluation Criteria: The paper presents various evaluations, each assessing different aspects of the method, which is great. However, the trade-off between dependence on seed prompts and the degree of genuine concept discovery could be further clarified. Theoretical Claims: No formal proofs were reviewed in detail. However, the effectiveness of RLPO is largely tied to TCAV, which is known to be sensitive to hyperparameter choices. The paper does not extensively analyze the robustness of TCAV in this setting. Experimental Designs Or Analyses: - The paper does not discuss how the authors chose the hyperparameters for TCAV, particularly which layers were selected for extracting activations. This is important, as TCAV scores are known to vary significantly depending on this choice. Supplementary Material: I checked out the video in the supplementary material. Relation To Broader Scientific Literature: The paper addresses an important problem of concept generation. However, it is unclear to me how it would have a methodological advantage over existing methods. Essential References Not Discussed: It would be helpful to discuss how this paper relates to the following works.
- https://distill.pub/2017/feature-visualization/ : generates concept images without any dependence on external input (such as BLIP's seed prompt suggestions in this paper). - https://arxiv.org/abs/2312.02974 and https://arxiv.org/abs/2410.05217 : while not involving classifiers, these also try to define abstract concepts without manual curation. Other Strengths And Weaknesses: Please see above. Other Comments Or Suggestions: No more comments. Questions For Authors: - In Figure 6, it is unclear what the x-axis “Steps” refers to. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s constructive comments and recognition of the importance of concept generation in our work. Our method improves upon traditional automatic concept retrieval approaches. While conventional methods extract concepts directly from the dataset—risking semantic information leakage—our approach generates novel concepts that are independent of the dataset. We have addressed the reviewers' concerns as follows. Please refer to this **[Anonymized GitHub Link](https://anonymous.4open.science/r/RLPO-9577/Sob6/readme.md)** where we have compiled detailed explanations for better understanding. ## Q1: Reliability on text prompts. Our use of seed prompts serves primarily to narrow the search space and provide a reasonable initialization for the concept generation. Given that Stable Diffusion is conditioned on text prompts, starting with semantically meaningful phrases helps the RL agent converge faster toward relevant concept regions. While the generated concepts may initially align with the seed prompts, RLPO iteratively optimizes the generation process via TCAV-based rewards, allowing for drift and refinement toward more model-relevant abstractions—even beyond the original prompt scope. We demonstrate this evolution through multiple RL steps (e.g., Figure 2, Appendix D.4), and also show in ablation (Appendix C.4) that random prompts perform significantly worse, indicating the importance of a good starting point rather than dependence. ## Q2: Hyperparameters used for TCAV score calculation. TCAV can be sensitive to choices like the layer of activation and the classifier used. To address this, as stated in Appendix C.1, we replaced the default SGD classifier with Logistic Regression, which we found provided more stable CAVs with lower variance. Regarding the choice of layers and target classes, we provide details in Appendix C.3. ## Q3: Bias introduced by pre-trained models. 
We acknowledge that the diversity of generated outputs depends on the generative model's capabilities. Issues such as insufficient representation of certain patterns could limit the range of explanations. However, such limitations are not unique to generative approaches—they are also inherent to retrieval-based methods, which are similarly constrained by the available data, and even human-collected concepts are influenced by cognitive biases. However, we agree that this is a good point and we will include a discussion of this limitation in the revised manuscript. Specifically, we will highlight the dependency of the explanations on the generative model's capability to produce high-quality and diverse outputs. Thank you for pointing it out. ## Q4: In Figure 6, it is unclear what x-axis “Steps” refers to. “Steps” in Figure 6 refers to a step in C-deletion. At each step we delete a part of the image representing the target concept. We have provided more elaborate examples in Figs. 21 and 22 of the Appendix. We will clarify this in the figure caption of the revised manuscript. ## Suggestions on related works Thank you for the suggested references. We will include a discussion of Feature Visualization (Olah et al., Distill) and the more recent diffusion-based concept learning works (e.g., https://arxiv.org/abs/2312.02974, https://arxiv.org/abs/2410.05217). While our method relies on classifier feedback via TCAV and thus differs in motivation, your point about manual curation versus automated discovery is well-taken and worth addressing more directly in Section 2. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. However, I will maintain my score as my key concerns remain insufficiently addressed: The authors note that (1) the method performs poorly without seed prompts, and (2) the final generated concepts differ from the seed prompts, to justify the method's reliance on seed prompts.
However, these points do not address the core concern—whether the method is truly capable of discovering new concepts, rather than merely refining or drifting from the initial prompt space. While the authors state that they used logistic regression instead of SGD to reduce the variance of the CAVs, I could not find clear evidence or quantitative analysis showing the effectiveness of the change in Appendix C.1, which they referred to. An analysis demonstrating the extent to which this change mitigates the variance issue will be needed. Without such an analysis, it's difficult to assess the robustness of TCAV-based feedback in this context. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for continuing to engage with our work! We have clarified the reviewer’s concerns about whether the proposed method performs genuine discovery or mere refinement, and provided additional analysis of the stability of the logistic regression classifier used to calculate CAVs. Since we have addressed all of the reviewer’s concerns, we sincerely hope the reviewer can reconsider the score. ## Q1: Clarification on Discovery vs. Refinement We understand the reviewer’s concern regarding whether our method discovers new concepts or merely refines those provided via seed prompts. We would like to clarify that both processes are integral to our approach. Our method is a two-step loop: 1. **Discovery through RL** – The reinforcement learning (RL) agent searches for high-reward regions in the concept space by selecting from an initial set of seed prompts. Over time, the RL policy learns to favor prompts (or their refinements) that generate samples strongly aligned with the target class. As shown in Appendix D.3.1, cumulative rewards—computed using TCAV scores—increase consistently across classifier models, indicating that the RL agent is learning to select more informative prompts. 2.
**Refinement through Diffusion Update** – For each selected prompt, we generate two concept sets, compute TCAV scores, and update the diffusion model to enhance alignment with the more salient concept set. This process introduces drift from the initial prompt toward more model-relevant abstractions. As illustrated in Appendix D.4 and Fig. 24, applying RLPO to the seed prompt “zoo” for the tiger class results in a progression of outputs—from general zoo-related imagery to increasingly tiger-specific features such as orange-black stripes and whiskers. Thus, while seed prompts serve as initial anchors, the joint RL and diffusion optimization progressively steers the concept generation process toward novel and more informative representations—extending well beyond the original prompt space. ## Q2: Stability of logistic regression classifier while calculating CAVs. We appreciate the reviewer’s request for empirical evidence regarding the stability of TCAV scores when using a logistic regression classifier instead of the original SGD-based one. To this end, we conducted an additional experiment using the “stripes” and “dots” concepts from the original TCAV paper, evaluating the "zebra" class on two different model architectures and activation layers. For each configuration, we computed TCAV scores across five independent runs, comparing the mean and standard deviation between the logistic regression (our implementation) and SGD (default implementation) classifiers. As shown in the table below, logistic regression results in low or no variance and provides more stable TCAV scores, validating its use in our setup. In response to the reviewer’s suggestion, we will include this experiment in the revised manuscript to better support our design choice. 
| Model | Layer | SGD - Stripes/Random | Logistic - Stripes/Random | SGD - Dots/Random | Logistic - Dots/Random |
|-----------|-------------|---------------------------|----------------------------|---------------------------|----------------------------|
| GoogleNet | inception3a | 0.662 ± 0.03 / 0.338 ± 0.03 | 0.67 ± 0.00 / 0.33 ± 0.00 | 0.36 ± 0.05 / 0.64 ± 0.05 | 0.33 ± 0.00 / 0.67 ± 0.00 |
| | inception4e | 0.992 ± 0.01 / 0.008 ± 0.01 | 1.00 ± 0.00 / 0.00 ± 0.00 | 0.01 ± 0.007 / 0.99 ± 0.007 | 0.00 ± 0.00 / 1.00 ± 0.00 |
| ResNet50 | layer3 | 0.796 ± 0.02 / 0.204 ± 0.02 | 0.78 ± 0.00 / 0.22 ± 0.00 | 0.078 ± 0.07 / 0.922 ± 0.07 | 0.00 ± 0.00 / 1.00 ± 0.00 |
| | layer4 | 1.000 ± 0.00 / 0.000 ± 0.00 | 1.00 ± 0.00 / 0.00 ± 0.00 | 0.60 ± 0.54 / 0.40 ± 0.54 | 0.00 ± 0.00 / 1.00 ± 0.00 |
Summary: The authors proposed a method to discover and visualize the "concept" or hidden knowledge learnt by a neural network. They proposed a reinforcement learning framework to achieve this goal. A score (TCAV) is used to evaluate whether a hidden representation of the NN forms a concept. Claims And Evidence: The authors provided quantitative and instance-level evidence that their algorithm successfully discovered hidden concepts. Methods And Evaluation Criteria: The evaluation metrics sound reasonable. Theoretical Claims: Theoretical analyses are provided. It is hard to follow the details, but the framework sounds rational. Experimental Designs Or Analyses: The experiments are valid. Supplementary Material: Yes. I checked the additional experimental results in the materials. Relation To Broader Scientific Literature: How machine learning models learn concepts of the world from data is of wide interest in establishing the foundations of AI. This work is a very interesting attempt. Essential References Not Discussed: To the best of my knowledge the references are appropriate. Other Strengths And Weaknesses: The work is of high novelty and would potentially impact a broad scope of machine learning. It would be better if more rigorous quantitative evaluations could be developed in the future (e.g., correlation between generated concepts and ground-truth concepts). Also, the procedure of generating concept seeds is still a black box (i.e., the diffusion model), which should be improved in the future. Other Comments Or Suggestions: N/A Questions For Authors: 1) It seems that users need to have a good understanding of the target concept in order to generate good concept seeds (i.e., proper VQA design and possibly selective prompts). Is it possible to extract concepts from a large paragraph of description text without prior human knowledge? 2) After the concept pictures are generated by SD+LoRA, it is unclear how you can align them with the original input picture.
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for identifying the novelty and appreciating the experiments. We hope the following explanations will clarify the queries for the reviewer. ## Q1: It seems that users need to have a good understanding of the target concept in order to generate good concept seeds (i.e., proper VQA design and possibly selective prompts). Is it possible to extract concepts from a large paragraph of description text without prior human knowledge? We agree with the reviewer’s observation that in our current setup, in order to generate good concept seeds, proper VQA design and selective prompts are needed. But because of the modularity of our approach, the generation of seed prompts can be replaced by any other concept generation method. As described in [1], the end user can extract concepts from text descriptions without prior human knowledge and use them directly in our proposed method. [1] Zang, Yuan, et al. "Pre-trained vision-language models learn discoverable visual concepts." arXiv preprint arXiv:2404.12652 (2024). ## Q2: After the concept pictures are generated by SD+LoRA, it is unclear how you can align them with the original input picture? We employ CLIPSeg, a transformer-based segmentation model, to establish visual correspondence between generated concepts and regions in the input images (see Section 4.4, Figure 5). By feeding the generated concept images as prompts into CLIPSeg, we produce heat maps highlighting areas in the input image that resemble the generated concept. This allows us to localize abstract concepts (e.g., “stripes”, “mud”) within the original images, enabling interpretable alignment between concept space and class-specific features. We hope these responses clarify your concerns. We appreciate your recognition of the work's broader implications and suggestions for future extensions, which we plan to incorporate.
Feature learning from non-Gaussian inputs: the case of Independent Component Analysis in high dimensions
Accept (spotlight poster)
Summary: This paper investigates the unsupervised learning method ICA as a simplified framework for feature learning in (deep) CNNs. In particular, the authors derive sample complexity thresholds for escaping the search phase of FastICA and SGD, considering a toy model of a dataset sampled from an isotropic distribution perturbed by a single non-Gaussian direction. Their results highlight the poor performance of ICA compared to SGD, with the latter being able, in principle, to achieve the computational threshold. Claims And Evidence: Every claim is supported by convincing evidence. Methods And Evaluation Criteria: Although the proposed analysis focuses on a relatively simple model to derive quantitative results, it provides valuable insights into the performance of two popular algorithms used in practical applications. Theoretical Claims: I checked the correctness of the proofs of Proposition 1, Theorem 2 and Theorem 4. I do not have any issues to discuss. Experimental Designs Or Analyses: I have reviewed the experimental details related to the figures in the paper. The explanations are clear, and I have no issues to discuss. Supplementary Material: I have reviewed all the supplementary material. Relation To Broader Scientific Literature: The paper extends the findings of Auddy & Yuan (2024) on the optimal algorithmic threshold for ICA, rigorously analyzing the performance of the most widely used algorithms in practice and comparing them to the known optimal result. Moreover, their proofs extend recent methods developed in the context of supervised learning with Gaussian inputs to the unsupervised non-Gaussian setting. Essential References Not Discussed: I am not aware of any relevant related work that has been omitted. Other Strengths And Weaknesses: The presentation is overall clear and easy to comprehend. The assumptions are correctly stated and well-motivated, and the results are supported by numerical simulations. 
One potential weakness of this work is the relative simplicity of the model, which perturbs the isotropic Gaussian distribution along a single direction and is distant from realistic image datasets. However, I do not consider this a major weakness, as it is compensated by the originality of the results in relation to the existing literature, providing a baseline for investigating more complex settings. Other Comments Or Suggestions: - Inconsistent notation in Fig. 1: $\mathbf{B}$ is used instead of $\mathbf{(b)}$ - I suggest writing explicitly what quantity is represented in the legends of Fig. 4 Questions For Authors: I do not have additional questions Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your careful feedback and for having checked both the numerical experiments and the proofs. Your suggestions will definitely help to clarify the paper. > One potential weakness of this work is the relative simplicity of the model. However, I do not consider this a major weakness, as it is compensated by the originality of the results in relation to the existing literature, providing a baseline for investigating more complex settings. Indeed, our data model is a simple single-index model. We decided to focus on single-index models in this paper because of the prevalence of **Gaussian** single-index studies in the recent past (Ben Arous et al. JMLR '21, Bietti et al. NeurIPS '22, Damian et al. NeurIPS '23, Damian et al. arXiv:2403.05529, Bardone & Goldt ICML '24, Wang et al. NeurIPS '24). Here, we provide an extension to the non-Gaussian case, which is closer to realistic data. We are considering an extension to multi-spike models for our future work. > Inconsistent notation in Fig. 1: B is used instead of (b). Thanks for noticing, we fixed this. > I suggest writing explicitly what quantity is represented in the legends of Fig. 4. Thank you, we have added a title to the legends to stress that these are the widths of the patches extracted from ImageNet images. --- Rebuttal Comment 1.1: Comment: Thank you for your reply and further clarification.
Summary: The study quantifies the sample complexity of two learning algorithms: FastICA and Stochastic Gradient Descent (SGD). The key results are the following: FastICA requires at least $n \gtrsim d^4$ samples to recover a single non-Gaussian direction in high-dimensional inputs. SGD outperforms FastICA in feature learning, achieving better sample complexity, particularly when smoothing techniques are used ($n \gtrsim d^2$). On the real-world dataset ImageNet, the strong non-Gaussianity of image data helps mitigate FastICA’s inefficiency. ## update after rebuttal Thank you for the authors' response. The authors have addressed my main concerns. After reviewing the other reviewers' comments, I will keep my score and recommend accepting this paper. Claims And Evidence: Yes, for the most part, the claims are well-supported by theoretical derivations and empirical experiments. However, more empirical justification for the claim that "the growth of the excess kurtosis might compensate the poor sample complexity of FastICA in practice" would be useful; currently it is not fully explored. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are appropriate. The ImageNet experiment is a reasonable real-world test, although the authors could have explored other high-dimensional datasets for broader generalizability. Theoretical Claims: The proofs appear mathematically rigorous. I have some related questions; see “Questions For Authors”. Experimental Designs Or Analyses: The experimental setups are well-documented, specifying parameter choices, dataset details, and batch sizes. Different regimes for FastICA (Figure 2) and SGD (Figure 3) are clearly explained. The spiked cumulant model used for evaluation is well-motivated. One potential issue is that the color scales of the filters in Figure 1(b) could be adjusted to better match those in Figure 1(a) for better visual comparability. Supplementary Material: I mainly checked Sections A and B.
The experimental details are well-documented and the mathematical notation is consistent. The necessary technical preliminaries are provided. Relation To Broader Scientific Literature: The findings in this paper could be relevant for designing more efficient feature extraction methods in deep learning. Essential References Not Discussed: None found. Other Strengths And Weaknesses: Strengths: 1. The paper is rigorously written with strong theoretical contributions, which advances the understanding of feature learning dynamics in high-dimensional settings. 2. The link between ICA and CNN feature learning is well-motivated and provides a fresh perspective on the emergence of structured filters. 3. The theoretical results are logically structured, featuring clear mathematical formulations and providing essential technical background information. Weaknesses: 1. An explicit emphasis in the title that this is a paper analysing sample complexity might make it clearer to the reader. 2. The writing of the introduction can be improved to increase readability. For example, the first paragraph of the introduction gives a somewhat disjointed impression. After discussing Gabor filters, it shifts to a discussion of non-Gaussianity. SGD is also introduced quite abruptly. The transitions between these topics are non-smooth, making it difficult for the reader to follow the logical flow. A similar issue arises in the section describing the contributions of the paper, where several key terms and expressions appear without prior introduction. As a result, it is challenging for the reader to grasp the main focus of the work upon first reading the introduction. Smoother transitions and clearer structuring would enhance readability and coherence. 3. The explanation of why SGD benefits from smoothing the contrast function could be clearer, especially for readers less familiar with statistical-to-computational gaps. Additional visualisations would help. 4.
The analysis is focused on learning one non-Gaussian feature. A discussion on how multiple features interact would be valuable. Other Comments Or Suggestions: 1. It would be beneficial to provide additional explanations on why Independent Component Analysis learns similar filters as deep convolutional neural networks in Fig. 1. Specifically, more details are needed to illustrate how this similarity can be observed directly in the figure. 2. Titles should begin with capitalized initial letters. 3. "We finally **investigated** whether FastICA also exhibits an extended search phase on real data." -> "We finally **investigate** whether FastICA also exhibits an extended search phase on real data." to maintain present tense consistency. 4. I think perhaps highlighting the generality of non-Gaussian distributions (e.g., using Cramér's decomposition theorem) would emphasise the importance of this work more significantly. Questions For Authors: 1. The batch sizes are described as $n=d^2$, $d^{3+\delta}$, $d^4$, but it is unclear how $\delta=0.2$ was chosen. Is there any reason to choose $\delta=0.2$? Does it generalize to different values? 2. The Hermite expansion assumes $f(x)$ is square-integrable. However, many ICA contrast functions are not bounded (e.g., excess kurtosis). Could the authors clarify whether the expansion holds for all practical contrast functions? 3. In Figure 3, the learning rate $\eta$ is defined based on $k_1^*$ and $k_2^*$, but how to tune it in practice? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed feedback and your useful comments, which spurred us to run an additional experiment linking ICA with deep CNNs, and to add two plots that give intuition on the effect of smoothing and on the intermediate regime of FastICA. We start with these points before addressing your remaining comments below. We hope our reply alleviates any remaining concern; if not, please let us know, otherwise we would really appreciate it if you increased your score. Thank you for your time! > Readability of the introduction & explanations on why Independent Component Analysis learns similar filters as deep CNNs in Fig. 1. Thank you for this suggestion -- we now realise the introduction jumped from deep CNNs to ICA too quickly. To make the connection clearer, we have conducted an **additional experiment**: we trained three deep CNNs (AlexNet, ResNet18 and DenseNet121) on ImageNet and computed the excess kurtosis of the dot products $s$ between first-layer convolutional filters and ImageNet patches. We found that the excess kurtosis of $s$, which corresponds to the objective function of ICA, sharply increases after about 1000 steps of SGD, precisely when Gabor filters form in the first layer. This demonstrates empirically that neural networks seek out non-Gaussian directions when Gabor filters form, akin to what happens when running ICA. We summarise our results in a revised Figure 1 in https://figshare.com/s/0e72e797306c1a3f216a, which we hope will clarify the motivation for studying ICA. > The explanation of why SGD benefits from smoothing the contrast function could be clearer [...]
The key intuition behind the smoothing operator is that, thanks to a large $\lambda$ (that scales with the input dimension), it allows the algorithm to evaluate the loss function in regions that are far from the current iterate $w_t$, collecting non-local signal that alleviates the flatness of the saddle around $\alpha=0$ of $\mathcal L$, reducing the length of the search phase. A new plot, Fig. 3 of https://figshare.com/s/0e72e797306c1a3f216a, shows the reduced flatness of the smoothed loss at the origin. > Choice of $\delta$ in the batch sizes $n=d^2, d^{3 + \delta}$, and $d^4$ We could have chosen any $\delta \in (0,1)$, since Theorem 2 implies that the 'intermediate' regime shown in Fig. 2 (middle) holds for a batch size of $d^2 \ll n \ll d^3$. We will add Fig. 2 of https://figshare.com/s/0e72e797306c1a3f216a, which shows the behaviour for $\delta = 0.5, 0.8$. > The Hermite expansion, square integrability and contrast functions. We merely need the contrast functions to be square-integrable with respect to the standard normal distribution $P_0$, so we need the growth of $G$ to be less than exponential. In general, we don't require that they be integrable with respect to the Lebesgue measure on the real line (e.g., excess kurtosis is not Lebesgue integrable). > In Figure 3, the learning rate $\eta$ is defined based on $k^*_1$, $k^*_2$, but how to tune it in practice? Our Theorem 4 offers a way to make a reasonable guess for $\eta$ as follows: $k^*_2$ is the information exponent of the contrast function and is therefore known. Meanwhile, $k^*_1$ depends on the likelihood and hence on the data distribution. However, we have the bound $k^*_1 \geq k^*_2$. Furthermore, if we assume that data has a non-trivial fourth-order cumulant (a mild assumption in practice), we can set $k^*_1=4$ and hence our theorem gives a recommendation on how to scale the learning rate in practice. > The analysis is focused on learning one non-Gaussian feature.
A discussion on how multiple features interact would be valuable. We decided to focus on single-index models in this paper because of the prevalence of Gaussian single-index studies in the recent past (Ben Arous et al. (2021), Damian et al. (2023), Damian et al. (2024), Bardone & Goldt (2024)). We are considering an extension to multi-spike models for our future work. > Ensuring present tense consistency Thanks, we fixed this. > Highlighting the generality of non-Gaussian distributions (Cramér's decomposition theorem) [...] Yes, we agree that Cramér's theorem emphasizes the importance of studying non-Gaussianities, and we will discuss it. > Additional empirical justification to explain that "the growth of the excess kurtosis might compensate the poor sample complexity of FastICA in practice" We will move the plot from the appendix to Fig. 4 with the ImageNet results. We agree that it does not fully explain the behaviour of FastICA on ImageNet, but the growth of excess kurtosis with input dimension hints at important finite-size effects, which are out of the purview of our asymptotic theory. > Explicit emphasis in the title that this paper analyses sample complexity [...]. Thank you for this suggestion; we will consider clarifying the title should the paper be accepted. We have already made changes to clarify our objectives and motivations in the introduction (see below).
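The square-integrability point discussed in this rebuttal can be checked numerically. The following is our own sketch (not the paper's code): the excess-kurtosis contrast $G(x) = x^4 - 3$ is square-integrable against the standard normal weight, and its probabilists' Hermite expansion is $G = 6\,\mathrm{He}_2 + \mathrm{He}_4$, which Gauss-Hermite quadrature recovers to machine precision.

```python
# Sketch (not the paper's code): Hermite coefficients of the excess-kurtosis
# contrast G(x) = x^4 - 3 under the standard normal weight. The probabilists'
# expansion is G = 6*He_2 + He_4, so the lowest non-zero coefficient sits at k=2.
import math
import numpy as np

# Gauss-Hermite_e nodes/weights for the weight exp(-x^2/2);
# dividing by sqrt(2*pi) turns the quadrature into E[f(Z)], Z ~ N(0, 1).
nodes, weights = np.polynomial.hermite_e.hermegauss(40)

def expect(f):
    """E[f(Z)] for Z ~ N(0, 1); exact for polynomials up to degree 79."""
    return float(np.sum(weights * f(nodes)) / np.sqrt(2.0 * np.pi))

def He(k, x):
    """Probabilists' Hermite polynomial He_k evaluated at x."""
    c = np.zeros(k + 1)
    c[k] = 1.0
    return np.polynomial.hermite_e.hermeval(x, c)

G = lambda z: z**4 - 3.0
# Orthogonality E[He_j(Z) He_k(Z)] = k! * delta_jk gives the coefficients:
coeffs = [expect(lambda z: G(z) * He(k, z)) / math.factorial(k) for k in range(5)]
print(coeffs)  # ≈ [0, 0, 6, 0, 1]
```

The non-zero coefficient at $k=2$ is why the information exponent of this contrast is finite and known, which is the property the rebuttal uses when recommending how to scale the learning rate.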
Summary: Motivated by empirical observations that features learned by deep convolutional networks resemble those recovered by independent components analysis (ICA), this paper presents a concrete algorithmic sample complexity bound for various algorithms for recovering a non-Gaussian direction from $d$-dimensional data. Notably, the paper establishes a sample complexity bound of $n \gtrsim d^4$ for a popular ICA algorithm (FastICA), and demonstrates that SGD in fact outperforms this algorithm on this task, attaining the information-theoretic lower bound of $n \approx d^2$ (albeit through a smoothed loss). Claims And Evidence: The theoretical claims in this paper are very well-supported. Various lower bounds and fundamental limits from prior work are outlined. The upper bounds in the paper are accompanied by simple numerical experiments demonstrating that they capture the correct scaling. Methods And Evaluation Criteria: The numerical experiments are helpful for contextualizing the theory, and are documented in the appendix. Theoretical Claims: I did not check the proofs entirely. However, I checked the proof strategies of the main Theorems and they make sense to me. The numerical experiments also are helpful to sanity check the correctness of the theorems (such as demonstrating the necessity of smoothing the loss for SGD). Experimental Designs Or Analyses: A sufficient description of experiment set-ups is contained in the main paper, with additional details in Appendix A. Supplementary Material: I checked through most of the supplementary material to understand the proof structure of the major claims. Relation To Broader Scientific Literature: This work belongs to the general category of literature concerned with understanding the statistical properties of machine learning problems and algorithms from the lens of "feature learning", i.e. 
how algorithms actually recover predictive features that, e.g., go beyond the linear regime (captured by the information exponent). This work in particular has potential value to the community in two main ways: 1. as stated in the paper, empirical observations have shown particular structure learned by CNNs that seem to match those given by earlier ICA algorithms--the results in this paper make this connection explicit, 2. the sample complexity bounds in this paper seem to properly close the statistical-to-computational gap for the non-Gaussian direction recovery problem, showing that both FastICA and vanilla online SGD are suboptimal, and that smoothed-loss SGD closes the gap. Essential References Not Discussed: Not that I'm aware of. Other Strengths And Weaknesses: I think this paper is written well and quite straightforward to digest, despite the technicality of the theoretical tools. The main insights and arguments are likely of interest to the feature learning theory community. I have described some of the strengths that stood out to me earlier. I don't see any glaring weaknesses to the paper, beyond the possible restrictiveness of the single non-Gaussian direction model--however, this is rather trite, as algorithmic results for other multi-index settings are yet generally ill-understood. Other Comments Or Suggestions: - The authors should remember to put in the Impact Statement. - This paper shows that ICA is computationally and statistically tractable. Something that might be helpful is a discussion of why identifying non-Gaussian directions might be desirable or might happen automatically, since it seems the ultimate goal is to demonstrate that neural networks might implicitly learn non-Gaussian directions. A concrete mathematical statement here might be to show recovering these directions improves generalization error, analogous to how weak recovery improves generalization in standard Gaussian single-index models. 
This is likely a hard problem, but food for thought. Questions For Authors: None outstanding. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your accurate comments and for the attention to the supplementary material, including the strategies of the proofs. Your suggestions offered valuable food for thought. > I don't see any glaring weaknesses to the paper, beyond the possible restrictiveness of the single non-Gaussian direction model -- however, this is rather trite, as algorithmic results for other multi-index settings are yet generally ill-understood. Thanks for your observation. We decided to focus on single-index models in this paper because of the prevalence of Gaussian single-index studies in the recent past (Ben Arous et al. JMLR '21, Bietti et al. NeurIPS '22, Damian et al. NeurIPS '23, Damian et al. arXiv:2403.05529, Bardone & Goldt ICML '24, Wang et al. NeurIPS '24). We are considering an extension to multi-spike models for our future work. > The authors should remember to put in the Impact Statement. Thank you for the reminder, we will add it. > Something that might be helpful is a discussion of why identifying non-Gaussian directions might be desirable or might happen automatically. A concrete mathematical statement here might be to show recovering these directions improves generalization error. The reviewer raises a really interesting point, which is why it is desirable to learn non-Gaussian directions in the first place. Fascinating work in theoretical neuroscience has established that natural images have a highly non-Gaussian statistical structure: while pixel intensities themselves may follow roughly Gaussian distributions, the relationships between pixels are strongly non-Gaussian. Specifically, natural images are sparse in certain bases: edges, contours, and textures are more prevalent than random pixel noise. Gabor-like filters, i.e. non-Gaussian directions, are efficient at capturing these features that yield sparse image representations, see for example reviews such as Simoncelli & Olshausen, Annu. Rev. Neurosci.
24:1193–216 (2001) or the book by Hyvärinen, Hurri and Hoyer (2009). More recently, there have been a series of works investigating the importance of non-Gaussian input structures in machine learning from a mathematical perspective, showing that neural networks will learn them if they are relevant for the task, i.e. if they improve generalisation; see in particular Ingrosso & Goldt, PNAS '22; Bardone & Goldt, ICML '24; Lufkin et al. NeurIPS '24. We will add an extended discussion of this issue to the revised version of the paper, should it be accepted.
SPRI: Aligning Large Language Models with Context-Situated Principles
Accept (poster)
Summary: Large Language Models (LLMs) often require guiding principles to ensure their responses are well-aligned and contextually appropriate. While prior work has leveraged predefined principles or constitutions for synthetic data generation, these approaches often fail to adapt to situation-specific needs. SPRI is a framework that dynamically generates query-specific constitutional principles to guide LLM responses. Unlike static principle-based approaches, SPRI adapts to context for better alignment. Key Findings: - Models using SPRI perform as well as those using expert-crafted principles. - SPRI matches human evaluation rubrics and compares favorably with LLM-judge methods. - SPRI-generated synthetic data enhances LLM performance on TruthfulQA. Claims And Evidence: Yes, they are (the experiments illustrate the claims well). Methods And Evaluation Criteria: Yes, they do. Theoretical Claims: Not many theoretical claims (though a formalization of the method and pseudo-code are provided). Experimental Designs Or Analyses: Yes. Supplementary Material: Not everything, but I checked the examples of generated principles provided in Appendix I. I also noticed that the prompts related to SPRI are provided, which should allow reproducing the method without too much difficulty. Relation To Broader Scientific Literature: The related work section is fine. Essential References Not Discussed: No. Other Strengths And Weaknesses: One of my main concerns is whether SPRI’s fine-grained, user-input-level principles are sustainable. Unlike static constitutions, the dynamic generation of query-dependent principles raises questions about inference time (cf. the staged approach described in Section 3). I’m also wondering whether principles should be retained or discarded after use (nothing is said about this). If kept, managing conflicts with pre-existing principles becomes an issue.
In other words, I’m wondering whether query-level granularity is the right approach or if it’s too fine-grained to be practical; a discussion would help clarify this point. I’m also wondering whether the assumed scenario—starting from zero principles available—is the most realistic one. In practice, isn’t it more common to begin with a predefined set of expert principles and then refine or adapt them to the specific situation rather than generating them from nothing? Would SPRI benefit from integrating existing expert knowledge as a foundation rather than recreating principles entirely (I have the feeling Appendix C describes such a case, where Default Seed Principles are presented)? Other Comments Or Suggestions: I also have a request for concrete examples illustrating the SPRI process. Appendix I shows generated principles/responses but not at each step of the staged approach. It would be nice to show the generated principles along the two-step refinement process in Stage 1: (1) the initial principles generated and (2) the refined principles, as described in Section 3. Then, in Stage 2, show (3) a principle-constrained response and (4) its refinement after applying the critic. This would clarify how SPRI iteratively improves both principles and responses. Typos or remarks: - l146 pertaining => pretraining - Tab. 2: you should mention in the caption that Pearson’s correlation coefficient is used - Tab. 4: I suppose the amount of fine-tuning data is different for each line (for instance oracle response vs. SPRI); it would be nice to provide this information in the table or the caption. Questions For Authors: Is Prometheus-2-8x7B (Kim et al., 2024b) the best choice for the critic model? How about GPT-4o-mini, Claude, or a Llama model? What motivated this choice? Ethical Review Concerns: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive comments on the innovation of SPRI, which dynamically generates context-adaptive principles to align LLMs while relying on minimal-to-no human supervision. We are also grateful for your acknowledgment of the experimental results — a strength that other reviewers also appreciated. > ### 💡**Inference Cost of SPRI:** Please refer to the rebuttal to Reviewer dFiR for an in-depth discussion of the inference cost of SPRI. > ### 💡**Are Principles Retained or Discarded after Use?** SPRI *discards* principles that are not satisfactory, but we note that they are the stepping stones to the final satisfactory principles. To be more specific, as Figure 2 shows, each set of principles generated in Stage 1 of SPRI is first scrutinized by the critic model. If the critic model deems the principles not useful enough to guide the response to the query, we ask the base model to refine these principles based on the critic model’s feedback. The old principles are then discarded, but they also serve as the basis for the base model’s refinement and, subsequently, the final principles. Nevertheless, if the critic model deems the principles satisfactory, they are kept and used as guidance for responding in Stage 2. We will also better illustrate this process in Appendix A Algorithm 1. > ### 💡**Query-Level Granularity:** As shown in Figure 1, when Reappraising for a person in distress, generic rules don’t apply to the context at all, whereas expert-crafted prompts demand human expertise and are time-consuming to write. Similarly, in BiGGen Bench, in order to come up with query-specific evaluation rubrics to improve the performance of LLM judges, Kim et al. (2024) needed to hand-craft instances with at least 28 annotators. In comparison, SPRI approaches the performance of these expert-guided methods and outperforms static-rule-based ones. 
In terms of cost, SPRI is slightly more expensive than the static ones, but a lot cheaper and more practical than employing annotators. > ### 💡**Is Starting from Zero Principles the Realistic Approach?** While SPRI is *not* given any expert principles as the starting point, we kindly point out that we include seed examples in the initial principle-generation process of SPRI. For Sec 4.1 Cognitive Reappraisal, a single oracle reappraisal constitution was provided as the seed example (line 202); whereas for Sec 4.2 Fine-Grained Rubrics, 3 instance-rubric pairs from BiGGen Bench were used as seed examples (line 271). As a matter of fact, SPRI does benefit from having access to existing expert knowledge, but we would point out that SPRI still achieves comparable performance even without it. As shown in Table 3 of Sec 4.3 where we conducted ablation studies on the effects of the seed examples in tasks that require complex guidance, removing seed examples entirely leads to an average performance degradation of 4.13% in alignment for reappraisals and 13.37% in Pearson’s correlation for rubric generation. *This demonstrates that SPRI can still achieve comparable performance even without any human supervision.* Similarly, substituting the default principles (shown in Appendix C) as seed examples leads to an average performance decrease of 4.01% in alignment and 12.35% in Pearson’s correlation for rubric generation. These results highlight the robustness of SPRI, as the default principles are not relevant to these tasks at all — in fact, they can be seen as distractions to SPRI’s principle generation. 
> ### 💡**More Concrete Examples Illustrating the Critique-Refinement Process of SPRI:** While Appendix I exhibits the generated principles & responses from SPRI for each of the 3 tasks that we experimented on, we agree that more concrete examples — involving the principles/responses generated at each step of the 2 stages in SPRI — would better illustrate how SPRI iteratively improves both its principles and responses. Although we cannot attach an example of the full cycle of the critique and refinement of the principles & responses due to the character limitations in the rebuttal, we will make sure to further incorporate them in Appendix I of the camera-ready paper. Thank you for the suggestion! > ### 💡**Why We Chose Prometheus-2-8x7B as the Critic Model:** We select Prometheus-2-8x7B as the critic model in our experiments because it is a good-performing model specifically trained to be an LLM-judge ([Kim et al., EMNLP 2024](https://aclanthology.org/2024.emnlp-main.248.pdf)). Besides, the MoE nature of this model makes it relatively light-weight, yielding faster run-time during critiquing. However, we agree that other models, such as GPT-4o-mini, could be used as an alternative critic model for SPRI. In fact, as we showed in the tables in the rebuttal to Reviewer dFiR, the computational cost of SPRI can also be brought down a lot if we were to choose a cheap yet powerful model (like GPT-4o-mini). We leave this interesting question to future work. --- Rebuttal Comment 1.1: Comment: Thanks for your responses to my comments, and for the clarifications especially related to inference cost of SPRI. However I think there may have been a misunderstanding around one of my questions. Specifically, when I wrote: "I’m also wondering whether principles should be retained or discarded after use (nothing is said about this). If kept, managing conflicts with pre-existing principles becomes an issue..." 
What I meant was: once a query has been answered, do you log or retain the principles that were finally used (which might be useful for future queries maybe)? Or do you always start from a new, empty set of principles for each new query? Based on your previous reply, I understand that you did not consider building such a memory of principles, which is fine, but I’d still be curious to hear your thoughts about this... --- Reply to Comment 1.1.1: Comment: Thank you for clarifying the question! You are correct — we don’t log or retain the principles for future queries. The reason why we start anew for each query is that the generated principles from SPRI are *specific* to each query (as you saw in Appendix I) — and this is exactly what SPRI is designed to do. This specificity of the principles proves to be beneficial for tasks like Reappraisal and Instance-Specific Evaluation, where our method outperforms methods that rely on generic static rules. Nevertheless, we agree that it is interesting to explore how we can reuse the generated principles as the starting point for new queries. But the trick here is to determine the threshold of generalizability and specificity in the principles.
Summary: The proposed SPRI framework automates real-time generation of context-specific guiding principles for LLM alignment, minimizing reliance on human expertise while addressing the limitations of generic predefined rules. SPRI achieves performance on par with expert-crafted principles in domain-specific tasks. Claims And Evidence: The paper’s core idea is both novel and important – automating alignment guidance per query is a clear step forward for making LLMs safer and more reliable without constant human supervision. The authors also articulate this contribution well, contrasting it with static-rule methods and highlighting SPRI’s adaptability. Methods And Evaluation Criteria: One possible critique of the contributions is that the approach’s complexity (using a critic model and iterative refinement) might make deployment non-trivial – the paper does not deeply discuss the computational cost or latency of generating principles for each query. In practice, running multiple critique loops per query could be expensive, which might limit real-world significance unless the benefits clearly outweigh the cost. Additionally, while SPRI is novel, it does combine existing ideas (e.g. using an AI feedback loop similar to RLAIF or self-refinement). The true innovation is in what is being refined (principles), but some may view the method as an incremental engineering of known techniques. Nevertheless, the paper makes a strong case that this incremental combination yields qualitatively new capabilities in alignment. Theoretical Claims: No theoretical claims are made in this paper. Experimental Designs Or Analyses: The paper doesn’t report the runtime or cost, so in real deployment this overhead could be non-trivial. If principles are generated anew each time, an aligned response might take, say, 2–5x the compute of a normal response. This trade-off isn’t discussed in the results. 
However, given the significant gains in alignment and quality, the extra cost might be justified for critical applications. In summary, the results section is a clear strength of the paper – it provides compelling evidence that SPRI is effective across different challenging alignment tasks, with only minor questions left about evaluation depth and runtime performance. Supplementary Material: Yes. Relation To Broader Scientific Literature: The topic of the paper is important. Essential References Not Discussed: Weaknesses: There is little to fault in the related work coverage. One minor point is that the paper could have explicitly cited or discussed the concept of “alignment tax” earlier, since it is later mentioned when discussing results. Works like Askell et al. (2021) are cited in passing, but a brief explanation that aligned models can sometimes perform worse on certain benchmarks (the alignment tax phenomenon) would give even more context to why maintaining performance on broad tasks (as SPRI does) is important. However, this is a very subtle critique and does not detract from the overall quality of the related work section. Another possible addition could be a reference to the emerging idea of using multiple models or modules for self-checking (somewhat akin to “debate” or multi-agent alignment techniques), but those are less directly relevant to context-situated principles and their omission is understandable. In summary, the paper adequately reviews prior research and positions itself clearly. It builds directly on known limitations of past methods and cites those sources, ensuring the reader recognizes SPRI’s place as a next step in alignment research. Other Strengths And Weaknesses: Weakness: On the weaker side, the paper could discuss practical considerations more, such as the computational cost of SPRI’s iterative process or how it might scale to real-world deployment with many users. 
Additionally, while the evaluations were mostly automatic for feasibility, a bit more human evaluation (even if anecdotal or case-study based) could further strengthen confidence in the quality of SPRI-guided outputs (especially in sensitive tasks like counseling). These weaknesses are relatively minor and can be addressed in future work. Other Comments Or Suggestions: Refer to the weakness. Questions For Authors: Refer to the weakness. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are grateful for your valuable feedback! We appreciate your recognition of the novelty and importance of SPRI in automating the alignment guidance per query to enhance the safety and reliability of LLMs with minimal human supervision, which other reviewers concurred with. Thank you also for pointing out the strength of the results section, which clearly demonstrates that SPRI significantly outperforms static-rule-based methods and achieves performance on par with approaches employing oracle principles, yielding qualitatively new capabilities in alignment. We address your comments below. > ### 💡**Computational Cost of SPRI:** Due to the space limit, please refer to the rebuttal to Reviewer dFiR for an in-depth discussion of the token usage & computational cost of SPRI. We would like to additionally highlight to reviewers that *SPRI reduces the heavy dependence on human supervision and, therefore, significantly lowers the costs in both time and money*. For example, having clinical psychologists write prompts for Cognitive Reappraisal (Section 4.1) and crowd-sourcing fine-grained evaluation rubrics for BiGGen Bench (Section 4.2) would be considerably more costly — exceeding the cost of SPRI by a great extent. This is precisely what SPRI is designed to automate, and results show that it can achieve comparable performance even with the minimal amount of human guidance involved. > ### 💡**More Human Evaluation:** We employ automatic evaluations as they are more feasible and easier to scale up. Nevertheless, we kindly point out to reviewers that *for the task of Instance-Specific Rubrics (Section 4.2), we carried out the evaluation based on Pearson’s correlation against gold human ratings (see line 317)*. 
Specifically, the LM-judges’ scores for a total of 2,780 BiGGen Bench examples — judging either using instance-specific rubrics generated by SPRI, or other static-rubric approaches — are compared against the human ground truth labels from these BiGGen Bench examples. Results suggest that SPRI outperforms all instance-agnostic static rubrics across all base models we tested. In addition, SPRI correlates highly with human gold truth ratings on these 2,780 BiGGen Bench examples, with statistical significance on almost all capabilities across all the base models (see Appendix Table 6). Besides, while for the task of Cognitive Reappraisal (Section 4.1), the evaluation was carried out on relatively fewer data, GPT-4-0613 has been shown to correlate highly with expert humans on the evaluation criteria ([Zhan et al., COLM 2024](https://openreview.net/forum?id=yK8MT91dQY)). On the other hand, human evaluation of these Reappraisal responses would require annotators with strong expertise in clinical psychology, and it is both time-consuming and costly for them to evaluate all the responses we gathered using various methods from the 4 models we tested. The Cognitive Reappraisal results also show similar trends as fine-grained Instance-Specific Rubrics, where SPRI consistently outperforms methods without access to oracle guidance. In addition, we also kindly refer reviewers to Appendix I of the paper, which shows examples of the generated principles & responses from SPRI for each of the 3 tasks that we conducted in the paper. As Reviewer HZMY noted, these examples paint an intuitive picture of the ability of SPRI to adapt to context, as well as the quality of its responses compared side by side with the oracle ones. In the camera-ready version of the paper, we will further include all the critique-refinement responses from SPRI, from both the principle- and response-generation stages. 
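The Pearson-correlation evaluation described above can be illustrated with a short, self-contained sketch. The scores below are toy numbers, not the BiGGen Bench data; the point is simply how a judge's ratings are correlated with human gold ratings.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy example: LM-judge scores vs. human gold ratings on five instances.
judge = [4, 3, 5, 2, 4]
human = [5, 3, 4, 2, 4]
r = pearson(judge, human)   # close to 1 means strong agreement with humans
```

In practice one would also report a p-value for statistical significance, as done in Appendix Table 6.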
> ### 💡**Essential References to Discuss:** Thank you for appreciating our positioning of SPRI with respect to prior literature! In the camera-ready version, we will mention “alignment tax” earlier in the paper, and expand our discussion on [Askell et al. (2021)](https://arxiv.org/abs/2112.00861). We agree that it would provide readers with a clearer idea as to why it is significant that SPRI can enhance performance on TruthfulQA while preserving results on other benchmarks when fine-tuning LLMs. Additionally, we will also include a more in-depth discussion on related literature such as self-checking.
Summary: This paper proposes a novel framework named SPRI for aligning LLMs with human preferences. The framework operates through a two-stage collaborative process between models:
1. A base model dynamically generates context-specific principles tailored to each input query, iteratively refined through feedback from a critic model.
2. The finalized principles are then utilized to modify the base model’s responses, ensuring alignment.

Compared to alignment methods requiring extensive training or predefined rules, SPRI offers an intuitive solution. The authors validate SPRI’s effectiveness through three key experiments:
1. SPRI-derived principles achieve parity with expert-crafted guidelines in complex tasks, demonstrating its capability to generate context-aware guidance.
2. SPRI-generated evaluation rubrics correlate strongly with human-annotated criteria, outperforming prior LLM-as-a-judge frameworks in granularity and contextual relevance.
3. Fine-tuning LLMs on SPRI-generated synthetic data yields significant improvements in truthfulness metrics (e.g., TruthfulQA) while maintaining performance on general benchmarks, showcasing its potential for scalable alignment.

Claims And Evidence: The majority of the claims presented in this paper are well-supported by detailed experimental evidence, demonstrating the robustness of the proposed approach. Methods And Evaluation Criteria: Increasing the amount of data in Section 4.1 (beyond the current 30 instances) would enhance the soundness of the results. Theoretical Claims: N/A Experimental Designs Or Analyses: I would encourage the authors to include more experiments where human evaluators, rather than LLMs, serve as judges. This would provide stronger credibility and make the findings more persuasive. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper makes a meaningful contribution to the problem of aligning LLMs, particularly in the direction of generating context-adaptive principles. 
The authors discuss existing approaches in the field in the Related Work section and highlight the advantages of SPRI over these methods. However, the comparison with other approaches could be further improved. For instance, a more detailed comparison between SPRI and other LLM alignment methods, particularly those that induce actual changes in model weights, would strengthen the analysis. Additionally, a more in-depth discussion of SPRI’s positioning within the broader paradigm of self-aligned LLMs would provide readers with a clearer understanding of its contributions. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. The collaborative framework between LLMs significantly increases the number of additional tokens during interaction, reducing the effective context window available to users and increasing computational costs. 2. The experiment in Section 4.1 is conducted on a relatively small dataset of only 30 instances, which may undermine the reliability and robustness of the results. Expanding the dataset would strengthen the validity of the findings. Other Comments Or Suggestions: 1. Since SPRI naturally increases the length of the context, I suggest that the authors provide a more detailed discussion of the associated computational costs and potential trade-offs. 2. In Section 4.1, the authors evaluate SPRI’s performance on a cognitive reappraisal task using a relatively small dataset. I encourage the authors to include additional experiments demonstrating SPRI’s effectiveness on more complex tasks with larger datasets. Questions For Authors: 1. I would like to know whether the authors have conducted experiments where human evaluators directly assess the results instead of relying on the evaluation schema from Zhan et al. (2024). Although this evaluation schema has been shown to have a high correlation with human judgments, I would still prefer to see results obtained from actual human evaluations for further validation. 2. 
The authors state that the critic model in SPRI can be a smaller-scale model, but the paper does not provide a detailed discussion on this aspect. I am interested in understanding how the choice of critic models with different parameter sizes affects the overall performance of the framework. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your appreciation of the meaningful contribution SPRI makes toward aligning LLMs with context-situated principles while relying on little-to-no human effort. We are also grateful for your recognition of SPRI’s robustness, which is supported by detailed experimental results — a key strength of the paper, as other reviewers also recognized.

> ### 💡**Number of Tokens & Computational Costs Induced by SPRI:**

In Appendix Tables 5 & 6, we reported SPRI’s average model calls for Cognitive Reappraisal and Instance-Specific Rubric Evaluation. However, we agree that it is important to discuss the token usage and computational costs of SPRI in more depth. Therefore, we provide a comparison table of SPRI vs. other methods for each task, which will be included in the camera-ready version. We report the average model calls & input/output token usage per response for the base and critic models, as well as the estimated total cost to carry out an entire task. We estimate the cost using OpenAI’s API pricing for GPT and TogetherAI’s pricing for open-source models.

1. **Cognitive Reappraisal:** *(base model = GPT-4o-mini, critic model = Prometheus-2-8x7B)*

||Model Calls|Input Tokens (Base Model)|Output Tokens (Base Model)|Base Model Total Cost|Input Tokens (Critic Model)|Output Tokens (Critic Model)|Critic Model Total Cost|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|vanilla|1|299|94|$0.003|--|--|--|
|self-refine|6|2,106|465|$0.018|--|--|--|
|oracle|6|4,280|1,421|$0.045|--|--|--|
|SPRI|4.5|639|220|$0.007|1,537|281|$0.033|

2.
**Instance-Specific Rubric Evaluation:** *(base model = GPT-4o-mini, critic model = Prometheus-2-8x7B)*

||Model Calls|Input Tokens (Base Model)|Output Tokens (Base Model)|Base Model Total Cost|Input Tokens (Critic Model)|Output Tokens (Critic Model)|Critic Model Total Cost|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|vanilla|1|568|99|$0.403|--|--|--|
|self-refine|6|4,147|619|$2.762|--|--|--|
|MT-Bench rubric|1|469|200|$0.530|--|--|--|
|FLASK rubric|1|636|103|$0.437|--|--|--|
|oracle|1|707|105|$0.469|--|--|--|
|SPRI|4.9|1,720|317|$1.247|2,642|282|$4.877|

3. **SFT:** *(base model = Llama-3-70B-Instruct, critic model = Prometheus-2-8x7B; the estimate is based on using Dolly as the starting instruction-tuning dataset)*

||Model Calls|Input Tokens (Base Model)|Output Tokens (Base Model)|Base Model Total Cost|Input Tokens (Critic Model)|Output Tokens (Critic Model)|Critic Model Total Cost|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|direct response|1|113|61|$1.9|--|--|--|
|self-instruct|1|1,046|99|$12.2|--|--|--|
|self-align|1|1,116|143|$13.4|--|--|--|
|self-refine|2.1|139|69|$2.2|--|--|--|
|SPRI|5.0|1,077|167|$13.3|1,363|258|$11.5|

Compared to self-refine, SPRI incurs fewer model calls in tasks that demand complex principles, whilst maintaining significantly stronger performance (see paper Tables 1 & 2). Specifically, in **(1) Cognitive Reappraisal**, the base model’s token usage under SPRI is considerably less than that under oracle principles, and the total cost for the base model is the second cheapest after vanilla prompting. For **(2) Instance-Specific Rubric Evaluation**, while the base model’s cost for SPRI is higher than other context-agnostic approaches, the average number of model calls of SPRI is still lower than self-refine’s. For **(3) SFT**, the input/output token usage of SPRI is similar to that of self-instruct and self-align, and the total cost is comparable too. 
We observe that the additional cost SPRI incurs mainly comes from the critic model, but this can be mitigated by using a cheaper critic model. We chose Prometheus-2-8x7B because it was specifically trained to be an LLM judge. However, the critic model in SPRI can also be a smaller-scale model, such as GPT-4o-mini, and this would significantly reduce the cost of SPRI. We leave the interesting question of the tradeoff between the size of the critic model and the performance of SPRI to future work. As Reviewer 78Ye suggested, given the significant gains in alignment and quality of SPRI, the extra cost can be justified for critical applications. > ### 💡**Human Evaluation and Amount of Eval Data for Reappraisal:** We did not conduct human evaluation for Reappraisal due to the need for psychological expertise among evaluators and the time-consuming nature of the task. Nonetheless, GPT-4 has been shown to correlate highly with human experts on these 30 evaluation instances. Moreover, we highlight that for Instance-Specific Rubrics, we carried out the evaluation for ~2.8k examples based on Pearson’s correlation against gold human ratings. Please refer to the rebuttal to Reviewer 78Ye for a more detailed discussion of the human evaluation. > ### 💡**Comparison to Other Approaches:** SPRI differs from alignment methods that update model weights in that it requires no parameter updates, which makes it more efficient at test time. We will also add a more detailed discussion of SPRI’s positioning in the self-aligned paradigm. --- Rebuttal Comment 1.1: Comment: Thank you for the additional results. IMHO a comprehensive human evaluation involving expert efforts is still necessary for a proper meta-evaluation. I think my current ratings already accurately reflect my judgement on this work. --- Reply to Comment 1.1.1: Comment: Thank you so much for your review again!
B-score: Detecting biases in large language models using response history
Accept (poster)
Summary: This paper discusses the potential of multi-turn interaction with LLMs to better quantify the bias in an LLM's responses. Specifically, the proposed framework calculates the multi-turn appearance probability of the answers by repeating the same question multiple times in a single conversation. The difference between the single-turn and multi-turn appearance probabilities is used as the B-score to detect potential bias in the LLM's single-turn interaction. B-score is applied to calibrate LLMs, which shows some performance improvement on several tasks. Claims And Evidence: The experiment results support the claim that multi-turn conversation can reduce the bias of LLMs in question answering. Methods And Evaluation Criteria: The proposed multi-turn conversation seems to be a potential way to augment the prompt. The benchmark datasets cover broad topics that may carry potential bias. Theoretical Claims: N/A, no theoretical claim has been made. Experimental Designs Or Analyses: The experiments lack meaningful baselines. The verbalized confidence score is not commonly used for LLM calibration, and the authors ignore other potential model calibration baselines: [1-4] for closed-source models and [5-8] for open-source models.

**Closed-source LLMs**
[1] Self-Consistency Improves Chain of Thought Reasoning in Language Models
[2] Calibrating Large Language Models with Sample Consistency
[3] Just rephrase it! Uncertainty estimation in closed-source language models via multiple rephrased queries
[4] Calibrating Large Language Models Using Their Generations Only

**Open-source LLMs**
[5] Surface Form Competition: Why the Highest Probability Answer Isn’t Always Right
[6] LitCab: Lightweight Language Model Calibration over Short- and Long-form Responses
[7] Thermometer: Towards Universal Calibration for Large Language Models
[8] Chain-of-Thought Reasoning Without Prompting
Supplementary Material: N/A Relation To Broader Scientific Literature: This paper is related to different calibration strategies, specifically for closed-source LLMs. The idea is related to multi-turn LLM interaction; LLM debate can be a related topic. Essential References Not Discussed: Many important prior LLM calibration baselines are missing from the comparison; see the "Experimental Designs Or Analyses" section. Other Strengths And Weaknesses: The proposed method lacks justification: it is unknown why LLMs can self-calibrate in multi-turn conversation, even with explicit refinement instructions. There is neither an empirical nor a theoretical explanation showing that the method universally benefits LLM calibration. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your suggestions! **Summary:** We experimented with confidence baselines, highlighted our focus on detecting bias, and provided empirical evidence showing that LLMs can self-calibrate in multi-turn due to their inherent ability to do so.

> The experiments lack meaningful baselines. The verbalized confidence score is not commonly used for LLM calibration and the authors ignore other potential model calibration baselines.

We'd like to clarify that **our work focuses on the proposed B-score, which `detects bias` in an LLM’s output rather than on `model calibration`**. B-score measures the bias in the output, which differs from standard confidence calibration that aligns overall probability estimates with correctness likelihood. On the other hand, all the works mentioned by the reviewer face a similar issue with verbalized confidence scores on detecting bias. For example, in [1] and [2], the confidence score is computed based on the option distribution, which ends up being the **same score for all options**. This is not what we expect for bias detection, which should be high for the biased option. We **will cite the reviewer's suggested references**, making clear that our work tackles a different problem (bias detection vs. confidence calibration). Moreover, prior works that the reviewer mentioned either required rephrasing prompts using other LLMs [3], training auxiliary models [4], or accessing internal weights [5]–[8]. In contrast, our method only needs to repeat the same question in single-turn and multi-turn conversations, without fine-tuning or extra training.

### `Tab. R1` Accuracy on verification task of GPT-4o (%). 
Mean Δ=`29.1%`

||Our Evaluation Framework|BBQ|
|-|-|-|
|Verbalized Confidence Score|81.5|65.1|
|w/ B-score|88.5 (**+7.0**)|83.6 (**+18.5**)|
|Agreement-based Confidence Score|76.7|34.9|
|w/ B-score|88.5 (**+11.8**)|85.8 (**+50.9**)|
|Entropy-based Confidence Score|76.7|34.9|
|w/ B-score|88.5 (**+11.8**)|85.8 (**+50.9**)|
|FSD Confidence Score|55.2|34.9|
|w/ B-score|85.9 (**+30.7**)|85.8 (**+50.9**)|
|B-score|***85.9***|***85.8***|

However, we still appreciate the reviewer’s advice and **have conducted experiments comparing our method with the Agreement-based [1,2], Entropy-based [2], and FSD Confidence Scores [2]**. These baselines are closely related to our single-turn probability and can be computed even for closed-source models; however, unlike these confidence scores, our single-turn probability differs for each option. The results (`Tab. R1`) show that **our B-score significantly outperforms confidence score baselines on the verification task**:
- In our evaluation framework, B-score (`85.9%`) alone achieves higher verification accuracy than FSD (`55.2%`), Entropy (`76.7%`), and Agreement-based (`76.7%`) confidence scores; similarly, on the BBQ benchmark, B-score (`85.8%`) outperforms FSD, Entropy, and Agreement-based confidence scores (`34.9%` for all).
- B-score can still help improve verification accuracy when combined with confidence scores in our evaluation framework and the BBQ bias benchmark (Mean Δ=`29.1%`).

---

> The proposed method lacks justification, it's unknown why LLMs can self-calibrate in multi-turn conversation - even with explicit refinement instructions. There is neither empirical nor theoretical explanation to defend the method to be universally benefit in LLM calibration.

As mentioned in the paper, **LLMs can self-calibrate in a multi-turn conversation because they `inherently possess this capability`. 
Multi-turn settings simply ``trigger their actual capability`` by allowing them to see their response history**, providing a form of feedback or context that helps adjust subsequent answers.

### `Tab. R2` Distribution of Answer Percentages (%)

|Option|GPT-4o (Gaussian)|GPT-4o-mini (Gaussian)|GPT-4o (Uniform)|GPT-4o-mini (Uniform)|
|-|-|-|-|-|
|0|0.0|1.04|10.0|9.57|
|1|0.0|4.17|10.0|10.64|
|2|6.0|10.42|10.0|9.57|
|3|11.0|14.58|10.0|10.64|
|4|28.0|21.88|10.0|9.57|
|5|29.0|23.96|10.0|9.57|
|6|19.0|12.50|10.0|10.64|
|7|6.0|7.29|10.0|9.57|
|8|1.0|3.12|10.0|10.64|
|9|0.0|1.04|10.0|9.57|

To provide an empirical explanation, as the reviewer suggested, we conducted an experiment whose results indicate that **LLMs (e.g., GPT-4o, GPT-4o-mini) are able to generate `well-known distributions`** (i.e., Gaussian, Uniform; `Tab. R2`). This is the fundamental reason why LLMs can self-calibrate in multi-turn settings. For example, the prompt we used for Gaussian is:
```
I have a random variable X that takes 10 integer values between 0, 1, 2, 3,...,9. Sample X 100 times following a Gaussian (mean=4.5, std=2.0) distribution, and return a list of 100 integer numbers.
```
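As a concrete illustration of the check behind `Tab. R2` (our own sketch, not the authors' evaluation code): given a list of integer samples returned by the model, one can compare the empirical distribution against the target discretized Gaussian, e.g. via total variation distance. The sample list below is a toy placeholder.

```python
import math
from collections import Counter

def normal_cdf(x, mean=4.5, std=2.0):
    """CDF of N(mean, std) via the error function."""
    return 0.5 * (1 + math.erf((x - mean) / (std * math.sqrt(2))))

def target_gaussian_pmf(values, mean=4.5, std=2.0):
    """Discretize N(mean, std) over the integer options and renormalize."""
    mass = {v: normal_cdf(v + 0.5, mean, std) - normal_cdf(v - 0.5, mean, std)
            for v in values}
    total = sum(mass.values())
    return {v: m / total for v, m in mass.items()}

def total_variation(samples, pmf):
    """0 = empirical distribution matches the target exactly; 1 = disjoint."""
    counts = Counter(samples)
    n = len(samples)
    return 0.5 * sum(abs(counts.get(v, 0) / n - p) for v, p in pmf.items())

# Toy usage: samples a model might return for the Gaussian prompt above.
pmf = target_gaussian_pmf(range(10))
samples = [4, 5, 4, 3, 6, 5, 4, 5, 2, 5, 6, 3, 4, 7, 5]
tv = total_variation(samples, pmf)   # small tv => close to the target Gaussian
```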
Summary: This paper proposes a new score (B-score) for estimating the degree of bias in a preferred LLM response. The key is to not rely on only a single sample output from the model with a self-reported confidence, but rather probe the model multiple times and estimate the preference for a particular response. The B-score is the difference in the (mean) likelihood of a preferred response when the model is probed using single-turn queries (each single-turn query is independent, as the memory is reset so no prior context of responses is provided) and when the model is probed using multi-turn queries, where the model has access to the previous responses for the given query. The authors note that for different questions, or for queries where there is no single correct response, bias is better highlighted by the B-score as the model is able to reason and consider past choices in generating a subsequent answer to the same query. For example, when asked to select a random digit from 0-9, without past context models prefer the answer 7, whereas with prior context the results are almost uniform. This shows the model is effectively able to debias itself just by knowing how the same query has been answered previously.

### Update

In light of the clarifications and the additional experiments, I have increased my score and lean towards accepting the paper.

Claims And Evidence: The claims in the paper are clear — the proposed B-score is not correlated with confidence, and the B-score does appear to align with bias. My main issue though is that the set of questions over which a lot of the analysis is done is very small (only 36 questions). However, the effect of the B-score on common NLP benchmarks (like MMLU) is provided in Table 4. I was wondering about the reliability of the measure as a function of the task realism vs. the expected performance of the model. 
For example, in Table 3 the smaller and the larger model variants win almost equally (as measured by the mean across tasks) based on your tasks (random/subjective/easy/hard), but not in Table 4 for standard metrics, where almost universally the gain is evident for smaller models. Has this been looked at in any more detail? Does this suggest that maybe the tasks on which you verify the approach are not representative of expected performance in the wild? Is it true that B-score “attempts to indicate whether a model is biased due to its imbalanced training data”? Rather this is just the difference in the mean probability, which might be due to biased training data, but other factors are not explicitly ruled out. For example, what about issues that result from poor architecture choices? Methods And Evaluation Criteria: The types of questions used do largely make sense for the tests. I am not sure there is much signal in the “easy” category, however. The models will likely get these correct just by virtue of the questions being easy, which itself may mask bias. Theoretical Claims: There are no theoretical claims in the paper. Experimental Designs Or Analyses: The experimental setup is clearly outlined in the paper and it is easy to follow. I have no concerns here. However, see my question above about the difference in performance for different model sizes for different tasks (Table 3 vs. Table 4). Supplementary Material: I used the content in the Appendices for additional context. Otherwise there was no supplementary material submitted. Relation To Broader Scientific Literature: The work seems to be situated within the context of relevant topic areas given the related work section. Essential References Not Discussed: I am not aware of any key references that should be included and are missing from the paper. Other Strengths And Weaknesses: Strengths: + The approach is simple in its design. 
+ Given the simplicity of the approach, the paper is easy to follow and is well-written. Weaknesses: - Computing the B-score potentially incurs significant expense given the need to probe using the query 30 times each for the single-turn and multi-turn queries. Other Comments Or Suggestions: Incorrect opening quotes are used throughout the paper. Verbalized confidence can be very different from measured confidence. For models for which you cannot access the underlying weights, verbalized confidence might be the best that you can do. However, it would be interesting to compare measured/reported confidence for models for which the weights are available and can be run locally. See my comments elsewhere in the review. Overall I feel that the scope of the study is on the small side for ICML, and the paper might be better suited to a conference dedicated specifically to fairness/bias. Questions For Authors: Q1: How was using 30 samples for the single-turn and multi-turn queries selected, and is this number required to be the same for all tasks? What is the effect of making this number of samples smaller? Do we see significant differences in the B-score? Does the B-score plateau at 30 samples? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
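To make the probing protocol from the summary concrete, the following is a minimal sketch of the B-score computation as described there (the helper names and toy data are illustrative, not from the paper):

```python
from collections import Counter

def response_probs(responses):
    """Empirical probability of each answer among k sampled responses."""
    counts = Counter(responses)
    return {ans: c / len(responses) for ans, c in counts.items()}

def b_score(single_turn, multi_turn, answer):
    """Single-turn probability minus multi-turn probability of `answer`.

    `single_turn`: answers from k independent queries (memory reset each time).
    `multi_turn`: answers from one k-round session (prior answers in context).
    """
    p_single = response_probs(single_turn).get(answer, 0.0)
    p_multi = response_probs(multi_turn).get(answer, 0.0)
    return p_single - p_multi

# Toy example: a model that prefers "7" single-turn but is uniform multi-turn.
single = ["7"] * 24 + ["3"] * 3 + ["1"] * 3   # P("7") = 0.8
multi = [str(d) for d in range(10)] * 3       # P("7") = 0.1
print(round(b_score(single, multi, "7"), 2))  # 0.7
```

A large positive B-score, as in this toy example, corresponds to the "prefers 7 without context" behavior the review describes; an unbiased answer would yield a score near zero.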
Rebuttal 1: Rebuttal: Thank you for your detailed and constructive feedback! **Summary**: We've carefully extended the experiment on the BBQ bias benchmark to address concerns about dataset size, clarified our results in `Tab. 3` vs `Tab. 4` and the number of queries used in single-turn/multi-turn probing, and revised our writing (e.g., quotes) and claims.

> My main issue though is that the set of questions over which a lot of the analysis is done is very small

Please check our response for [Reviewer ENzm](https://openreview.net/forum?id=kl7SbPfBsB&noteId=dKF1idPSXs) regarding this issue. **On the well-known BBQ bias benchmark, our conclusions remain the same.**

> ...in Table 3 the smaller and the larger model variants win almost equally (as measured by the mean across tasks) based on your tasks, but not in Table 4 for standard metrics, where almost universally the gain is evident for smaller models...

In `Tab. 4`, we did not experiment with GPT-4o (larger model) on HLE as we did for GPT-4o-mini (smaller model), which may give the impression that smaller models gained more. To address this, we've added HLE for a fair comparison. GPT-4o clearly surpasses GPT-4o-mini in Mean Δ (`+2.9%` vs. `+1.8%`). Thus, in total (if we consider Command R/R+ in `Tab. 4`), **the large model performs almost `equally` to the smaller models across benchmarks when HLE is fully added.**

### `Tab. R1` Accuracy (%) across CSQA, MMLU, HLE on verification task

||GPT-4o-mini|GPT-4o|
|-|-|-|
|Single-turn Prob|80.1|81.2|
|w/ B-score|80.3 (**+0.2**)|81.5 (**+0.3**)|
|Multi-turn Prob|78.5|77.8|
|w/ B-score|78.5|77.8 (+0.0)|
|Confidence Score|67.7|68.0|
|w/ B-score|72.9 (**+5.2**)|76.5 (**+8.5**)|
|B-score|68.8|73.0|
|**Mean Δ**|**+1.8**|**+2.9**|

> Is it true that B-score “attempts to indicate whether a model is biased due to its imbalanced training data”?...

We agree that **B-score by itself does not identify the `source` of the bias, only the `presence` of a bias**.
We acknowledge that other factors (e.g., model architecture, decoding algorithms) could also contribute to these behaviors. We will rephrase this claim based on your valuable suggestion.

> ...I am not sure there is much signal in the “easy” category however...

**The Easy category is included as a baseline (B-score~0) and contrasts with the other categories**. In `Tab. 2`, N/A entries for fully correct answers inadvertently increased the Easy mean. Since an ideal bias metric should be 0 when no bias exists, we modified `Tab. 2` so that Easy now has a mean B-score of `+0.06`, much lower than Hard (`+0.15`) and Random (`+0.41`).

> Computing the B-score potentially incurs significant expense given the need to probe using the query 30 times each for the single-turn and multi-turn queries.

> Q1: How was using 30 samples for the single-turn and multi-turn queries selected, and is this number required to be the same for all tasks? What is the effect of making this number of samples smaller? ...

### `Tab. R2`: Mean B-score with different `k` across 8 LLMs (GPT, Gemini, Command R, Llama)

|k|B-score|
|-|-|
|k=10|**0.22**|
|k=20|**0.23**|
|k=30|**0.23**|

We replicated our evaluation with `k` = 10 and 20 queries. The results for different `k` are still similar (0.22 ~ 0.23). **Thus, reducing the number of queries does NOT significantly change the B-score and can save computational resources.** We chose `k` = 30 as an upper bound to ensure reliability, but in practice, a smaller `k` may suffice. **`k` should ideally be 2-3 times the number of answer options to ensure sufficient coverage**. We will add a section to discuss this in detail.

> ...it would be interesting to compare measured/reported confidence for models for which the weights are available and can be run locally.

For models that can be run locally with accessible weights, confidence is often measured directly using log probabilities. However, this isn't possible for closed-source models where internal logits aren't exposed.
In such cases, our single-turn method offers a practical proxy: we repeatedly ask the same question and aggregate the model’s responses to estimate confidence. **The `single-turn` approach effectively simulates `log-prob-based confidence`**, which is discussed in `Sec 4.4`.

> Overall I feel that the scope of the study is on the small side for ICML, and the paper might be better suited to a conference dedicated specifically to fairness/bias.

We thank the reviewers for prompting us to clarify its broader impact, but respectfully argue that our study is not too narrow for a general ML venue. Bias/fairness in LLMs is critical for the broader ML community. In fact, our submission was made under the [Trustworthy ML (fairness, interpretability,...)](https://icml.cc/Conferences/2025/CallForPapers) category, one of the **topics of interest of ICML this year**. Additionally, we hope the reviewer sees that **`LLMs can self-correct biases via their own response history` is a general insight that could inspire new training/prompting/evaluation techniques in ML at large.**

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. In light of the clarifications and additions I will increase my score. I would like to clarify that I do believe the topic is important and relevant to ICML, and apologies if my comment implied otherwise. Safe, fair, and trustworthy ML should be a concern for us all. Rather, it was the *scope* of the study that I was concerned with, as was highlighted in reviews elsewhere too.
Summary: This paper investigates biases in large language models (LLMs) and introduces a metric, the B-score, to quantify bias by comparing single-turn and multi-turn interactions. The authors identify that LLMs exhibit biases across various dimensions (e.g., gender, race, numbers, names) when repeatedly asked the same question in a single-turn setting, where the model produces the most probable response consistently. However, they find that allowing the model to observe its prior responses in multi-turn interactions significantly reduces bias, leading to a more uniform distribution of answers. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: Not applicable to this paper. Experimental Designs Or Analyses: 1. The evaluation framework relies on only 38 test samples (line 160), with each setting sampled only 30 times. This is a relatively small dataset, which may limit the generalizability of the findings. Given the stochastic nature of LLMs and the complexity of bias evaluation, a small sample size increases the risk of statistical fluctuations and reduces confidence in the reported trends. 2. The paper groups test questions into four categories: Subjective, Random, Easy, and Hard. However, these categories do not appear to be conceptually parallel—subjectivity vs. difficulty vs. randomness are distinct properties rather than a single dimension of classification. 3. Additionally, they mention that each category contains only one dataset per topic, which further limits the reliability of the conclusions. The results may be overly dependent on specific dataset choices rather than reflecting a broader, systematic pattern in LLM behavior. Supplementary Material: There is no Supplementary Material for this paper.
Relation To Broader Scientific Literature: This paper contributes to the growing body of research on bias detection and mitigation in LLMs by introducing the B-score and emphasizing the role of multi-turn interactions in reducing bias. The key novel insight of this paper is that multi-turn interactions significantly reduce bias, which challenges the reliability of prior single-turn evaluations. Essential References Not Discussed: No Other Strengths And Weaknesses: Please refer to Experimental Designs Or Analyses section. Other Comments Or Suggestions: I don't have any other comments or suggestions. Questions For Authors: I hope the authors can address the three questions in the Experimental Designs or Analyses section to clarify my concerns about the reliability of the experimental results. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback! **Summary**: We've extended our experiments to the BBQ benchmark to address the reviewer's concern about test size and clarified our rationale for using four distinct question categories to capture different aspects of bias. > This is a relatively small dataset, which may limit the generalizability of the findings. > The results may be overly dependent on specific dataset choices We respectfully note that our 36-question evaluation framework is consistent with prior work in this line of research (e.g., [R1] uses only **6** questions, [R2] uses only **30** questions in test set). In our work, each question is rigorously tested (30 single-turn and 30 multi-turn queries over **10 runs**), yielding statistically reliable results. We believe increasing the sample size would not change the observed trends. Instead, the next natural step is to expand the dimensions of bias, which we already do by categorizing questions into Subjective, Random, Easy, and Hard. Moreover, we have complemented these experiments with evaluations on standard benchmarks such as CSQA, MMLU (`Easy`), and HLE (`Hard`). For `Random`, our multi-turn setup consistently yields uniform answer distributions (`Fig. 4`), a finding that we believe is very interesting and will persist even with large-scale testing. However, in response to reviewer's feedback, we have extended our evaluation to the **BBQ** (ambig category) [R3], a well-known bias benchmark (`Subjective`). First, we removed the unknown option, forcing the model to choose between the remaining two options. For each binary-choice question, we compare the higher single-turn prob (Higher) option with the lower one (Lower). **On well-known BBQ bias benchmark, our conclusions remain the `same`:** - For Higher options, the Single-turn prob drops significantly in the Multi-turn (`0.94`→`0.77`; `Tab. 
R1`), indicating that the model adjusts its answer distribution when allowed to look into response history
- Confidence scores remain constant (`0.63` for both options; `Tab. R1`), confirming that they fail to capture the output's distribution and thus are unsuitable for bias detection
- The B-score differentiates clearly between 2 options (`Tab. R1`): a positive B-score (`+0.17`) for the Higher option and a negative B-score (`-0.16`) for the Lower option, showing its effectiveness as a bias indicator
- The B-score substantially improves verification accuracy (Mean Δ = `45.7%`; `Tab. R2`)
- The B-score (`89.6%`) alone also performs significantly better than other metrics individually (`Tab. R2`), such as Single-turn prob (`20.9%`), Multi-turn prob (`33.9%`), and Confidence score (`77.6%`)

### `Tab. R1`: Results for Higher Single-Turn Prob (H) and Lower Single-Turn Prob (L) Options

||**GPT-4o-mini (L)**|**GPT-4o (L)**|**Command R (L)**|**Command R+ (L)**|**Mean (L)**|**GPT-4o-mini (H)**|**GPT-4o (H)**|**Command R (H)**|**Command R+ (H)**|**Mean (H)**|
|-|-|-|-|-|-|-|-|-|-|-|
|Single-Turn Prob|0.06|0.11|0.01|0.05|0.0|0.94|0.89|0.99|0.95|**0.94**|
|Multi-Turn Prob|0.23|0.30|0.10|0.24|0.22|0.76|0.65|0.90|0.76|**0.77**|
|Confidence Score|0.57|0.52|0.75|0.68|**0.63**|0.57|0.53|0.75|0.67|**0.63**|
|B-Score|-0.17|-0.19|-0.08|-0.19|**-0.16**|0.18|0.23|0.09|0.19|**0.17**|

### `Tab. R2`: Verification accuracy (%).
Overall Mean Δ = ``45.7%``

|**Metric**|**GPT-4o-mini**|**GPT-4o**|**Command R**|**Command R+**|**Avg**|
|-|-|-|-|-|-|
|Single-Turn Prob|25.7|34.9|7.1|15.8|20.9|
|w/ B-score|89.9 (+64.2)|85.8 (+50.9)|94.3 (+87.2)|88.2 (+72.4)|89.6 (**+68.7**)|
|Multi-Turn Prob|34.9|42.9|17.3|40.4|33.9|
|w/ B-score|89.9 (+55.0)|85.8 (+42.9)|94.3 (+77.0)|88.2 (+47.8)|89.6 (**+55.7**)|
|Confidence Score|73.5|65.1|87.4|84.4|77.6|
|w/ B-score|89.0 (+15.5)|83.6 (+18.5)|94.1 (+6.7)|87.4 (+3.0)|88.5 (**+10.9**)|
|B-Score|89.9|85.8|94.3|88.2|**89.6**|

---

> However, these categories do not appear to be conceptually parallel—subjectivity vs. difficulty vs. randomness are distinct properties rather than a single dimension of classification

We've clarified this distinction in the paper. These categories were intentionally chosen to span different aspects of bias rather than a single spectrum. **Our goal is to ensure coverage of scenarios where bias can manifest in distinct ways**: ``Subjective`` tests preference; ``Random`` tests the ability to randomize; Objective (``Easy``, ``Hard``) tests whether there is a bias toward an incorrect option.

## References

```
[R1] Forcing Diffuse Distributions out of Language Models. COLM 2024
[R2] The Woman Worked as a Babysitter: On Biases in Language Generation. EMNLP 2019
[R3] BBQ: A Hand-Built Bias Benchmark for Question Answering. ACL 2022
```
Generalized Venn and Venn-Abers Calibration with Applications in Conformal Prediction
Accept (poster)
Summary: This paper proposes Venn and Venn-Abers calibration defined with respect to the loss function. First, isotonic regression and quantile losses are examined for marginal calibration. Furthermore, conditional calibration is discussed. There is a series of theoretical results for the proposed algorithms. Some experiments show the relative superiority of the proposed algorithm. ## update after rebuttal Claims And Evidence: Calibration is achieved by post-processing-type learning combined with augmentation. The theories seem sound and the empirical results look better. Methods And Evaluation Criteria: The proposed algorithm looks sound, and the evaluation is based on the proposed metrics. If the authors considered other conventional metrics, such as the ECE (Guo et al., 2017), the performance could be compared with conventional methods. Theoretical Claims: It is nice to present a series of theorems to validate the proposed algorithms. Experimental Designs Or Analyses: The experiments are limited, in my opinion, since some cases in the theorems are not addressed thoroughly. For example, using histogram binning is not examined. The details of the datasets, such as their sizes, are not provided. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: This study is closely related to reliable or trustworthy AI. Essential References Not Discussed: None Other Strengths And Weaknesses: Positive: The theoretical validations can be strong points, and the definition of calibration is general and has potential. Negative: The applications and comparisons with other definitions are limited. Other Comments Or Suggestions: None Questions For Authors: Q1: Let me know the merits or advantages of using augmented data in the proposed algorithm. Q2: What are the representative losses for the proposed algorithm? Can you provide around ten such loss functions? Ethical Review Concerns: None Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Q1: By augmenting the dataset with all possible imputed outcomes for the test point whose outcome we wish to predict we are able to convert a point prediction into set prediction, where the width of this set captures epistemic uncertainty in the calibration process. This augmentation approach is the workhorse of the finite-sample distribution-free calibration guarantees of the original Venn-Abers calibration and conformal prediction procedures of Vovk and similarly is a crucial component of our algorithms. Q2: The proposed algorithm applies to any smooth, bounded loss function, including standard choices such as squared error loss and quantile loss. It also accommodates more specialized losses, such as those used in missing data settings, as well as losses tailored to estimating conditional functionals like the conditional average treatment effect or the conditional relative risk, as discussed in several of our cited works. We will include additional examples of representative loss functions in the revised manuscript to clarify the range of settings where our method applies. --- Rebuttal Comment 1.1: Comment: Thanks for your reply. Many points are resolved, and I'll keep my score.
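As a small illustration of the two representative losses named in the reply (squared error and quantile loss), here is a minimal sketch; the function names are illustrative, not from the paper:

```python
def squared_error(y, pred):
    # L2 loss: the calibrated prediction under this loss is a conditional mean.
    return (y - pred) ** 2

def pinball(y, pred, alpha):
    # Quantile (pinball) loss: the calibrated prediction is a conditional
    # alpha-quantile; under-prediction is penalized by alpha, over-prediction
    # by (1 - alpha).
    return max(alpha * (y - pred), (alpha - 1) * (y - pred))

print(squared_error(3.0, 1.0))               # 4.0
print(round(pinball(3.0, 1.0, 0.9), 2))      # under-prediction of a 0.9-quantile: 1.8
print(round(pinball(1.0, 3.0, 0.9), 2))      # over-prediction penalized less: 0.2
```

Each such loss induces its own calibration notion, which is the sense in which the framework applies to "any smooth, bounded loss function."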
Summary: This paper introduces a unified framework for Venn and Venn-Abers calibration, generalizing Venn calibration to hold with respect to arbitrary given loss functions. Unlike point calibrators (e.g., histogram binning, isotonic regression), which map predictions to a single calibrated value, Venn calibration constructs prediction sets that are guaranteed to contain at least one perfectly calibrated prediction in finite samples. The authors show how to set up a Venn calibrator for any loss to produce prediction sets with marginal and conditional calibration guarantees (where here the word "conditional" refers to conditioning on the calibration set). The marginal guarantees hold in finite samples, while the conditional guarantees hold in the large sample regime. In particular, using isotonic regression as the point calibrator underlying Venn calibration, the Venn-Abers predictor for general losses is obtained. The authors also propose Venn multicalibration, ensuring calibration across a given family of subpopulations. For quantile regression (where the loss is chosen to be the pinball loss), they show that Venn calibration aligns with conformal prediction, achieving quantile-conditional coverage, and that multicalibrated conformal prediction is a special case of Venn multicalibration --- thus encompassing some existing methods. In addition, several experiments are provided --- for regular multicalibration (i.e., for Brier score/L_2 loss) and for conformal multicalibration (i.e., for the pinball loss) that verify that these methods are efficiently implementable (at least for finite label spaces) and that their theoretical guarantees lead to reasonable calibration rates in practice. ####### Update after rebuttal: I have read the authors' response. It sufficiently addresses my questions, and I therefore keep my original score.
I have also read the discussion with the other reviewers, and I appreciate the further updates proposed by the authors, in particular the addition/refinement of experiments in the empirical part of the paper. Claims And Evidence: Yes, both theoretical and empirical claims are clearly supported in the paper. Methods And Evaluation Criteria: The datasets make sense, having been derived e.g. from prior research on conformal prediction; the evaluation criteria based on calibration are also correct. Theoretical Claims: I read all of the proofs in the appendix moderately carefully, and believe them to be generally correct (up to a few technical details that weren't exhaustively checked). (The one instance of a "typo" is where the change-of-variables formula is omitted in line 690 in the Supplementary material.) Experimental Designs Or Analyses: Yes, the experimental designs are simple and sound. Supplementary Material: Yes, I reviewed the entirety of the supplementary material and found it to be generally correct up to a missing equation mentioned above (and a missing reference in line 677). Relation To Broader Scientific Literature: This work provides a clear generalization of preceding work on calibration and multicalibration, extending so-called Venn and Venn-Abers predictors, which originally worked for the calibrated mean setting (which corresponds to L2 loss), to arbitrary well-behaved losses. Each loss induces its own notion of calibration, and in particular for L2 and pinball loss, the Venn and Venn-Abers procedures (and their respective contextual strengthenings) give rise to mean and conformal calibration and multicalibration algorithms similar to (but sometimes different from) the ones obtained in prior work. Essential References Not Discussed: N/A Other Strengths And Weaknesses: As mentioned before, the strength of the paper is that it usefully generalizes previous fundamental set-based algorithms for calibration.
While the formulation of these extensions to arbitrary loss functions is natural and appears straightforward, the proofs for some of the properties of the generalized algorithms derived in this paper are not straightforward and require some care. E.g., conditions on underlying point calibrators for Venn calibration must be established, and verified for isotonic regression to give rise to Venn-Abers; and the conditional calibration Theorem 3.2 is nontrivial. The experiments are relatively rudimentary, including few or outdated baselines (such as conformalized quantile regression on the conformal prediction end or one uncalibrated and one group calibrated predictor on the mean calibration end), which is the main weakness; however, viewing this paper as mostly theoretical, I don't consider this to be a big flaw. Other Comments Or Suggestions: Several points could use, I think, some further elaboration. First, could you comment on the Sherman-Morrison approach in the context of the implementation of the multicalibration algorithm? Second, in a similar vein, when the case of infinite Y is mentioned in several instances, it is suggested that looking at the extreme values of y will help --- elaborating on this statement somewhat more formally would increase clarity. Third, to provide context for the reader as to the extension of calibration to general losses, it may be helpful to elaborate on this in the introduction by placing this in the context of calibration for general elicitable properties (of which means and quantiles are examples). E.g. as observed in the cited paper Statistical Scope of Multicalibration, this relationship is tight (and they give point multicalibrators for all well-behaved losses in parallel to how Venn calibration is generalized here). Questions For Authors: Other than the above mentioned concern that the empirics are not very extensive, I don't have major complaints.
A substantively strengthened empirical section (with more baselines, more nonconformity scores, a clear discussion of the differences of the implementation details to existing multicalibration methods, and some further substantiation of the practical feasibility of these methods for infinite Y) would further boost my evaluation of this paper, but that is not strictly required. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the helpful comments. - Thank you for these thoughtful suggestions. We will add clarification on how the Sherman–Morrison approach can be used to efficiently implement the algorithm. We appreciate the request for more detail on the case of infinite $\mathcal{Y}$, and will revise the relevant sections to more formally explain how analyzing extreme values of $y$ can suffice to determine the range of the Venn prediction set. We also agree that it would be helpful to situate our generalization of calibration within the broader context of elicitable properties. In the revised introduction, we will add background on this connection and clarify how calibration for general losses relates to elicitable properties. - We will add further details on the computational and implementation aspects of our approach, including how to efficiently handle infinite $\mathcal{Y}$ and how discretizing the model output can be used to approximate Venn–Abers calibration effectively. --- Rebuttal Comment 1.1: Comment: Thank you for your reply! The proposed updates sound good, and I'll keep my score.
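On the Sherman–Morrison point raised above: as a generic illustration (not the authors' implementation), the identity updates the inverse of a matrix after a rank-one change in O(d^2), which is the kind of saving relevant when a least-squares-style fit must be refreshed for each candidate imputed outcome of a test point:

```python
import numpy as np

def sherman_morrison_update(A_inv, u, v):
    """Given A^{-1}, return (A + u v^T)^{-1} via the rank-one update
    A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u)."""
    Au = A_inv @ u
    vA = v @ A_inv
    denom = 1.0 + v @ Au
    return A_inv - np.outer(Au, vA) / denom

# Deterministic check against a direct inverse.
A = np.diag([2.0, 3.0, 4.0])
u = np.array([1.0, 0.0, 1.0])
v = np.array([0.5, 1.0, 0.0])
direct = np.linalg.inv(A + np.outer(u, v))
updated = sherman_morrison_update(np.linalg.inv(A), u, v)
print(np.allclose(direct, updated))  # True
```

This is a standard linear-algebra identity; how exactly it plugs into the multicalibration algorithm is what the authors propose to clarify in the revision.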
Summary: The authors propose a unified framework for Venn and Venn-Abers calibration that leverages binning calibrators to construct prediction sets that contain at least one marginally perfectly calibrated prediction. Furthermore, they propose a novel Venn calibration technique across subpopulations. Their method outperforms the baselines in all but one case. ## Update after rebuttal We have read the authors' responses, and they have addressed our concerns and promised to update the manuscript accordingly in the revised version. We lean toward acceptance and have updated our score to Accept. Claims And Evidence: (1)- The authors propose Venn multicalibration, ensuring finite-sample calibration across subpopulations. They show how, under some preconditions (exchangeability and finite variance) and by applying a particular algorithm, multicalibration can be achieved for Venn calibrators. Nevertheless, while the authors provide some empirical evidence on this aspect of their research, we consider that the experiments should be modified to strengthen their claims. In particular, multicalibration requires a calibrator to be calibrated across subpopulations, but the authors fail to identify subpopulations in each of the proposed datasets and to provide insights regarding the quality of calibration for each of those subpopulations. Methods And Evaluation Criteria: (1)- While the authors consider several datasets that have been used in reputed works related to conformal prediction, we would appreciate it if they could ground their choice from the subpopulations perspective. The fact that the datasets present subpopulations is key to their experiments for multicalibration, and datasets failing to satisfy that characteristic should be excluded from the multicalibration experiments. (2)- We would appreciate it if the authors would justify their choice regarding baseline methods, to understand why these are relevant to these particular experiments.
In particular, we miss comparisons against multicalibration techniques. While the authors reference some of these works among the cited literature, they do not consider such methods as baselines (e.g., Hébert-Johnson, U., Kim, M., Reingold, O., & Rothblum, G. (2018, July). Multicalibration: Calibration for the (computationally-identifiable) masses. In International Conference on Machine Learning (pp. 1939-1948). PMLR., or Deng, Z., Dwork, C., & Zhang, L. (2023). Happymap: A generalized multi-calibration method. arXiv preprint arXiv:2303.04379.) (3)- The authors evaluated the performance of the proposed method against the baselines based on three metrics. We would appreciate it if the authors would describe what aspects need to be assessed to guarantee the quality of the calibration and what metrics best assess those aspects. Furthermore, we would appreciate some insights into commonly used metrics to assess the general calibration and multicalibration scenarios so that they can be used as a reference when comparing to other works from this domain. (4)- The authors did not perform any assessment on whether the results obtained are statistically significantly better than the baseline approaches. We suggest they perform such assessment. Theoretical Claims: (1)- The authors introduce a unified framework for Venn and Venn-Abers calibration, generalizing Vovk and Petej (2012) to arbitrary prediction tasks and loss functions. -> The authors demonstrate that their unified framework is a generalization of Venn and Venn-Abers calibration theoretically. (2)- For quantile regression, the authors claim that Venn calibration corresponds to a novel CP procedure with quantile-conditional coverage, and that multicalibrated conformal prediction (Gibbs et al., 2023) is a special case of Venn multicalibration, unifying and extending existing calibration methods. -> The authors demonstrate their claim theoretically. 
Experimental Designs Or Analyses: (1)- While multicalibration requires a calibrator to be calibrated across subpopulations, the authors fail to identify subpopulations in each of the proposed datasets. We consider this to be a critical issue to provide insights regarding the quality of calibration for each of those subpopulations. Supplementary Material: We have reviewed all of the supplementary material. Relation To Broader Scientific Literature: The authors provide many references to the relevant literature. Nevertheless, they mostly focus on Venn and Venn-Abers calibration and their relationship to conformal prediction, while one of their main contributions is Venn multicalibration. Therefore, we consider that the related work must be enhanced to introduce the reader to the state-of-the-art of the multicalibration techniques, showing the research gaps that exist in that domain and how the proposed method bridges them. Furthermore, the authors may be interested in considering the following work: Haghtalab, N., Jordan, M., & Zhao, E. (2023). A unifying perspective on multi-calibration: Game dynamics for multi-objective learning. Advances in Neural Information Processing Systems, 36, 72464-72506. Essential References Not Discussed: We consider the following works to be relevant to the manuscript: (a) Gohar, U., & Cheng, L. (2023). A survey on intersectional fairness in machine learning: Notions, mitigation, and challenges. arXiv preprint arXiv:2305.06969.; (b) Silva Filho, T., Song, H., Perello-Nieto, M., Santos-Rodriguez, R., Kull, M., & Flach, P. (2023). Classifier calibration: a survey on how to assess and improve predicted class probabilities. Machine Learning, 112(9), 3211-3260.; and (c) Toccaceli, P. (2021). Conformal and Venn Predictors for large, imbalanced and sparse chemoinformatics data (Doctoral dissertation, Royal Holloway, University of London).
Other Strengths And Weaknesses: We consider the paper to be clearly structured and to address a relevant problem: providing Venn multicalibration guarantees. The authors made a great effort in providing theoretical demonstrations for their claims and in showing how Venn calibration relates to conformal prediction in multicalibration settings. Nevertheless, the experimental part of the paper is weak on the multicalibration aspect - one of the main contributions of the paper - and should be improved. Furthermore, the related work is weak considering that the main contribution refers to a novel method for multicalibration, but not many works from the state of the art in that domain are described. Other Comments Or Suggestions: (1) - What do the bolded results mean in the results tables? Why are results bolded only for certain metrics? Please indicate with an arrow at the header whether higher/lower results are better. (2) - Table 2: align values to the right, so that differences in magnitude become evident. (3) - Figure 1: we encourage the authors to provide the prediction bands of additional methods in order to enable a visual comparison between them and understand, in perspective, the goodness of the proposed method. Questions For Authors: (1)- While the authors consider several datasets that have been used in reputed works related to conformal prediction, we would appreciate it if they could ground their choice from the subpopulations perspective. The fact that the datasets present subpopulations is key to their experiments for multicalibration, and datasets failing to satisfy that characteristic should be excluded from the multicalibration experiments. (2)- We would appreciate it if the authors would justify their choice regarding baseline methods, to understand why these are relevant to these particular experiments. In particular, we miss comparisons against multicalibration techniques.
While the authors reference some of these works among the cited literature, they do not consider such methods as baselines (e.g., Hébert-Johnson, U., Kim, M., Reingold, O., & Rothblum, G. (2018, July). Multicalibration: Calibration for the (computationally-identifiable) masses. In International Conference on Machine Learning (pp. 1939-1948). PMLR., or Deng, Z., Dwork, C., & Zhang, L. (2023). Happymap: A generalized multi-calibration method. arXiv preprint arXiv:2303.04379.) (3)- The authors evaluated the performance of the proposed method against the baselines based on three metrics. We would appreciate it if the authors would describe what aspects need to be assessed to guarantee the quality of the calibration and what metrics best assess those aspects. Furthermore, we would appreciate some insights into commonly used metrics for the general calibration and multicalibration scenarios so that they can be used as a reference when comparing to other works from this domain. (4)- The authors did not perform any assessment of whether the results obtained are statistically significantly better than the baseline approaches. We suggest they perform such an assessment. (5)- While multicalibration requires a calibrator to be calibrated across subpopulations, the authors fail to identify subpopulations in each of the proposed datasets. We consider this to be a critical issue for providing insights regarding the quality of calibration for each of those subpopulations. (6)- The related work mostly focuses on Venn and Venn-Abers calibration and their relationship to conformal prediction, while the main contribution of the paper is Venn multicalibration. Therefore, we consider that the related work must be enhanced to introduce the reader to the state of the art of multicalibration techniques, showing the research gaps that exist in that domain and how the proposed method bridges them. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your detailed comments and suggestions. We will incorporate them in the revised version of the paper. We note our primary contribution is the generalization of Venn and Venn–Abers calibration to arbitrary loss functions. As a secondary contribution, we show that the same techniques yield a multicalibration algorithm with set-valued predictions. Our goal is primarily theoretical: to present a unified framework that generalizes and connects existing methods, including Venn–Abers calibration and conformal prediction. 1) In the revised paper, we will also report multicalibration error over specific subpopulations. In our experiments, we use the $\ell^2$ norm defined in Equation (4), corresponding to regression multicalibration error. This metric, also used in [1] for quantile multicalibration, ensures multicalibration over finite-dimensional classes of covariate distribution shifts—specific subpopulations being a special case. We define the function class in Equation (4) as linear combinations of binary features, one-hot encoded categorical features, and spline-transformed continuous features (with 5 knots). The reported $\ell^2$ error is the norm of the vector of calibration errors across these transformed features. For datasets with binary features, this directly captures multicalibration error over the associated subpopulations. 2) We recognize that the evaluation of our multicalibration experiments can be improved, and we will address this in the revision. To the best of our knowledge, our Venn multicalibration algorithm is the only method that produces prediction sets capturing uncertainty under multicalibration in regression settings, and thus lacks direct baselines. In principle, our approach can be combined with any multicalibrator that outputs empirically multicalibrated point predictions—such as those in the cited references—to generate uncertainty-aware prediction sets. 
One goal of our experiments is to show how prediction set widths vary across datasets with different sample sizes and feature dimensions. For instance, the \textsc{Comm} and \textsc{Star} datasets yield wider prediction sets due to high feature dimensionality and small sample sizes, reflecting greater uncertainty in the multicalibration process. 3) Thank you for this suggestion. We agree that a clear discussion of calibration quality and appropriate evaluation metrics is important for interpreting and comparing results, and we will include this in the revised version. In the multicalibration experiments, our primary goal is to assess the quality of set-valued predictions in terms of (i) their size and (ii) the calibration error of the oracle multicalibrated prediction guaranteed to lie within the set. Specifically, we aim for multicalibration uniformly over target populations whose density ratios with the source population lie in a linear class derived from a basis transformation of the features. The reported $\ell^2$ multicalibration error measures how well the predictions satisfy the multicalibration criterion across many overlapping subgroups defined by these density ratios. This approach is motivated by prior work in quantile multicalibration [1] for feature-conditional coverage. We will incorporate this clarification in the final version to better guide practitioners and researchers seeking meaningful evaluation metrics for multicalibrated prediction sets. 4) Since the datasets are based on real-world data, we cannot generate independent replicates to compute standard Monte Carlo error estimates. Instead, we averaged all metrics over 200 random train-test-evaluation splits. In the final version, we will report Monte Carlo error estimates reflecting the variation of these metrics across the random splits. 5) In the revised paper, we will also report subpopulation multicalibration errors. 
The multicalibration definition in Equation (5) extends beyond fixed subgroups to classes of "covariate shifts", following the terminology of [1] on quantile multicalibration. Consistent with this work, we adopt a model- and subgroup-agnostic approach, aiming to approximate multicalibration across many subgroups without requiring explicit specification. In our experiments, we achieve this by calibrating over subgroups whose density ratios with the source population are additive functions of the covariates, implemented via adjustments over additive spline basis functions. While we agree that subgroup-specific calibration is also valuable, our experiments focused on guarantees that do not depend on manual subgroup definition. (6) In the revised version, we will add background on multicalibration and clarify how our contributions relate to prior work. While multicalibration is an important aspect, we note our primary focus is on calibration. [1] Gibbs, Isaac, John J. Cherian, and Emmanuel J. Candès. "Conformal prediction with conditional guarantees." arXiv preprint arXiv:2305.12616 (2023). --- Rebuttal Comment 1.1: Comment: We thank the authors for their responses. We have read them and decided to maintain our score.
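The ℓ² multicalibration error described in this thread — the norm of the vector of empirical calibration errors across transformed features — can be sketched in a few lines. This is a minimal illustration under our own naming assumptions (`l2_multicalibration_error`, the toy basis), not the authors' implementation of Equation (4):

```python
import math

def l2_multicalibration_error(xs, preds, ys, basis_fns):
    """l2 norm of empirical calibration errors across basis functions:
    err_j = (1/n) * sum_i phi_j(x_i) * (y_i - f(x_i))."""
    n = len(xs)
    errs = [sum(phi(x) * (y - p) for x, p, y in zip(xs, preds, ys)) / n
            for phi in basis_fns]
    # Norm of the vector of per-basis calibration errors
    return math.sqrt(sum(e * e for e in errs))
```

With binary group indicators as basis functions, each component is exactly a subgroup calibration error, so subpopulation multicalibration falls out as a special case.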
Summary: Conformal prediction arises as a technique for turning a point predictor into a set predictor, allowing for a guarantee that the set contains the ground-truth target with high probability. A Venn predictor operates similarly but outputs a set of probabilistic predictions with the guarantee that at least one of them is calibrated. The core contribution is that the paper extends Venn calibration (originally for multi-class classification) to arbitrary prediction tasks and general loss functions. It provides a unified framework that applies to both classification and regression, and ensures finite-sample marginal calibration while allowing for asymptotic conditional calibration under stronger regularity conditions. Claims And Evidence: Yes, the assumptions and claims are transparently stated. Methods And Evaluation Criteria: The evaluation is reasonable but could be improved in several aspects, e.g., by including more existing CP methods. Theoretical Claims: The claims and proofs essentially follow existing works on Venn predictors, with the inclusion of the loss function in the definition of calibration following (Whitehouse et al., 2024). I followed the proof of the main result 3.1, which seems correct, and skipped the rest. The authors might want to revisit the claim of extending VP to *arbitrary loss functions*, since the latter class is quite restricted (smoothness, proper loss, finite moment assumptions, Lipschitzness, etc.). Experimental Designs Or Analyses: The evaluation does not provide error bars when comparing the methods. Plus, from Table 1, the average widths are all quite similar except on the Bike dataset. The authors might want to include several recent methods other than CQR (published at least 5 years ago). Overall, the strict advantage of the proposed method w.r.t. classical CP is not that clear. Supplementary Material: Only the first proof, which is the main result of the paper. 
Relation To Broader Scientific Literature: Since the introduced method suffers from computational limitations similar to full conformal prediction, a discussion of how to overcome these limitations would be welcome. Also, it is quite well known that calibration metrics such as ECE suffer from some inconsistency; it would be nice to include a smooth version of ECE. Essential References Not Discussed: The recent advances on smooth ECE could be included, but their absence does not have negative consequences for understanding the paper. Other Strengths And Weaknesses: The overall proposition of extending VP with a specific loss is quite interesting on its own, but the authors could detail and motivate this part a bit more. The numerical experiments do not demonstrate a clear advantage of the proposed method, but it seems competitive. Other Comments Or Suggestions: NA Questions For Authors: - I appreciate your work on generalized Venn and Venn-Abers calibration. I have some questions regarding the empirical calibration guarantee. From Condition C3, it appears that empirical calibration is enforced by exactly solving isotonic regression and recomputing it for each new test point X_{n+1}, as well as for every possible value of y. This suggests that calibration is dynamically maintained by including the test point in the calibration set before solving isotonic regression, avoiding the usual generalization issues. However, this approach raises computational concerns: (1) Since isotonic regression is recomputed exactly for each new test point, how does this scale in practice? (2) Given that recalibration is done for all possible values of y, does this lead to a significant computational burden, especially in large-scale settings? (3) Does this approach imply that we cannot precompute a single calibrated model but must instead recalibrate dynamically at inference time? I would appreciate any clarifications on these aspects. 
Overall, I am confused about the computational overhead and do not see how techniques used in full CP help here. - Often, one needs to have disjoint sets and not only a single interval. Can the proposed method allow such flexibility? "## update after rebuttal" I maintain my score and previous points. I think the authors should clarify the computational aspects more: the choice of the grid, how it impacts coverage, computational efficiency, etc. These issues are not fully solved for full conformal prediction, and a transparent discussion in the case of this particular paper could help readers. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thoughtful questions and suggestions. We will incorporate your suggestions in the revised version of our manuscript. We clarify the efficiency and practicality of our method below. 1. **Efficient Approximation and Scalability** The algorithm can be efficiently approximated using the approach described in Section 3.4 of [1] for Venn-Abers calibration with squared error loss, which we also adopt in our implementation. The key observation is that the Venn-Abers algorithm only depends on the test point features through the original model prediction (which is crucially a one-dimensional quantity). As a result, we can discretize the one-dimensional prediction space—e.g., by binning into a grid of 200 bins—and run the algorithm once per grid point. Isotonic regression can be computed efficiently using regression trees with monotonicity constraints, such as those available in XGBoost. Across all datasets (each with tens of thousands of data points), computing the approximate Venn-calibrated sets for all points takes under a minute. This method scales well to large datasets via XGBoost. The isotonic regression step is computationally lightweight and can be parallelized if needed. 2. **Efficiency at Inference Time** In our implementation, we precompute the Venn-Abers prediction sets over a discretized grid (200 bins) of model predictions. At inference time, we use nearest neighbor matching or linear interpolation to produce calibrated prediction sets, avoiding any calibration computation at run time. The approximation error from this approach is negligible in practice, likely due to the piecewise constant nature of both the isotonic regression solution and the Venn-Abers sets. 3. **Clarification on Disjointness** We are unsure why disjointness would be required. The Venn-Abers prediction set is defined as the set of predictions obtained when calibrating with a new data point and all possible imputed outcomes. 
There is no requirement or expectation for this set to be disjoint. 4. **Improving experiments** We will add additional baselines to the conformal prediction experiments, including the conditional conformal prediction method of [1] and approaches based on alternative conformity scores, such as the normalized absolute residual error. We will also include error bars for our evaluation metrics to account for variability across the 200 train–validation–test splits over which the results are averaged. [1] van der Laan, Lars, and Ahmed M. Alaa. "Self-calibrating conformal prediction." arXiv preprint arXiv:2402.07307 (2024). [2] Gibbs, Isaac, John J. Cherian, and Emmanuel J. Candès. "Conformal prediction with conditional guarantees." arXiv preprint arXiv:2305.12616 (2023). --- Rebuttal Comment 1.1: Comment: Thanks for your comments. Having a disjoint set is not a requirement. However, one would like to have a union of confidence intervals when the data is bimodal, for example. I was wondering if your proposition allows such flexibility or always outputs a single interval. --- Reply to Comment 1.1.1: Comment: Thank you for the clarification! This can indeed be done using the Venn/Venn–Abers procedure, and we will add a discussion of this in the revised version. The Venn–Abers procedure involves computing the calibrated prediction for all possible imputed values $y$ of the outcome $Y_{n+1}$ we wish to predict (lying in the space $\mathcal{Y}$). If the outcome space $\mathcal{Y}$ is a disjoint union of sets, then the Venn–Abers prediction set will also be a disjoint union of sets. This is a property also shared by conformal prediction. Our theoretical results only require that $\mathcal{Y}$ contains the true outcome $Y_{n+1}$, so the user is free to choose any such set. 
If the structure of the outcome space is not known a priori, a natural approach is to impute using the observed outcomes in $(Y_1, \dots, Y_n)$, which, for sufficiently large samples, will nearly contain $Y_{n+1}$ up to negligible discretization error.
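The procedure discussed in this thread — refit isotonic regression once per imputed outcome and collect the calibrated prediction at the test point — can be sketched in pure Python for squared-error loss. Function names are our own, and this omits the 200-bin grid approximation and XGBoost-based isotonic fitting the rebuttal describes:

```python
def isotonic_fit(xs, ys):
    """Pool-adjacent-violators: least-squares monotone fit of ys ordered
    by xs; returns the fitted value for each input point."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    blocks = []  # each block is [mean, count]
    for i in order:
        blocks.append([ys[i], 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, c2 = blocks.pop()
            m1, c1 = blocks.pop()
            blocks.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    fitted, k = [0.0] * len(xs), 0
    for mean, cnt in blocks:
        for _ in range(cnt):
            fitted[order[k]] = mean
            k += 1
    return fitted

def venn_abers_set(cal_preds, cal_ys, test_pred, y_grid):
    """For each imputed outcome y, refit isotonic regression on the
    calibration data plus (test_pred, y) and record the calibrated
    prediction at the test point; return the set of such values."""
    out = set()
    for y in y_grid:
        fitted = isotonic_fit(cal_preds + [test_pred], cal_ys + [y])
        out.add(round(fitted[-1], 6))
    return sorted(out)
```

If `y_grid` is a disjoint union of sets (e.g., for bimodal outcomes), the resulting prediction set is simply the union of the values obtained over each piece, matching the reply above.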
LIMEFLDL: A Local Interpretable Model-Agnostic Explanations Approach for Label Distribution Learning
Accept (poster)
Summary: Existing interpretability models are designed for the single-label paradigm and struggle to directly interpret label distribution learning (LDL) models. To solve this, the paper proposes an improved LIME algorithm capable of effectively interpreting black-box models in LDL. The authors also provide analyses of the analytical solution, convergence, stability, and algorithmic properties. ## update after rebuttal I have carefully read the rebuttal. The rebuttal answers the question of how the scoring function in Equation (8) is used to select features and revises some typos. Considering the novelty and the advantages of the proposed method, I keep my score. Claims And Evidence: The authors claim that existing interpretability models face challenges in explaining LDL from three aspects: label dependency, computational complexity, and label distribution constraints. These claims are clear and well-supported by evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem at hand. Theoretical Claims: I did not check the correctness of any proofs for the theoretical claims, but they appear to be reasonable. Experimental Designs Or Analyses: Yes, the experiments follow a classic comparison protocol, making them sound and valid. Supplementary Material: Yes, I reviewed part of Section A of the appendix. These contents provide additional details of the analysis and experiments. Relation To Broader Scientific Literature: Currently, there is no other work on interpretable LDL, so I find this paper well-positioned. This work significantly advances the understanding and application of existing LDL models. Essential References Not Discussed: The paper has thoroughly discussed appropriate related work. Other Strengths And Weaknesses: Strengths: 1. The proposed algorithm is able to interpret any black-box model in LDL. 2. 
The authors provide an in-depth analysis of the proposed algorithm, including discussions on its analytical solutions, convergence, stability, and overall properties. 3. The paper is clear and offers an intuitive presentation of the analysis and experiments. For example, the visualization of the dummy variable analysis helps in understanding the model’s behavior. Additionally, the introduction of the Jaccard index effectively demonstrates the superiority of the proposed algorithm. Weaknesses: 1. Steps 3, 4 & 8 in Algorithm 1 lack sufficient rigor. 2. The authors introduce some notations in Equation (9) for convenience, but why are these notations not consistently used in the subsequent paragraphs, such as in Equations (11) & (12)? 3. There are some typos in this paper; see the next part. Other Comments Or Suggestions: Steps 3, 4 & 8 in Algorithm 1 lack sufficient rigor. Typos: 1. Line 74: adapt to; 2. Line 117-118: $\mathbb{R}^{r \times 1}$? 3. Line 159-160: entropy? 4. Duplicate definition of $\boldsymbol{B}$ in Equations (9) & (10). Questions For Authors: 1. Could the authors briefly explain how the scoring function in Equation (8) is used to select features, i.e., in Step 8 of Algorithm 1, and whether the selected features are related to the subsequent steps? 2. The authors introduce some notations in Equation (9) for convenience, but why are these notations not consistently used in the subsequent paragraphs, such as in Equations (11) & (12)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review of our paper; we greatly appreciate your comments and questions. [Comment 1] Steps 3, 4 in Algorithm 1 lack sufficient rigor. A: We will introduce Steps 3 and 4 of Algorithm 1 in more detail in the revised version. Step 3 of Algorithm 1 generates $m$ samples based on the example to be interpreted $x$. For image data, the input image is segmented into superpixels; we then generate $m$ samples via binary masking: 0 replaces a superpixel with mean values (per channel), while 1 retains the original pixels. This process preserves local structure while enabling efficient sampling. For tabular data, we first bin features based on $x$’s values (1 if in the same bin as $x$, 0 otherwise); samples are generated by sampling from Gaussian distributions parameterized by training-set statistics (mean/variance per feature). Step 4 calculates pairwise weights between these sampled examples and the example to be interpreted using Eq. 3 and Appendix A.2, ensuring locality-awareness. [Comment 2] Step 8 in Algorithm 1 lacks sufficient rigor. Could the authors briefly explain how the scoring function in Equation (8) is used to select features, i.e., in Step 8 of Algorithm 1, and whether the selected features are related to the subsequent steps? A: In Step 8, we perform iterative feature selection by progressively adding features and evaluating their impact using the scoring function in Eq.8. At each iteration, we train the model using only the currently selected features and compare the new score with the current maximum. If the score exceeds the maximum, the feature is retained in the selected set; otherwise, it is discarded. Unselected features are excluded from subsequent training iterations. For validation, we analyze the top 10 ranked features by incrementally adding them and monitoring their influence on explanation fidelity in our local accuracy experiments. 
In the dummy feature experiment, we search for the currently lowest-ranked feature and perform a masking operation to observe the effect on the interpretation results before and after the change. We will introduce this section in more detail in the revised version to make it easier to understand. [Comment 3] The authors introduce some notations in Equation (9) for convenience, but why are these notations not consistently used in the subsequent paragraphs, such as in Equations (11) & (12)? A: The introduction of symbols in Eq.12, specifically the reformulation $Z^{\prime \top}{\Pi}F(h(Z))-Z^{\prime \top} u 1_{1 \times r}=\Gamma$, was designed to simplify the algebraic representation of subsequent theorems and proofs. While the original notation $Z^{ \prime \top} \Pi Z^{ \prime}=\Delta$ remains mathematically rigorous, retaining $B$ (instead of $\Delta$) streamlines derivations by reducing nested terms. We acknowledge that this substitution introduces minor notational complexity in the proof flow, and in the revised manuscript, we will add a footnote linking the notations. [Comment 4] There are some typos in this paper. A: In the revised manuscript, we will correct the following issues: Line 74 should be "adapt to" instead of "adopt to", Lines 117-118 should be $a_{i}\in \mathbb{R} ^{r\times 1}$, and Lines 159-160 should be "entropy" instead of "entory". The $B$ matrix of Eq.10 is just a more detailed description of Eq.9, and we will consolidate these equations to explicitly show the matrix $B$. --- Rebuttal Comment 1.1: Comment: Thanks the authors for their efforts in providing the rebuttal. I have carefully read the rebuttal. I still have a minor question: in Q2, the authors claim that "if the score exceeds the maximum, ..."; I wonder how the "maximum" is decided. Can any details be provided? --- Reply to Comment 1.1.1: Comment: Thank you for this question. 
This maximum score is iteratively updated; our initial setting is $max=-100000000$. Every time a feature is added, the score is calculated using the feature selection scoring function and then compared to the current $max$; if the new score is larger, it replaces $max$ and the feature is put into the selected set. If it is smaller, the current $max$ is kept.
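The loop described in this reply can be written as a short greedy forward pass; `score_fn` stands in for the scoring function of Eq. 8, and all names here are illustrative:

```python
def greedy_select(features, score_fn):
    """Iteratively add features; keep a feature only if the score of the
    current selection plus that feature exceeds the running maximum."""
    selected, best = [], float("-inf")  # running maximum starts very low
    for f in features:
        s = score_fn(selected + [f])
        if s > best:
            best = s
            selected.append(f)
    return selected
```

Features that fail to improve the running maximum are discarded and never revisited, which matches the "unselected features are excluded from subsequent training iterations" behavior described in the rebuttal.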
Summary: This paper proposes an interpretability model for label distribution learning (LDL). The classification LIME approach is adapted to handle LDL. An optimization objective is proposed to estimate the parameters of the interpretability model. Theoretical analysis is performed, including convergence, stability, and some theoretical properties. Experimental results show that the proposed method can outperform PULIME, a parallel application of LIME. Claims And Evidence: The claim that the interpretability model for LDL is important is good. However, it is not clear how the proposed method differs from the classical LIME approach. Although multiple labels are considered for LDL and a single label is considered for LIME, I think they are very similar, since the optimization problem in Eq. (4) can be decomposed for each row of $\boldsymbol{A}'$, which corresponds to an optimization problem for each label in LIME. In this way, it seems that the proposed method, which mainly works by solving the optimization problem in Eq. (4), is very similar to LIME. The differences are mainly in the different optimization techniques (L-BFGS) and the label distribution constraints (non-zero and normalization). Methods And Evaluation Criteria: The proposed method is valid. Theoretical Claims: The theoretical claims are good and support the proposed method. However, the structure of the proofs is unclear, and the proof of Theorem 3.2 is missing. Experimental Designs Or Analyses: - In the first part of the experiments, the fidelity is calculated as several LDL metrics using model outputs from black-box models and the interpretability model. However, the main purpose of the interpretability model is to determine the importance of different features, as described in the previous LIME paper. Therefore, it is uncertain whether a better metric value indicates better interpretability model performance. 
Therefore, it is recommended that the evaluation procedure from the original LIME paper be adopted to validate the effectiveness of the proposed method. - The second part of the experiments includes human experiments. Human participants are asked to decide whether the explanations are good or bad. It is unclear whether the decisions are objective or influenced by the choice of participants, since only one experiment is conducted. - Only LIME is chosen as a baseline, which may be inadequate and outdated. More recent interpretability models should be considered for experimental comparisons. Supplementary Material: I did not check the supplementary material in detail. Relation To Broader Scientific Literature: N/A. Essential References Not Discussed: There is no essential reference that needs to be discussed. Other Strengths And Weaknesses: Strengths: - The interpretability of LDL is a good but underexplored topic in the literature. Therefore, the research problem is beneficial and interesting. - The theoretical analysis is good and validates the effectiveness of the proposed method. Weaknesses: - The novelty of the proposed method seems limited, since the optimization problem is quite similar to a multiple-label version of LIME. Therefore, the novelty and contributions should be clarified in detail. - It is unclear whether the experiments in the paper can help validate the effectiveness of the proposed method. It can be observed that the interpretability model is more accurate in fitting the black-box model locally. However, this does not mean that feature importance is well described by the proposed method. - It is unclear how label dependencies are accounted for in the proposed method, as noted in the Introduction section. Other Comments Or Suggestions: The writing of the paper can be improved. Here are some minor points: - The title should read "Explanation" instead of "Explanations". - There are two periods in lines 18-19. 
- The introduction of Eq. (9) without any description is quite confusing. - The wording in lines 257-258 is confusing. - In Properties 3.6 and 3.7, the notation of the $k$-th function is inconsistent. Questions For Authors: - The $\pi$ function is different from the previous LIME method. In LIME, the anchor is the selected example $z$. In this paper, however, it is an all-one vector instead. I wonder why the function is changed in the paper. - I am confused by Figure 3. I do not know the meaning of the purple color, which should be specified in the paper. Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Responsible Research Practice (e.g., IRB, documentation, research ethics, participant consent)'] Ethical Review Concerns: Since an experiment involves human participants, the ethical statement should describe in detail how the privacy of the participants will be protected and whether the evaluation will be unbiased. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive review of our paper. [Comment 1] About the structure of the proofs. A: We will restructure the proofs in the revised version. The proof of Theorem 3.2 appears in Appendix B.2 (lines 1025~1040). [Comment 2] The method's novelty appears limited as it resembles a multi-label LIME extension. While targeting LDL (multi-label), Eq.4 decomposes into per-label subproblems akin to LIME's single-label optimization. A: Equation 4 can be decomposed row-wise into single-label LIME problems, which means that our method is a generalized version of LIME. However, combining multiple independent LIMEs cannot directly give rise to our method. The reasons can be summarized as follows. 1. Label dependency. Interpretation of LDL requires handling label dependency, whereas a direct combination of multiple LIMEs cannot handle label dependency, since per-label training produces disconnected weight vectors and cannot consider the simultaneous impact of features on multiple labels. Our approach evaluates features’ impacts simultaneously across all labels from the weight matrices, revealing dependencies. 2. Theoretical guarantees. We provide the following theoretical guarantees for our method: as the number of sampled examples increases, our interpretation results provably converge under the label distribution, and the smaller the black-box model's perturbation, the more stable the interpretation results are. However, these guarantees do not hold when applying multiple single-label LIMEs to approximate the label distribution. 3. Feature selection. Interpretation of LDL requires consideration of globally representative features, which multi-LIME fails to achieve due to its single-label focus (label-specific features). Our method uses distribution information to prioritize globally relevant features. [Comment 3] LDL metrics may not directly assess interpretability quality. 
A: We conducted supplementary experiments using the R²-score, aligned with the LIME metric, and expanded the baselines to GLIME and DLIME, all evaluated under default parameters.

|Model|Dataset|LIME|DLIME|GLIME|LIMEFLDL|
|-|-|-|-|-|-|
||1|.16|.15|.16|.67|
||2|.15|.15|.16|.67|
||3|.48|.35|.34|.58|
||4|.39|.33|.34|.40|
|LDL-SCL|5|.34|.25|.25|.30|
||6|.37|.34|.32|.38|
||7|.94|-|.95|.99|
||8|.89|-|.89|.98|
||1|.17|.17|.16|.99|
||2|.16|.15|.17|.99|
||3|.34|.33|.33|.36|
||4|.25|.31|.29|.36|
|RBF-LDL-LRR|5|.26|.47|.36|1|
||6|.52|.49|.47|.99|
||7|.93|-|.96|.99|
||8|.67|-|.94|.99|

These experiments will be included in the revision (space permitting). [Comment 4] About whether the decisions are objective and influenced by the choice of participants. A: While single-study generalizability is limited, our design mitigated bias via: random recruitment (ML/non-ML backgrounds), independent evaluation of 20 randomized interpretation results, and high inter-rater agreement despite participant diversity. [Comment 5] About how label dependencies are accounted for. A: Label dependencies directly influence the equilibrium states of the parameters $u$ and $\rho$. These parameters jointly govern the weight matrix $A$, adjusting its entries to reflect label correlations; this ensures global consistency: features impacting correlated labels exhibit coordinated weight changes in $A$, and perturbing a feature’s weight reveals its systemic effect on the entire label distribution (e.g., simultaneous probability shifts for co-dependent labels). [Comment 6] About some writing errors. A: We will change "Explanations" to "Explanation" and correct the problem with the two periods in the revised version. [Comment 7] About some inadequate or incorrect descriptions. A: We will add intuition before Eq.9 to clarify its role in streamlining Eq.10 and simplifying proofs. To clarify the perturbation analysis (lines 257-258): black-box model (BM) perturbations (models $P$ vs. $Q$) impact interpretations. 
$T(h(Z))$ quantifies the output distribution divergence between black-box models $P$ and $Q$, while $t(h(Z))$ describes the difference in the degree of descriptiveness of a single label in the model outputs. Interpretation stability $\propto (\text{BM perturbation magnitude})^{-1}$. We will describe this part in more detail in the revised version. Property 3.6 uses superscripts ($\xi_{k}^i$, $\xi_{k}^j$) for distinct values of feature $k$; Property 3.7 uses ($\xi_{i}$, $\xi_{j}$) to denote different features. We will correct this part in the revised version.

[Comment 8] About the $\pi$ function.
A: For interpretation, the example to be interpreted, $x$, is mapped to an all-ones vector in the interpretable space. Weights are computed via the similarity between the representations of $x$ and the sampled instances, aligning with prior work.

[Comment 9] About the purple color in Figure 3.
A: Fig. 3 uses the following color scheme: purple (a blue+pink blend) indicates LIME $\approx$ LIMEFLDL; blue indicates LIME $>$ LIMEFLDL; pink indicates LIMEFLDL $>$ LIME. A detailed analysis will be given in the revised version.
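The "generalized LIME" idea defended in Comment 2 can be sketched in a few lines. This is a hypothetical illustration, not the authors' exact Eq. 4: a single weighted ridge regression fit jointly over all labels yields one attribution matrix $A$ coupling every feature to the whole label distribution, instead of $r$ disconnected per-label fits.

```python
import numpy as np

rng = np.random.default_rng(0)
m, f, r = 200, 5, 3          # perturbation samples, features, labels

# Stand-in black-box LDL model: softmax of a linear map (illustrative only).
W_true = rng.normal(size=(f, r))
def black_box(Z):
    logits = Z @ W_true
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)   # each row is a label distribution

x = rng.normal(size=f)                         # instance to interpret
Z = x + 0.1 * rng.normal(size=(m, f))          # local perturbations around x
Y = black_box(Z)                               # (m, r) predicted distributions
pi = np.exp(-np.sum((Z - x) ** 2, axis=1))     # locality kernel weights

# Joint weighted ridge regression: one (f, r) attribution matrix A for all
# labels at once, rather than r independent single-label LIME problems.
lam = 1e-2
Pi = np.diag(pi)
A = np.linalg.solve(Z.T @ Pi @ Z + lam * np.eye(f), Z.T @ Pi @ Y)

assert A.shape == (f, r)
```

Column $j$ of $A$ recovers the per-label LIME solution, but the matrix view is what allows reading off how one feature moves several labels simultaneously.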
Summary: In order to mitigate the interpretability challenge inherent in most label distribution learning (LDL) algorithms when applied to risk-sensitive decision-making scenarios, this paper introduces a novel local interpretable model-agnostic explanation framework specifically tailored for LDL. This approach takes into account the label distribution within the local region and constructs local linear models to effectively approximate the global behavior of the black-box LDL model. Furthermore, the paper conducts a thorough theoretical analysis of the proposed methodology, offering a theoretical assurance that the interpretations it yields in the context of LDL tasks are closely aligned with the actual decision-making process. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The integration of label distribution constraints into LIME’s framework is novel. The feature attribution matrix and ADMM-based optimization are appropriate. Evaluation: Fidelity metrics and consistency are well-chosen. Theoretical Claims: I have checked the correctness of Theorem 3.2, Theorem 3.3, Theorem 3.4, Property 3.5, Property 3.6, and Property 3.7. Experimental Designs Or Analyses: I have checked the experiments. Specifically, I have examined the adequacy of the dataset in Section 4.1, the completeness of the evaluation metrics in Section 4.2, and the rationality of the experimental procedure in Section 4.3. Supplementary Material: I have reviewed the supplementary material such as further methodological details, additional experimental results, and the proofs of theorems. Relation To Broader Scientific Literature: The related literature is interpretable machine learning. The main contributions of this paper can be treated as an extension of traditional LIME (local interpretable model-agnostic explanations) method. 
Specifically, it adapts LIME to the $r$-dimensional label space with $r-1$ degrees of freedom. Essential References Not Discussed: I did not find any essential missing references in this paper. Other Strengths And Weaknesses: Strengths: The problem addressed in this paper holds substantial significance, as label distribution learning algorithms are often complex and their decision-making correctness is challenging to validate in practical applications. This paper endeavors to resolve the interpretability issues inherent in label distribution learning, thereby significantly contributing to the expansion of the applicability of the label distribution learning paradigm. Furthermore, the adaptation of traditional machine learning explanation methods to the label distribution learning framework is non-trivial owing to the high-dimensional and interdependent label space. Weaknesses: This paper also has some limitations. For example, the writing of this paper needs improvement, and the experimental results in the appendix need more discussion. Other Comments Or Suggestions: First, the word “Explanations” in the title “A Local Interpretable Model-Agnostic Explanations Approach” should be amended for accuracy. Second, the quotation marks on line 566 should be used correctly. Questions For Authors: In Figure 6, what are the differences between the original image and the processed image? Visually, they appear almost the same. Besides, are there detailed discussions of the visualized cases to demonstrate the performance of LIMEFLDL? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive review of our paper; we greatly appreciate your comments and questions.

[Comment 1] This paper also has some limitations. For example, the writing of this paper needs improvement, and the experimental results in the appendix need more discussion.
A: We will scrutinize the paper for writing issues and discuss the experimental results in the appendix in more detail.

[Comment 2] First, the word “Explanations” in the title “A Local Interpretable Model-Agnostic Explanations Approach” should be amended for accuracy. Second, the quotation marks on line 566 should be used correctly.
A: "Explanations" will be changed to "Explanation", and the incorrect quotation marks in line 566, along with similar writing issues, will be corrected in the revised version.

[Comment 3] In Figure 6, what are the differences between the original image and the processed image? Visually, they appear almost the same. Besides, are there detailed discussions of the visualized cases to demonstrate the performance of LIMEFLDL?
A: The processed image in Fig. 6 was changed by smudging and adding a green border to the lowest-ranked superpixel block; the change is not conspicuous because of the block's low rank, and such blocks are often located in the edge region of the image. For both the image dataset and the tabular dataset we have done a visual comparison of the LIMEFLDL and original LDL interpretation results, which we will show in the revised version.

---

Rebuttal Comment 1.1: Comment: Thanks for your detailed answers. All my questions have now been answered satisfactorily. I have decided to maintain my original rating.
Summary: To address the local interpretability issues of label distribution learning (LDL), this paper proposes an improved LIME algorithm, namely LIMEFLDL. The algorithm is mainly manifested in three aspects: first, a feature attribution matrix is introduced to address the label dependency issue in LDL tasks; second, the output differences between the black-box model and the explanation model are minimized within the generated local region to reduce computational complexity; third, linear constraints and penalty functions are incorporated to ensure that the predictions of the explanation model align with the distribution of the labels. In addition, the article provides extensive theoretical proofs regarding the stability and convergence of the algorithm, as well as its analytical solution form. The effectiveness of the algorithm is demonstrated through multiple experiments, including human experiments. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims:
1. For Equations 23 and 24, in the third equation, where does the 2 to the power of $f$ come from?
2. For Equation 64, does this expression satisfy the probability constraint, i.e., the sum of probabilities equals 1? Please provide a more detailed explanation.
3. For Equation 49, the meaning of the sigma symbol has not been explained. Please check if it is written correctly.
4. For Equation 39, there is a symbol error: the symbol $u$ seems to be incorrect.
Experimental Designs Or Analyses: Yes. I checked the soundness and validity of all experimental designs and analyses. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper proposes an improved LIME [1] algorithm, namely LIMEFLDL, to address the local interpretability issues of label distribution learning. The LIME algorithm is designed for single-label learning tasks. However, due to the label dependency in label distribution learning (LDL), LIME is unsuitable for direct application in LDL tasks.
This paper introduces the feature attribution distribution matrix to address this issue.
[1] Ribeiro, M. T., Singh, S., and Guestrin, C. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016: 1135-1144.
Essential References Not Discussed: There are no essential references that were not discussed. Other Strengths And Weaknesses: Strengths:
1. The authors provide proofs regarding the stability and convergence of the explanation algorithm, thereby increasing the credibility and correctness of the algorithm.
2. The authors compare the LIMEFLDL and PULIME algorithms on 8 datasets across 7 metrics, fully demonstrating the effectiveness of the proposed method.
Weaknesses:
1. There are some errors and unclear elements regarding mathematical symbols in the paper, which are detailed in the Theoretical Claims section.
2. The captions for figures and tables in the paper are not sufficiently detailed. For example, in Figure 1, the legends are not explained in the caption.
Other Comments Or Suggestions:
1. It is recommended to include statistical information in the comparison between LIMEFLDL and PULIME, such as how many datasets showed better performance of LIMEFLDL compared to PULIME under the KL metric.
2. In Table 1, what does the abbreviation RBF-LDL-LRR stand for? It is not found in the original text. Additionally, the text in Figure 10 is not clear.
3. In the abstract, on line 22, "To address the label dependency problem," the introduction of the label dependency problem is not sufficiently clear and can be confusing. Adding some rationale would improve this. Also, the initial value for the feature attribution matrix A is not found in the paper; this information could be added for completeness.
Questions For Authors:
1. The Introduction mentions the computational complexity issues of the LIMEFLDL algorithm compared to parallel use of the traditional LIME algorithm (PULIME). Is there any theoretical or experimental proof to support this?
2. Regarding the Y_mean in Equation 8, could it be influenced by the class distribution? It might be worth trying to adopt a class-related prior probability distribution.
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your detailed review of our paper; we greatly appreciate your comments and questions.

[Comment 1] About the errors and unclear mathematical symbols.
A:
1. Equation 23 should be modified to $\sum_{k=0}^{f} e^{\frac{(k-f)}{\sigma^{2}}}\frac{k}{f} \frac{f!}{2^{f}(f-k)!k!}$. This equation represents the probability of picking $k$ non-zero elements from $f$ elements: $\frac{f!}{(f-k)!k!}$ is the number of combinations selecting $k$ from $f$ elements, and $2^{f}$ is the total number of combinations. The same error is found in Eq. 24. We will correct them in the revised version.
2. What we want to describe is that if $\sum_{x \in \Omega} \Theta(x)>1$, Jensen's inequality (Eq. 61) implies $E(KL)\ge-\log (\sum_{x \in \Omega} \Theta(x))$, which is negative. Thus, when distributions violate normalization and $KL\in (-\log (\sum_{x \in \Omega} \Theta(x)),0)$, the bound's numerator ($\sum_{x}\Theta(x)-1$) and denominator ($\sum_{x} \Theta(x)$) directly quantify this violation.
3. In Eq. 49, we replaced the original symbol $\Sigma$ with $\Delta$ to avoid ambiguity with the concatenation operator $\Sigma$ used earlier. We will correct it in the revised version.
4. It should be a $v$-vector instead of $u$; we will correct it in the revised version.

[Comment 2] The captions for figures and tables in the paper are not sufficiently detailed.
A: We will add more details describing the figures and tables in the revised version. Fig. 1a: the x-axis shows exponentially increasing sample counts (log scale); Fig. 1b: the x-axis shows decreasing divergence between two black-box models; in both graphs the y-axis is the Top-20 Jaccard Index. Fig. 1a demonstrates convergence, as the Jaccard Index increases with exponentially increasing samples, thus illustrating the convergence of the interpretation algorithm. Fig. 1b shows that the Jaccard Index increases as the predictive distribution divergence between black-box models decreases, thus illustrating the stability of the interpretation algorithm.
[Comment 3] About the statistical information in the comparison between LIMEFLDL and PULIME.
A: We will give ranking statistics in the revised version, as follows.

|Model|Algorithm|Cheb.|Clark|Canb.|KL|Cos.|Inter.|Jac.|
|-|-|-|-|-|-|-|-|-|
|RBF-LDL-LRR|LIME|1.75|1.5|1.625|2|1.625|1.625|1.875|
||LIMEFLDL|1.25|1.5|1.375|1|1.375|1.375|1.125|
|LDL-SCL|LIME|1.625|1.5|1.5|1.875|1.5|1.5|1.625|
||LIMEFLDL|1.375|1.5|1.5|1.125|1.5|1.5|1.375|
|AA-KNN|LIME|1.625|1.625|1.625|1.5|1.625|1.625|1.75|
||LIMEFLDL|1.375|1.375|1.375|1.5|1.375|1.375|1.25|
|MEM|LIME|1.75|1.75|1.75|1.875|1.75|1.75|1.75|
||LIMEFLDL|1.25|1.25|1.25|1.125|1.25|1.25|1.25|

LIMEFLDL leads the rankings for the vast majority of measures; we will add this section to the revised version as space permits.

[Comment 4] About the abbreviation RBF-LDL-LRR and the text in Figure 10.
A: We enhanced the LDL-LRR model's feature extraction via a Gaussian kernel (renamed RBF-LDL-LRR). Figure 10 tests robustness by masking the weakest features and measuring fidelity before and after masking. We will add the name of the modified model and this description to the revised version to make it more accessible.

[Comment 5] About the introduction of the label dependency problem and the initial value for the feature attribution matrix A.
A: We will clarify the fundamentals of label dependency in the revised version: LDL handles multi-label samples where labels exhibit co-occurrence patterns (simultaneous changes) and dependency propagation (one label's presence affects others' distributions). Matrix $A$ uses uniform initialization.

[Comment 6] About the computational complexity of the LIMEFLDL algorithm compared to PULIME.
A: We analyze the computational complexity of LIMEFLDL and PULIME. For LIMEFLDL, sampling and forming the weight matrix $\Pi$ requires $O(m)$. Matrix multiplication contributes $O(mfr)$, and black-box model inference accounts for $O(mrk)$.
Element-wise matrix subtraction and Frobenius-norm operations each add $O(mr)$, with regularization terms contributing $O(fr)$. The total complexity is $O(mfr+mrk+mr+mr+fr+m)$. PULIME's per-label workflow involves sampling ($O(m)$), vector operations ($O(m)$ for subtraction/squaring and $O(mf)$ for multiplication), black-box inference ($O(mrk)$), and regularization ($O(f)$). With parallel computation, its total complexity is $O(mrf+mkr^2+mr+mr+mr+rf)$. PULIME's ridge regression solver has $O(f^3+mf^2)$ complexity, whereas LIMEFLDL uses L-BFGS optimization with $O(T(zr+mf))$ cost, where $z \approx 5$–$20$ (stored gradient pairs) and $T$ denotes the number of iterations.

[Comment 7] About the Y_mean in Equation 8 and a class-related prior probability distribution.
A: The idea of using a uniform distribution is to require the current feature selection to exceed the effect of a uniform distribution; of course, different prior distributions can be used for specific datasets so that the selected features are more representative.

---

Rebuttal Comment 1.1: Comment: Thanks for the rebuttal addressing my concerns. I have increased my score.
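The binomial bookkeeping behind the corrected Eq. 23 in Comment 1 above can be sanity-checked numerically: $\binom{f}{k}/2^{f}$ is the probability of drawing exactly $k$ non-zero entries out of $f$ under uniform sampling, so these probabilities sum to one. The choice of $\sigma$ below is illustrative, not taken from the paper.

```python
from math import comb, exp

f = 10
# C(f, k) / 2^f over k = 0..f is a valid probability mass function.
probs = [comb(f, k) / 2**f for k in range(f + 1)]
assert abs(sum(probs) - 1.0) < 1e-12

# Weighted sum of the same shape as the corrected equation
# (sigma = 1.0 is an arbitrary, illustrative choice):
sigma = 1.0
total = sum(exp((k - f) / sigma**2) * (k / f) * comb(f, k) / 2**f
            for k in range(f + 1))
assert 0.0 < total < 1.0   # a downweighted expectation, not a full pmf
```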
Exponential Family Variational Flow Matching for Tabular Data Generation
Accept (poster)
Summary: The paper introduces the application of Variational Flow Matching to tabular data generation. To extend VFM, the authors propose to represent the variational distribution in VFM as an exponential family. The motivation behind this proposal stems from the heterogeneous nature of tabular data; thus, the claim is that the exponential family is suitable for each data type commonly found in tables, i.e., Gaussian distributions for continuous variables like age or income, categorical distributions for discrete variables like education level, etc. A nice property of the exponential family is that it is "linear", whereby $E[x_1]$ can be computed in closed form, enabling loss functions for different column types. To compute the loss, the authors employed the concept of Bregman divergence. Hence, each exponential family induces its own Bregman divergence for a unified handling of heterogeneous tabular datasets.

## update after rebuttal

My "experimental" concerns have not all been addressed with clear evidence in this rebuttal. They have only claimed that "it will be included in the final manuscript"; hence, I still stand by my decision to reject the paper. "This is one example but overall, no ablations to be found. For instance, what is the impact of having a pure Gaussian distribution vs. your various distributions parameterized using the EFs etc.?" "Number of Function Evaluations (NFEs) and Efficiency" "Privacy Evaluation and MIA". None of these have been included in this rebuttal.

Claims And Evidence:
- Unified handling of mixed data via exponential families
- Each feature is modeled by a suitable exponential-family distribution
- EF-VFM turns the joint generative problem into separate moment-matching problems for each feature
- Connection to Bregman divergences
- VFM can be viewed as minimizing a Bregman divergence tailored to each feature's distribution
- Claims that it aligns with known relationships (e.g.
Gaussian → squared-error, categorical → cross-entropy are instances of Bregman divergences)
- However, ablation studies for these properties are missing.
- TabbyFlow achieves state-of-the-art (SOTA) results on standard tabular data benchmarks, improving over GAN, VAE, and diffusion-based baselines in both fidelity (realism) and diversity
- Concerns regarding missing TabDiff results. It is understood that the code is not publicly available, but it would be nice if TabDiff's results from their paper could be included in the tables
- Additionally, per the literature review regarding "Flow Matching in Tabular Data Generation", it would be nice to acknowledge or even compare to TabUnite (https://openreview.net/forum?id=Zoli4UAQVZ) too, since they employ CFM to generate tabular data.
- Privacy preservation is one of the most crucial aspects of applying tabular generation in the real world, where synthetic data is generated to protect sensitive information. However, no privacy-preserving metrics such as Membership Inference Attacks are conducted on the synthetic samples.

Methods And Evaluation Criteria: Methods and evaluation criteria are sound, aligning with TabSyn's work. Theoretical Claims: Proofs have been checked for Propositions 3.1 and 3.2 in the Appendix; they are mathematically sound and consistent with the established literature cited in the paper. Experimental Designs Or Analyses: Experimental designs align with TabSyn, making them sound. Analyses are straightforward and easily understood too. Supplementary Material: Supplementary material contains code. The code was not reviewed, as it is computationally expensive to run diffusion models. Relation To Broader Scientific Literature: The paper applies a varied form of VFM to satisfy the heterogeneity of tabular data. It also addresses previous tabular generative diffusion models (e.g., STASY, CoDi, TabDDPM, TabSyn).
Additionally, it incorporates literature from variational flow matching, exponential family statistics, and tabular data modeling. Essential References Not Discussed: To my knowledge, and as quoted above: "per the literature review regarding 'Flow Matching in Tabular Data Generation', it would be nice to acknowledge or even have comparisons to TabUnite (https://openreview.net/forum?id=Zoli4UAQVZ) too since they employ CFM to generate tabular data." Other Strengths And Weaknesses: **Originality** The paper is quite original in its approach. While it builds on known components (flow matching, exponential family), the particular combination – applying VFM to tabular data via exponential-family moment matching – is novel. **Significance** High-quality tabular data generation has important applications such as data augmentation and privacy preservation. But again, I am concerned that privacy preservation is not addressed. **Clarity** The paper is clear. It could be better if section titles such as "Connection to Flow Matching" were more informative regarding the context of tabular data. Other Comments Or Suggestions: N/A Questions For Authors: My main concerns are with the paper's empirical findings.
- The method assumes an interpolation scheme between $p_0$ and $p_1$. They chose the simplest (linear). If the data distribution $p_1$ is complicated, linear interpolation might traverse unrealistic areas of space. These are open questions that could be answered with synthetic datasets exploring the properties of their design choices.
- No analysis of privacy preservation. Methods like diffusion and flows can memorize training data; thus, I am hoping to see Membership Inference Attacks on the synthesized data to assess privacy.
- This is one example, but overall, no ablations are to be found. For instance, what is the impact of having a pure Gaussian distribution vs. your various distributions parameterized using the EFs etc.?
- No discussion of training duration, training convergence, or sampling NFEs. While TabbyFlow is the top performer, the margins over the best diffusion model (TabSyn) are relatively small (fractions of a percent) — even smaller if we bring TabDiff into the conversation. Additionally, TabbyFlow seems to underperform across all datasets on MLE.

Code Of Conduct: Affirmed. Overall Recommendation: 2
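The Bregman-divergence identities invoked in the review (Gaussian → squared error, categorical → cross-entropy, i.e. KL up to the entropy of the target) can be checked numerically. This is a generic textbook check, not the paper's code: $D_\phi(p, q) = \phi(p) - \phi(q) - \langle \nabla\phi(q), p - q \rangle$.

```python
import numpy as np

def bregman(phi, grad_phi, p, q):
    """Generic Bregman divergence D_phi(p, q)."""
    return phi(p) - phi(q) - np.dot(grad_phi(q), p - q)

# phi(x) = 0.5 ||x||^2  ->  squared Euclidean error
p = np.array([1.0, -2.0, 0.5])
q = np.array([0.3,  1.0, 2.0])
sq = bregman(lambda x: 0.5 * x @ x, lambda x: x, p, q)
assert np.isclose(sq, 0.5 * np.sum((p - q) ** 2))

# phi(p) = sum p log p (negative entropy) -> KL(p || q) on the simplex,
# i.e. cross-entropy(p, q) minus the entropy of p.
p2 = np.array([0.2, 0.5, 0.3])
q2 = np.array([0.4, 0.4, 0.2])
negent = lambda x: np.sum(x * np.log(x))
kl = bregman(negent, lambda x: np.log(x) + 1.0, p2, q2)
assert np.isclose(kl, np.sum(p2 * np.log(p2 / q2)))
```

Each exponential family's log-partition function plays the role of $\phi$ (via its convex conjugate), which is what makes a single loss template cover heterogeneous column types.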
Rebuttal 1: Rebuttal: Dear reviewer FzAE,

We thank you for your effort in reviewing our work. Moreover, we appreciate you mentioning the originality of our exponential-family formulation and the value of the Bregman divergence connections. We reply to the points raised in the review here:
- Regarding **TabDiff and TabUnite**: while TabDiff's code and data were not publicly available for reproducibility at the time of submission, we now include its reported results (where applicable) and discuss the challenges of direct comparison. We also cite and discuss TabUnite explicitly, as we agree the work is related; we originally excluded it because it is available only as a withdrawn OpenReview submission.
- Your **privacy** concerns are well-founded, and we have now addressed this gap by evaluating TabbyFlow using the Distance to Closest Record (DCR). With this metric, we aim to verify that the synthetic individuals in the generated data are not simple copies of, or the result of simply adding noise to, the real individuals in the original data. The DCR for a given synthetic individual is defined as the minimum distance between that individual and every original individual. We chose this metric as it has been used by TabSyn and TabDiff, thus allowing a fair comparison with those methods on this task. Our method offers competitive protection without sacrificing fidelity. *See the table below.*
- As for the **linear interpolation**, following a comment made to reviewer *RH9Z*, the linearity assumption on the conditional velocity field is in the endpoint, which means it can be any function linear in $x_1$; e.g., all diffusion-based models, such as flow matching, diffusion models, or other models that combine the injection of Gaussian noise with blurring, satisfy this assumption. Moreover, a linear conditional velocity does not imply we learn 'linear' dynamics, and as seen in many settings, flow matching and diffusion can learn highly complex dynamics.
At last, and connected to the previous answer, the ODE formulation is indeed compatible with other geometries (as done in Riemannian (Variational) FM and Metric FM) and with SDEs (see the VFM paper or Albergo 2023 for the stochastic interpolant formulation). As we notice this is a recurring confusion among the reviewers, we will add a section emphasising this fact in the final version of the paper.
- Last, though we did not make this point clear enough in the first version of the work, TabbyFlow uses *significantly fewer NFEs* during inference, a point we elaborate on in our response to reviewer NBdr.

**Comparison of DCR across five datasets.**

| **Method** | **Adult** | **Default** | **Shoppers** | **Beijing** | **News** |
|-------------|-----------------|------------------|------------------|------------------|----------------|
| TabDDPM | 51.14±0.18 | 52.15±0.20 | 63.23±0.25 | 80.11±2.68 | 79.31±0.29 |
| TabSyn | 50.94±0.17 | 51.20±0.28 | 52.90±0.22 | 50.37±0.13 | 50.85±0.33 |
| TabDiff | 50.10±0.32 | 51.11±0.36 | 50.24±0.62 | 50.50±0.36 | 51.04±0.32 |
| TabbyFlow | 50.32±0.16 | 50.82±0.27 | 50.17±0.32 | 50.94±0.13 | 50.83±0.29 |

Thanks again for the time taken to review our work and the useful comments.

---

Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I am still unconvinced regarding the interpolation schemes and my point on "a pure Gaussian distribution vs. your various distributions parameterized using the EFs". These questions have not been answered with ablations/experiments. The same is also the case for NFEs, training/sampling duration, and training convergence. Lastly, existing privacy ML literature such as [1] and [2] has conducted extensive research highlighting the "Inadequacy of Similarity-based Privacy Metrics" such as DCR. Thus, conducting MIAs per the initial review would strengthen your case for privacy preservation. [1] Ganev, Georgi et al.
"The Inadequacy of Similarity-based Privacy Metrics: Privacy Attacks against "Truly Anonymous" Synthetic Datasets". [2] Ward, Joshua et al. "Data Plagiarism Index: Characterizing the Privacy Risk of Data-Copying in Tabular Generative Models".

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for their follow-up and for highlighting three remaining concerns:
- The interpretation and implications of our use of exponential family (EF) distributions compared to "pure Gaussians", and the nature of the interpolation scheme.
- The computational efficiency of our method, specifically regarding the number of function evaluations (NFEs) and training/inference dynamics.
- The adequacy of our privacy evaluation metrics and the potential use of membership inference attacks (MIAs).

We address these points below.

### 1. **Interpolation Scheme and Use of Exponential Families**

We would like to emphasize that the linear interpolation assumption made in our method is the standard setup in the flow matching literature, where it defines a conditional trajectory between endpoints. While this conditional interpolation is linear, the aggregated dynamics across the entire data distribution can be arbitrarily complex. In fact, all flow matching and diffusion-based models assume such linearity in their conditional paths. Similarly, the use of a Gaussian distribution, parameterized via EF sufficient statistics, is a standard way to model a distribution over linear trajectories at each time point. As these parameters evolve over time, they can represent highly non-linear and complex generation dynamics. This is precisely what the Variational Flow Matching (VFM) framework formalizes. To explore the impact of using different EF distributions, we performed an ablation in a continuous setting, comparing the use of Gaussian and exponential distributions for parametrizing the flow.
Results from this toy experiment (now included in the Appendix) indicate that these alternatives yield reasonable performance as well. We note that for discrete variables, the categorical distribution is essentially the only EF option. In the conclusion, we now outline further research directions involving more complex EFs (e.g., Wishart flows over covariance matrices).

### 2. **Number of Function Evaluations (NFEs) and Efficiency**

We now explicitly report the NFEs used by our method compared to diffusion-based baselines. Our model operates with only 100 NFEs, in contrast to the 1000 typically used in diffusion models. This tenfold reduction leads to significantly faster inference. Moreover, we highlight that diffusion model performance degrades substantially when restricted to only 100 NFEs, while our model retains its performance due to the deterministic nature of its integration. We have added this comparison to the Appendix and will also include it in the main table of the final version.

### 3. **Privacy Evaluation and MIA**

We understand the reviewer's concerns regarding the limitations of similarity-based metrics like Distance to Closest Record (DCR). We agree that more rigorous approaches such as membership inference attacks (MIAs) or the Data Plagiarism Index provide stronger guarantees. At the time of rebuttal, we prioritized consistency with prior work (e.g., TabSyn, TabDiff) by using DCR to enable direct comparison. However, in response to your suggestion, we have now computed the Data Plagiarism Index for our model and added a discussion of its implications to the Appendix. We also plan to implement MIA-based evaluations in the final version, as we agree that they provide a more robust perspective on privacy preservation. We thank the reviewer again for their thoughtful feedback. We hope these additional experiments and clarifications address the remaining concerns.
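The DCR metric debated in this thread (the minimum distance from each synthetic record to any real record) can be sketched in a few lines. This is an illustrative implementation on random data, not the TabSyn/TabDiff evaluation protocol, which also involves normalization and a holdout comparison.

```python
import numpy as np

def dcr(synthetic, real):
    """Distance to Closest Record: for each synthetic row, the minimum
    Euclidean distance to any row of the real data."""
    # (n_syn, n_real) pairwise distances via broadcasting
    d = np.linalg.norm(synthetic[:, None, :] - real[None, :, :], axis=-1)
    return d.min(axis=1)

rng = np.random.default_rng(0)
real = rng.normal(size=(100, 4))
copied = real[:10] + 1e-6          # near-copies of training rows
fresh = rng.normal(size=(10, 4))   # independently sampled rows

assert dcr(copied, real).max() < 1e-3   # memorized rows sit on real records
assert dcr(fresh, real).min() > 0.0     # genuinely new rows keep a distance
```

A DCR distribution concentrated near zero signals memorization, which is exactly the failure mode the similarity-metric critiques ([1], [2] above) argue DCR can still miss for subtler forms of data copying.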
Summary: In this work, the authors propose a new method called Exponential Family Variational Flow Matching, which adds a variational formulation on top of VFM, letting them leverage a sufficient statistics / moment matching procedure to obtain a probabilistic generative modelling framework. The exponential family perspective enables them to formulate a single problem in which all the different kinds of data seen in tabular data appear as individual cases, enabling their joint modelling rather than treating them individually. They also make connections between the objective from their formulation and Bregman divergence minimization. The framework is tested on some popular tabular datasets, and its performance is compared with recent state-of-the-art probabilistic modelling frameworks: CTGAN, TVAE, GOGGLE, TabSyn, etc. Claims And Evidence: The claims sound fine to me; I would have liked to see a clearer discussion of modelling assumptions and limitations, as there were clearly many made. Methods And Evaluation Criteria: The methods and evaluation criteria used in the paper make sense, and it is good to see results reported for many metrics such as C2ST, alpha recall, and so on, but I felt that they could have used more downstream tasks such as missing value imputation and compared the results to TabSyn, which is clearly the best of the baseline methods. Theoretical Claims: The theory, equations, and derivations in the paper looked correct to me. The discussion on connections between Bregman divergences and flow matching objectives is enlightening, and the supporting theorems add value to the paper. Experimental Designs Or Analyses: The experimental design looked fine to me. The authors could produce a spider plot similar to Fig. 1 of Zhang ICLR 24. Supplementary Material: The supplementary material looked fine to me; I only did one quick pass of it.
Relation To Broader Scientific Literature: The paper builds on the earlier work on flow matching objectives for tabular data parameterized with a Transformer architecture (Eijkelboom et al. 2024), and is contemporary with other diffusion-based approaches such as TabSyn. Essential References Not Discussed: I think the paper covered most references I could think of after doing a bit of literature review. Although this work is closely related to generative modelling, some recent work on classification with tabular data could be discussed too, for example Prior-Data Fitted Networks (Muller et al. 2022) and TabPFN, where Transformer-based architectures have done well on tabular data. Other Strengths And Weaknesses:
1. More work/discussion on other downstream tasks for generative modelling, such as missing value imputation, could improve the paper.
2. The theoretical discussion on connections between the Bregman divergence and the variational objective for CFM is a solid contribution of this work.
3. Experiment results could benefit from higher-dimensional test datasets and benchmarking (the highest was D=46).
4. The paper looks well written to me, and the story is cohesive.
5. The work does not state its limitations very clearly, e.g., the linearity of the conditional velocity field.
6. The paper introduces C2ST as an evaluation metric, which I had not seen earlier.
7. In the results in Table 3, the authors only bolded their own results, while TabSyn achieves almost the same performance, well within standard error intervals. Please also make the TabSyn result bold in the Average column.
Other Comments Or Suggestions: Please state in the caption of Table 3 how many runs were performed, as written above and below. I will wait for feedback from other reviewers who might be more up to date with the contemporary literature than me. Questions For Authors:
1. Is it possible to do classification with this approach? For example, one could have an additional row which can contain binary or multi-class labels.
If it is possible, then one could compare against Transformer-based approaches on tabular data classification and compare discriminative modelling vs. generative modelling. 2. Is it a limitation to consider only linear conditional velocity fields? 3. Does the framework only support an ODE-based formulation? Would non-linearity or an SDE-based formulation of the conditional velocity field make the problem intractable? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
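Regarding point 6 above: C2ST is the classifier two-sample test, where a classifier is trained to separate real rows from synthetic rows, and hold-out accuracy near 0.5 indicates the two distributions are hard to tell apart. A minimal 1-nearest-neighbour variant (my own illustrative sketch, not the paper's implementation) could look like:

```python
import random

def c2st_accuracy(real, fake, test_frac=0.5):
    """Classifier two-sample test with a 1-nearest-neighbour classifier.

    Labels real rows 0 and synthetic rows 1; accuracy near 0.5 on held-out
    data means the classifier cannot distinguish the two samples.
    """
    data = [(row, 0) for row in real] + [(row, 1) for row in fake]
    random.shuffle(data)
    split = int(len(data) * (1 - test_frac))
    train, test = data[:split], data[split:]

    def nn_label(x):
        # Label of the closest training row (squared Euclidean distance).
        return min(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))[1]

    return sum(nn_label(x) == y for x, y in test) / len(test)

random.seed(1)
real = [[random.gauss(0, 1)] for _ in range(200)]
fake_same = [[random.gauss(0, 1)] for _ in range(200)]  # same distribution
fake_far = [[random.gauss(8, 1)] for _ in range(200)]   # clearly different

acc_same = c2st_accuracy(real, fake_same)  # close to 0.5
acc_far = c2st_accuracy(real, fake_far)    # close to 1.0
```

A low C2ST accuracy is therefore "good" for a generator: the synthetic table is statistically indistinguishable from the real one under this classifier.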
Rebuttal 1: Rebuttal: Dear reviewer RH9Z, First, we'd like to express our gratitude for the thorough and extensive feedback on the paper. Our response to the points raised is as follows: - We agree that the **theoretical assumptions/limitations** should be made more explicit, especially as this seems to be a recurring point of confusion for the reviewers. To this end, we will include an extra section in the paper covering these assumptions and why they are typically satisfied. We also agree that adding figures would be beneficial to better explain our work, and as such we will include those in the final version as well. We will respond to specific points of confusion at the end of this rebuttal. - We also agree that more **downstream tasks** could have been included. To this end, also aligning with feedback from other reviews, we included an analysis on a privacy metric, where we show SOTA performance (for a discussion of the privacy metric, please see our response to reviewer *FzAE*). We hope this addresses the point raised, even though a task other than the one proposed was considered due to time constraints. - Next, thank you for pointing out the bold-font **typos**. These have been fixed in the final version of the paper, and we added a global ranking for all methods for easier comparison. We also added more experimental details. - Regarding the **questions to the authors**, _first_, though it is definitely possible to do the proposed task, we believe it to be out of scope for our work, as it is not part of the common benchmarks as far as we are aware. _Second_, we want to emphasise that the linearity assumption on the conditional velocity field is in the end point; it does not say the interpolation needs to be a straight line, but that it can be any function linear in $x_1$. For example, all diffusion-based models, such as flow matching, diffusion models, or other models that combine the injection of Gaussian noise with blurring, satisfy this assumption.
Moreover, a linear conditional velocity does not imply we learn 'linear' dynamics, and as seen in many settings, flow matching and diffusion can learn highly complex dynamics. _Lastly_, and connected to the previous answer, the ODE formulation is indeed compatible with other geometries (as done in Riemannian (Variational) FM and Metric FM) and with SDEs (see the VFM paper or Albergo 2023 for the stochastic interpolant formulation). Thank you once more for reviewing our work and for the useful comments.
Summary: This paper proposes a new method that introduces variational flow matching to table generation. Specifically, it incorporates the exponential family of distributions for mapping the table data to the prior, a more general form than flows starting from the widely used priors. Claims And Evidence: The main claim regarding the performance improvement of this paper should be that using different priors for different data modalities in tables better fits the needs of those modalities, especially the ones not well matched with the Gaussian distribution used in existing methods. Hence, more persuasive evidence should be given that, for those data modalities whose priors are changed in the proposed method compared to existing methods, performance on these modalities is distinctly improved. This is the main concern about the claims. Methods And Evaluation Criteria: The proposed method introduces variational flow matching, an updated version of flow matching more appropriate for multimodal problems, to table generation. This should make sense if the method could empirically improve the performance on widely used benchmarks. Hence, please refer to the review of the experimental designs for issues. Theoretical Claims: I did not check the correctness of all theoretical claims and assume they are correct. I will refer to the opinions of other reviewers and update my review accordingly. Experimental Designs Or Analyses: The experiment design roughly follows that in the existing literature, whose soundness is widely validated. However, [1] There seem to be many typos in the tables, in that highlights are given to scores that are not the best among all compared methods. For example, in Table 1 Magic, TabDDPM yields 1.01 while the highlighted TabSyn gets 1.03. There are many similar issues in all tables, raising concerns about the validity of the experiments. [2] The performance improvement brought by the proposed method seems to be minor, especially for precision and recall.
Additionally, the design is slightly different from the existing literature, in that this paper separates the trend and shape error rates in comparison, whereas the original TabSyn paper compares the error rates from column-wise and pair-wise correlation. Could the authors explain the reason for such differences? Supplementary Material: I did not check the theoretical proofs in the supplementary material and assume the correctness of the results. I am open to referring to other reviewers' opinions and updating my review accordingly on any potential issues. Relation To Broader Scientific Literature: To the best of my knowledge, this is the first flow matching method for table generation. Essential References Not Discussed: I do not cover any literature that should be additionally included. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: [1] There seem to be typos when highlighting the best performance in tables. In Table 6 - default, the best should be TabDDPM but the highlight is given to TabSyn. In Table 6 - Beijing, GOGGLE has a much lower RMSE than the highlighted TabSyn. Similar issues occur in Table 5 - Magic and Table 4 - Beijing. Questions For Authors: Please address my concerns on the exact effect of jointly modelling different modalities with VFM, the issues of experimental design, and the potential typos in the posted results. I will also update my review after checking the reviews of other reviewers. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer NBdr, Thank you for carefully examining and giving feedback on our work. We will reply to the points raised one by one: - Regarding the **typos** in our work, that is poor validation on our end; thank you for pointing this out. We made sure that not only are the bold fonts now correct, but we also added a global ranking for each method to make the comparison simpler to follow. - You rightly point out that our model **only marginally outperforms the current SOTA** in some instances. Though this is correct, we did not highlight an important benefit of our approach (and many other flow matching approaches) enough in the manuscript. Our model achieves this marginally better performance while not only being simpler to train - as has been the case between FM and diffusion alternatives in other settings - but especially while requiring significantly fewer NFEs (network function evaluations) than diffusion during inference. This means that inference with TabbyFlow is **faster than the other SOTA models**, without compromising on performance. We highlight this fact more clearly in the final version of the paper. - Regarding the **metrics**, you correctly point out that we discuss the error rates on Shape and Trend while TabSyn uses column-wise density estimation and pair-wise column correlation. We used the terminology from TabDiff, as it was the most recent work on tabular data, where the metrics are referred to as Shape and Trend. Shape corresponds to the column-wise density estimation, where one employs the Kolmogorov-Smirnov test for numerical columns and the total variation distance for categorical columns. On the other hand, Trend corresponds to pair-wise column correlation, where we use Pearson correlation for numerical columns and contingency similarity for categorical columns. We have further explained these metrics and the terminology in the experimental section of the paper.
Once again: thank you for your time reviewing our work and the useful comments provided. --- Rebuttal Comment 1.1: Comment: Thank you for the reply. I think my concerns on typos and performance have been well addressed. But my main concern lies in potential cherry-picking in the experiments, in that the metrics may have been carefully selected to maximize the advantage of the proposed method. Admittedly this is a good paper, so I give it a positive score. But I would like to see the same metric of TabSyn for the proposed method and TabSyn for a more comprehensive comparison. If so, I will raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for your comments regarding the typos and performance, and for the further questions. We will again address them point by point. First - and we now see the confusion - we want to emphasize that **our metrics and TabSyn's metrics are the same**. The only difference is how we named them; we picked the names from TabDiff. We have highlighted this difference in naming, such that the paper now explicitly reports '*Error rate (\%) of column-wise density estimation*' (also called **Shape**), and '*Error rate (\%) of pair-wise column correlation score*' (also called **Trend**). That is, we would like to emphasize that we did not cherry-pick the metrics and simply reported the standard metrics for tabular data. For the final version, we recomputed these metrics for all baselines, including TabSyn, as also provided in the tables in the responses to other reviewers. There is some variance with respect to the TabSyn paper results. In the TabDiff paper a similar variance was observed, and they show performance similar to what we obtained numerically. In line with another review, we have added TabDiff's results to the tables, where they were initially excluded due to the lack of available code to replicate the results. We hope this addresses your final open questions.
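To make the naming concrete, the two per-column "Shape" statistics described above can be sketched as follows (a minimal illustration of the standard definitions, not the benchmark's actual code):

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between the
    empirical CDFs of two numerical columns (used for Shape on numerics)."""
    def cdf(sample, x):
        return sum(v <= x for v in sample) / len(sample)
    points = sorted(set(a) | set(b))
    return max(abs(cdf(a, x) - cdf(b, x)) for x in points)

def total_variation(a, b):
    """Total variation distance between the category frequencies of two
    categorical columns (used for Shape on categorical data)."""
    cats = set(a) | set(b)
    return 0.5 * sum(abs(a.count(c) / len(a) - b.count(c) / len(b)) for c in cats)

# A perfectly matched numerical column scores 0.
print(ks_statistic([0.1, 0.4, 0.9], [0.1, 0.4, 0.9]))  # 0.0
print(total_variation(["a", "a", "b"], ["a", "b", "b"]))
```

"Trend" is computed analogously but pair-wise, using Pearson correlation for numerical column pairs and contingency similarity for categorical ones.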
Summary: They propose TabbyFlow, a variational flow-matching method for generating mixed tabular data. An advantage over previous methods is that the exponential-family version allows modelling mixed data (continuous and categorical) and, contrary to other methods, even other types of data such as Poisson counts. The theory aspect of the paper is strong. They derive interesting connections to Bregman divergence. The approach that they propose has strong theory and generality and thus could be extended to various types of data not explored in the paper. The evaluation includes multiple metrics on diverse datasets. Claims And Evidence: Claims: - their method allows modelling over mixed data and more (true) - EF-VFM objective and Bregman divergences (true, good theory to support it) - state-of-the-art performance on benchmark tabular datasets (true for the methods tested against, but lacks a distributional metric and flow-matching baselines) Methods And Evaluation Criteria: It is strange that baseline comparisons use diffusion, VAEs, and GANs, but not flow matching. Since their approach is a flow-matching one, they should be comparing to at least one flow-matching mixed-data generator baseline, and such baselines exist in the literature. A few exist: https://arxiv.org/abs/2309.09968, https://openreview.net/pdf?id=Zoli4UAQVZ (although the latter might be too recent to be included; I'm not sure what the ICML rules are about concurrent work). The metric "error rate" is not explained; which one is it, the KS test or TVD? Why not just report both metrics separately? And similarly for trend: why not show, separately and together, the correlation for numerical pairs and the contingency similarity for categorical pairs? At least having those in the appendix would be helpful to see how methods differ w.r.t. categorical vs. numeric features. Please clarify what the alpha-precision and beta-recall metrics are, and not just give an intuitive idea of what they measure. Add ranking to Tables 1 and 2, since you include it in Tables 3 and 4.
The paper is missing a distributional metric, which is fundamental to the task being solved, namely tabular data generation. Assessing performance should be done first and foremost by looking at the distance between real and fake distributions at the data level (not per feature, like done in Table 1). For this, the Wasserstein distance or Maximum Mean Discrepancy (MMD) can be used. In https://arxiv.org/abs/2309.09968, they used a specific preprocessing to ensure that the Wasserstein distance works on both categorical and numeric data. Theoretical Claims: The variational formulation is correct and convergence is ensured by minimizing the KL divergence. Experimental Designs Or Analyses: See "Methods And Evaluation Criteria" Supplementary Material: Implementation details and data details are correct. Relation To Broader Scientific Literature: Overall, the contributions correctly refer to the relevant prior work. The only thing is that, since the method uses flow matching, the intro should not just mention diffusion methods but also other references to flow-matching tabular generators (a few exist in the literature). Essential References Not Discussed: See "Relation To Broader Scientific Literature" Other Strengths And Weaknesses: . Other Comments Or Suggestions: . Questions For Authors: already asked Code Of Conduct: Affirmed. Overall Recommendation: 4
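As an illustration of the data-level comparison suggested here, a minimal MMD estimate on preprocessed rows (my own stdlib-only sketch; rows are assumed to already be numeric vectors, e.g. standardized numerics concatenated with one-hot-encoded categoricals) could look like:

```python
import math

def rbf(u, v, gamma=1.0):
    """Gaussian (RBF) kernel between two preprocessed rows."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(u, v)))

def mmd2(xs, ys, gamma=1.0):
    """Biased estimate of squared MMD between real rows xs and synthetic rows ys."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2.0 * kxy

# Identical samples give exactly zero; distant samples give a large value.
real = [[0.0, 1.0], [1.0, 0.0]]
print(mmd2(real, real))  # 0.0
print(mmd2(real, [[9.0, 9.0]]))
```

Unlike the per-feature scores in Table 1, this compares whole rows, so it is sensitive to cross-feature dependence as well.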
Rebuttal 1: Rebuttal: Dear reviewer rhRe, Thank you for your thoughtful and constructive review. We are glad that the theoretical contributions and generality of the method came across clearly. We will reply to the points raised in the review pointwise: - We fully agree that including **flow-matching baselines** is important for a fair evaluation. We have now added results from the flow-based gradient-boosted tree model. We also now explicitly discuss TabUnite, which we initially excluded, as it was only available on OpenReview as a withdrawn submission. We agree that it is relevant, and we now cite and briefly discuss it in the related work section, but we are happy to report that TabbyFlow is still on par with SOTA approaches, and achieves this performance with fewer forward evaluations than e.g. the diffusion-based approaches (see the rebuttal to NBdr in case of interest). - We also acknowledge that some of the **metrics** and terminology were unclear in the original submission. In the revised version, we have clarified the definitions of the error rates of the trend/shape scores, and of alpha-precision/beta-recall, both in the main text and the appendix. We used the aggregated values, as has previously been done in TabSyn and TabDiff, but we acknowledge it can be useful to disaggregate into numerical vs. categorical and, as such, provide these results too. We observe that our approach performs well on both modalities. - You rightly point out the absence of **distributional distance metrics**, which can be fundamental to the problem. In response, we now report the Wasserstein distances between the synthetic dataset and the original data, following recent benchmarks by reporting the distance to the train set and the test set. These additions confirm the strength of our approach from a distribution-matching perspective. - All result **tables** now include consistent bolding and method-wise ranking across datasets.
| **Model** | WD (train) | WD (test) | |-------------|--------------------------|-------------------------| | TVAE | 4.6±0.3 | 4.9±0.1 | | CTGAN | 7.8±0.2 | 7.7±0.1 | | TabDDPM | 3.1±0.6 | 3.9±0.5 | | TabSyn | 2.2±0.4 | 3.0±0.3 | | TabDiff | 2.4±0.3 | 2.9±0.2 | | TabbyFlow | 1.7±0.7 | 2.1±0.4 | **Table:** Wasserstein Distance (WD) between synthetic data set and train/test datasets. Lower values are better. Thank you once again for the kind words and time to review our work. --- Rebuttal Comment 1.1: Comment: Thank you for addressing my comments. This is a good paper. --- Reply to Comment 1.1.1: Comment: Thank you for your kind words and for taking the time to review our work. We appreciate your feedback and are glad that you found the paper to be of good quality.
Online Detection of LLM-Generated Texts via Sequential Hypothesis Testing by Betting
Accept (poster)
Summary: The paper focuses on developing an algorithm for online detection of texts generated by large language models (LLMs). The main contribution is an algorithm based on sequential hypothesis testing techniques, which allows for quick and accurate identification of LLM-generated texts in a streaming setting. The algorithm leverages score functions from existing offline detection methods and uses a betting framework to accumulate evidence for or against the null hypothesis that the text source is human-written. The authors conduct comprehensive experiments using various score functions and datasets to demonstrate the effectiveness of their method. Claims And Evidence: The claims made in the paper, such as the ability to control the false positive rate and provide an upper bound on the expected detection time, are supported by clear and convincing evidence. The authors present theoretical propositions and empirical results from experiments that validate these claims. Methods And Evaluation Criteria: Methods: The use of sequential hypothesis testing and betting techniques is a novel approach to the online detection of LLM-generated texts. The evaluation criteria, including the false positive rate and rejection time, are relevant metrics for assessing the performance of the algorithm in a streaming setting. The authors also consider composite hypotheses and provide a detailed analysis of the algorithm's performance under different scenarios. Theoretical Claims: The paper includes several theoretical claims, including the control of the false positive rate and the upper bound on the expected detection time. The proofs for these claims are provided in the appendix.
The proofs appear to be correct and well-reasoned, with clear explanations of the assumptions and the logical steps involved. Experimental Designs Or Analyses: The authors use a variety of score functions and datasets to test the algorithm's performance, which provides a comprehensive evaluation of its effectiveness. The experiments are repeated multiple times to account for randomness, and the results are averaged to provide reliable estimates of the algorithm's performance. The analysis of the results is thorough, and the authors provide clear explanations for the observed trends. Supplementary Material: The supplementary material was reviewed, including the detailed proofs of the theoretical claims and the additional experimental results. The supplementary material provides valuable additional information that supports the main claims of the paper. However, some parts of the supplementary material could be better organized to make it easier for readers to find the information they need. Relation To Broader Scientific Literature: None. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: The proposed method has practical applications in areas such as content moderation, academic integrity, and social media analysis. Weaknesses: The online optimization and betting framework may have higher computational complexity compared to traditional methods, especially when dealing with large-scale datasets. Other Comments Or Suggestions: The paper discusses using pseudo base-to-new partitions to train the detector, but it's not clear whether the performance of the model is sensitive to the number of partitions. While the ablation study shows performance increases with more partitions, there might be an optimal number beyond which performance plateaus or even deteriorates. A more detailed analysis of this aspect could provide better guidance on how to choose the number of partitions (K) for practical applications. Questions For Authors: 1.
How does the algorithm handle cases where the score function outputs are highly variable or noisy? Would this affect the algorithm's ability to control the false positive rate? 2. The experiments in the paper are conducted on specific datasets and LLMs. How well does the method generalize to other types of texts or different LLMs? 3. The paper suggests that the online optimization and betting framework may have higher computational complexity compared to traditional methods, particularly when handling large-scale datasets. Could the authors provide a more detailed discussion on the computational complexity of the proposed framework? 4. Specifically, how does it scale with larger datasets, and are there any strategies for mitigating this complexity in practical applications? 5. The paper mentions that performance improves with an increasing number of pseudo base-to-new partitions in the detector training. However, it remains unclear whether there exists an optimal number of partitions. Specifically, is there a point beyond which adding more partitions leads to diminishing returns, or could the performance even start to deteriorate? A more thorough analysis of this relationship would be helpful in understanding the balance between the number of partitions and the model’s effectiveness. 6. Additionally, could the authors provide more detailed guidance on how to select the number of partitions (K) for practical applications? Understanding how to choose K in real-world scenarios would be beneficial, particularly in optimizing performance while avoiding unnecessary complexity or overfitting. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We first would like to thank the reviewer for the positive feedback and good suggestions. Here are our responses. **(Supplementary material.)** We would like to clarify that the supplementary material includes a detailed README.md file, which provides a clear overview of the codebase, experiment organization, score functions used, and testing scenarios. **(Computational complexity.)** Thank you for raising this point. It is unclear which specific “traditional methods” the reviewer is referring to, so we address both possible interpretations below. - If the reviewer is referring to offline detection methods (e.g., Fast-DetectGPT, LRR, etc.). We would like to clarify that our method is built on top of existing offline detectors, and is designed to complement rather than compete with them. Our focus is to build an online framework that adapts existing offline detectors to an online setting where texts are observed in a streaming fashion. In this case, the only additional computation introduced by our method is a 1D Online Newton Step (ONS) update. For example, if we use the score function of Fast-DetectGPT to compute the text score, **the computational time will be that of the underlying offline detector plus a 1D online Newton update, which incurs only light overhead.** - On the other hand, if the reviewer is referring to the fixed-time permutation test baseline that we include for comparison, we note that permutation-based methods typically involve multiple resampling and recomputation of the test statistic (details in Appendix G), which is often computationally heavier than our lightweight online updates. In our case, each round of detection only involves computing a single score and performing a 1D update, making it much more efficient in streaming settings. **(Partitions.)** We would like to clarify that our method is built on top of existing offline detectors, and is designed to complement rather than compete with them. 
Thus, our method does not involve base-to-new partitioning, nor does it assume or require any particular structure in the score function. Our contribution lies in providing a general online detection framework that takes text scores evaluated by a chosen score function of an existing offline detector as input, and performs sequential hypothesis testing with rigorous statistical guarantees. Specifically, our method ensures type-I error control at level $\alpha$ and a power of 1. While some of the offline detectors we incorporate, such as Fast-DetectGPT, may internally use base-to-new partitioning as part of their score computation, this is entirely orthogonal to our approach, which operates solely on the text scores. Therefore, the choice of how the score function is computed, including the number of partitions or any use of base-to-new structure, is not within the scope of our framework. **(FPR control for noisy score inputs.)** Our theoretical guarantee on controlling the type-I error (false positive rate) relies on **Ville's inequality** applied to a **non-negative supermartingale** wealth process. This guarantee holds **regardless of the variance or noisiness** of the score function outputs, as long as the null hypothesis holds, i.e., the expected difference in scores satisfies $\mu_x=\mu_y$ or $|\mu_x - \mu_y| \leq \epsilon$ under the composite null. Specifically, as shown in Appendix E of our paper, the wealth process $W_t = \prod_{i=1}^t (1 - g_i \theta_i)$ is a supermartingale under $H_0$, where the score difference is $ g_t = \phi(x_t) - \phi(y_t) $ and $ \theta_t $ is the adaptive betting fraction. Applying the randomized Ville's inequality [1, 2] to this process ensures that $\mathbb{P}(\exists t: W_t \geq 1/\alpha \text{ or } W_T \geq Z/\alpha) \leq \alpha,$ where $Z\sim\text{Unif}(0,1).$ The above property of the wealth process guarantees control of the type-I error at level $\alpha$.
While noisy scores may increase the rejection time under $H_1$, they do not compromise our theoretical type-I error guarantee under $H_0$. [1] Ville, J. Etude critique de la notion de collectif. Gauthier-Villars, Paris, 1939. [2] Ramdas, A. and Manole, T. Randomized and exchangeable improvements of Markov's, Chebyshev's and Chernoff's inequalities. arXiv preprint arXiv:2304.02611, 2023. **(Generalization to other types of texts or different LLMs.)** We would like to clarify that we also have more experimental results for additional datasets such as WritingPrompts (stories) and PubMed (long-form answers written by human experts) in Appendix H. Furthermore, we have tested scenarios where the target sequence of texts is of a different domain/topic from that of the prepared human-written texts (e.g., Figure 8). Besides, we extend our method to scenarios where texts from the unknown source are produced by various LLMs (see Figure 12(a)), and where the unknown source posts a mixture of human-written texts and LLM-generated texts (see Figure 12(b)). Our method consistently performs well in the above scenarios.
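To make the argument above concrete, here is a small self-contained simulation of the wealth process (an illustrative sketch that uses a fixed betting fraction in place of the adaptive ONS update in the paper; scores are assumed bounded in [0, 1] so the wealth stays positive):

```python
import random

def wealth_path(scores_x, scores_y, theta=0.5):
    """Betting wealth W_t = prod_{i<=t} (1 - theta * (phi(x_i) - phi(y_i)))."""
    w, path = 1.0, []
    for sx, sy in zip(scores_x, scores_y):
        w *= 1.0 - theta * (sx - sy)
        path.append(w)
    return path

random.seed(0)
alpha, trials, n = 0.05, 200, 500

# Under H0 both streams share the same score distribution, so the wealth is a
# (super)martingale and Ville's inequality bounds P(sup_t W_t >= 1/alpha) by alpha.
h0_rejections = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    ys = [random.random() for _ in range(n)]
    if max(wealth_path(xs, ys)) >= 1.0 / alpha:
        h0_rejections += 1

# Under H1 the mean scores differ, so the wealth grows and crosses 1/alpha.
xs = [random.uniform(0.0, 0.6) for _ in range(n)]  # mean score 0.3
ys = [random.uniform(0.4, 1.0) for _ in range(n)]  # mean score 0.7
h1_detected = max(wealth_path(xs, ys)) >= 1.0 / alpha

print(h0_rejections / trials, h1_detected)
```

The empirical H0 rejection rate stays at or below $\alpha$, while under H1 the wealth crosses the $1/\alpha$ threshold and the test stops, mirroring the two guarantees discussed above.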
Summary: The paper studies the problem of sequentially detecting whether a series of texts is LLM/machine-generated. To this end, they build on existing work on offline detection of LLM-generated text, which proposes a variety of score functions, as well as on recent work in sequential hypothesis testing. More concretely, they frame the problem as a sequential hypothesis testing problem where the null is that the expected score over the source's distribution is equal to that over a human-written distribution (which they assume access to). They particularize the algorithm and analysis from Chugg et al. 2023 to their setting, and subsequently enjoy a bound on the probability of false rejection (over all time) as well as an upper bound on the expected stopping time of the method. They run experiments with a variety of score function specifications from prior literature and three source models that show promising empirical results in terms of detection speed and false discovery control for the proposed method, showing that testing by betting can be useful for this application of recent interest and relevance. Claims And Evidence: Yes, the claims are generally supported by evidence. One note is that the assumption on the score function is repeatedly referred to as “mild”, yet in my opinion it is not obviously that mild, so maybe this point could be discussed at more length or given more nuance. Additionally, maybe more care could be taken to analyze (or at least discuss) the predictiveness/usefulness of the upper bound on the stopping time. For example, if one uses the actual, true constants of the bound and plugs in reasonable values for the parameters, I am worried the resulting bound would be quite large. Methods And Evaluation Criteria: I think the experiments conducted are a good first step and useful for illustrating the abilities of testing-by-betting approaches on the application at hand.
The authors consider relevant baselines, as well as multiple score specifications and different LLM models generating the text. Of course, by only using two particular datasets of human text we only get very preliminary insight, and much additional work would be needed to properly evaluate machine-generated-text detection methods in a real-world, live setting. Theoretical Claims: The theoretical claims follow almost directly from prior work in sequential hypothesis testing by betting. Experimental Designs Or Analyses: I feel that the experimental design and analyses are intuitive and fitting for the application at hand. I appreciate the inclusion of both the alpha-spending baseline and the invalid baseline. Supplementary Material: I did not closely review the supplementary material, as the theoretical techniques are extremely similar to those of prior work that I am familiar with. Also, I personally would recommend cutting down the number of plots and tables in the Appendix to what is essential for the main messages to go through (as the current quantity is, in my opinion, slightly overwhelming). Relation To Broader Scientific Literature: The paper employs recent theoretical advancements in sequential hypothesis testing, most notably that of Chugg et al. 2023 (this is part of a flourishing line of work, for which the authors give a good list of references). The paper also ties to existing work on detecting machine-generated texts, by proposing a sequential approach, as well as by employing different scores proposed in various prior works. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: The core strength of this paper is that it applies the testing-by-betting framework to a very suitable and relevant application (detecting machine-generated text).
This application in my opinion makes much more sense as an online decision problem rather than an offline one, and therefore I think the paper is a good addition to the existing literature on detecting LLM-generated text. The assumption that the machine- and human-generated texts come i.i.d. from their respective distributions is quite significant and potentially too unrealistic for the scenario at hand. One other possible weakness is that it may make more sense to consider one-sided hypothesis tests and test for ‘>’ or ‘<’ rather than $H_0: \mu_x = \mu_y$. Other Comments Or Suggestions: One suggestion I have is considering one-sided hypothesis tests. To me, it would make sense to consider scores for which the higher they are, the more likely it is for the source to be an LLM. In that case, considering the null hypothesis to be $H_0: \mu_y > \mu_x + \epsilon$ would be more fitting. I find the testing for equality or near equality to be a bit unrealistic, as, whatever the score, I would expect different means for different subpopulations of humans, and it may be easier to hope for a score that maintains ordering (i.e. is able to cluster humans to one side and machines to the other). It would also possibly require a bit of additional modification of the techniques from Chugg et al. 2023 and the analysis. Typos and other small comments: - “is” should be “are” on line 73 right column I think - “guarantees” should be “guarantee” on line 229 left column - “perform” should be “performs” on line 369 left column - Figure 1 appears too low-quality to me, so maybe that can be fixed? Questions For Authors: My main question relates to the suggestion above: why would we have a null test for ‘=’ rather than ‘>’/‘<’? What are your thoughts on doing one-sided tests given the nature of this application? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We first thank the reviewer for your positive feedback and helpful suggestions. Below are our responses. **(Assumption on the score function.)** We agree that the assumption of the existence of a score function that produces distinguishable means for human-written texts and LLM-generated texts is critical. We describe this assumption as “mild” based on empirical observations. As discussed in Appendix G (Table 1) and Appendix H (Tables 11-12), we provide evidence showing that the empirical mean score differences produced by most of the adopted score functions are significant across multiple human-written text datasets and LLMs. Additionally, to address the case where the mean difference under $H_0$ is small but nonzero (i.e., when both sequences are human-written by different individuals), we have extended our theoretical analysis to the composite hypothesis setting (see Proposition 3.2). This formulation allows for a tolerance parameter $\epsilon$ and leads to a more realistic criterion. Our experiments are all conducted in this setting and show good performance, which validates the effectiveness of our method under our assumption. **(Expected stopping time bound.)** Regarding the upper bound on the expected stopping time, we acknowledge the reviewer’s concern that the bound might be loose when directly plugging in actual parameter values. We emphasize that this bound serves primarily as a worst-case guarantee. Nevertheless, our empirical results demonstrate that the algorithm often detects LLM-generated texts much earlier than the bound suggests in practice. Additionally, the bound also provides valuable insights into the stopping time by revealing which factors affect it and in what way, such as the mean score gap $\Delta$, the bound estimate $d_*$, and the user-specified significance level $\alpha$.
**(Real-world setting.)** We would like to clarify that we have more experimental results considering real-world applications in Appendix H for additional datasets such as WritingPrompts and PubMed. Furthermore, we have tested scenarios where the target sequence of texts is of a different domain/topic from that of the prepared human-written texts (e.g., Figure 8). Besides, we extend our method to scenarios where texts from the unknown source are produced by various LLMs (see Figure 12(a)), and where the unknown source posts a mixture of human-written texts and LLM-generated texts (see Figure 12(b)). The experimental results of the above scenarios show the effectiveness of our method. **(Overwhelming quantity.)** Thanks for your helpful suggestion! We will revise and streamline the appendix in future versions. **(One-sided hypothesis tests.)** We thank the reviewer for this insightful comment. We think it is relatively straightforward to modify the underlying method for the one-sided test that the reviewer kindly points out. For example, if the null hypothesis is $H_0: \mu_x < \mu_y$, then we can specify the wealth dynamic as $W_t = W_{t-1} \left( 1 - \theta_t (\phi(y_t) - \phi(x_t)) \right).$ Under the null $H_0$, we have $E[ W_t | F_{t-1} ] = W_{t-1} \left( 1 - \theta_t E[ \phi(y_t) - \phi(x_t)] \right) = W_{t-1} \left( 1 + \theta_t (\mu_x - \mu_y ) \right) \leq W_{t-1},$ where we used the fact that $\theta_t$ is $F_{t-1}$-measurable. This shows that the wealth process can be a non-negative supermartingale for this case. Therefore, we can apply Ville's inequality to show that this test is a valid level-$\alpha$ test. Furthermore, when the alternative $H_1$ is true, applying the online Newton method can help increase the wealth. On the other hand, we would like to point out that using this one-sided test requires the user to know beforehand that the score of human-written text is higher (or lower) than the score of machine-generated text.
The two-sided test does not require this assumption. Specifically, if the score function $\phi(\cdot)$ is not necessarily an affinity score (e.g., perplexity [1]), then the formulation in our paper might be more suitable. We agree that different means for different sub-populations of humans are more realistic. While we are primarily concerned with the sequential testing scenario of humans vs. LLMs, we believe that allowing for different sub-populations of humans and exploring potential application scenarios is a valuable future direction. [1] Yongqiang Ma, Jiawei Liu, Fan Yi, Qikai Cheng, Yong Huang, Wei Lu, and Xiaozhong Liu. AI vs. human: differentiation analysis of scientific content generation. arXiv preprint arXiv:2301.10416, 2023. **(Typos and other comments.)** We thank the reviewer for carefully pointing out these typos and the figure issue. We have corrected these. Figure 1 will be replaced with a higher-resolution version to improve its visual quality. We appreciate your attention to detail.
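To make the one-sided wealth dynamic in the rebuttal above concrete, here is a minimal Python sketch of a betting test with an ONS-style bet update (our own illustration, not the authors' implementation; the function name, the assumption that scores lie in $[0, 1]$, and the specific ONS step constant are ours):

```python
import numpy as np

def one_sided_betting_test(xs, ys, alpha=0.05):
    """Sequential test of H0: mu_x <= mu_y by betting (sketch).

    xs: scores of reference human texts; ys: scores from the unknown source.
    Both are assumed bounded in [0, 1].  The wealth follows
    W_t = W_{t-1} * (1 + theta_t * (phi(x_t) - phi(y_t))), a non-negative
    supermartingale under H0 whenever theta_t >= 0, so by Ville's inequality
    rejecting when W_t >= 1/alpha yields an anytime-valid level-alpha test.
    Returns the rejection round, or None if H0 is never rejected.
    """
    wealth, theta, a = 1.0, 0.0, 1.0
    ons_step = 2.0 / (2.0 - np.log(3.0))   # standard ONS constant
    for t, (x, y) in enumerate(zip(xs, ys), start=1):
        v = x - y                          # payoff in [-1, 1]
        wealth *= 1.0 + theta * v          # theta is F_{t-1}-measurable
        if wealth >= 1.0 / alpha:
            return t                       # reject H0: declare the source an LLM
        z = v / (1.0 + theta * v)          # gradient of log-wealth in theta
        a += z * z
        theta = min(max(theta + ons_step * z / a, 0.0), 0.5)
    return None                            # never rejected
```

Under the alternative (human scores systematically above the unknown source's), $\theta_t$ adapts upward and the wealth grows geometrically, so the test stops early; under the null, the constraint $\theta_t \geq 0$ keeps the wealth a non-negative supermartingale and caps the false-rejection probability at $\alpha$.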
Summary: This work has studied an online detection method for AI-generated texts, and it can identify texts from unknown source models. The proposed method mainly makes use of sequential hypothesis testing and has the advantage of being non-parametric. Comparison experiments with several baseline methods (e.g., DetectGPT, Fast-Detect, LRR) show the merits of the proposed method. ## update after rebuttal. I appreciate the authors for the rebuttal response, which partially addressed my concerns. However, I still have reservations about the motivation/necessity of such sequential detection scenarios. Besides, I am not convinced by the comparison with some offline detectors, e.g., Binoculars, which already performs well in terms of detection speed and accuracy. I would therefore maintain my rating. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes, the idea of using hypothesis testing by betting makes sense for the online detection scenario. Theoretical Claims: Yes, I've checked two propositions (3.1 and 3.2). Experimental Designs Or Analyses: Yes, I have checked the experimental part, which includes baselines and comparison results. Supplementary Material: Yes, reviewed Appendix A, B, G. Relation To Broader Scientific Literature: Yes, this work proposes a relatively novel way to conduct online detection of AI-generated texts. It enjoys faster detection speed, and might be more suitable for detecting streaming data (e.g., from social media). Essential References Not Discussed: NA. Other Strengths And Weaknesses: Strengths: The authors propose an online LLM-generated text detection method based on sequential hypothesis testing, capable of identifying texts generated by unknown-source LLMs. Additionally, this method is non-parametric and does not require assuming different prior distributions for human and LLM-generated texts. The proposed algorithm is rigorously justified through extensive theoretical and experimental analyses.
The content is comprehensive (though carefully reading nearly 50 pages is unrealistic, such thoroughness is necessary). The writing is well-structured, notation usage is correct, figures are visually appealing, and the construction process is easy to understand. From the perspective of evaluating the effectiveness of different detection algorithms, this paper is undoubtedly a well-founded and rigorous innovation. It parallels the Performance Profiles framework used in optimization research (https://arxiv.org/abs/cs/0102001), but currently, no theoretically guaranteed and objective evaluation methodology exists in AI-generated text detection. For example, evaluating detection algorithms based on rejection counts is an interesting idea. Weaknesses: In the Introduction, the authors emphasize that existing score-based detection methods are sensitive to threshold selection (lines 036-038, right column). However, considering representative real-time methods like Binoculars and Fast-DetectGPT, threshold adjustments do not seem to be frequently required for achieving robust detection across different domains and source models. This is even more evident for trained zero-shot methods like RoBERTa-Base/Large and ReMoDetect (NeurIPS 2024). Therefore, this may not be a significant weakness of prior detection methods, and the authors might need to further justify the necessity of an online detection approach. The paper appears to primarily discuss sequential hypothesis testing in the context of existing detection methods rather than developing a fundamentally new detection approach (i.e., the adaptation seems too direct). Many of the claimed advantages are inherent to sequential hypothesis testing (e.g., error control and ONS strategies). 
Considering that some existing detection methods have also applied hypothesis testing concepts (such as Raidar (ICLR 2024) and certain watermarking techniques), albeit with differences (e.g., sequential testing does not require a fixed sample size in advance), the novelty of this work seems somewhat limited. The study lacks evaluation in traditional attack scenarios, such as text rewriting, paraphrasing, style transfer, and multilingual settings—an essential step for validating a new detector. In the Introduction, the authors state that their method does not assume any underlying distribution for human or machine-generated texts (lines 104-108, left column). However, later in the paper (lines 141-148, left column), human and LLM-generated texts are assumed to originate from some distribution. This inconsistency may lead to misunderstandings. Other Comments Or Suggestions: No. Questions For Authors: I have some questions regarding this work: i) The authors categorize detection methods as either offline or online, grouping all real-time, training-free detection methods under offline detection. Could the authors clarify the distinction between real-time detection methods (e.g., likelihood-based methods and Fast-DetectGPT) and their concept of online detection? Based on the authors’ analysis of the null hypothesis, the performance of online detection heavily relies on offline detection. If the score function from offline detection fails to effectively distinguish between human and LLM-generated texts, the effectiveness of hypothesis testing will also degrade. Given that offline detection methods can already be highly effective, why is online detection necessary? This issue is particularly relevant to the attack scenarios mentioned in Weaknesses point 3. ii) What is the strategy for selecting the human reference sample x_t? The choice of reference samples directly affects the hypothesis testing results. 
The paper mainly considers news texts, but in real-world applications, how can we locate or curate suitable human text datasets to ensure that the test remains consistently effective? iii) Binoculars (ICML 2024) achieves high AUROC while maintaining a low false positive rate (FPR), which is also one of the goals of the proposed method. How does this method compare against Binoculars to further highlight its advantages? iv) The paper refers to *time*, which seems to indicate the number of detection steps. Could the authors also report the actual computational time required for detection? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for your careful reading and useful comments. Here are our responses. **(Necessity.)** Our emphasis is not on the frequency of adjustments, but rather on the fact that **most** offline detectors detect by comparing the text score with a **pre-determined** threshold, chosen via training and directly impacting accuracy. This applies to detectors like RoBERTa-Base/Large and ReMoDetect, which require supervised training and threshold-based classification (e.g., probability). While our method only uses an offline detector to score texts, **the threshold (i.e., the significance level $\alpha$) is specified according to the user’s needs (the type-1 error allowed)**. **(Novelty.)** While prior work such as Raidar leverages hypothesis testing with log-probability scores in an offline setting, our approach is, to our knowledge, **the first to provide a general framework for adapting existing score-based detectors to the online setting**. Our key contribution is introducing an **any-time valid, non-parametric sequential testing framework** with **statistical guarantees** and **bounded expected detection time**, which is not available in existing offline detectors. **(Attack scenarios.)** Our work does not propose a new score function, which would need to be evaluated under attacks. Rather, we focus on how to use existing detectors in an online framework. Some have already addressed attacks: DetectGPT and Fast-DetectGPT consider paraphrasing attacks (Mitchell et al., 2023, Bao et al., 2023), and DNA-GPT studies revision attacks (Yang et al., 2023). While such attacks may reduce the gap between human and LLM scores, they typically do not eliminate the difference between them. Therefore, our basic hypothesis, that their population means are different, is reasonable. **(Distribution assumption.)** Lines 104–108: *"... our approach is non-parametric, and hence ..."*.
That is, we make no assumption on the specific form of underlying distributions (e.g., Gaussian), which is standard for non-parametric methods (Balsubramani et al., 2015). Like most hypothesis testing methods, including non-parametric ones, we assume samples are drawn i.i.d. from unknown distributions (lines 141–148), without assuming their specific forms. **(Online vs. Real-time)** Scenarios are different. Some detectors predict **individual** samples efficiently, but they are still applied offline: each sample is scored and classified independently. In contrast, our online framework leverages sequential hypothesis testing, where samples are observed in a **streaming** fashion. Our goal is to obtain an anytime-valid level-$\alpha$ test (i.e., controlling type-1 error when the source is human) while controlling the time to reject $H_0$ when the alternative is true (i.e., controlling the time to correctly identify the source as an LLM). To the best of our knowledge, this is **a novel online detection scenario of detecting LLMs, and therefore existing works for offline classification cannot be directly applied to our online scenario.** For example, a naive way to adapt the offline detector in our scenario is that if at a certain point, the offline detector predicts that a sample is written by LLM, then it declares the source is LLM (and hence stops). However, this naive method will have a false positive rate of 1 when the number of rounds becomes sufficiently large, unless the offline detector never makes an error in recognizing human text. **(Human samples.)** The reference human texts need not match the domain of the observed samples, because the hypothesis testing framework does not impose any assumptions on their distribution. As shown in Appendix H, our method works well even when the domains of the reference and target texts differ (XSum, WritingPrompts, PubMedQA). **(vs. 
Binoculars.)** Our method is not an offline detector, but rather a general online detection framework with strong theoretical guarantees. While offline detectors optimize metrics like AUROC, we focus on making sequential decisions with any-time valid statistical guarantees. In principle, our method could operate on top of Binoculars’ score function in an online manner, providing early stopping and statistical guarantees that Binoculars alone does not offer. Therefore, the goals of our method are complementary to existing detectors. **(Computational time.)** Our method is designed to offer an online framework to offline detectors. **The computational time will be that of the underlying offline detector plus a 1D online Newton update, which incurs very light overhead.** The number of detection steps reflects the statistical notion of sample complexity, a key metric in sequential hypothesis testing. Our focus is on statistical efficiency (i.e., how much text is needed before making a decision with statistical guarantees), rather than raw computational time, which mainly depends on the score function used.
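The rebuttal's point about naively adapting an offline detector (stop as soon as any sample is flagged as LLM-written) can be made concrete with a one-line calculation (our own illustration, not from the paper): if the detector flags human text with per-sample false-positive rate p, the stop-on-first-flag rule errs within n rounds with probability

```python
def naive_stopping_fpr(p, n):
    """False-positive rate of "declare LLM at the first flagged sample":
    1 - P(no false flag in n i.i.d. rounds) = 1 - (1 - p)**n, which tends
    to 1 as n grows for any per-sample error p > 0."""
    return 1.0 - (1.0 - p) ** n
```

Even a 5% per-sample error yields a >99% false-positive rate over 100 rounds, whereas the betting test keeps the cumulative type-1 error at $\alpha$ uniformly over time.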
Hyperspherical Normalization for Scalable Deep Reinforcement Learning
Accept (spotlight poster)
Summary: The main claim of the paper is that a novel architecture (SimbaV2) can improve the scaling of RL algorithms. The benefits of using SAC and SimbaV2 are demonstrated across a variety of domains. Claims And Evidence: I find that the authors do an excellent job at demonstrating the empirical performance benefits of using SimbaV2 and SAC. The empirical evidence is overwhelming, across a large number of domains. However, the authors also make some more general claims about scaling (SimbaV2 enables better scaling) that are not as clearly justified. 1. SimbaV2 improves scaling of **RL algorithms** $\rightarrow$ The authors only demonstrate this claim for SAC. 2. SimbaV2 improves **scaling** of RL algorithms $\rightarrow$ The authors only demonstrate that SimbaV2 scales better in width than the original Simba, not SAC. The authors also don’t demonstrate this claim for depth (increasing the number of blocks). In the UTD dimension, the scaling results are less convincing. Looking at Appendix I, we see that often an increase in UTD offers no performance gain (and sometimes slightly harms). From many previous works, we know that this is still an improvement (naively increasing UTD often harms performance), but this isn’t exactly a strong defense of UTD scaling. As a baseline, the authors only compare against one setting of resets and the original Simba. 3. X design change is beneficial $\rightarrow$ The authors make a number of design choices to improve over the original Simba (listed in Section 4). Many of these design choices are not defended or motivated outside of empirical performance (Table 2), including the use of a distributional critic, which is specifically outlined as a key contribution in the abstract of the paper. Methods And Evaluation Criteria: The SOTA claims are well-defended: the authors cover a lot of popular benchmarks and compare against many SOTA algorithms.
As mentioned above, some of the evaluation of the broader claims made by the authors falls short (only a single base algorithm, limited baselines for analytical claims). Theoretical Claims: N/A. Experimental Designs Or Analyses: I find the analysis limited. In Figure 4, the authors demonstrate that SimbaV2 improves over the original Simba in a number of dimensions (feature norm, parameter norm, gradient norm, effective LR). However, it’s unclear to me whether any of these dimensions are necessary for stability, scaling, or better performance. Furthermore, it’s unclear which design choices contribute to these changes. Supplementary Material: The supplementary material is very thorough and a strength of the paper. I looked through some of the tables and figures. Relation To Broader Scientific Literature: A SOTA algorithm is obviously beneficial. UTD and scaling laws are very popular topics in the community right now. I think the authors could do a better job of comparing against existing work in this dimension. Essential References Not Discussed: Hussing, Marcel, et al. "Dissecting deep rl with high update ratios: Combatting value divergence." arXiv preprint arXiv:2403.05996 (2024). This paper analyses high UTD in more detail and also suggests l2 normalization as a solution. Other Strengths And Weaknesses: **Strengths:** So far, the construction of this review process has mostly forced me to list weaknesses, but I think the paper is a valuable contribution to the community. As a scientific paper, the authors make some overclaims, but as an empirical contribution, the authors are introducing a powerful, widely applicable algorithm that has been thoroughly tested. Given the impact we have seen from the same types of papers in the model-based space (DreamerV3, TD-MPC2), I see no reason why a model-free algorithm should be any less useful.
Furthermore, many of the design choices are likely widely applicable to other algorithms, although I believe the authors could do a better job defending that claim. **Weaknesses:** I learned very little from this paper, other than the fact that it works well. Other Comments Or Suggestions: Typos - Output Preidiction Line395 - ”resize” (inverse quotes) Line434 Questions For Authors: Empirical: - Does SimbaV2 scale in depth? (i.e., adding more blocks). - Does SimbaV2 work with other algorithms (besides SAC). Analytical: - Why does SimbaV2 (and each of its components) improve performance? - What was the motivation behind the design choices, especially the more subtle ones like Linear + Scalar? Other reviewers may disagree, but I would also be satisfied if some of the broader claims were reduced, and the contribution of the paper was reduced to just the specific SimbaV2+SAC algorithm. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer fmAi, Thank you for your thoughtful and constructive feedback. We address your concerns below and would be happy to clarify further. > **Question 4.1** The scalability claims are not fully justified: (1) Only SAC is tested, (2) only width scaling is shown, (3) UTD scaling shows limited benefit. (1) Algorithm: To evaluate generality beyond SAC, we ran SimbaV2 with DDPG on DMC-Hard and HBench-Hard (UTD = 2): | Method | DMC-Hard | HBench-Hard | |-|-|-| | SimbaV2 | 0.636 | 0.693 | | Simba | 0.649 | 0.445 | | MLP | 0.149 | 0.115 | SimbaV2 performs comparably to Simba on DMC-Hard and significantly better on HBench-Hard, confirming its effectiveness beyond SAC. (2) Depth scaling: We conducted additional experiments on DMC-Hard by varying the critic depth (1, 2, 4, 8 layers) using 5 random seeds. | Method | 1 | 2 | 4 | 8 | |-|-|-|-|-| | SimbaV2 | 0.525 | 0.729 | 0.740 | **0.743** | | Simba | 0.512 | 0.706 | 0.675 | - | Unlike SimbaV1, which degraded with depth, SimbaV2 improves consistently. This supports our view that effective regularization enables stable scaling in depth. Due to limited time, HBench-Hard results are still underway and will be included in the final manuscript. (3) UTD scaling: While simple tasks (e.g., Cartpole-Balance) saturate quickly, more complex tasks such as HumanoidBench-Hard continue improving with higher UTD (see Fig. 6). Notably, prior work [1, 2] shows that high UTD often degrades performance unless combined with weight reinitialization. SimbaV2 maintains stable learning at high UTD without reinitialization, which we believe is a key contribution. - [1] Sample-Efficient Reinforcement Learning by Breaking the Replay Ratio Barrier, ICLR'23. - [2] Bigger, Regularized, Optimistic: scaling for compute and sample-efficient continuous control, NeurIPS'24. > **Question 4.2** It's unclear whether the reported metrics are necessary for stability and scaling.
Many architectural choices in Section 4 are not motivated beyond empirical results. The architectural decisions in SimbaV2 are based on training instabilities observed in SimbaV1 and supported by prior literature. 1. Feature Norm: TD learning introduces an implicit bias toward increasing feature norm [1], which can lead to overfitting and a reduction in feature rank. Both effects are closely linked to loss of plasticity during training [2, 3]. 2. Parameter Norm, Gradient Norm, and ELR: As parameter norms grow, the ELR decreases, impeding gradient flow and leading to stagnation in learning [4]. This dynamic was observed in SimbaV1, particularly in the encoder. These insights motivated the following designs in SimbaV2: 1. Hyperspherical normalization to control feature norm 2. Weight projection to constrain parameter norm. 3. Distributional critic with reward scaling to stabilize gradient norm. Together, these components maintain a stable ELR across layers, preserve plasticity, and eliminate the need for weight reinitialization. We agree that this motivation was not clearly described and will revise the introduction to clarify these points. - [1] DR3: Value-Based Deep RL Requires Explicit Regularization, ICLR'22. - [2] Understanding and Preventing Capacity Loss in RL, ICLR'22. - [3] Dissecting Deep RL with High Update Ratios: Combatting Value Divergence, RLC'24. - [4] Normalization and effective learning rates in RL, NeurIPS'24. > **Question 4.3** > The use of a distributional critic is not defended. As shown in [this ablation](https://drive.google.com/file/d/15nCDgxhbPL20LnrPCa6k_gNYfoR5smzO/view?usp=sharing), removing the distributional critic and reward scaling degrades performance and leads to a sharp decline in gradient norm, which in turn reduces the ELR. This supports its role in stabilizing training and preserving plasticity. > **Question 4.4** What was the motivation behind Linear + Scaler? 
Linear + Scaler serves two purposes: (i) weight projection controls parameter norm and ELR, and (ii) the learnable scalar amplifies important features. Without the scalar, projection limits expressivity; without projection, training becomes unstable. Their combination is essential for balancing stability and flexibility. > **Question 4.5** I learned very little from this paper, other than the fact that it works well. Thank you for the candid feedback. Beyond empirical gains, our central insight is that stabilizing norm dynamics enables scalable, stable, and sample-efficient RL without relying on resets, a common workaround in the RL community. SimbaV2 offers a principled, architecture-level solution to this challenge. We also believe these insights may transfer to other domains. For example, recent [NLP work](https://arxiv.org/abs/2503.19206) suggests long pretraining reduces model plasticity. SimbaV2’s techniques may inform the design of more robust architectures for large-scale training in other domains. --- Rebuttal Comment 1.1: Comment: Thank you for the response. While I still feel that the analysis could be strengthened, the results are significant and convincing. I have increased my score.
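As a rough illustration of the hyperspherical constraints discussed in this thread, here is a minimal NumPy sketch of the "Linear + Scaler" idea (our own illustration under simplifying assumptions; it omits the authors' LERP, inverted-bottleneck, and initialization details, and the function names are ours):

```python
import numpy as np

def l2_normalize(v, axis=-1, eps=1e-12):
    """Project vectors onto the unit hypersphere along `axis`."""
    return v / (np.linalg.norm(v, axis=axis, keepdims=True) + eps)

def linear_plus_scaler(x, weight, scaler):
    """Hyperspherically constrained linear layer (sketch).

    Weight rows are re-projected onto the unit sphere (weight projection
    keeps the parameter norm, and hence the effective learning rate,
    stable), the input is normalized (feature-norm control), and a
    learnable per-output `scaler` restores the scale that the projection
    takes away."""
    x_hat = l2_normalize(x)
    w_hat = l2_normalize(weight, axis=1)   # row-wise weight projection
    return scaler * (w_hat @ x_hat)
```

Because both factors have unit norm, no sequence of gradient updates can blow up the feature or parameter norm; all remaining scale freedom lives in the explicit, learnable scaler.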
Summary: This paper introduces SimbaV2, an RL architecture that improves scalability and stability in deep RL. The authors use hyperspherical normalization to control weight and feature norm growth, alongside distributional value estimation with reward scaling to maintain stable gradients. Using SAC as the base algorithm, SimbaV2 outperforms existing RL methods across a wide range of continuous control tasks. ## Update after Rebuttal After the rebuttal, I am keeping my positive score for this paper. Claims And Evidence: Most claims made in the submission are supported by clear and convincing evidence. However, the claim that this leads to 'scalable' RL remains incorrect, as scaling limits are reached very quickly (see Fig. 5). Methods And Evaluation Criteria: The methods and evaluation criteria are good, although I would have liked to see experiments in another domain or with another algorithm, as it feels somewhat repetitive of SimbaV1. Theoretical Claims: - Experimental Designs Or Analyses: Experimental design and analysis is sound. Supplementary Material: I reviewed the Appendix. Relation To Broader Scientific Literature: This paper is at the forefront of performance-based RL in continuous control tasks. Essential References Not Discussed: - Other Strengths And Weaknesses: Strengths: Strong performance on the state-based continuous control benchmarks. Well written paper. Weaknesses: This paper only feels like a minor step up from SimbaV1. I would have liked to see experiments in a different set of environments, such as pixel-based control. However, at this level of performance a step up is also hard to accomplish, so there is still a valid contribution (especially if we compare it to supervised learning papers, which make minuscule step-ups). Other Comments Or Suggestions: - Questions For Authors: Why have the authors not attempted to do a performance analysis in at least a few pixel-based MuJoCo tasks? How would the authors implement their techniques on CNNs?
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 4564, Thank you for suggesting future research direction! We respond to each of your comments below and are happy to clarify further if needed. > **Question 3.1** The claim that this leads to 'scalable' RL remains incorrect, as scaling limits are reached very quickly (See Fig. 5). We appreciate your concern. While it is true that performance on DMC-Hard saturates earlier, this is largely due to task-specific ceilings rather than architectural constraints. Most tasks in DMC-Hard (except humanoid-run) reach near-optimal scores around 4.5 million parameters. In contrast, HBench-Hard continues to benefit from increased capacity up to 17.8 million parameters, demonstrating substantial headroom for scaling. This level of parameter scalability is atypical in RL. Standard SAC or DDPG architectures often use around 2 million parameters, and larger models tend to degrade performance. In contrast, SimbaV2 scales consistently and robustly, without the need for reinitialization or tuning. Similarly, in Fig. 6, compute scaling on DMC-Hard saturates around UTD = 4 due to task limitations, but HBench-Hard continues improving up to UTD = 8. This behavior highlights SimbaV2’s stable training dynamics under both model and compute scaling, which we believe justifies its claim to scalability within the current RL landscape. > **Question 3.2** This paper only feels like a minor step up of SimbaV1. I would have liked to see experiments in a different set of environments, such as pixel-based control. > Why have the authors not attempted to do a performance analysis in at least a few pixel-based Mujoco tasks? How would the authors implement their techniques on CNN's ? Thank you for this suggestion. We agree that applying SimbaV2 to pixel-based control is a promising direction. However, extending our techniques to convolutional architectures presents unique challenges. 
In CNNs, a shared kernel operates over overlapping spatial regions, making it nontrivial to enforce hyperspherical constraints on both features and weights, as we do with MLPs. One possible approach is to project each C-dimensional fiber of the feature map and kernel onto a hypersphere, preserving the underlying normalization principle. However, this would require considerable architectural tuning, which we consider an important direction for future work. That said, we believe the core ideas behind SimbaV2, *controlling feature and parameter norms, and maintaining stable effective learning rates*, can extend to vision-based RL. We look forward to exploring this in future work. --- Rebuttal Comment 1.1: Comment: Thanks for the additional clarifications. " We appreciate your concern. While it is true that performance on DMC-Hard saturates earlier, this is largely due to task-specific ceilings rather than architectural constraints. Most tasks in DMC-Hard (except humanoid-run) reach near-optimal scores around 4.5 million parameters. In contrast, HBench-Hard continues to benefit from increased capacity up to 17.8 million parameters, demonstrating substantial headroom for scaling. " I think it would be beneficial to the paper to more explicitly mention this (Abstract, Introduction). My concerns have been solved and I will keep my (positive) score! Looking forward to further improvements in this area.
Summary: This paper proposes SimbaV2, an improved version of Simba, by replacing several key components of Simba with a scale-preserving l2-normalization (i.e., hyperspherical normalization), distributional value function approximation, reward scaling, etc. The authors present a comprehensive experimental study with 57 continuous control tasks across 4 domains, against a wide range of existing online RL methods. The experimental results demonstrate the superiority of SimbaV2 in scaling with larger networks and higher UTDs effectively, along with careful analysis of learning dynamics metrics and design choices. ## update after rebuttal I've read all the other reviewers' comments, and the authors' rebuttal provided additional experimental evidence that addressed my questions well. Therefore, I will keep the rating. Claims And Evidence: Most claims made in this paper are well supported with experimental results. The necessity or effects of the design changes “Linear → Linear + Scaler” and “Residual Connection → LERP” are not supported with direct evidence (please correct me if I missed them). Methods And Evaluation Criteria: The proposed methods are mainly for better addressing non-stationarity in observations, intermediate features, and network output (i.e., target values). The evaluation criteria are diverse in this work. The performance metrics are well normalized. The analysis metrics (including weight norm, feature norm, ELR) also make sense in the context. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: The experiments are well organized with comprehensive content, from the analysis of learning dynamics, to performance comparison, and then the ablation study of design choices. Training and inference cost are also included. Supplementary Material: I scanned the whole supplementary material, mainly checking the implementation details and learning curves.
Relation To Broader Scientific Literature: The proposed architecture has the potential to become a standard for DRL, beyond the SAC base algorithm considered in this paper. The proposed methods and the experimental study can provide a useful reference for related studies on learning under non-stationarity, e.g., continual RL and streaming/incremental RL. Essential References Not Discussed: Most essential related works are included. There are some other related papers not included: - Deep Policy Gradient Methods Without Batch Updates, Target Networks, or Replay Buffers. arXiv 2411.15370 - Adam on Local Time: Addressing Nonstationarity in RL with Relative Adam Timesteps. arXiv 2412.17113 - Improving Deep Reinforcement Learning by Reducing the Chain Effect of Value and Policy Churn. arXiv 2409.04792 Other Strengths And Weaknesses: #### Strengths - SimbaV2 significantly improves the stability of learning dynamics in terms of metrics like ELR, especially for the more challenging tasks in HBench-Hard. - SimbaV2 is free of various learning-based/optimization-based regularization for addressing the non-stationarity issue, which offers better generality and feasibility. - The comprehensive experiments can provide useful references for related studies. #### Weaknesses - The performance of SimbaV2 is based on SAC, leaving its effects on PPO and DQN-variants unknown (the authors also mentioned this). - The environments used in the experiments include only proprioceptive observations (correct me if I misunderstand this). Other Comments Or Suggestions: None Questions For Authors: 1. Are there experiments evaluating the effects of the design choices “the inverted bottleneck MLP”, “Residual Connection → LERP” and “Linear → Linear + Scaler” in this work? It seems that I did not find direct empirical evidence for these. 2. In Simba, the observation is centered and rescaled by running mean and std, and in SimbaV2, the observation is rescaled by the L2 norm.
If there were a variant of SimbaV2 that only replaced the L2-norm rescaling with running-std rescaling, how would it perform compared to SimbaV2? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer ZJka, Thank you for your constructive feedback and positive support! We respond to each of your points below and would be happy to clarify further if needed. > **Q2.1:** > The necessity or effects of the design changes “Linear → Linear + Scaler” and “Residual Connection → LERP” are not supported with direct evidence. > Are there experiments for evaluating the effects of the design choices “the inverted bottleneck MLP”, “Residual Connection → LERP” and “Linear → Linear + Scaler” in this work? Thank you for highlighting this. Below we provide both the rationale and supporting evidence for these design choices. 1. **Linear → Linear + Scaler:** This modification enables two key benefits: (i) weight projection ensures control over parameter norm and effective learning rate, and (ii) the learnable scalar allows selective emphasis of important features. A standard linear layer without projection leads to unbounded parameter growth. On the other hand, projection without a scalar severely limits representational capacity. The Linear + Scaler combination is therefore necessary to balance norm control with expressivity. 2. **Residual Connection → LERP:** Since all features in SimbaV2 are normalized to lie on a hypersphere, standard residual connections are no longer applicable. The closest analog would be fixed-$\alpha$ interpolation (e.g., $\alpha$=0.5). We conducted an ablation to compare **LERP** vs. fixed-$\alpha$ residual:

| Method | DMC-Hard | HBench-Hard |
|-------------|----------|-------------|
| LERP (learnable, $\alpha_{init}$=1/(L+1)) | 0.729 $\pm$ 0.065 | 0.946 $\pm$ 0.089 |
| Residual (fixed, $\alpha_{init}$=0.5) | 0.687 $\pm$ 0.092 | 0.843 $\pm$ 0.123 |

LERP achieves higher performance in both benchmarks. We attribute this to two factors: its learnable mixing coefficient, which enables adaptive interpolation, and its initialization ($\alpha$=1/(L+1)), which biases early training toward identity mapping, enhancing stability.
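To make the LERP-on-hypersphere idea concrete, here is a minimal sketch (plain Python, illustrative names; not the SimbaV2 implementation) of interpolating between the incoming feature and the block output and re-projecting onto the unit sphere:

```python
import math

def l2_normalize(v, eps=1e-8):
    """Project a vector onto the unit hypersphere."""
    n = math.sqrt(sum(x * x for x in v)) + eps
    return [x / n for x in v]

def lerp_block(h, f_out, alpha):
    """Hyperspherical LERP: mix the incoming feature h with the block output
    f_out using a (learnable, here fixed) coefficient alpha, then renormalize."""
    mixed = [(1 - alpha) * a + alpha * b for a, b in zip(h, f_out)]
    return l2_normalize(mixed)

h = l2_normalize([1.0, 0.0])
f_out = l2_normalize([0.0, 1.0])
# alpha = 1/(L+1) with L=3 blocks biases early training toward identity mapping
out = lerp_block(h, f_out, alpha=0.25)
```

With a small alpha the output stays close to the identity path while remaining exactly on the sphere, which matches the stability intuition in the ablation above.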
> **Q2.2:** > There are some other related papers not included: > - Deep Policy Gradient Methods Without Batch Updates, Target Networks, or Replay Buffers. arXiv 2411.15370 > - Adam on Local Time: Addressing Nonstationarity in RL with Relative Adam Timesteps. arXiv 2412.17113 > - Improving Deep Reinforcement Learning by Reducing the Chain Effect of Value and Policy Churn. arXiv 2409.04792 Thank you for pointing out these relevant works. We appreciate the suggestions and will include them in the related work section. We have also added the following relevant works: - Is High Variance Unavoidable in RL?, Bjorck et al., ICLR 2022 - Understanding, Predicting, and Better Resolving Q-Value Divergence in Offline RL, Yue et al., NeurIPS 2023 - Mixtures of Experts Unlock Parameter Scaling for Deep RL, Ceron et al., ICML 2024 - Dissecting Deep RL with High Update Ratios, Hussing et al., RLC 2024 - Don’t Flatten, Tokenize!, Sokar et al., ICLR 2025 > **Q2.3:** > The performance of SimbaV2 is based on SAC, leaving its effects on PPO and DQN-variants unknown. Thank you for highlighting this. While extending to PPO and DQN is a valuable direction, these algorithms present challenges: PPO is inherently on-policy and difficult to scale to high update-to-data (UTD) ratios, while DQN is restricted to discrete action spaces. To assess the generality of SimbaV2 beyond SAC, we conducted additional experiments using DDPG, a widely adopted off-policy algorithm for continuous control:

| Method | DMC-Hard | HBench-Hard |
|----------|----------|-------------|
| SimbaV2 | 0.636 $\pm$ 0.087 | 0.693 $\pm$ 0.119 |
| Simba | 0.649 $\pm$ 0.089 | 0.445 $\pm$ 0.101 |
| MLP | 0.149 $\pm$ 0.034 | 0.115 $\pm$ 0.047 |

In DMC-Hard, SimbaV2 performs competitively with Simba, both significantly outperforming the MLP baseline. In the more challenging HBench-Hard benchmark, SimbaV2 shows clear improvements over Simba, indicating enhanced stability and generalization beyond SAC.
> **Q2.4:** > In Simba, the observation is centered and rescaled by running mean and std, and in SimbaV2, the observation is rescaled by the L2 norm. If there is a variant of SimbaV2 that only replaces the L2-norm rescaling with running std rescaling, how would it perform? We appreciate the opportunity to clarify. There appears to be a misunderstanding: both Simba and SimbaV2 use running mean and standard deviation (RSNorm) for input normalization. The key difference is architectural. SimbaV2 replaces LayerNorm with L2 normalization for internal features, not for input observations. This change targets internal stability and effective learning rates, while the input normalization strategy remains unchanged.
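For readers unfamiliar with running-statistics input normalization of the kind described above, a minimal sketch (assuming Welford-style updates; hypothetical names, not the authors' code) looks like:

```python
class RunningStatNorm:
    """Running mean/std observation normalization (RSNorm-style).
    Welford-style online updates; a toy sketch for one observation stream."""

    def __init__(self, dim, eps=1e-8):
        self.count = 0
        self.mean = [0.0] * dim
        self.m2 = [0.0] * dim  # sum of squared deviations
        self.eps = eps

    def update(self, obs):
        self.count += 1
        for i, x in enumerate(obs):
            delta = x - self.mean[i]
            self.mean[i] += delta / self.count
            self.m2[i] += delta * (x - self.mean[i])

    def normalize(self, obs):
        out = []
        for i, x in enumerate(obs):
            var = self.m2[i] / max(self.count, 1)
            out.append((x - self.mean[i]) / ((var + self.eps) ** 0.5))
        return out

norm = RunningStatNorm(dim=1)
for o in ([0.0], [2.0], [4.0]):
    norm.update(o)
z = norm.normalize([2.0])  # 2.0 equals the running mean, so z[0] is ~0
```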
Summary: The paper introduces SimbaV2, an RL architecture that stabilizes training and improves scalability through hyperspherical normalization and distributional value estimation with reward scaling. Built on Soft Actor-Critic (SAC), it achieves state-of-the-art performance across 57 continuous control tasks and scales effectively with increased model size and compute. Experiments confirm its stability, outperforming existing RL methods without requiring periodic weight reinitialization. Claims And Evidence: The paper provides strong empirical evidence to support its claims. The experiments are well-structured, covering scalability, stability, and performance comparisons across 57 continuous control tasks. The ablation studies confirm the importance of hyperspherical normalization and reward scaling, reinforcing the paper’s core contributions. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-suited for the problem. Hyperspherical normalization and reward scaling directly address RL's instability and overfitting issues, making them relevant for scaling RL effectively. The evaluation is thorough, using continuous control tasks across four standard benchmarks (MuJoCo, DMC, MyoSuite, HumanoidBench), ensuring broad applicability. Comparisons with strong baselines and ablation studies further validate the approach. Theoretical Claims: This paper does not contain theoretical proofs. Experimental Designs Or Analyses: I checked all the experimental designs and analyses, and there are no issues. Supplementary Material: I checked all the appendices. Relation To Broader Scientific Literature: The paper builds on prior work in RL scalability, regularization techniques, and normalization methods. It extends ideas from weight decay, dropout, and layer normalization, commonly used in supervised learning, by introducing hyperspherical normalization to stabilize RL training. 
The distributional value estimation approach aligns with prior work on distributional RL, enhancing gradient stability. Additionally, the paper addresses challenges seen in periodic weight reinitialization methods by providing an alternative that scales without overfitting. It contributes to the broader discussion on scaling laws in RL, challenging the notion that increasing model size and computation necessarily leads to instability. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper's experimental design is a strong point, with thorough evaluations across multiple benchmarks and well-structured ablation studies that clearly isolate the contributions of each component. The writing is also clear and well-organized, making it easy to follow the ideas and their significance. I enjoyed reading this paper. Other Comments Or Suggestions: It would be helpful if the authors could provide more theoretical justification or intuitive explanations for why previous methods struggle to scale while SimbaV2 does. Specifically, a deeper discussion on why hyperspherical normalization stabilizes training and why it eliminates the need for weight reinitialization would strengthen the paper’s contributions. Could the authors disclose the computational resources used for this project, including hardware specifications and training time? This information would be valuable for the community to better understand the practicality and scalability of the approach. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer R7VW, Thank you for your thoughtful and constructive feedback. We address your concerns in detail below and would be happy to clarify any remaining questions. > **Question 1.1** It would be helpful if the authors could provide more theoretical justification or intuitive explanations for why previous methods struggle to scale while SimbaV2 does. > Why does hyperspherical normalization stabilize training and eliminate the need for weight reinitialization? We appreciate this insightful question. Our use of hyperspherical normalization is motivated by empirical observations in SimbaV1, where we identified unstable growth in key quantities—including feature norm, parameter norm, gradient norm, and effective learning rate per layer (Fig. 4). These instabilities are known to impair generalization and reduce plasticity in RL. Specifically: 1. **Feature Norm:** Prior work [1] shows that TD loss induces an implicit bias toward growing feature norms, which can cause overfitting and reduced feature rank. This is often driven by a few dominant dimensions, leading to loss of plasticity [2]. Techniques such as feature norm regularization [1] and hyperspherical normalization [3,4] were proposed to mitigate this. 2. **Parameter Norm and Effective Learning Rate:** As parameter norms grow, the effective learning rate (gradient norm divided by parameter norm) declines, which hampers gradient flow and learning dynamics [5,6]. In SimbaV1, we observed this specifically in the encoder, where the effective learning rate collapses over time (Fig. 4e). Simply increasing the global learning rate is not viable, as other layers (e.g., the predictor) may already be operating at high effective rates. SimbaV2 addresses these challenges through the following design choices: 1. **Feature Norm:** Hyperspherical normalization to control feature norm growth. 2. **Parameter Norm:** Weight projection onto a hypersphere to control parameter norm. 3. 
**Gradient Norm:** Distributional critic with reward scaling to regulate gradient norm. Together, these mechanisms ensure stable learning dynamics (i.e., stable effective learning rate across layers) and sustained plasticity, thereby eliminating the need for weight reinitialization. We recognize that our original draft did not clearly explain these intuitions and will revise the introduction accordingly to highlight these design motivations. - [1] DR3: Value-Based Deep Reinforcement Learning Requires Explicit Regularization, Kumar et al, ICLR'22. - [2] Understanding and Preventing Capacity Loss in Reinforcement Learning, Lyle et al, ICLR'22. - [3] Is high variance unavoidable in rl? a case study in continuous control, Bjorck et al, ICLR'22. - [4] Dissecting Deep RL with High Update Ratios: Combatting Value Divergence, Hussing et al, RLC'24. - [5] Loss of plasticity in deep continual learning, Dohare et al, Nature'24. - [6] Normalization and effective learning rates in reinforcement learning, Lyle et al, NeurIPS'24. > **Question 1.2** Could the authors disclose the computational resources used for this project, including hardware specifications and training time? All experiments were conducted using NVIDIA RTX 3090 GPUs and an AMD EPYC 7402 24-Core Processor. For wall-clock time, a single run of SimbaV2 with UTD=2 typically takes 1.6 hours. This varies by environment: around 1.0 hour for simpler tasks like Cartpole (no collisions, minimal joints) and up to 2.5 hours for more complex tasks like Dog (many joints and intricate interactions). --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, which resolved my concerns. I have increased the rating.
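As a side note, the per-layer effective learning rate discussed in Question 1.1 (gradient norm divided by parameter norm) is a simple diagnostic to compute; a minimal sketch with flat lists standing in for a layer's tensors (illustrative, not the authors' code):

```python
def effective_learning_rate(grads, params, base_lr=1.0, eps=1e-12):
    """Per-layer effective learning rate: gradient norm divided by parameter
    norm, scaled by the base learning rate."""
    gnorm = sum(g * g for g in grads) ** 0.5
    pnorm = sum(p * p for p in params) ** 0.5
    return base_lr * gnorm / (pnorm + eps)

# As the parameter norm grows while gradients stay the same, the ELR shrinks,
# which is the collapse observed in the SimbaV1 encoder:
elr_small = effective_learning_rate([0.1, 0.1], [1.0, 1.0])
elr_large = effective_learning_rate([0.1, 0.1], [10.0, 10.0])
```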
Hardware and Software Platform Inference
Accept (poster)
Summary: This manuscript proposes a method called Hardware and Software Platform Inference (HSPI), which aims to identify the GPU and software stack used for machine learning models. The authors introduce 2 methods: 1. HSPI with Border Inputs (HSPI-BI), which builds inputs that lie at the decision boundary of the model. 2. HSPI with Logit Distributions (HSPI-LD), which uses the distribution of output logits to figure out the hardware environment. To evaluate these methods, the authors employed vision models and language models in both white-box and black-box settings. The results showed that the proposed method HSPI-LD can identify GPU types and data types. Claims And Evidence: No; the proposed method is called "hardware and software platform inference", a name that I think is overly broad for the manuscript, which mostly focuses on GPU types and data types (like int8, fp16, etc.) Also, I was wondering about the scalability of the proposed methods. Will they be easily applied to other hardware configurations and software stacks (like different machine learning compilers)? Methods And Evaluation Criteria: Yes, most of them make sense. I feel several experiments' results are not fully and clearly discussed. Figure 5: different logits showed similar results; what do the authors mean by "Figure 5 illustrates the kernel density estimation of various quantization methods where the shapes imply the obvious differences"? Figure 6: what does it imply? Theoretical Claims: n/a Experimental Designs Or Analyses: Yes. I feel several experiments' results are not fully and clearly discussed. Figure 5: different logits showed similar results; what do the authors mean by "Figure 5 illustrates the kernel density estimation of various quantization methods where the shapes imply the obvious differences"? Figure 6: what does it imply? Supplementary Material: Yes, I checked the quantizers folder.
Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: 1. Figure 5: different logits showed similar results; what do the authors mean by "Figure 5 illustrates the kernel density estimation of various quantization methods where the shapes imply the obvious differences"? 2. Figure 6: what does it imply? 3. I was wondering about the scalability of the proposed methods. Will they be easily applied to other hardware configurations and software stacks (like different machine learning compilers)? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for offering valuable questions. We will address them one by one. > # Claims And Evidence > No, the proposed method is called "hardware and software platform inference", which I doubt is kind of big regarding the manuscripts, which is mostly focusing on GPU types and data types (like int8, fp16, etc.) ... - Our method does go beyond GPUs and data formats. In Tab 2 and Tab 6, we show that **different kernel implementations, runtime libraries, and parallelism strategies** each produce unique floating-point “fingerprints”. As stated in line 420 page 8 and Tab 10, in practice, we can merge labels and focus on exactly the configurations that we care about—making the approach scalable to new hardware and software stacks. - Please refer to our answer to Q3 for details. > # Q1. > Figure 5, different logits showed similar results, what do the authors mean "Figure 5 illustrates the kernel density estimation of various quantization methods where the shapes imply the obvious differences"? Please zoom in on Fig. 5. You may notice several places where the kernel densities of different data types clearly do not overlap, for example, the density values at logit values -1.3 and 0.65, and the whole INT8 curve. We include Fig. 5 to help readers intuitively understand how SVMs capture differences in the logit distribution. > # Q2. > Figure 6, what does it imply? Figure 6 visualizes the difference in logit bit distribution between RTXA6000 and A100. We send identical inputs to the same FP16 model checkpoint deployed on RTX6000 (orange) and A100 (blue), and collect the outputs. Then we pick the first eight FP32 logits of each output sample (8*32=256 bits per output sample), plot a histogram of the bit counts (the number of ones at that bit) of these 256 bits over all output samples, and cut off the minimum bit count of RTX6000 and A100 for each bit to show only the difference.
This implies that although we send the same inputs to the same model checkpoint, the bit distributions of the outputs from the two GPUs are significantly different, even visible to the human eye. We include Fig. 6 to help readers intuitively understand why HSPI-LD is possible. > # Q3. > I was wondering about the scalability of the proposed methods. Will they be easily applied to other hardware configurations and software stacks (like different machine learning compilers)? Yes, we answer this question from three aspects: - Hardware configurations - Yes, as we show in Tab R1, HSPI can be easily applied to AMD’s CDNA3 architectures and even **ASIC accelerators** like Amazon’s Inferentia. - In the paper and Tab R2, we already show HSPI works across various GPUs like H100, A100, RTX6000, L40S, L40, A40, RTX8000, RTX2080Ti, etc. - We mainly run on NVIDIA GPUs in the manuscript because setting up and running the experiments is time consuming. - Software stacks - We include results **with and without an ML compiler** in the manuscript. Tab 2 runs the LLM inference in eager mode, while Tab 6 applies a series of optimizations including TorchInductor (torch.compile), cuda graph, etc. - In Tab.R3, we additionally show HSPI can differentiate **different ML compilers (TensorRT-LLM vs TorchInductor)**. - Scale up - In Tab.6, we run an LLM serving engine with torch.compile, cuda graph, RadixAttention, and dynamic batching enabled. HSPI still differentiates different GPUs, and even kernel implementations, data types, and parallelism strategies. - In Tab.R1, we additionally show HSPI still works after we scale up the setup to an **8-GPU system + LLM serving engine + 70B parameter model**. We know it is not possible to provide results for all combinations covering all hardware, but we tried our best to include as varied a set of platform setups as possible.
RTab.1: HSPI-LD results for Amazon, AMD, NVIDIA (Llama-3.1-70B-it)

|**Vendor**|**Hardware**|**Software**|**DType**|**Accuracy**|**F1**|
|-|:-:|:-:|:-:|:-:|:-:|
|Amazon|Inferentia|NKI|BF16|0.968|0.924|
||||FP16|0.988|0.908|
|AMD|MI300X|ROCm|BF16|1.0|1.0|
||||FP16|0.906|0.913|
|NVIDIA|H100|CUDA|BF16|1.0|1.0|
||||FP16|0.922|0.915|
||||Avg|0.964|0.943|

RTab.2: HSPI-LD results for large systems (Llama-3.1-70B-it)

|**Vendor**|**Hardware (Arch)**|**Num devices**|**DP,TP**|**DType**|**Accuracy**|**F1**|
|--|:---:|:---:|:---:|:---:|:---:|:---:|
|AMD|MI300X (CDNA3)|2|DP1TP2|BF16|1.0|1.0|
|||||FP16|0.961|0.957|
|NVIDIA|H100 (Hopper)|4|DP2TP2|BF16|1.0|1.0|
|||||FP16|0.930|0.941|
||L40 (Ada)|8|DP1TP8|BF16|1.0|1.0|
|||||FP16|0.969|0.958|
||A40 (Ampere)|8|DP1TP8|BF16|1.0|1.0|
|||||FP16|0.922|0.925|
|||||Avg|0.973|0.973|

RTab.3: HSPI-LD results for ML compilers (Llama-3.1-70B-it)

|**GPU**|**ML Compiler**|**DTypes**|**Accuracy**|**F1**|
|-|:-:|:-:|:-:|:-:|
|H100|TensorRT-LLM|BF16|0.984|0.927|
|||FP16|0.947|0.944|
||TorchInductor|BF16|1.0|1.0|
|||FP16|0.930|0.941|
|||Avg|0.965|0.953|

Please let us know if you have further questions :)
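The Fig. 6 bit-count procedure described in the reply above (first eight FP32 logits per output sample, 8*32=256 bits, per-position counts of ones across samples) can be sketched as follows; this is a reconstruction from the description, not the authors' code:

```python
import struct

def float_bits(x):
    """Return the 32 bits of an FP32 value, most significant bit first."""
    (i,) = struct.unpack(">I", struct.pack(">f", x))
    return [(i >> (31 - k)) & 1 for k in range(32)]

def bit_count_histogram(samples):
    """For each output sample, take its first eight FP32 logits (8 * 32 = 256
    bits) and count, per bit position, how many ones appear across samples."""
    counts = [0] * 256
    for logits in samples:
        bits = []
        for x in logits[:8]:
            bits.extend(float_bits(x))
        for pos, b in enumerate(bits):
            counts[pos] += b
    return counts

samples = [[1.0] * 8, [2.0] * 8]  # two toy "outputs", eight logits each
counts = bit_count_histogram(samples)
```

Two such histograms, one per GPU, can then be compared position by position, which is what the figure visualizes after subtracting the shared minimum per bit.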
Summary: The paper introduces Hardware and Software Platform Inference (HSPI), a novel method for identifying the underlying GPU architecture and software stack of machine learning models based on their input-output behavior. HSPI uses computational differences across various GPUs and software environments to detect the specific device utilized for inference. The authors present two techniques, HSPI with Border Inputs (HSPI-BI) and HSPI with Logit Distributions (HSPI-LD), both of which demonstrate high accuracy rates in white-box and black-box settings, while discussing the limitations of the methods. Claims And Evidence: The experiments support the submission, showing high accuracy for different algorithms, kernels, datatypes, and sharding techniques. Methods And Evaluation Criteria: The proposed method and evaluation make sense for the problem, with various benchmark datasets for vision and language tasks strengthening the evaluation criteria. The datasets used are also standard datasets in the corresponding areas. Theoretical Claims: No proofs were presented in the paper. Experimental Designs Or Analyses: The experimental designs appear sound, with plenty of experiments. One criticism would be the model size limitation presented in the experiments (see Other Strengths And Weaknesses). Supplementary Material: The supplementary material includes the code used in the experiment. The details were not checked carefully. Relation To Broader Scientific Literature: The paper presents a new problem of verifying the hardware and software of ML models. Prior work on hardware detection methods has lower accuracy rates. Essential References Not Discussed: No essential unreferenced work was noticed. Other Strengths And Weaknesses: Strength: 1. Novel approach to identifying hardware and software platforms. 2. Clear organization and well-written. Weakness: 1.
The paper states that limited GPU memory would constrain the ability to scale to larger language models; even with HSPI-LD the experiments were conducted on rather small LLMs. Yet, the motivation is more about checking whether a very large model is indeed deployed instead of using a smaller, distilled model. (A possible solution could be combining HSPI with distributed methods.) 2. The paper should discuss the computation efficiency of the method. Other Comments Or Suggestions: No other suggestions. Questions For Authors: 1. Can other common metrics, such as the wall clock time for inference, be useful for identifying the hardware and software stack? 2. Is it possible that different combinations of hardware and software lead to the same prediction? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the valuable suggestions and questions. We would like to address them one by one. > # W1. Large models and systems > The paper states that limited GPU memory would constrain the ability to scale to larger language models; even with HSPI-LD the experiments were conducted on rather small LLMs. Yet, the motivation is more about checking whether a very large model is indeed deployed instead of using a smaller, distilled model. (A possible solution could be combining HSPI with distributed methods.) Yes, large scale experiments are important for evaluating HSPI. - In Tab 6 on page 8, we distributed Qwen-2.5-14B across four-GPU systems using the popular LLM serving engine SGLang and enabled a series of optimizations including custom cuda kernels, RadixAttention, torch.compile, cuda graph, smart request batching, etc. HSPI works well and even **differentiated different parallelism strategies like DP2TP2 vs DP4TP1**. We assume HSPI-LD will work on larger distributed systems because the communication between devices does not introduce noise into the logit distribution. - To verify this, we ran a new group of experiments where **we distributed Llama-3.1-70B across larger multi-GPU systems of high-end GPUs using SGLang**. As shown in the following table, HSPI-LD still achieves an average accuracy of over 97%.

RTab.2: HSPI-LD results for large systems (Llama-3.1-70B-it)

| Vendor | Hardware (Arch) | Num devices | DP,TP | DType | Accuracy | F1 |
|---|:---:|:---:|:---:|:---:|:---:|:---:|
| AMD | MI300X (CDNA3) | 2 | DP1TP2 | BF16 | 1.000 | 1.000 |
| | | | | FP16 | 0.961 | 0.957 |
| NVIDIA | H100 (Hopper) | 4 | DP2TP2 | BF16 | 1.000 | 1.000 |
| | | | | FP16 | 0.930 | 0.941 |
| | L40 (Ada) | 8 | DP1TP8 | BF16 | 1.000 | 1.000 |
| | | | | FP16 | 0.969 | 0.958 |
| | A40 (Ampere) | 8 | DP1TP8 | BF16 | 1.000 | 1.000 |
| | | | | FP16 | 0.922 | 0.925 |
| | | | | Avg | 0.973 | 0.973 |

> # W2.
Computation efficiency > The paper should discuss the computation efficiency of the method. Yes, we will add the following discussion into the revised version. In our experiments, HSPI-LD is the most costly method, and the cost of HSPI-LD mainly consists of three parts: - Collecting training samples: Collecting training samples is the most computationally expensive and slow part. To collect samples, we deploy models on a specific HW-SW setup, run model inference, and dump output logits. The cost can be estimated as `num of platform options * num of samples * sequence length * cost of a forward pass`. We spent over 600 GPU hours on this. - Training HSPI classifier: As stated in Sec 4.3, we train $N(N-1)/2$ binary SVM classifiers to perform HSPI's $N$-class classification. Thus the cost is proportional to `N(N-1)/2 * num of samples`. We use sklearn to train SVMs on CPUs, which is fast since the number of training iterations is around 1000. Usually the training of all binary classifiers takes less than 20 minutes. - Evaluating HSPI classifier: The evaluation feeds output logits to all binary classifiers, thus the cost is also proportional to `N(N-1)/2 * num of samples`. The prediction is also on CPU and its cost is negligible compared to collecting training samples. Usually the prediction takes less than 5 minutes. For HSPI-BI, the cost of training a batch of border inputs is twice that of normal model training because we need to run the forward-backward pass for a pair of platforms. Since we mainly run this method on CNN models, the GPU hours were much smaller than for HSPI-LD. > # Q1. > Can other common metrics, such as the wall clock time for inference, be useful for identifying the hardware and software stack? Yes, it is possible. Wall-clock time can be used for creating hardware fingerprints, but we assume wall-clock time is less reliable in this context.
Deep learning serving engines like TensorRT, vLLM and SGLang may use dynamic batching strategies, which means the wall clock time changes with the request arrival rates. For example, the daytime wall clock time fingerprint is different from the one at night because the server is busier during the day. > # Q2. > Is it possible that different combinations of hardware and software lead to the same prediction? Yes, it is possible. This is one limitation we discussed in Sec 6.3 and A.4. During experiments we found that when the software stack is identical, HSPI cannot differentiate RTX8000 and RTX2080Ti. This is because these two GPUs both have NVIDIA compute capability 7.5 (Turing architecture) and a very similar number of Tensor Cores (around 550), so they fall into the same EQC when the software stack is the same. For more details please refer to Sec 6.3 and A.4. Please let us know if you have further questions :) --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I think the work is solid, and it's novel in identifying the hardware and software based on the input-output behavior. I will keep my original score, partially due to my unfamiliarity with this line of work.
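The one-vs-one classifier bookkeeping from the efficiency discussion above ($N(N-1)/2$ binary SVMs whose decisions are combined by voting) can be sketched with plain Python; the label set and the trivial "winner" map are toy stand-ins for the trained SVMs:

```python
from itertools import combinations

def one_vs_one_pairs(classes):
    """Enumerate the N(N-1)/2 class pairs, one binary classifier per pair."""
    return list(combinations(classes, 2))

def majority_vote(pair_winners):
    """Combine binary decisions into an N-class prediction by majority vote.
    `pair_winners` maps each (a, b) pair to the label that classifier picked."""
    tally = {}
    for winner in pair_winners.values():
        tally[winner] = tally.get(winner, 0) + 1
    return max(tally, key=tally.get)

classes = ["H100", "A100", "RTX6000", "L40S"]  # hypothetical label set
pairs = one_vs_one_pairs(classes)  # 4 * 3 / 2 = 6 binary classifiers
# Pretend every classifier involving "A100" votes for it; others pick pair[0]:
winners = {p: ("A100" if "A100" in p else p[0]) for p in pairs}
pred = majority_vote(winners)
```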
Summary: The paper presents an interesting idea where a client can infer the hardware and software platform that was used for model inference based on its input and output behavior. It banks on the observation that there are inherent differences between different GPUs and software stacks. The idea has the potential to allow clients to verify the actual hardware that was used for inference against malicious service providers. Claims And Evidence: Besides some questions about the robustness of the approach, I think the paper seems convincing. Methods And Evaluation Criteria: Besides the fact that the evaluated hardware and software platforms are not diverse enough, it seems the paper tried to evaluate across various inference scenarios. Theoretical Claims: The given equations seem to make sense. There are no theoretical proofs. Experimental Designs Or Analyses: The approach seems valid. However, there are some questions regarding its robustness and scalability. Supplementary Material: I reviewed the appendix. I looked at the code that is provided. However, I did not run the code or scrutinize it. Relation To Broader Scientific Literature: The paper presents a simple yet insightful extension to the line of work that uses input/output data to infer the model and the hardware architecture, and to various defense mechanisms against these side-channel attacks and IP infringement. Essential References Not Discussed: The paper does not seem to shed enough light on the line of work where side channels are exploited to infer the model and the hardware. 1. Hua, Weizhe, Zhiru Zhang, and G. Edward Suh. "Reverse engineering convolutional neural networks through side-channel information leaks." Proceedings of the 55th Annual Design Automation Conference. 2018. 2. Gongye, Cheng, et al. "Side-channel-assisted reverse-engineering of encrypted DNN hardware accelerator IP and attack surface exploration." 2024 IEEE Symposium on Security and Privacy (SP).
IEEE, 2024. Especially, given that the 2nd paper mentioned above suggests that side channels can be exploited to reverse-engineer the hardware IP, it seems there are some similarities. It would benefit the reader if the authors could describe how the approach and applications may differ. Other Strengths And Weaknesses: . Other Comments Or Suggestions: The paper presents a simple yet nice idea to infer hardware and software platforms during inference. I think the paper provides a large set of experiments for various scenarios. I would like to stay positive about the paper. However, it would be great to see how the proposed HSPI approach would scale to a larger set of hardware and software platforms. Also, I would love to understand the potential ways to make this more robust. Questions For Authors: 1. I am very curious how a client would differentiate between "deviations" from hardware and "deviations" from software. In practice, different generations of hardware that have different numerical behavior may return the same output due to software optimizations. It seems that 4.1 mentions that these variations are included in H. Can you provide more details in this respect? 2. As the paper states in A.5, the model inference could have introduced some noise/perturbations (consider the fact that different inference runs may return different outputs despite the same model). In that case, how would the client filter out that noise to perform HSPI in a robust manner? 3. It would be interesting to investigate what features were used to distinguish different precisions, kernels, hardware, etc. Interpretation of the HSPI model would provide some interesting insights. 4. It would be more interesting to see experimental results for a more variegated class of accelerators (not just GPUs) considering their prevalence (TPUs, Inferentia, MTIA, Maia100). 5.
For the black-box access only scenario in 4.1, the client might not have the full picture considering that service providers may have appended prompts for LLM inference. How is HSPI impacted, and what mitigations might make HSPI more robust? 6. Service providers may limit inference requests while the client tries to infer the hardware; how would this be distinguished from a DoS attack? 7. How does the approach scale to a larger set of hardware and software platforms? It seems like there is potential for a significant drop in the accuracy of HSPI. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for offering valuable suggestions and questions. We will address them one by one. Due to word limits, we uploaded **three new result tables** here: https://imgur.com/a/VbONCoU > # E1. Essential References Not Discussed Our work is different from these two papers in terms of goals, threat models (methods), and generalizability. Goals - Ref 1 extracts information about CNN models, e.g., whether the model is AlexNet or SqueezeNet, from an FPGA-based CNN accelerator, by analyzing memory access patterns. - Ref 2 predicts information about the encrypted IPs on an FPGA-based DNN accelerator. They also recover characteristics of model architectures and extract model parameters. - Our work aims to predict information about the SW-HW stack used for serving large deep learning models, including compiler, data types, GPU arch, parallelism strategy, etc. **We do not aim to predict which model** is being served. Threat models - Refs 1 and 2 assume side-channel information about the HW, like off-chip memory access patterns, electromagnetic traces, or schematics, is accessible. - We assume the SW-HW platform serves DNNs in the cloud and **we only have access to the requests and responses via the service provider’s API**. Generalizability - Refs 1 and 2 test their methods on a specific FPGA accelerator. - We tried our best to test the generalizability of HSPI across model families, data types, GPU archs, parallelism strategies, etc. We will include these two papers in our related work section and discuss the differences there. > # Q1 > I am very curious how a client would differentiate between "deviations" from HW and "deviations" from SW ... One can differentiate deviations caused by HW and SW because they will be in different EQCs, and we can enumerate all of them by going through all HW and SW combinations. 
However, in Sec 6.3 and A.4 we do get the same output from RTX8000 and RTX2080Ti (they both have NVIDIA compute capability = 7.5 and a very similar number of Tensor cores, ~550) when using the same SW stack, so these two combinations are left in the same EQC. Please refer to 6.3 and A.4 for more details. > # Q2 > As the paper states in A.5, the model inference could have introduced some noise/perturbations ... One would need to increase the number of queries sent by HSPI to capture statistics in different HW and SW settings (the cost increases, too). On the other hand, the introduced noise reduces the model performance, e.g., the LLM generates low-quality texts. > # Q3 > It would be interesting to investigate what features were used to distinguish different precision, kernel, HW, ... Our method is based on the EQC theory explained in Sec 2. Limited by rebuttal word counts, we would recommend [this paper](https://openreview.net/forum?id=6zyFgr1b8Q), in which they present a more detailed analysis identifying how architectural choices impact computational stability and precision deviations. > # Q4 > It would be more interesting to see experimental results for a more variegated class of accelerators ... - Yes, we are also curious about this. We believe our observation on NVIDIA GPUs still holds on other accelerators because there may be more differences captured by HSPI classifiers, such as different accumulator lengths in compute units. Unfortunately, limited by resources, we are not able to test HSPI on all these platforms. - We ran new experiments including Amazon Inferentia and AMD Instinct MI300X (**RTab.1 in the link**). HSPI still successfully differentiates them (avg.acc=96.4%). > # Q5 > For the black-box access only scenario in 4.1, client might not have the full picture considering that the service providers may have appended prompts ... - During experiments, we already prepend chat prompts before our request. 
For example, each of our Qwen-2.5 inputs concatenates “You are Qwen, created by Alibaba Cloud. You are a helpful assistant” and our query “Please generate <num_random_words> random words …”. We find that using these simple chat prompts actually helps to generate more random outputs. - For complex scenarios, HSPI can be combined with prompting tricks like jailbreaking to be more robust. > # Q6 > Service providers may limit the inference requests ... DoS attack. Commercial inference services are designed to support high‑throughput access. Moreover, HSPI does not need to collect all the responses in a very short period, so request rates of HSPI are lower than the limits set by service providers, and orders of magnitude lower than DoS attacks. For example, in the Tab 6 experiments, we sent 6 requests/sec, and our method collects 256 inference queries in around 40 minutes, which is comparable to normal application workloads. We can also distribute queries across multiple accounts in case of strict rate limits. > # Q7 > How does the paper scale to a larger set of HW and SW platforms... We ran new large-scale experiments (**RTab2** in the link) and HSPI still works well. Please refer to our answer to Reviewer A46R.
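The EQC-based distinction described in the answer to Q1 can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: platform names, probe outputs, and the function are all invented for the example; the only assumption is that platforms producing bit-identical outputs on the probe inputs cannot be told apart.

```python
from collections import defaultdict

def equivalence_classes(platforms, outputs):
    """Group hardware/software combinations whose model outputs are
    identical on a shared set of probe inputs; such combinations form
    one equivalence class (EQC) and cannot be distinguished by the probes."""
    classes = defaultdict(list)
    for platform, out in zip(platforms, outputs):
        # Identical outputs -> same dictionary key -> same EQC.
        classes[tuple(out)].append(platform)
    return list(classes.values())

# Hypothetical probe outputs for four platform stacks.
platforms = [("A100", "fp16"), ("RTX8000", "fp16"),
             ("RTX2080Ti", "fp16"), ("A100", "fp32")]
outputs = [[0.12, 0.88], [0.13, 0.87], [0.13, 0.87], [0.11, 0.89]]

eqcs = equivalence_classes(platforms, outputs)
# Here RTX8000 and RTX2080Ti share an EQC, mirroring the rebuttal's
# observation in Sec 6.3 that they produce the same outputs.
```

Enumerating all HW/SW combinations and grouping them this way is what lets deviations from hardware and deviations from software end up in separate classes whenever any probe separates them.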
Summary: This paper introduces Hardware and Software Platform Inference (HSPI), which is a method for identifying the hardware and software stack based on the input-output behavior of machine learning models. The proposed method leverages the inherent differences of various GPU architectures and compilers to distinguish between different GPU types and software stacks. Experiments show that in a white-box setting the method can distinguish between different GPUs with between 83.9% and 100% accuracy; in a black-box setting the method can achieve results that are up to 3 times higher than random guess accuracy. ## update after rebuttal Questions are answered, I remain positive about this work and would like to keep my score. Claims And Evidence: 1. HSPI is possible because of the different characteristics of hardware and software configurations -- this is supported by the analysis in Section 2 and examples shown in Figure 2, Figure 3 and Figure 5. 2. HSPI can effectively distinguish different GPUs with high accuracy in both white-box and black-box settings -- this is supported by results demonstrated in Tables 1-4. Methods And Evaluation Criteria: Yes; the authors proposed to use different combinations of models, GPUs, kernels, and quantization methods to conduct white-box and black-box experiments against HSPI, and collect accuracy / F1 scores to show its effectiveness. Details are demonstrated in Tables 1-6. Theoretical Claims: N/A. Experimental Designs Or Analyses: 1. The factors that make HSPI possible, discussed in Section 2, make a lot of sense and demonstrate good insight into the low-level implementation of machine learning models. 2. In both white-box and black-box experiments, the authors compare the identified quantization method, GPU type, and kernels used against the ground truth, and report accuracy / F1 scores. The scores demonstrate the method's capability to identify the underlying hardware and software choices with high accuracy. 
Supplementary Material: The authors shared their code for training the classifier and running inference. We don't have the logit dataset to run, but it looks legit. Relation To Broader Scientific Literature: This paper is based on the ideas from Schlögl et al. (2023), which targeted CPU identification based on the same equivalence classes. Going further, the authors considered computational deviations, focused on GPUs in both white-box and black-box settings, and proposed an effective solution to identify the underlying hardware / software stack with high accuracy. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: This paper provides a fresh perspective on the current model-hosting hardware and software stack, especially considering the quantization methods and white-box vs. black-box scenarios. Other Comments Or Suggestions: N/A. Questions For Authors: 1. Given the current method for hardware and software platform inference, how can the stack be kept from being exposed to users? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for offering valuable suggestions and questions. We would like to address them one by one. > S1. The authors shared their code to training the classifier and run inference. We don't have the logit dataset to run but it looks legit. - We know collecting the logits from various combinations of `[model checkpoint, hardware stack, software stack]` is resource-consuming, so we will open-source the logit datasets once the paper is accepted. - The supplementary materials also include the code for producing the logit dataset. For example, in line 227 of `llm_logits_svm.py`, the function `create_logits_dataset` collects logits for training the classifier. > Questions For Authors: > Q1. Given the current method for hardware and software platform inference, how to avoid the stack to be exposed to users? There are several possible ways, but they may have side effects on user experience like lower generation quality and longer latency. - In Appendix 5, we suggested random bit flips and adding random noise as potential ways of mitigation. For further explanation, please refer to Appendix 5 on page 13. - Furthermore, adding other forms of randomization, such as reordering arithmetic randomly or switching between multiple compilation kernels, would make our method more expensive, since a statistical average would then need to be calculated over many more samples to see a difference in distributions between different hardware and software platforms. - Lastly, limiting model logit access would also make it harder for HSPI. However, logits are an important part of the API, enabling users to explore smart sampling strategies like test-time scaling. We will add this discussion to the revised manuscript. Please let us know if you have further questions :) --- Rebuttal Comment 1.1: Comment: Thanks for the response, good to know the several ways to avoid exposure of stack details, I think those would be great to be added to appendix. 
And I remain positive about this work.
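The random-noise mitigation suggested in the answer to Q1 can be sketched as follows. This is a hypothetical NumPy illustration, not the authors' implementation; the function name, noise scale, and example logits are all assumptions.

```python
import numpy as np

def perturb_logits(logits, scale=1e-3, seed=None):
    """Return logits with small Gaussian noise added, masking the tiny,
    platform-specific numerical deviations that HSPI relies on.
    A larger `scale` hides the platform better but degrades output quality."""
    rng = np.random.default_rng(seed)
    logits = np.asarray(logits, dtype=np.float64)
    return logits + rng.normal(0.0, scale, size=logits.shape)

raw = [2.31, -0.57, 0.04]  # hypothetical logits from the served model
noisy = perturb_logits(raw, scale=1e-3, seed=0)
```

As the rebuttal notes, such perturbations force an attacker to average over many more queries, at the cost of slightly lower generation quality for all users.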
Conditional Diffusion Model with Nonlinear Data Transformation for Time Series Forecasting
Accept (poster)
Summary: The paper introduces a Conditional Diffusion Model (CDM) for generative modeling, leveraging denoising diffusion probabilistic models (DDPMs) to generate high-quality samples conditioned on specific inputs. The key contribution is a conditioning mechanism that guides the diffusion process, allowing for controlled generation tailored to input constraints. The proposed approach improves upon existing diffusion models by enhancing sample quality, diversity, and control over generated outputs. Empirical evaluations across multiple benchmarks demonstrate state-of-the-art performance, outperforming prior conditional generative models. Theoretical insights and ablation studies further validate the effectiveness of the conditioning mechanism in guiding diffusion-based generation. Claims And Evidence: The paper’s claims are generally well-supported by empirical results and theoretical insights. The authors claim that their Conditional Diffusion Model (CDM) improves sample quality, diversity, and controllability, which is backed by quantitative evaluations on benchmark datasets. The inclusion of state-of-the-art comparisons strengthens the claim that CDM outperforms prior methods. However, the paper does not clearly discuss the computational cost of CDM compared to existing models, which is crucial since diffusion models are computationally intensive. Methods And Evaluation Criteria: The proposed Conditional Diffusion Model (CDM) is well-aligned with the controlled generative modeling task, and the evaluation on standard benchmark datasets is appropriate. The use of sample quality metrics (e.g., FID, Inception Score) makes sense, but the paper lacks efficiency comparisons to assess computational cost. Additionally, while state-of-the-art baselines are included, a broader range of conditional generative models (e.g., GANs, VAEs) could provide a more comprehensive evaluation. 
Including robustness tests for different conditioning strategies would further strengthen the experimental design. Theoretical Claims: This is not a theoretically oriented paper. No comments here. Experimental Designs Or Analyses: The experimental design is solid, with comparisons against state-of-the-art methods, quantitative evaluations using standard metrics (FID, IS), and ablation studies. However, the paper lacks efficiency analysis, making it unclear how the computational cost compares to existing diffusion models. Supplementary Material: The supplementary material contains the program code. This is nice. Thanks to the Authors for providing this information. Relation To Broader Scientific Literature: The contribution of this paper strengthens the application of diffusion models in time-series forecasting. It is highly recommendable, provided the training and generation complexity can be well handled. Essential References Not Discussed: A recent ICLR paper can be discussed https://openreview.net/forum?id=OlzB6LnXcS Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **Methods And Evaluation Criteria:** **Q1: But the paper lacks efficiency comparisons to assess computational cost** **A1:** We now present a computational cost analysis comparing our model with other diffusion-based time series forecasting methods. Refer to the tables below for training and inference comparisons. Note that SSSD utilizes structural state-space-based diffusion layers along with multiple dense layers, while TimeGrad leverages RNN hidden states; TimeDiff incorporates future mixup and autoregressive initialization; and mr-Diff employs multiple diffusion models. In contrast, our innovation lies in the diffusion formulation rather than in architectural modifications. Both key components we introduced, the nonlinear transform ($T_\phi(.)$) and the condition network, consist of simple linear layers with an activation function. This design choice allows us to achieve greater computational efficiency than existing diffusion models for time series forecasting, as the tables below show. **Table 1:** Training time (ms) for different models and sequence lengths ($H$) for ETTh1 univariate. | | H=96 | H=168 | H=192 | H=336 | H=720 | |------------|-------|--------|--------|--------|--------| | **CN-Diff** | **0.21** | **0.26** | **0.27** | **0.31** | **0.39** | | mr-Diff | 0.59 | 0.69 | 0.71 | 0.74 | 0.82 | | TimeDiff | 0.71 | 0.75 | 0.77 | 0.82 | 0.85 | | TimeGrad | 2.11 | 2.42 | 3.21 | 4.22 | 5.93 | | CSDI | 5.72 | 7.09 | 7.59 | 10.59 | 17.21 | | SSSD | 16.98 | 19.34 | 22.64 | 32.12 | 52.93 | **Table 2**: Inference time (ms) for different models and sequence lengths ($H$) for ETTh1 univariate. 
| Model | Trainable Params | H=96 | H=168 | H=192 | H=336 | H=720 | |-----------|----------------|-------|--------|--------|--------|--------| | **CN-diff** | **1.1M** |**6.2** |**6.7** | **6.9** |**7.8** |**9.1** | | mr-Diff | 1.4M | 12.5 | 14.3 | 14.9 | 16.8 | 27.5 | | TimeDiff | 1.7M | 16.2 | 17.3 | 17.6 | 26.5 | 34.6 | | TimeGrad | 3.1M | 870.2 | 1620.9 | 1854.5 | 3119.7 | 6724.1 | | CSDI | 10M | 90.4 | 128.3 | 142.8 | 398.9 | 513.1 | | SSSD | 32M | 418.6 | 590.2 | 645.4 | 1054.2 | 2516.9 | **Q2: Additionally, while state-of-the-art baselines are included, a broader range of conditional generative models (e.g., GANs, VAEs) could provide a more comprehensive evaluation** **A2:** We have already benchmarked our results against variants of GANs **(PSA-GAN)** and VAEs **(D³ VAE)**, as shown in the tables in the Experimental section. Additionally, we are also benchmarking our CN-Diff against Graph transformer methods. Please refer to the response to reviewer 1 for Graph transformer results. ### **Essential References:** **Q3: A recent ICLR paper can be discussed** **A3:** This work focuses on introducing a novel formulation for time series forecasting rather than optimizing computational time. The suggested papers focus on reducing sampling time in image diffusion models. From recent work in time series diffusion [1, 2, 3], the number of diffusion timesteps is around 100, which is far fewer than for images. But even this might cause longer sampling times for long sequences. So, we acknowledge its importance and will consider it for future work, and will include the suggested references in our paper. **References:** - [1] Shen, L. and Kwok, J. Non-autoregressive conditional diffusion models for time series prediction. In International Conference on Machine Learning, pp. 31016– 31029. PMLR, 2023. - [2] Tashiro, Y., Song, J., Song, Y., and Ermon, S. Csdi: Conditional score-based diffusion models for probabilistic time series imputation. 
Advances in Neural Information Processing Systems, 34:24804–24816, 2021. - [3] Rasul, K., Seward, C., Schuster, I., and Vollgraf, R. Autoregressive denoising diffusion models for multivariate probabilistic time series forecasting. In International Conference on Machine Learning, pp. 8857–8868. PMLR, 2021. --- Rebuttal Comment 1.1: Comment: I confirm that I have read the author response to my review and I am satisfied with the new experimental results. I am updating my review in light of this response as necessary.
Summary: This paper proposes an approach for time series forecasting using conditional diffusion models. The approach utilizes a learnable forward process, making the transition operator and ending points of the forward process learnable and controlled by the conditioning observations. It derives a non-Markovian reverse process based on the learnable forward process and a training objective following the frameworks of DDPM. Experiment results on standard time-series benchmark datasets show the superior performance of the model against the latest baseline methods. ## update after rebuttal I'm satisfied with the authors' response on improving the presentation and terminology. I appreciate the authors' efforts in providing related works on learnable forward processes. I have no more serious issues with the work and would recommend the paper be accepted. Claims And Evidence: There are a few claims that require further clarification from the authors: 1. In the first paragraph of Sec.3, the authors claim that one issue of existing diffusion models is that their forward processes are fixed and untrainable. I would like to see the authors provide further clarification or evidence on why this could be a problem, given that the ending distribution of the forward process is controlled by condition c and already learnable. This claim is also in contradiction with many modern diffusion models like rectified flow [1] or flow matching [2], which prefer simple and straight reverse processes. 2. The authors claim that the forward process defined by Eq.7 is non-Markov. This non-Markovian property of the forward process brings the framework of this work closer to DDIM[3] instead of DDPM. The authors only mentioned the similarity between DDIM[3] and DDPM in Appendix A.2. I still think the authors should state that in the main paper. [1] Liu, Xingchao, Chengyue Gong, and Qiang Liu. 
"Flow straight and fast: Learning to generate and transfer data with rectified flow." arXiv preprint arXiv:2209.03003 (2022). [2] Lipman, Yaron, et al. "Flow matching for generative modeling." arXiv preprint arXiv:2210.02747 (2022). [3] Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising diffusion implicit models." arXiv preprint arXiv:2010.02502 (2020). Methods And Evaluation Criteria: The works are evaluated using standard time-series benchmark datasets and settings. They are fit for the task of time-series forecasting, which the proposed CN-Diffuse method is trying to solve. Theoretical Claims: The work requires more clarification on the theoretical correctness of the proposed method. The forward process in the proposed work is conditioned on $c$, resulting in an ending distribution of $X^T$ parameterized by $c$. For the reverse process, Equation 3 states that $X^T$ follows an isotropic Gaussian distribution, but Algorithm 2 states that $X^T$ follows a Gaussian distribution parameterized by $c$. This is an inconsistency. I'm not fully convinced that the proposed method is correct in the first case of Equation 3. Experimental Designs Or Analyses: The experiment design and analysis are thorough and valid. The experiments make comparisons against the latest baselines, and the ablation study results also thoroughly studied different ablated versions of the model. Experiment design and analysis are a strength of the work. Supplementary Material: I read the entire supplementary material and do not have problems with it. Relation To Broader Scientific Literature: The work is built on top of existing diffusion models like DDPM[1] and DDIM[2]. It is also closely related to existing conditional diffusion model works[3, 4] that parameterize the initial distribution of the reverse process using some conditions. [1] Ho, Jonathan, Ajay Jain, and Pieter Abbeel. "Denoising diffusion probabilistic models." 
Advances in neural information processing systems 33 (2020): 6840-6851. [2] Song, Jiaming, Chenlin Meng, and Stefano Ermon. "Denoising diffusion implicit models." arXiv preprint arXiv:2010.02502 (2020). [3] Lee, Sang-gil, et al. "Priorgrad: Improving conditional denoising diffusion models with data-dependent adaptive prior." arXiv preprint arXiv:2106.06406 (2021). [4] Chen, Jiacheng, Ruizhi Deng, and Yasutaka Furukawa. "Polydiffuse: Polygonal shape reconstruction via guided set diffusion models." Advances in Neural Information Processing Systems 36 (2023): 1863-1888. Essential References Not Discussed: There are a few works on conditional diffusion models with a learnable starting distribution of the reverse process. They are closely related to the proposed approach in terms of methodology and should be cited: [1] Lee, Sang-gil, et al. "Priorgrad: Improving conditional denoising diffusion models with data-dependent adaptive prior." arXiv preprint arXiv:2106.06406 (2021). [2] Chen, Jiacheng, Ruizhi Deng, and Yasutaka Furukawa. "Polydiffuse: Polygonal shape reconstruction via guided set diffusion models." Advances in Neural Information Processing Systems 36 (2023): 1863-1888. Other Strengths And Weaknesses: Other weaknesses: 1. The consistency of notation needs significant improvement. For example, $\mathbf{x}^{0:T}$ and $x^{0:T}$ were used interchangeably in Section 3. In the line above Equation 2, $x_{0:T}$ is used in place of $x^{0:T}$, while $x_{0:H}$ uses the subscript to denote time steps in time series data. 2. The presentation and layout of the work could be further improved. For example, there is plenty of white space in the left column of Page 4. 3. Some existing but related works are missing in the citations. Please see **Essential References Not Discussed**. 4. 
Given the existing works on conditional diffusion models and the first point of my concern in **Claims And Evidence**, I think the contributions of the work lack both originality and principled motivation. Other Comments Or Suggestions: Please see the sections **Other Strengths and Weaknesses** and **Questions for Authors**. Questions For Authors: 1. Can we use the same model to make predictions for different prediction window lengths, or do we need to train different models for different prediction window lengths? 2. Why is $T_{\phi}(x, t)$ not conditioned on c? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **Claims and Evidence:** **Q1: In the first Paragraph $\cdots$ . I would $\cdots$ straight reverses processes.** **A1:** **In the first $\cdots$ untrainable.** Our claim follows the well-known observation that incorporating a flexible forward process can enhance performance and address the limitation of fixed and untrainable forward processes in existing diffusion models [e.g., 1,2,3] - [1] A flexible diffusion model. ICML, 2023. - [2] Soft diffusion: Score matching with general corruptions. TMLR, 2023. - [3] Maximum likelihood training of implicit nonlinear diffusion model. NeurIPS, 2022. **I would like $\cdots$ processes.** To address the above limitation, we propose a nonlinear, time-dependent data transformation combined with a learnable conditioning mechanism in the forward process for time series forecasting. This learnable condition that we have introduced in the forward process results in the final distribution converging to $N(c,I)$. In case of any misunderstanding due to the typo (see below), we apologize. **Q2: The author claims $\cdots$ main paper.** **A2:** Thank you. We also considered this before submission. We agree with the suggestion of moving it to the main paper and shall do so. ### **Theoretical Claims:** **Q3: The work requires $\cdots$ of Equation 3** **A3:** Sorry for the typo and thank you for pointing it out. We sincerely appreciate your careful review. Equation 3 should be correctly written as $p(\textbf{x}^{T}) = \mathcal{N}(\textbf{x}^{T};c,I)$. The text in Algorithm 2 and line 132 (below Equation 1) are correct and consistent with our formulation. We carefully examined the manuscript and ensured that all remaining typos were corrected. ### **Other Strengths and Weaknesses:** **Q4: The consistency $\cdots$ time series data.** **A4:** Thank you for your comment. 
We will revise Equations 2 and 3 by replacing $x^{0:T}$ with $\textbf{x}^{0:T}$ and $p_{\theta}(x_{0:T})$ with $p_{\theta}(\textbf{x}^{0:T})$ to accurately reflect our notation. We believe that these changes will ensure clarity throughout the manuscript. **Q5: The presentation $\cdots$ of Page 4.** **A5:** We will carefully address the presentation and layout improvements in the paper, including optimizing the whitespace on pages 4 and 6. **Q6: Some existing $\cdots$ Not Discussed.** **A6:** Thanks for referring to these papers. We acknowledge the relevance of these works and clarify their distinctions from our approach. **Polydiffuse** uses a guided diffusion model for polygonal shape reconstruction, where the prior is learned via guidance networks $\mu(x,t,i)$ and $\sigma(x,t,i)$, which are independent of the condition. Also, they have not incorporated the condition in the forward process (ref. Fig. 3 [Polydiffuse]). In contrast, our approach incorporates the condition in the forward formulation, resulting in a condition-dependent prior. Our training procedure and loss function also differ significantly, as PolyDiffuse uses separate stages for prior learning and denoising. **PriorGrad** is a diffusion-based generative model for speech synthesis, which introduces a prior distribution based on data statistics rather than a learned prior. While their reverse process starts by sampling from $\mathcal{N}(0,\Sigma)$, our method samples from $\mathcal{N}(c,I)$ as we include a learnable condition in the forward process. These fundamental differences set our work apart from the above. **Q7: Given the existing $\cdots$ motivations.** **A7:** CN-Diff introduces a learnable condition in the forward process which depends on the history data of the time series for forecasting. This leads to the convergence of the ending distribution $\textbf{x}^T$ to $\mathcal{N}(c,I)$ in the forward process. In the reverse process, this same distribution is used as the prior to start sampling. 
Thus, CN-Diff completely differs from previous work, including the provided references PolyDiffuse and PriorGrad. Apart from this, we also introduced a novel nonlinear, time-dependent data transformation $T_\phi(\textbf{x}, t)$ in the forward process for time series forecasting, which reduces the gap between the log-likelihood and its variational approximation. Incorporating these concepts leads to a new training objective that we derived. The experimental results in Tables 4, 5, and 10 reveal notable improvements in time series forecasting due to our innovation. We therefore humbly request the reviewer to reconsider their verdict. ### **Questions For Authors:** **Q8: Can we use $\cdots$ window lengths?** **A8:** We have trained the model for specific prediction lengths. However, an autoregressive approach during inference allows it to adapt to different prediction window lengths without any retraining. **Q9: Why is $T_\phi(.)$ not conditioned on $c$?** **A9:** The proposal of further conditioning $T_\phi(.)$ on $c$ is a worthwhile consideration that is already in our future work deliberations. We appreciate your insightful contribution. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the response and clarification. I really appreciate that and will reconsider my initial rating. However, I'm still not fully convinced by the argument that incorporating a flexible forward process can enhance performance. It contradicts many of the latest works in the diffusion model community, which aim at keeping the forward and reverse processes simple and straightforward, as I mentioned in my initial review. I would like to see more discussion from the authors that addresses or reconciles this contradiction. Is it possible to apply strong conditioning on the diffusion model while keeping the forward process simple? --- Reply to Comment 1.1.1: Comment: Thank you for your comment. 
We will provide evidence from recent literature on image diffusion models that supports the utility of a flexible forward process. The first diffusion models [1] evolved significantly through subsequent work [e.g., 2, 3] and have led to good generative quality for complex high-dimensional data distributions [4, 5]. However, these diffusion models typically employ a linear Gaussian forward process, which may not be ideal for all data distributions. To deal with this issue in image diffusion models, the following approaches have been proposed: (i) alternative forward processes, (ii) learnable noise schedules, and (iii) flexible forward processes through a learnable data transformation. (i) Alternative forward process: Blurring with Gaussian noise [6, 7, 8], diffusion in wavelet domains [9], and forward processes derived from the exponential family [10] have been attempted. (ii) Learnable noise schedule instead of a fixed one: VDM [11] learns a noise schedule that minimizes the variance of the resulting variational lower-bound estimator, leading to faster optimization. Improved DDPM [12] showed that learning variances of the reverse diffusion process allows sampling with an order of magnitude fewer forward passes with a negligible difference in sample quality. MULAN [13] introduced a learned diffusion process that adaptively applies noise at different rates across an image, achieving SOTA performance while reducing training steps by 50 percent. (iii) Learning data transformations that make the forward process flexible: In INDM [14], nonlinear diffusion is performed in the data space by applying a linear diffusion in the latent space via a flow network. f-DMs [15] encompass an advanced class of diffusion models achieved by integrating predetermined or learned transformations at each iterative phase of signal alteration. 
This method surpasses traditional DMs with static Gaussian distributions, thus improving both efficiency and semantic representation learning. DIFFENC [16] introduced a data- and depth-dependent mean function in the diffusion process and achieved a statistically significant improvement on the CIFAR-10 data. NFDM: Learnable Forward Process for Improved Diffusion Modelling [17] introduced a framework that enhances diffusion models by supporting a broader range of forward processes beyond the standard linear Gaussian. Thus, we see that several works on image diffusion models have successfully used flexible forward processes and show a significant improvement. Such an approach was absent for time series forecasting tasks, and we have now introduced CN-Diff. Furthermore, our ablation studies show that the introduction of $T_\phi$ and $c$ gives state-of-the-art improvement for time series forecasting tasks. We hope that the above discussion clarifies the use of a flexible forward process. **Is it possible $\cdots$ simple?** Yes, recent models for time series forecasting that we compare against have attempted the same. TimeGrad used RNN hidden states as conditioning in diffusion; CSDI used a mask-based conditioning strategy to develop a conditional score-based diffusion model; TimeDiff introduced two novel conditioning mechanisms, future mixup and autoregressive initialization; and SSSD integrated convolution-based conditioning with state space models. In our CN-Diff model, the incorporation of dense layer-based conditioning within the forward process achieved state-of-the-art results, as shown in Table 2 of the paper. This shows that a simple update in the forward process makes the diffusion model easily trainable, especially for time series data. References: - [1] Deep unsupervised learning using nonequilibrium thermodynamics. ICML, 2015. - [2] Generative modeling by estimating gradients of the data distribution. NeurIPS, 2019. 
- [3] Denoising diffusion probabilistic models. NeurIPS, 2020. - [4] Diffusion models beat gans on image synthesis. NeurIPS, 2021. - [5] Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS, 2022. - [6] Generative modelling with inverse heat dissipation. ICLR, 2023. - [7] Blurring diffusion models. ICLR, 2023. - [8] Soft diffusion: Score matching for general corruptions. TMLR, 2023. - [9] Wavelet diffusion models are fast and scalable image generators. CVPR, 2023. - [10] Star-shaped denoising diffusion probabilistic models. NeurIPS, 2024. - [11] Variational diffusion models. NeurIPS, 2021. - [12] Improved denoising diffusion probabilistic models. ICML, 2021. - [13] Diffusion models with learned adaptive noise processes. NeurIPS, 2024. - [14] Maximum likelihood training of implicit nonlinear diffusion model. NeurIPS, 2022. - [15] f-DM: A Multi-stage Diffusion Model via Progressive Signal Transformation. ICLR, 2023. - [16] Diffenc: Variational diffusion with a learned encoder. ICLR, 2024. - [17] Neural flow diffusion models: Learnable forward process for improved diffusion modelling. NeurIPS, 2024.
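As a toy illustration of the fixed-versus-flexible distinction discussed in the rebuttal above, the following numpy sketch contrasts the standard linear Gaussian forward process with a variant that transforms the clean signal before noising. The `toy_transform`, the weight matrix `W`, and the noise schedule here are hypothetical stand-ins for a learned transformation, not the actual $T_\phi$ or schedule of CN-Diff or any cited model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy variance schedule over T steps (values are arbitrary).
T = 100
betas = np.linspace(1e-4, 0.02, T)
alpha_bar = np.cumprod(1.0 - betas)

def q_sample_linear(x0, t, eps):
    # Standard (fixed) linear Gaussian forward process:
    # x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

def q_sample_flexible(x0, t, eps, transform):
    # Illustrative flexible forward process: the clean signal is first
    # passed through a (possibly learnable, time-dependent) transform,
    # so the corruption is no longer linear in the data space.
    return np.sqrt(alpha_bar[t]) * transform(x0, t) + np.sqrt(1.0 - alpha_bar[t]) * eps

# Hypothetical transform standing in for a learned T_phi: interpolates
# between the identity (at t = 0) and a fixed linear map W as t grows.
W = rng.normal(size=(4, 4)) / 2.0
def toy_transform(x0, t):
    mix = t / (T - 1)
    return (1.0 - mix) * x0 + mix * (W @ x0)

x0 = rng.normal(size=4)
eps = rng.normal(size=4)
xt_linear = q_sample_linear(x0, 50, eps)
xt_flexible = q_sample_flexible(x0, 50, eps, toy_transform)
```

In a model like those surveyed above, the transform (and possibly the schedule) would be trained jointly with the reverse process; this sketch only illustrates how a transformed forward process reshapes the corruption applied to the data.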
Summary: This paper introduces CN-Diff, a novel conditional diffusion model tailored for time-series forecasting. The core idea is to integrate a nonlinear data transformation and a learnable condition within the forward process of diffusion, as opposed to more conventional diffusion-based approaches that use only a fixed (typically Gaussian) forward process. The authors argue that a trainable forward process can better align the prior with the data manifold, thereby enhancing predictive performance. The paper includes a comprehensive derivation of the model’s variational training objective and provides empirical results on nine real-world datasets, demonstrating superior or on-par performance compared to state-of-the-art time-series forecasting models. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Correct. Experimental Designs Or Analyses: Sound. Supplementary Material: . Relation To Broader Scientific Literature: . Essential References Not Discussed: . Other Strengths And Weaknesses: **Strengths**: * The paper proposes a nonlinear transformation for the forward process of the diffusion model to enhance its representation power when learning the conditional time-series forecast. Based on this proposal, the paper rigorously derives a clear and simple training objective and inference procedure. * Experiments on nine diverse benchmark datasets (e.g., Electricity, Traffic, Wind, Caiso, NorPool, ETT, Exchange) cover multiple domains (energy systems, roads, weather). CN-Diff outperforms or is competitive with a variety of strong baselines, including both deep learning architectures (e.g., N-Linear, N-BEATS, various Transformers) and recent diffusion-based models (TimeDiff, mr-Diff). **Weaknesses**: * It would be better if the authors could also compare the computational and memory complexity of CN-Diff with other methods.
* The nonlinear transformation $ T_\varphi $ is a central innovation, yet the paper provides little insight into what it learns. Other Comments Or Suggestions: . Questions For Authors: . Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **Other Strengths And Weaknesses:** **Q1: It would be better if the authors could also compare the computational and memory complexity of CN-Diff with other methods.** **A1:** We now present a computational cost analysis to compare our model with other diffusion-based time series forecasting methods. **(Please refer to the response to Reviewer 9M88 Q1 for training and inference comparison tables)**. Note that SSSD utilizes structural state-space-based diffusion layers along with multiple dense layers, while TimeGrad leverages RNN hidden states; TimeDiff incorporates future mixup and autoregressive initialization; and mr-Diff employs multiple diffusion models. In contrast, our innovation lies in the diffusion formulation rather than in architectural modifications. Both key components we introduced, the non-linear transform ($T_\phi(.)$) and the condition network, consist of simple linear layers with an activation function. This design choice allows us to achieve greater computational efficiency compared to existing diffusion models for time series forecasting. **Q2: The nonlinear transformation $T_\phi(.)$ is a central innovation, yet the paper provides little insight into what it learns.** **A2:** That's an interesting question. We offer empirical evidence through the ablation results presented in the article (Tables 5 and 10) that CN-Diff performs better than the ablation run without $T_\phi$. Furthermore, we also note that the ablation with the nonlinear transformation only along the feature dimension leads to a slightly larger improvement over the baseline without $T_\phi$ than the ablation with the nonlinear transformation only along the forecast window (Table 5 in the paper). To understand what is learned by $T_\phi$, we take a trained model and explore the correlation between the features in the learned latent representation space of $T_\phi(\textbf{x}, t)$ at different diffusion model timesteps.
We observed increasing correlations between the features in the learned latent space at different timesteps, as shown in the tables below. Thus, we hypothesize that this increased correlation and time-dependent adaptability perhaps facilitate a more effective diffusion process for time series forecasting. Similar observations that correlations in the latent space help diffusion models have been made in image and video diffusion work [1]. In the space of diffusion models for time series, paper [2] has noted that learning the correlation between the feature and temporal spaces is necessary for time series imputation tasks. We will add these to the discussion in the ablation study section in the revision.

**Table 3:** Feature Correlation Matrix for Actual input (ETTh1 dataset)

| Features | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|----------|--------|--------|--------|--------|--------|--------|--------|
| 0 | **1.00** | 0.08 | **0.99** | 0.07 | 0.22 | -0.20 | -0.19 |
| 1 | 0.08 | **1.00** | 0.08 | **0.94** | 0.05 | 0.22 | -0.06 |
| 2 | **0.99** | 0.08 | **1.00** | 0.09 | 0.12 | -0.27 | -0.19 |
| 3 | 0.07 | **0.94** | 0.09 | **1.00** | -0.10 | 0.06 | -0.07 |
| 4 | 0.22 | 0.05 | 0.12 | -0.10 | **1.00** | **0.56** | -0.06 |
| 5 | -0.20 | 0.22 | -0.27 | 0.06 | **0.56** | **1.00** | 0.11 |
| 6 | -0.19 | -0.06 | -0.19 | -0.07 | -0.06 | 0.11 | **1.00** |

**Table 4:** Feature Correlation Matrix for Transformed input $T_\phi(.)$ (ETTh1 dataset)

| Features | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
|----------|--------|--------|--------|--------|--------|--------|--------|
| 0 | **1.00** | **-0.52** | **-0.60** | **0.99** | **0.93** | **-0.96** | **-0.56** |
| 1 | **-0.52** | **1.00** | -0.31 | **-0.58** | **-0.68** | **0.60** | **0.63** |
| 2 | **-0.60** | -0.31 | **1.00** | **-0.57** | -0.33 | **0.55** | -0.08 |
| 3 | **0.99** | **-0.58** | **-0.57** | **1.00** | **0.94** | **-0.97** | **-0.59** |
| 4 | **0.93** | **-0.68** | -0.33 | **0.94** | **1.00** | **-0.86** | **-0.80** |
| 5 | **-0.96** | **0.60** | **0.55** | **-0.97** | **-0.86** | **1.00** | 0.43 |
| 6 | **-0.56** | **0.63** | -0.08 | **-0.59** | **-0.80** | 0.43 | **1.00** |

**References:**
- [1] Ge, S., Nah, S., Liu, G., Poon, T., Tao, A., Catanzaro, B., ... and Balaji, Y. (2023). Preserve your own correlation: A noise prior for video diffusion models. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 22930-22941).
- [2] Tashiro, Y., Song, J., Song, Y., and Ermon, S. (2021). CSDI: Conditional score-based diffusion models for probabilistic time series imputation. Advances in Neural Information Processing Systems, 34, 24804-24816.
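Correlation matrices of the kind shown above can be computed in a few lines. The sketch below uses synthetic data (not the actual ETTh1 latents) to illustrate the computation: `np.corrcoef` over a batch of latent vectors at a fixed timestep.

```python
import numpy as np

def feature_correlation(latents):
    # latents: (n_samples, n_features) array, e.g. T_phi(x, t) evaluated
    # over a batch at one diffusion timestep. np.corrcoef treats rows as
    # variables, so transpose to put features on the rows.
    return np.corrcoef(latents.T)

# Toy illustration with synthetic features: feature 1 is strongly
# correlated with feature 0, feature 2 is independent noise.
rng = np.random.default_rng(1)
base = rng.normal(size=(500, 1))
latents = np.hstack([
    base,
    0.9 * base + 0.1 * rng.normal(size=(500, 1)),
    rng.normal(size=(500, 1)),
])
corr = feature_correlation(latents)
```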
Summary: The paper presents a conditional diffusion method for time series forecasting. The proposed method, CN-Diff, adds a non-linear transformation in the forward process. This transformation is a learnable parameter and results in a non-Markovian series of latent variables. These latent variables are learned in the reverse process. An objective function is developed for training. Claims And Evidence: 1. The main claim — that the introduction of a non-linear transformation makes the reverse process non-Markovian — is supported by mathematical proof in the main paper and the supplement. 2. The effectiveness of the developed objective function is supported by experiments and an ablation study. The overall effectiveness of the proposed approach needs a more detailed analysis of the results, specifically why other methods perform better in some of the cases. Furthermore, the use of synthetic data would greatly help in this regard. Methods And Evaluation Criteria: The proposed method is evaluated across various datasets: wind power, Caiso (hourly load), electricity loads, Traffic, electricity, and Weather. The comparisons are done with SOTA diffusion and other transformer methods. Theoretical Claims: I believe there are no theoretical claims made in the paper, as the paper proposes a conditional diffusion method for time series. I did not check the correctness of the derivation of equations for the non-linear transformation and the consequent non-Markovian process, the objective function, or the proofs. Experimental Designs Or Analyses: The datasets used are the ones usually used in SOTA time series forecasting methods. The SOTA methods used for comparison are the current diffusion and transformer-based methods. Some experiments should be done using synthetic data.
Supplementary Material: I have reviewed all the appendices, although I was not able to check all the equations and provided proofs. Relation To Broader Scientific Literature: The paper proposes a conditional diffusion approach for time series forecasting and, I believe, does not relate to the broader scientific literature. Essential References Not Discussed: The works referenced in the paper are adequate, although I believe graph deep learning methods (e.g., STGNN) should be discussed. Other Strengths And Weaknesses: Strengths: 1. Non-linear transformation in the forward process resulting in a non-Markovian reverse process. 2. Derivation of the objective function. 3. Mathematical equations and proofs. 4. Ablation study. Weaknesses: 1. Need to use synthetic data to demonstrate the effectiveness of the approach. 2. Need to consider datasets from more domains, e.g., the stock market. 3. Need to discuss specifics of the comparison results, i.e., why for some datasets the proposed method is second in performance. Other Comments Or Suggestions: 1. Please add a figure that shows the network architecture of the proposed method and a figure that visually describes the proposed technique. 2. Figures and tables should be closer to the text descriptions; e.g., Figure 1 is on a different page than its description, and the same is true of Table 3. 3. Table 1 is not described in the text. 4. For the algorithms, please provide detailed figures rather than text descriptions. Questions For Authors: 1. The proposed method is claimed to do a better job of learning the patterns in the time series. Would not the use of synthetic data better demonstrate the effectiveness of the method? 2. What are the specific reasons that, in your experiments, other SOTA methods perform better than the proposed method for some datasets? Can you please provide detailed reasoning related to the characteristics of these datasets? 3. How will the proposed approach perform on other domains, e.g., the stock market? 4.
How do the graph transformer methods, e.g., STGNN, compare with the diffusion methods? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### **Questions For Authors:** **Q1: The proposed method is claimed to do a better job of learning the patterns in the time series. Would not the use of synthetic data better demonstrate the effectiveness of the method?** **A1:** We agree with your reasoning that the structured nature of synthetic data will make the effectiveness of the model more apparent. To address this, we have conducted additional experiments using multivariate synthetic data generated with Gaussian process kernels following [1]. As shown in the table below, our model demonstrates superior performance compared to other models, even with synthetic data.

**Table 1:** Performance comparison across different models on the synthetic dataset.

| Model | **Synthetic (MSE)** | **Synthetic (MAE)** |
|------------|--------------------|--------------------|
| **CN-Diff** | **0.722** | **0.308** |
| Autoformer | 0.767 | 0.339 |
| PatchTST | 0.771 | 0.345 |
| FiLM | 0.752 | 0.319 |
| DLinear | 0.747 | 0.311 |
| CSDI | 0.809 | 0.506 |

- [1] TimePFN: Effective multivariate time series forecasting with synthetic data. arXiv preprint.

**Q2: What are the specific reasons that, in your experiments, other SOTA methods perform better than the proposed method for some datasets? Can you please provide detailed reasoning related to the characteristics of these datasets?** **A2:** We reiterate that CN-Diff achieves the best average rank across multiple datasets. For the Traffic and ETTm1 datasets, CN-Diff achieves the second rank. For Wind, CN-Diff achieves the fourth rank. The Wind dataset represents highly volatile wind power data. The superior performance of the multiresolution diffusion model mr-Diff (rank 1 for Wind) suggests that wind patterns need to be analyzed at multiple time scales simultaneously. We believe that incorporating CN-Diff at multiple resolutions to build a multiresolution CN-Diff could help us model such datasets better.
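The GP-based generation mentioned in A1 above can be sketched as follows. This is a generic recipe (RBF kernel, independent channels) in the spirit of [1], with arbitrary lengthscale and sizes, not the exact TimePFN pipeline:

```python
import numpy as np

def rbf_kernel(t, lengthscale=5.0, variance=1.0):
    # Squared-exponential (RBF) Gram matrix over time points t.
    d = t[:, None] - t[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_series(n_steps, n_channels, lengthscale=5.0, seed=0):
    # Draw a multivariate series whose channels are independent samples
    # from a GP with an RBF kernel; a small jitter keeps the Cholesky
    # factorization numerically stable.
    rng = np.random.default_rng(seed)
    t = np.arange(n_steps, dtype=float)
    K = rbf_kernel(t, lengthscale) + 1e-6 * np.eye(n_steps)
    L = np.linalg.cholesky(K)
    return L @ rng.normal(size=(n_steps, n_channels))

series = sample_gp_series(n_steps=96, n_channels=7, lengthscale=8.0)
```

Varying the lengthscale (or mixing several kernels) controls how smooth or volatile the generated series are, which is what makes such data a controlled testbed for forecasting models.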
**Q3: How will the proposed approach perform on other domains, e.g., the stock market?** **A3:** We have added new experiments to further evaluate CN-Diff using stock market data. The table below highlights the effectiveness of CN-Diff, showing that it outperforms other models even in this setting. We have used the CSI-300 (the price-weighted average index of 300 top-rated listed companies on the Shanghai and Shenzhen Stock Exchanges) closing price as stock market data, following [2].

**Table 2:** Performance comparison across different models on the stock dataset.

| Model | **Stock (MSE)** | **Stock (MAE)** |
|------------|----------------|----------------|
| **CN-Diff** | **0.020** | **0.109** |
| Autoformer | 0.058 | 0.190 |
| PatchTST | 0.028 | 0.129 |
| FiLM | 0.040 | 0.159 |
| DLinear | 0.044 | 0.170 |
| CSDI | 0.037 | 0.169 |

- [2] ResNLS: An improved model for stock price forecasting. Computational Intelligence.

**Q4: How do the graph transformer methods, e.g., STGNN, compare with the diffusion methods?** **A4:** In our experiments, we have compared our method with CPF [3], a model using Graph Neural Networks. To further strengthen our analysis, we now also incorporate a comparison with SageFormer [4], a series-aware graph-enhanced Transformer model for time series forecasting. The table below demonstrates CN-Diff's superiority over graph-based methods in time series forecasting.
**Table 3:** Comparison of CN-Diff and SageFormer across different datasets with MSE and MAE metrics.

| Model | ETTm1 (MSE/MAE) | Electricity (MSE/MAE) | Stock (MSE/MAE) | Exchange (MSE/MAE) | ETTh1 (MSE/MAE) | Weather (MSE/MAE) |
|------------|----------------|----------------|----------------|----------------|----------------|----------------|
| **CN-Diff** | **0.340** / **0.378** | **0.145** / **0.243** | **0.020** / **0.109** | **0.016** / **0.079** | **0.405** / 0.421 | **0.296** / **0.324** |
| SageFormer | 0.370 / 0.388 | 0.159 / 0.255 | 0.030 / 0.135 | 0.016 / 0.081 | 0.421 / **0.419** | 0.324 / 0.344 |

- [3] Coherent probabilistic forecasting of temporal hierarchies. PMLR.
- [4] SageFormer: Series-aware framework for long-term multivariate time-series forecasting. IEEE IoT Journal.

### **Other Comments Or Suggestions:**

Thanks for your valuable feedback. We will ensure that the figures and tables are closer to the corresponding text descriptions to improve readability. We will also include a reference to Table 1 in the dataset section. The schematic Figure 2 explains the architecture and algorithms. We will slightly modify it to clearly call out Algorithms 1 (training) and 2 (inference) in Figure 2. We appreciate your careful review and will make the necessary revisions accordingly. Thank you!

---

Rebuttal Comment 1.1: Comment: After reading the authors' rebuttal and other reviews, I am keeping my original score of accept.
A Unified Framework for Generalization Error Analysis of Learning with Arbitrary Discrete Weak Features
Accept (poster)
Summary: The paper presents a unified framework called WFL for analyzing generalization error in learning tasks involving arbitrary discrete WFs. The authors propose a risk-based formulation that accommodates various types of WFs and a broad range of learning algorithms. ## update after rebuttal The author's response has addressed some of my concerns, and I will keep the score. Claims And Evidence: The claims made in the paper are generally supported by theoretical analysis and experimental validation. However, the experimental results are limited to a few datasets, and the impact of different types of WFs on the learning process could be further explored. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The authors use a combination of theoretical analysis and empirical validation to demonstrate the effectiveness of their framework. Theoretical Claims: I have reviewed the theoretical claims and proofs presented in the paper. The proofs appear to be correct, and the authors provide detailed derivations for their error bounds and consistency conditions. However, I am not an expert in this specific area, so I cannot fully verify the correctness of all the theoretical results. Experimental Designs Or Analyses: The experimental design is sound, and the authors provide a clear explanation of their setup and methodology. Supplementary Material: I have reviewed the supplementary material, which includes detailed proofs for the theoretical results and additional information about the datasets and experimental setup. Relation To Broader Scientific Literature: The paper builds on existing work in WSL and extends it to the problem of learning with WFs. The authors discuss related work in WSL, including semi-supervised learning and learning with label noise, and provide a unified framework that accommodates various forms of WFs.
This is beneficial to the statistical learning community. Essential References Not Discussed: The literature review is extensive. Other Strengths And Weaknesses: **Other Strengths:** 1. The paper's motivation and structure are clear and reader-friendly. 2. The notation and setup are also clearly presented. 3. Codes are provided in the supplementary materials, which benefit practitioners. **Other Weaknesses:** 1.Theorem 4.3 and Theorem 4.5 lack theoretical contributions. 1.1 These two results provide only consistency findings; no convergence rates are presented. 1.2 The proofs of these theorems rely predominantly on results from existing references. Other Comments Or Suggestions: 1.Suggestion: Add an algorithm table to summarize the methods. Questions For Authors: 1. Is it possible to add the details of proof for my listed confusions in the Theoretical Claims* section? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your careful evaluation and valuable suggestions to improve our paper. **Comparison with Benchmark Methods:** The most relevant benchmark is using WFs directly as input features, because our framework is designed to improve upon this approach by accommodating various learning methods for learning $f$ and $g$. Since this study focuses on the theoretical understanding of WFL, our experiments were conducted to validate the theoretical results rather than compare against the benchmark method. However, we recognize the importance of empirical comparisons with the benchmark method and will include them in the Appendix of the Camera Ready version. We are currently conducting additional experiments, and preliminary results indicate that when $g$ is randomized, our framework outperforms the baseline if $g$’s error rate is below 0.5. Thus, for problem settings where the exact values of WFs can be well estimated, our framework could be a valuable tool for practitioners. **Clarification on the Second Inequality in A.1:** In p.11 lines 567–568, the 0-1 loss is 0 when $g$ correctly predicts the exact values of all WFs and 1 otherwise. In contrast, in lines 570–571, the sum of the 0-1 losses is 0 when $g$ correctly predicts all WFs and equals $k \ge 1$ when $g$ mispredicts $k$ WFs. Based on this fact, the second inequality in p.11 Eq.(A.1) holds. **Derivation from (a1) and (a2) in Eq.(A.11) to Eq.(A.12) in A.2:** Eq. (A.11) shows that the LHS of Eq. (A.12) is upper-bounded by the expectation of the sum of (a1) and (a2). After Eq.(A.11), we analyze (a1) and (a2) separately. Substituting these results into the RHS of Eq.(A.11) and simplifying the expectation operation yields the RHS of Eq.(A.12), confirming its validity. **Clarification on the Two Inequalities in A.3 (P.15, Line 770):** The question refers to lines. 827–829, but the uniform law of large numbers is actually used from l. 
770 onward, so we interpret the comment as referring to that part. This law ensures uniform convergence of the empirical expectation to the true expectation over the function class, given that the finite samples are i.i.d. In the top two equations on p.15, the samples used for each empirical risk are indeed i.i.d., validating these inequalities. We will add a reference to Chapter 3.1 (pp. 30–34) in [Mohri 2018] in the Camera Ready version for further clarification. **Justification of the Second Equality in A.5 (Line 871):** Thank you for your valuable comment. We acknowledge that the second equality in line 871 does not hold due to a minor mistake. We will not use this equality and instead consider only the first inequality in line 871. Additionally, we will replace $\mathfrak{R}^*\_{n}(\mathcal{G}\_j(r\_j, \bar{S}\_j))$ with $\mathfrak{R}^*\_{n}( \tilde{\mathcal{G}}\_j(r\_j, \bar{S}\_j))$ in lines 874–887 and in Eq. (4.10) of Theorem 4.4. We explain why this modification is sufficient in the next section. This correction does not affect the subsequent discussion and the theoretical results of the paper. This is because, in the discussion from Theorem 4.4 onward, including Theorem 4.5, the original assumption on the convergence rate of $\mathfrak{R}^*\_{n}(\mathcal{G}\_j(r\_j, \bar{S}\_j))$ is simply replaced with the same assumption for $\mathfrak{R}^*\_{n}( \tilde{\mathcal{G}}\_j(r\_j, \bar{S}\_j))$, and these assumptions are equally valid. We will correct this in the camera-ready version. **Clarification on the Derivation of A.5 (Lines 876-886):** The equations in A.5 (lines. 876–886) can be derived by applying Eq. (A.33) and the equation in lines. 870–873 to the RHS of Eq. (A.32). We acknowledge that this step was not explicitly stated and will clarify it in the Camera Ready version. 
**Theoretical Contributions of Theorems 4.3 and 4.5:** For Weakness 1.1, the discussion on convergence rates appears follows Theorems 4.2 and 4.4, where error bounds are established (lines 258–320, 356–378). We used “rate of decrease” in the discussion but recognize that “convergence rate” is more precise. We will revise this in the Camera Ready version. Regarding Weakness 1.2, our study provides insights that go beyond a simple combination of existing theoretical results on $f$ and $g$. While their learning methods rely on prior approaches, a theoretical analysis of WFL requires explicitly modeling their interactions. We address this through Theorems 4.2 and 4.4. Theorems 4.3 and 4.5 establish the consistency of WFL by integrating existing theories using the results of Theorems 4.2 and 4.4. We acknowledge that this was not sufficiently clear and will provide further clarification in the Camera Ready version. **Procedure of Our Algorithm Class:** We will make the algorithm class procedure easier to understand in the Camera Ready version. Reference: [Mohri 2018] Mohri, M. Foundations of machine learning, 2nd edition. 2018. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough response to my reviews. All my concerns have been addressed. I will maintain my overall ratings, and I'm happy to see that some of my suggestions may be included in the next version. Clarification on the Two Inequalities in A.3 (P.15, Line 770): My original question is about the issue you interpreted in the rebuttal, which was caused by my typo of line numbers. --- Reply to Comment 1.1.1: Comment: Thank you again for your valuable time and dedicated effort. It was gratifying to know that our response addressed your concerns. We especially appreciate the insightful comments and constructive feedback you shared throughout the review process.
Summary: In this paper, the authors propose a unified framework called weak feature learning which accommodates arbitrary discrete weak features and a broad range of learning algorithms. The authors also introduce a class of algorithms that learn both the estimation model for weak features and the predictive model for downstream task and present generalization error analysis, as well as, theoretical conditions necessary for achieving consistency of the learning method. ### update after rebuttal The authors have answered my questions and I will maintain my score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not check all the proofs. However, in the proof of Theorem 3.1, after the second inequality, $M_l$ should be $U_l$. Experimental Designs Or Analyses: Yes. Supplementary Material: I checked section A.1. Relation To Broader Scientific Literature: The results of this paper elucidates the interplay between estimation error of weak features and prediction error of a downstream task which seems to be an important problem. Essential References Not Discussed: Yes. Other Strengths And Weaknesses: 1. Once Lemma 4.1 is established, why Theorem 3.1 is needed? 2. Since the features $x_w$ and $x_0$ are disjoint set of features, when is it possible to learn weak features $x^w$ from normal features $x_0$? Is it inherently assumed that $x_0$ contains enough information regrading $x_w$ so that the estimation function $g$ makes sense? 3. How would one design experiment to demonstrate the effectiveness of Theorem 4.4? Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful and thorough review of our manuscript, and for pointing out areas for further clarification and improvement. **Necessity of Theorem 3.1 Given Lemma 4.1 (about Question 1.):** We agree that Lemma 4.1 provides a tighter bound and that Theorem 3.1 can be derived from it. However, we introduced Theorem 3.1 to improve the clarity of our discussion. While Lemma 4.1 focuses on expressing the relationship between the risks of $f$ and $g$ for detailed analysis, Theorem 3.1 is intended to establish the validity of our proposed framework. Specifically, Theorem 3.1 explicitly illustrates the correspondence between the risk $R_l(f)$ in standard supervised learning and the objective function of WFL $R^{\mathrm{dWFL}}_{l, \lambda}(f)$. This correspondence is not straightforward to interpret directly from the inequality in Lemma 4.1. Since this clarification was not explicitly stated in our paper, we will include it in the Camera Ready version. **Dependence on Informative Ordinary (Normal) Features (about Question 2.):** Your understanding is correct. Since our framework is based on constructing a model that estimates $X^{\mathrm{w}}$ from $X^{\mathrm{o}}$​, accurate estimation of $X^{\mathrm{w}}$ is feasible when $X^{\mathrm{o}}$ is sufficiently correlated with $X^{\mathrm{w}}$. However, while a strong correlation between $X^{\mathrm{o}}$​ and $X^{\mathrm{w}}$​ is a practical condition for achieving good performance, our theoretical results hold regardless of this assumption. **Experimental Design to Demonstrate the Effectiveness of Theorem 4.4 (about Question 3.):** To demonstrate the effectiveness of Theorem 4.4, we would need to generate multiple functions $f$ with varying accuracy and examine whether the learned results of $g$ exhibit a trend similar to the error bound provided by Theorem 4.4. 
There are two primary methods to construct $f$ with varying accuracy: (1) training $f$ directly using different learning settings and (2) constructing $f$ in a manner similar to $g$ in Section 5.2. The first approach can be implemented by training a neural network while carefully adjusting training hyperparameters and dataset composition. However, ensuring that these adjustments are not arbitrary is challenging, which may introduce biases and compromise the justification and validity of the experimental results. The second approach, following Section 5.2, is not feasible because the constructed $f$ would make $g$ untrainable. Since $f$ makes random predictions with a fixed accuracy, the empirical risk $\hat{R}_{l,f}(g)$ becomes non-differentiable with respect to $g$, preventing effective learning. Due to these limitations, this experimental design is challenging. However, we note that the error bound for $f$ in Theorem 4.2 shares a highly similar form with the bound for $g$ in Theorem 4.4. Given the verification of $f$'s bound in Section 5.2, we can reasonably infer the validity of $g$'s bound as well. **Typographical Error in Theorem 3.1 Proof:** Thank you for pointing this out. We confirm the mistake and will correct it in the Camera Ready version. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. Most of my questions have been ansered. I will maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you again for your time and thoughtful review. We’re pleased to hear that our responses have clarified your concerns.
Summary: The paper presents a unified framework called WFL for analyzing generalization error in learning tasks involving arbitrary discrete WFs. The authors propose a risk-based formulation that accommodates various types of WFs and a broad range of learning algorithms. ## update after rebuttal The author's response has addressed some of my concerns, and I will keep the score. Claims And Evidence: The claims made in the paper are generally supported by theoretical analysis and experimental validation. However, the experimental results are limited to a few datasets, and the impact of different types of WFs on the learning process could be further explored. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. The authors use a combination of theoretical analysis and empirical validation to demonstrate the effectiveness of their framework. Theoretical Claims: I have reviewed the theoretical claims and proofs presented in the paper. The proofs appear to be correct, and the authors provide detailed derivations for their error bounds and consistency conditions. However, I am not an expert in this specific area, so I cannot fully verify the correctness of all the theoretical results. Experimental Designs Or Analyses: The experimental design is sound, and the authors provide a clear explanation of their setup and methodology. Supplementary Material: I have reviewed the supplementary material, which includes detailed proofs for the theoretical results and additional information about the datasets and experimental setup. Relation To Broader Scientific Literature: The paper builds on existing work in WSL and extends it to the problem of learning with WFs. The authors discuss related work in WSL, including semi-supervised learning and learning with label noise, and provide a unified framework that accommodates various forms of WFs. 
The paper also connects to prior work on ItR and CFL, providing a theoretical foundation for these methods. Essential References Not Discussed: I am not an expert in this specific area, so I have no suggestions for additional references. Other Strengths And Weaknesses: The paper's main strengths lie in its theoretical contributions and the unified framework it provides for analyzing generalization error in learning tasks with weak features. However, the impact of different types of WFs on the learning process could be further explored. Other Comments Or Suggestions: - The paper is well-written, but there are a few minor grammatical errors and typos that could be corrected. For example, in Section 3.2, the phrase "The primary factor reducing explainability is the inaccuracy of information provided by WFs" could be rephrased for clarity. - The notation in some of the equations could be more consistent. For example, in Equation (3.4), the use of $\lambda$ as a weighting parameter is clear, but the notation could be more consistent with the rest of the paper. ***Note:*** The template of this paper seems to be different from the official template. Questions For Authors: - Can the authors provide more details on the computational complexity and scalability of the proposed methods, particularly for large-scale datasets? - How does the proposed framework handle continuous weak features, and are there any limitations in this regard? - Could the authors discuss any potential challenges or limitations in applying the proposed framework to real-world applications, particularly in domains with complex data distributions? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort the reviewer dedicated to evaluating our work and are grateful for your insightful feedback and constructive suggestions. **The Impact of Different Types of WFs on the Learning Process and Experimental Datasets:** Our framework and analysis hold regardless of the type of discrete WFs, which you have also acknowledged in "Relation to Broader Scientific Literature." This generality is possible because, within our framework, the learning of $f$ and $g$ is decoupled: differences in WF types affect only the learning of $g$, while during the learning of $f$, the WF types are concealed, and only $g$ influences $f$. Also, the effect of WF types on $g$ can be thoroughly investigated through existing theories, such as those related to PLL and CLL. Therefore, to validate our theoretical results, it suffices to examine the behavior of $f$ under different levels of $g$'s accuracy rather than testing specific WF types. Meanwhile, we acknowledge your concern regarding the limited variety of datasets used in our experiments. We take your feedback seriously and are conducting additional experiments on a medical dataset and a dataset related to default prediction. The results will be included in the Appendix of the Camera Ready version. While our additional experiments are still ongoing, preliminary results indicate qualitatively consistent findings across new datasets. **Computational Complexity and Scalability, and Limitations in Real-World Applications (about Q1 & Q3):** We agree that computational complexity, scalability, and handling complex data distributions are crucial considerations for real-world applications. Our proposed framework is designed to integrate existing learning algorithms for estimating WFs and predicting a downstream task. Consequently, the computational complexity, scalability, and adaptability to complex data distributions depend on the choice of these algorithms.
As for computational complexity, in sequential learning, the computational complexity is determined solely by the sum of computational complexities of the methods applied to learn $f$ and $g$, respectively. In iterative learning, the only additional factor is the number of iterations. Therefore, by selecting appropriate methods within our framework, we can ensure scalability and adaptability to complex data distributions while controlling computational complexity. This flexibility is largely due to the simplicity of our framework. However, this simplicity also imposes certain limitations, particularly in leveraging domain knowledge for real-world applications. For instance, our framework does not support approaches where dependencies among WFs are exploited to estimate certain WFs first and use their results for estimating other WFs. Addressing this limitation is an important direction for extending our framework, and we plan to explore this in future research. Since these points were not explicitly discussed in our paper, we appreciate your valuable feedback and will include additional clarifications in the Camera Ready version. **Handling Continuous Weak Features (about Q.2) :** We have addressed a similar question from Reviewer guZX. Our framework can be applied to continuous WFs in almost the same manner as for discrete WFs. Please refer to our response (the response “Extending the Framework to Handle Continuous Weak Features” for reviewer guZX) there for details. **Grammatical Errors and Typos:** Due to time constraints, we are unable to make extensive revisions at this stage. However, we plan to have the paper professionally proofread before the Camera Ready submission. **Notation Consistency and Template Confirmation:** Thank you for your feedback. We will carefully review the notation to ensure consistency throughout the paper. 
Regarding the template, we used the official ICML 2025 template downloaded from the conference website, but we will verify it again for correctness.
Summary: This paper introduces a unified framework for Weak Feature Learning (WFL), which aims to address the challenge of learning with arbitrary discrete weak features (WFs)—features that are incomplete, erroneous, or ambiguous due to various real-world constraints. The authors propose a risk-based formulation that jointly optimizes a feature estimation model to approximate the true WFs and a label prediction model to perform the downstream task. Theoretical guarantees on consistency and error propagation are validated empirically across real-world datasets. Claims And Evidence: The claims are well-supported by both theoretical analysis and empirical evidence. Methods And Evaluation Criteria: The evaluation strategy is well-structured and rigorously tests the framework’s effectiveness. Theoretical Claims: These theoretical results are well-supported and logically sound. Experimental Designs Or Analyses: The experiments are well-designed and convincingly validate the theoretical framework. Supplementary Material: The supplementary material is thorough and well-documented. Relation To Broader Scientific Literature: 1. Weakly supervised learning (WSL) (e.g., complementary labels, positive-unlabeled learning). 2. Imputation-based methods (ItR) and feature refinement strategies (CFL). Essential References Not Discussed: The paper adequately cites major prior works. Other Strengths And Weaknesses: Strengths 1. This is the first work to unify WFL methods with finite-sample error bounds and consistency proofs. 2. Validated on real-world datasets; code and appendix enhance reproducibility. 3. Bridges ItR, CFL, and weakly supervised learning under a single framework. Weaknesses 1. The proposed framework excludes continuous weak features, restricting applicability. 2. Rademacher complexity computations may not scale to high-dimensional data. Other Comments Or Suggestions: Investigate integrating causal inference to distinguish feature ambiguity from noise.
Questions For Authors: 1. How would you extend the framework to handle continuous weak features (e.g., sensor readings with Gaussian noise)? 2. How does the framework perform under adversarial weak features (e.g., manipulated inputs)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and constructive feedback on our work. **Extending the Framework to Handle Continuous Weak Features:** Extending weak feature learning (WFL) to continuous weak features (WFs) is an important research direction. We have investigated this and found that replacing the 0-1 loss-based risk in discrete WFL with a mean squared error (MSE)-based risk yields a parallel theoretical result. However, deriving this required a completely different approach, making it difficult to unify both theories in a single paper. Thus, we have compiled continuous WFL separately. Key differences in continuous WFL are as follows: * Continuous WFL requires handling WFs with continuous-valued uncertainties, such as missing or misobserved values, added Gaussian noise, or intervals containing the exact value. * The estimation error of $g$ in continuous WFL must be measured using metrics suitable for continuous values, such as MSE. * The theoretical framework of discrete WFL is based on the 0-1 loss, and its derivation heavily relies on the discreteness of WFs (e.g., Theorem 3.1 and Lemma 4.1). Consequently, this theory cannot be directly applied to continuous WFL. To establish a parallel theory, we derive an inequality that upper-bounds Lemma 4.1’s LHS by the product of the generalization error of $f$ and the MSE of $g$. Lemma 4.1’s LHS can be bounded using total variation distance and further upper-bounded by KL divergence via Pinsker’s inequality. Assuming $g$’s prediction follows a Gaussian distribution, its negative log-likelihood corresponds to its MSE (up to constants), allowing the desired inequality to be derived. This result establishes a continuous WFL theory aligned with discrete WFL, suggesting that WFL provides a unified framework for both discrete and continuous WFs. **Scalability of Rademacher Complexity Computations:** We agree that computing Rademacher complexity is challenging for high-dimensional data.
However, this is a general issue for all analyses relying on Rademacher complexity, not a specific limitation of our work. Our results provide valuable theoretical insights, such as the convergence rates of error bounds for $f$ and $g$, their mutual interactions, and the impact of WF properties on $f$'s learning. These contributions remain valid regardless of whether Rademacher complexity can be explicitly computed. If one needs to estimate error bounds in practice, approximation methods can facilitate Rademacher complexity computation. For instance, dimensionality reduction techniques can compress high-dimensional data before computing Rademacher complexity. Importantly, the computability of Rademacher complexity does not affect the validity of our theoretical analysis, which remains robust even in high-dimensional settings. **Incorporating Causal Inference into WFL:** We have not yet explored integrating causal inference into WFL, but we agree that this extension is highly intriguing. Inspired by your suggestion, we considered an approach of modeling the observational process of WFs using a Structural Causal Model (SCM). The ambiguity in WFs' observations arises from factors such as observational noise, anonymization processes, or the probabilistic nature of their exact values. By modeling the generation process of WFs with an SCM, we believe it is possible to separate these effects, leading to improved estimation accuracy of WFs' exact values and a better understanding of the factors of ambiguity. A potential approach involves Bayesian estimation of an SCM in which the parameters related to observational noise, anonymization, WFs' exact values, and their probabilistic components are treated as latent variables. From a theoretical perspective, an important question is how the structure of the SCM and the quality of latent variable estimation influence the error bounds of $f$. 
If such error bounds can be derived, they could provide insights into the theoretical foundations of model selection criteria. **Performance Under Adversarial Weak Features:** While our framework was not initially designed for adversarial WFs, your comment prompted us to examine its relevance to such scenarios. We identified two types of adversarial manipulation and found that our framework effectively captures their impact on learning behavior. First, adversarial perturbations to observed values of WFs in training data would likely degrade the accuracy of $g$. Our error bounds account for this, reflecting the increased difficulty of the downstream task due to reduced $g$’s accuracy. Second, if only the most informative features are designated as WFs, accurately estimating their values becomes critical. As discussed in Section 4.2, our framework expresses the increased learning difficulty when these exact values cannot be precisely estimated.
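The inequality chain sketched in the continuous-WFL response above can be written out explicitly. This is a sketch under the rebuttal's stated assumptions, not the paper's exact derivation: $P$ and $Q$ denote the two distributions being compared, $w$ an observed weak feature, and $\sigma^2$ an assumed Gaussian noise variance.

```latex
% Pinsker's inequality: total variation distance is upper-bounded via KL divergence
\mathrm{TV}(P, Q) \;\le\; \sqrt{\tfrac{1}{2}\,\mathrm{KL}\bigl(P \,\|\, Q\bigr)}

% Under a Gaussian model for g's prediction, the negative log-likelihood of an
% observed weak feature w reduces to a squared error plus a constant, so
% controlling the KL/likelihood term amounts to controlling the MSE of g:
-\log p\bigl(w \mid g(x)\bigr)
  \;=\; \frac{\bigl(w - g(x)\bigr)^{2}}{2\sigma^{2}}
  \;+\; \tfrac{1}{2}\log\bigl(2\pi\sigma^{2}\bigr)
```

Chaining the two steps bounds the total-variation term in Lemma 4.1's LHS by a quantity driven by $g$'s MSE, which matches the rebuttal's outline.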
Poly2Vec: Polymorphic Fourier-Based Encoding of Geospatial Objects for GeoAI Applications
Accept (poster)
Summary: This paper proposes Poly2Vec, a Fourier-transform approach to encoding shapes (points, lines, polygons) for geospatial tasks. The authors find that Poly2Vec outperforms baselines in preserving shape topology, direction, and distance over OSM datasets for two cities, New York and Singapore. ## Update after rebuttal I think this paper is good and recommend it for acceptance. I would have liked to see experiments integrating Poly2Vec with a larger geospatial workflow as I think it can benefit geospatial tasks such as super-resolution, conditional generation, and may even benefit unsupervised learning methods for geospatial data. While evidence for these points would have made the paper stronger, in my opinion the contribution as presented by the authors is sufficient to clear the bar for acceptance. Claims And Evidence: A few key claims and my discussion of the evidence used to support them: **Poly2Vec outperforms baselines in preserving shape topology, direction, distance:** I believe evidence is shown to support this claim, as Poly2Vec seems to clearly outperform baselines like Direct, Tile, Wrap, Grid, Theory, and T2Vec. The datasets are limited to two cities, and broader geographic diversity would be welcome though, as I’m guessing the shapes in NYC vs Singapore aren’t too different. **Integrating Poly2Vec in an end-to-end GeoAI workflow improves performance in population prediction and land-use inference** This is the more pertinent/interesting claim to me and I think the claim is not strongly supported here. The authors pick a specific baseline, RegionDCL, and show that replacing its distance-biased encodings with Poly2Vec improves the pipeline. This is too narrow of a result to support the broader claim. There are important baselines that use OSM data including through large language models (eg: [1]) that can be used for population prediction. 
Moreover, a ResNet-18 is too weak of a baseline given that newer, larger foundational models [2,3,4] are used for creating embeddings of locations or images. Does integrating Poly2Vec into these workflows improve the quality of embeddings? I think that would be very valuable to test. --- References: [1] Large language models are geographically biased, _ICML 2024_. [2] Satclip: Global, general-purpose location embeddings with satellite imagery, _AAAI 2025_. [3] SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery, _NeurIPS 2022_. [4] Scale-MAE: A Scale-Aware Masked Autoencoder for Multiscale Geospatial Representation Learning, _ICCV 2023_. Methods And Evaluation Criteria: The benchmark datasets are a bit too narrow in scope. They do the job of demonstrating that Poly2Vec might outperform other shape-encoding baselines, but the broader claim of relevance to an end-to-end Geospatial workflow is not sufficiently tested over OSM data with two cities. Moreover, I’m curious to know if linear probing embeddings from satellite image foundation models for distance or topological details of shapes in the image can yield the same accuracies that Poly2Vec displays. This to me is also an important comparison; the authors can use this experiment to demonstrate the value of Poly2Vec’s embeddings in improving the embeddings of geospatial image foundation models. Theoretical Claims: N/A Experimental Designs Or Analyses: I think more ablations are required, e.g., over the number of samples in the frequency domain W, the embedding size, more sampling strategies beyond learned vs geometric etc. Supplementary Material: Briefly did a pass over supplementary material. The description for the input/output of each task can be made a bit clearer, and section A.4.1 should be linked in the main text for clarity.
Relation To Broader Scientific Literature: The authors' proposed method is promising for geospatial applications, but the key evidence required to demonstrate its utility is through integrations in much larger, more recent geospatial workflows. This is currently lacking in the paper, which instead discusses improvements over other shape encoding baselines (necessary but not sufficient). The authors should spend time surveying more recent geospatial use-cases where Poly2Vec could demonstrate value, and then share the results of experiments where Poly2Vec is used to provide embeddings. Essential References Not Discussed: See above. Other Strengths And Weaknesses: Some other strengths: * Paper is clearly written, easy to follow. * Improvements over other shape-encoding baselines specifically for topological/directional metrics are clear. Some other weaknesses: * Lack of thorough experimental validation (over multiple cities/geographies) * Insufficient experimental results on more recent geospatial workflows * Scope of claims is too narrow * Using Fourier based encodings is not new I am still slightly leaning towards acceptance because I do think the paper is promising, provided the authors demonstrate empirical evidence over Poly2Vec’s broader utility. Other Comments Or Suggestions: N/A Questions For Authors: See above. I'm also curious about the ability of Poly2Vec to improve generative image foundation models for geospatial applications (e.g. diffusion), although this is not required to test. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for these detailed, thought-provoking comments. ___ ***Reviewer (paraphrased)***: Existing baselines use OSM data, including LLMs[1]. ResNet-18 is weak given [2,3,4]. ***Authors:*** Poly2Vec is designed to handle arbitrary geometric data described by coordinates (vector spatial data), rather than pixel-based images such as satellite imagery. Thus, methods explicitly requiring imagery are not directly applicable to our scenario. RegionDCL converts vector data of buildings into images, using ResNet-18 for image encoding and distance-biased transformer for restoring location information of buildings and POIs. We hypothesize that RegionDCL adopted this approach because no effective unified encoding existed for vector data before Poly2Vec. This vector-to-image conversion (rasterization) step inherently introduces information loss. We demonstrate that directly encoding vector data with Poly2Vec is more effective. While using a more sophisticated image-based model might yield incremental improvements, the fundamental loss of spatial detail due to rasterization would persist. On the suggested references: [1] primarily evaluates large language models, while [3] and [4] focus explicitly on satellite images, and therefore Poly2Vec does not directly apply. Nevertheless, the broader trend in foundation and large language models emphasizes multimodal data integration. [2] exemplifies this integration, combining both images and geographical coordinates; however, its focus appears more oriented toward geographic positioning rather than the locations, topology, or spatial relationships that Poly2Vec exploits. Poly2Vec could be integrated into [2] to replace/complement its location encoder, to support more sophisticated downstream tasks. Similarly, [1], [3], and [4] can benefit from Poly2Vec if they can be extended to incorporate supplementary vector data inputs. ___ ***Reviewer (paraphrased)***: Benchmark datasets too narrow in scope. 
… broader claim of relevance to an end-to-end Geospatial workflow is not sufficiently tested over OSM data with two cities. Need experiments on larger, more recent workflows. ***Authors:*** The two cities, New York City and Singapore, have significant differences in building structures, spatial layout, and data density (as visualized in [Link1](https://anonymous.4open.science/r/r-0752/)). New York City has a large, diverse urban environment with varied functional zones and more regular building shapes, while Singapore features a much higher population density, more data-sparse regions, and more irregularly shaped footprints [1]. Poly2Vec focuses on encoding vector data, which serves as the foundation for many geospatial applications. We believe that improving encoding vector data will facilitate a broader range of geospatial tasks. [1] Urban Region Representation Learning with OpenStreetMap Building Footprints, ACM SIGKDD 2023 ___ ***Reviewer (paraphrased):*** Linear probing embeddings from satellite image foundation models for distance or topological details of shapes in the image can yield the same accuracies? … demonstrate the value of Poly2Vec’s embeddings in improving the embeddings of geospatial image foundation models. ***Authors:*** Similar to the first point. On the one hand, satellite image foundation models cannot apply to our settings since we focus on vector data. On the other hand, if a geospatial image foundation model can take vector data as input, so as to inject the spatial information into the embedding, we believe Poly2Vec can be a good choice to encode the coordinates. ___ ***Reviewer (paraphrased):*** More ablations required. ***Authors:*** We tuned the main hyperparameters before submission. Please find the hyperparameter study in [Link1](https://anonymous.4open.science/r/r-0752/), including an additional sampling technique, namely uniform sampling. ____ ***Reviewer (paraphrased):*** Can Poly2Vec improve generative image foundation models? 
***Authors:*** See 3rd rebuttal response to Reviewer Bw6M. ____ ***Reviewer (paraphrased):*** Using Fourier-based encodings is not new ***Authors:*** Agreed. Our novel contributions: (1) Leverage Fourier transform to create a unified, polymorphic encoding across different geometry types, with easy extensions to any 2D shape. (2) Fourier transform of line segment is new. Novel polygon approach. (3) Flexibly mixing different geometry types for geometric inferences. (4) Consistently outperforming baselines, including previous frequency-based. ____ ***Reviewer (paraphrased):*** Supplementary material input/output of each task can be clearer. Link A.4.1 in main text for clarity. ***Authors:*** For the sake of reproducibility, we provide the details of all our experimental settings. But given the space limitation, we can only present them in the appendix. To improve clarity and accessibility, we will link to each appendix subsection in the main paper and also add backward references in the appendix. ____ --- Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal response. Some of my points have been clarified. However, I still think it would be valuable to demonstrate additional experimental validation of Poly2Vec in a geospatial workflow beyond RegionDCL. * I think it could be quite interesting to see whether complementing the SatCLIP [2] location encoder with Poly2Vec improves the quality of its representations. * For SatMAE [3], I think augmenting the "temporal" or "multi-spectral" encodings with Poly2Vec encodings, and then either pre-training or fine-tuning with these augmentations would be a valuable demonstration of Poly2Vec's utility. * For a work like DiffusionSat, it would be quite interesting to see if the quality of generated images can be improved with Poly2Vec encodings passed to the metadata encoder, or to a conditioning ControlNet. 
There are likely other examples as well of demonstrating Poly2Vec's broader utility in a larger geospatial workflow, the above are suggestions. The current manuscript does a good job of demonstrating Poly2Vec as a solid improvement over prior shape encoding baselines. For me to increase my score, I would like to see more evidence of Poly2Vec's utility in larger geospatial workflows like the ones suggested above. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer’s suggestion to include additional content. However, we would like to note that the paper already contains a substantial core contribution that required significant space to develop both theoretically and empirically. The central innovation of our work lies in the novel application of the Fourier transform to spatial data, enabled by defining indicator functions over basic geometric shapes, specifically line segments and triangles, for which closed-form Fourier representations can be derived. To the best of our knowledge, this formulation has not been explored previously. Building on this foundation, we show how complex yet commonly used spatial objects, such as lines (modeled as ridges of delta functions) and polygons (decomposed into triangles), can also be represented within this framework. Through a careful combination of these shape formulations and affine transformations, we develop a unified and expressive vector representation for points, polylines, and polygons that captures their geometry, spatial location, and inter-topological relationships. This pipeline is, in our view, a significant step forward. It required us to carefully formulate the theory, justify it mathematically, and validate it through extensive experiments. These components together occupied much of the available space in the paper, leaving limited room to add further material without compromising clarity or focus. 
With this in mind, we would like to clarify that the examples suggested by the reviewer, while referred to as part of a “geospatial workflow”, do not involve spatial objects such as points, polylines, or polygons as inputs. Instead, they primarily use location-based, temporal, or multi-spectral data, which are beyond the scope of this paper. As such, we do not believe these pipelines are suitable for demonstrating the effectiveness of Poly2Vec, despite the terminological overlap. The term “geospatial” can indeed be overloaded, and our focus here is specifically on the representation and reasoning over geometric spatial objects.
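As a concrete illustration of the closed-form Fourier representations discussed in this thread, the transform of a uniform line measure on a segment admits a short derivation. The sketch below uses hypothetical helper names and a standard textbook calculation, not necessarily Poly2Vec's exact formulation; it checks the closed form against numerical quadrature:

```python
import numpy as np

def segment_ft(a, b, k):
    """Closed-form 2D Fourier transform of a uniform line measure on the
    segment from a to b:
        F(k) = |d| * exp(-i k.a) * (1 - exp(-i k.d)) / (i k.d),  d = b - a,
    with the limit F(k) = |d| * exp(-i k.a) when k.d = 0."""
    a, b, k = (np.asarray(v, dtype=float) for v in (a, b, k))
    d = b - a
    kd = k @ d
    phase = np.exp(-1j * (k @ a))
    if np.isclose(kd, 0.0):
        return np.linalg.norm(d) * phase
    return np.linalg.norm(d) * phase * (1 - np.exp(-1j * kd)) / (1j * kd)

def segment_ft_numeric(a, b, k, n=20001):
    """Midpoint-rule quadrature of the same line integral, for checking."""
    a, b, k = (np.asarray(v, dtype=float) for v in (a, b, k))
    t = (np.arange(n) + 0.5) / n           # midpoints of n subintervals
    pts = a + t[:, None] * (b - a)          # points along the segment
    return np.exp(-1j * (pts @ k)).mean() * np.linalg.norm(b - a)

exact = segment_ft((0, 0), (1, 2), (0.7, -0.3))
approx = segment_ft_numeric((0, 0), (1, 2), (0.7, -0.3))
assert abs(exact - approx) < 1e-6
```

The zero-frequency value reduces to the segment's length, consistent with the transform of any finite measure evaluated at $k = 0$.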
Summary: The authors introduce a method of encoding representations of geospatial objects which they call Poly2Vec. This method is capable of representing points (e.g. points of interest), polylines (e.g. roads), and polygons (e.g. buildings) while competing methods struggle to represent all of these different formats. Poly2Vec is a fourier transform based approach which seems to preserve some important attributes of the encoded objects such as location, shape and direction. This method is compared to a wide range of other approaches on OpenStreetMap based datasets for tasks such as determining the distance between objects or their orientation, as well as for use as input encodings for a modern machine learning approach to land use classification and population prediction. Poly2Vec performs best on all of these tasks. ## update after rebuttal The author response clarified my one concern and I believe the paper should be accepted. Claims And Evidence: Yes. The authors seem to compare to a wide variety of competing approaches and show favourable performance across a range of tasks and appropriate metrics. Methods And Evaluation Criteria: Yes. The method seems sensible. The OpenStreetMap based datasets seem sensible for the task and similar datasets are used in the literature (“Urban Region Representation Learning with OpenStreetMap Building Footprints” (KDD '23), “Urban2Vec: Incorporating Street View Imagery and POIs for Multi-Modal Urban Neighborhood Embedding” (AAAI '20)). The evaluation metrics seem appropriate. Ground truth data for tasks such as land use classification has been gathered similarly to previous works in the area. Theoretical Claims: I couldn’t see any problems with the formulation of the fourier transform based method. Experimental Designs Or Analyses: All experiments seem to be well designed. Supplementary Material: Yes. 
I reviewed all of the appendices, which provide additional details on the method, experiments, and baselines, and present some additional results. I had a quick glance at the provided code, which seems fine. Relation To Broader Scientific Literature: Some existing works involve creating representations of buildings or places of interest but these make use of semantic information such as text or photos of the location. “Urban Region Representation Learning with OpenStreetMap Building Footprints” (KDD '23) Many works involve generating an embedding of points and some of these are compared against in this work (e.g. WRAP method in "Presence-Only Geographical Priors for Fine-Grained Image Classification" (ICCV '19)). These can be extended to represent lines as sequences of points. (e.g. "LSTM-TrajGAN: A Deep Learning Approach to Trajectory Privacy Protection" (GIScience 2021)) “Towards General-Purpose Representation Learning of Polygonal Geometries” (GeoInformatica 27(4):1-52) produces encodings of polygons that are suitable for machine learning approaches. “Graph Convolutional AutoEncoder models” can also produce encodings of polygons or lines. Spectral domain polygon encoders do exist such as “NUFTSPEC”. The authors do cite this work. (“Towards General-Purpose Representation Learning of Polygonal Geometries” (GeoInformatica 27(4):1-52)). Poly2Vec does seem to uniquely create a unified embedding space for the different geometries that are discussed in the work (polygons, polylines, points). Deterministic spatial reasoners such as postGIS can do many of the tasks that are discussed in this paper such as accurately measuring distances between polygons, lines and points, and determining directions. However these do not produce an encoding that is suitable for many machine learning approaches. Essential References Not Discussed: Essential references seem to be included.
Other Strengths And Weaknesses: The work seems to be quite original in furthering spectral based methods of encoding polygons to also include points and lines. The paper is clear and the work done is well described. Baseline approaches are covered quite well. The tasks seem well constructed and a wide range of approaches are compared against. Several different evaluation metrics are used and the proposed method performs consistently well. The method involves learning how to encode objects in a way that seems to allow it to be fine-tuned to specific tasks which might not be helpful for a downstream user who does not wish to train a model in order to encode their geospatial objects, and might limit the transferability of these representations. I also wonder if including a similar neural network for some of the baseline approaches and training these to produce encodings that are useful for specific tasks would also improve performance of these baselines and make Poly2Vec comparatively less high performing. Other Comments Or Suggestions: Well put together paper and not many mistakes that I can see, but on line 875 we have a couple of “Figure ??”s. Questions For Authors: 1. The method involves learning how to encode objects in a way that seems to allow it to be fine-tuned to specific tasks which might not be helpful for a downstream user who does not wish to train a model in order to encode their geospatial objects, and might limit the transferability of these representations. Do the authors know if including a similar neural network for some of the baseline approaches and training these to produce encodings that are useful for specific tasks would also improve performance of these baselines and make Poly2Vec comparatively less high performing? This will allow me to better judge the performance of the proposed method. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful comments. ___ ***Reviewer:*** (Note we grouped these comments, because our answer is essentially the same for all of them.) The method involves learning how to encode objects in a way that seems to allow it to be fine tuned to specific tasks which might not be helpful for a downstream user who does not wish to train a model in order to encode their geospatial objects, and might limit the transferability of these representations. … I also wonder if including a similar neural network for some of the baseline approaches and training these to produce encodings that are useful for specific tasks would also improve performance of these baselines and make Poly2Vec comparatively less high performing. … The method involves learning how to encode objects in a way that seems to allow it to be fine tuned to specific tasks which might not be helpful for a downstream user who does not wish to train a model in order to encode their geospatial objects, and might limit the transferability of these representations. Do the authors know if including a similar neural network for some of the baseline approaches and training these to produce encodings that are useful for specific tasks would also improve performance of these baselines and make Poly2Vec comparatively less high performing? This will allow me to better judge the performance of the proposed method. ***Authors:*** We understand this question and appreciate the trigger to think about this more deeply. You are right that we train our MLPs specifically for the task. Please note that we do the same for the baselines, so our performance comparisons are fair. We will clarify this in the next version of the paper. As we explain in the paper (Section 3.2), different parts of the Fourier transform are important for different tasks. E.g. the Fourier transform’s magnitude generally encodes shape, and its phase generally encodes location. 
Thus the targeted training tends to extract the right information for the specialized task. Learning general-purpose embeddings is also an interesting direction that we have considered. Such representations could be learned using unsupervised or self-supervised techniques, such as encoder-decoder frameworks and contrastive learning. ___ ***Reviewer:*** Well put together paper and not many mistakes that I can see, but on line 875 we have a couple of “Figure ??”s. ***Authors:*** Thank you for finding these omissions. We will fix them all in the next version. --- Rebuttal Comment 1.1: Comment: Thank you for your helpful response. Just to clarify, for a baseline approach such as "Wrap", after the trigonometric encoding of the input coordinates, you include an MLP similar to the "learned fusion" module to allow these relatively simple representations to be adapted to the downstream task? --- Reply to Comment 1.1.1: Comment: Thank you for the follow-up question. The paper includes two MLPs. The first is part of our learned fusion module, which is specific to Poly2Vec’s design and is not applied to any baselines. The second MLP is a task adaptor that maps encodings for the downstream tasks. We used this second 2-layer MLP adaptor for both Poly2Vec and all the baselines to ensure fair comparison. Our ablation studies (Section 4.3) show that even the Poly2Vec variant that concatenates phase and magnitude (without the learned fusion MLP, referenced as "w/concat") still outperforms all baselines and matches Theory on the polygon-point distance estimation task.
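A quick aside on the magnitude/phase claim in this thread: the shift property of the Fourier transform (translation preserves magnitude and only alters phase) can be checked in a few lines of NumPy. This is an illustrative sketch, not the authors' code; the 1-D grid and boxcar "shape" are our own toy assumptions:

```python
import numpy as np

# A binary "shape" on a 1-D grid, and a translated copy of it.
x = np.zeros(64)
x[10:20] = 1.0
x_shifted = np.roll(x, 7)  # translate the shape by 7 cells

F, F_shifted = np.fft.fft(x), np.fft.fft(x_shifted)

# Shift theorem: translation preserves the magnitude (shape information) ...
assert np.allclose(np.abs(F), np.abs(F_shifted))
# ... but changes the phase (location information).
assert not np.allclose(np.angle(F), np.angle(F_shifted))
```

The same property is what makes magnitude-based features informative about shape and phase-based features informative about location, which motivates the task-specific training discussed above.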
Summary: The paper presents a Fourier-based encoding strategy for geospatial primitives, i.e., points, lines, and polygons, into a deep-learning compatible vector format. The methodology is well-explained and founded in signal processing theory. It explains Fourier analysis and its inherent properties (affine transformation, symmetry) that are used in an effective way to first (affine) transform geospatial primitives and then encode them in the frequency domain into vector representations for amplitude and phase. Two MLPs then fuse these representations to map the feature vector to a target variable, for instance, for point-in-polygon testing. Overall, it is well-written. The principles are well-explained and the results are well structured into underlying research questions and numerically convincing, as the proposed Poly2Vec representation outperforms existing point-based and line-based methods. In my opinion, a strong accept. Claims And Evidence: The paper claims to unify the representation of geospatial objects, namely points, lines, and polygons. This claim is supported, as the results show better representation accuracy over point- and line-specific models. Methods And Evaluation Criteria: Methods: The underlying method is well-explained, first on simple numerical examples that are then abstracted to the generic case. Overall, very intuitive and understandable. Evaluation: The experiments are well-structured into 4 precise research questions that are systematically investigated in the results. Theoretical Claims: There are no proofs in the paper, but properties of the Fourier transform (linearity, affine transformation, Hermitian symmetry, magnitude and phase) are well-explained. All used theory is supported by intuition and concrete experiments on simple cases. Experimental Designs Or Analyses: The experiments are well-structured according to 4 research questions. 
However, all results seem to be based on the classification of point-to-polyline etc. relationships, which is reasonable for this experimental setup, but a more realistic use case, like predicting, for instance, the building use (residential, factory, shopping mall) from the polygon geometry, may be interesting as well. For classic geospatial operations like point-in-polygon prediction, highly optimized algorithms exist that achieve 100% accuracy (but this is still a good problem for comparing different embeddings). Supplementary Material: I skimmed the appendix (equations and tables), but did not read it thoroughly. Relation To Broader Scientific Literature: This paper presents a crucial step toward embedding geospatial primitives, which is highly helpful for learning from geospatial data in geospatial foundation models. The unifying framework that includes points, lines, and polygons and outperforms methodologies designed for single primitives like points will be important for the geospatial machine learning community and subsequent research fields like Geo-Information Science. Essential References Not Discussed: I think the references are all justified. No additions requested. Other Strengths And Weaknesses: Strengths * Convincing results against baselines focusing on a single type of geometric primitive * Well-explained methodology and well-structured results Weaknesses: * Some more realistic datasets could be tested, e.g., classification of building type or land use from the geometry shape. Other Comments Or Suggestions: No typos found. Questions For Authors: * Can the Fourier transformation be inverted to map back from embedding space to polygons/lines/points? * What use cases do the authors see for Poly2Vec in a more complex setting, where the Fourier features are integrated into a deeper neural network rather than individual MLPs for point-in-polygon tests? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for the thorough review and encouraging comments. ___ ***Reviewer:*** The experiments are well-structured according to 4 research questions. However, all results seem to be based on the classification of point-to-polyline etc relationships, which is reasonable for this experimental setup, but a more realistic use case, like predicting, for instance, the building use (residential, factory, shopping mall) from the polygon geometry may be interesting as well. ***Authors:*** While we do not perform single-building classification, our RegionDCL experiments address a more general version of that task: predicting region-level attributes from groups of buildings. This can be seen as a superset of building classification, as it involves not just the shape of a single polygon, but also the locations and inter-relationships of multiple buildings, which is particularly beneficial for our downstream tasks where spatial relationships between objects play a critical role. ___ ***Reviewer:*** some more realistic datasets could be tested. E.g., classification of building type, or land use from the geometry shape ***Authors:*** Same as our first response above. ___ ***Reviewer:*** Can the Fourier transformation be inverted to map back from embedding space to polygon/line/points? ***Authors:*** We thank the reviewer for the thought-provoking question. We believe this can be achieved by simultaneously training a decoder to map embeddings back to geometries (i.e., shape decoder). So far, we have experimented with applying the inverse Fourier transform to reconstruct approximations of the original shapes from the raw Fourier samples. This is an exciting direction and we will consider it in future research toward generative models that can reconstruct realistic spatial geometries from learned embeddings. 
___ ***Reviewer:*** What use cases do the authors see for Poly2Vec in a more complex setting, where the fourier features are integrated in a deeper neural network rather than individual MLPs for point-in-polygon tests. ***Authors:*** Our experiments with RegionDCL show one example of integrating Poly2Vec into a transformer-based geospatial pipeline, by replacing its input representation with Poly2Vec to classify regions and estimate population from geometric map features. Looking forward, we believe Poly2Vec can serve as a valuable component in multimodal geo-foundation models, when one of the input modalities is vector-based spatial data (i.e., coordinates).
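Related to the inverse-transform question in this thread, a minimal NumPy sketch shows how an approximate shape can be recovered from a truncated set of Fourier coefficients. This is illustrative only; the grid size, cutoff, and rectangular "footprint" are our own assumptions, not the paper's setup:

```python
import numpy as np

# Rasterized "polygon": a filled rectangle on a 32x32 grid
# (an illustrative stand-in for a building footprint).
mask = np.zeros((32, 32))
mask[8:24, 10:22] = 1.0

# Keep only a central low-frequency block of Fourier coefficients ...
F = np.fft.fftshift(np.fft.fft2(mask))
F_low = np.zeros_like(F)
c, r = 16, 6  # center index of the shifted spectrum, retained half-width
F_low[c - r:c + r, c - r:c + r] = F[c - r:c + r, c - r:c + r]

# ... and invert to obtain an approximate reconstruction of the shape.
recon = np.real(np.fft.ifft2(np.fft.ifftshift(F_low))) > 0.5
iou = np.logical_and(recon, mask > 0).sum() / np.logical_or(recon, mask > 0).sum()
assert iou > 0.6  # the coarse shape survives aggressive truncation
```

The reconstruction is blurred and ringed at the edges (Gibbs phenomenon), which is consistent with the authors' description of recovering "approximations of the original shapes from the raw Fourier samples."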
Summary: This work considers the problem of encoding points, polylines, and polygons for the purposes of prediction tasks that require understanding of spatial relationships such as topology, direction, and distance. Similar to previous point and polygon encoding approaches (Space2vec, NUFTspec), Fourier transformation is used to transform vector-space data into fixed-length representations. To achieve a more detailed representation of alignment, both magnitude and phase information are encoded. To achieve encoding of mixed item types (point, polyline, polygon), three versions of Fourier transformations are used together. For basic spatial relationship tasks (topology, direction, distance), experiments show improvement over previous approaches such as Space2vec, ResNet1D, T2Vec and NUFTspec. For mixed item type tasks (land use classification, population prediction), an unsupervised approach, RegionDCL, is used as the baseline. Improvement of RegionDCL performance is achieved by replacing its distance-based attention bias with the proposed Poly2Vec approach. However, not much detail is given in the paper about how exactly Poly2Vec is integrated into RegionDCL. It would also be useful to provide visualization of a few examples to help the reader understand how Poly2Vec helps RegionDCL. Overall the proposed approach is reasonable, but there is significant detail missing, which makes it hard for me to understand the result. Claims And Evidence: see summary Methods And Evaluation Criteria: see summary Theoretical Claims: see summary Experimental Designs Or Analyses: see summary Supplementary Material: No Relation To Broader Scientific Literature: No Essential References Not Discussed: no Other Strengths And Weaknesses: see summary Other Comments Or Suggestions: see summary Questions For Authors: see summary Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. ___ ***Reviewer:*** Similar to previous point and polygon encoding approaches (Space2vec, NUFTspec), Fourier transformation is used to transform vector space data to fixed length presentations. ***Authors:*** We note that our Poly2Vec differs from Space2Vec in that (1) Poly2Vec has the full complement of frequency components available to it, as opposed to Space2Vec’s more tightly specified sine and cosine frequencies on a grid of scales and (2) Poly2Vec works uniformly for points, polylines and polygons, while Space2Vec is limited to points. Compared to NUFTspec, Poly2Vec presents a unified, consistent Fourier transform encoding for points and polylines, as well as polygons. Our approach for computing the Fourier Transform for polylines and polygons (through decomposition and affine transformation) is also a novel component of our approach. ___ ***Reviewer:*** Improvement of RegionDCL performance is achieved by replacing its distance based attention bias with the proposed Poly2Vec approach. However, not much detail is given in the paper about how exactly is Poly2Vec integrated to RegionDCL. ***Authors:*** We agree that clearly describing Poly2Vec’s integration into RegionDCL would improve the manuscript’s clarity. Due to space constraints and since RegionDCL itself was not our primary contribution, we focused instead on detailing our novel components. To clarify, Poly2Vec is used as the input encoding in RegionDCL, replacing its original input representation. RegionDCL originally rasterizes OSM building footprints, converting coordinate data into image inputs so it can leverage convolutional encoders like ResNet-18. This rasterization leads to the loss of important spatial information, such as the absolute location of each building. 
To mitigate this, RegionDCL introduces a distance-biased transformer encoder, where the bias term consists of pairwise distances between buildings and POIs to reintroduce spatial context. In our experiments, we (1) replaced the inputs with Poly2Vec encodings, and (2) replaced the distance-biased transformer encoder with a standard transformer encoder, because our new inputs from (1) capture the necessary spatial information. The fact that Poly2Vec improves performance even without the distance bias demonstrates its ability to inherently retain spatial and positional information. We will add these additional details in the appendix to clearly describe the integration and highlight the differences from the original setup of RegionDCL. ___ ***Reviewer:*** It would also be useful to provide visualization of a few examples to help the reader understand how Poly2Vec helps RegionDCL. ***Authors:*** We provide some visualization examples in [Link1](https://anonymous.4open.science/r/r-0752/), showing building footprints as polygons and their spatial relationships: information that is discarded by the original design of RegionDCL but captured by Poly2Vec, and that contributes to improved classification performance in RegionDCL. We will add such visualizations of selected regions in the new manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the added explanations. I have raised my score by 1. However, I still think the paper can benefit from visualizing the representation somehow in order to reveal how it works to solve the geographical problems. --- Reply to Comment 1.1.1: Comment: Thank you for raising your score. We appreciate your feedback, and we will consider it carefully as we revise this paper and future explanations.
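To make the contrast between the two encoder variants in this thread concrete, here is a minimal NumPy sketch. All names and shapes are hypothetical and greatly simplified relative to RegionDCL's actual implementation; the point is only that the biased variant adds a pairwise-distance term to the attention logits, while the standard variant relies on the input embeddings alone:

```python
import numpy as np

def attention(X, W_q, W_k, W_v, bias=None):
    """Single-head attention; `bias` is an optional (n, n) additive term."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    logits = Q @ K.T / np.sqrt(K.shape[1])
    if bias is not None:
        logits = logits + bias  # e.g. a pairwise-distance bias term
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 5, 8                      # 5 buildings, 8-dim input encodings
X = rng.normal(size=(n, d))      # stand-in for per-building embeddings
W = [rng.normal(size=(d, d)) for _ in range(3)]
coords = rng.uniform(size=(n, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)

biased = attention(X, *W, bias=-dist)  # spatial context via the bias term
plain = attention(X, *W)               # spatial context must come from X itself
assert biased.shape == plain.shape == (n, d)
```

In the authors' setup, the plain variant works because the Poly2Vec inputs `X` already carry the positional information that the distance bias was meant to reintroduce.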
Generalization Analysis for Supervised Contrastive Representation Learning under Non-IID Settings
Accept (poster)
Summary: This paper proposes a modified framework where ERM is performed using a small subset of input tuples assembled from a fixed pool of labeled data points. It derives generalization bounds for the empirical minimizers of a U-statistics risk and a sub-sampled risk. The results are applied to obtain bounds for common classes of representation functions such as linear functions and neural networks. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. I have only read Sections 1, 2, 3 of this manuscript because I am very confused about the motivation and settings of this paper. If the authors can remove my doubts, I will carefully check the subsequent content. Experimental Designs Or Analyses: No. Supplementary Material: No. There is no supplementary material. Relation To Broader Scientific Literature: Please see the above Summary. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: None. Weaknesses: 1. There are some issues with the references. For example, the work the authors cite as "Lei et al., 2022" should be "Lei, Y., Yang, T., Ying, Y., and Zhou, D.-X. Generalization analysis for contrastive representation learning. In International Conference on Machine Learning, **2023**." 2. Firstly, some previous works such as Arora et al., 2019 and Lei et al., 2023 have explored the **conditional** i.i.d. setting. Secondly, this paper is not the first work to provide a generalization analysis for the CRL framework under non-i.i.d. settings. The authors didn't mention non-i.i.d. works such as HaoChen et al., 2021, Wang et al., 2022, and Huang et al., 2023. HaoChen et al., Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss. NeurIPS, 2021. Wang et al., Chaos is a ladder: A new theoretical understanding of contrastive learning via augmentation overlap. ICLR, 2022. Huang et al., Towards the generalization of contrastive self-supervised learning. ICLR, 2023. 
3. The sampling process adopted by the authors is unreasonable since all pre-training samples are unlabeled. Other Comments Or Suggestions: I have only read Sections 1, 2, 3 of this manuscript because I am very confused about the motivation and settings of this paper. If the authors can remove my doubts, I will carefully check the subsequent content. Questions For Authors: 1. Why do the authors define $\bar{\mathcal{D}}_c$ as in Equation (2)? It does not follow the set-ups of most practical use cases since the latent label of an input is unavailable. I suggest that the authors adopt the definition of Arora et al., 2019 (Equation (2) on page 2). If so, is the theoretical analysis in this paper affected by this definition, and where? 2. In lines 141-143, the authors state that $S$ is a dataset of $n$ i.i.d. input tuples. This sentence is a little confusing. The relationship between any two tuples is certainly independent. The authors also describe their own independent sampling process on the right-hand side of page 3. Do the authors want to state that the $k+2$ samples in each tuple are i.i.d.? In my opinion, the authors didn't clarify the drawback of the i.i.d. condition of previous works. It may be better for readers to understand via an intuitive example. 3. In Section 3.3, why do the authors define $\mathcal{S}$ with some labeled samples? Should these samples be unlabeled? If these samples are labeled, the contrastive learning studied in this paper is supervised, not unsupervised. However, the authors state that they derive some generalization bounds for unsupervised contrastive learning. It is very confusing. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We truly value your thoughtful feedback. Below, we do our best to allay your concerns within the character limit, and stand ready to answer any questions in further detail during the rolling discussion. Firstly, we clarify that our work assumes $N$ **labeled** examples $x_{1:N}$ are used to form valid tuples $(x,x^+,x^-_{1:k})$ (where $x,x^+$ share a label) for training an unsupervised loss. Thus, we agree that our setting aligns more with **supervised contrastive learning**, differing from [4, 5, 6], which rely on data augmentation for positive pairs. We promise to clarify these distinctions in our revision. However, we note that using CRL loss in a supervised setting is valid. For instance, both [2, 3] assume access to i.i.d. tuples where $x,x^+$ come from the same conditional distribution, which **cannot be simulated without labels**. Empirically, using the unsupervised loss at the feature representation stage enhances downstream classification performance (See Fig. 2), aligning with previous findings [7, 10, 11]. **Our work vs. [4, 5, 6]**: - The main results in [5] are Thms. 4.2 and 4.8, which bound the **population downstream classification risk** in terms of **population contrastive learning risk**. In [6], bounds are derived for the error rate of nearest-neighbor classifiers built on representation functions in terms of misalignment probability $R_\epsilon$. Thus, results in [5,6] are all at population level. In contrast, we focus on the *excess contrastive risk*. - Thm. 4.1 in [4] tackles excess contrastive risk for a loss taking *two* arguments $x,x^-$. Somewhat like ours, they construct the empirical contrastive risk from a fixed dataset $\bar x_{1:n_{pre}}$ using all pairs. Since there are only two arguments, their proof closely resembles [12], ignoring dependence on $k$. Crucially, the lack of labels allows **all pairs** to be included, removing the nuance of class-collision. 
In contrast, our work focuses on the interaction between $k$ and class labels, raising many technical challenges. Meanwhile, [4] emphasizes the link between population classification risk and contrastive risk, as well as the impact of data augmentation, which we do not consider. **Q1a: Re formula of $\mathcal{\bar D}\_c$.** **Re**: In Eqn. (2) of [2], the negative distribution is defined as $\mathcal{D}\_\mathrm{neg}(x)=\mathbb{E}\_{c\sim\rho}[\mathcal{D}_c(x)]$, which allows negative samples to be drawn from any class regardless of the anchor's class, effectively **allowing class-collision**. In our definition, class-collision is excluded, resulting in the definition in Eqn. (2) of our work. Disallowing class collision is associated to better performance [2, 8, 9]. **Q1b: Are results affected by Eqn. (2)?** **Re**: Yes, our definition of negative distributions results in a class-collision free version of the population risk in Eqn. (3). Thus, our results concern the concentration of empirical risks around this version. However, our techniques readily extend to class-collision case: we believe the quantity $\widetilde N$ (Thm. 5.1 and 5.2) would then be replaced by $N\min(\rho_{\min}/2,1/2k)$ and we expect the results on the estimation error to be marginally tighter (although the practical performance would suffer from reduced alignment between the unsupervised loss and classification objective [2, 9]). **Q2: Re Sections 3.2 and 3.3.** **Re**: Our Sec. 3 compares the tuples sampling methods in [2] and our work. Under [2] (Sec 3.2), the tuples are drawn i.i.d. from a distribution over $\mathcal{X}^{k+2}$, i.e., each whole tuple is observed independently as the reviewer stated and no data reuse occurs. Under our framework (Sec. 3.3), the procedure in P. 3 is used to select tuples from a labeled dataset $\mathcal{S}$ drawn i.i.d. from a distribution over $\mathcal{X}\times\mathcal{C}$. 
Hence, while **independence holds for elements of $\mathcal{S}$, it does not hold for the selected tuples**. For instance, if $k=1$ and $\mathcal{S}=(x,y,v,z)$ with labels $1,1,1,2$, then both $(x,y,z)$ and $(y,v,z)$ are valid tuples but they are dependent due to a shared negative sample $z$. This process allows data reuse across tuples, adhering to practice [7] where recycling samples is desired to create large tuple datasets. **Q3: Why is $S$ labeled?** **Re**: As clarified above, our work indeed targets **supervised contrastive learning** rather than truly unsupervised settings. We promise to emphasize this in the revision. We thank you again for your suggestions, which we believe will help us better frame our contribution in a broader context beyond [2, 3]. We promise to thoroughly discuss all the proposed references. If the reviewer and AC deem it appropriate, we are open to changing the title to `Generalization Analysis for supervised contrastive learning in non i.i.d. settings'. We hope our rebuttal has helped improve your opinion of our contributions. Please do not hesitate to let us know if you have any further questions. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. I still have some doubts. I have understood that the authors aim to study the theoretical performance of supervised CRL. What puzzles me the most is that the authors didn't explain why they mentioned that they proposed some bounds on the unsupervised risk. Besides, CRL was proposed to deal with the case where label information is not available. If label information is available, why would we conduct supervised CRL rather than common supervised learning? Can the authors provide some examples to illustrate that in certain situations, the performance of supervised CRL is greater than supervised learning? Finally, I have also listed some different viewpoints from your response as follows. **1.** The authors stated that the sampling process of [2,3] cannot be simulated without labels. 
I don't agree with this viewpoint. The sampling process of [2,3] is most realistic when data augmentation doesn't alter the original semantics of samples. It doesn't need the real labels of samples, only the consistency of the latent labels (semantics) of $x,x^+$. Compared with [2,3], the sampling process of this work is not realistic for unsupervised CRL. **2.** The **Re** of **Q1a** stated that the negative distribution defined in [2] allows negative samples to be drawn from any class regardless of the anchor's class, allowing class collision. I agree with it since, in practice, we cannot ensure the latent labels of negative samples are different from their corresponding anchor sample without real label information. In the supervised CRL setting, the negative sampling process of this paper can be achieved. **3.** The **Re** of **Q2** stated that each whole tuple in [2] is observed independently. Then, how can the authors be sure that the sampling method of [2] has no data reuse? From my perspective, the simple example the authors provided is also likely to occur in the case of [2]. --- Reply to Comment 1.1.1: Comment: Dear reviewer, **Many thanks for your very fast response**. To the best of our understanding, the discussion will unfortunately be ended by OpenReview after this message. We try our very best to **clarify** the **remaining misunderstandings** within the character limit and are ready to answer further questions via other proposed means if necessary. In short, we respectfully stand by our rebuttal and are very grateful in advance for your time reading our detailed answer below and our original rebuttal. Note that the references are in our answer to reviewers 81JF/bLbY. **Re: Why supervised CRL but not common supervised learning directly and practical examples of supervised CRL.** We apologize for not making this clearer in the last sentence of the third paragraph of our rebuttal. **Yes**, the practical performance of supervised CRL can be superior. 
This phenomenon is explicitly described and thoroughly documented for CIFAR10, CIFAR 100 and ImageNet (see Figure 3, Figure 4 and Table 2) as one of the main messages in the paper "Supervised contrastive learning" [7] which has more than 5000 citations. In addition, our own experiments (cf. Figure 2 in our paper) corroborate this fact on the MNIST dataset. Like the rest of the literature, we do not claim to capture this phenomenon theoretically, and instead prove bounds on the generalization error of the unsupervised risk, an important and already **highly non-trivial** step in the study of supervised CRL. **Re (Comment 1)**: Whilst it is not impossible that the original intention in [2,3] may have been to create a simplified model which could metaphorically illustrate the performance of unsupervised CRL in the presence of data augmentation, mathematically, it is a fact that both papers make assumptions which are extremely close to assuming the presence of labels. The results are inapplicable to the "practical" case of data augmentation. To clarify this, note that there is **no data augmentation** in either [2] or [3], and the results of [2], which establish a relationship between the supervised and unsupervised risks, explicitly assume and **mathematically rely on** the fact that the anchor $x$ and positive $x^+$ samples are not only sampled from the same class but sampled **independently of each other**. Thus, the results in [2] **do not work** for data augmentation: it is **not enough** for the "**semantics**" to be preserved, what is required is the ability to draw a new sample $x^+$ from the conditional distribution over the class of $x$ with absolutely no other dependence on $x$ other than through the label. 
We do not think this is in any way more "realistic" than assuming the presence of labels (if the sampling of $x^+$ conditionally given $x$, which corresponds to the data augmentation step in the references you provide, can be disassociated from the draw of $x$ and applied indefinitely, then it implies the ability to draw indefinitely many samples from the class of $x$). The **assumption is present** in Eqn. (1) of [2] and at the beginning of the "problem formulation" in [3] (the distribution $\mathcal{D}_{sim}$ is factorized into independent components for $x$ and $x^+$). A similar assumption is also made in the analogous 'block' setting in Section 6.3 of [2]. In addition, the **proofs** of Theorem 4.1 (via Lemma 4.3), and Proposition 6.2 **all rely on the independence between $x$ and $x^+$** when using Jensen's inequality: the key step (see step (b) in the proof of Lemma 4.3 or similarly, the first line of the proof of Proposition 6.2 in the appendix) relies on moving the expectation over $x^+,x^-$ (but not the expectation over $x$) inside the loss function formulation $\ell(f(x)^\top [f(x^+)-f(x^-)])$. This is possible pointwise for all fixed values of $x$, whose expectation remains outside the loss function because the function $\mathbb{R}^2\ni(f(x^+),f(x^-))\mapsto\ell(f(x)^\top[f(x^+)-f(x^-)])$ is convex in $(f(x^+),f(x^-))\in\mathbb{R}^2$ for any fixed value of $f(x)$. It is not possible to perform the same step if the distributions of $x$ and $x^+$ are dependent (as in the case of augmentation) since the function $\mathbb{R}^3\ni(f(x),f(x^+),f(x^-))\mapsto\ell(f(x)^\top [f(x^+)-f(x^-)])$ is not convex in $(f(x),f(x^+),f(x^-))\in\mathbb{R}^3$. **Re (Comment 3)**: Thank you for your comment. We believe you are referring to the reuse of the same samples in different tuples. 
In short, [2,3] require **independence** over tuples, which does not hold in our setting because of the reuse of the same fixed empirical pool of samples $x_1,\ldots,x_N$ to construct many tuples. In the setting of [2,3], **if the population distribution over samples is finite**, it is indeed possible for the same samples to **coincidentally** appear in the same tuple despite independence. If the distribution is non-atomic, this will happen with probability zero. In both cases, it is both far less likely and a distinct question from independence.
Summary: This paper provides the first statistical generalization theory of Contrastive Representation Learning (CRL) where the data does not satisfy the strong assumption of being independently and identically distributed (i.i.d.). Previous theoretical research on CRL assumes that data points used for training are drawn independently. However, this is not true in real-life applications. In reality, datasets are often created by reusing labeled samples across multiple tuples via resampling. Such reuse invalidates the independence assumption, making existing theoretical analyses less applicable to real-world scenarios. To address this issue, the paper introduces a revised statistical theory framework that applies to non-i.i.d. data. It establishes generalization bounds for CRL using statistical techniques from U-statistics, showing that the required number of samples per class depends on the logarithm of the covering number of the learnable feature space. The paper formulates the population risk in CRL as a U-statistics problem and proves that the generalization gap between the empirical minimizer and the Bayes risk decreases at a rate that depends on the dataset size and the structure of the contrastive loss. This provides a more rigorous understanding of how generalization behaves under data reuse. The authors also derive generalization bounds for empirical risk minimization under both U-statistics and subsampled risk settings, demonstrating how these bounds change with dataset size and subsampling factors. Finally, the paper applies its theoretical findings to common function classes such as linear models and neural networks. By providing a more realistic theoretical foundation, the study offers insights into how CRL generalizes when data is limited and frequently reused. ## update after rebuttal The authors have addressed all my questions in my original review. I thus decide to maintain my original rating of 'accept'. 
Claims And Evidence: I think all four major claims in the introduction are supported by theoretical results. Specifically, there are two main technical results. The first one is the generalization bound for the empirical minimizer of U-statistics. This claim is supported by section 5.1 and the detailed analysis is presented in Appendix B.2. The second one is Thm 5.2 in Section 5.2, with detailed proof included in Appendix C.2. I read the proof of the main results in Appendix B and C and skimmed through the technical lemmas. It seems to me that the claims are supported. Methods And Evaluation Criteria: N/A. This is a purely theoretical work that proposes new statistical generalization analysis for contrastive representation learning. There is no empirical evaluation. Theoretical Claims: Yes. I read section 4 on the proof strategy. I also skimmed Appendix B.2, where I focused on the proof of the main result in Thm B.10. I checked the proof of Thm C.1. I did not check the details of the proof for other auxiliary lemmas. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: This paper belongs to the scientific area of statistical generalization theory, where advanced statistical methods are used to analyze the accuracy and robustness of statistical and machine learning models. It specifically focuses on contrastive representation learning. I think this paper is a good complement to this area. Specifically, the previous theoretical work in this area all follows the framework of Arora et al. 2019, which assumes the ideal setting of i.i.d. data tuples. However, in the CRL use case, this often does not hold. This paper relaxes this strong assumption via a U-statistics technique. Therefore, I think this paper makes a valid contribution to its field. Essential References Not Discussed: In my opinion, the most relevant works are discussed in the Related Work section. 
Other Strengths And Weaknesses: **1** Clarity: I think this paper is clear. The claims are easy to understand. As a theoretical work, the section on proof strategy significantly helps readers like me better understand the major steps in the analysis. **2** Originality: this paper is the first theoretical work on the generalization theory of contrastive learning without the i.i.d. assumption. Other Comments Or Suggestions: N/A Questions For Authors: Can the results or analysis framework of the paper be extended to semi-supervised contrastive representation learning? To be specific, in many use cases such as AI for Health (e.g., segmentation tasks on medical images), practitioners often adopt semi-supervised learning where the total loss consists of a contrastive loss and a supervised loss. So I wonder to what extent the major results depend on the unsupervised risk formulation in Definition 3.1. Can it be of other forms? Thanks! Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer, we truly appreciate your **recognition of the strength and novelty** of our contributions. We are also particularly grateful for your detailed review and the effort you put into carefully **checking our technical proofs** in Appendices B and C. To answer your question - yes - the techniques presented in our paper can be useful in semi-supervised learning depending on the unsupervised risk formulation. Below, we list several examples. **Overview**: Suppose that there exists a distribution $\mathcal{P}$ over $\mathcal{X}\times\mathcal{C}$ where $\mathcal{X}$ denotes the data space and $\mathcal{C}$ denotes the label space (which is accessible to the learner). In semi-supervised classification, two sets of data $S=\\{(x\_j, y\_j)\\}\_{j=1}^{n}\sim\mathcal{P}^n$ and $S_U = \\{(u\_j, \bar y\_j)\\}\_{j=1}^m\sim\mathcal{P}^m$ are given. While both $S, S_U$ are drawn i.i.d. from $\mathcal{P}$, the labels $(\bar y_j)_{j\in[m]}$ are hidden, making $S_U$ unlabelled. In general, the overall risk for a representation function $f\in\mathcal{F}$ is a combination of both supervised and unsupervised risks. As the supervised risk has been well-studied [13, 16, 17], we focus on possible formulations of unsupervised risk below instead. **Consistency regularization [14, 15]**: Under this regime, the unsupervised loss aims to ensure consistency of representations under different augmentations of inputs. Specifically, letting $\mathcal{A}$ be a distribution over augmentation schemes, the unsupervised risk can be defined as $$L_\mathrm{un}(f)=\mathbb{E}_{\substack{u\sim\mathcal{P} \\\ \alpha,\alpha^+\sim\mathcal{A}^2}}\Big[\\|f(\alpha[u])-f(\alpha^+[u])\\|\Big]$$ where $\\|\cdot\\|$ is a distance measure in $\mathbb{R}^d$. Under this regime, we are implicitly given a set of augmented pairs $S_U^\mathrm{aug}=\{(\alpha_j[u_j], \alpha_j^+[u_j])\}_{j=1}^{m}$ drawn i.i.d. from a distribution dependent on both $\mathcal{P, A}$ (where $\alpha_j$'s are drawn i.i.d. 
from $\mathcal{A}$). Here, the loss function only relies on augmented views $\alpha(x)$ and $\alpha^+(x)$ which come from the *same* natural sample, which precludes reusing natural samples in different pairs/tuples. Thus, we can analyze this regime using standard results in learning theory without the decoupling technique. **Self-supervised CL [4, 5, 6]**: In this case, the unsupervised risk can be defined as $$L\_\mathrm{un}(f)=\mathbb{E}\_{\substack{u, u^-\_{1:k}\sim\mathcal{P}^{k+1}\\\ \alpha,\alpha^+, \beta\_{1:k}\sim\mathcal{A}^{k+2}}}\Big[\ell\Big(\Big\\{f(\alpha[u])^\top[f(\alpha^+[u])-f(\beta\_i[u\_i^-])]\Big\\}_{i=1}^k\Big)\Big].$$ Intuitively, the augmented views of the same input should be similar to each other while being dissimilar to augmented views of other samples. Since $S_U$ consists of i.i.d. data points in this setting, it is natural to formulate a $(k+2)$-order U-Statistics from $S_U$, generalizing our results to this setting. While such a learning setting involves reused samples, the lack of supervised information removes much of the subtleties of class-collision in our work. Thus, the analysis would be close to (a $k$-wise extension of) [12], making the result simpler than in our setting, where class constraints are present. **Extension**: Extension to semantic segmentation is natural: we then have $\mathcal{X}\subseteq\mathbb{R}^{C\times H\times W}$, $\mathcal{C}\subseteq \\{0, 1\\}^{C\times H\times W}$ where $C, H, W$ represent the image channels, height, and width, and $\mathcal{F}$ is a class of CNNs. Under consistency regularization, $\\|\cdot\\|$ can be the MSE loss between tensors. Under self-supervised contrastive learning, the contrastive loss can be any common loss function (hinge/logistic). 
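As an illustration of the self-supervised risk above (our own numpy sketch; the embeddings below are stand-ins, and the logistic form is just one of the "common loss functions" mentioned), a per-tuple estimate with $k$ negatives can be written as:

```python
import numpy as np

def logistic_contrastive_loss(f_anchor, f_pos, f_negs):
    """Logistic instance of l({f(a)^T [f(a+) - f(b_i)]}_{i=1..k}):
    log(1 + sum_i exp(-f(a)^T [f(a+) - f(b_i)])).

    f_anchor, f_pos : (d,) embeddings of two augmented views of one sample.
    f_negs          : (k, d) embeddings of augmented views of k other samples.
    """
    margins = f_anchor @ (f_pos[None, :] - f_negs).T  # shape (k,)
    return float(np.log1p(np.exp(-margins).sum()))

# Well-separated embeddings yield positive margins and hence a small loss.
anchor = np.array([1.0, 0.0])
positive = np.array([0.9, 0.1])
negatives = np.array([[0.0, 1.0], [-1.0, 0.0]])
loss = logistic_contrastive_loss(anchor, positive, negatives)
```

Averaging this quantity over many tuples built by reusing a fixed pool of samples is exactly where the U-Statistics formulation above enters.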
Lastly, as explained in the response to Reviewer t8eS, we note that our results concern the *excess unsupervised risk*: the connection between the unsupervised (population) risk and the (population) downstream classification risk is an orthogonal line of research which has been studied in various works [4, 5, 6]. **Conclusion**: Once again, we thank the reviewer for the interesting suggestion that extends our work to potential new directions. In our revised manuscript, we promise to discuss such an extension to semi-supervised learning at length and propose future directions accordingly. Please do let us know if you have any further questions or points to discuss. **References**: [12] Clemencon et al., Ranking and Empirical Minimization of U-Statistics. Ann. Statist., 2006. [13] Bartlett et al., Spectrally-normalized Margin Bounds for Neural Networks. NIPS, 2017. [14] Ting Chen et al., A Simple Framework for Contrastive Learning of Visual Representations. ICML, 2020. [15] Sajjadi et al., Regularization With Stochastic Transformations and Perturbations. NIPS, 2016. [16] Long \& Sedghi, Generalization bounds for deep convolutional neural networks. ICLR, 2020. [17] Golowich et al., Size-Independent Sample Complexity of Neural Networks. COLT, 2018.
Summary: For the generalization analysis of contrastive representation learning (CRL) in non-IID settings, several new theoretical bounds are proposed in this paper. First, this paper proposes a revised theoretical framework for CRL. Then, a U-Statistics formulation for the population unsupervised risk is proposed, and bounds for the empirical minimizer of U-Statistics and a sub-sampled risk are derived. Finally, excess risk bounds for classes of linear functions and neural networks are derived based on the above theoretical results. Claims And Evidence: $\bullet$ This paper mainly proposes the claim: since training datasets are often limited to fixed pools of labeled samples, previous generalization bounds for CRL might not comply with most practical use cases where data is limited. $\bullet$ The theoretical results in this paper provide strong and clear evidence for the claim. Methods And Evaluation Criteria: Yes. The analytical method based on U-Statistics provides effective support for the theoretical analysis of CRL under non-IID settings. Theoretical Claims: Yes. I have reviewed most of the proofs and the theoretical claims are correct. Experimental Designs Or Analyses: Yes. The experimental designs used to verify the theoretical results and the analyses of the theoretical results are all valid. Supplementary Material: Yes. I have reviewed the proofs of Theorems 5.1 and 5.2 in the supplementary material. Relation To Broader Scientific Literature: Compared with previous works which require an IID assumption across input tuples, the theoretical results in this paper for non-IID settings are more consistent with practical situations, and these theoretical results go beyond the limitations of existing works. Essential References Not Discussed: No. All the essential references have been adequately discussed. Other Strengths And Weaknesses: $\bullet$ Strengths: The theory of U-Processes is typically used to deal with pairwise learning or ranking learning. 
Using it to deal with pairwise relationships between anchor-positive pairs and tuples of negative samples is a novel perspective, which provides a good example of the potential application and development of U-Statistics. $\bullet$ Weaknesses: 1. The discussion on reducing the dependency of the generalization bounds on $k$ is insufficient. Previous work used the Lipschitz continuity of the loss functions with respect to the $\ell_\infty$ norm or the self-bounding Lipschitz continuity (essentially smoothness) to reduce the dependency on $k$ from square-root to logarithmic. When the label distribution is perfectly balanced, the discussion in lines 314 to 326 is that, due to $N=nk$, the bound here is similar to the $\tilde{O}(1/\sqrt{n})$ bounds in previous works. However, in this case, the bound here is tighter than the existing $\tilde{O}(1/\sqrt{n})$ bounds by a logarithmic factor of $k$. What is the reason for this tighter logarithmic factor? 2. Regarding the reasons why formulating the $(k+2)$-order U-Statistics in the case of CRL is not straightforward, it seems to me that the two points listed are essentially stating the same thing, and I suggest that the two points be combined or that some appropriate explanation be added to the first point. Other Comments Or Suggestions: Typos: Line 321, "$O(1/\sqrt{n})$" --> "$\tilde{O}(1/\sqrt{n})$". Questions For Authors: Please refer to Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Many thanks for your thorough review. We especially appreciate your efforts in checking the correctness of our theoretical analysis as well as raising an interesting question regarding the proof techniques. Please refer to our response below regarding the technical differences between our work and [3]. **Re (Weakness 1)**: Many thanks for your keen observation! Indeed, our Theorem 5.1 is free of any logarithmic factors in either $N$ or $k$. However, the right-hand side involves the quantities $K_{\mathcal{F},c}$, each of which bounds the Rademacher complexity of the loss class associated with tuples corresponding to positive class $c$: applying the bounds to concrete function classes, as we do in Appendix B.3, does incur logarithmic factors of both $k$ and $N$ when bounding these complexities via covering numbers and Dudley's entropy formula. In contrast, bounds in [3] incur logarithmic factors as early as in Theorem 4.8, even though the Rademacher complexities $\mathfrak{R}\_{S_{\mathcal{H}},nk}(\mathcal{H})$ have yet to be bounded. This is because [3] relies on inequalities between different complexity measures (Lemma C.3 in [3]) earlier in the proof. Still, since our analysis on concrete function classes subsequently incurs log factors, we have chosen not to dwell on this particular improvement so as not to obscure our main contributions. **Details on Proof Technique**: The core similarity of our work and [3] is the fact that we both bound the (empirical) Rademacher complexity of the loss class, denoted as $\mathfrak{\widehat R}\_{S}(\mathcal{G})$, through the $L_\infty$ covering number of $\mathcal{H}=\\{(x, x^+, x^-)\mapsto f(x)^\top[f(x^+) - f(x^-)]:f\in\mathcal{F}\\}$. 
Specifically, suppose that the loss is $\ell^\infty$-Lipschitz with constant $\eta>0$: $$ \mathfrak{\widehat R}\_S(\mathcal{G}) \lesssim \int_\alpha^\mathcal{M}\sqrt{\frac{\ln \mathcal{N}(\mathcal{H}, \epsilon/\eta, L_\infty(S_\mathcal{H}))}{n}}d\epsilon, \quad \alpha>0. $$ Where $S$ is a set of $n$ independent tuples and $S_\mathcal{H}$ is the set of triplets incurred from $S$ (See [3], Section 4.2). At this point, the difference lies in the methods by which we estimate $\mathcal{N}(\mathcal{H}, \epsilon/\eta, L_\infty(S_\mathcal{H}))$. In our work, the $L_\infty$ covering number of $\mathcal{H}$ is **directly estimated** by the $L_{\infty, 2}$ covering number of $\mathcal{F}$ (Lemma B.12). In [3], the estimation is done via fat-shattering dimension and worst-case Rademacher complexity. Below, we show how the additional costs in terms of $n, k$ and their logarithmic terms propagate through complexity measures: $$ \mathfrak{R}\_{WC}(\mathcal{H})^2 \xrightarrow{nk/\epsilon^2} \mathrm{fat}\_{\epsilon}(\mathcal{H}) \xrightarrow{\ln^2(nk/\epsilon^2)} \ln\mathcal{N}\_\infty(\mathcal{H}, \epsilon, L\_\infty(S_\mathcal{H})), $$ where $\mathfrak{R}\_{WC}(\mathcal{H})$ denotes the worst-case Rademacher complexity of $\mathcal{H}$ (note that $\mathfrak{R}\_{WC}(\mathcal{H})\in\widetilde O(1/\sqrt{nk})$) and $\mathrm{fat}\_\epsilon(\mathcal{H})$ denotes the fat-shattering dimension. This yields the final inequality (See [3], Eqn. (C.6)): $$ \ln\mathcal{N}(\mathcal{H}, \epsilon/\eta, L_\infty(S_\mathcal{H}))\lesssim \frac{nk\eta^2\mathfrak{R}^2\_{WC}(\mathcal{H})}{\epsilon^2}\ln^2\Bigg(\frac{\eta^2 nk}{\epsilon^2}\Bigg). $$ **Re (Weakness 2)**: We thank the reviewer for their suggestion. In our revised manuscript, we will certainly re-phrase the subtlety of formulating a $(k+2)$-order U-Statistics more succinctly. **Conclusion**: Aside from the concerns addressed above, we promise to correct any other existing typos in our revised manuscript. 
We thank the reviewer once again for their effort in making our work clearer for readers. We stand ready to answer any further questions you may have and hope that our answer has helped further improve your opinion of our work. **References**: [2] Arora et al., A Theoretical Analysis of Contrastive Unsupervised Representation Learning. ICML, 2019. [3] Lei et al., Generalization Analysis for Contrastive Representation Learning. ICML, 2023. [4] HaoChen et al., Provable Guarantees for Self-Supervised Deep Learning with Spectral Contrastive Loss. NeurIPS, 2021. [5] Wang et al., Chaos is a ladder: A new theoretical understanding of Contrastive Learning via augmentation overlap. ICLR, 2022. [6] Huang et al., Towards the generalization of contrastive Self-supervised Learning. ICLR, 2023. [7] Khosla et al., Supervised Contrastive Learning. NeurIPS, 2020. [8] Awasthi et al., Do More Negative Samples Necessarily Hurt In Contrastive Learning? ICML, 2022. [9] Ash et al., Investigating the Role of Negatives in Contrastive Representation Learning. AISTATS, 2022. [10] Schroff et al., FaceNet: A Unified Embedding for Face Recognition and Clustering. CVPR, 2015. [11] Sohn, Improved Deep Metric Learning with Multi-class N-pair Loss Objective. NIPS, 2016.
Summary: This paper revisits contrastive representation learning by relaxing the traditional i.i.d. assumption on training tuples. Instead of assuming independent tuples, the authors analyze a practical setting where a fixed pool of labeled data is recycled to form multiple tuples. They derive generalization bounds using a U‑Statistics framework for both the full risk and a sub-sampled empirical risk. They apply the obtained results to linear models and neural networks, and experiment on MNIST and synthetic data. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I have some questions on Remark 4.2 (see questions). Experimental Designs Or Analyses: I checked the two experiments in Section 6.1 on how a sub-sampling strategy can effectively approximate the performance of the all-tuple method, and the experiments in Section 6.2 showing that the number of samples needed scales linearly with the number of classes and negative samples. Supplementary Material: I reviewed Sections A and B.1 of the supplementary materials. Relation To Broader Scientific Literature: The authors overcome the non-i.i.d. issue by reformulating the risk estimation problem in terms of U‑Statistics and then applying decoupling techniques. In more detail: instead of computing the empirical risk over tuples of i.i.d. samples, they express the population unsupervised risk as a U‑Statistic. This formulation averages the contrastive loss over all valid tuples that can be formed from the dataset. Although these tuples are generated by reusing the same data points, the U‑Statistics framework provides an unbiased estimator of the true risk. To handle the dependencies, the authors use a decoupling technique inspired by the works of de la Peña and Giné. The key idea is to break down the U‑Statistic into sums over independent blocks. 
They reorganize the sum over all possible tuples into sums over several groups (or blocks) where, by design, each block consists of tuples that are constructed in a disjoint manner from the fixed dataset. Essential References Not Discussed: I don't believe there are essential related works missing from the paper's citations. Other Strengths And Weaknesses: Strengths: - The paper extends the theoretical analysis of contrastive representation learning by explicitly handling the non-i.i.d. nature of real-world data, developing a framework based on U‑Statistics and decoupling techniques. - The experiments (although conducted on MNIST and synthetic datasets) demonstrate that the proposed sub-sampling regime can approximate the performance of using all possible tuples, thereby validating the theoretical claims. Weaknesses: - The clarity of the paper's exposition is not as strong as it could be. The presentation sometimes lacks sufficient intuition or detailed explanation of certain steps, particularly in the derivations involving U‑Statistics and decoupling techniques. - One limitation is that treating the tuples as if they were independent, as discussed in Section 4.1, can be done only if we use the tuple sampling strategy described in Section 3.3. In a real-world scenario the within-tuple selections may not be independent. Of course, the dependency is negligible when the number of available samples is very large. - While the paper describes the overall approach, details on the experimental setup are missing. Other Comments Or Suggestions: For Eq. (3), I think it's better to restate what $k$ is before the equation. You mentioned that in the introduction, but I think it's better to restate it before Equation (3) instead of before Equation (5). Line 141, it is written "where $\mathcal{S}$..." but $\mathcal{S}$ does not appear in the equation above (Eq. (5)). 
Typo: $C_m[n]$ in Equation (11) is not defined. Questions For Authors: 1) Question regarding Remark 4.2: In your decoupling argument, you partition the data into disjoint blocks to form the U‑Statistics. Could you please clarify how the symmetry of the kernel ensures that we can remove the sum over $q$ and write $U_n$ in that way, as per Remark 4.2? I guess the identity holds only in expectation. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Many thanks for the thoughtful and detailed review. We sincerely appreciate your recognition of the novelty in our decoupling techniques. Below, we provide our responses to address your concerns. **Re (Remark 4.2)**: To clarify your concern, suppose we have a one-sample, $m$-order U-Statistic as defined in Eqn. (12) and pick an arbitrary integer $i\in[q]$ where $q=\lfloor n/m\rfloor$. Additionally, denote $P_m[n]$ and $C_m[n]$ as the sets of $m$-permutations and $m$-combinations selected from $[n]$, respectively. Then, for any $l_1, \dots, l_m \in P_m[n]$, there are $(n-m)!$ re-arrangements $\pi\in S[n]$ (line 220) such that $\pi[mi-m+u]=l_u, \forall u\in[m]$. In other words, when we enumerate all orderings of $[n]$, the tuple $l_1, \dots, l_m$ appears $(n-m)!$ times (in that exact order) from position $mi-m+1$ to $mi$. Hence, we can write: $$ \frac{1}{n!}\sum_{\pi\in S[n]} h(x_{\pi[mi-m+1]}, \dots, x_{\pi[mi]}) = \frac{1}{n!}\sum_{l_{1:m}\in P_m[n]}(n-m)!h(x_{l_1}, \dots, x_{l_m}). $$ When $h$ is symmetric, any permutation of its arguments does not alter its value. Thus, we can further simplify the right-hand side to a sum over the set of $m$-combinations: $$ \sum_{l_{1:m}\in P_m[n]}h(x_{l_1}, \dots, x_{l_m}) = \sum_{p_{1:m}\in C_m[n]}m! h(x_{p_1}, \dots, x_{p_m}). $$ Combining all of the above: $$ \frac{1}{n!}\sum_{\pi\in S[n]} h(x_{\pi[mi-m+1]}, \dots, x_{\pi[mi]}) = \frac{m!(n-m)!}{n!}\sum_{p_{1:m}\in C_m[n]} h(x_{p_1}, \dots, x_{p_m}), $$ which is exactly Eqn. (11). **Re (Weakness 1)**: We thank the reviewer for the suggestion to improve our presentation. To summarize, our U-Statistics formulation is motivated by the task of creating an (asymptotically) unbiased estimator for $\mathrm{L_{un}}(f)$ by estimating each conditional risk $\mathrm{L_{un}}(f|c)$ and then combining the estimators, resulting in Eqn. (8). 
The reason for this separation lies in the nuance of estimating $\mathrm{L_{un}}(f)$ directly through a $(k+2)$-order U-Statistics (Lines 240 - 245). We acknowledge that we can make the intuition clearer for the readers and we will make sure to improve our presentation in the revised manuscript. **Re (Weakness 2)**: Thank you for the keen observation. In the procedure described in Section 3.3, the tuples are subsampled independently from the pool of 'valid' tuples where samples are unique within tuples. This means that the samples within each tuple are conditionally dependent given the full sample $x_1,\ldots,x_N$ (but independent without conditioning), whilst the tuples are independent with or without conditioning. **This approach corresponds to calculating U-statistics**. It is also possible to allow replacement within tuples, in which case the samples within each tuple would instead be independent conditioned on the full sample $x_1,\ldots,x_N$ but dependent if no conditioning is applied. **This approach corresponds to calculating V-statistics**. Our techniques readily extend to this case and would only incur an additive term of $\widetilde{O}\Big(\frac{1}{ N\min(\rho_{\min}/2, (1-\rho_{\max})/k)}\Big)$, which does not worsen the order of magnitude in our bounds. To see why, let us consider the one-sample case (described before Eqn. (11)) for simplicity. The V-Statistic of order $m$ for some kernel $h$ is defined as: $$ V_n(h)=\frac{1}{n^m}\sum_{i_1=1}^n\dots\sum_{i_m=1}^n h(x_{i_1}, \dots, x_{i_m}). $$ Generally, $V_n(h)$ is not an unbiased estimator of $\mathbb{E}h(x_1, \dots, x_m)$ due to the terms with repeated samples. By [1] and the symmetry of $h$, we can write $V_n(h)$ as follows: $$ n^m V_n(h) = \frac{n!}{(n-m)!}U_n(h) +\sum_{i_{1:m}\in[n]^m\setminus P_m[n]}h(x_{i_1}, \dots, x_{i_m}). $$ As a result: $$ V_n(h) - U_n(h) = \frac{1}{n^m} \sum_{i_{1:m}\in[n]^m\setminus P_m[n]}[h(x_{i_1}, \dots, x_{i_m}) - U_n(h)], $$ where $|[n]^m\setminus P_m[n]|\in O(n^{m-1})$. 
Therefore, if $|h(x_1, \dots, x_m)|\le M$, then $V_n(h)-U_n(h)\in O(M/n)$, which means the bias dissipates as $n$ increases (as the reviewer remarked). Thus, while the bias slightly complicates the analysis, we can leverage results on the concentration of U-statistics to study the concentration of V-statistics around the population risk, with a small incurred cost. Once again, we really appreciate the reviewer's feedback, which helps extend the scope of our work to a very interesting regime. **Other Modifications**: As per the reviewer's suggestions, we have made the following changes: - Reiterate the definition of $k$ in Definition 3.1 (Eqn. (3)). - Move the definition of the i.i.d. dataset $S$ before Eqn. (5) for better flow. - Define $C_m[n]$ before Eqn. (11). Furthermore, we promise to add a more detailed experimental set-up description in the appendix as well as a pointer to the appendix in the main text. We thank the reviewer once again for their efforts in making our work clearer for readers. **References**: [1] Wassily Hoeffding, A Class of Statistics with Asymptotically Normal Distribution. Ann. Math. Statist., 1948. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I appreciate that you addressed almost all of my concerns. I still have one doubt: my question was not about the equality between Eq. (11) and Eq. (12), but rather about why $U_n$ can be written exactly as stated in Remark 4.2. --- Reply to Comment 1.1.1: Comment: We are delighted that we have addressed almost all of your concerns. We are very grateful for your feedback and believe our scientific discussion will help us further improve the presentation and impact of our work. Regarding your follow-up question, we understand that you are seeking clarification as to why the sum over $q$ disjoint blocks in Eqn. (12) is no longer present in Remark 4.2. We apologize for the confusion. 
As we should have stated more explicitly, the original calculation in our rebuttal is indeed meant to establish the equality between the expression in Remark 4.2 (which is the starting point of the calculation) and the U-Statistics formula in Eqn. (11). To summarize concisely, for all $1\le i \le q$ where $q=\lfloor n/m\rfloor$, we have $$ \begin{align*} \underbrace{\frac{1}{n!}\sum\_{\pi\in S[n]}h(x\_{\pi[mi-m+1]},\dots,x\_{\pi[mi]})}\_{\textup{Remark 4.2}} &=\frac{1}{n!}\sum\_{l\_{1:m}\in P\_m[n]}(n-m)!h(x\_{l\_1}, \dots, x\_{l\_m}) \\\ &=\frac{1}{n!}\sum\_{p\_{1:m}\in C\_m[n]} m!(n-m)!h(x\_{p\_1}, \dots, x\_{p_m}) \\\ &=\underbrace{\frac{1}{\binom{n}{m}}\sum\_{p\_{1:m}\in C\_m[n]}h(x\_{p\_1}, \dots, x\_{p\_m})}\_{\textup{Eqn. (11)}}. \end{align*} $$ Intuitively, the expression in Remark 4.2 (**without** the average over $q$ blocks) can be viewed as averaging over all permutations of $n$ samples, where for each permutation, we extract a block of $m$ consecutive elements (starting from position $mi-m+1$ to position $mi$) as inputs to the kernel $h$. As demonstrated above, this is exactly equivalent to averaging over all possible $m$-permutations (or $m$-combinations if $h$ is symmetric) of the $n$ samples, which is precisely the definition of $U_n(h)$ in Eqn. (11). Since this holds for each one of the $q$ disjoint blocks, we remarked that the representation in Eqn. (12) (**with** the average over $q$ blocks) is the same as summing $U_n(h)$ for $q$ times then dividing by $q$ again. Specifically, $$ \begin{align*} \underbrace{\frac{1}{n!}\sum\_{\pi\in S[n]}\frac{1}{q}\sum\_{i=1}^qh(x\_{\pi[mi-m+1]},\dots,x\_{\pi[mi]})}\_{\textup{Eqn. (12)}} &= \frac{1}{q}\sum\_{i=1}^q\underbrace{\frac{1}{n!}\sum\_{\pi\in S[n]}h(x\_{\pi[mi-m+1]},\dots,x\_{\pi[mi]})}\_{\textup{Remark 4.2}} \\\ &= \frac{1}{q}\sum_{i=1}^q U_n(h)=U_n(h). \end{align*} $$ We thank you again for your help in improving our manuscript and promise to include a more thorough explanation as above in the revision.
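The identity above can also be checked numerically by brute force for small $n$; the following sketch (our own, with an arbitrary symmetric kernel) enumerates all permutations and confirms that the block average at any position $i$ equals $U_n(h)$:

```python
import itertools
import math

def u_statistic(xs, h, m):
    """U_n(h): average of a symmetric kernel h over all m-combinations (Eqn. (11))."""
    combos = list(itertools.combinations(xs, m))
    return sum(h(*c) for c in combos) / len(combos)

def block_average_over_permutations(xs, h, m, i):
    """Average of h over the i-th block (positions mi-m+1..mi, 1-based)
    of every permutation of xs, i.e. the expression in Remark 4.2."""
    n = len(xs)
    total = 0.0
    for pi in itertools.permutations(range(n)):
        block = [xs[pi[j]] for j in range(m * (i - 1), m * i)]
        total += h(*block)
    return total / math.factorial(n)

xs = [0.3, 1.7, 2.0, 4.1, 5.5]        # n = 5 samples, so q = floor(5/2) = 2
h = lambda a, b: (a - b) ** 2 / 2     # a symmetric order-2 kernel
for i in (1, 2):                      # every block index yields the same value
    assert abs(block_average_over_permutations(xs, h, 2, i) - u_statistic(xs, h, 2)) < 1e-9
```

Every unordered pair appears equally often in any fixed block across the $n!$ permutations, which is exactly the counting argument in the reply.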
How Do Transformers Learn Variable Binding in Symbolic Programs?
Accept (poster)
Summary: This paper studies how transformers can learn to implement variable binding through training. In particular, the authors focus on a specific task where the transformer is given a synthetic program in which variables are assigned either numerical values or other variables, and the target is to correctly return the numerical value corresponding to a queried variable. By closely inspecting and probing the training trajectories, the authors identify three distinct phases in how transformers learn to implement variable binding. Initially, the model simply recognizes that the desired output should be a numerical value; then, it learns to use heuristics and shortcuts to occasionally generate correct outputs; and finally, it learns a systematic mechanism and produces nearly perfect performance over the test set. The study offers some interesting observations on how transformers can be trained to acquire capabilities involving manipulating and tracing symbols. Claims And Evidence: Yes, the authors have carefully designed the experiments to empirically validate their claims and hypotheses. Methods And Evaluation Criteria: Although the task is simple and synthetic, it allows the claims of the paper to be verified in a controlled way. However, in addition to the current empirical setting, I think that the authors should really consider a test set where the program structure remains the same as the training set, but the variables and the numerical values differ from the training set. Since the underlying logic implementing variable binding is not restricted to a fixed set of variable names or numerical values, in order to claim that the transformer "learned a systematic mechanism for variable binding", it is necessary to demonstrate that the model is able to achieve nearly perfect performance on a test set where neither variable names nor numerical values are seen during training. 
For example, the authors may consider splitting the 26 letters into 13 for training and 13 for testing, and additionally choose integers from 11 to 20 for the numerical values in the test set. ## update after rebuttal During the rebuttal, the authors provided a suite of new empirical results to further verify their claims in the paper. The new experiments cover 3 distinct out-of-distribution settings and, in my opinion, substantially improve the quality of the paper. I updated my initial rating based on the new results. Overall, I think this is a good work and I'd like to recommend acceptance. Theoretical Claims: NA (There is no theoretical claim that requires a proof in the paper) Experimental Designs Or Analyses: Overall, the experimental design is good. My major concern is on the choice of the test set. Since the authors claim that at the end of training, there is "emergence of a systematic mechanism for dereferencing", it is important to have a thorough test on whether the nearly perfect performance is indeed due to learning a systematic mechanism, rather than being just memorization (since the test set and the training set use the same set of variable names and numerical values, and the program is highly structured). Supplementary Material: Yes. The supplement material is short and I read all of it. Relation To Broader Scientific Literature: Understanding how transformers can learn to manipulate symbols and perform basic reasoning tasks is very important. Although the distinct learning phases are not very surprising, given that in some other cases it has also been shown that the learning curve of transformers demonstrates multiple jumps followed by long plateaus, the detailed causal analysis in this paper is interesting. Essential References Not Discussed: I am not aware of important references that are not discussed. Other Strengths And Weaknesses: The clarity of the paper can be greatly improved. 
Definitions and technical terminology should be properly stated. The authors should provide details on how logits are computed at each layer in Figure 3. In the caption of Figure 3(c), there is a missing reference to the Appendix, which currently reads as "shown in Appendix TODO". Other Comments Or Suggestions: I see occurrences of "fig. 2a", "Fig. 2d", and "Figure 3(a)". Please be consistent with naming and referencing. Questions For Authors: I would be very interested to know what happens if the variable names and numerical values in the test set differ from those in the training set. Code Of Conduct: Affirmed. Overall Recommendation: 3
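For concreteness, the task under review and the reviewer's proposed unseen-names/unseen-values split might be sketched as follows. This is a toy illustration only (single-chain programs, no distractor chains), with invented function names, not the paper's actual generator:

```python
import random

def make_program(variables, constants, n_lines=4, rng=random):
    """Build a toy assignment chain: the first line binds a constant,
    each later line binds a fresh variable to the previous one."""
    chain = rng.sample(variables, n_lines)
    value = rng.choice(constants)
    lines = [f"{chain[0]}={value}"]
    for prev, cur in zip(chain, chain[1:]):
        lines.append(f"{cur}={prev}")
    return lines, chain[-1], value   # program, queried variable, answer

def dereference(lines, query):
    """Follow assignments until a numerical constant is reached."""
    env = dict(line.split("=") for line in lines)
    while query in env:
        query = env[query]
    return int(query)

# Split in the spirit of the review: disjoint variable names and constants.
train_vars, test_vars = list("abcdefghijklm"), list("nopqrstuvwxyz")
train_consts, test_consts = list(range(0, 10)), list(range(11, 21))

rng = random.Random(0)
lines, query, value = make_program(test_vars, test_consts, rng=rng)
assert dereference(lines, query) == value
```

A model that truly learned dereferencing should pass on programs drawn from `test_vars`/`test_consts` even when trained only on `train_vars`/`train_consts`.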
Rebuttal 1: Rebuttal: Thank you for your assessment that we "carefully designed the experiments to empirically validate [our] claims" and for your thoughtful suggestions. Your questions about generalization inspired several new experiments that strengthen our findings. ## Does the model actually learn a systematic mechanism? We appreciate the critical perspective on whether the model indeed learns a systematic mechanism in addition to a shallow heuristic, and conducted generalization experiments to further investigate the matter. Since our model is trained from scratch rather than pre-trained, testing on completely unseen tokens would mean testing on untrained embeddings, which wouldn't be meaningful. Instead, we conducted three rigorous generalization experiments: - **Held-out variable/number combinations**: We trained a model where specific variable/number combinations, randomly sampled and constituting ~10% of the original training data, were excluded from training but included in testing. The model successfully generalized to these unseen combinations, demonstrating compositional generalization rather than memorization. See [the compositional generalization plot](https://imgur.com/a/comparison-of-original-model-vs-compositional-generalization-model-qq9Z8wF) for further details. (direct link to new plot: https://i.imgur.com/esnjS3O.png) - **Program length generalization**: Testing on programs of 2-25 lines of assignment (vs. 16 in training), we found excellent generalization on programs where the answer is not on the first line; however, when the answer is on the first or second line and the first-line heuristic is used, we see poor performance on shorter and longer programs. See [the program length generalization plot](https://imgur.com/a/model-accuracy-by-program-length-PPHwEyT) for further details (direct link to new plot: https://i.imgur.com/dIaCWx8.png) - **Chain depth generalization**: Testing on programs with 1-13 hops (vs. 
max 4 in training), we again observed near-perfect generalization, even for 13-hop programs, despite our model having only 12 layers. We again see that, when the answer is on the first or second line and the first-line heuristic is used, the model is worse on higher-depth programs. See [the chain depth generalization plot](https://imgur.com/a/model-accuracy-by-number-of-hops-KZytngj) for further details. (direct link to new plot: https://i.imgur.com/C2XEVwc.png) In the paper, we claim that the model uses a shallow heuristic for programs where the answer is in the first or second line, and uses a systematic solution for other programs. When the answer isn’t on the first or second line, the model generalizes far beyond the structures seen during training – to unseen variable/constant combinations, deeper chains, and longer programs. These results further support our claim that the model learns a "systematic mechanism for dereferencing" rather than memorizing patterns. Moreover, when the answer is on the first or second line, we see seriously degraded generalization performance. These results further support our claim that these are exactly the inputs on which a shallow heuristic is used to solve the task. **Action:** We will add these new experiments and plots. ## Addressing clarity issues: - We will provide clearer definitions of technical terminology - We will add details on how logits are computed at each layer in Figure 3 - We will fix the missing reference in Figure 3(c) - We will ensure consistent naming and referencing throughout (e.g., "fig. 2a" vs "Fig. 2d") We will improve the readability of all figures by: - Increasing font sizes - Adding clearer labels and legends - Ensuring consistent formatting **Action:** We will implement all these improvements in the camera-ready version. **Thank you for your valuable suggestions that led to these additional experiments! 
Do these results and planned improvements address your concerns about whether we've demonstrated a truly systematic mechanism?**
Summary: In this paper, the "variable binding" ability of the Transformer model is studied, which is the ability to autonomously assign correct values to symbolic variables. The paper focuses on controlled experimental design. A symbolic program dataset is constructed, which consists of programs that involve value passing among variables. Based on the learning results on this dataset, it is concluded that the Transformer model has strong variable binding ability without specific training. Claims And Evidence: In my view, the claims are not well-supported by the evidence: 1) The constructed dataset is too limited. 2) The experimental results lack deep insights. More details are discussed in the experimental design part below. Methods And Evaluation Criteria: Even though the constructed programs are interesting, I think that the dataset design is still too limited to justify the results. More details are discussed in the experimental design part below. Theoretical Claims: No theoretical results are included in the paper. Experimental Designs Or Analyses: 1. The construction of the datasets is limited. The programs involve only number assignments among variables, which is a very special type of symbol. It would be better to introduce broader types of symbols (e.g., a is left of b, b is left of c, then a is left of c) and more complicated program structures. In my view, this is not difficult since variable assignment is common in programming tasks. 2. The experimental results are somewhat superficial. In the current experiments, the values and symbols are treated equally as tokens. From the results, it is hard to tell whether the model distinguishes between them and actually learns what a symbol or a number is. In my view, we can also explain the experimental results simply from the view of attention weights. After training, the Transformer model just associates the symbols based on attention weights according to their common appearance in the dataset. 
From this viewpoint, the results do not reflect insight from the symbolic perspective. I think the current experiments lack strong evidence to eliminate this explanation. Supplementary Material: I have reviewed the appendix attached after the main paper. Relation To Broader Scientific Literature: The paper is related to research on understanding the principles of Transformer models. Even though the purpose of the paper is to study symbol binding, in my view, the results are similar to analyzing the attention weights among tokens in previous research. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - I think the paper proposes an interesting paradigm in analyzing the symbolic learning ability of Transformer. Weaknesses: - The experimental design is not sufficient to support the major arguments, as discussed above. Other Comments Or Suggestions: - The writing can be significantly improved. The paper is hard to read due to a lack of clarity. Furthermore, the paper needs further proofreading. There are a number of typos in the paper, such as: - Line 129: 90% training, 0.2% validation, and 0.98% testing. - Line 270: Appendix TODO - The visual effect in Fig. 2d needs refinement. Questions For Authors: - How are the three phases decomposed? Are they just three stages of decreasing the learning rate in SGD optimization, or do they appear naturally in the experiments without external parameter changes? - What objective is used for training the Transformer? Is it auto-regressive next-token prediction? - How should Fig. 2c be explained? I wonder why 1-hop accuracy converges more slowly than those with more hops. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback that our paper "proposes an interesting paradigm in analyzing the symbolic learning ability of Transformer." We appreciate your critical assessment and have conducted new experiments to address your concerns. ## Addressing key concerns: ### 1. "The model does not distinguish between symbols and values" We conducted new subspace analysis experiments specifically to address this concern: 1. We applied PCA to the Transformer's residual stream vectors for the right-hand-side tokens of program lines across many different inputs. 2. We trained linear classifiers (with L1 regularization) to identify principal components of the residual stream correlated with numerical constants (10 components) and variable names (26 components), respectively. 3. We performed interchange interventions on these selected components, using counterfactual examples that target the numerical constant or the variable name, achieving high success rates for both numerical constants (92.17%) and variable names (87.08%). **New findings:** The 2D UMAP projections of these subspaces across three training checkpoints (steps 1200, 17600, 105400) reveal a clear evolution toward separated clusters for these token types. By the final checkpoint, the model distinctly represents symbols and values in different subspaces. See the new [variable-name subspace scatter plot](https://imgur.com/a/2d-umap-projection-of-variable-name-residual-stream-subspace-projection-26-selected-pca-components-KBEft1t) and [numerical-constant subspace scatter plot](https://imgur.com/a/2d-umap-projection-of-numerical-constant-residual-stream-subspace-projection-26-selected-pca-components-SCyS18B) for more details (direct links to new plots: https://i.imgur.com/ugP0baG.jpeg, https://i.imgur.com/479j8Ch.jpeg). **Action:** We will add these new experiments and plots. ### 2. Questions about training > How are the three phases decomposed? 
The three phases emerge naturally during training without external parameter changes. They represent qualitative shifts in strategy that correspond to sharp transitions in performance (from ~12% to ~56% to >99% accuracy). We use a linear decay learning rate schedule with 750 warm-up steps, but the phase transitions do not align with learning rate schedule changes. > What objective is used for training the Transformer? We use standard autoregressive next-token prediction (causal language modeling) as stated in the Appendix section "Training Setup." **Action:** We will add this information more explicitly in the main text. > How to explain Fig. 2c? Why 1-hop accuracy converged slower than those with more hops? This results from our task structure. In 1-hop programs, the answer is always a numerical constant directly assigned to the queried variable. The model's early "line-1/line-2 heuristic" works well for multi-hop programs where the first line often contains the root value, but performs poorly on 1-hop programs where the answer could be on any line. **Action:** We will explain this in the figure caption. ### 3. Task design While we acknowledge the controlled nature of our task, its simplicity is intentional. By using a minimal design, we can perform detailed causal analysis that would be challenging with more complex symbolic tasks. Our new generalization experiments show the model learns a truly general mechanism rather than simply memorizing patterns. We would like to clarify an important aspect of our experimental design: our model is trained entirely from scratch without any pre-training on natural language or programming tasks. When the model first encounters our variable assignment syntax (e.g., "a=1"), it has no prior knowledge of what these symbols mean or how they relate to each other. 
From this perspective, using alternative symbolic relationships like "a is left of b" would be functionally equivalent - the model would still need to learn these relationships from scratch. Our focus was on creating a minimal yet challenging test bed where we can precisely analyze how variable binding mechanisms emerge during training. The apparent simplicity of our task masks its fundamental complexity: the model must still develop sophisticated capabilities to track multi-step variable dependencies while ignoring distractors, regardless of the specific symbolic relationship used. The controlled design was crucial for our mechanistic interpretability goals, allowing us to isolate and analyze the precise causal pathways through which variable binding is implemented in the network. ## Additional improvements: - We will fix all typos, including the dataset split percentages - We will improve the clarity of Fig. 2d - We will add clearer explanations of the training objective and methodology **Action:** We will implement all these improvements in the camera-ready version. **Thanks again for your helpful feedback! Do these clarifications and planned changes address your concerns?** --- Rebuttal Comment 1.1: Comment: Thanks for the detailed responses and the additional experimental results. I appreciate the additional information, as well as the explanation of conducting a "minimalist" test for justifying the target claims. I still feel that the results don't justify the claim that the Transformer can conduct complex symbolic "reasoning", since symbol binding itself is not very different from building correlations through attention weights. Overall, I will take the responses into consideration for my final evaluation.
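The subspace-analysis recipe in the rebuttal above (PCA on residual-stream vectors, then an L1-regularized probe to select label-correlated components) can be sketched on synthetic data. Here a simple correlation ranking stands in for the L1-regularized classifier, the "residual stream" is fabricated, and all dimensions and thresholds are illustrative rather than the authors' setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for residual-stream vectors: 500 tokens, 64 dims.
# Half the tokens play the role of "variable names", half of "numerical
# constants"; one latent direction carries that distinction.
labels = np.repeat([0, 1], 250)
latent = rng.normal(size=(500, 64))
latent[:, 3] += 2.0 * labels          # token type lives mostly in dim 3

# PCA via SVD on centered data.
X = latent - latent.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt.T                     # projections onto principal components

# Stand-in for the L1-regularized probe: rank components by how strongly
# their projections correlate with the token-type label.
corr = np.array([abs(np.corrcoef(scores[:, i], labels)[0, 1])
                 for i in range(scores.shape[1])])
selected = np.argsort(corr)[::-1][:10]   # keep the 10 most predictive PCs

assert corr[selected[0]] > 0.5        # top component clearly encodes type
```

The `selected` components are the analogue of the 10 / 26 components the authors intervene on; in the real experiment an interchange intervention on these directions tests their causal role.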
Summary: The authors investigate how variable binding is learned by decoder-only transformer models. In particular, they use causal interventions to understand how transformers propagate bindings in tasks where they are asked for the value of a variable, e.g., "c" given the program "c=b, p=f, b=a, f=2, a=1". Their key findings are: 1) A general mechanism for dereferencing is acquired in later stages of training, and gradually subsumes heuristics the model learned early on, 2) the full program state is not stored in the residual stream, and 3) certain attention heads' role in routing to dereference can be understood. Finding #1 is particularly interesting, as previous narratives around the acquisition of capabilities in transformers hypothesized that these replace early heuristics and are learned alongside, rather than on top of, them. The paper is well-written and clear, and a very nice interactive webpage is provided alongside the paper where results are clearly explained and can be explored. Claims And Evidence: Claims are well-supported. Methods And Evaluation Criteria: They used the existing method of interchange interventions, which is a sufficient patching method to isolate heads relevant to the mechanism they are studying. Theoretical Claims: No theoretical claims are made in the body of the paper. Experimental Designs Or Analyses: Experiments are suitable. Supplementary Material: Yes, I reviewed it completely. It is mostly clarifications and extra detail. The linear probing results are interesting. Relation To Broader Scientific Literature: The paper presents various results around variable binding in transformers. The most interesting of these by far is the way in which the relevant circuits form, with the behaviour itself being fairly uninteresting. As such, the paper forms a reasonable contribution to the literature. 
Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: * Figure 4 is very dense; would prefer large legend at the top, and perhaps thicker lines. Some of the color differences are not that substantial. * Figure 3 - text very small again. Maybe there is a way you can label ticks with an example as reference to make it easier to parse; e.g., say for example "x=3:" and then label x, =, 3, :. * Line 95 column 2 - "open question of how does..." incorrect grammar. * Line 136 column 2 - "hand side of the quality" → "equality". * Line 271 - there is a "TODO". * Line 291 column 2 - "this is evidence that a uniform..." incorrect grammar, and not sure this is a completely valid claim. Representations may be similar (not uniform), but the attention head being implicated on both is a function of the representation after q,k projections, not residual stream? Though I concede likely to be similar. * Both in the intro and conclusion you make reference to the ongoing symbolic vs. connectionist debate and claim to contribute by showing that models can solve such tasks, but models have long been shown to complete vastly more complex tasks that require handling complex inter-dependencies, so this may be overblown. Questions For Authors: Why do you think linear probing works so poorly yet you are able to see clean effects from patching? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive assessment that our paper is "well-written and clear" with claims that are "well-supported." We're especially pleased you highlighted our most interesting finding: that relevant circuits form with "behavior itself being fairly uninteresting." ## Response to your question: Linear probing vs. patching > Why do you think linear probing works so poorly yet you are able to see clean effects from patching? Our linear probing experiment attempts to extract a complete program state from a single vector, while our patching experiments reveal the causal path of specific information through the network. The latter succeeds because the model implements variable binding as a dynamic process of information routing rather than as static state representations. The poor linear probing results suggest the model doesn't maintain a complete program state in a linearly decodable format. Instead, our patching results reveal that the model learns a more efficient strategy that dynamically tracks only the relevant variable bindings through specialized attention patterns. **Action:** We will make this more explicit. ## Addressing your other feedback: - **Grammar issues:** We will correct all grammatical errors, including: - Line 95: "open question of how does" → "open question of how" - Line 136: "hand side of the quality" → "equality" - Line 291: We'll rephrase the statement about uniform representations - **Figure density:** We will redesign Figure 4 with: - A larger, clearer legend at the top - Thicker, more visually distinct lines - Consistent labeling schemes **Action:** We will implement all these improvements in the camera-ready version. **Thanks again for your helpful feedback! Do these clarifications and planned changes address your concerns?**
Summary: This paper performs a mechanistic interpretability study on a transformer trained on a synthetic task that requires tracking values assigned to variables in a mock programming language. Programs consist of 16 variable assignments, and the transformer has 12 layers. The authors use interchange interventions to identify the path through the transformer that the original value of the variable is transmitted through. Given a variable whose value is queried at the end of the sequence, the transformer solves the task by attending to the last assignment to that variable, doing so across multiple layers if necessary to figure out the chain of assignments and identify the constant used for the first assignment. This is intermixed with naive solutions that are specific to the first and second lines. The authors analyze the evolution of this solution over training and identify three main phases. Claims And Evidence: Yes, for the most part. Methods And Evaluation Criteria: Yes. Theoretical Claims: This paper does not have theoretical claims. Experimental Designs Or Analyses: Yes, although I did not check the appendix. Supplementary Material: No. Relation To Broader Scientific Literature: This is a mechanistic interpretability paper on a new synthetic variable assignment task. Essential References Not Discussed: Not that I'm aware of. Other Strengths And Weaknesses: **Strengths** 1. The paper is straightforward and generally well-written. 2. The experiments essentially validate the authors' claims convincingly in terms of how information flows through the transformer. 3. The phases discovered during training are interesting. 4. They made a snazzy website to view the results. **Weaknesses** 1. The scope of the paper is limited to a single transformer trained on a simple synthetic task, so its real-world impact is limited. 2. The synthetic task is simple and could have been made more complicated. See Questions. 3. Some of the figures are hard to read. 
Other Comments Or Suggestions: 1. 155 right: this is redundant 2. Fig. 2b is too hard to read. The colors and numbers should be related visually, perhaps with different shades of gray. 3. Fig. 4: It's hard to keep track of the line colors. Questions For Authors: 1. Why exactly 17 lines for all examples? Did you try testing on longer or shorter programs to check if the construction it learns is sensitive to the length of the training data? 2. It seems like the learned solution is limited by the number of layers in the transformer. What happens when you test on examples with chains longer than 12? 3. Did you try reproducing this over multiple training runs? 4. This analysis is likely sensitive to the distribution of the training data. Did you try other distributions? 5. Fig. 3a: Why is the logit difference on a scale from 0 to 1? Even if it's probabilities, shouldn't it be on a scale from -1 to 1? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your thorough review and positive feedback that our paper is "straightforward and generally well-written" with experiments that "validate the authors' claims convincingly." ## Responses to your questions: ### Q1&Q2: Why exactly 17 lines? Did you try testing on longer/shorter programs? What happens with chains longer than 12? See the response to Reviewer ZyPs for the results of new generalization experiments. ### Q3: Did you try reproducing this over multiple training runs? Yes, we trained additional models with different random seeds. All runs exhibit the same three distinct learning phases with similar transition points, demonstrating the robustness of our findings. See [the test set accuracy plot across different training run seeds](https://imgur.com/a/comparison-of-test-set-accuracy-across-different-seeds-OXfXTSF) for more details (direct link to new plot: https://i.imgur.com/cvf0vzO.png). **Action:** We will add this new experiment and plot. ### Q4: Did you try other distributions? Yes, we explored different sampling strategies during task design. Our final distribution was deliberately constructed to be as challenging as possible while remaining learnable. Simpler distributions (e.g., with fewer distractor chains, uniform sampling of variables, or shorter chains) made the task trivially solvable using surface-level heuristics. For instance, without our weighted sampling approach (where we choose chains to extend with probability proportional to chain length cubed), the model could simply learn to associate the answer with the longest variable chain. Our rejection sampling procedure ensures balance across referential depths while maintaining sufficient distractor chains that branch from the main query chain. This forces the model to genuinely track variable bindings rather than rely on pattern matching. This challenging distribution was essential for studying true variable binding mechanisms rather than shortcut learning. 
**Action:** We will make this more explicit in the appendix. ### Minor points The logit differences were normalized to a [0,1] scale for visualization clarity. We agree this could be clearer and will explicitly state the normalization in the figure caption. We acknowledge your concern about figure readability. In the camera-ready version, we will: - Increase font sizes throughout all figures - Use more visually distinct colors and line styles in Fig. 2b and Fig. 4 - Add clearer legends and improve visual mapping between colors and values **Thanks again for your helpful feedback! Do these clarifications and planned changes address your concerns?** --- Rebuttal Comment 1.1: Comment: Thank you for your responses. Most of my concerns have been addressed. I'll update my score accordingly. I just have one unresolved question: What if the number of variable hops is much larger than the number of layers, like 50 hops and 12 layers? I don't think 13 hops and 12 layers is enough to test my original concern.
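The training-distribution design described in this rebuttal (chains chosen for extension with probability proportional to their length cubed) can be sketched as a toy sampler. This is a hypothetical illustration only: it omits the rejection-sampling step that balances referential depths, and the function and variable names are invented:

```python
import random

def sample_chains(n_lines=16, n_chains=4, rng=random):
    """Toy version of the weighted extension step: start several chains,
    then repeatedly pick one to extend with probability proportional to
    its current length cubed. Each chain of length k corresponds to k
    assignment lines (the first binds a constant, the rest a variable)."""
    chains = [[f"v{i}"] for i in range(n_chains)]
    fresh = n_chains
    for _ in range(n_lines - n_chains):
        weights = [len(c) ** 3 for c in chains]
        chain = rng.choices(chains, weights=weights)[0]
        chain.append(f"v{fresh}")
        fresh += 1
    return chains

rng = random.Random(0)
chains = sample_chains(rng=rng)
assert sum(len(c) for c in chains) == 16
```

Cubing the lengths makes already-long chains far more likely to be extended, so one chain tends to dominate while the others remain short distractors, which matches the rebuttal's rationale for preventing the "answer = longest chain" shortcut only when combined with the balancing step it describes.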
Nonlinearly Preconditioned Gradient Methods under Generalized Smoothness
Accept (oral)
Summary: This work studies nonlinearly preconditioned gradient methods. After comparison with existing methods and discussion of the existing connections, an extension of anisotropic smoothness was proposed. For specific choices of the reference function (1: an isotropic reference function and 2: a separable reference function) the procedure of calculating a second-order condition for $(L,\bar{L})$-anisotropic smoothness was explained. Later, the convergence of the preconditioned gradient method (2) was studied under the settings: a) nonconvexity + smoothness and b) convexity. Simple experiments were conducted to evaluate the performance of the proposed method. Claims And Evidence: Most of the claims are supported. One confusing part is in line 101: "... we describe a common nonlinear gradient preconditioning scheme for the main iterates of popular algorithms including gradient clipping, Adam, Adagrad and ...". However, as described in Section 1.6, only special cases of these methods were recovered. These special cases have no gradient accumulation or momentum, which are important building blocks for the successful performance of the mentioned algorithms. Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: Yes, the current experiments are sound. One issue is as follows: The paper claims that its framework brings out novel insights for the class of ($L_0$-$L_1$)-smooth functions and allows it to go beyond already established methods. While I appreciate such a direction, the paper leaves me with no numerical insights toward understanding what this claim looks like in practice. Supplementary Material: No Relation To Broader Scientific Literature: One of the contributions relates to an extension of anisotropic smoothness. The new definition allows two different constants to play a complementary role. Accordingly, a second-order sufficient condition was defined. 
Additionally, the analysis for nonconvex nonsmooth functions was extended to nonconvex smooth functions. The new result allowed for larger step sizes compared to the previous work. The authors claim that they strengthen the results from some previous works in Theorem 3.6, but it was not mentioned in what sense they strengthen the prior results. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: I like the idea of moving toward unified models to understand and gain insights about existing concepts. This work endeavours to move in this direction by relating to generalized and more realistic smoothness assumptions. The authors also explored the performance of algorithm (2) in terms of convergence rate under smooth nonconvex and convex assumptions. Unfortunately, this extension was not evaluated numerically and experimentally, leaving readers doubtful about the applicability of the $(L,\bar{L})$-smoothness extension. Other Comments Or Suggestions: I believe the paper would benefit from a comprehensive revision, especially in terms of presentation and delivering the message. In many instances, vague statements leave the reader helpless. Such statements damage the self-containedness of the paper. Examples are: line 255: "... are Legendre through (Laude & Patrinos, 2022, Proposition 4.1)" line 258: "... when $\phi$ is a regular optimal transport cost in light of (Villani, 2008, Definition 12.27)" line 269: "Note that in this case (Gorbunov et al., 2024, Algorithm 1) can be viewed as (2) with a conservative ..." line 272: "In a similar manner, the stepsize introduced in [Vankov et al., 2024, Equation (3.5)] can be viewed as a less tight version of Corollary 2.7." line 379: "while also answering the question posed in (Maddison et al., 2021, p. 17)." Figure 3 (right): I suggest a probability bound instead of 100 instances, since it would be a more meaningful visualization. 
Questions For Authors: First, I'd like to thank the authors for their research. 1- I like the ideology/motivation presented in lines 80-95. Why did the authors not use this ideology in practice and see how reference functions with appealing properties (like strong convexity) perform on training problems? 2- Lines 417-419: Is this why Fig. 3 (right) achieves slightly better results? 3- Why does the paper claim to encompass the Adagrad, Adam, and gradient clipping methods, when it only recovers special cases of these methods? (On the side: momentum plays a crucial role in the success of algorithms like Adam and should not be neglected.) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Regarding the experimental design, our paper is focused on theoretical analysis and thus we only provide preliminary experimental results on problems that are associated with generalized smoothness in the related literature (Chen et al., 2023; Gorbunov et al., 2024; Vankov et al., 2024). Moreover, in Theorem 3.6 we strengthen the results of (Laude & Patrinos, 2022) and (Maddison et al., 2021) in the following ways: we show that for isotropic reference functions the sequence of iterates generated by the method is Fejér monotone and we obtain a sublinear rate for the suboptimality gap $f(x^k) - f^\star$. To the best of our knowledge such results have not been shown before. Regarding the presentation of our paper, we agree that we can make some statements more self-contained and we will change this in a revised version of the paper. We would also like to point out that the quoted text contains typos that are not present in the paper. The meaning of **Questions 1 & 2** is not entirely clear to us. In the following we answer the questions as we understood them. If the answers are not satisfactory, please provide some more clarification. Answer to **Question 1**: As already mentioned, in this paper we focus on theoretical analysis. Nevertheless, we believe that applying the nonlinearly preconditioned gradient method to training problems is interesting future work. Answer to **Question 2**: We conjecture that the algorithm generated by the reference function $\cosh-1$ outperforms the rest of the methods on our experiments because the preconditioner is better tailored to such high order polynomial functions. Answer to **Question 3**: We agree with the reviewer that our wording might cause confusion and we will change our text. What we show in Subsection 1.6 is that if one was to remove the gradient accumulation and momentum mechanisms from these methods, they would obtain the shown basic forward iterations. 
We are well aware that momentum plays a crucial role in the success of methods such as Adam and Adagrad. However, in this paper we mostly focused on laying the foundation for a new framework for analyzing various gradient descent type methods. Extensions such as adding momentum to recover schemes like Adam and Adagrad are possible and an exciting direction for future research. Moreover, through this connection, practical new algorithms can be derived by combining the momentum mechanisms with other reference functions $\phi$. --- References: Chen, Z., Zhou, Y., Liang, Y., and Lu, Z. Generalized-smooth nonconvex optimization is as efficient as smooth nonconvex optimization. In International Conference on Machine Learning, pp. 5396–5427. PMLR, 2023. Gorbunov, E., Tupitsa, N., Choudhury, S., Aliev, A., Richtárik, P., Horváth, S., Takáč, M. Methods for convex $(L_0, L_1)$-smooth optimization: Clipping, acceleration, and adaptivity. arXiv preprint arXiv:2409.14989, 2024 Vankov, D., Rodomanov, A., Nedich, A., Sankar, L., and Stich, S. U. Optimizing $(L_0, L_1)$-smooth functions by gradient methods. arXiv preprint arXiv:2410.10800, 2024. Laude, E. and Patrinos, P. Anisotropic proximal gradient. arXiv preprint arXiv:2210.15531, 2022. Maddison, C. J., Paulin, D., Teh, Y. W., and Doucet, A. Dual space preconditioning for gradient descent. SIAM Journal on Optimization, 31(1):991–1016, 2021. --- Rebuttal Comment 1.1: Comment: Thank you for your responses and clarifications. Typos in the quoted texts were mine. As I have mentioned, my concern was the vague and unclear statements. The authors correctly understood my point from my questions. The responses are sound. Conditioned on fixing the vague statements and specifically clarifying the rasied points on Adam, Adagrad, etc., I think the community would benefit from this work. I raise my score. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their positive feedback.
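As a concrete numerical illustration of the iteration discussed in this thread, here is a minimal sketch of method (2), $x^{k+1} = x^k - \gamma\nabla\phi^*(\lambda\nabla f(x^k))$, with the separable reference function $\phi(t) = \cosh(t) - 1$ mentioned in the rebuttal (for which $(\phi^*)' = \operatorname{asinh}$, since $\phi' = \sinh$), applied to a separable variant $f(x) = \frac{1}{4}\sum_i x_i^4$ of the quartic test problem. The step sizes here are ad hoc choices for illustration, not the ones prescribed by the paper's theory:

```python
import math

def grad_f(x):
    # f(x) = 0.25 * sum(x_i^4), a separable quartic, so df/dx_i = x_i**3
    return [xi ** 3 for xi in x]

def preconditioned_gd(x, gamma=0.5, lam=1.0, steps=20000):
    """Iterate x^{k+1} = x^k - gamma * grad(phi*)(lam * grad f(x^k)) for the
    separable reference phi(t) = cosh(t) - 1, whose conjugate satisfies
    grad(phi*) = asinh; large gradients are thus damped logarithmically."""
    for _ in range(steps):
        x = [xi - gamma * math.asinh(lam * gi)
             for xi, gi in zip(x, grad_f(x))]
    return x

# Plain gradient descent from x0 = 5 needs a tiny step to stay stable
# (the curvature there is 3 * 25 = 75); the asinh preconditioner instead
# tolerates a large constant step and still converges to the origin.
x = preconditioned_gd([5.0, -3.0])
assert max(abs(xi) for xi in x) < 0.05
```

The logarithmic damping of large gradients is what lets this scheme use a step size far beyond the classical $1/L$ limit on rapidly growing objectives, which is the behavior the ($L_0$-$L_1$)-smoothness discussion is about.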
Summary: The paper studies non-linearly preconditioned gradient methods, analyzing algorithms of the form $x^{k+1} = x^k - \gamma\nabla\phi^*(\lambda \nabla f(x^k))$, where $\phi^*$ is a convex dual reference function, and $f$ is continuously differentiable but potentially non-convex. This general framework encompasses several widely used methods, including standard gradient descent, AdaGrad without memory, Adam with exponential decay rates set to $0$ (i.e., $\beta_1 = \beta_2 = 0$), and clipped GD. The paper provides theoretical guarantees under a new $(L, \bar{L})$-anisotropic smoothness assumption, extending prior work on generalized smoothness. Claims And Evidence: The claims are well-supported by theoretical analysis. The proofs appear sound, and the results align with existing literature. Methods And Evaluation Criteria: The paper provides simple experiments for minimizing $\frac{1}{4} \|x\|^4$ and a non-convex phase retrieval task on synthetic data. While limited in scope, these experiments highlight the properties of the proposed framework. Given the paper’s theoretical focus, this level of empirical validation is reasonable. Theoretical Claims: I reviewed the main theoretical results, including key theorems and lemmas. However, I did not verify all details in the supplementary material (additional examples etc). Experimental Designs Or Analyses: The experiments are designed to demonstrate key theoretical insights rather than provide extensive empirical validation. They are appropriate for this purpose, and the results appear consistent with the claims made in the paper. Supplementary Material: See *Theoretical Claims*. Relation To Broader Scientific Literature: This work builds on prior studies of dual-space preconditioning and anisotropic smoothness. 
The general framework considered in the paper was originally analyzed in [1] under the dual relative smoothness condition for convex and essentially smooth functions, and then analyzed in the general non-convex and composite non-smooth setting in [2] under the anisotropic descent property. The authors extend the previous results by introducing the $(L, \bar{L})$-anisotropic smoothness assumption. This new condition broadens the class of functions considered while reducing to the previous formulation when ${\rm dom} \phi = \mathbb{R}^n$. There is a significant body of literature on generalized smoothness, and this work extends some of these notions. [1] Maddison, C. J., Paulin, D., Teh, Y. W., and Doucet, A. Dual space preconditioning for gradient descent. SIAM Journal on Optimization, 31(1):991–1016, 2021. [2] Laude, E. and Patrinos, P. Anisotropic proximal gradient. arXiv preprint arXiv:2210.15531, 2022. Essential References Not Discussed: I do not see any major omissions. Other Strengths And Weaknesses: **Strengths:** - The paper is well-written, clearly structured, and engaging. - It presents an elegant theoretical framework that unifies and generalizes several known algorithms. **Weaknesses:** - The significance of the extension is not entirely clear. While mathematically interesting, it would be helpful to better motivate why this generalization is necessary or practically impactful. Other Comments Or Suggestions: No additional comments. Questions For Authors: 1. What exactly does the new framework cover that is not already addressed in [2]? A clearer distinction would help contextualize the novelty and significance of the contributions. 2. The $(L, \bar{L})$-anisotropic smoothness condition reduces to the one in [2] when ${\rm dom} \phi = \mathbb{R}^n$. What additional benefits does this relaxation provide when ${\rm dom} \phi \neq \mathbb{R}^n$? 3. 
Proposition 2.3 shows that for strongly convex reference functions, the class of anisotropically smooth functions is at least as large as that of Lipschitz smooth ones. What are the most relevant examples demonstrating that this class is strictly larger? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. Our answers for **Questions 1 and 2** are summarized in the following. Regarding the $(L, \bar L)$-anisotropic smoothness condition, our contribution is threefold: - First and foremost, we describe a general setting where the second-order condition presented in Definition 2.4 is not only necessary but also sufficient for anisotropic smoothness. This condition was shown to be sufficient in (Laude & Patrinos, 2022) only when $f$ itself is a Legendre function, not applying to the general nonconvex setting we consider in our paper. Moreover, in Definition 2.4 we allow for $\phi^*$ to be just Lipschitz smooth and not twice continuously differentiable, thus allowing us to study the interesting case where the update in Equation (2) takes the form of the popular gradient clipping method. - Second, we lift a major restriction of (Laude & Patrinos, 2022) by allowing ${\rm dom} \phi \neq \mathbb{R}^n$, which leads to a more general descent inequality by choosing a reference function $\phi$ with faster growth. As an example, through Corollary 2.7 and Proposition 2.9, we get that functions $f \in \mathcal{C}^2(\mathbb{R}^n)$ that are $(L_0, L_1)$-smooth are also $(L, \bar L)$-anisotropically smooth relative to $\phi(x) = -\|x\| - \ln(1-\|x\|)$ which is defined for $\|x\| < 1$. Therefore we obtain novel characterizations for this class of functions (through Proposition 2.2 and Proposition 3.5) and we can tackle problems involving such functions with preconditioned gradient steps defined as in Corollary 2.7. - Third, we introduce flexibility by allowing for two different constants whose interplay might have a significant impact in practice, as also indicated by our preliminary experiments. As for the convergence results, we provide the following novel findings: - In the nonconvex setting, our analysis leads to stepsizes up to $2/L$. 
- In the convex setting, we obtain the first sublinear convergence rate for the suboptimality gap $f(x^k)-f^\star$ for large classes of reference functions $\phi$. To the best of our knowledge, such a result does not exist in (Laude & Patrinos, 2022) or in the related literature. We would like to stress that this is not an easy task: in the isotropic case we utilize the star cocoercivity-like property described in Proposition 3.5 and study the problem as a rescaled gradient descent method. In the more challenging separable case we manage to obtain such a result by considering a nonlinear proximal point reformulation of the method and harnessing the subhomogeneity of the reference function. For **Question 3**: Except for the toy example $f(x) = 1/p \|x\|^p$ considered in the paper, through the aforementioned relation between anisotropic and $(L_0, L_1)$-smoothness, examples of not globally Lipschitz smooth functions that are anisotropically smooth can be found in the $(L_0, L_1)$-smoothness literature (Zhang et al., 2019; Chen et al., 2023). Moreover, we would like to emphasize that examples of Lipschitz smooth functions that are anisotropically smooth with a different parametrization such as the logistic loss function studied in the paper are also of interest, since this can lead to faster algorithms. --- References: Laude, E. and Patrinos, P. Anisotropic proximal gradient. arXiv preprint arXiv:2210.15531, 2022. Zhang, J., He, T., Sra, S., and Jadbabaie, A. Why gradient clipping accelerates training: A theoretical justification for adaptivity. arXiv preprint arXiv:1905.11881, 2019. Chen, Z., Zhou, Y., Liang, Y., and Lu, Z. Generalized-smooth nonconvex optimization is as efficient as smooth nonconvex optimization. In International Conference on Machine Learning, pp. 5396–5427. PMLR, 2023.
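To make the relation to $(L_0, L_1)$-smoothness mentioned above concrete, here is a quick numerical sanity check (illustrative code; the constants $L_0 = 4$, $L_1 = 1$ are hand-picked for this 1-D toy function, not taken from the paper): for $f(x) = x^4/4$ the Hessian $f''(x) = 3x^2$ is unbounded, so $f$ is not globally Lipschitz smooth, yet the bound $|f''(x)| \le L_0 + L_1 |f'(x)|$ with $f'(x) = x^3$ holds everywhere:

```python
import numpy as np

# Check the (L0, L1)-smoothness inequality |f''(x)| <= L0 + L1*|f'(x)|
# for f(x) = x^4/4 on a sampled grid. The worst case of 3x^2 - |x|^3
# occurs at |x| = 2, where the gap equals exactly 4, so L0 = 4 suffices.
L0, L1 = 4.0, 1.0
xs = np.linspace(-50, 50, 100001)
hess = 3 * xs ** 2            # |f''(x)|
grad = np.abs(xs) ** 3        # |f'(x)|
assert np.all(hess <= L0 + L1 * grad + 1e-9)
print("(L0, L1)-smoothness bound holds on the sampled grid")
```

This is exactly the class of functions for which the rebuttal argues the anisotropic framework (with the reference function $\phi(x) = -\|x\| - \ln(1-\|x\|)$) applies.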
Summary: 1. This paper introduces a unified preconditioned gradient descent scheme for several popular optimization algorithms, including Adam, Adagrad, and gradient clipping. 2. The paper introduces the concept of anisotropic smoothness, which generalizes relative smoothness, and gives second-order sufficient conditions for it. 3. The paper proves sub-linear convergence for several classes of reference functions and verifies the results through numerical experiments. Claims And Evidence: 1. The suggested scheme is claimed to be a common framework for the popular algorithms. However, it recovers only special cases of parameter choices in those algorithms, and does not provide a framework to analyze them in general. 2. Section 2 supports the claim that the new anisotropic smoothness covers a broader range of functions. In particular, Corollary 2.7 gives a new method of obtaining the forward iterates. 3. Theorems 3.6 and 3.7 give evidence of sub-linear convergence for nonlinearly preconditioned methods with (1) isotropic reference functions and (2) 2-subhomogeneous functions. However, the claim on general separable reference functions is not explicitly proved and justified. 4. In the numerical experiments, the algorithm is only compared with other basic ones, such as GD and clipping, and results of other preconditioned gradient methods or Adam/Adagrad remain unreported. Methods And Evaluation Criteria: The methods and theoretical proofs make sense, but the paper does not give realistic choices of the parameters $\lambda$, $\gamma$ for arbitrary objectives. This might be problematic when the curvature of the function is unknown or expensive to compute. Theoretical Claims: I checked the proofs of Theorem 3.2, Corollary 3.3, and Proposition 3.5. No issues were found in these proofs. 
Experimental Designs Or Analyses: The first experiment optimizes a norm-to-power-4 problem and plots the results of algorithms with different reference functions (all belonging to the class of isotropic functions) and different parameters, though it might be better to give a comparison against other existing algorithms in smooth convex optimization. The objective of the second experiment overlaps with the first one, as they share the same polynomial growth rate. It would be more convincing to choose other problems. Moreover, the results of the second experiment involve a specific choice of parameters, but in general the parameters are unknown, and it might be important to show sensitivity to the parameters, as in the first experiment. Supplementary Material: I checked Appendix D: Details on the second-order condition. This part discusses the choice of parameters in several specific problems. Relation To Broader Scientific Literature: The research is closely related to the existing study of non-linearly preconditioned gradient descent/mirror descent. Essential References Not Discussed: Relative smoothness is widely used in mirror descent methods. Amir Beck, Marc Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167-175, 2003. Filip Hanzely, Peter Richtárik, and Lin Xiao. Accelerated Bregman proximal gradient methods for relatively smooth convex optimization. Computational Optimization and Applications, 79:405-440, 2021. Other Strengths And Weaknesses: Strengths: 1. The writing is clear and the main idea is easy to understand. 2. The experiment settings are detailed and easy to implement. Weaknesses: 1. The proposed algorithm in the paper only solves smooth convex optimization problems, and does not touch on wider areas of optimization. 2. The experimented objective functions are limited to small-scale smooth convex functions defined on the whole space. 
Non-convex problems and problems defined on a subset of the space are not included in the experiments. 3. Parameter tuning is not discussed. 4. The intuition behind some technical results is missing. Other Comments Or Suggestions: **Abstract:** - "thus allowing us to go beyond already established methods." → "thus allowing us to extend beyond already established methods." ("go beyond" is slightly informal) - "in both the convex and nonconvex setting" → "in both the convex and nonconvex settings" (plural for consistency). Questions For Authors: 1. The iteration in (2) and the analysis using relative smoothness resemble the mirror descent update: $$ \nabla \phi(x^{k+1}) - \nabla \phi(x^{k}) = -\alpha_k \nabla f(x^k). $$ What is the advantage of using nonlinearly preconditioned gradient methods over mirror descent in this context? 2. Accelerated mirror descent methods have been designed and analyzed. Can the proposed nonlinearly preconditioned gradient methods be accelerated in a similar manner? 3. In Theorem 3.6, it is claimed that the norm of the gradient of $f$ decreases monotonically along the iterates of the algorithm: $$ \|\nabla f(x^{k+1})\| \leq \|\nabla f(x^k)\|, \quad \text{for all } k \in \mathbb{N}_0. $$ However, in the simplest case, when the reference function is the squared norm, the method reduces to the standard gradient descent method, for which no such monotonicity result holds. Could you clarify this discrepancy? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s time and effort in providing feedback. Regarding the stated **Weaknesses**: 1. Our results cover both the convex and nonconvex setting. The convergence results in the nonconvex setting are presented in Section 3.1. 2. The scope of the current manuscript is to lay the theoretical foundations of the framework. Therefore, we have provided indicative experiments on problems that are relevant in the generalized smoothness literature. Moreover, we note that the phase retrieval problem is nonconvex. 3. As is standard for descent-type methods we provide a sufficient second order condition that allows us to compute the stepsizes of the method. In cases where the constants are difficult to compute, one can utilize a variation of the classical linesearch procedure to adaptively determine the stepsize sequence. 4. We tend to disagree with the reviewer as we believe that looking at these algorithms under the lens of generalized convexity provides valuable intuition. This connection with generalized convexity is also the main ingredient for many of our proofs. Concerning item 3 of the **Claims And Evidence** section we would like to stress that our claim is that we prove with a novel technique the sublinear convergence rate for large classes of functions, a result which is presented in Theorem 3.7. Regarding item 1 of the **Claims And Evidence** section, see our response to Reviewer ZfYL. Answer to **Question 1**: Indeed, the update defined in (2) bears similarities to mirror descent. In fact the scheme we analyze can be considered as left-preconditioning in contrast to mirror descent’s right-preconditioning (Maddison et al., 2021, p. 3). Nevertheless, the two methods produce different iterates while the descent inequalities used to analyze them also differ. Our paper relies on anisotropic smoothness, while mirror descent is analyzed using the relative smoothness condition. 
Thus, the nonlinearly preconditioned gradient scheme is used to tackle different problem classes than mirror descent. Answer to **Question 2**: To the best of our knowledge, Bregman gradient descent schemes under relative smoothness can be accelerated when the generated Bregman divergence satisfies some favorable properties (Hanzely et al., 2021). We believe that this is the case with the nonlinearly preconditioned gradient scheme as well, i.e. the method can be accelerated under specific conditions for the reference function $\phi$. We believe that this is interesting future work. Answer to **Question 3**: We respectfully disagree with the reviewer. It is known that for convex and Lipschitz smooth functions, gradient descent with a fixed stepsize of $1/L$ monotonically decreases the norm of the gradient (Beck, 2017, Theorem 10.27). We thus believe that our Theorem 3.6 generalizes this result to the anisotropic smoothness framework for isotropic reference functions. --- References: Maddison, C. J., Paulin, D., Teh, Y. W., and Doucet, A. Dual space preconditioning for gradient descent. SIAM Journal on Optimization, 31(1):991–1016, 2021. Hanzely, F., Richtarik, P., & Xiao, L. (2021). Accelerated Bregman proximal gradient methods for relatively smooth convex optimization. Computational Optimization and Applications, 79, 405-440. Beck, A. First-order methods in optimization. SIAM, 2017.
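The left- vs right-preconditioning distinction discussed above can be made concrete in one dimension (a hedged sketch with illustrative step sizes; with $\phi(x) = \cosh(x) - 1$ we have $\phi' = \sinh$ and $(\phi')^{-1} = \operatorname{arcsinh}$):

```python
import numpy as np

# Mirror descent preconditions on the right:
#   x^{k+1} = (phi')^{-1}( phi'(x^k) - alpha * f'(x^k) ),
# while the scheme analyzed in the paper preconditions on the left:
#   x^{k+1} = x^k - gamma * (phi')^{-1}( lam * f'(x^k) ).

fprime = lambda x: x ** 3          # f(x) = x^4 / 4

def mirror_step(x, alpha):
    return np.arcsinh(np.sinh(x) - alpha * fprime(x))

def left_precond_step(x, gamma, lam):
    return x - gamma * np.arcsinh(lam * fprime(x))

xm = xl = 2.5
for _ in range(100):
    xm = mirror_step(xm, alpha=0.1)
    xl = left_precond_step(xl, gamma=0.5, lam=1.0)
# Both iterations drive x toward the minimizer 0, but along different
# trajectories, illustrating that the two schemes produce different iterates.
```

The step sizes here are hand-picked for stability on this toy problem and are not claimed to match the theoretical ranges in the paper.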
Summary: This paper studies a non-linearly preconditioned gradient descent method for optimizing scalar functions $f:\mathbb{R}^n\to \mathbb{R}$. The iterations of the method are of the form $$x^{k+1}=x^k-\gamma \nabla \psi^*(\lambda \nabla f(x^k)),$$ which is similar to the approach of "Dual Space Preconditioning for Gradient Descent" by Maddison et al., although that paper did not consider the inclusion of the parameter $\lambda$. The original paper of Maddison et al. only showed convergence results for convex functions $f$ and convex reference functions $\psi$. Recent works (Laude & Patrinos, 2022; Laude et al., 2023) have shown convergence to stationary points for non-convex functions $f$ under a condition called the anisotropic descent property. The key contribution of this paper is the development of a very general $(L,\overline{L})$-anisotropic smoothness property, which is shown to be easily verifiable for a broad range of non-convex functions in Lemma 2.5. Under this property, the algorithm is shown to converge to stationary points of $f$ with explicit rates. Additional sharper convergence results are also obtained for convex $f$ in terms of the optimality gap $f(x^k)-f(x^*)$, improving upon earlier results in Maddison et al. Some numerical results are also presented to show the practical performance of the algorithm compared to gradient clipping. Claims And Evidence: This is a theory paper. The claims of the paper are well supported by the mathematical proofs. The numerical illustrations are of secondary importance, but they also show good empirical performance. Methods And Evaluation Criteria: The proposed methods and evaluation criteria seem reasonable to us. The method was evaluated both on a convex as well as a non-convex test problem, and compared to gradient clipping. Theoretical Claims: I have looked at the proofs, and checked the correctness of the main results (Theorems 3.2, 3.6 and 3.7). 
I believe that the presentation is clearly written and the proofs are correct and rather ingenious. Experimental Designs Or Analyses: The numerical experiments in the paper are appropriately designed, and the analysis of the numerical results is appropriate. Supplementary Material: I have reviewed the proofs of the main results in the supplementary material. Relation To Broader Scientific Literature: This algorithm is closely related to the algorithm presented in the paper "Dual-space preconditioning for gradient descent", as well as papers by Lu et al. and Bauschke et al. A variant of the present algorithm has been studied before in the general non-convex and composite non-smooth setting in Laude & Patrinos, 2022, under different conditions. Nevertheless, the theoretical results obtained in this paper are shown under much simpler conditions and the results are stronger. The authors do a good job at citing relevant papers from the literature. Essential References Not Discussed: I believe that all essential references have been discussed. Other Strengths And Weaknesses: The key contribution of this paper is the development of a very general $(L,\overline{L})$-anisotropic smoothness property, which is shown to be easily verifiable for a broad range of non-convex functions in Lemma 2.5. Under this property, the algorithm is shown to converge to stationary points of $f$ with explicit rates. Additional sharper convergence results are also obtained for convex $f$ in terms of the optimality gap $f(x^k)-f(x^*)$, improving upon earlier results in Maddison et al. These results are interesting as the assumptions on the function $f$ are very mild, the algorithm is simple, and the results are fully explicit. One weakness of the paper is that the algorithm presented is not new: it has been studied before in the general non-convex and composite non-smooth setting in Laude & Patrinos, 2022, under different conditions. 
Nevertheless, the theoretical results obtained in this paper are shown under much simpler conditions and the results are stronger. Other Comments Or Suggestions: No other comments. Questions For Authors: Could you please further clarify the differences between the conditions in your current paper, and the conditions used in Laude & Patrinos, 2022? In terms of numerical experiments, in the context of non-convex functions, it would be interesting to see a neural network example for a small dataset using full gradients (with sufficiently wide networks, the loss can be close to 0 on the training data). In particular, you could do a comparison with gradient clipping in such scenarios. Do you think that the performance advantages would be similar as in Section 4.2? Code Of Conduct: Affirmed. Overall Recommendation: 5
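Regarding the suggested comparison with gradient clipping, for reference, a minimal clipped-GD baseline on a toy 1-D phase-retrieval-style objective (an illustrative sketch only; the data, stepsize $\gamma$, and threshold $c$ are made up and this is not the paper's experimental setup):

```python
import numpy as np

# Clipped gradient descent (sketch):
#   x^{k+1} = x^k - gamma * min(1, c / ||grad||) * grad
def clipped_gd_step(x, grad, gamma, c):
    n = np.linalg.norm(grad)
    scale = min(1.0, c / n) if n > 0 else 1.0
    return x - gamma * scale * grad

# nonconvex 1-D phase-retrieval-style loss on synthetic data:
#   f(x) = mean_i ((a_i * x)^2 - b_i)^2, minimized at x = +/- x_true
rng = np.random.default_rng(1)
a = rng.normal(size=50)
x_true = 1.5
b = (a * x_true) ** 2

def grad_f(x):
    r = (a * x) ** 2 - b
    return np.mean(4 * r * a ** 2 * x)

x = np.array([4.0])
for _ in range(2000):
    x = clipped_gd_step(x, np.array([grad_f(x[0])]), gamma=0.01, c=5.0)
# x approaches the global minimizer x_true = 1.5
```

Clipping bounds the step length in the large-gradient region far from the minimizer, which is exactly the regime where quartic-growth objectives break plain gradient descent with a fixed stepsize.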
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. Answer to **Question 1**: There are three major differences in the conditions considered in the current paper compared to (Laude & Patrinos, 2022): 1. We consider reference functions $\phi$ of possibly not full domain. This leads to a more general descent inequality that allows us to tackle larger problem classes. As an example of the significance of this extension, through Corollary 2.7 and Proposition 2.9 we obtain that functions $f \in \mathcal{C}^2(\mathbb{R}^n)$ that are $(L_0, L_1)$-smooth are also $(L, \bar L)$-anisotropically smooth relative to $\phi(x) = -\|x\| - \ln(1-\|x\|)$, a function which is defined for $\|x\| < 1$. We moreover focus on strongly convex reference functions in order to capture a function class that is wider than Lipschitz smoothness through Proposition 2.3. 2. We provide a sufficient second order condition for anisotropic smoothness in the form of Definition 2.4, combined with Proposition 2.9. Such a condition (as described in Lemma 2.5) was shown in (Laude & Patrinos, 2022) to be in general only necessary, while it was shown to be sufficient under Legendre convexity of $f$ in (Laude & Patrinos, 2022, Proposition 4.1). Thus we overcome a major limitation of (Laude & Patrinos, 2022) in that we provide a computable condition for anisotropic smoothness. Moreover, in Definition 2.4, we relax the continuous differentiability of $\nabla \phi^*$ to Lipschitz continuity, thus allowing us to cover more interesting algorithms. Central to our analysis is the new result regarding the (strong) monotonicity of the forward operator. 3. In our paper we study the convex case separately, obtaining for the first time sublinear convergence rates for large classes of reference functions of both the isotropic and the separable type. Answer to **Question 2**: We thank the reviewer for their suggestion. 
We performed a preliminary experiment for a subset of the MNIST dataset (600 data points) on a four-layer fully connected network with layer dimensions [28 × 28, 128, 64, 32, 32, 10] and ReLU activation functions, using the cross-entropy loss function. We compare the nonlinearly preconditioned gradient method generated by $\phi_1(x) = -\|x\|-\ln(1-\|x\|)$ and $\phi_2(x) = \cosh(\|x\|)-1$ and the gradient clipping method. We tuned the methods by performing a grid search on the hyperparameters. The results are presented in the following table, where we report the training loss every 30 iterations. | Method | k=30 | k=60 | k=90 | k=120 | k=150 | k=180 | k=210 | |----------|----------|----------|----------|----------|----------|----------|----------| | $\phi_1$ | 0.975662 | 0.440863 | 0.118921 | 0.096205 | 0.000098 | 0.000035 | 0.000020 | | clipping | 0.921261 | 0.277707 | 0.130072 | 0.145052 | 0.068124 | 0.054605 | 0.000012 | | $\phi_2$ | 1.308716 | 0.593909 | 0.300052 | 0.101221 | 0.008348 | 0.001282 | 0.000670 | In this experiment it seems that the method generated by $\phi_1$ is the best performing, while the clipping gradient method speeds up towards the end to achieve better final accuracy. Due to the simplicity of the experiment we cannot conclude on the effectiveness of the compared methods in general. We believe that different reference functions lead to performance advantages on different architectures and loss functions. --- References: Laude, E. and Patrinos, P. Anisotropic proximal gradient. arXiv preprint arXiv:2210.15531, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions during the rebuttal. I feel that this is a nice contribution, with solid theoretical results, and the rebuttal also obtained some interesting preliminary results for neural networks. The authors could include more detailed studies on such problems in the final version. I increase my score. 
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for their positive evaluation of our work. If the paper gets accepted, we will include more detailed results on our experiments.
Stray Intrusive Outliers-Based Feature Selection on Intra-Class Asymmetric Instance Distribution or Multiple High-Density Clusters
Accept (poster)
Summary: This paper proposes a supervised FS method, Stray Intrusive Outliers-based FS (SIOFS), for data classification with intra-class ADMHC. By focusing on Stray Intrusive Outliers (SIOs), SIOFS modifies the skewness coefficient and fuses the threshold in the 3σ principle to identify the class body, scoring features based on the intrusion degree of SIOs. Claims And Evidence: Please see Weaknesses. Methods And Evaluation Criteria: Please see Weaknesses. Theoretical Claims: Please see Weaknesses. Experimental Designs Or Analyses: Checked. Supplementary Material: Checked the theoretical part. Relation To Broader Scientific Literature: Most current FS methods score features based on the characteristics of all training instances. Existing FS methods rarely aim to identify the class body in the context of intra-class multiple high-density clusters. This paper addresses these gaps. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths 1. The paper is clearly structured and easy to follow. 2. The proposed method is novel. 3. The method is supported by theoretical evidence and empirical evidence. Weaknesses 1. In the third paragraph of the Introduction, the authors mention that, as shown in Fig. 1b, class “2” has two high-density clusters, but only classes “1” and “3” appear in the figure. 2. What is the definition of the $3\sigma$ principle? Can the authors explain its insight? 3. The explanation in Line 132 for Equation 3 assumes that outliers have low instance density and thus will not be included. However, there are different types of outliers, and the low-density assumption cannot guarantee that all outliers are excluded. 4. Why is the $\ell_1$ distance used in Equation 1? 5. The authors need to explain why the modified SC is formulated as in Eq. 4. Why is $(d_i^{(l)} - u^{(l)})^3$ used? Does this mean that other terms like $(d_i^{(l)} - u^{(l)})^1$ cannot be used? 6. In Line 192, why normalize $\hat{s}^{(l)}$ by $\hat{s}^{(l)}/3$? 
Other Comments Or Suggestions: Please see Weaknesses. Questions For Authors: Please see Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 5k2j for the constructive and valuable comments. The concerns are addressed as follows. ## Q1: In Fig. 1b, class "2" has two high-density clusters, but there is only class "1" and "3" in the figure. Many thanks for the comment. We revise this typo from class "2" to class "3". ## Q2: What is the definition of the 3σ principle? Can the authors explain its insight? Sorry for the unclear description. We add the classical literature (Harris & Stocker, 1998) and **[1]** to support the 3$\sigma$ principle. For $\xi\sim N(\mu,\sigma^2)$, the probability satisfies $Pr(\xi\in(\mu-3\sigma,\mu+3\sigma))>0.997$. This provides a robust framework for analysing variability in normally distributed data and guides thresholds for outlier detection in the context of ADMHC. **[1]** Groeneveld, R. A., & Meeden, G. (1984). Measuring Skewness and Kurtosis. Journal of the Royal Statistical Society: Series D (The Statistician), 33(4), 391-399. ## Q3: There are different types of outliers and the low-density assumption cannot guarantee that all outliers are excluded. We clarify that due to random errors in the data, especially in synthetic datasets, it is impossible to correctly identify all of them. In this paper, only low-density cases are identified and used as outliers. We will include the above details of the outliers we have identified in the final version. ## Q4: Why is the $\ell_1$ distance used in Eq.(1)? As presented in the original paper (see line 79, right column), the $\ell_1$ norm treats each component of the feature vector equally. $\ell_1$ treats all $\vert x_{if}^{(k)}-x_{jf}^{(k)} \vert$ linearly while $\ell_2$ penalizes large $\vert x_{if}^{(k)}-x_{jf}^{(k)}\vert$ quadratically. This makes it easier to identify outliers with $\ell_1$ than with $\ell_2$. As suggested, we add comparisons between the $\ell_1$ distance and the typical $\ell_2$ distance. Note that we only replace the $\ell_1$ distance with the $\ell_2$ distance. 
As shown in **Rebuttal Table D**, SIOFS with $\ell_1$ distance outperforms that with $\ell_2$ distance on all datasets. We will include these details in the final version. **Rebuttal Table D.** Comparative results of SIOFS with $\ell_1$ distance (denoted as "SIOFS") and $\ell_2$ distances (denoted as "w/ $\ell_2$") on some datasets. "w/ $(\cdot)^1$" means that the $(\cdot)^3$ terms are directly replaced by $(\cdot)^1$ in Eq.(4). The average ACC over all 11 datasets is also reported for a comprehensive comparison. |ACC (%) $\uparrow$|SIOFS|w/ $\ell_2$|w/ $(\cdot)^1$| |-|-|-|-| |CLL|**71.32**$\pm$3.84|65.47$\pm$2.58|70.57$\pm$1.85| |TOX|**83.24**$\pm$4.99|81.19$\pm$2.93|**83.24**$\pm$3.09| |Carcinom|**93.77**$\pm$2.72|**93.77**$\pm$2.33|93.30$\pm$2.19| |Lung|**95.32**$\pm$1.02|94.09$\pm$1.80|95.24$\pm$0.61| |Lymphoma|**89.76**$\pm$2.20|**89.76**$\pm$2.44|**89.76**$\pm$1.52| |Over 11 datasets|**81.80**|81.33|81.64| ## Q5: Explain the modified SC in Eq.(4). Why $(d_i^{(l)}-\mathrm{u}^{(l)} )^3$ is used? How about other terms like $(d_i^{(l)}-\mathrm{u}^{(l)} )^1$. According to the literature (Linton, 2017), the **original formula of the SC** is $SC=\frac{\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})^3}{\left(\frac{1}{n}\sum_{i=1}^n(X_i-\bar{X})^2\right)^{3/2}}$, where $X_i$ are the individual data points, $\bar{X}$ is the mean and $n$ is the number of instances. As mentioned in lines 142-150 (right column), the modified SC can be obtained by substituting $\mathrm{u}^{(l)}$ and $\hat{\sigma}^{(l)}$ into the formula $SC$. That is, the term $(d_i^{(l)}-\mathrm{u}^{(l)})^3$ is directly derived from the term $(X_i-\bar{X})^3$ and has the same statistical meaning. The SC is a statistical measure that quantifies the asymmetry of a probability distribution. It indicates the degree to which data deviate from a symmetric, bell-shaped normal distribution **[1]**. Since we address the highest-density subclusters in the context of ADMHC, it is reasonable to use and modify the SC via Eq.(4). 
In addition, we appreciate the kind suggestion to discuss other terms such as $(d_i^{(l)}-\mathrm{u}^{(l)})^1$. We add the comparison between $(d_i^{(l)}-\mathrm{u}^{(l)})^3$ and $(d_i^{(l)}-\mathrm{u}^{(l)})^1$. To make the comparison reasonable, two terms with $(\cdot)^3$ in Eq.(4) are directly replaced by $(\cdot)^1$ (denoted as "w/ $(\cdot)^1$"). As demonstrated in **Rebuttal Table D**, our SIOFS using the term $(\cdot)^3$ achieves higher ACCs compared to $(\cdot)^1$. We will incorporate the above discussions into the final version. ## Q6: In Line 192, why normalize $\hat{s}^{(l)}$ by $\hat{s}^{(l)}/3$? Please see our response to **Reviewer aEFk, Q5**.
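For concreteness, the two statistics discussed in Q2 and Q5, the classical skewness coefficient SC (the formula from which the modified SC in Eq.(4) is derived) and a 3σ-rule outlier flag, can be sketched as follows (illustrative code on synthetic data; the variable names are not from the paper's implementation):

```python
import numpy as np

# Classical (Fisher-Pearson) skewness coefficient:
#   SC = mean((X - mean(X))^3) / mean((X - mean(X))^2)^(3/2)
def skewness(x):
    x = np.asarray(x, dtype=float)
    m = x.mean()
    return np.mean((x - m) ** 3) / np.mean((x - m) ** 2) ** 1.5

# 3-sigma rule: for normal data, P(|x - mu| > 3*sigma) < 0.003,
# so points outside mu +/- 3*sigma are candidate outliers.
def three_sigma_outliers(x):
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return np.abs(x - mu) > 3 * sigma

rng = np.random.default_rng(0)
sym = rng.normal(size=10_000)          # symmetric distribution: SC near 0
skewed = rng.exponential(size=10_000)  # right-skewed distribution: SC near 2
print(round(skewness(sym), 2), round(skewness(skewed), 2))
```

A near-zero SC indicates a symmetric (bell-shaped) distribution, while a large positive SC signals the asymmetric instance distributions the paper targets; the 3σ threshold then separates the class body from stray candidates.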
Summary: This paper proposes a supervised FS method, Stray Intrusive Outliers based FS (SIOFS), for data classification with intra-class ADMHC. By focusing on Stray Intrusive Outliers (SIOs), SIOFS modifies the skewness coefficient and fuses the threshold in the 3σ principle to identify the class body, scoring features based on the intrusion degree of SIOs. Extensive experiments on 15 diverse benchmark datasets demonstrate the superiority of SIOFS over 12 state-of-the-art FS methods in terms of classification accuracy, normalized mutual information, and confusion matrix. SIOFS has the potential to improve analysis results in sectors that contain classes with intra-class asymmetric instance distribution or multiple high-density clusters, from healthcare to finance. In all, SIOFS provides a theoretical foundation for the further development of effective and interpretable models. Claims And Evidence: In Theorem 2, ‘the larger value in …’ is not rigorous; a better description would be ‘if $\alpha_1>\alpha_2\in(0,1)$, then …’. It is confusing to leave ‘larger’ undefined, especially since, as the experiments in Table 1 show, $\alpha$ varies widely across datasets, ranging from 0.1 to 0.6. The ablation experiment and the appendix also show that changing $\alpha$ has a significant impact on the performance for some datasets. Is the final selection of $\alpha$ determined by the test results? Methods And Evaluation Criteria: The scheme designed in the experiment is meaningful and fair, and can explain the superiority of the algorithm to a certain extent. The designed evaluation methods are consistent with most related work, which reflects the rationality of the validation framework. Theoretical Claims: Theorem 2 indicates that a larger $\alpha$ is necessary. However, the significant variation in $\alpha$ across different datasets presented in Table 1 raises concerns about the alignment between the theoretical derivation and the experimental validation. 
From the results in Figures 5 and 7, why does α differ so significantly across datasets? Experimental Designs Or Analyses: The elements of this experiment have been fully addressed; however, there are additional considerations regarding the deep learning component. While feature selection does not appear to offer significant performance advantages over deep feature extraction, as noted at the end of the article, its primary advantage over deep learning lies in interpretability: specifically, the ability to identify key factors influencing health outcomes rather than focusing solely on accuracy. Supplementary Material: The related work and additional experimental settings in the supplementary material are detailed, but the proof of Theorem 2 does not dispel my doubts about its rigor. Relation To Broader Scientific Literature: As recent surveys show, the challenge of high-dimensional data classification has always existed, and feature selection is a widely studied solution. In particular, the asymmetric instance distributions and multi-density clusters addressed in this paper constitute strong prior knowledge, which can be used to design models with strong pertinence and predictable performance improvements. The recent use of outliers for feature selection [Yuan et al., 2022; Yuan et al., 2024] is a novel and niche research topic that benefits the development of machine learning. However, we hope the authors maintain the spirit of open source, so that more people can discover the highlights of this work and apply it in practice rather than developing it in a closed manner. Open-sourcing the code will not only increase citations; it will also allow peers beyond the reviewers to examine the work and provide more professional opinions.
Essential References Not Discussed: "Feature selection techniques for machine learning: a survey of more than two decades of research." This article's descriptive classification of feature selection is very similar to the Review of FS in the appendix's related work, but it is not cited. This is not crucial, however, as the two most important and relevant articles are cited: the FSDOC (Yuan et al., 2022) and IOFS (Yuan et al., 2024) methods. Other Strengths And Weaknesses: The article demonstrates a high degree of originality and clarity in its narrative. However, the descriptions of the theorems and key indicators are somewhat ambiguous, which poses challenges for code implementation. For instance, at line 161, "the most of" should be specified with a precise percentage (e.g., 0.8, 0.75, or 0.6). Additionally, the transition from 3σ to Formula 5, the adjustment of the coefficient "2" mentioned in the left column at line 189, and the normalization operation on line 193 all appear to be based on heuristic reasoning rather than rigorous derivation. Should such parameters be supported and validated by experiments? Other Comments Or Suggestions: 1. The discussion and exposition need to be more rigorous. 2. The experiments need to be considered carefully. Questions For Authors: I am particularly concerned about the selected value of α, specifically whether the experimentally determined optimal α aligns with the expectations derived from theoretical considerations. My primary concern is a potential discrepancy between theory and experimental results: the experimentally selected optimal α may not satisfy the criteria of "larger" and "the most of" anticipated in the theoretical derivation. This concern is exacerbated by some minor errors identified in this paper, which raise doubts about the consistency between theory and practice.
Line 50, right column, class "2": there is no class 2 in Figure 1(b); line 71, left column: "radii"; line 445, left column: "multipple". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's feedback. Below, we address the concerns in detail.

## Q1: Some ambiguous descriptions about the theorems. Inconsistency between theory and experiment about α.

As addressed in our response to **Reviewer 52hZ, Q1**, we clarify that the condition on $\alpha$ in Theorem 2 does not contradict the experimental results. When $\alpha$ is small, the conclusion in lines 184-187 still holds. $\alpha$ has a significant impact on ACC because of the scattered distribution of clusters and multiple local high-density subclusters (see Fig. 1b). This problem can be solved by selecting more features. In addition, we regret the unclear descriptions. We delete "larger" and "the most of", and **revise Theorem 2** as follows: For $d_1^{(l)},d_2^{(l)},\dots,d_{n_l}^{(l)}$, $\dots$. When $\alpha\in(0,1)$ and $\hat{\sigma}^{(l)}>0$, $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}>mode^{(l)}$ holds with probability 1 if $\hat{s}^{(l)}>0$, and $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}<mode^{(l)}$ holds with probability 1 if $\hat{s}^{(l)}<0$. We will include the above details in the final version.

## Q2: Lack of evidence or practical examples to validate the advantage over deep learning-based FS.

We appreciate the constructive feedback provided. As mentioned in lines 417-418, deep FS has extensive applications, such as Remote Sensing (RS) scene classification and 3D object recognition. We add the ACC results obtained by selecting all features (denoted as "AllFeat") in **Rebuttal Table B** and the CMs with the top 5% of features (see https://i.postimg.cc/HLw5K8z6/CMs-AID.png) on the AID dataset. These results demonstrate that SIOFS is superior to AllFeat while some comparative methods (such as ILFS and S^2DFS) are inferior. As shown in the CMs, our SIOFS can further improve the classification performance on deep features compared with other methods.
Combined with Fig. 1a, for the class "School" with intra-class ADMHC, the number of instances correctly classified by SIOFS is 28, versus 24 by TRC and 27 by Fisher and ReliefF. SIOFS gains better performance in predicting the confusing classes, being able to identify key factors influencing RS monitoring. On UCM, SIOFS (94.95) also performs better than AllFeat (94.48); on ModelNet, SIOFS (93.03) has a higher ACC than AllFeat (92.71). We will include these details in the final version.

**Rebuttal Table B.** ACC (%) on AID. Six FS methods are randomly selected for comparison.

||ACC (%) $\uparrow$|
|-|-|
|*AllFeat*|*84.36*|
|Fisher|84.92|
|ReliefF|84.80|
|TRC|85.05|
|ILFS|83.93|
|FSDOC|84.51|
|S^2DFS|84.35|
|SIOFS|**85.83**|

## Q3: Maintain the spirit of open source.

We appreciate the valuable comment. The code will be released once the paper is accepted.

## Q4: Add an essential reference.

As suggested, we will include this paper in the Related Work section in the final version.

## Q5: Explain the transition from 3σ to Eq.(5), the adjustment of the coefficient "2" in line 189, and the normalization in line 193.

As mentioned in lines 129-131 (right column) and lines 179-187, and combined with the revised Theorem 2, the transition to Eq.(5) is heuristic reasoning that follows the conclusion in lines 183-187 (see "That is, ..."), and it is necessary to introduce the SC for normalization. The original formula of $SC$ (see our response to Reviewer 5k2J, Q5) and the literature [1] (see the response to Reviewer 5k2J, Q2) clarify that $s^{(l)}\in(-3,3)$ is an empirical guideline, providing intuitive thresholds to assess skewness severity and guide data adjustments, and reflecting realistic bounds for most practical datasets. That is, we have $\frac{1}{3}s^{(l)}\in(-1,1)$, and then $2-\frac{1}{3}s^{(l)}\in(1,3)$.
As mentioned in lines 198-205, $2-\frac{1}{3}s^{(l)}$, with its range of (1,3), is more rational than the constant 2 for $\sigma^{(l)}$ to address the highest-density subcluster in the context of ADMHC. As demonstrated in **Rebuttal Table C**, the proposed $2-\frac{1}{3}s^{(l)}$ outperforms the other formulas in most cases, and the average ACCs over the 11 datasets further quantitatively prove the superiority of our method. We will incorporate the above discussions into the final version.

**Rebuttal Table C.** Results of different adjustments and normalizations on some datasets. Our SIOFS is equipped with "$2-\frac{1}{3}s^{(l)}$". Additional settings are included in the table.

|ACC (%) $\uparrow$|$2-\frac{1}{3}s^{(l)}$|$1-\frac{1}{3}s^{(l)}$|$3-\frac{1}{3}s^{(l)}$|$2-\frac{1}{2}s^{(l)}$|$2-\frac{1}{4}s^{(l)}$|
|-|-|-|-|-|-|
|CLL|**71.32**$\pm$3.84|69.52$\pm$1.42|66.37$\pm$1.24|70.57$\pm$2.53|69.52$\pm$4.58|
|TOX|83.24$\pm$4.99|**83.63**$\pm$4.64|82.26$\pm$2.84|83.43$\pm$3.72|82.65$\pm$3.98|
|Carcinom|**93.77**$\pm$2.72|93.10$\pm$1.05|92.82$\pm$2.14|92.72$\pm$2.36|93.10$\pm$1.96|
|Lung|95.32$\pm$1.02|**95.63**$\pm$0.91|95.07$\pm$0.94|95.22$\pm$1.73|95.07$\pm$1.21|
|Lymphoma|**89.76**$\pm$2.20|88.54$\pm$2.95|87.50$\pm$2.48|**89.76**$\pm$3.58|88.19$\pm$1.87|
|Over 11 datasets|**81.80**|81.35|80.98|81.68|81.48|

## Q6: Minor errors.

We will revise them in the final version.

---

Rebuttal Comment 1.1: Comment: Dear authors, thank you for your reply. The imprecision of the theory and its experimental verification is still my concern. If empirical operations are involved, is the theoretical derivation still rigorous? Experiments can sometimes be deceptive.

---

Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer's feedback. We clarify that whether $\alpha$ is larger or smaller, the result can be theoretically derived. The experimental results validate our theory. We summarize the details of the derivations as follows.
**Theoretical derivation about $\alpha$.** As mentioned in our submission, from the definition of the RDM center (see Eq. 3 and lines 137-139) and the fact that $\hat{s}^{(l)}$ has the same property as $s^{(l)}$ (see lines 154-157, right column), it is natural to deduce (in lines 183-187) that when $\alpha$ is smaller in (0,1), $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}$ is relatively large for obtaining the highest density values in $d_1^{(l)},\dots,d_{n_l}^{(l)}$ when $\hat{s}^{(l)}>0$. Similarly, if $\hat{s}^{(l)}<0$, $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}$ is relatively small for obtaining all the highest density values in $d_1^{(l)},\dots,d_{n_l}^{(l)}$ (see **The detailed theoretical derivation when $\alpha$ is smaller**). To make this theory clear, we summarize it as follows: For $d_1^{(l)},d_2^{(l)},\dots,d_{n_l}^{(l)}$, $\mathrm{u}^{(l)}$ and $\hat{s}^{(l)}$ are the same as in (4), and $mode^{(l)}$ is the same as in the footnote of Section 3.1. When $\alpha\in(0,1)$ and $\hat{\sigma}^{(l)}>0$, $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}>mode^{(l)}$ holds with probability 1 if $\hat{s}^{(l)}>0$, and $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}<mode^{(l)}$ holds with probability 1 if $\hat{s}^{(l)}<0$. **The detailed theoretical derivation when $\alpha$ is smaller**: According to the definition of the RDM center, $\mathrm{u}^{(l)}$ is the average of the top $\lceil\alpha\cdot n_l\rceil$ high density values in $d_1^{(l)},\dots,d_{n_l}^{(l)}$. When $\alpha$ is smaller in $(0,1)$ but $\lceil\alpha\cdot n_l\rceil=1$, we have $\mathrm{u}^{(l)}=mode^{(l)}$. Combined with $\hat{\sigma}^{(l)}>0$, we have $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}>mode^{(l)}$. Following Theorem 1, let $\xi$ be the distance between an instance and the center of class $l$, and let $f(x)$ be the probability density function of $\xi$. Given that the SC of $f(x)$ satisfies $s^{(l)}>0$, there are two properties of statistical probability: (i) $f(mode+\varepsilon)>f(mode-\varepsilon)$, for any $\varepsilon>0$.
(ii) $f(x)$ is monotonically increasing near the left side of $mode$ and decreasing near the right side. $d_1^{(l)},\dots,d_{n_l}^{(l)}$ is a random sample of $\xi$; let $d_2,d_3$ denote the values with the 2nd and 3rd largest densities in $d_1^{(l)},\dots,d_{n_l}^{(l)}$, respectively, and $d_1=mode^{(l)}$. Obviously, $f(d_1)>f(d_2)>f(d_3)$. When $\alpha$ is smaller in $(0,1)$ but $\lceil\alpha\cdot n_l\rceil=2$, $\hat{\sigma}^{(l)}>0$ and $\hat{s}^{(l)}>0$. $\hat{s}^{(l)}$ has the same property as $s^{(l)}$ (see lines 154-157, right column). Due to random errors in the sampling, the probability that $d_2>d_1$ holds w.r.t. property (i) is 1, and then $\mathrm{u}^{(l)}=\frac{d_1+d_2}{2}>d_1$, i.e., at least $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}>mode^{(l)}$ holds with probability 1. When $\alpha$ is smaller in $(0,1)$ but $\lceil\alpha\cdot n_l\rceil=3$, $\hat{\sigma}^{(l)}>0$ and $\hat{s}^{(l)}>0$. If $d_3>d_1$, then $\mathrm{u}^{(l)}=\frac{1}{3}(d_1+d_2+d_3)>d_1$; if $d_3<d_1$ (see https://i.postimg.cc/GmzqF1fV/PDF.png), since $f(d_3)<f(d_2)$, combining property (i), $d_3$ can only be very close to $d_1$ when $\hat{s}^{(l)}>0$, namely $\mathrm{u}^{(l)}=\frac{1}{3}(d_1+d_2+d_3)$ is very close to $d_1$. Combined with $\hat{\sigma}^{(l)}>0$, it holds with probability 1 that $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}>mode^{(l)}$ for $\lceil\alpha\cdot n_l\rceil=3$. When $\lceil\alpha\cdot n_l\rceil=4,5,\dots$, the same conclusion can be obtained. Similarly, if $\hat{s}^{(l)}<0$, $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}$ is relatively small for obtaining all the highest density values in $d_1^{(l)},\dots,d_{n_l}^{(l)}$. **Experimental verification for $\alpha$.** Based on the theory and explanations, the value of $\alpha$ is reasonable in $(0,1)$, and thus we set $\alpha=0.1,\dots,0.9$ in Figs. 7 and 9. As mentioned in lines 113-115 of our submission, $\alpha$ is the ratio of higher-density instances to all instances in a class, so it is affected by dataset characteristics.
For some datasets, such as CLL (see Fig. 1b), due to the scattered distribution of clusters and multiple local high-density subclusters, $\alpha$ has a significant impact on ACC in some cases. The detailed responses to the reviewer's concerns are given in the previous rebuttal. We will polish the complete manuscript, not limited to "larger" in Theorem 2 and "the most of" in line 178. We promise to release the code once the paper is accepted. As demonstrated by the above explanations, our theoretical derivation is complete and rigorous (whether $\alpha$ is larger or smaller), and the experimental verification is effective. To avoid confusion, all of the above details will be included in the final version. We hope this is sufficient reason to consider raising the score.
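As a numerical sanity check on the direction of the inequality $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}>mode^{(l)}$ argued above, the following sketch can be run. The lognormal sample and the fixed-width histogram mode estimate are illustrative stand-ins for the paper's density-based quantities, not its actual setup:

```python
import random
import statistics

# Sketch: for a right-skewed (positively skewed) sample, mean + 2*std
# exceeds a histogram estimate of the mode, mirroring the direction of
# u + 2*sigma > mode claimed above for s > 0. The lognormal sample and
# the bin width are hypothetical stand-ins, not the paper's setup.
random.seed(0)
sample = [random.lognormvariate(0, 1) for _ in range(10_000)]

mean = statistics.fmean(sample)
std = statistics.stdev(sample)

# Crude mode estimate: centre of the most populated fixed-width bin.
width = 0.1
counts = {}
for x in sample:
    b = int(x / width)
    counts[b] = counts.get(b, 0) + 1
mode_est = (max(counts, key=counts.get) + 0.5) * width

print(mean + 2 * std > mode_est)  # True for this right-skewed sample
```

For the standard lognormal, the mode ($e^{-1}\approx 0.37$) lies well below the mean ($e^{0.5}\approx 1.65$), so the inequality holds by a wide margin; this only illustrates the direction of the claim, not its probability-1 statement.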
Summary: For the problem of high-dimensional data classification with intra-class asymmetric instance distribution or multiple high-density clusters (ADMHC), a novel supervised feature selection (FS) method named Stray Intrusive Outliers-based FS (SIOFS) is proposed. The proposed method uses the RDM center to characterize the class body, and the modified skewness coefficient (SC) is adjusted and fused into the $3 \sigma$ principle to define the class body. Then, the intrusion degree is modeled based on the conclusion about intersecting spheres. Finally, the feature ranking is determined by the intrusion degrees of SIOs. Experimental results demonstrate the effectiveness of the proposed method over other state-of-the-art FS methods. Claims And Evidence: $\bullet$ This paper mainly proposes the claim: in high-dimensional data classification with intra-class ADMHC, the distribution of distances is asymmetric or multi-peaked. Existing FS methods rarely aim to identify the class body in the context of intra-class ADMHC. The proposed SIOFS method targets intra-class ADMHC for data classification. $\bullet$ The theoretical results in this paper and the related experimental verification provide strong and clear evidence for the claim. Methods And Evaluation Criteria: Yes. The proposed method indeed demonstrates advantages over other state-of-the-art FS methods in experiments. Theoretical Claims: Yes. I have reviewed the theoretical proofs, and their correctness supports the claim. Experimental Designs Or Analyses: Yes. The experimental designs to verify the effectiveness of the proposed method are complete, and the comparisons with other state-of-the-art FS methods and the analysis of experimental results on several diverse benchmark datasets demonstrate the effectiveness of SIOFS well. Supplementary Material: Yes. I have reviewed the proofs of the theoretical results and the further details of the methods and experiments in the supplementary material.
Relation To Broader Scientific Literature: Prior FS methods rarely aim to identify the class body in the context of intra-class ADMHC. This paper explains the motivation for quantifying the intrusion degree of SIOs and proposes the SIOFS method to deal with intra-class ADMHC for data classification. Essential References Not Discussed: No. All the essential references have been adequately discussed. Other Strengths And Weaknesses: $\bullet$ Strengths: The proposed method effectively handles the data classification problem with intra-class ADMHC and achieves better performance than existing methods. $\bullet$ Weaknesses: 1. Theorem 2 assumes that $\alpha$ is a larger value, but the experimental results show that the value of $\alpha$ is often relatively small, with the largest being $0.6$, which occurs only occasionally. What is the exact range of values of $\alpha$ in Theorem 2? How can the inconsistency between the theoretical results and the experiments be explained? 2. In Equation (6), the coefficient $2$ is adjusted to $2-\frac{1}{3} \hat{s}^{(l)}$, and the motivation and basis for this adjustment are not explained in detail. Can the coefficient $\frac{1}{3}$ of $\hat{s}^{(l)}$ be adjusted to other values between 0 and 1? Other Comments Or Suggestions: Line 161, "Condition i" --> "Condition (i)"; Line 123, right column, "condition ii" --> "condition (ii)"; Line 130, right column, "condition ii" --> "condition (ii)". Questions For Authors: For classification problems, an increase in the number of classes has a significant impact on the performance of a method, that is, it increases the difficulty of classification. I noticed that the number of classes in the datasets used in the experiments is actually relatively small, with the largest having only $40$ classes. A natural question is whether the proposed method can still show a significant advantage over existing methods when the number of classes is large.
Therefore, as the number of classes increases, the effectiveness of the proposed method needs further experimental verification. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank Reviewer 52hZ for the recognition of our work and for providing constructive comments.

## Q1: Explain the inconsistency between Theorem 2 and the results about $\alpha$.

Sorry for the incomplete statement of Theorem 2. We clarify that the condition on $\alpha$ in Theorem 2 does not contradict the experimental results. **When $\alpha$ is small, the conclusion in lines 183-187 (left column) still holds**. For easy understanding, we add the following statement. According to the definition of the RDM center (see lines 96-98, right column), $\mathrm{u}^{(l)}$ is the average of the top $\lceil\alpha\cdot n_l\rceil$ high density values in $d_1^{(l)},\dots,d_{n_l}^{(l)}$. When $\alpha$ is smaller but $\lceil \alpha\cdot n_l\rceil=1$, we have $\mathrm{u}^{(l)}=mode^{(l)}$. Following Theorem 1, let $\xi$ be the distance between an instance and the center of class $l$, and let $f(x)$ be the probability density function of $\xi$. Assume that the SC of $f(x)$ is greater than 0. There are two properties of statistical probability (Harris & Stocker, 1998): (i) $f(mode+\varepsilon)>f(mode-\varepsilon)$, for any $\varepsilon>0$. (ii) $f(x)$ is monotonically increasing near the left side of the $mode$ and decreasing near the right side. In Theorem 2, $d_1^{(l)},\dots,d_{n_l}^{(l)}$ is a random sample of $\xi$; let $d_2,d_3$ denote the values with the second and third largest densities in $d_1^{(l)},\dots,d_{n_l}^{(l)}$, respectively, and $d_1=mode^{(l)}$. Obviously, $f(d_1)>f(d_2)>f(d_3)$. When $\alpha$ is small but $\lceil\alpha\cdot n_l\rceil=2$, $\hat{\sigma}^{(l)}>0$ and $\hat{s}^{(l)}>0$. Given the chance of errors in the sampling process, the probability that $d_2>d_1$ holds according to condition (i) is 1; then $\mathrm{u}^{(l)}=\frac{d_1+d_2}{2}>d_1$, i.e., at least $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}>mode^{(l)}$ holds with probability 1. When $\alpha$ is small but $\lceil\alpha\cdot n_l\rceil=3$, $\hat{\sigma}^{(l)}>0$ and $\hat{s}^{(l)}>0$.
If $d_3>d_1$, then $\mathrm{u}^{(l)}=\frac{1}{3}(d_1+d_2+d_3)>d_1$; if $d_3<d_1$ (see the figure at https://i.postimg.cc/GmzqF1fV/PDF.png), since $f(d_3)<f(d_2)$, combining property (i), $d_3$ can only be very close to $d_1$ in this skewed distribution, i.e., $\mathrm{u}^{(l)}=\frac{1}{3}(d_1+d_2+d_3)$ is very close to $d_1$. Combined with $\hat{\sigma}^{(l)}>0$, it holds with probability 1 that $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}>mode^{(l)}$ for $\lceil \alpha \cdot n_l\rceil=3$. When $\lceil\alpha\cdot n_l\rceil=4,5,\dots$, the same conclusion can be obtained. Similarly, when $\hat{s}^{(l)}<0$ and $\alpha$ is small but $\lceil\alpha\cdot n_l\rceil=1,2,\dots$, $\mathrm{u}^{(l)}+2\hat{\sigma}^{(l)}<mode^{(l)}$ holds with probability 1. In addition, we rewrite "Based on Theorem 2..." (line 181) as "Combining Theorem 2..." to make the discussion and expression more rigorous. Based on the above explanations and Theorem 2, the value of $\alpha$ is reasonable in (0,1), and thus we set $\alpha=0.1,\dots,0.9$ in Figures 5 and 7. Since $\alpha$ is the ratio of higher-density instances to all instances in a class (see lines 113-115), it is affected by dataset characteristics. For example, due to the scattered distribution of clusters and multiple local high-density subclusters in CLL (see Fig. 1b), $\alpha$ has a significant impact on ACC (see Fig. 7). This problem can be solved by selecting more features. The above statements will be added in the final version.

## Q2: Explain the adjustment of the coefficient "2" mentioned in the left column at line 189 and the normalization operation on line 193.

Please see our response to **Reviewer aEFk, Q5**.

## Q3: Typos: conditions i --> (i), ii --> (ii), and iii --> (iii).

As suggested, we have gone over the paper and will revise them in the final version.

## Q4: Can SIOFS still have a significant advantage over other methods when the number of classes is large?
As suggested, we add comparisons with the baselines on the Caltech101 dataset, which has 101 classes and 3030 images. Like (Yuan et al., 2022), we also use Fisher Vectors (262144 dimensions). Additionally, in order to reduce the computation time while preserving the main properties of the original representation, we uniformly subsample these very high-dimensional feature vectors with a stride of 50 components, i.e., components $1,51,101,\dots$, and obtain the final feature vector (5243 dimensions) for each image. Following the setting in (Yuan et al., 2022), we select 60%, 70%, 80%, and 90% of the features. As in the original paper, ACC and NMI are calculated for all baselines, and SIOFS outperforms all comparative methods in all cases. Some results are shown in **Rebuttal Table A**.

**Rebuttal Table A.** ACC (%) on the large-scale high-dimensional Caltech101 dataset. Details will be added in the final version.

|ACC (%) $\uparrow$|60%|70%|80%|90%|
|-|-|-|-|-|
|FSDOC|43.56|43.43|43.93|43.20|
|S2DFS|42.38|43.43|43.56|43.66|
|IOFS|45.45|48.84|51.52|53.83|
|EGCFS|43.96|43.89|43.86|44.26|
|FSDK|39.14|40.46|42.15|42.97|
|SIOFS|**45.74**|**48.98**|**51.58**|**54.13**|
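The uniform subsampling step described above can be sketched as follows (the dimensionality figures come from the rebuttal; the 0-based indexing is an assumed implementation detail):

```python
# Sketch of the uniform feature subsampling described above: keep every
# 50th component of a 262144-dimensional Fisher Vector, which yields a
# 5243-dimensional feature vector per image.
FULL_DIM = 262144
STRIDE = 50

kept = list(range(0, FULL_DIM, STRIDE))  # components 1, 51, 101, ... (1-based)
print(len(kept))  # 5243
```

This confirms the stated arithmetic: $\lceil 262144 / 50 \rceil = 5243$ components survive the subsampling.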
The Elicitation Game: Evaluating Capability Elicitation Techniques
Accept (poster)
Summary: This paper aims to test capability elicitation techniques for evaluating language models. The motivation is that models are often deployed with undiscovered harmful capabilities that later resurface in deployment. Building on prior work on password-locked models, they introduce models with sneakier passwords that are harder to elicit with simple prompting techniques. They do this by using 'circuit breaking' to suppress the backdoor behavior unless the password is present. They test 9 capability elicitation techniques on password-locked and circuit-broken password-locked models. The key finding of the paper is that fine-tuning methods are distinctly the most effective at eliciting latent capabilities. Claims And Evidence: I feel convinced by this paper (and related work) that fine-tuning methods are particularly effective at capability elicitation. This provides a useful and compelling piece of evidence for this. I think that this work did what it set out to do very well. The main thing that this paper does not do is show that its model organisms are particularly good models of real-world capabilities. One could red-team models that had undergone unlearning or refusal fine-tuning. This would have been nice, but related things have already been done, and not doing this doesn't negate the clear value of this paper. Methods And Evaluation Criteria: I found them compelling. One could always complain about using more models/tasks, but I think that the experimental scope was reasonable. Theoretical Claims: No theory (not needed). Experimental Designs Or Analyses: The key experimental approach involved training Mistral and Gemma models with conventional and enhanced password locking and then attacking them to re-elicit the suppressed capability. This is a sound approach. Supplementary Material: I spent about a minute looking through the appendix and did not spot anything that I think is suspicious or unexpected.
Relation To Broader Scientific Literature: I work on some closely-related topics, and I think that the results in this paper are not totally unexpected. Nonetheless, I strongly believe that this work is useful and citeable. Essential References Not Discussed: Could the authors work on adding some more discussion of backdoors (e.g., https://ieeexplore.ieee.org/abstract/document/9996741?casa_token=I1LX2IcCjbMAAAAA:sin7jUwhWhI5aqHqeqws2uilBXiCuUCa6BPaafVNxBu2Oz1wCiEvPVp8HY1WFTY0IUtofkCS4w) and model tampering attacks (e.g., https://arxiv.org/abs/2502.05209)? Other Strengths And Weaknesses: Strength: I might be biased because I am familiar with the subject matter, but this paper was easy to read and understand. Weakness: This paper did an adequate but not particularly thorough job of discussing related work. In particular, it seems important to discuss backdoors (including obfuscated backdoors) more to credit past work on this topic. Other Comments Or Suggestions: None Questions For Authors: I think the future work paragraph is reasonable, but FWIW, I think that bridging findings from model organisms with real model evals seems like a good priority. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thanks for your help and feedback! We’re glad that you found the motivation and contributions compelling.

> The main thing that this paper does not do is show that its model organisms are particularly good models of real-world capabilities. One could red-team models that had undergone unlearning or refusal fine-tuning. This would have been nice but related things have already been done, and not doing this doesn't negate the clear value of this paper.

Do you agree with our response to Tqzj that: “This is a great point and something that we gave some consideration! We think that our model organisms provide an upper-bound for the difficulty of eliciting hidden capabilities from real systems. This is because our models are not constrained to generate realistic behaviour hiding capabilities, e.g., the circuit-broken models can output nonsensical tokens in the coding setting, whereas real sandbagging models would likely need to sandbag in an undetectable way, e.g., by imitating the behaviour of a weaker system. This is why we say in the paper that our model organisms “act as a worst-case, adversarial scenario” for eliciting methods. From a practical standpoint, it seems plausible that elicitation methods which are effective on our model organisms would also be effective on real systems (though the converse is less likely). We will add further discussion of this point to the paper.”

> I found them compelling. One could always complain about using more models/tasks, but I think that the experimental scope was reasonable.

We’re glad that you found the experimental scope compelling. Reviewers XiFg and BBji commented asking for further tasks, so we extended our results to MMLU (see response to XiFg).

Thank you for providing the references to backdoors and model tampering attacks. They are definitely related to our work, and we have added further discussion of them, in line with your suggestion.
> I think the future work paragraph is reasonable, but FWIW, I think that bridging findings from model organisms with real model evals seems like a good priority.

Creating realistic model organisms is difficult, which is a main reason why we opted for the approach described above. However, there are some exciting possibilities here, and we agree that this is a good priority. First, one could create model organisms with certain properties in toy scenarios and test them in realistic scenarios (see [Denison et al., 2024](https://arxiv.org/abs/2406.10162)). Second, one could create scenarios where AI systems “decide by themselves” to hide their capabilities, and then apply elicitation techniques (see e.g., [Meinke et al., 2025](https://arxiv.org/abs/2412.04984) or [Greenblatt et al., 2024](https://arxiv.org/abs/2412.14093)).

---

Thank you for your review!
Summary: This paper evaluates techniques for eliciting hidden capabilities in language models. The authors create two types of "model organisms" (password-locked and circuit-broken models) with intentionally hidden capabilities, then test various elicitation methods including prompting, activation steering, and fine-tuning. Their findings suggest that for MCQ tasks, combined prompting techniques work well, while for code generation, only fine-tuning reliably elicits capabilities from the more robust circuit-broken models. Claims And Evidence: - The core claim that circuit-broken models are more resistant to elicitation than password-locked models is supported, particularly for code generation. - The evidence showing the effectiveness of combining techniques is clear and convincing for these specific tasks/domains. Methods And Evaluation Criteria: - The model organism approach is nice, but a rehashed idea that is evaluated in this work. - Evaluation on multiple-choice (WMDP) and code-generation (APPS) tasks provides decent coverage for the purposes of a small-scale study. - I also wonder whether the relatively small-scale models used (7B-9B parameters) are representative of frontier systems where capability hiding is a real concern, and likewise for the domains and style of tasks used. How does this change with model size? How does this change with reasoning models trained with rewards? Theoretical Claims: No formal proofs to check, as the paper is primarily empirical. Experimental Designs Or Analyses: - The experimental methodology is generally sound. - However, the scope of testing is very limited: only two tasks/domains with mostly saturated benchmarks. - It would have benefited from testing more diverse elicitation in domains with real-world impact.
The authors even mention that with there being only four reasonable tokens (A, B, C, or D) for WMDP, the elicitation method merely needs to guide the output to one correct token; this is certainly not the case if these elicitation and model-organism techniques are to be applied/studied more broadly. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The paper clearly builds on model organism and capability elicitation techniques from prior work, and focuses on evaluation only. Essential References Not Discussed: -- Other Strengths And Weaknesses: Strengths: - Practical insights for AI evaluators on which methods to prioritize going forward, modulo the tasks chosen. - Clear and interpretable presentation of results. Weaknesses: - The paper makes recommendations with limited evidence in two quite artificial/niche settings. - It lacks deeper analysis of why certain elicitation techniques succeed or fail. It would have been nice to deeply analyze the models' response patterns qualitatively. - The overall contribution feels a bit incremental, with minimal/surface-level insights into model behaviour. Other Comments Or Suggestions: -- Questions For Authors: -- Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your feedback. We're glad you appreciated our novel model organism approach, clear results, and practical insights. > The model organism approach is nice, but a rehashed idea that is evaluated in this work Our paper focuses on using model organisms to evaluate techniques for eliciting hidden capabilities, improving on past work by introducing a more robust model organism and providing an extensive evaluation of different elicitation techniques. We note that other reviewers found our contribution compelling. > [on small scale & reasoning models, and domain/style of tasks] We used open-weight models to enable us to create model organisms and to evaluate elicitation methods requiring weight access, and we used smaller models to meet our engineering and compute constraints. Are there specific experimental results you would like validated on larger models? We note that scaling experiments with reasoning models like DeepSeek R1 (released just days before ICML submission) would be interesting but outside our current scope. (R1 was the most prominent open-weight reasoning model at the time.) > The paper clearly builds on model organism and capability elicitation techniques from prior work, and focusses on evaluation only. This is incorrect, because we introduce a novel model organism and show that it improves on prior work (i.e., it is more robust to elicitation). We note that the other reviewers commented positively on this contribution. > However, very limited scope of testing - only two tasks/domains with mostly saturated benchmarks We've extended our results to MMLU (see response to XiFg). With WMDP, APPS, and MMLU, we cover diverse domains where hidden capabilities could be harmful. Benchmark saturation is not a concern for our study. 
The importance of detecting concealed capabilities remains regardless of base model performance, and our evaluated models do not reach saturation on these benchmarks. > The authors even mention that there being only four reasonable tokens (A, B, C, or D) for WMDP [...] this is certainly not the case if these elicitation techniques are to be applied more broadly. This limitation of WMDP is exactly why we also study the (more realistic and open-ended) code-generation setting on APPS. We extended our results to MMLU, requiring the model to perform across a wide range of diverse domains to hide its capability. It also makes certain elicitation methods, such as persona or concept methods, more challenging because it is more difficult for evaluators to determine broad-covering concepts or personas that would simply “guide the output to one correct token”. > [...] recommendations with limited evidence in 2 quite artificial/niche settings As noted, the APPS code-gen setting is quite realistic, and we now extend our results to MMLU (see response to XiFg). But indeed, we invite future work on additional settings to make our recommendations more robust. > Lacks deeper analysis [...] We agree this would be valuable, but providing rigorous analyses across multiple methods requires effort beyond our current scope, especially given the multiple elicitation methods. We do offer preliminary hypotheses (lines 360-372, right column) as a stepping stone for future theoretical research. An important insight from our work is that hidden capabilities can be embedded within specific layers of a model through mechanisms such as LoRA adapters and/or circuit-breaking. This suggests that evaluators who fine-tune models at specific layers—common practice with LoRA—may remain vulnerable to the persistence of hidden capabilities. Concretely, circuit-broken model organisms can be trained in a way that fine-tuning only on the last 40% of layers does not elicit performance. 
While this point was accidentally excluded from the original submission, it has now been incorporated into the revised manuscript. > The overall contribution feels a bit incremental [...] To re-emphasize our contributions, we: - Introduce a novel model organism combining password-locking with circuit-breaking that demonstrably improves robustness to capability elicitation. - Provide a comparative evaluation of multiple elicitation techniques across different tasks and models, showing that fine-tuning methods (especially anti-refusal training) are most effective. - Demonstrate that combining techniques (N-shot, prefilling, and steering) significantly improves elicitation effectiveness when fine-tuning access is unavailable. It seems like you may have under-appreciated the novel model organism. --- Thanks again for taking the time to review our paper! We hope that the additional experiments and these responses help bring the paper in line with your expectations for a pioneering study of capability elicitation methods. We would be glad to address any further questions or concerns you may have.
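The layer-restriction setting discussed above ("fine-tuning only on the last 40% of layers") can be made concrete with a small sketch. This is our illustration, not the authors' code: the helper name and the default fraction are our assumptions, and the function simply selects which transformer blocks an evaluator would mark trainable.

```python
# Hypothetical helper (ours, not the authors'): pick the transformer blocks an
# evaluator would fine-tune when restricting training to the last `fraction`
# of a model's depth, as in the "last 40% of layers" setting described above.
def blocks_to_finetune(n_blocks: int, fraction: float = 0.4) -> list[int]:
    """Return indices of the last `fraction` of blocks (at least one block)."""
    k = max(1, round(n_blocks * fraction))
    return list(range(n_blocks - k, n_blocks))

# For a 32-block model this selects blocks 19..31; the rebuttal's point is
# that a circuit-broken organism can keep its capability hidden under exactly
# this restricted fine-tuning regime.
```

The same index list could then be passed to a parameter-efficient fine-tuning setup that only attaches trainable adapters to the selected blocks.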
Summary: In this paper, the authors seek to evaluate different capability elicitation methods from the literature. Since there are too many to reasonably evaluate, they choose a subset of techniques by prioritizing their cost, simplicity and ease of use. The authors propose to use *model organisms* that mimic real-world models whose capabilities have been hidden, to evaluate the different techniques. Motivated by the finding that password-locked models often used in the literature are brittle in the face of capability elicitation interventions, the authors combine this with the circuit-breaking training techniques, leading to more robust model organisms. The authors then benchmark their selected elicitation techniques on their 2 model organisms (password-locked, circuit-broken), on a multiple-choice question-answering task and a coding task, on 3 models in the 7b-9b range (one base, 2 instruct). They find that: * their circuit-broken model is much more robust to elicitation; * elicitation technique efficiency varies between task and model but that supervised fine-tuning is usually the best performing one; * combining techniques performs better; * on the more complex coding task only finetuning (supervised as well as anti-refusal) recovers the full model capability. Claims And Evidence: The claims are substantiated by evidence from 2 tasks and 3 models. I find the quality of evidence relatively satisfactory, but would be happy to see confirmation from other tasks since some results differ importantly between the MCQA and coding task. For instance, that might help understand why prefilling is so effective for coding and password-locked models (Sec 4.2), and tease out why elicitation in general is much less effective for coding (Sec 5). Methods And Evaluation Criteria: I wonder whether the Mistral model makes for an accurate model organism. 
More recent models have undergone much more extensive post-training, which might affect how they behave under capability elicitation. Coding is a good proxy for complex capabilities that one might want to hide. MCQA is probably slightly too simplistic for a model of dangerous capabilities as relates to misuse risks. One might expect bad actors to query the model in natural language, such that a generative evaluation of a model's harmful knowledge might be useful. Theoretical Claims: N/A Experimental Designs Or Analyses: See Methods and Evaluation section. Supplementary Material: Did not review the supplementary. Relation To Broader Scientific Literature: The authors attempt to summarize previous research in capability elicitation through this study. They cite multiple techniques, from which they select 6 with various levels of ease of application, keeping in mind constraints third-party model evaluators might face. The model organism the authors propose to study is a combination of the password-locked model, which is classical in the elicitation literature, with a training technique coming from adversarial robustness called circuit-breaking. Essential References Not Discussed: None come to mind. Other Strengths And Weaknesses: * I am very concerned about a major limitation of this paper, namely the relationship between model organisms such as the authors study here and realistic models whose capabilities have been hidden. While the authors acknowledge the limitation themselves (this is good!), there is no discussion on how model organisms and realistic models are related experimentally. Given how variable the results are from one model organism, base model, and task, one can rightfully ask what lessons are to be drawn from this paper from a practical standpoint (except that combining elicitation techniques is better, which is unsurprizing). 
Since it might be hard to find realistic models whose creators have admitted they hid a capability, a study on the appropriateness of model organisms could be carried on harmfulness and then tentatively transferred to capabilities. Good discussion on this topic, or experimental evidence from the authors or from other papers could make me raise my score. * The paper is extremely clear and very pleasant to read. The authors surveyed the literature correctly and summarized all their techniques with an appropriate level of detail and clear, to-the-point language. * The paper could almost read as the presentation of a new model organism, by combining previous methods. The authors do show that the circuit-breaking model is more robust, and since the elicitation techniques are reasonable this makes for a stronger framing. * “We show that anti-refusal training elicits hidden code-generation capabilities. Notably, this method does not require any code-related training data.”. This is a very interesting result and would warrant further investigation. Other Comments Or Suggestions: N/A Questions For Authors: * Question 1: Why the choice of mistral 7b instead of more recent models such as llama or qwen? * Question 2: “supervised fine-tuning elicits the capability and often even improves performance (for the circuit-broken Gemma instruct even a 6% improvement).” (sec 4.1): As a point of reference, how does the improvement compare to fine-tuning the original Gemma instruct model? * Question 3: Do you have any idea why the capabilities of mistral base are systematically higher than the ones of mistral instruct? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for taking the time to review our paper and offer high-quality feedback! We’re glad that you found the paper clear and enjoyable to read, and that you appreciated the strength of our novel model organism based on circuit-breaking. Note: we abbreviated some of your comments to meet the character limit. > The claims are substantiated [...] but would be happy to see confirmation from other tasks [...] As discussed in response to XiFg, recreating our results on MMLU has provided further evidence for the core claims in the paper. Experiments in non-MCQA settings are a great area for future research. > For instance, that might help understand why prefilling is so effective for coding and password-locked models (Sec 4.2), and tease out why elicitation in general is much less effective for coding (Sec 5). Prefilling has been found to be quite an effective jailbreak method (e.g., [Andriushchenko et al., 2024](https://arxiv.org/pdf/2404.02151)) because it causes the model to strongly condition its response on the prefilled tokens (e.g., "Sure, here's how I can help"). Elicitation is likely substantially harder for the open-ended coding setting because the "target space" is much larger, i.e., in MCQA the elicitation only needs to guide the output to a single token ("A", "B", etc.), whereas the space of capable outputs in coding is much richer and more difficult to elicit (see lines 360-372, right column). We invite further work to examine this and related hypotheses in more detail. > I am very concerned about a major limitation of this paper [...] This is a great point and something that we gave some consideration! We think that our model organisms provide an upper bound for the difficulty of eliciting hidden capabilities from real systems. 
This is because our models are not constrained to generate realistic behaviour hiding capabilities, e.g., the circuit-broken models can output nonsensical tokens in the coding setting, whereas real sandbagging models would likely need to sandbag in an undetectable way, e.g., by imitating the behaviour of a weaker system. This is why we say in the paper that our model organisms “act as a worst-case, adversarial scenario” for eliciting methods. From a practical standpoint, it seems plausible that elicitation methods which are effective on our model organisms would also be effective on real systems (though the converse is less likely). We will add further discussion of this point to the paper. Adding more context still, we agree that creating realistic model organisms is difficult, which is a main reason for why we opted for the approach described above. However, there are some exciting possibilities here. First, one could create model organisms with certain properties in toy scenarios and testing them in realistic scenarios (see [Denison et al., 2024](https://arxiv.org/abs/2406.10162)). Second, one could create scenarios where AI systems “decide by themselves” to hide their capabilities, and then apply elicitation techniques (see e.g., [Meinke et al., 2025](https://arxiv.org/abs/2412.04984) or [Greenblatt et al., 2024](https://arxiv.org/abs/2412.14093)). > The paper could almost read as the presentation of a new model organism [and] this makes for a stronger framing. Thanks! We will try to emphasise this contribution more. > [The success of anti-refusal training] is a very interesting result and would warrant further investigation. We agree, and we are coordinating with others on follow-up work. > Question 1 We chose Mistral-7b v3 because it was released 10 months ago, roughly around when we started on the project, and it achieved SOTA on various benchmarks at the time. 
For Gemma 2 9B instruct, we specifically selected it for the code generation tasks due to its superior performance compared to the other 7B models on this benchmark. > Question 2

|Model|Setting|WMDP(%)|MMLU(%)|
|-|-|-|-|
|*Mistral-base*|SFT original|60.0|63.6|
||PW-locked + PW|58.4|62.0|
||Circuit-broken + PW|58.7|62.8|
||SFT PW-locked|60.0|66.0|
||SFT circuit-broken|60.0|64.0|
|*Mistral-instruct*|SFT original|51.4|59.1|
||PW-locked + PW|57.3|59.4|
||Circuit-broken + PW|56.8|59.3|
||SFT PW-locked|57.0|64.0|
||SFT circuit-broken|53.0|61.0|
|*Gemma-instruct*|SFT original|54.9|75.3|
||PW-locked + PW|52.2|72.5|
||Circuit-broken + PW|52.4|72.3|
||SFT PW-locked|58.0|**76.0**|
||SFT circuit-broken|**62.0**|**76.0**|

*PW - shorthand for password*

> Question 3

We do not have a well-grounded explanation for this phenomenon. However, throughout our experiments, we observed that smaller instruct/chat models generally tend to perform slightly worse on MCQA tasks. --- Thanks again for your detailed feedback! We hope that the additional experiments and these responses help bring the paper in line with your expectations for a pioneering study of capability elicitation methods. We would be glad to address any further questions or concerns you may have. --- Rebuttal Comment 1.1: Comment: In light of the discussion with the other reviewers and of your answer to my comments I am happy to raise my score. I am not sure I completely agree with your justification that elicitation difficulty for the model organisms you introduced is an upper bound for sandbagging models: it is hard to make such claims a priori for LLMs. Your argument ("our models are not constrained to generate realistic behaviour hiding capabilities") implies that model organisms are easier to train than sandbagging models, not necessarily that they are more robust. 
I could imagine a genuinely misaligned model attempting to hide its capabilities being much harder to crack than a model of a similar size having undergone much shallower password-locking finetuning. However, if you include the above discussion in the paper, I would like to see it published, to encourage discussion and further experiments on this topic. I would also strongly encourage the authors to undertake or promote the study of how realistic these model organisms are, to grow the applicability of this relatively new area. I also appreciate the new results on MMLU and thank you for your time and effort producing them in the rebuttal period. --- Reply to Comment 1.1.1: Comment: Thank you, we really appreciate your good-faith engagement. We will add further discussion on the points you raised.
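For readers unfamiliar with the prefilling technique discussed in this thread, here is a minimal sketch. The chat-template tokens and the default prefill string are placeholders of ours, not any specific model's format or the authors' implementation.

```python
# Illustrative sketch of prefilling (not the authors' code): the prompt ends
# with the opening of the assistant's turn, so the model conditions its
# continuation on already-compliant tokens instead of starting a fresh reply.
def build_prefill_prompt(question: str,
                         prefill: str = "Sure, here's how I can help:") -> str:
    # <|user|> / <|assistant|> are generic placeholders for a chat template.
    return f"<|user|>\n{question}\n<|assistant|>\n{prefill}"

prompt = build_prefill_prompt("Write a function that reverses a string.")
```

The model is then asked to continue `prompt`, rather than being given the question alone, which is what makes prefilling effective as an elicitation (and jailbreak) technique.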
Summary: This paper introduces model organisms, which are language models trained to imitate AI systems with hidden capabilities via circuit breaking. This method of training is more robust to elicitation than the simpler password-locked training approach. On the tasks of MCQA and code generation, the authors show that this approach works better. For the code generation task, only the fine-tuning approach can elicit the abilities of the model. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: n/a Experimental Designs Or Analyses: Yes, the experiments have been carefully designed and are thorough. Supplementary Material: Yes, A and E Relation To Broader Scientific Literature: Capability evaluation is an important assessment, with prior works illustrating the sandbagging phenomenon where models are made to strategically underperform to pass evaluations by regulatory bodies. This work shows how elicitation techniques based on fine-tuning are more effective. When there is not much data available, a combination of prefilling, steering, etc. works best. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths - This area of study is still fairly nascent, and this work makes a valuable contribution by demonstrating what kinds of techniques are most useful for eliciting the hidden abilities of models. - By proposing a novel model organism approach of circuit-breaking training after password-locked training, they show increased robustness to elicitation on 2 tasks. - The discussion section is very informative, and several interesting points are made. Weakness - Only 2 tasks studied. Other Comments Or Suggestions: - Some theoretical analysis of why different elicitation methods are suited to different tasks, e.g. 
here, code generation vs. hazardous knowledge, to understand the underlying workings, could be an interesting direction for future work Questions For Authors: - Why did you choose these 2 specific tasks, and any thoughts on how this would scale to other tasks? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper. We’re glad that you appreciated our contribution to the capability evaluation and sandbagging literature, with our evaluation of techniques for eliciting hidden capabilities, and the introduction of a more robust model organism. > Why did you choose these 2 specific tasks, any thoughts on how this would scale to other tasks? We chose the WMDP benchmark as it has been used in prior work on sandbagging, and is a clear demonstration of hazardous knowledge assessed in dangerous capability evaluations. Because WMDP is an MCQA setting, we investigated the APPS setting for code-generation, as it is more open-ended and realistic. Code generation is also relevant to dangerous capability evaluations, in the context of, for example, cyber-offensive capabilities. In line with your feedback, we have extended our results to a third benchmark — MMLU. For password-locked models, the results are essentially consistent with those for WMDP. For Mistral-base and Mistral-instruct circuit-broken models, the effect of the circuit-breaking seems stronger than what we observed for WMDP (i.e., the circuit-broken models are even more robust than regular password-locked models). However, Gemma-instruct circuit-broken seems less robust to elicitation than its password-locked counterpart. As we noted in the paper (lines 305 - 310), circuit-breaking is quite sensitive to training hyperparameters, which may explain this result. In accordance with our initial results, anti-refusal training and SFT are the most effective elicitation methods. Below we provide the full results for MMLU after applying each elicitation technique on the password-locked and circuit-broken model organisms for each model. For N-shot prompting, we use N=1, 3, 5 and 10 shots, and present the best result. When combining steering with other techniques, we use persona steering. 
## MMLU Results

### Mistral-base

|Model organism|Baseline without password|Baseline with password|N-shot|Prefill|Prefill + N-shot|Concept steering|Persona steering|Prefill + Steering|Prefill + Steering + N-shot|Anti-refusal training|SFT|
|-|-|-|-|-|-|-|-|-|-|-|-|
|*Password-locked*|11.5|62.0|60.4|59.2|62.1|51.7|56.2|59.6|60.6|51.0|**65.6**|
|*Circuit-broken*|1.1|62.8|27.2|0.0|30.0|9.3|0.0|0.0|28.5|52.5|**64.0**|

### Mistral-instruct

|Model organism|Baseline without password|Baseline with password|N-shot|Prefill|Prefill + N-shot|Concept steering|Persona steering|Prefill + Steering|Prefill + Steering + N-shot|Anti-refusal training|SFT|
|-|-|-|-|-|-|-|-|-|-|-|-|
|*Password-locked*|10.0|59.4|57.2|58.9|57.1|43.8|26.0|58.5|57.8|53.0|**64.1**|
|*Circuit-broken*|0.0|59.3|4.4|1.8|57.1|0.3|0.0|11.3|56.9|55.7|**61.1**|

### Gemma-instruct

|Model organism|Baseline without password|Baseline with password|N-shot|Prefill|Prefill + N-shot|Concept steering|Persona steering|Prefill + Steering|Prefill + Steering + N-shot|Anti-refusal training|SFT|
|-|-|-|-|-|-|-|-|-|-|-|-|
|*Password-locked*|6.6|72.5|28.9|30.2|72.1|38.5|72.5|34.2|72.8|71.0|**76.3**|
|*Circuit-broken*|6.1|75.5|7.7|67.1|72.1|71.1|72.2|66.7|72.0|70.9|**75.5**|

> Some theoretical analysis on why different elicitation methods are suited for different tasks eg. here code generation vs hazardous knowledge could be interesting to understand the workings could be a future work direction

We agree that this would be interesting! However, providing these analyses with sufficient rigour, especially given the multiple elicitation methods included in our paper, requires effort that is out of scope for the current work. Regardless, we agree with you that this is a fruitful topic for future research, and therefore we hope to provide some preliminary hypotheses in lines 360-372 (right column). We believe our paper is a stepping stone towards future theoretical research on elicitation methods. 
An interesting and important insight from our work is that hidden capabilities can be embedded within specific layers of a model through mechanisms such as LoRA adapters and/or circuit-breaking. This suggests that evaluators who fine-tune models at specific layers—common practice with LoRA—may remain vulnerable to the persistence of hidden capabilities. Concretely, circuit-broken model organisms can be trained in a way that fine-tuning only on the last 40% of layers does not elicit performance. While this point was accidentally excluded from the original submission, it has now been incorporated into the revised manuscript. ---- Thanks again for your help and feedback! --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for clarifying my question and providing results on MMLU! I will retain my acceptance score.
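The persona and concept steering baselines referenced throughout these rebuttals can be sketched as a single vector addition in activation space. This is our illustration with assumed shapes and an assumed scale `alpha`, not the authors' implementation:

```python
import numpy as np

def apply_steering(hidden: np.ndarray, direction: np.ndarray,
                   alpha: float = 4.0) -> np.ndarray:
    """Add a scaled, unit-normalised steering direction to hidden states.

    hidden: (seq_len, d_model) activations at some layer.
    direction: (d_model,) concept or persona vector; alpha is an assumed
    steering strength, typically tuned per model and layer.
    """
    unit = direction / np.linalg.norm(direction)
    return hidden + alpha * unit
```

In practice a hook on the chosen layer would apply this function during the forward pass, so every token position is nudged along the same concept or persona direction.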
FastCAV: Efficient Computation of Concept Activation Vectors for Explaining Deep Neural Networks
Accept (poster)
Summary: The paper describes a method for accelerating the computation of concept activation vectors (CAVs), which uses Fisher's linear discriminant analysis (LDA) in place of the linear SVM classifier. ## After the rebuttal Many thanks to the authors for the work and a good rebuttal discussion. The authors addressed my main concerns; therefore, I am increasing my score. The main question was about the contribution of this work in connection to Fisher's LDA. The authors clarified this work's motivation and the outline of the contribution, and would be expected to revise the motivation accordingly, including describing it at the beginning of Section 3. The answer to the question about new insights is also reasonable. I have checked the responses to the other rebuttals, and I believe the authors addressed those points accordingly as well. New experiments as per the request from Reviewer ULp4 would also be a good addition to the paper. Claims And Evidence: The analysis appears to be correct, and the method and analysis appear to be sound; however, it is important to see them in a wider context. The authors claim: "We introduce FastCAV, a novel approach for computing Concept Activation Vectors (CAVs) that significantly reduces computation time by up to 63.6×, making it more efficient and scalable." The main concern is that the authors need to justify the novelty of the approach. As the description stands now, it does not seem to be more than a combination of CAVs with LDA, both well-known. "• We provide a theoretical foundation for FastCAV and specify concrete assumptions under which it is equivalent to other linear methods." The problem with this claim might be that, as I understand it, the theoretical argument coincides with the LDA description as per Bishop (2006), ultimately going back to Fisher (1936), as cited in the paper. Fisher, R. A. (1936). 
"The Use of Multiple Measurements in Taxonomic Problems" • "We demonstrate empirically that FastCAV produces similar CAVs to the established SVM-based approach, leading to comparable insights in downstream methods, e.g., (Kim et al., 2018; Ghorbani et al., 2019)." This claim appears to be matched with the evidence, as described below in the review of experimental designs and analyses. • "Using FastCAV, we investigate the evolution of concepts during training and across different layers of a neural network." It is true that the evolution of concepts is investigated in Section 4.4; however, it is by no means sufficient, as the authors just present a plot of CAV accuracy depending on accuracy. Methods And Evaluation Criteria: The proposed method seems to be a justified solution for the given problem, which is accelerating CAV methods; however, the authors need to emphasise in which way it is not a straightforward combination of Fisher (1936) and the original CAV. I don't think the study proposed in Section 4.3 is an ablation study, which involves removal of or damage to components (see, e.g., Meyes et al. (2019)). The hyper-parameter value influence studies would be sensitivity analysis. Meyes et al. (2019), "Ablation Studies in Artificial Neural Networks". Section 4.4 proposes only a rudimentary analysis of leveraging this low-resource method for interpreting the training process online, which involves tracking CAV accuracy over time. In my view, this part actually highlights the significance of the contribution the most, and could be analysed in terms of how it can help provide diagnostic tools for neural network training; this can involve quantitative and qualitative evaluation. Theoretical Claims: I have checked the theoretical claims, and I believe they entirely mirror Bishop (2006), which is duly cited. The authors need to emphasise in which way this would be a novel contribution. 
Experimental Designs Or Analyses: The experiments underpin the claims. The authors compare the method against the standard CAV, as well as other baselines, on computational time and performance. They find that they obtain competitive performance with substantial (on average 46.4×) acceleration compared to the baseline methods. Table 1 does not contain confidence intervals; could the authors add them? Supplementary Material: I have checked the appendix; it includes more experimental details, which appear correct, reproducibility information, as well as additional experimental results on the accuracy and computational time of the proposed algorithm against the state-of-the-art SVM-CAV. Relation To Broader Scientific Literature: The main related works, in my opinion, concern Fisher's LDA and concept activation vectors. Both aspects have been covered in the related works section. Essential References Not Discussed: I believe the references are sufficiently discussed. Other Strengths And Weaknesses: Strengths: Correctness: the analysis appears to be correct. Soundness: the method and analysis appear to be sound. Weaknesses: - Originality: it is important that the authors clarify the contribution. Right now, the gist of the proposed approach seems to be replacing the linear SVM with Fisher's LDA, with the evaluation aiming to show competitive accuracy and computational time. However, my main concern is that this core contribution might not give enough new insight into CAVs, as it seems intuitive that the SVM can be replaced with another linear classifier. I suspect, though, that the authors could describe the motivation more clearly, perhaps putting emphasis on the new kinds of explanation it could help achieve which the state of the art cannot (e.g., explaining training dynamics, which isn't computationally feasible otherwise). At this stage, the paper merely hints at this in Section 4.4, and I think it could be strengthened. 
Other Comments Or Suggestions: No other suggestions Questions For Authors: The main sticking point is the contribution. The contribution seems to be a drop-in replacement of the SVM with Fisher's LDA, which may fall short of the expectations of the venue. Therefore, it is crucial that the authors confirm in the rebuttal and revision why this contribution is novel and not just a combination of two well-known methods. The other problem is the theoretical analysis, which seems to be centred around the properties of the Gaussian distribution as per Bishop (2006), underpinning LDA (see Sections 3.2, 3.3, Appendix A.1). Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their effort in checking our theoretical contributions and providing insightful comments. In particular, we appreciate the reviewer highlighting the correctness and soundness of our analysis. ## Implemented Changes We agree with and follow the suggestion of the reviewer and include standard deviations for the accuracies in our Table 1 (compare with reviewer ULp4). Additionally, we add an experiment on chest radiographs to further validate our approach (see reviewer ADni) and showcase an additional use case. Finally, we appreciate the correction regarding our sensitivity analysis and will improve the terminology in Section 4.3 to align with Meyes et al. (2019). ## Contribution and Comparison to Fisher’s LDA The reviewer's main concern is our contribution in connection to an LDA. We welcome the opportunity to clarify these points. Overall, we see our contribution in establishing a more efficient alternative to other CAV calculations that enables researchers to solve more complex and demanding tasks due to the speed-up of the calculation itself. The specific point of the reviewer is that our contribution is to “[replace] linear SVM with the Fisher’s LDA”. We want to emphasize that we derive FastCAV based on the orthogonality of concepts in feature space observed previously (Olah et al., 2020; Elhage et al., 2022) for the application of identifying feature directions in a model’s activation space. We then show in Section 3.3 that the resulting calculation is equivalent in expectation to the solution of a Fisher’s LDA only under isotropic, equally mixed, Gaussian assumptions [L179, right-hand side]. To further emphasize the practical importance of these assumptions, we add an empirical comparison between FastCAV and LDA. We observe significantly lower computation time for FastCAV (see the table for reviewer ULp4). Further, Fisher’s LDA is on average slower than the established SVM-based calculation. 
Thus, it is non-trivial to select the correct classifier and assumptions for accelerated CAV calculation. The theoretical equivalence to Fisher's LDA under the described assumptions is necessary to connect our approach to the predominant SVM-based calculation. Particularly, while previous works discuss the connection of Fisher's LDA to the decision boundary found by an SVM (Shashua 1999), we additionally provide evidence (Appendix A.2) that the required circumstances apply to CAV computation in a model's activation space. Specifically, we show empirically that for CAV calculation there is a high ratio of support vectors among $D_r \cup D_c$. Hence, the connection and application to concept discovery and the comparison to existing SVM-based computation is a novel contribution. Following the feedback of the reviewer, we will improve our motivation in the beginning of Section 3.

## New Insights Unlocked by FastCAV

We appreciate the reviewer for highlighting FastCAV's acceleration and its application potential as a key contribution. Acting on their suggestion, we expand our discussion regarding the training analysis (Experiment 4) to provide a more comprehensive exploration of the potential.
1. We conduct an analysis ranking concepts by their area under the curve (AUC) during training and across layers. This approach reveals that the learnability/difficulty of concepts can vary wildly across layers, with some concepts that are quickly learned in early layers becoming more difficult to learn in later layers. For instance, the concept 'wave' is the most readily learned concept in the first block, but its ranking deteriorates to second after block two and ultimately drops to 32nd in the final block. In contrast, the concept 'lined' remains consistently highly ranked across all layers. We will include visualizations and additional discussion in our revised paper.
2. We evaluate the ratio of concepts learned in each layer over the training process.
We find that during training this ratio increases, indicating that the model learns task-relevant concepts. This result is congruent with previous analyses of neural network training dynamics, e.g., Shwartz-Ziv and Tishby, 2017, "Opening the Black Box of Deep Neural Networks via Information". However, the speed-up of FastCAV enables an analysis of the training dynamics on the level of human-interpretable concepts. Overall, by providing a faster and more efficient procedure, we expect to empower researchers to explore more complex scenarios and larger concept sets, potentially uncovering novel phenomena that were previously inaccessible due to computational constraints. Finally, we are eager to address any further questions and discuss additional points.
---
Rebuttal Comment 1.1: Comment: The authors address my concerns with the rebuttal, therefore I am increasing my score. The main question was about the contribution of this work in connection to the Fisher's LDA. The authors clarify the motivation of this work and the outline of the contribution, and are expected to revise the motivation accordingly, including describing it in the beginning of Section 3. The answer to the question about new insights is also reasonable. I have checked the responses to the other rebuttals, and I believe the authors addressed the points accordingly as well. New experiments as per request from Reviewer ULp4 would also be a good addition to the paper.
---
Reply to Comment 1.1.1: Comment: We thank the reviewer for their encouraging feedback. Following the suggestions, we will ensure to revise our motivation accordingly. We also appreciate the reviewer's consideration of our other responses.
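The mean-difference construction at the heart of the method discussed in this thread can be sketched in a few lines of NumPy (a minimal illustration with synthetic activations; the function name and toy data are ours, not from the paper):

```python
import numpy as np

def fast_cav(concept_acts, random_acts):
    # CAV direction as the normalized difference of class means:
    # an O(n*d) computation that avoids training an SVM.
    v = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return v / np.linalg.norm(v)

# Synthetic activations in a 512-dimensional feature space;
# concept examples are shifted along a single coordinate.
rng = np.random.default_rng(0)
d = 512
concept = rng.normal(size=(50, d))
concept[:, 0] += 3.0
random_set = rng.normal(size=(50, d))

cav = fast_cav(concept, random_set)
print(cav.shape)  # (512,)
```

Repeating this against resampled random sets $D_r$, as in TCAV, yields a set of directions whose consistency can then be checked.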
Summary: The paper introduces FastCAV, a method to compute Concept Activation Vectors (CAVs) up to 63.6× faster than traditional SVM-based approaches by leveraging simple mean vector computations under theoretical assumptions. FastCAV achieves comparable accuracy and interpretability to existing methods while significantly reducing computation time, making it practical for analyzing large modern neural networks. The authors validate FastCAV with extensive experiments on multiple architectures and demonstrate its use in tracking concept evolution during model training. Claims And Evidence: The paper’s main claims are that (1) FastCAV is up to 63.6× faster than SVM-based CAV computation while maintaining comparable quality, and (2) FastCAV is theoretically justifiable under assumptions of isotropic Gaussian distributions for concept and random images. Overall, these claims are supported by experimental results and references to linear discriminant analysis. However, there are areas where the evidence could be improved: 1. While the authors do show that FastCAV recovers near-identical directions as an SVM in high-dimensional settings, it would have been helpful to see more thorough statistical significance tests, or a direct demonstration of how violations of the Gaussian or isotropy assumptions degrade performance. 2. The experiments rely heavily on ImageNet-based networks and a set of curated concept images. The results would be more convincing if tested on diverse tasks (e.g., medical or multimodal applications). For medical imaging, the authors can look into this paper for concept extraction: [1] Using Causal Analysis for Conceptual Deep Learning Explanation. Singla et al. MICCAI 2021 [2] Distilling BlackBox to Interpretable models for Efficient Transfer Learning. Ghosh et al. MICCAI 2023. 
Methods And Evaluation Criteria: The proposed method (FastCAV) is coherent in the sense that it directly addresses the high computational overhead of linear SVM training in high dimensions. One limitation is that the paper focuses mostly on speedups and accuracy for separating concept examples from random images. It might have been valuable to include interpretability-specific user studies (i.e., whether actual users found FastCAV-based explanations equally comprehensible or trustworthy). Theoretical Claims: The authors reference Fisher discriminant analysis and prior results connecting Fisher's linear discriminant to linear SVMs. A weaker point is that the equivalence rests on fairly strong assumptions (isotropy and equal class priors). While this is not uncommon in theoretical exposition, the paper could have discussed more thoroughly how real-world deviations from isotropy (e.g., extremely skewed concept distributions, unusual image statistics) might reduce the fidelity of FastCAV. Experimental Designs Or Analyses: The experiments are fairly comprehensive. Supplementary Material: I read the supplementary material, which appears thorough in method and experimental configuration. The additional experiments on tracking CAVs over training epochs and across layers are quite informative. Relation To Broader Scientific Literature: The paper is well-situated in prior work on concept-based explanations (particularly CAV/TCAV) and the notion of linear separability in neural activations. Essential References Not Discussed: NA Other Strengths And Weaknesses: 1. The method's stability and accuracy under distribution shifts or adversarial perturbations remain unclear. 2. The notion that linear directions suffice to encode complex concepts (particularly in Transformers) is an assumption that might need further empirical justification. Other Comments Or Suggestions: 1. Can the authors investigate cases where SVM-based CAVs and FastCAV disagree on concept classification? 2.
Showing how FastCAV-based results compare if, instead of random images, “negative concept” images are used. That could confirm whether orthogonality assumptions hold better in different negative sampling regimes. Questions For Authors: Have you seen any cases where FastCAV completely fails (e.g., for extremely rare or ill-defined concepts)? How might such failure modes be detected automatically? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and appreciate that they see our contribution well-situated within prior work. We are glad that our empirical comparison to SVM-based CAV computation was found to be fairly comprehensive. Additionally, we agree with the reviewer that more applications further validate the effectiveness of FastCAV.

## Additional Medical Application

The reviewer provided references concerning a medical task classifying chest radiographs. We take up this proposal and follow (Singla et al. 2021) to train a DenseNet-121 on a smaller subset of MIMIC-CXR due to time constraints. Next, we use the CheXpert labeler to extract concept mentions from the included free-text radiology reports. Specifically, we take the concepts specified in Figure 3 in (Singla et al. 2021). Using these concepts, we compute CAVs for the four blocks of the DenseNet-121 using our FastCAV, SVM-based computation (Ghosh et al. 2023), and sparse logistic regression (Singla et al. 2021). Here, we include a comparison of the average computation time and the achieved CAV accuracies, which we will also add in our appendix in the revised version:

|Model|Method|Comp. Time [s]↓|Accuracy↑|Similarity↑|
|---|---|---|---|---|
|DenseNet-121|FastCAV (Ours)|**0.006±0.001**|0.715±0.111|**0.793±0.030**|
|DenseNet-121|Sparse-Logistic Regression|6.370±0.756|**0.721±0.129**|0.500±0.036|
|DenseNet-121|SVM|0.439±0.071|0.709±0.125|0.457±0.038|

We observe performance similar to that of the established approaches in this domain. However, note the strongly reduced computation time for FastCAV. Additionally, we report the achieved accuracies for selected concepts after dense-block 3 following (Singla et al. 2021) and will add the full results in our updated revision: |Method|Concept|Comp.
Time [s]↓|Accuracy↑|Similarity↑| |---|---|---|---|---| |FastCAV (Ours)|Cardiac Silhouette|**0.006±0.000**|0.695±0.068|**0.820±0.031**| |Sparse-Logistic Regression|Cardiac Silhouette|5.866±0.437|0.688±0.086|0.491±0.041| |SVM|Cardiac Silhouette|0.476±0.068|**0.710±0.057**|0.459±0.026| |||||| |FastCAV (Ours)|Congestion|**0.006±0.001**|0.781±0.062|**0.855±0.020**| |Sparse-Logistic Regression|Congestion|6.137±0.448|0.778±0.073|0.484±0.032| |SVM|Congestion|0.424±0.079|**0.796±0.091**|0.490±0.043| |||||| |FastCAV (Ours)|Interstitial Edema|**0.006±0.000**|0.817±0.074|**0.886±0.014**| |Sparse-Logistic Regression|Interstitial Edema|6.230±0.806|**0.831±0.058**|0.516±0.043| |SVM|Interstitial Edema|0.414±0.072|0.798±0.053|0.490±0.043|

We can see that the DenseNet-121 learned the shown concepts. However, the achieved CAV accuracies vary slightly. Further, we see differences in the average similarity for the three methods. FastCAV performs best with respect to speed and similarity. Additionally, we follow the suggestion of the reviewer and compute CAVs with respect to images containing negative results for the concepts as found by the CheXpert labeler. In contrast to our expectation, we observe lower CAV accuracies in comparison to random images. However, this holds also for SVM and sparse-logistic regression based computation, indicating a data sampling problem in the negative example regime. Finally, while we recognize the importance and are interested in evaluating the related downstream tasks (Singla et al. 2021, Ghosh et al. 2023), the time constraints of the rebuttal period limited the scope of our investigation. We consider this a valuable direction for future work.

## Differences in FastCAV and SVM-based Computation

We appreciate the reviewer's remarks on the theoretical connection of our approach to the existing SVM approach, going over Fisher's LDA and strong assumptions [L175 right-hand side].
While we agree with the reviewer that these strong assumptions are likely to be violated in practice, we believe that our empirical evaluation in Section 4.1 and the medical application above provide evidence for the practicality of our approach. Nevertheless, following the reviewer's comments, we investigated examples where FastCAV and SVM-based computation differ. For example, in ViT-B/16 after encoder layer 10 we observe an accuracy difference of 40%, with the SVM yielding an accuracy of 95% and FastCAV of 55%. Overall, we observed significant accuracy differences (over 25%) in 2.8% of the CAVs identified, favoring SVM in 1% and FastCAV in 1.8%. In the former case, we believe that these findings are cases where specifically the isotropic Gaussian assumption is violated, meaning the means of $D_c$ and $D_r$ are close together in comparison to the convex hull of these sets. We will add a discussion of these cases and additional visualizations in our appendix.
---
Rebuttal Comment 1.1: Comment: After Rebuttal: Thank you for adding the analysis for medical imaging. This new insight will help the medical imaging community build more rigorous interpretability metrics. I strongly recommend this paper be accepted at ICML. Also, I would request the authors to add the analysis in the appendix along with proper citations.
---
Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their constructive feedback and the strong recommendation. We will incorporate the analysis into our appendix with the proper citations and reference it in Section 4.1. We appreciate the reviewer's support.
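As a rough illustration of how the failure mode discussed in this thread (the means of $D_c$ and $D_r$ lying close together relative to the spread of the sets) might be detected automatically, one could compare the gap between class means to the within-class spread. This heuristic is our own sketch, not a procedure from the paper:

```python
import numpy as np

def mean_separation_ratio(concept_acts, random_acts):
    # Distance between class means divided by the average within-class
    # standard deviation; small values flag cases where a mean-difference
    # CAV direction is dominated by sampling noise.
    gap = np.linalg.norm(concept_acts.mean(axis=0) - random_acts.mean(axis=0))
    spread = 0.5 * (concept_acts.std(axis=0).mean() + random_acts.std(axis=0).mean())
    return gap / spread

rng = np.random.default_rng(1)
background = rng.normal(size=(1000, 64))
well_sep = rng.normal(size=(1000, 64))
well_sep[:, 0] += 4.0   # clearly separated concept set
overlap = rng.normal(size=(1000, 64))
overlap[:, 0] += 0.1    # nearly overlapping concept set

print(mean_separation_ratio(well_sep, background))  # large: reliable direction
print(mean_separation_ratio(overlap, background))   # small: likely unreliable
```

Such a score could be thresholded per concept and layer to surface CAVs that warrant a closer look with a slower method.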
Summary: The paper introduces a faster approximation to compute concept activation vectors. They do so through computing the mean vector for representations for the concept of interest and a random reference concept. The authors demonstrate that their approach is the same as the standard approach for computing concept activation vectors. However, through complexity analysis, the authors demonstrate that their algorithm runs faster. Experimentally, the authors demonstrate that their CAV method achieves the same or higher accuracy (compared to an SVM-based method), while reducing the computation time significantly. ## Update after rebuttal My original score was a 4, and I am satisfied with their rebuttal; as such, I have decided to keep my score as a 4. Claims And Evidence: The authors make the following claims in their paper: 1. **Their fastCAV method is theoretically equivalent to SVM-based CAV under some assumptions** - This claim I believe is true. They show this by demonstrating that the expected decision boundary under a Gaussian setup exactly corresponds to the difference in means 2. **Their fastCAV method is significantly faster than the regular SVM-based CAV** - This claim is verified through their experiments, where they compare the computation time across different concepts in the Broaden dataset. Additionally, they verify such a claim theoretically through complexity analysis, where they demonstrate that their method (fastCAV) is faster by a factor of min(n,d) (compared to SVM-based approaches). 3. **Their fastCAV method maintains performance compared to regular SVM-based CAV** - They confirm this by comparing the accuracy of their predicted CAVs to separate the concepts in the random category from the concepts in the target category Methods And Evaluation Criteria: The authors evaluate their fastCAV method through the Broaden dataset and ImageNet. 
The TCAV paper also uses the ImageNet dataset, and I believe that the particular dataset chosen is not of utmost importance (because CAV as a method can be used across datasets as long as concepts are labeled). Their methods are also reasonable for the task, as they demonstrate it's a simplified approximation to the overall CAV problem. Theoretical Claims: The logic in Section 3.3 and Section 3.4 is reasonable, and I believe fairly intuitive. Experimental Designs Or Analyses: The experiments in Section 4 are reasonable; they compute the two most important quantities (accuracy and speed) and demonstrate that their method outperforms the baseline SVM-CAV on both. They also demonstrate that their FastCAV method can be applied to compute TCAV scores. Supplementary Material: I skimmed Appendices A and B; Appendix A is largely a more rigorous proof of what's stated in the paper, while Appendix B is additional results that confirm their main empirical claims. Relation To Broader Scientific Literature: Their contributions help speed up a well-known algorithm for computing CAVs, while maintaining accuracy. Essential References Not Discussed: I am not aware of any related work that is essential but not cited in the paper. Other Strengths And Weaknesses: The main strength of the paper is a new algorithm that is justified to quickly compute CAVs. Other Comments Or Suggestions: None Questions For Authors: 1. What is the reason for considering pairwise similarity for robustness in Table 1? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the thoughtful feedback of the reviewer. We are glad that the logic in Sections 3.3 and 3.4 was judged as reasonable and intuitive. Similarly, we welcome the feedback regarding our experiments in Section 4. Particularly, the reviewer asked about our choice of pairwise cosine similarity, which we use to measure robustness. We would like to clarify the reasoning behind this decision and provide additional context. We consider pairwise similarity for robustness because it allows us to quantify the consistency and reliability of the concept activation vectors (CAVs) obtained from different methods. CAVs correspond to directions in a model’s activation space, which encode abstract concepts. Hence, for the decision boundary (Eq. (1)), we normalize the length of the CAV to unit length [Line 148, right-hand side]. Now, to compare the directions calculated with different methods for a given concept, we can directly compute the cosine of the angle, i.e., the cosine similarity, between the two vectors. Further, it is standard procedure to compute CAVs repeatedly against varying sampled random sets $D_r$ (Kim et al. 2018). In particular, pairwise similarity helps to evaluate robustness in two ways: 1. By calculating the pairwise similarity between FastCAV and SVM-based computation we can determine how closely aligned both methods are in identifying the direction corresponding to a concept. 2. By calculating the pairwise cosine similarity matrix for CAVs obtained from the same method while resampling $D_r$, we can assess the consistency of the concept directions across repetitions. Averaging over this similarity matrix provides a score that quantifies the robustness of the identified concept directions within a single method. A high average similarity score indicates that the method is producing consistent and reliable concept directions, which is essential for robustness. --- Rebuttal Comment 1.1: Comment: Thank you for your response. 
I am happy with the paper, and will maintain my score --- Reply to Comment 1.1.1: Comment: We thank the reviewer and appreciate the positive assessment of our paper.
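The robustness score described in the rebuttal above (averaging the pairwise cosine similarity matrix over CAVs recomputed against resampled random sets $D_r$) can be sketched as follows (our illustration with synthetic directions, not code from the paper):

```python
import numpy as np

def robustness_score(cavs):
    # Average pairwise cosine similarity over a stack of CAVs computed
    # for the same concept against different random sets D_r.
    # Scores near 1 indicate consistent concept directions.
    cavs = cavs / np.linalg.norm(cavs, axis=1, keepdims=True)
    sim = cavs @ cavs.T                         # pairwise cosine similarities
    off_diag = sim[~np.eye(len(cavs), dtype=bool)]  # drop self-similarities
    return off_diag.mean()

rng = np.random.default_rng(2)
base = rng.normal(size=64)
# Ten repetitions of the same underlying direction with small noise:
repeats = np.stack([base + 0.1 * rng.normal(size=64) for _ in range(10)])
print(robustness_score(repeats))  # close to 1 for consistent directions
```

The same quantity computed across methods (e.g., pairing a FastCAV direction with an SVM direction for the same concept) measures how closely the two methods agree.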
Summary: The paper introduces FastCAV, extending TCAV to identify concept activation vectors by computing the mean of vectors and finding the approximate direction of concept activation vectors in the concept space, which is assumed to be nearly orthogonal. The use of means avoids the dimensional complexity of the SVM used in TCAV. Claims And Evidence: The major claim made in this paper is that FastCAV is significantly faster than TCAV, which is shown by the complexity analysis in Section 3.4 and the experiments in Section 4. Moreover, the claim that concept activation vectors computed through the SVM and FastCAV are similar is also validated by comparing the cosine similarity of vector directions between the SVM and FastCAV. Methods And Evaluation Criteria: The paper selects concepts from the Broaden dataset and images from the ImageNet dataset, which are standard for the problem discussed. The proposed method significantly reduces computational cost by eliminating the need for SVM training. The SVM training is especially costly due to the presence of high-dimensional intermediate feature representations. Theoretical Claims: Not checked Experimental Designs Or Analyses: The experiments are mainly constructed to assess the accuracy and efficiency of the identified CAVs, which are sound; the use of ImageNet with multiple classes and multiple concept directions per neuron, and the acquisition of concepts from the widely tested Broaden dataset, make the experiments valid and reliable. A major drawback is that comparisons are only performed with SVM-CAV and not with other concept-based explanation approaches. Supplementary Material: No Relation To Broader Scientific Literature: The key contribution is the efficient computation of concept activation vectors, a popular concept-based neural network explanation technique.
Essential References Not Discussed: Not Discovered Other Strengths And Weaknesses: Nil Other Comments Or Suggestions: Nil Questions For Authors: How efficient and accurate is FastCAV when compared to other concept explanation techniques? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments and thoughtful feedback. Specifically, we appreciate that the reviewer found our comparison between FastCAV and SVM-based computation valid and reliable. Concerning the lack of comparison to other CAV calculation methods, we note that we focused on SVMs following our theoretical analysis, which provides a connection to this existing approach. However, we agree with the reviewer that a comparison to other methods is beneficial to provide further evidence for our claims regarding the improved speed. Hence, we reran a smaller version of our first experiment (compare to our Table 1) and added the following approaches: logistic regression (Pfau et al. 2021, "Robust Semantic Interpretability: Revisiting Concept Activation Vectors"), sparsified logistic regression (Singla et al. 2021, "Using Causal Analysis for Conceptual Deep Learning Explanation"), Ridge classification as a faster alternative (Pedregosa et al. 2011, "Scikit-learn: Machine Learning in Python"), and LDA, which is closely related to our approach. Please excuse the simplified markdown formatting of the table presented here. We will ensure improved formatting in the updated paper. |Model|Method|Comp.
Time [s]↓|Accuracy↑|Similarity↑| |---|---|---|---|---| |Inception-v3|FastCAV (Ours)|**0.013±0.020**|**0.950±0.059**|**0.826±0.011**| |Inception-v3|SVM|1.366±0.943|0.934±0.079|0.387±0.058| |Inception-v3|LDA|10.755±7.188|0.759±0.189|0.132±0.131| |Inception-v3|Logistic Regression|8.681±6.076|0.892±0.142|0.602±0.037| |Inception-v3|Sparse-Logistic Regression|8.648±5.960|0.895±0.141|0.603±0.036| |Inception-v3|Ridge|0.409±0.258|0.894±0.148|0.562±0.026| |||||| |ResNet50|FastCAV (Ours)|**0.016±0.019**|0.890±0.149|**0.791±0.030**| |ResNet50|SVM|2.908±2.440|0.871±0.147|0.398±0.062| |ResNet50|LDA|7.827±5.601|0.717±0.199|0.077±0.159| |ResNet50|Logistic Regression|8.375±5.509|0.921±0.115|0.652±0.042| |ResNet50|Sparse-Logistic Regression|8.474±5.687|0.924±0.113|0.654±0.040| |ResNet50|Ridge|0.314±0.225|**0.925±0.115**|0.602±0.026| |||||| |ConvNeXt-XXLarge|FastCAV (Ours)|**0.019±0.013**|0.923±0.081|**0.913±0.014**| |ConvNeXt-XXLarge|SVM|1.167±0.795|0.948±0.059|0.525±0.049| |ConvNeXt-XXLarge|LDA|16.842±11.911|0.895±0.149|0.521±0.105| |ConvNeXt-XXLarge|Logistic Regression|9.360±2.547|**0.962±0.051**|0.672±0.044| |ConvNeXt-XXLarge|Sparse-Logistic Regression|9.549±2.734|0.961±0.051|0.671±0.044| |ConvNeXt-XXLarge|Ridge|0.695±0.478|0.961±0.051|0.570±0.033| |||||| |RegNetY|FastCAV (Ours)|**0.039±0.017**|0.969±0.046|**0.825±0.009**| |RegNetY|SVM|5.207±1.054|**0.977±0.034**|0.128±0.066| |RegNetY|LDA|54.640±37.392|0.633±0.147|0.027±0.115| |RegNetY|Logistic Regression|11.529±4.791|0.964±0.052|0.697±0.063| |RegNetY|Sparse-Logistic Regression|11.747±4.917|0.963±0.053|0.698±0.062| |RegNetY|Ridge|2.184±1.300|**0.977±0.038**|0.675±0.020| |||||| |ViT-B/16|FastCAV (Ours)|**0.005±0.002**|0.827±0.129|**0.752±0.029**| |ViT-B/16|SVM|0.852±0.134|0.806±0.143|0.394±0.048| |ViT-B/16|LDA|3.421±0.443|**0.887±0.118**|0.556±0.121| |ViT-B/16|Logistic Regression|6.517±2.907|0.823±0.144|0.592±0.058| |ViT-B/16|Sparse-Logistic Regression|6.506±2.909|0.824±0.143|0.591±0.059| 
|ViT-B/16|Ridge|0.145±0.016|0.816±0.155|0.508±0.037| |||||| |ViT-H/14-CLIP|FastCAV (Ours)|**0.008±0.001**|0.875±0.153|**0.749±0.020**| |ViT-H/14-CLIP|SVM|0.456±0.078|0.872±0.151|0.342±0.048| |ViT-H/14-CLIP|LDA|8.867±1.037|0.823±0.174|0.551±0.149| |ViT-H/14-CLIP|Logistic Regression|7.091±2.658|**0.891±0.154**|0.621±0.025| |ViT-H/14-CLIP|Sparse-Logistic Regression|7.118±2.647|**0.891±0.154**|0.621±0.025| |ViT-H/14-CLIP|Ridge|0.288±0.013|0.889±0.158|0.595±0.023| |||||| |EVA-G/14|FastCAV (Ours)|**0.018±0.001**|0.890±0.147|**0.789±0.018**| |EVA-G/14|SVM|1.109±0.263|0.884±0.149|0.331±0.049| |EVA-G/14|LDA|19.135±2.200|0.809±0.211|0.418±0.070| |EVA-G/14|Logistic Regression|8.499±4.763|**0.903±0.148**|0.672±0.046| |EVA-G/14|Sparse-Logistic Regression|8.457±4.696|0.902±0.149|0.670±0.050| |EVA-G/14|Ridge|0.681±0.023|0.902±0.153|0.608±0.025| |||||| |EVA-02-L/14|FastCAV (Ours)|**0.024±0.001**|0.898±0.143|**0.814±0.020**| |EVA-02-L/14|SVM|1.524±0.594|0.897±0.155|0.291±0.054| |EVA-02-L/14|LDA|23.469±3.404|0.792±0.200|0.443±0.086| |EVA-02-L/14|Logistic Regression|7.659±4.693|0.909±0.148|0.685±0.052| |EVA-02-L/14|Sparse-Logistic Regression|7.693±4.731|0.909±0.147|0.688±0.050| |EVA-02-L/14|Ridge|0.923±0.019|**0.911±0.152**|0.606±0.023|

While Ridge classification offers a notable improvement in speed over SVM-based computation, our FastCAV still yields a substantial speed-up while maintaining competitive performance. Further, we can see that vanilla LDA is not enough to achieve similar gains. Hence, we argue that the assumptions we make for FastCAV (compare Section 3.3) are necessary given practical considerations.
---
Rebuttal Comment 1.1: Comment: Thank you for providing additional experimental comparisons with a broad set of CAV computation methods, including logistic regression, sparsified variants, Ridge classifiers, and LDA. I appreciate the effort taken to re-run experiments across multiple model architectures and the transparency in reporting both performance and computation time.
The new results convincingly demonstrate that FastCAV provides a significant computational advantage while maintaining comparable or superior accuracy and concept similarity in most cases. The comparison with Ridge classification is particularly informative, as it offers a faster alternative to SVMs, yet FastCAV consistently outperforms even Ridge in terms of speed and often in similarity metrics as well. This reinforces your core claim about the practical utility of FastCAV in time-sensitive interpretability contexts. Overall, these additional results substantially improve the manuscript and address my initial concern regarding baseline comparisons. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for their thoughtful and positive feedback. Particularly, we are pleased that the additional experiments address the reviewer's concerns and significantly improve the manuscript. We will incorporate the results into our revision accordingly.
Regression for the Mean: Auto-Evaluation and Inference with Few Labels through Post-hoc Regression
Accept (poster)
Summary: This paper works on statistical inference and its applications in model evaluation. The paper analyzes the pitfalls of a previous method, Prediction-Powered Inference (PPI), and proposes two variants to improve performance. The first proposed method is to use ridge regression and add an additional hyperparameter $\alpha$ to the denominator. The other is to use the sigmoid function to improve the expressiveness. Experiments validate the superiority of the proposed method over two baselines. Claims And Evidence:
- The claim that PPI has high variance when the number of labeled data points is limited is true. The proposed method also works well and makes sense. Theoretical and empirical analyses confirm the effectiveness of the proposal.
- I have a concern about what the focus of the paper is. It seems that the proposed method is a more reliable statistical inference method with lower variance. However, it seems that it partially deviates from model evaluation. The first experiment on benchmark datasets is not related to model evaluation. Therefore, it is more appropriate to say that the paper mainly works on statistical inference rather than model evaluation.
Methods And Evaluation Criteria: The proposed method works well for the problem set. The effectiveness is demonstrated by extensive analysis. Theoretical Claims: The theoretical claims are right. Experimental Designs Or Analyses:
- The experiments consist of two parts. The first part is on benchmark datasets used in the related work. The second part is on large language models (LLMs), and the goal is to estimate the true rejection rate. The experiments are extensive and validate the effectiveness in statistical inference. However, since the main purpose of this paper is model evaluation, I think more experiments on automatic evaluation should be included.
- It is appreciated that multiple seeds are run for each experiment. For example, 350 different seeds are considered in the paper.
However, only the average is shown in the figures. It is advantageous to consider statistical tests and to show the variances in the figures. Supplementary Material: There is no supplementary material for this submission. Relation To Broader Scientific Literature: The proposed method is useful for statistical inference and shows promise for application in broader scientific areas. Essential References Not Discussed: There are no essential references that need to be discussed further. Other Strengths And Weaknesses: Strengths: - The problem studied in the paper, i.e. more accurate statistical inference, is important and interesting. - The proposed method is simple but effective. - Extensive experiments confirm the effectiveness of the proposed method. - The analysis is good and easy to understand. Weaknesses: - It seems that variance reduction is not so rare in the machine learning literature. I wonder if there is related work that also proposes similar ridge regression techniques for variance reduction. - Since the focus is on model auto-evaluation, the paper should spend more space discussing this and provide more related experiments. - More detailed descriptions of the proposed method should be included. First, how is the ridge regression problem constructed and how is Eq. (8) derived? Second, why can the introduction of the sigmoid function improve the performance? Other Comments Or Suggestions: There are minor typos in the paper. The paper should be checked carefully. For example, there are two "is" in the first sentence of the last paragraph on page 1. Questions For Authors: Please see "Weaknesses". ## update after rebuttal Thanks for the rebuttal. I will keep my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly appreciate your review and your suggestions for improving the work. We especially appreciate your thoroughness in reading the paper, and will be sure to fix any typos currently in the draft for the camera ready version of the paper. Below we respond to your individual points. **Model Evaluation vs. Statistical Inference** Throughout this work, we empirically and theoretically consider the problem of mean estimation. Statistical inference in the context of science and model evaluation are two interesting applications of this problem, and so we consider them both. However, since these problems are so related, we believe generalizing both of these as mean estimation better represents our methods' broad applicability. **Error Bars on MSE Plots** Thank you for your excellent point about the inclusion of error bars in our plots. For the camera ready version of the paper, we will re-run the experiments and include appropriate error bars. Here is a small set of 95\% confidence intervals for the normalized MSE (see figure 1) of different methods across 350 trials. Here the confidence is represented with a tuple (lower bound, upper bound): |Dataset | PPI (n=5) | Ridge-PPI (n=5)| Sigmoid-PPI (n=5)| |--------|------------------|------------------------|--------| |Alphafold | (1.12, 1.65) | (0.49, 0.72) | (0.55, 0.82)| |Ballots | (0.27, 0.47) | (0.24, 0.44) | (0.34, 0.6)| |Census Healthcare | (0.71, 0.95) | (0.68, 0.92) | (0.58, 0.8)| |Forest | (1.92, 3.36) | (0.71, 1.22) | (0.82, 1.09)| |Galaxies | (0.84, 1.14) | (0.61, 0.9) | (0.71, 0.97)| |Plankton | (1.23, 1.89) | (0.56, 0.76) | (0.54, 0.68)| **Sigmoid Function Improving Performance** Using the sigmoid function in Sigmoid-PPI can improve performance when the mapping from $f(X)$ to $h(X)$ is not linear. 
For instance, since $h(X)$ is a binary value in the refusal rate dataset, mapping $f(X)$ to (0,1) with a sigmoid function could lead to a lower MSE in the regression problem described in expression 7, which we showed to be equivalent to the variance of PPI. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I will keep my score.
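The sigmoid-transformed estimator described in the rebuttal above can be written out in a few lines. This is not the authors' implementation: the least-squares fitting routine, the gradient-descent hyperparameters, and all names below are assumptions of mine, with the transformed predictions simply substituted into the usual PPI mean estimate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_sigmoid(f_lab, h_lab, steps=2000, lr=0.5):
    """Least-squares fit of sigmoid(a*f + b) to the labels h on the
    small labelled set, i.e. the regression problem the rebuttal sketches."""
    a, b = 1.0, 0.0
    for _ in range(steps):
        g = sigmoid(a * f_lab + b)
        grad = (g - h_lab) * g * (1.0 - g)   # chain rule through the sigmoid
        a -= lr * np.mean(grad * f_lab)
        b -= lr * np.mean(grad)
    return a, b

def sigmoid_ppi_mean(f_unlab, f_lab, h_lab):
    """PPI-style mean estimate with sigmoid-transformed predictions:
    mean of g(f) on unlabelled data plus a rectifier from labelled data."""
    a, b = fit_sigmoid(f_lab, h_lab)
    return sigmoid(a * f_unlab + b).mean() + (h_lab - sigmoid(a * f_lab + b)).mean()
```

On simulated binary labels with a logistic relationship to the predictions, the estimate should land close to the true mean while using only a small labelled set.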
Summary: This paper makes the observation that the variance-minimizing parameter $\lambda$ in PPI++ can be seen (in the scenario where $N$ is large) as a linear regression coefficient when regressing $Y$ on $f(X)$. Motivated by this observation, this paper proposes an analog of ridge regression, where $\lambda$ is regularized towards zero, and logistic regression, where instead of a linear transformation, a non-linear transformation is used. Further motivation is given later in the paper, when it is observed that the variance of the PPI++ estimate depends inversely on the variance of $f(X)$. Experiments on some standard benchmark datasets, as well as a (proprietary?) dataset of LLM refusals demonstrate improved estimation performance (as measured by MAE) compared to PPI++. **Update after rebuttal**: As discussed below, I have increased my score from 2 to 3. Claims And Evidence: From a theoretical / methodological perspective, the claims in the paper are somewhat limited and informal, and mainly focused on conceptual goals and intuitions. From an empirical perspective, the claim of improved estimation seems reasonably well-supported, though I have some questions regarding the setup (see below). Methods And Evaluation Criteria: Note: As I will return to some concerns in different sections of the review form, I will number them globally as (W1), (W2), etc, not necessarily in order of importance. (W1) One of the main weaknesses of this paper, in my view, is the choice of evaluation criteria, along two lines. First, one of the main goals of PPI++ is providing *valid confidence intervals*, not just more accurate point-estimates as measured by mean-squared error (though these are related concepts), and there is no discussion of that tension here that I could see, nor any discussion on how one would construct confidence intervals from the proposed approach.
(W2) Second, even accepting that the goal of this paper is estimation (rather than inference), the empirical evaluation focuses entirely on mean absolute error, which seems like an odd choice as compared to the more typical mean squared error, especially since the "bias-variance trade-off" being considered is mainly a trade-off seen with squared errors (e.g., the "variance" of an unbiased estimator is exactly the MSE, not the MAE). As a result, I'm not entirely sure what to make of the results in Section 5. Theoretical Claims: There isn't much in terms of theoretical claims in this paper, but the claims as written appear to be correct. There are some opportunities to improve the clarity and precision with which the theoretical claims are made, which I address under "Other Strengths and Weaknesses" Experimental Designs Or Analyses: For making the main point, which is that empirical performance (at estimation) improves using the proposed methods. The experimental design for both the "Research Datasets" (those considered in the original PPI++ paper) and the "LLM Refusal Dataset" seems reasonable enough. That said, this main analysis could be improved: * (W1) First, as mentioned in "Methods and Evaluation Criteria", I found the choice of MAE to be somewhat confusing, since the focus otherwise is on reducing variance (and hence, MSE). * (W3) Second, it would be helpful to include some kind of uncertainty quantification in the lines presented (i.e., the average performance is shown, but not e.g., confidence bounds based on bootstrapping). (W4) For the secondary analyses, I thought that the empirical investigation could be made a bit more rigorous. In particular, there is a hypothesis that $\text{var}(f)$ drives performance, and the main evidence provided is somewhat anecdotal (Figure 3). 
I would have expected an analysis over the "research datasets" as well, asking e.g., whether it is always true that when var(f) is smaller (for some definition of "small"), the performance of PPI++ is worse relative to classical. Figure 4 is nice as a simulated example, but it's less clear what the take-away should be in practice. Supplementary Material: Yes, I reviewed the entirety of the supplementary material. Relation To Broader Scientific Literature: Given the popularity of PPI++ and related methods, the goal of this paper is very worthwhile, and a strength in my view: Investigating failure modes of the method (particularly for smaller sample sizes) and proposing corrections. (W5) However, the observation that the $\lambda$ parameter in PPI++ is related to a regression coefficient is a relevant observation, but it's not entirely novel or surprising, as I discuss under "Essential References". The findings of this paper are primarily empirical: While there is some connection to ridge regression, there isn't a very rigorous theoretical argument presented as to why "shrinkage" should be beneficial here, other than the loose intuition that it might reduce the variance of $\lambda$. Essential References Not Discussed: (W6) I don't think these missing references are necessarily "essential", but I think they would provide important context as to the novelty of the core observation that $\lambda$ can be interpreted as a regression parameter. While this observation may be a "novel" observation in the context of PPI++, it's hardly novel in the broader context of similar estimators in the statistics community. [Cheng and Cai 2021], for instance, consider similar linear combinations of estimators in causal inference, and make this connection explicit, see e.g., Equation (9) of [Cheng and Cai 2021] and discussion below it, in particular the discussion of how "the MSE decomposition in (9) has the form of a ridge regression objective". See [Oberst et al.
2022] for a (somewhat out-dated) review of similar estimators that deal with a slightly different setting (since it involves combining biased and unbiased estimators, while in this setting both the classical and PPI++ estimators are unbiased). While the setting is a bit different, those papers consider a problem very closely related to Equation (7) of the present paper, where you want to minimize the MSE of a linear combination of a biased (here, $f(X)$) and unbiased estimator (here, $h(X)$). [Cheng and Cai 2021] D. Cheng and T. Cai. Adaptive combination of randomized and observational data. arXiv preprint (2111.15012v1), 11 2021. URL: http://arxiv.org/abs/2111.15012v1, arXiv:2111.15012v1. [Oberst et al. 2022] M. Oberst, A. D'Amour, M. Chen, Y. Wang, D. Sontag, and S. Yadlowsky. Understanding the risks and rewards of combining unbiased and possibly biased estimators, with applications to causal inference. arXiv preprint (2205.10467v2), May 2022. URL: http://arxiv.org/abs/2205.10467v2, arXiv:2205.10467v2. Other Strengths And Weaknesses: Let's start with strengths: First, as I mention in "relation to the broader scientific literature", I think the goal of this paper is very worthwhile (in particular, giving a critique of PPI++), and I was excited to read it. The observations, while very conceptual and somewhat heuristic, are nonetheless interesting: Given that $\lambda$ can be viewed as a regression parameter, why not regularize it, or more explicitly perform regression? Second, I appreciated the extra intuition provided in Section 6: If $\lambda$ is independent (easily accomplished via cross-fitting), then its variance plays a role in the excess variance (Equation 11), which gives some intuition and motivation for seeking to minimize its variance.
Given these strengths, I was a little disappointed not to see these insights get more fully developed, leaving the current paper a bit "thin": * (W5, see above) On the theoretical side, I would have expected a slightly more rigorous treatment of the impact and motivation for $\alpha$, e.g., what is the impact of $\alpha$ when you push it through the other calculations for e.g., the MSE of the resulting estimator? The "ridge regression" intuition is a bit hand-wavy, since we typically regularize when there are more parameters than data-points, which is not the case here. * (W7) On the empirical side, the fact that $\alpha$ is chosen by cross-validation seems to present some additional questions not explored here. For instance, how does this compare to just directly selecting $\lambda$ using cross-validation? How robust are these conclusions to the choice of $\alpha$? What kind of $\alpha$ values end up being selected here? * (W8) I found the experimental setup to be lacking in some other important details, e.g., on the nature of the LLM refusal dataset (see "Questions for Authors") Other Comments Or Suggestions: Typo in the abstract: "PPI++ method can perform even **worse worse** than classical inference" Questions For Authors: These questions are directly related to the "weaknesses" I mentioned above, and are given in the order of importance for changing my evaluation. These questions are most important: 1. (W2) What do the empirical results look like if you use MSE instead of MAE? Is there a reason to prefer MAE over MSE? 2. (W7) How do these results compare to just directly selecting $\lambda$ using cross-validation? How robust are these conclusions to the choice of $\alpha$? What kind of $\alpha$ values end up being selected using the present approach (very large, very small, etc)? 3. (W5) Can you precisely characterize the impact of $\alpha$ on the performance of the estimator, and the theoretically optimal choice of $\alpha$? 4. 
(W4) What is the association between $\text{var}(f)$ and the performance of PPI++ in the research datasets? Do the same trends hold? These questions are less relevant for changing my score, but I'm including them for completeness: * (W8) Could you say more about the source of the LLM Refusal Dataset, even in anonymous terms? E.g., is it a proprietary dataset, or is it a public dataset? How many labelled examples exist to estimate the ground truth refusal rate? What is the overall refusal rate (e.g., is it very low, or fairly high), etc? It is not necessary to respond to my weaknesses (W1) and (W6) which I would assume to be handled by adding some language around e.g., broader literature and how this work differs. (W3) is resolved by just adding some confidence intervals, which I assume is an easy fix that doesn't need to be commented on. Code Of Conduct: Affirmed. Overall Recommendation: 3
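The review's central observation, that the variance-minimizing $\lambda$ in PPI++ coincides (for large $N$) with the OLS slope from regressing $h(X)$ on $f(X)$, is easy to verify numerically. A minimal check under a simulated linear model (not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
f = rng.normal(size=5000)                         # model predictions f(X)
h = 0.7 * f + rng.normal(scale=0.5, size=5000)    # gold labels h(X)

# Variance-minimizing lambda from PPI++ (large-N limit): Cov(f, h) / Var(f)
lam_ppi = np.cov(f, h)[0, 1] / np.var(f, ddof=1)

# OLS slope from regressing h on f with an intercept
slope = np.polyfit(f, h, 1)[0]
# lam_ppi and slope agree to numerical precision
```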
Rebuttal 1: Rebuttal: Thank you very much for your insightful review. Your questions and comments have helped to make the draft much stronger and clearer. In the camera ready version of the paper, we will improve the related works section and better describe the LLM refusal dataset. We address your individual concerns below.

**Use of MAE instead of MSE** Evaluating our method in terms of MSE is an excellent suggestion. MAE was initially used as it is more interpretable. In the camera-ready version of the manuscript, we will be sure to use MSE instead of MAE. This change produces nearly identical results to those we presented in the paper. We have included a small set of results to demonstrate this.

|Dataset | MSE of PPI (n=5) | MSE of Ridge-PPI (n=5)|
|--|---|---|
|Alphafold | 0.042 | 0.019 |
|Ballots | 0.391 | 0.307 |
|Census Healthcare | 0.445 | 0.356 |
|Forest | 0.034 | 0.015 |
|Galaxies | 0.075 | 0.041 |
|Plankton | 0.03 | 0.012 |

**Expanding Section 6, Further Theoretical Justification for Ridge-PPI** Several reviewers requested further investigation of the theoretical analyses on where PPI++ succeeds and fails in section 6, as well as an extended theoretical justification of Ridge-PPI. Towards addressing these concerns, we produce below an expression for the excess risk of Ridge-PPI - an analogous expression to Expression 13 in the paper that makes the same convergence and independence assumptions.
We use the shorthand $\hat{\mu}_{PPI_a}$ for the PPI estimate (Expression 3 in the paper) using the Ridge-PPI lambda (Expression 8), as well as $V := Var[\hat{Cov}[f(X), h(X)]](2\mathbb{E}[f(X)]^2 + (\frac{1}{N} + \frac{1}{n})Var[f(X)])$: \begin{equation} Var[\hat{\mu}_{PPI_a}] - Var[\hat{\mu}_h] = \frac{V}{(Var[f(X)] + \alpha)^2} \end{equation} \begin{equation} - \frac{Var[h(X)]Corr(h(X),f(X))^2}{(1+\frac{n}{N})n} + \frac{Var[h(X)]Corr(h(X),f(X))^2\alpha^2}{(1+\frac{n}{N})n(Var[f(X)] + \alpha)^2} \end{equation} Taking the derivative of this expression and setting it to zero yields the solution: \begin{equation} \alpha^* = \frac{(1+\frac{n}{N})nV}{Cov[f(X), h(X)]^2} \end{equation} This quantity is difficult to estimate in practice, but supplies more insight into the dynamics of PPI. Notably, this optimal $\alpha$ is smaller in the case of greater covariance, and greater in the case of large V. Seeing as V is a large positive term in the risk, this expression balances between the potential variance and the potential variance reduction permitted by $Cov[f(X), h(X)]^2$. We can also now calculate the expected risk reduction from using Ridge-PPI vs PPI++: \begin{equation} Var[\hat{\mu}_{PPI_a}] - Var[\hat{\mu}_{PPI}] \end{equation} \begin{equation} = \frac{\alpha}{(Var[f(X)] + \alpha)^2}(\frac{Var[h(X)]Corr(h(X),f(X))^2\alpha}{(1+\frac{n}{N})n} - V\frac{2Var[f(X)] + \alpha}{Var[f(X)]^2}) \end{equation} This expression further demonstrates the potential harms of small $Var[f(X)]$: the smaller $Var[f(X)]$ is, the greater the improvement from using Ridge-PPI over PPI++. **Cross Validating $\lambda$** We performed some initial experiments using cross validation to fit a scalar regression parameter on a synthetic regression task. We found that while it can perform well in some circumstances, it requires two key assumptions: 1) that one knows a priori the possible range of regressor values, and 2) that one can use a dense grid of values to validate over (e.g.
a grid of 50 options in the [0, 1] interval works reasonably well, a grid of 10 does not). When these are both true, this cross-validated regression strategy is a constrained function class that can reduce variance. However, our use of cross validation for selecting an $\alpha$ parameter did not have these assumptions, as the $\lambda$ values we were effectively searching over were exclusively ridge regressors. We appreciate this comment, however, as it helped us to improve our understanding of the scalar regression problem and exemplified the computational efficiency of Ridge-PPI. **Creating Confidence Intervals** Reviewer NpF2 shared this concern. Due to the lack of a general rebuttal as well as space restrictions on individual rebuttals, we have included the response to this in the rebuttal to reviewer NpF2. Please read it to see our additional explanation for this issue. **Research Datasets and Var[f(X)]** We investigate whether the trends observed in Section 5.4 are persistent in the research datasets by calculating the ratio of the MSE at $n=5$ for PPI++ to the MSE at $n=5$ for Ridge-PPI for each dataset and taking its correlation with $Var[f(X)]$ for that dataset. We find that there is a strong anticorrelation between these values (Pearson r = -0.69), further supporting our hypothesis that small $Var[f(X)]$ hinders the efficacy of PPI++. While we do not have enough datapoints to make this correlation statistically significant, it serves as motivation for the more thorough analysis we present in section 6. --- Rebuttal Comment 1.1: Comment: I appreciate the additional derivations provided, and I'm glad to see that the story doesn't change much when considering MSE vs. MAE. I would encourage the authors to include these derivations in the final version, as well as others discussed in other responses (e.g., the calculation of the bias when $\lambda$ is not fit independently).
While I have not had time to verify the calculations myself, and while I find that many of these contributions are a bit incremental individually, I think they are substantial enough together to warrant increasing my score. --- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful response to our rebuttal. We will be sure to include these amendments in the camera-ready version of the paper.
Summary: The paper focuses on the specific scenario where PPI is used for mean estimation. The authors note an equivalence between the tuning parameter estimation for PPI++ and linear regression, and examine several modifications that can be derived. Results are validated through empirical evidence on several benchmark data sets. Claims And Evidence: The paper claims to provide a theoretical analysis of the link between PPI++ and linear regression, as well as building extensions for PPI derived from this link. Both of those claims are well supported. While the paper clearly states it is focused on mean estimation (it's in the title of the paper!), the abstract makes no reference to this and could be misleading. I would recommend the abstract is amended to be consistent. Methods And Evaluation Criteria: While the link between PPI++ and linear regression is explored theoretically, the extensions from sections 4.2 and 4.3 are only evaluated empirically. Given the concerning empirical evaluation in Appendix A, more theoretical evaluation would be preferable. Theoretical Claims: Proofs and derivations appeared correct. Experimental Designs Or Analyses: Experimental design was sound. Regarding section 5.4, I do not clearly understand what this evaluation is doing. I understand the input dataset comes from multiple LLMs, and my understanding is the dataset is being partitioned by LLM for this section. Given the following Section 6 is an extended discussion of these results, more framing may be appropriate. In particular, make it clear up front that the different LLMs have different variances, and perhaps discuss why this is true, if the reason is known. Supplementary Material: I reviewed the appendix, including the extended empirical results in Appendix A and the proofs/derivations in the following appendices.
Relation To Broader Scientific Literature: This paper discusses potential extensions for PPI, which is a general procedure for statistical inference that could impact many fields. That said, the extension is focused on one particular use case (mean estimation), which arguably is the most important use case, but nonetheless could limit the impact of the results. The discussion in Section 6 could be critical for the PPI literature as a whole, and deserves further study. Essential References Not Discussed: All necessary references appeared to be included Other Strengths And Weaknesses: The paper proposes several straightforward extensions to PPI for mean estimation that will likely improve results in practice. The exposition is generally very clear and easy to follow. The discussion in Section 6 is quite intriguing. However, while the paper presents several intriguing ideas, it ultimately feels somewhat incomplete. For example: - The paper only focuses on mean estimation, and makes no attempt to cover more general M-estimation that the original PPI papers cover. - There is no discussion or empirical results for inference procedures such as confidence intervals. - Footnote 1 mentions the introduction of bias when $n$ is small, but there is no further theoretical or empirical evaluation of this bias. - The results of Appendix A are concerning, and potentially undermine the extension in Section 4.3. - The discussion in Section 6 is very interesting, but feels incomplete. Other Comments Or Suggestions: - Should equation 9 be $g(f(X)) := \frac{1}{1+exp(-af(X) + b)}$? - The abstract contains the word 'worse' repeated: "can perform even worse worse than classical inference" - I'm not sure this paper is an appropriate place to call out a biased implementation of software. Questions For Authors: 1. Is this method relevant for more general M-estimation? The equivalence to OLS is obvious for mean estimation, but perhaps something analogous exists for other estimates of interest? 
2. Do the proposed extensions impact confidence intervals while retaining coverage, at least empirically? 3. Following footnote 1, have you performed an empirical evaluation of whether the bias exists in the empirical results? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to express our appreciation for your thoughtful review and constructive critique. We look forward to making the writing more clear following your points. Specifically, we will amend the abstract and make the description of the LLM refusal dataset more clear. We respond to individual points of yours below.

**Appendix A** Sigmoid PPI is intended for use with small sets of labelled examples. Within this regime, we see Sigmoid PPI regularly match the original PPI++ or make significant improvements. However, one potential fix for the results seen in Appendix A would be using the following alternative formula: \begin{equation} g'(f(X)) := \frac{1}{1 + \frac{n}{N}}\frac{1}{1 + exp(-\alpha f(X) + \beta)}, \end{equation} This would have the estimator behave more similarly to the classical estimator as n -> N, where the classical estimator becomes stronger. We are currently experimenting with this modification and hope to include the results in the camera ready version of the paper. Below we provide promising results of average MSE at the largest possible $n$ for the research datasets, demonstrating how this method (Sigmoid-Div) improves asymptotic performance:

|Dataset | PPI | Sigmoid-PPI | Sigmoid-Div|
|--------|------------------|------------------------|--------|
|Alphafold | 8.94e-05 | 0.0001 | 7.03e-05|
|Ballots | 1.6e-05 | 0.0002 | 1.53e-05|
|Forest | 0.00013 | 0.00018 | 0.00015 |
|Galaxies | 0.00024 | 0.00023 | 0.0002|
|Plankton | 0.00014 | 0.00013 | 0.00012|

**Expanding Section 6, Further Theoretical Justification for Ridge-PPI** This was a shared concern for reviewer aoGS as well. Due to the lack of a general rebuttal as well as space restrictions on individual rebuttals, we have included the response to this in the rebuttal to reviewer aoGS. Please read it to see our additional analysis for this issue.

**Bias of PPI++** Footnote 1 in the paper indicates that for small $n$, PPI++ will be a biased estimator.
Prior work only considered the asymptotic unbiasedness of $\lambda$ and there is no existing expression for this bias with small samples. Here, we produce such an expression and are happy to include it in the paper if the reviewer finds it helpful. The expression for this bias is the following (revealed by taking the expectation of PPI assuming $\lambda$ is not independent from the training data): \begin{equation} E[\hat{\mu}_{PPI}] - \mu^* = -Cov[\hat{\lambda}, \hat{\mu}_f] \end{equation} We note that this expression can be applied to any estimator of $\lambda$, such as Ridge-PPI. Using the definition of covariance can make this expression more interpretable: \begin{equation} E[\hat{\mu}_{PPI}] - \mu^* = -\sqrt{Var[\hat{\lambda}]Var[\hat{\mu}_f]}Corr[\hat{\lambda}, \hat{\mu}_f] \end{equation} We note that we generally do not include this bias term as it can be ignored in cases where we assume that the $\lambda$ parameter can be fit independently, such as section 6.1. Furthermore, since our methods focus on variance reduction (in line with prior literature) and were effective at reducing overall MSE/MAE, we consider a complete treatment of this bias to be out of scope for this work. **Other M-Estimators** We chose to focus our work on specific insights into the dynamics of PPI when used with mean estimation. While other M-estimators are interesting as well, we find them out of scope for our work, which is specifically about the small sample behavior of PPI for mean estimation. We believe the small sample focus of our work validates the contribution as novel and useful, despite not considering a wide range of M-estimators. **Creating Confidence Intervals** We note that throughout this work we did not consider the efficacy of constructing confidence intervals for any regime, since the central limit theorem (which past methods rely on) tends not to hold in small data regimes.
Since improving the efficacy of PPI in the small data regime was our primary concern, those intervals would not be valid for any of the methods that we compared with. However, the basic machinery from Angelopoulos et al. [2023] will work to create confidence intervals for Ridge-PPI, as constructing the interval is not impacted by how $\lambda$ is selected. Additionally, for Sigmoid-PPI, assuming that the parameters of the transformation are fit independently as in section 6.1, the confidence interval can be created using a very similar formula to Angelopoulos et al. [2023], with the exception that the transformed predictions are used in place of the original predictions: \begin{equation} \hat{\mu}_{PPI_g} \pm 1.96*\sqrt{\frac{\hat{Var}[g(f(X_i)) - h(X_i)]}{n} + \frac{\hat{Var}[g(f(X_i^u))]}{N}} \end{equation}
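The interval formula at the end of this rebuttal can be written out directly. Below is a sketch under the rebuttal's independence assumption; the function and variable names are my own, not the authors':

```python
import numpy as np

def ppi_mean_ci(g_unlab, g_lab, h_lab, z=1.96):
    """Normal-approximation interval of the form quoted in the rebuttal:
    point estimate +/- z * sqrt(Var[g(f) - h]/n + Var[g(f_unlab)]/N),
    where g is the (already transformed) prediction function."""
    n, N = len(h_lab), len(g_unlab)
    est = np.mean(g_unlab) + np.mean(h_lab - g_lab)   # PPI point estimate
    half = z * np.sqrt(np.var(g_lab - h_lab, ddof=1) / n
                       + np.var(g_unlab, ddof=1) / N)
    return est - half, est + half
```

For Ridge-PPI the same machinery applies with g(f) replaced by the lambda-scaled predictions, since (as the rebuttal notes) interval construction is not affected by how lambda was selected.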
Summary: The paper provides a method to improve over standard prediction-powered inference by weighting the correction factor in the same way that is done in control variates. The methods are simple and improve over existing baselines, and the authors provide an illuminating discussion. Claims And Evidence: Claims were clear and method is clear. My only concern is that of the variance of the estimated lambda that occurs due to estimation on data. For example, it would have been very useful to see the derivation with the variance with the optimal lambda_hat plugged in. Something like this was attempted in 6.2, but the estimated covariance term still exists in there, which makes the math less illuminating than seems possible. Methods And Evaluation Criteria: Methods are clear, evaluation criteria are sound. The main idea for the method is to see that lambda can be seen as a scalar "corrector" for f(X) to approximate h(X), whose mean one wants to estimate. This whole idea can more generally be thought of as a special case of variance reduction by averaging a conditional mean, and the conditional mean is estimated by regressing h(X) on f(X) over a simple function class that can be fit on few samples. The paper uses linear and logistic regression. The evaluation of the method is done on estimating means from few samples in problems ranging from protein function to segmentation. The evaluation showed that on all the small sample sizes considered, the proposed modifications to PPI improve over vanilla mean estimation, and also other modifications in the literature. I like the refusal rate example. Also, the authors should provide confidence intervals for the experiments. Theoretical Claims: Theory is correct. Experimental Designs Or Analyses: Experiments are valid and support the new methods clearly. Supplementary Material: Yes, the proofs.
Relation To Broader Scientific Literature: The method, as it stands, improves on the PPI methodology and the related work discusses relevant work about PPI modifications. Essential References Not Discussed: None here. Other Strengths And Weaknesses: Simple method, clear new insight about how to interpret PPI, and it works. Other Comments Or Suggestions: Please improve the quality of the figures, with larger font sizes and HD figures. Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: First of all, we thank you for your constructive feedback and positive reception of the work. We look forward to improving the manuscript based on your review. We will be sure to improve readability of the figures for the camera ready draft. Below, we respond to individual points you raised.

**Optimal Lambda Risk Expression** While Expression 5 in the paper does describe the risk of PPI for a static lambda, we substitute the optimal lambda presented in Expression 6 below. \begin{equation} = \frac{1}{n}(Var[h(X)] + \frac{Cov[f(X), h(X)]^2}{Var[f(X)]} - 2 Var[h(X)]Corr[h(X), f(X)]^2). \end{equation}

**Error Bars on MSE Plots** Thank you for your excellent point about the inclusion of error bars in our plots. For the camera ready version of the paper, we will re-run the experiments and include appropriate error bars. Here is a small set of 95\% confidence intervals for the normalized MSE (see figure 1) of different methods across 350 trials. Here the confidence is represented with a tuple (lower bound, upper bound):

|Dataset | PPI (n=5) | Ridge-PPI (n=5)| Sigmoid-PPI (n=5)|
|--------|------------------|------------------------|--------|
|Alphafold | (1.12, 1.65) | (0.49, 0.72) | (0.55, 0.82)|
|Ballots | (0.27, 0.47) | (0.24, 0.44) | (0.34, 0.6)|
|Census Healthcare | (0.71, 0.95) | (0.68, 0.92) | (0.58, 0.8)|
|Forest | (1.92, 3.36) | (0.71, 1.22) | (0.82, 1.09)|
|Galaxies | (0.84, 1.14) | (0.61, 0.9) | (0.71, 0.97)|
|Plankton | (1.23, 1.89) | (0.56, 0.76) | (0.54, 0.68)|
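As a rough numerical companion to the Ridge-PPI discussion in these rebuttals: Expression 8 shrinks the PPI++ lambda by adding alpha to the denominator of the covariance-over-variance ratio. A toy simulation (with a fixed illustrative alpha rather than the paper's cross-validated choice, and a simulated linear model of my own) reproduces the small-n effect the authors describe:

```python
import numpy as np

def ppi_mean(f_unlab, f_lab, h_lab, alpha=0.0):
    """PPI mean estimate; alpha=0 recovers PPI++, alpha>0 is Ridge-PPI."""
    lam = np.cov(f_lab, h_lab)[0, 1] / (np.var(f_lab, ddof=1) + alpha)
    return lam * np.mean(f_unlab) + np.mean(h_lab - lam * f_lab)

def mse(alpha, trials=2000, n=5, N=2000):
    rng = np.random.default_rng(0)      # same seed for both alphas: paired trials
    errs = []
    for _ in range(trials):
        f = rng.normal(size=n + N)
        h = f[:n] + rng.normal(size=n)  # true mean of h is 0
        errs.append(ppi_mean(f[n:], f[:n], h, alpha) ** 2)
    return float(np.mean(errs))
```

Because lambda is estimated from only five labelled points, plain PPI++ occasionally amplifies noise badly; the shrunken lambda trades a little signal for much less variance, which is the mechanism the rebuttals' excess-risk expressions formalize.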
Reflection System for the Abstraction and Reasoning Corpus
Reject
Summary: The authors propose an augmented ARC dataset, AugARC, that can be used for finetuning LLMs to solve ARC tasks. They also propose a two-stage system for solving ARC tasks that runs multiple ARC solvers in parallel for a solution followed by a reflection LLM that chooses the best solution. They demonstrate that finetuning LLMs on AugARC improves the number of ARC tasks solved and that the reflection system significantly improves the number of ARC tasks that are solved by the system. Claims And Evidence: Claims are partially supported, see experimental designs or analyses. Methods And Evaluation Criteria: Yes. Theoretical Claims: N/A. Experimental Designs Or Analyses: Figure 5, Table 4 and Table 5. - While Figure 5b gives the gain in the number of solutions for each pair of methods in the ensemble, it would be nice to have a deeper understanding of what kinds of solutions each method adds over the other. In particular, the comparison of solutions found by Claude and the DSL would be interesting to see which ones can be found by the LLM and which ones are better solved by the DSL. Comparison with LLM-guided program synthesis would also be interesting. - I don't understand Figure 5a. The DSL search is the best method that very few other methods can improve upon, so how can it be that there is such high solution overlap? - The best DSL solver already solves 160 tasks, an ensemble approach solves 161, and the proposed reflection system solves 166 tasks. Moreover, the usage of Llama-70b seems to make the DSL solver worse. Thus, the benefits of the reflection system need to be better articulated, especially if the improvement seems to come from using a more powerful LLM. Supplementary Material: N/A. Relation To Broader Scientific Literature: This paper proposes a solution to the ARC challenge. The data augmentation is not novel (e.g., see https://da-fr.github.io/arc-prize-2024/the_architects.pdf).
The idea to use an "LLM as judge" is also not novel, but does appear to be novel in the context of the ARC challenge. Essential References Not Discussed: Essential references are discussed. Other Strengths And Weaknesses: The idea to use an LLM to ensemble solutions is interesting. However, a deeper exploration of this idea is missing from the paper, as well as a deeper analysis of the benefits of using an LLM reflection system. Moreover, much time is spent on data augmentation and finetuning, which have been explored in prior works. Other Comments Or Suggestions: N/A. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 1
Summary: The paper introduces a reflection system and data generation techniques for ARC. The authors evaluate baseline LLMs on ARC and find somewhat unsatisfactory results, and try to address this in two ways: AugARC and the reflection system. They create AugARC, which consistently improves LLM performance through task rotations. Using augmented data, they fine-tune LLMs and observe significant accuracy gains. The Reflection System combines LLMs and a previous DSL solver to leverage their complementary strengths. This system solves 166/400 problems. Claims And Evidence: The claims in this paper are partially supported by experimental results, but there are notable weaknesses in the evidence provided. The authors primarily compare their Reflection System against older works, with limited comparison to state-of-the-art approaches. They claim significant improvements over previous methods, but the baselines they use (mainly DSL Search and a voting ensemble) are not recent. E.g., the comparisons are not drawn to the Induction/Transduction approach and data augmentation techniques of Li et al., or the TTT technique of Akyurek et al. (see the references below). These earlier papers have more general ways to construct datasets and perform training. The paper makes strong claims about the benefits of the Reflection System architecture, but doesn't sufficiently analyze why it works better than alternatives. In that, there's little analysis of the specific mechanisms that enable these improvements or detailed ablation studies that would validate specific choices. Methods And Evaluation Criteria: As mentioned above, baselines are lacking, and comparisons between AugARC and e.g. ARC Heavy and ARC Potpourri of Li et al. should at least be drawn. Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: The ARC benchmark has quite specific evaluation protocols, so I don't have concerns about how the ARC performance is evaluated, which this paper is mostly about. However, I have concerns about the lack of comparison to proper baselines. Supplementary Material: There does not seem to be supplementary material. Relation To Broader Scientific Literature: The paper's contributions are primarily focused on the specific ARC benchmark rather than introducing fundamentally new ideas to the field. The Reflection System builds on well-established ideas of ensembling and self-reflection in LLMs that have been explored extensively in other contexts. It is also difficult to judge the effectiveness given the lack of comparisons, but we do know other techniques achieve much larger numbers on this benchmark (e.g., TTT / BARC gets +50% on ARC eval), and a clear benefit is not articulated. Essential References Not Discussed: I think this paper lacks quite a few essential references, baselines, and methods used in these papers. Some of these include ways to generate augmented datasets and proper ablations. I think citing and comparing to them is warranted. Combining Induction and Transduction for Abstract Reasoning (Li et al.) {also see reARC in the paper} The Surprising Effectiveness of Test-Time Training for Abstract Reasoning (Akyurek et al.) CodeIt: Self-Improving Language Models with Prioritized Hindsight Replay (Butt et al.) There is quite a bit more, but it is easy to recurse from the references of these papers. Other Strengths And Weaknesses: The main weaknesses are the lack of baselines and detailed comparisons to existing works, which prevents me from understanding the contribution of the paper. There are a lot of parallels in terms of augmentation generation and reflection mechanisms, and unless proper ablations are in place or a meaningful comparison is in the paper, it's difficult to understand the benefit of this method.
Finally, the writing seems a bit rushed (e.g., the title still says "formatting instruction for ICML"), and certain subsections consist of a single paragraph and are not easy for me to understand. Other Comments Or Suggestions: - Questions For Authors: What happens if you compare AugARC to ReARC or ARC-Potpourri/Induction? Alternatively, what are the specific differences to the augmentation mechanisms in Akyurek et al.? What would the numbers look like if you were to test against the techniques in Li et al. or Akyurek et al., keeping the architecture / model choices otherwise fixed? Code Of Conduct: Affirmed. Overall Recommendation: 1
Summary: In this work, the authors examine the performance of large language models (LLMs) when paired with a program synthesis solver to tackle the ARC challenge. They also explore augmentation strategies to expand the training data size, which was limited to 400 tasks in the original version. Fine-tuning the LLM with the proposed augmentations enhances the final ARC-solving accuracy. Claims And Evidence: The claims are clear. Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: The paper does not include proofs or theoretical claims. Experimental Designs Or Analyses: The experimental design was described in the main paper. Supplementary Material: The supplementary was not reviewed. Relation To Broader Scientific Literature: This work examines the capability of AI systems to tackle abstract reasoning tasks. The abstract reasoning dataset being analyzed, ARC, presents challenges for AI due to its limited training set and the numerous tasks ("classes") involved. Essential References Not Discussed: Related work seems to be missing. In particular, [1,2] proposed augmentation strategies for the ARC dataset. The reviewer suggests the authors include experiments/comparisons with these augmentations. This will help emphasize the benefits of the proposed augmentation strategy and better reveal this work's contribution. Also, [1,2] incorporate LLMs into their pipeline to solve ARC. Can the authors compare their approach with theirs? [1] Akyürek, Ekin, et al. "The surprising effectiveness of test-time training for abstract reasoning." arXiv preprint arXiv:2411.07279 (2024). [2] Lee, Seungpil, et al. "Reasoning abilities of large language models: In-depth analysis on the abstraction and reasoning corpus." ACM Transactions on Intelligent Systems and Technology (2024). Other Strengths And Weaknesses: The reviewer's main concern is about the contribution of this work. 
This is because prior research has also (a) proposed augmentation strategies to enhance ARC performance and (b) utilized LLMs to address ARC. Another weakness of the approach is the necessity of augmenting the ARC dataset to enhance performance, as ARC was intentionally designed to assess a model's problem-solving capability when few examples are available. Another aspect is the paper's presentation and writing: the writing and clarity can be improved, and the paper title is also missing. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Summary: This paper proposes a “Reflection System” for solving the ARC challenge, combining multiple ARC solvers (notably DSL-based program synthesis and LLM-based solvers) and then using an additional “reflection” model to select among the candidate predictions. The authors also introduce AugARC, an augmented version of ARC tasks (with rotations, flips, permutations) and show that fine-tuning small LLMs on these augmented data can boost their direct accuracy on the ARC evaluation set. Ultimately, they claim that combining their best LLM solver with a DSL-based solver via a reflection model achieves 166/400 on the ARC evaluation tasks, surpassing prior systems that combine multiple methods via a simple voting ensemble. Claims And Evidence: The paper claims that its reflection-based combination of ARC solvers outperforms each solver alone (rising from 160 tasks to 166), and that augmenting ARC data improves fine-tuned LLMs. While the 160→166 boost is an increase, it is relatively small. Moreover, both **ARC augmentation** (see ARC-AGI 2024 SOTA: https://da-fr.github.io/arc-prize-2024/the_architects.pdf; Akyürek et al. (2024) *The Surprising Effectiveness of Test-Time Training*) and reflection-style **ensembling** (Li et al. (2024) *Combining Induction and Transduction*; “CodeIt,” Butt et al. (2024)) have been used in prior works, so the novelty is limited. The paper also does not compare in detail to these existing methods or demonstrate a fundamentally new approach to “broad generalization.” Methods And Evaluation Criteria: The paper’s reflection mechanism (an LLM “judge” selecting from independently obtained outputs) is a straightforward ensemble. While this approach is reasonable, a direct comparison to other ensemble methods (e.g., more advanced multi-solver interaction or deeper ablation of reflection’s decision process) is currently missing. 
Tables 2 and 3 omit crucial rows—for instance, **Claude 3 Opus** is missing despite being a major component of the best final ensemble. Including these rows would clarify how AugARC and fine-tuning affect the model's performance. Theoretical Claims: N/A Experimental Designs Or Analyses: Some experimental design details could be clarified: 1. Shot mismatches: In Table 2, it appears AugARC is tested in a 3-shot manner, whereas the baseline ARC is only tested 1-shot (?). A fairer comparison would be 3-shot vs. 3-shot, potentially with adjusted temperatures and sampling settings on the non-augmented tasks. 2. System transparency: It is unclear whether Claude 3 Opus in the final best system (DSL Search + Claude 3 Opus + GPT-4o reflection) is further fine-tuned. If it is, it should appear in Tables 2 or 3 to make the gain of augmentation/fine-tuning more transparent. Supplementary Material: N/A Relation To Broader Scientific Literature: The paper connects well to standard ARC references, though it does not thoroughly engage with other lines of ARC solver work on advanced data augmentation or test-time training approaches for ARC. Using an LLM as a "judge" or "referee" is broadly aligned with established "reflection" or "ensemble" approaches in the LLM literature. However, despite describing it as a "Reflection System," the paper's method simply picks among final outputs from independent solvers—there is no iterative feedback or "self-reflection" loop that modifies solver reasoning. It would be beneficial to (1) compare to other multi-step ensemble or reflection methods in detail, and (2) discuss the design choices around one-step vs. iterative or multi-round reflection/communication to clarify why a single-step judge was selected. Essential References Not Discussed: See "Claims And Evidence". Other Strengths And Weaknesses: Strengths 1. The reflection system is straightforward to implement and can scale easily with any new solver or LLM. 2.
Fine-tuning smaller LLMs with an augmented ARC set is a sensible idea and does improve performance on the evaluation tasks. 3. The study attempts a thorough measure of solution overlap among different solvers. Weaknesses 1. Comparisons to state-of-the-art or somewhat more advanced approaches (e.g., TTT or Induction/Transduction for ARC) are missing. It remains unclear if the proposed approach is truly better or if it simply reimplements existing ensemble ideas. 2. The data augmentation approach is not novel, and the authors do not measure each augmentation's distinct impact or compare it to other advanced transformations. 3. While the reflection system slightly outperforms a prior voting ensemble, a more thorough ablation of the reflection model or a broader set of reflection protocols is missing. 4. The presentation is sometimes unclear (e.g., 3-shot vs. 1-shot in Table 2, missing rows in both Tables 2 and 3). Other Comments Or Suggestions: N/A Questions For Authors: 1. Comparison to Baselines: How does AugARC differ from or outperform the data augmentation techniques in other SOTA solvers? (see "Claims And Evidence") 2. Reflection Mechanism: The reflection system simply lets an LLM pick among the final solutions from each solver. Did you consider an iterative reflection or solver-cooperation step? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
What makes an Ensemble (Un) Interpretable?
Accept (poster)
Summary: The paper provides a complexity-theoretic investigation into the interpretability of ensemble models, focusing on three major types of explanation queries (sufficient reasons, contrastive reasons, Shapley values, etc.) and three common base-model classes (decision trees, linear models, neural networks). The main thrust is that ensembles are often far more difficult to interpret, from a computational perspective, than single base models. The authors show that even small ensembles can push explanation queries into NP-complete, #P-complete, or even higher complexity classes, underscoring fundamental limitations on generating certain forms of explanations in a tractable manner. Interestingly, the authors also highlight nuanced differences among models: while even two linear models can already lead to intractability for some explanation queries, an ensemble of a few (small) decision trees may still be analyzed in polynomial or XP time under certain conditions. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, I did not find significant issues for this part. Theoretical Claims: Yes. For Proposition 4.2 (complexity separations in linear/tree ensembles) and Proposition 5.1 (dealing with constant-size base models), the reductions appear logically consistent. There is no obvious mismatch in how they construct the gadgets or use known NP-complete problems as a basis. Experimental Designs Or Analyses: It is not an experimental paper, so there are no empirical tests on real data sets or performance measures. The analysis is fully theoretical, focusing on polynomial and parameterized complexity arguments. Supplementary Material: No Relation To Broader Scientific Literature: The paper strongly relates to prior research on the computational complexity of interpretability (e.g., references to complexity analyses of single decision trees, linear models, and neural networks).
It also relates to the broader literature on knowledge compilation and logic-based explanation, as well as standard NP/#P-completeness references. Their parameterized-complexity approach extends prior lines of inquiry that used W[1]-hardness for single-tree explanation tasks. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths 1. Comprehensive discussion of three explanation types. The paper studies multiple explanation queries (feature-selection based, contrastive, Shapley) which are popular in different XAI branches. 2. Clear cases for different base models. By analyzing different base models (linear, tree, and neural nets) and structural parameters, the authors clarify precisely how and why ensembles become more difficult to interpret. 3. Well-structured theoretical results. The complexity classification includes membership, hardness, and parameterized analyses. Weaknesses 1. Limited discussion of practical methods. While the theoretical results are strong, there is relatively little discussion of approximate or heuristic strategies to circumvent these hardness results. It would be better to state how these hardness barriers affect real-world usage of ensemble interpretability methods. In addition, elaborating on how practical interpretability methods (e.g., SHAP) align with or contradict the complexity results would be valuable. 2. Scalability in practice. Although the paper mentions XP- or W[1]-type results, it is still somewhat unclear which of these might be feasible for moderately large ensembles (e.g., k=10 or k=20). 3. Discussion of more specialized ensemble schemes. The paper primarily focuses on majority/weighted voting, but some new ensemble methods apply meta-model strategies or dynamic weighting. It is recommended to provide a deeper analysis.
Other Comments Or Suggestions: No Questions For Authors: Some practitioners still use large ensembles (e.g., random forests with 100+ trees); do you have suggestions for which subsets of your results are more pressing in practice versus primarily theoretical? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback, and for recognizing the significance of our work. **Discussion on more specialized ensemble schemes** We thank the reviewer for raising this point. While our complexity results are established for both majority voting and weighted voting inference methods - covering a broad range of possible ensemble configurations and commonly used ensembles - many of our findings extend to other types of ensemble methods as well. We touch on this briefly in Appendix A.1, where we discuss how our results extend to certain other ensemble configurations. Specifically, for meta-learner ensembles (as discussed in A.1), if the meta-learner can simulate majority or weighted voting - which is feasible even with simple models like linear classifiers, and certainly with neural networks - then the *hardness* results we present also extend to this setting. However, the *membership* results are more nuanced and would depend on the specific architecture of the meta-learner. Similarly, for dynamic ensembles, whenever the dynamic behavior reduces to a static configuration (e.g., fixed weights), the *hardness* results continue to hold. However, as with meta-learners, the *membership* results would depend on the particular characteristics of the dynamic mechanism. We will highlight these points more clearly in the final version. **Practicality aspects and relation to existing explainability schemes** We thank the reviewer for raising this important point. We will clarify and expand our discussion of these aspects in the final version of the paper. We first emphasize that some of our results demonstrate the *intractability* of various explanation tasks, hence demonstrating that heuristic approaches may lack formal guarantees. 
For example, in the case of sufficiency-based explanations, this can imply that practical heuristics that are used in practice may produce explanations that are either insufficient (i.e., unfaithful) or suboptimal in terms of minimality - issues already noted in practical prior work that have examined these aspects (e.g., [1-2]). Similarly, for SHAP, our results suggest that heuristic methods like KernelSHAP or FastSHAP may significantly diverge from exact SHAP values. For tree ensembles, many computational challenges become efficiently solvable when both the number of trees and the number of leaves per tree are bounded. Furthermore—and importantly—we will clarify in our practical discussion that many of these computations are also *amenable to parallelization*. Lastly, since our work establishes *membership* in classes such as W[1] and related complexity classes, some of these problems can be reduced to well-known problems within these classes (e.g., $k$-Clique), for which efficient heuristics are already available [e.g., 3–4]. This opens up an interesting direction for future research, where such heuristics could be leveraged to solve these problems more efficiently. We also agree that an exciting direction for future work lies in exploring additional forms of approximate guarantees - such as fully polynomial-time approximation schemes (FPTAS) or fully polynomial-time randomized approximation schemes (FPRAS) - as well as investigating relaxed versions of certain explanation definitions, which may offer improved tractability. Given these considerations, the scalability of explanation methods like SHAP for tree ensembles largely hinges on the number of leaves per tree. When this number is relatively small, explanations remain tractable - even for considerably large ensembles such as those mentioned by the reviewer. 
However, a central practical insight from our results is that the interpretability benefits, from a computational perspective, are greater when using fewer but deeper trees rather than many shallow ones - an insight that can influence the preferred tree structure when interpretability is the goal. In contrast, integrating linear models into ensembles can significantly hinder interpretability, even with just a few base models, due to a rapid increase in the complexity of generating explanations. This highlights the comparative advantage of decision-tree-based ensembles over those incorporating linear models from an interpretability standpoint. We believe the strong theoretical foundation laid by our work can inspire further practical exploration of these diverse aspects, and we will revise the final version to expand this practical discussion. We thank the reviewer for raising this important point! [1] VeriX: Towards Verified Explainability of Deep Neural Networks (Wu et al., NeurIPS 2023) [2] Using MaxSAT for Efficient Explanations of Tree Ensembles (Izza et al., AAAI 2022) [3] Efficient Algorithms for Clique Problems (Vassilevska et al., Information Processing Letters) [4] Efficient Maximum Clique Computation over Large Sparse Graphs (Chang et al., KDD 2019)
Summary: The paper presents complexity results for a variety of explanation queries on different machine learning models with the main aim of comparing the behaviour of single models with ensembles. In particular, the paper considers the following explanation queries: Minimum Sufficient Reason (MSR), Minimum Contrastive Reason (or minimum change required) (MCR), Shapley Additive Explanations (SHAP), and variants of MSR such as check sufficient reason (CSR) and count completions (CC). Moreover, it considers the following models (and their respective ensembles): decision trees, linear models, and neural networks. Their main results (which are based on various complexity results for the above-mentioned queries on the above-mentioned models) are: - DTs are strictly more c-interpretable than ensembles of DTs for CSR, MSR, MCR, CC, and SHAP, and the same holds for linear models for CSR, MSR, and MCR. Moreover, assuming unary weights and biases, the same applies for linear models w.r.t. CC and SHAP. - The above does not hold for neural networks, i.e., for any query neural networks and their ensembles behave the same. This is based on the observation that ensembles of neural networks can be transformed into an equivalent single neural network in polytime. - Parameterized hardness results for ensembles of all model types with the size of any single model as a parameter for all query types. - Parameterized complexity results for ensembles of DTs and linear models with the number of models as a parameter showing that while this parameter does not help for linear models it can be helpful for DTs. ## Update after rebuttal I stay with my initial recommendation because the authors did not convince me that the overlapping results are not a problem. In particular, they did not even address all the overlaps in their rebuttal. Claims And Evidence: The claims are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes.
Theoretical Claims: I checked all proofs occurring in the main text as well as some proofs in the appendix and did not find any issues. Experimental Designs Or Analyses: n/a Supplementary Material: I checked the proof methods for all results given in the appendix as well as the deferred preliminaries. Relation To Broader Scientific Literature: This is the main problem of the paper, i.e., various results that are claimed to be new have already been shown in https://arxiv.org/pdf/2407.15780, which is also cited but not discussed in the paper. In particular, this holds for almost all results shown for DTs and ensembles of DTs for the query types MSR, MCR, and CSR. I am not sure why the authors did not see this; I can only imagine that the authors did not realise that MSR equals local abductive explanations, MCR equals local contrastive explanations, and https://arxiv.org/pdf/2407.15780 also contains results on CSR that were used for MSR. In particular, I found the following problems: - Proposition 4.2: The following results have already been shown in https://arxiv.org/pdf/2407.15780 for ensembles of DTs: coNP-hardness with respect to CSR (Lemma 42), NP-hardness with respect to MCR (Theorem 36), and coNP-hardness w.r.t. MSR (Theorem 36) - Proposition 5.1: The following results have already been shown in https://arxiv.org/pdf/2407.15780 for ensembles of DTs w.r.t. the size of any single model: para-coNP-hardness for CSR (Theorem 36), para-NP-hardness for MCR (Theorem 36), para-coNP-hardness for MSR (Theorem 36) - Proposition 5.3: The following results have already been shown in https://arxiv.org/pdf/2407.15780 for ensembles of DTs w.r.t. the number of ensemble models: coW[1]-hardness for CSR (Lemmas 29, 30, and 34), W[1]-hardness (Theorem 35) and membership in XP (Theorem 13) for MCR.
- Proposition 5.5: shown in Theorem 13 of https://arxiv.org/pdf/2407.15780 - Proposition 5.6: shown in Theorem 8 of https://arxiv.org/pdf/2407.15780 (in a much more general manner) Essential References Not Discussed: See previous item. Other Strengths And Weaknesses: The problems considered in the paper are well motivated and significant, and the obtained results are insightful and plentiful. Unfortunately, many of the obtained results have already been known previously and shown in https://arxiv.org/pdf/2407.15780 (see above). This is of course a major problem for the paper that cannot easily be repaired, since repairing it might (and probably will) change the motivation and problem setting of the paper. I therefore have to recommend that the paper is rejected. I would also recommend the authors to resolve this issue and resubmit. Other Comments Or Suggestions: I do not have additional comments. Questions For Authors: The main question I have is whether the authors agree with my assessment with respect to the results in https://arxiv.org/pdf/2407.15780. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and for acknowledging the significance of many of our results. **Relation to Ordyniak et al. (KR, November 2024)** We thank the reviewer for highlighting the important connection to the recent and highly significant work by Ordyniak et al., published in KR in November 2024. Their work addresses several related concepts that are indeed relevant to ours. While our work was developed independently and focuses on many complementary aspects, we fully recognize the significance of their contributions and agree that our discussion should be improved to better reflect this connection. We appreciate the reviewer highlighting certain intersections within a few propositions. However, we'd like to emphasize that these represent only a very small portion of our overall contributions and proofs, many of which the reviewer has positively acknowledged. Specifically, after careful examination, we found genuine intersections amounting to no more than 1–2 pages of proofs in total within the dozens of pages of proofs in the full 44-page appendix. Throughout our rebuttal, we will clarify this point with technical details and will further address and elaborate on this important related work in the final version. Nonetheless, we believe these refinements can be effectively addressed in a camera-ready revision, without requiring a new submission. **Technical elaboration.** Many of our *non-parameterized* results are entirely independent of Ordyniak et al., including technically challenging results like Lemma E.5, which resolves a long-standing issue, and a complex dynamic programming algorithm for SHAP under linear models with unary weights. Our parameterized results focus on three model classes: linear models, neural networks, and decision trees—of which the first two were not studied by Ordyniak et al.
A key contribution is our intractability results for ensembles of a constant number of linear models (Pages 31–44), which reveal surprisingly sharp complexity contrasts with decision tree ensembles. We also analyze explanations like CC and SHAP using non-trivial proof techniques. Our analysis goes beyond majority voting to include weighted voting, which involves some intricate constructions. Although there is some intersection with Ordyniak et al. in the parameterized results for CSR/MCR/MSR on decision tree ensembles using majority voting, the bulk of our proofs for these classes are distinct as well—as we elaborate below. For Proposition 4.2, our proof applies to a broader class of *poly-subset-constructible models*, including decision trees, linear models, neural networks, and more. We also cover the CC and SHAP queries. While there is some intersection in our results for CSR and MCR, our results for these are mainly corollaries and are not central to the technical core of this proposition—CSR isn’t even listed as a novel contribution of ours in Table 1, and we’ll revise MCR similarly. Finally, for the MSR query, we prove $\Sigma^P_2$-hardness via a non-trivial construction that goes beyond coNP-hardness. Similarly, Proposition 5.1 applies to the broader class of poly-subset-constructible models—and extends the analysis to CC and SHAP. While the CSR/MSR/MCR results for decision trees intersect, this is again only an extremely brief corollary (lines 1685–1688 only). For Proposition 5.3, we agree on the MCR and CSR *hardness* proofs and will clarify this. However, our results also present non-trivial membership results—coW[1] for CSR, W[P] for MCR—and additional findings like #W[1]-hardness and membership results for SHAP and CC. We also analyze MSR, showing para-NP-hardness and containment in XNP.
Lastly, while we agree Propositions 5.5/6 can be inferred from Ordyniak et al., in our work these too are presented more as direct corollaries rather than central contributions. We can clarify this by revising the very brief quarter-page section in the main text discussing these results, to make their connection to that work more explicit. **Summary.** We fully acknowledge the highly valuable contributions of the work of Ordyniak et al. and agree with the reviewer that their work deserves a more thorough discussion. While there are some technical intersections, we believe they are minor. Our main contributions and the current structure of the paper—which emphasize the surprisingly contrasting behaviors across different model families (e.g., linear models vs. decision trees) and offer a nuanced analysis of the complexity gaps between base models and ensembles—can largely remain unchanged. We believe the clarifications regarding the connection to Ordyniak et al. are reasonable to address in a camera-ready revision, without requiring a separate submission. We will incorporate these clarifications by expanding on their work in the introduction, related work, and the relevant sections of the main text and appendix. We thank the reviewer for highlighting this very important point!
Summary: This paper studies the interpretability of ensembles of models from computational complexity theory. The authors show both negative results (e.g., intractability even for constant-size models) and positive results (e.g., tractability for small decision tree ensembles). Claims And Evidence: Yes, there are proofs. Methods And Evaluation Criteria: The paper is a purely theoretical work with no experiments. Theoretical Claims: Yes, there are proofs. Experimental Designs Or Analyses: No, there are no experiments. Supplementary Material: Yes, some of the proofs. Relation To Broader Scientific Literature: This paper has an impact on specific claims surrounding the explainability of ensemble models. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: Novelty and Broad Scope: The paper provides a formal, complexity-theoretic framework for ensemble interpretability, addressing underexplored (but sometimes obvious) questions. For instance, the fact that k-ensemble decision trees are strictly more interpretable than k-ensemble linear models was not proved before. The analysis spans multiple explanation types (sufficient reasons, contrastive explanations, Shapley values) and base models. Thus, the paper gives a broader picture. Limitations: The paper is very hard to read, with several notions from complexity theory that I think are not necessary. Some of the proofs (e.g., Proposition E.1) are dense and could benefit from higher-level intuition in the main text. The paper addresses questions with obvious answers. For instance, Section 4 studies the comparison between the explainability of base models and their ensemble counterparts. It is obvious that the former is more easily interpretable than the latter. I don't see why we need a study for this purpose. Why is it even important to answer these questions? There are no experiments to show the applicability of the theory discussed here. Other Comments Or Suggestions: No.
Questions For Authors: The analysis assumes feature independence for Shapley value computation. How would feature correlations impact the complexity results? Would relaxing this assumption require new complexity classes or render the problem fundamentally harder? The paper shows that linear model ensembles become intractable with just two base models, while decision tree ensembles can be tractable for fixed k. What inherent structural property of decision trees (e.g., axis-aligned splits, hierarchical structure) enables this tractability, and why does this property fail for linear models? Is there a formal characterization of model classes that avoid this complexity explosion? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and for recognizing the importance of the results provided in our work. **The importance of complexity separations between base-models and ensembles** While the reviewer acknowledges the importance of some of our unexpected results, such as the substantial complexity differences between ensembles of decision trees and linear models, they question the value of including more intuitive findings — like the complexity gap between base models and their ensembles. We appreciate this and clarify that, although it is intuitive to expect ensembles to be more complex, the *exact nature* of this complexity gap is not at all well understood. Our results show that while such gaps often appear, they sometimes do not — highlighting the need to understand when and why they arise. For instance, ensembles of neural networks *do not* lead to increased complexity, in contrast to ensembles of decision trees and linear models. Furthermore, we demonstrate that the presence of a complexity gap also depends on the type of explanation used — for instance, linear models and their ensembles exhibit no complexity gap under CC and SHAP. It is interesting to note, though, that when the weights of an ensemble of linear models are encoded in *unary*, the complexity gap for CC and SHAP *does* reappear. This finding emphasizes the importance of a linear model’s expressiveness in driving this effect and reveals another unexpected insight into how the complexity gap between ensembles and their base models varies across different settings. To summarize, although the broad distinction between ensembles and base models might be anticipated, our findings reveal various subtle and nontrivial differences across settings — differences that depend on both the model type and the explanation method. This highlights the importance of a thorough and rigorous analysis of this dimension. 
**What structural properties drive the complexity explosion in linear models versus decision trees?** We thank the reviewer for this insightful question, and we will elaborate on this point further in the final version. Intuitively, decision trees have a leaf structure that allows for enumeration over possible outcomes — this enumeration becomes more complex in ensembles, but the increase is gradual. In contrast, linear models lack a comparable enumerable structure, making it impossible to iterate over possible assignments in a similar way. This leads to the surprising result that even a strikingly small ensemble of linear models can already render interpretation computationally intractable. We agree that exploring alternative base models or more structured approaches that retain the enumerable properties of decision trees is a promising direction for future research. We will include this direction in our final version. An important question moving forward is how to design models that balance expressivity and predictive power with the ability to compute explanations efficiently. **Impact of feature independence** We note that our work follows common conventions used in previous computational complexity studies of SHAP (e.g., [1,2]) and in standard implementations like KernelSHAP [3], by assuming feature independence. As the reviewer correctly pointed out—and as also noted in [1]—relaxing this assumption can render the computation of SHAP intractable. Intuitively, even when using a simple model such as a decision tree, a highly complex and expressive data distribution can dominate the computational burden, ultimately making the SHAP computation hard. However, recent work [4] has shown that tractability can still be achieved under less expressive, but non-independent distributions, such as those exhibiting *Markovian* dependencies. 
Identifying broader classes of non-independent distributions that still allow efficient computation remains an important and interesting research direction. We believe that linking this line of research with our tractability results for SHAP presents a promising direction for future exploration. We thank the reviewer for raising this point and will note it as a direction for future work. **Improving clarity of points** Following the reviewer’s comments, in the final version, we will enhance the technical presentation of the paper by simplifying some complexity-related notations, clarifying the intuitive exposition of the results, and reducing the density of certain propositions, as suggested. We thank the reviewer for highlighting these points. [1] On the Tractability of SHAP Explanations (Van den Broeck et al., JAIR 2022) [2] On the Complexity of SHAP-Score-Based Explanations: Tractability via Knowledge Compilation and Non-Approximability Results (Arenas et al., JMLR 2023) [3] A Unified Approach to Interpreting Model Predictions (Lundberg et al., NeurIPS 2017) [4] On the Tractability of SHAP Explanations Under Markovian Distributions (Marzouk et al., ICML 2024)
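As context for the tractability discussion above: under the feature-independence convention with a fixed point baseline for absent features, exact Shapley values can be computed by brute-force enumeration over all coalitions, which is exponential in the number of features. This is precisely the naive cost that model-specific tractability results (e.g. for single decision trees) avoid. A minimal sketch with a hypothetical toy linear model, not tied to any specific result in the paper:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline, n):
    """Exact Shapley values, with features outside the coalition fixed
    to a point baseline (a common feature-independence convention).
    The nested loops enumerate all coalitions -- exponential in n."""
    def f(S):
        return model([x[i] if i in S else baseline[i] for i in range(n)])
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = set(S)
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += w * (f(S | {i}) - f(S))
    return phi

# Hypothetical toy model: a single linear predictor f(z) = 2*z0 + z1.
linear = lambda z: 2 * z[0] + z[1]
phi = shapley_values(linear, x=[1, 1], baseline=[0, 0], n=2)
# For a single linear model, feature i's attribution is w_i * (x_i - baseline_i).
```

For a single linear model this recovers the closed-form attribution cheaply; the hardness results discussed above concern ensembles, where no such shortcut is known.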
Zero Shot Generalization of Vision-Based RL Without Data Augmentation
Accept (poster)
Summary: The authors propose a method for generalizing vision-based RL policies in the presence of image distribution changes or distractors, such as modifying the color scheme of the image. Their method aims to learn a representation that disentangles the different components of an image, specifically into two types: task-relevant and task-irrelevant features. Ideally, if a given feature captures either only task-relevant or only task-irrelevant information, then generalization is possible because for some downstream image with distractors, those distractors should get filtered into the irrelevant features while the relevant features still capture only the needed information. Claims And Evidence: Their claims, that disentangled visual representations are necessary for generalization, are well-motivated both intuitively and by making some arguments from biology. Their experiments show that their proposed approach outperforms other approaches without using data augmentation, except for a baseline that uses a large pre-training dataset. Methods And Evaluation Criteria: To my knowledge, the benchmark environments they use are standard for this type of RL. Theoretical Claims: I checked the proofs in the appendix and they seem correct to me. Experimental Designs Or Analyses: The described experiments are reasonable given the problem formulation, though I did not check the code. Supplementary Material: I did not review supplementary material. Relation To Broader Scientific Literature: This work relates to broader literature on image processing, RL, image-based RL, and representation learning. This work seems to be mostly connecting some prior work in disentangled representation learning with image-based RL. Essential References Not Discussed: I am not sure if this is strictly necessary, but bi-simulation (e.g. work from Amy Zhang et al.) may be worth looking into. 
Other Strengths And Weaknesses: Strengths: - Paper is well written - The introduction and motivation are very clear Weaknesses: - Some figures providing qualitative examples would be helpful. Specifically, the following two: - A figure showing the architecture and inference procedure for the representation, especially relating to the codebook. This part of the paper was harder to parse, and a figure may clarify things. - A figure illustrating the learned representation. For example, show the same trajectory but with two different coloring schemes, and plot the representations as this trajectory progresses. If the algorithm is working correctly, all relevant features should closely match, while irrelevant features may be wildly different. Additionally, show if you can use this approach to modify the images. For example, modify the observation to be a different color, compute its representation, set its irrelevant features to match the training data, then regenerate the image. You should be able to effectively change the image visuals using this approach without losing the relevant information, such as joint position. Other Comments Or Suggestions: Minor: - Notation in the background section is a bit unclear. For example, you use R(s_t, a_t) for reward, but you also use r_t. r_t is not defined anywhere, and I assume this is reward at time t. Also, the equation for the Bellman residual J(Q) has reward listed twice and a missing parenthesis in the subscript notation. Lastly, the policy's objective is to maximize value, so this should either be indicated or a negative sign should be included. - Quotation marks are all double end quotes. Questions For Authors: 1) What happens if you use too large a latent space |z_d|? Does this break things? 2) Since each dimension is treated independently, the scaling of the number of codebooks is exponential in |z_d|. Is this correct? Is there a way around this somehow? Please discuss. Code Of Conduct: Affirmed. 
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your in-depth feedback and for providing references to work on bi-simulation for visual generalization. We will add bi-simulation to the related works section and fix the reward / RL objective notation issues for the final iteration of the manuscript. We respond to individual comments and questions below. **Bi-simulation may be worth looking into.** Thank you for bringing this to our attention and it is indeed relevant. It aligns with data augmentation (DA) and latent regularization methods like our RePo baseline. From Appendix A.1 Equation 8, DA is defined as attempting to learn an “optimality invariant” Q-function such that $Q^*(\textbf{z}, a) = Q^*(\textbf{z}', a)$ for two states $\textbf{z}$ and $\textbf{z'}$ that are semantically the same but perhaps differ visually. Similarly, bi-simulation defines an optimal representation where states that are associated with similar return are closer together in some metric space, and one where task-irrelevant features are removed from the representation. However, our core claim is that for generalist agents, what is “task-irrelevant” can vary depending on the task, so removing such information may be detrimental. For example, a flying bird is irrelevant to autonomous driving until it becomes a collision risk. Rather than removing “task-irrelevant” features, we argue that generalist agents should retain all information about the environment, where the onus is on the policy network to decide what variables are relevant for taking optimal actions. Since our method uses a reconstruction loss, we retain all information about the environment but specifically in a factorized representation, making our method *equivariant* to visual distribution shifts. Thus, our method makes no distinction between task-relevant and task-irrelevant features – it simply attempts to factorize the latent structure into separate dimensions without bias. 
The policy and critic networks decide what is task-relevant and what isn’t. Though this can make our method sensitive to “task-irrelevant” features for any given task, the associative memory component mitigates much of the problem by mapping OOD values (e.g., a new robot color at test time) to known values seen during training before passing the representation to the actor/critic networks. **A figure showing the architecture and...** Thank you for your recommendation. We will update Figure 1 to include a diagram showing the inference procedure of the associative latent model. **A figure illustrating the learned representation...** Here is the first figure requested by the reviewer: https://drive.google.com/file/d/1P0zkA4s4bRJoNWCNhLWmqKLDOUI-GmbW/view?usp=sharing. We take our model ($z_d=12$) trained on the standard, unmodified DMControl environment and perform rollouts on two instances of the color-hard evaluation environment with the same initial conditions. The two plots in the top row show the values that each of the 12 latent dimensions takes on over the course of the trajectory for both environments. The bottom row of images corresponds to the initial state of the latent trajectory plots in the top row. We find that the latent dimensions that vary wildly are the same in the left and right plot, specifically the ones colored red, pink, green, and brown. As the reviewer predicted, some of the dimensions vary significantly while others are more similar throughout the rollout, likely representing the divide between task-relevant and task-irrelevant features. **Additionally, show if you can use this approach to modify the images...** Regarding the reviewer’s second point, we do precisely this in Figure 6 (bottom) and Figure 8 in the appendix (A.3). In this experiment, we take our model trained on the color-hard environment, sample a batch of images with randomized colors, and encode them into our disentangled latent representation. 
We then hold all but one latent dimension fixed, interpolate that latent dimension from min to max value (x-axis), and visualize the resulting image using the decoder. The y-axis is several of the latent dimensions we interpolated. From Figure 6, we can see that interpolating the first two latent dimensions (first two rows) corresponds to changing the color of the scene, while the latent dimension corresponding to the bottom row changes the joint angle of the left knee. **What happens if you use too large a latent space |z_d|? Does this break things?** This doesn't break the algorithm, but it can cause performance degradation if $|z_d|$ strays too far away from the true number of sources of variation. Several of the latent dimensions will end up corresponding to the same physical attributes. We provide a study on the effects of $|z_d|$ in the Appendix Section A.8. **Since each dimension is treated independently...** The number of codebooks corresponds 1:1 to the number of dimensions in the latent space, and so the scaling is linear, i.e., if $|z_d|$ is 20, then we have 20 codebooks. 
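The per-dimension codebook structure described above can be sketched as follows. This is an illustrative sketch with hypothetical sizes, not the authors' implementation: each latent dimension gets its own independent scalar codebook, so storage grows linearly in `z_d`:

```python
import numpy as np

# Hypothetical sizes -- the codebook size used in the paper is not stated here.
z_d, codes_per_dim = 20, 16
rng = np.random.default_rng(0)

# One independent scalar codebook per latent dimension: storage is
# z_d * codes_per_dim values, i.e. linear (not exponential) in z_d.
codebooks = rng.normal(size=(z_d, codes_per_dim))

def quantize(z):
    """Snap each latent dimension to the nearest code in that
    dimension's own codebook (QLAE-style per-dimension quantization)."""
    idx = np.abs(codebooks - z[:, None]).argmin(axis=1)
    return codebooks[np.arange(z_d), idx]

z_q = quantize(rng.normal(size=z_d))
```

The number of *representable combinations* is of course exponential (`codes_per_dim ** z_d`), but that expressiveness comes for free; only the stored parameters scale linearly.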
--- Reply to Comment 1.1.1: Comment: Thank you for responding to our rebuttal and for the suggestions on the plot. Here is the updated figure: https://drive.google.com/file/d/1btjbLQbaX1VaM3i8iLaFhVbj_PiSshoF/view?usp=sharing. We decided to create two rows of 12 plots because the trajectories of some of the latent dimensions match to the point where they may be indistinguishable. The left Walker image corresponds to the initial state of the top row of latent trajectories, while the right Walker image corresponds to the bottom row. Both environments are rolled out using the same actions as the reviewer suggested. Again, we find that many of the latent dimensions are very similar (if not the same), while some vary significantly. Regarding the reviewer's other comments: 3. We appreciate the feedback on the captions and will improve them in the final version of the paper. 4. We agree!
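The associative-memory behavior discussed in this thread (mapping OOD latent values back to values seen during training) resembles a single update of a modern, attention-based Hopfield network. A minimal sketch under that assumption; the memory matrix and inverse temperature `beta` here are illustrative, not the authors' trained components:

```python
import numpy as np

def hopfield_retrieve(query, memories, beta=8.0):
    """One update of a modern (attention-based) Hopfield network:
    softmax(beta * M q) M. With large beta, an out-of-distribution
    query is snapped onto the closest stored pattern."""
    logits = beta * memories @ query
    w = np.exp(logits - logits.max())
    w = w / w.sum()
    return w @ memories

# Two stored patterns and a perturbed (OOD-ish) query -- illustrative only.
memories = np.array([[1.0, 0.0], [0.0, 1.0]])
out = hopfield_retrieve(np.array([0.9, 0.2]), memories)
```

The retrieved vector lies much closer to the first stored pattern than the raw query does, which is the mechanism by which an OOD observation (e.g. a new robot color) can be mapped onto a familiar latent value before reaching the actor/critic.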
Summary: This paper proposes Associative Latent DisentAnglement (ALDA) that builds on standard off-policy RL towards zero-shot generalization. It learns a disentangled representation from the training data and then uses an associative memory model to recover data points in the original training distribution given OOD data. The authors also prove that data augmentation methods can be considered as a weak disentanglement. Experiments show that the proposed methods can outperform most baselines. Claims And Evidence: I think all the claims are well supported. Methods And Evaluation Criteria: 1. The proposed algorithm is simple and well-motivated. The Association strategy is also elegant and effective. 2. However, I have some concerns about its novelty. Although disentanglement is highlighted throughout the paper, it looks like a simple application of the existing QLAE algorithm. There are no specific adaptations for the concrete vision-based RL scenario. Especially, the authors do not conduct any disentanglement in the temporal domain. This makes it impossible to analyze the dynamic cues from vision inputs despite their importance in many RL tasks. Theoretical Claims: The authors provide a proof that data augmentation is a weak disentanglement of the latent space. This provides a good insight to building the connection between previous literature and this work, and also gives a good motivation for learning a disentangled representation. I do not see any mistakes in the proof. Experimental Designs Or Analyses: 1. It is great that the authors conduct extensive experiments to compare with multiple baselines on different tasks. 2. I am not sure whether these experiment setups are common knowledge in this field, but I would encourage the authors to explain some settings at least in the appendix. I am not sure whether each experiment is in-distribution or out-of-distribution. If they are OOD, how are they OOD? Do they have OOD visual appearances or dynamics? 
Is the OOD interpolation or extrapolation? 3. Besides, it would also be helpful to provide some failure cases to show the limitation of the OOD generalization. 4. Since disentanglement is an emphasis of this paper, it is great to have some analysis about how the representation is disentangled and what is the physical meaning of each component if possible. Supplementary Material: I have read the supplementary material. Relation To Broader Scientific Literature: There is a concern about novelty mentioned in "Methods And Evaluation Criteria". Essential References Not Discussed: [1] shares a very similar insight with this paper about disentangled representation for generalization. Since it is a key contribution of the proposed algorithm, I believe the authors should have some discussions related to [1]. [1] Wu, Zheng, et al. "Zero-shot policy transfer with disentangled task representation of meta-reinforcement learning." 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2023. Other Strengths And Weaknesses: Please refer to the previous parts. Other Comments Or Suggestions: Please refer to the previous parts. Questions For Authors: The authors can consider replying to my concerns mentioned in previous parts. I am not an expert in the area of RL, so feel free to point them out if I have any misunderstandings. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your in-depth feedback and for providing the additional reference. We will add it to the related works and discussion sections for the camera-ready version of our manuscript. We respond to individual comments and concerns below. **However, I have some concerns in its novelty...** As the reviewer correctly pointed out, self-supervised disentanglement methods currently do not provide a way to extract or disentangle temporal information. To address this, we take a fixed window of consecutive image observations, encode each observation *separately* into disentangled latent embeddings, and then extract temporal information via a 1D convolutional network (See figure 2), which is one of our novel contributions. Our other novel contributions are as follows: - We show that the latent model in QLAE is equivalent to a Hopfield network with fixed, predetermined memories when the quantization loss is removed. We replace this with the attention mechanism used in modern Hopfield networks, boosting performance over standard QLAE. - Prior RL disentanglement methods, such as DARLA, use a two-stage approach—disentangling latents with random policy data and then training an optimal policy on the fixed representation. This approach is suboptimal because (a) the random agent may not explore the full state space, and (b) critic gradients cannot backpropagate to the latent space or encoder, which is critical for good performance. Our framework jointly disentangles the latent space and trains the policy while allowing critic gradients to update the latent model, leading to significantly better performance than DARLA. **I am not sure whether these experiment setups are common knowledge in this field, but I would encourage the authors to explain some settings at least in the appendix...** We will add a summary of the DMControl Generalization Benchmark (DMCGB) to the appendix in the final, camera-ready version. 
For the reviewer’s reference, DMCGB is a wrapper over the standard DMControl (DMC) benchmark, which focuses on optimal control. DMCGB introduces visual distribution shifts to assess zero-shot generalization of agents trained on high-dimensional image observations. The “in-distribution” (or “training” environment as we refer to it in the paper) is the standard DMC benchmark, which emits unmodified image observations. The “color-hard” environment is an OOD setting that perturbs colors randomly on reset, while the DistractingCS environment applies camera jitter and plays random videos from a pre-recorded dataset. Both environments modify only visuals, leaving task dynamics unchanged. Our method trains solely on the unmodified DMC environment and is periodically evaluated on DMCGB environments, testing extrapolative generalization. In contrast, methods like SVEA apply random transformations such as overlaying images from the Places Dataset containing 10 million real world images, or applying random convolutions which change the colors in the scene, making their generalization results more representative of interpolative generalization. **Besides, it would also be helpful to provide some failure cases to show the limitation of the OOD generalization.** The results on the DistractingCS environment highlight the limitations of our method’s OOD generalization. While our performance is on par with SVEA, an ideal model would distinguish between the distracting background video and the agent in the foreground, resulting in minimal performance loss. The observed performance drop suggests room for improvement, making this a promising direction for future research. **Since disentanglement is an emphasis of this paper, it is great to have some analysis...** The representation is disentangled such that each latent dimension corresponds to one unique aspect of the image. 
As an example, in the Walker2D task, a given dimension could be the color of the robot, one of the robot’s joint angles, the floor, the sky, etc. We qualitatively show the physical meanings of some of the latent dimensions in the “latent traversal plots” in Figure 6 in the main paper, and in the appendix section A.3. In these experiments, we sample a batch of images, set all but one latent dimension static, and then interpolate one latent dimension from min to max value (x-axis) and generate the resulting images using the decoder. The y-axis (rows) are interpolations of different dimensions of the latent space. This allows us to see the physical meaning of each latent dimension, and indeed we find that the latent variables correspond to unique attributes such as a joint angle or the color of the robot/background as with Figure 6 (bottom) where the colors are randomized. --- Rebuttal Comment 1.1: Comment: I appreciate the efforts of authors in writing this rebuttal. This rebuttal can solve my concerns, so I will keep my score. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer for taking time to read our rebuttal and finalize their score. Given that the discussion deadline is approaching, we look forward to hearing back from all reviewers on their final scores and are happy to answer any remaining questions / concerns.
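The latent traversal procedure described in this rebuttal can be sketched as follows. The decoder here is a stand-in linear map for illustration only; the actual model uses a trained convolutional decoder:

```python
import numpy as np

def latent_traversal(decoder, z, dim, lo=-1.0, hi=1.0, steps=8):
    """Hold every latent dimension fixed except `dim`, sweep it from lo
    to hi, and decode each point. In a disentangled space, only one
    visual attribute (e.g. a joint angle or a color) changes per row."""
    frames = []
    for v in np.linspace(lo, hi, steps):
        z_mod = z.copy()
        z_mod[dim] = v
        frames.append(decoder(z_mod))
    return np.stack(frames)

# Stand-in "decoder" mapping a 12-d latent to an 8x8 image -- purely
# illustrative; the paper's decoder is a neural network.
rng = np.random.default_rng(0)
W = rng.normal(size=(64, 12))
decoder = lambda z: (W @ z).reshape(8, 8)

row = latent_traversal(decoder, np.zeros(12), dim=3)
```

Each returned row of frames corresponds to one row of a traversal figure like Figure 6: the x-axis is the swept value, and each row interpolates a different latent dimension.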
Summary: The paper introduces Associative Latent Disentanglement (ALDA), an approach to zero-shot generalization in vision-based reinforcement learning (RL) without relying on data augmentation. ALDA leverages disentangled representations and associative memory mechanisms to enable RL agents to generalize to novel environments by factorizing latent spaces into modular components, allowing for independent adaptation of task-relevant and task-irrelevant features. Claims And Evidence: The claims in the paper are generally supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem of zero-shot generalization in vision-based RL. Theoretical Claims: The theoretical claims, particularly the connection between data augmentation and weak disentanglement, appear well-reasoned. Experimental Designs Or Analyses: The experimental design is generally sound, with well-chosen benchmarks, multiple baselines, and ablation studies. The inclusion of latent traversals and β-study strengthens the analysis. Supplementary Material: The supplementary material includes additional latent traversal visualizations, β-study ablations, framestack comparisons, and proof details. Relation To Broader Scientific Literature: The paper builds on existing work in vision-based RL, disentangled representation learning, and associative memory. It extends prior research on disentanglement in RL (e.g., DARLA) by integrating modern Hopfield networks for associative memory. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: -- The integration of disentangled representation learning with associative memory is novel in the context of vision-based RL. The theoretical perspective connecting data augmentation to weak disentanglement is insightful. 
-- The work addresses a crucial challenge in RL generalization, providing a potential alternative to data augmentation that could improve scalability and efficiency. -- Strong baseline comparisons, ablations, and visualization techniques support the claims. Weaknesses: -- Some theoretical discussions, particularly regarding disentanglement and associative memory mechanisms, could be more clearly explained for a broader audience. -- While ablations exist, a more controlled comparison of ALDA with and without associative memory would further clarify its unique benefits. Other Comments Or Suggestions: N/A Questions For Authors: 1. How does ALDA perform if the associative memory component is removed or replaced with a simpler alternative? 2. Have you tested ALDA on more complex or real-world RL tasks beyond DeepMind Control Suite? How well does it scale? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback and suggestions on how we could improve the paper's clarity. We respond to individual comments and questions below. **Some theoretical discussions, particularly regarding disentanglement and associative memory mechanisms, could be more clearly explained for a broader audience. -- While ablations exist, a more controlled comparison of ALDA with and without associative memory would further clarify its unique benefits.** Because association is implicit in the dynamics of the latent model that QLAE uses to perform disentanglement, there is no way to remove just the associative part of ALDA and perform an experiment. The closest comparison we can do is the comparison between QLAE and BioAE presented in Figure 3. Like BioAE, QLAE uses activation energy minimization as an auxiliary objective in order to disentangle the latent representation. The only other differences between the two are that BioAE does not have an associative latent model and that BioAE also enforces nonnegative activations, so it essentially functions as ALDA without associative memory. As per reviewer N37F's suggestion, we plan to update Figure 1 to include a diagram of the inference procedure of the associative latent model so that the role of association in our method is more clearly conveyed. **How does ALDA perform if the associative memory component is removed or replaced with a simpler alternative?** ALDA without associative memory essentially functions as DARLA (see Figure 5 for comparison) or BioAE (see Figure 3 for comparison), which are purely disentanglement methods, both of which perform worse at visual generalization compared to ALDA. For more details, please see our response above. **Have you tested ALDA on more complex or real-world RL tasks beyond DeepMind Control Suite? How well does it scale?** Please see our response to Reviewer Tie6 under *"The environments the authors use are toy control environments..."*. 
--- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I will maintain my rate.
Summary: The authors present ALDA - an approach for training disentangled representations along with off-policy learning for OOD generalization. They build upon the existing QLAE-based latent model, a SOTA disentanglement method, which uses latent space dimensions, each having their own codebook. They prove that data augmentation is weak disentanglement and derive a novel loss function for their approach for simultaneous representation and policy training. For temporal sequences, they feed batches of data to the model, which are then fed to a 1-D convolution for the actor-critic networks, and batch reconstruction is done directly on the latent space. They compare DARLA, SAC+AE, RePo, SVEA with ALDA on the DeepMind Control Generalization Benchmark for "color hard" and "distracting cs" cases. ALDA beats all baselines except SVEA which uses data augmentation. The authors argue that the test set visuals are also covered with external data augmentation in SVEA, thereby not showing true generalization performance. The authors also present interesting visuals to showcase disentanglement of their model using the produced latents. ### Update After Rebuttal Thanks to the authors for the rebuttal. I enjoyed reading the rebuttal, the additional experiments and the arguments made in the rebuttal. The paper has its merits. I think the idea that the authors present is interesting. Claims And Evidence: - **Data augmentation is weak disentanglement**: I think this is a fair idea and the authors discuss a theoretical proof in detail. - **Data augmentation requires larger models, training data, longer training times, and has greater training instability**: While the authors discuss this in the introduction, I think it would be nice to discuss this for SVEA vs ALDA since that would be an interesting statistic to see. - **If a data-driven model can generalize better with less data, then it will scale better with more data**: This is a fair point. 
The authors do provide some reasoning behind this based on their results on ALDA. However, if this is actually true, why did the authors not try SVEA + ALDA (i.e. data augmentation and disentanglement together)? Is it hard to implement such an approach? If not, this would directly prove this proposition. In a world where data is cheap to generate/obtain, there is no reason not to use data augmentation. Methods And Evaluation Criteria: **Methods** - The authors train the ALDA approach on four tasks from the DeepMind Control Suite. They evaluate on "color hard" and "distracting cs" environments. This is a fair way to evaluate the method. However, the authors should look into potentially harder tasks (e.g. navigation/rearrangement/manipulation). The environments the authors use are toy control environments, which are far from real-world environments. There might be other benchmarks that the authors could explore. - The baselines are of varied kinds including data augmentation, disentanglement, etc. This provides a nice overview/comparison of their approach with other approaches. **Evaluation Criteria**: They use episode reward for comparison, which is standard in control tasks. The authors mention that it is not possible to directly evaluate disentanglement, and therefore show qualitative examples in Fig 6, which makes sense. Theoretical Claims: - The authors prove a theorem showing that data augmentation is weak disentanglement of the latent space. The discussion is sound, and shows that in order to achieve a latent representation only relevant to the task, we would have to gather data from all task-irrelevant sources, which would be unrealistic in the real world. - There is also discussion in Section 4 on how the loss function for ALDA is created based on theory on attention-based Hopfield networks and QLAE dynamics. I have a high level intuition of this idea, but I am not entirely certain on the correctness of the entire discussion. 
- There is a set of proofs in the appendix too, which I did not read in detail. Experimental Designs Or Analyses: - They show that QLAE achieves better performance over the course of training compared to BioAE (another disentanglement method), showing better OOD generalization. - They analyze the results from various training and representation strategies in Figure 5, discussing the performance of ALDA against other approaches. - The analysis of different latents in Figure 6 is also interesting and showcases disentanglement. Supplementary Material: I skimmed over it, but did not read it in detail. The authors discuss more proofs, ablations like hyperparameters, frame-stacking instead of batching, and other implementation details. Relation To Broader Scientific Literature: I think the approach is overall interesting. However, the authors could present results on more environments and tasks for a comprehensive study of the approach. Otherwise, if this approach is only applicable/useful in toy control problems, within a limited number of scenarios, then it might not have a lot of impact in the field. Essential References Not Discussed: N/A Other Strengths And Weaknesses: A well-written paper overall. I like the motivation, the buildup of proofs, and the choice of baselines. The authors could have tried more benchmarks and sets of problems, however. Other Comments Or Suggestions: - Line 96 - incorrect quotes around "weak" - Line 145 - incorrect quotes around "random convolution" - Line 142, Col2 - incorrect quotes around "irrelevant - Line 157 - don't -> do not - Line 295 - we've -> we have - Line 290, Col 2 : incorrect quotes around "color hard" - Inconsistent formatting of color hard and distracting cs throughout the paper. - Line 410 - Incorrect quotes - Line 418 - Incorrect quotes - Line 428 - That's, isn't - informal usage. Questions For Authors: - How easy or hard is it to transfer the approach to on-policy algorithms? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments and feedback on the paper. We will fix the grammar errors for the camera-ready version of the manuscript. Regarding your questions and concerns, we respond to each one individually below. **I think it would be nice to discuss this for SVEA vs ALDA since that would be an interesting statistic to see.** The type of data augmentation that’s applied can affect the performance, stability, and sample complexity of the underlying RL algorithm. As noted in the SVEA paper, “More recently, extensive studies on data augmentation have been conducted with RL, and conclude that, while small random crops and translations can improve sample efficiency, most data augmentations decrease sample efficiency and cause divergence”. To explore this, we trained SVEA directly on the color-easy evaluation environment in DMControl. The color-easy environment randomizes the agent, sky, and background colors on reset, but not to the extreme RGB values that the color-hard environment does. This form of augmentation, often used with lighting randomizations for Sim2Real RL deployment [1], also helps assess generalization to OOD visual shifts, since in this experiment the DistractingCS and to an extent color-hard environments will be OOD with respect to the training data. We compared ALDA, standard SVEA, and SVEA (color-easy) on the "Walker Walk" task here: https://drive.google.com/file/d/1hwFX6glI8-IW6i4vGqDSqo-MuCzDo5F2/view?usp=sharing. SVEA (color-easy) underperformed compared to both vanilla SVEA and ALDA, particularly in the training and color-hard environments. We suspect that the diversity of the 10 million real-world images from the Places Dataset is crucial for SVEA’s generalization and training stability, ensuring that evaluation environments remain in-distribution, or at least well within the support of the training distribution. [1] "Maniskill3: Gpu parallelized robotics simulation and rendering for generalizable embodied ai." 
**Why did the authors not try SVEA + ALDA?...** ALDA aims to factorize the image distribution into latent variables, and random overlays could disrupt this by obfuscating the underlying structure. What we wish to convey is that, rather than solely relying on data augmentations or brute-forcing the generalization problem by collecting massive datasets, models that learn the underlying structure from fewer examples can allocate remaining compute/data budgets to other tasks. For instance, if a robot agent can learn SO(2) invariance from a single object rotated in various ways and can tease out the notion of rotational invariance, then it should be able to generalize rotational invariance to other objects, removing the need for exhaustive data collection on all possible orientations of all possible objects. Unlike computer vision or language data, robot data is more difficult to collect and not as widely available. As of now, the field is allocating a significant amount of data collection effort to viewpoint, color, lighting, background, [...] randomizations, but if we can alleviate this on the model/architecture side, then those efforts can be spent elsewhere. In instances where data is cheap or large datasets are readily available, we completely agree with the reviewer that the data should be leveraged. However, solely relying on data may not be sufficient to achieve true generalization, and may instead be obfuscating deeper issues within current robot learning methods. In the SO(2) invariance example, random rotations in CV or viewpoint variations in robotics may not truly be capturing the SO(n) group given current architectures [1, 2], indicating that data alone is insufficient for solving generalization. [1] "Progress and limitations of deep networks to recognize objects in unusual poses." [2] "On the Ability of Deep Networks to Learn Symmetries from Data: A Neural Kernel Theory." 
**The environments the authors use are toy control environments...** We agree that exploring this method in more complex environments would be beneficial and are working on extending our approach to benchmarks like Sim2Real transfer of manipulation policies trained via behavior cloning as part of a separate investigation. However, the focus of this work is to explore whether combining association with latent disentanglement enables zero-shot generalization—an idea that has not been studied before. To that end, we chose RL as the driving optimizer and the DMControl benchmark so that we can study visual generalization in isolation without worrying about the complexities of Sim2Real, harder tasks, real hardware, etc. We also maintain that visual distribution shifts remain a challenging problem to solve, regardless of the difficulty of the underlying task. Finally, recent works addressing visual generalization in RL, many of which we included as baselines, primarily evaluate on DMControl. Extending our method and all baselines to other benchmarks would be beyond the scope of this study. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. I enjoyed reading the rebuttal, the additional experiments and the arguments made in the rebuttal. I will retain my score.
What Limits Bidirectional Model's Generative Capabilities? A Uni-Bi-Directional Mixture-of-Expert Method For Bidirectional Fine-tuning
Accept (poster)
Summary: This paper explores the impact of bidirectional fine-tuning on the performance of unidirectional language models, particularly focusing on the decline in generative ability caused by bidirectional fine-tuning. The authors attribute this decline to subsequent dependence and perform an in-depth analysis to support this explanation. A key finding is that fine-tuning the Feed-Forward Network (FFN) layer has the least impact on generative performance. To address the trade-off between generative and embedding tasks, the paper introduces UBMoE-LLM, a novel model that leverages the Mixture-of-Experts (MoE) method. UBMoE-LLM integrates both the original FFN layer and the bidirectionally fine-tuned FFN layer to preserve generative capabilities while improving performance on embedding tasks. Experimental results demonstrate that UBMoE-LLM achieves good performance, showcasing its potential to balance efficiency and effectiveness in practical applications. This work has significant implications for advancing the adaptability of pre-trained models across diverse downstream tasks. Claims And Evidence: Yes, the claims made in the submission are largely supported by evidence provided in the paper, though the strength of the evidence may vary depending on the specific claim. Methods And Evaluation Criteria: The proposed UBMoE-LLM are well-suited to the bidirectional finetune degeneration problem. However, the generality of the findings could be enhanced by broadening the evaluation to include more baselines, datasets, and task types. Theoretical Claims: Yes, the paper provides a theoretical explanation for subsequent dependence, which is likely grounded in how bidirectional fine-tuning alters the model’s internal representations or dependencies that are crucial for unidirectional generation. Experimental Designs Or Analyses: The paper evaluates UBMoE-LLM on both generative and embedding tasks, comparing it to baseline models and demonstrating improvements. 
The overall experimental setup is reasonable, but more types of foundation models can be further evaluated. Supplementary Material: Yes, I reviewed the additional experiments and the provided code. Relation To Broader Scientific Literature: The key contributions of the paper—identifying subsequent dependence as a cause of generative performance degradation, proposing UBMoE-LLM to balance task performance, and demonstrating the importance of FFN layer fine-tuning—are well-grounded in the broader scientific literature. The paper builds on prior work in bidirectional vs. unidirectional models, MoE methods, and layerwise fine-tuning while advancing the field by addressing a specific trade-off between generative and embedding tasks. These contributions represent a meaningful extension of existing research, with the potential to influence future work in fine-tuning paradigms and multi-task NLP models. Essential References Not Discussed: Based on the current analysis, the references cited in the paper appear to cover the core prior work in the field and provide sufficient background to support the key contributions of the study. Other Strengths And Weaknesses: 1. This paper demonstrates an interesting exploration in attention. The authors propose the concepts of preceding and subsequent dependence to explain the relationships between tokens in the attention layer. The experiments on attention show the dependence of attention when bidirectionally fine-tuning different layers. 2. The authors have validated the method at four different scales, and the wide experimental scope provides insights into the relationship between performance and parameter count, offering guidance for scaling up to larger models. 3. The authors combine the bidirectionally fine-tuned FFN layer through the MoE method to achieve the model's embedding ability while retaining its generative ability. The proposed method is innovative and well-grounded. This series of models holds potential application value. 
4. Although there are some minor issues in wording, this paper provides a well-justified and impactful solution to the bidirectional fine-tuning of LLMs that deserves to be published. Other Comments Or Suggestions: There are also some minor improvements. 1. Test on More Models: The authors have only tested the Qwen 1.5 series of models. The experiments on more models are yet to be conducted. 2. Test on Recent Models: It is beneficial to conduct experiments on recent LLMs. 3. Experimental Details: Several important experimental details are missing or not properly explained. a) The experimental setup for Figure 2 is not provided in the main text. The authors need to clarify the specific LLMs used in the experiments, as this information is not available in Appendix C. b) The value of λ (lambda) used in the loss function is not specified. The authors need to explicitly provide this information. c) In Line 258, the authors mention using a "small amount of data" to train the gating layer, but they do not define what this dataset consists of. Further clarification is needed. 4. Typos: Some typos remain to be corrected. a) The word "FNN" should be corrected to "FFN". b) The word "Casual" should be corrected to "Causal". Questions For Authors: It seems that FFN+ATT, as well as FFN+EMB perform better on the model size of 0.5B in Figure 2. I would like to understand why this is the case. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal:

> Weakness 1&2: Test on More Models: The authors have only tested the Qwen 1.5 series of models. The experiments on more models are yet to be conducted. Test on Recent Models: It is beneficial to conduct experiments on recent LLMs.

Thank you for your advice. We evaluated the effectiveness of our proposed method on the recently released Qwen2.5 series, and UBMoE-LLM still achieved performance comparable to that of causal language models.

| Model | MMLU | Winogrande | Truthfulqa | Avg |
| --------------------- | ----- | ---------- | ---------- | ----- |
| Qwen2.5-0.5B-Instruct | 46.90 | 55.60 | 41.86 | 48.12 |
| UBMoE-LLM | 43.85 | 56.94 | 43.02 | 47.94 |
| Qwen2.5-1.5B-Instruct | 60.21 | 64.48 | 46.58 | 57.09 |
| UBMoE-LLM | 59.41 | 63.95 | 49.26 | 57.54 |
| Qwen2.5-7B-Instruct | 73.86 | 73.60 | 64.72 | 70.72 |
| UBMoE-LLM | 70.86 | 75.42 | 62.65 | 69.64 |
| Qwen2.5-14B-Instruct | 79.54 | 78.74 | 67.51 | 75.26 |
| UBMoE-LLM | 78.44 | 77.83 | 68.01 | 74.76 |

> Weakness 3: Experimental Details: Several important experimental details are missing or not properly explained.

We provide the experimental setup for Figure 2 in Appendix B. For the model, we use Qwen1.5-0.5B. We set the value of λ (lambda) to 0.001. We provide detailed data sources in Section 5.1. The data used to train the gating layer is sampled from the Tulu-v2-SFT mixture, with only 32,000 samples used for training. These details will be added to Section 5.1 in the revised manuscript.

> Question 1: It seems that FFN+ATT, as well as FFN+EMB perform better on the model size of 0.5B in Figure 2. I would like to understand why this is the case.

For LLMs with fewer parameters, fine-tuning both the FFN and attention layers can yield modest performance gains due to the limited model capacity, as shown in our results in Table 2. However, as the model size increases, this performance gap narrows significantly.

> Typos

Thank you for your thorough review. 
We will address this and carefully check our paper in the next revision. > Summary Thank you for your careful reading and suggestions. We will improve our work according to your suggestions. We hope these additional analyses address your concerns and improve the manuscript's rigor. --- Rebuttal Comment 1.1: Comment: I appreciate the answers given and change my score to 5. --- Reply to Comment 1.1.1: Comment: Thank you for your reply and encouragement.
Summary: There exists a common belief that causal language models (i.e., unidirectional models) perform better in generation tasks while bidirectional models perform better in embedding tasks. However, bidirectional finetuning of unidirectional models usually leads to significantly inferior generation performance, which makes it difficult to obtain a model that excels in both tasks. This paper analyzes the performance degradation from a new perspective, i.e., they analyze the attention scores and observe that bidirectional finetuning enhances subsequent dependence. Furthermore, they find that training only the FFN layers results in a lower increase in subsequent dependence. Based on that, they propose UBMoE-LLM, a new bidirectional finetuning paradigm that combines the original FFN layer of the unidirectional model with the bidirectional FFN layer trained by unsupervised contrastive learning through MoE and exhibits impressive performance in generation and embedding tasks. Claims And Evidence: The authors claim that the inferior performance of bidirectional finetuning is due to the increased subsequent dependence. The motivation is easy to follow. However, although the results in Table 1 are interesting, I am afraid they are not enough to verify the relationship between subsequent dependence and generation performance. I think more verification experiments will make this conclusion more convincing. Methods And Evaluation Criteria: The solutions cooperate well with the empirical findings (tuning FFN layers results in a lower increase in subsequent dependence). However, I believe additional ablation studies would be helpful to analyze the role of the gate control layer. Theoretical Claims: I do not find theoretical analysis in this paper. Experimental Designs Or Analyses: The results in Tables 2, 3, 4 demonstrate the performance of bidirectional training in both generation and embedding tasks. 
However, to show the effectiveness of UBMoE-LLM, I believe additional comparisons with other bidirectional training methods and bidirectional models are necessary. The current results are not enough to support the claims proposed by the authors. Supplementary Material: The additional empirical results in the supplementary material correlate well with the main paper. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Most of the essential related works are discussed in this paper. Other Strengths And Weaknesses: 1. Obtaining a model that performs well in both generation and embedding tasks is a significant topic and this paper provides novel insights. 2. The paper is well organized and easy to follow. Other Comments Or Suggestions: N/A Questions For Authors: 1. This paper focuses on bidirectional finetuning to enhance the performance of causal language models. I wonder whether it is possible to achieve performance on embedding tasks comparable to that of bidirectionally pretrained models. 2. I note that UBMoE-LLM improves the generation performance on TruthfulQA. Is it possible to provide an additional explanation for the improvements? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> Weakness 1: The authors claim that the inferior performance ... this conclusion more convincing.

As we replied to Reviewer bS72, Table 1 shows that there is a consistent trend between subsequent dependence and the general capability of the model. This indicates a correlation between subsequent dependence and the model’s general performance, suggesting a potential causal relationship. In addition, we analyzed the relationship between generation performance and attention-based preceding or subsequent dependence. The experimental results are shown in the table below. As the level of subsequent dependence increased, the generation performance of the model declined. In contrast, our method reduced the subsequent dependence introduced by incorporating bidirectional experts while improving generation performance. This indicates a correlation between the distribution of attention weights and the model's generation performance.

| Model | DPB | DPA | MMLU |
| :-------------------: | :---: | :---: | :---: |
| Qwen2.5-0.5B-Instruct | 55.68 | 39.75 | 46.9 |
| UBMoE-LLM | 54.68 | 38.91 | 43.85 |
| FNN-Bi | 54.40 | 41.40 | 35.54 |
| Qwen2.5-1.5B-Instruct | 58.27 | 36.57 | 60.21 |
| UBMoE-LLM | 58.08 | 36.80 | 59.41 |
| FNN-Bi | 57.00 | 38.13 | 54.35 |
| Qwen2.5-7B-Instruct | 60.94 | 35.07 | 73.86 |
| UBMoE-LLM | 60.37 | 35.61 | 70.86 |
| FNN-Bi | 46.17 | 49.05 | 21.28 |

> Weakness 2: The results in Tables 2, 3, 4 demonstrate ... current results are not enough to support the claims proposed by the authors.

Thank you for your advice. We added MNTP-LLM as a baseline, which is trained using the Wikipedia dataset through the MNTP method. As shown in the table below, the experimental results demonstrate that UBMoE-LLM still exhibits strong performance. 
| Model | MMLU | Winogrande | Truthfulqa | Avg |
| :-------------------: | :---: | :--------: | :--------: | :---: |
| Qwen2.5-0.5B-Instruct | 46.90 | 55.60 | 41.86 | 48.12 |
| UBMoE-LLM | 43.85 | 56.94 | 43.02 | 47.94 |
| MNTP-LLM | 41.33 | 54.40 | 41.65 | 45.79 |

| Model | MMLU | Winogrande | Truthfulqa | Avg |
| :-----------------: | :---: | :--------: | :--------: | :---: |
| Qwen2.5-7B-Instruct | 73.86 | 73.6 | 64.72 | 70.72 |
| UBMoE-LLM | 70.86 | 75.42 | 62.65 | 69.64 |
| MNTP-LLM | 70.06 | 74.21 | 62.54 | 68.93 |

> Weakness 3: However, I believe additional ablation studies would be helpful to analyze the role of the gate control layer.

Thank you for your helpful advice. As shown in the table below, we conducted experiments on the STS task. The gate control layer effectively enhances embedding performance.

| Model | Method | Avg |
| --------------------- | ------------ | ----- |
| Qwen2.5-0.5B-Instruct | FNN | 81.63 |
| Qwen2.5-0.5B-Instruct | FNN w/o gate | 80.14 |
| Qwen2.5-1.5B-Instruct | FNN | 80.15 |
| Qwen2.5-1.5B-Instruct | FNN w/o gate | 78.49 |
| Qwen2.5-7B-Instruct | FNN | 81.79 |
| Qwen2.5-7B-Instruct | FNN w/o gate | 80.39 |

> Question 1: This paper focuses on bidirectional finetuning ... embeddings tasks compared to bidirectional pretraining tasks.

We appreciate the reviewer’s suggestion. However, it is well known that there is currently a lack of bidirectional LLMs of a comparable scale to serve as baselines. We are currently training a bidirectional model for testing.

> Question 2: I note that UBMoE-LLM improves the generation performance on TruthfulQA. Is it possible to provide an additional explanation for the improvements?

We're glad you're paying attention to that. We believe this is mainly due to the introduction of bidirectional experts, which enhances the model’s resistance to hallucination.

> Summary

Thank you for your valuable advice. We have addressed these suggestions through additional experiments and analyses. 
We hope these revisions strengthen the validity of our findings.
Summary: The paper investigates the impact of bidirectional fine-tuning on unidirectional language models. The authors argue that bidirectional attention mechanisms, while enhancing embedding tasks, degrade generative performance. To address this, they integrate unidirectional FNN layers with bidirectional ones trained via unsupervised contrastive learning. Extensive experiments demonstrate the model's ability to improve embedding performance without compromising generative capabilities. Claims And Evidence: The primary claim is that bidirectional fine-tuning increases "subsequent dependence," negatively impacting generative performance. Evidence includes ablation studies comparing various fine-tuning strategies and datasets, showing UBMoE-LLM effectively balances embedding and generation tasks. The study also claims the FNN layer is least affected by bidirectional training, supported by experiments revealing minimal impact on generative performance when only the FNN layer is fine-tuned. Methods And Evaluation Criteria: The methodology includes training with bidirectional contrastive learning using LoRA for larger models and testing on diverse datasets like Natural Instructions, DOQA, and Dureader. The evaluation metrics include F1 scores, accuracy, and rouge-l, providing a comprehensive performance assessment. Theoretical Claims: The paper introduces a novel attention dependence measure to quantify preceding and subsequent dependencies. The derivation is consistent with established attention weight calculations, though a deeper examination of convergence guarantees would strengthen the claims. Experimental Designs Or Analyses: The experimental results involve models of varying sizes (0.5B to 7B) and controlling factors like learning rate and batch size. However, further experiments with more diverse datasets would enhance the generalizability of the findings. Supplementary Material: I have checked additional results provided in the supplementary material. 
Relation To Broader Scientific Literature: The contribution builds on previous works exploring bidirectional modeling in causal language models. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: - Comprehensive evaluation across diverse datasets. - The introduction of the attention dependence measure is novel and insightful. Weaknesses: - Limited discussion of potential trade-offs in training complexity. - Additional analysis on the interpretability of attention weights could enrich the findings. Other Comments Or Suggestions: - Clarify the distinction between preceding and subsequent dependencies. - Include a broader comparison with other hybrid models. Questions For Authors: - How does UBMoE-LLM perform on tasks beyond text generation and embedding, such as reasoning tasks? - Have you explored the impact of scaling UBMoE-LLM beyond 7B parameters? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> Weakness 1: Limited discussion of potential trade-offs in training complexity.

Thanks for the helpful comments. Our training consists of two main parts: bidirectional expert training and UBMoE-LLM model training. For bidirectional expert training, compared to previous bidirectional approaches, we only fine-tune the FFN layers, which significantly reduces training costs. As for UBMoE-LLM training, we train for only 1,000 steps, using just 32 samples per step. The training of UBMoE-LLM (7B) can be completed in just 1 hour on a single H100 GPU.

> Weakness 2: Additional analysis on the interpretability of attention weights could enrich the findings.

Thanks for the helpful comments. We analyzed the relationship between generation performance and attention-based preceding or subsequent dependence. The experimental results are shown in the table below. As the level of subsequent dependence increased, the generation performance of the model declined. In contrast, our method reduced the subsequent dependence introduced by incorporating bidirectional experts while improving generation performance. This indicates a correlation between the distribution of attention weights and the model's generation performance.

| Model | DPB | DPA | MMLU |
| :-------------------: | :---: | :---: | :---: |
| Qwen2.5-0.5B-Instruct | 55.68 | 39.75 | 46.9 |
| UBMoE-LLM | 54.68 | 38.91 | 43.85 |
| FNN-Bi | 54.40 | 41.40 | 35.54 |
| Qwen2.5-1.5B-Instruct | 58.27 | 36.57 | 60.21 |
| UBMoE-LLM | 58.08 | 36.80 | 59.41 |
| FNN-Bi | 57.00 | 38.13 | 54.35 |
| Qwen2.5-7B-Instruct | 60.94 | 35.07 | 73.86 |
| UBMoE-LLM | 60.37 | 35.61 | 70.86 |
| FNN-Bi | 46.17 | 49.05 | 21.28 |

> Weakness 3: Clarify the distinction between preceding and subsequent dependencies.

We provide a detailed explanation of the computation methods for preceding and subsequent dependencies in Section 3.2. 
As shown in formula 1, preceding dependence is the average attention score over the tokens before token i in the attention layer, and subsequent dependence is the average attention score over the tokens after token i.

> Weakness 4: Include a broader comparison with other hybrid models.

Thanks for your advice. We added MNTP-LLM as a baseline, which is trained using the Wikipedia dataset through the MNTP method.

| Model | MMLU | Winogrande | Truthfulqa | Avg |
| :-------------------: | :---: | :--------: | :--------: | :---: |
| Qwen2.5-0.5B-Instruct | 46.90 | 55.60 | 41.86 | 48.12 |
| UBMoE-LLM | 43.85 | 56.94 | 43.02 | 47.94 |
| MNTP-LLM | 41.33 | 54.40 | 41.65 | 45.79 |

| Model | MMLU | Winogrande | Truthfulqa | Avg |
| :-----------------: | :---: | :--------: | :--------: | :---: |
| Qwen2.5-7B-Instruct | 73.86 | 73.6 | 64.72 | 70.72 |
| UBMoE-LLM | 70.86 | 75.42 | 62.65 | 69.64 |
| MNTP-LLM | 70.06 | 74.21 | 62.54 | 68.93 |

> Question 1: How does UBMoE-LLM perform on tasks beyond text generation and embedding, such as reasoning tasks?

As shown in the table below, we evaluated the performance of UBMoE-LLM on a physics problem benchmark and a mathematics problem benchmark. The experimental results indicate that UBMoE-LLM still demonstrates performance comparable to that of a causal language model.

| Model | Math | PIQA | Avg |
| --------------------- | ----- | ----- | ----- |
| Qwen2.5-0.5B-Instruct | 32.5 | 70.62 | 51.56 |
| UBMoE-LLM | 32.67 | 69.85 | 51.26 |
| Qwen2.5-1.5B-Instruct | 41.07 | 76.22 | 58.64 |
| UBMoE-LLM | 42.68 | 76.36 | 59.52 |
| Qwen2.5-7B-Instruct | 54.51 | 79.49 | 67 |
| UBMoE-LLM | 53.91 | 79.04 | 66.48 |

> Question 2: Have you explored the impact of scaling UBMoE-LLM beyond 7B parameters?

As shown below, we present the experimental results on Qwen2.5-14B-instruct, where UBMoE-LLM maintains performance consistent with that of a causal language model in terms of generation quality. 
| Model | MMLU | Winogrande | Truthfulqa | Avg |
| -------------------- | ----- | ---------- | ---------- | ----- |
| Qwen2.5-14B-Instruct | 79.54 | 78.74 | 67.51 | 75.26 |
| UBMoE-LLM | 78.44 | 77.83 | 68.01 | 74.76 |

> Summary

Thank you for your advice. Your suggestions have greatly helped us improve our work. Our results show that the method efficiently combines generative and embedding capabilities, which supports our hypothesis of attention dependence. We hope these additional analyses address your concerns and improve the manuscript's rigor. We sincerely appreciate your constructive feedback and have carefully addressed all the points raised. We hope the revised version demonstrates significant improvements and better aligns with your expectations.
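The preceding/subsequent dependence measure discussed in the rebuttals above can be sketched in a few lines. The following is an illustrative assumption of how such a measure might be computed (here as the mean attention mass each query token places on earlier vs. later positions, averaged over tokens), not the paper's actual formula 1:

```python
def dependence_scores(attn):
    """attn: T x T list-of-lists of attention weights; row i holds the
    weights query token i assigns over all T positions (rows sum to 1)."""
    T = len(attn)
    # Attention mass on positions strictly before / strictly after each token.
    preceding = [sum(attn[i][:i]) for i in range(1, T)]
    subsequent = [sum(attn[i][i + 1:]) for i in range(T - 1)]
    return sum(preceding) / len(preceding), sum(subsequent) / len(subsequent)

# Toy "bidirectional" attention: every token attends uniformly to all tokens.
T = 4
uniform = [[1.0 / T] * T for _ in range(T)]
dpb, dpa = dependence_scores(uniform)  # both 0.5 in this symmetric case

# Causal attention: token i attends uniformly over positions 0..i only,
# so subsequent dependence is exactly zero.
causal = [[1.0 / (i + 1)] * (i + 1) + [0.0] * (T - i - 1) for i in range(T)]
cpb, cpa = dependence_scores(causal)
```

Under this toy definition, bidirectional fine-tuning shifting attention mass toward later positions would show up directly as a larger subsequent score, matching the trend the rebuttals report for the DPA column.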
Summary: Due to the unidirectional attention mechanism, current LLMs underperform in embedding tasks. Some studies have modified the unidirectional attention to bidirectional attention in LLMs and fine-tuned them using contrastive learning, resulting in models better suited for embedding tasks. However, this modification compromises the model's generative capabilities, rendering it ineffective for generation tasks. The authors propose a two-stage training approach to address this issue. In the **first stage**, a feed-forward network (FFN_b) suitable for embedding tasks is trained. Specifically, the causal attention is replaced with bidirectional attention, while freezing all model parameters except the FFN, and training on embedding tasks using contrastive learning. In the **second stage**, the trained FFN from the first stage is combined with the original FFN, and a gate router is added to form a Mixture of Experts (MoE) model. Then, all parameters except the router are frozen, and auto-regressive training is conducted on generation tasks. This results in a new model that excels in both embedding and generation tasks. Claims And Evidence: 1. The article provides a detailed experimental ablation on the effectiveness of fine-tuning layers of LLM (FFN, embedding, attention) for embedding tasks. 2. The article lacks a discussion of the rationale, such as efficiency or performance, for using a single MoE (Mixture of Experts) model for both embedding and generation tasks (given the inherent router between the two tasks, I consider a single model unnecessary). Methods And Evaluation Criteria: The new model developed using the author's approach has increased computational demands for generation tasks. Moreover, for tasks with clear distinctions such as embedding tasks and generation tasks, it is not particularly necessary to employ a single model to achieve both (unless it is training-free). 
This is because these two tasks inherently have a natural router (the task category is known in advance), and a more efficient and cost-effective solution can be directly implemented through an if-else branch (eliminating the need for a second training phase, as there is no requirement to train a router). Theoretical Claims: In Section 3.1, under "Preliminaries," the concepts of "Subsequent Dependence" and "decline of productive ability" are only correlated, but causality cannot be inferred. Therefore, the logic of introducing the MoE architecture based on this point is insufficient. Experimental Designs Or Analyses: 1. What is the experimental setup for downstream task testing in the article? For example, is the model used for the evaluation of embedding tasks taken after the second stage of training or before the second stage of training? Does the embedding task evaluation employ bidirectional attention or causal attention? 2. In the results of Table 3, which layer of the model was used to test **subsequent dependence**, and was unidirectional attention or bidirectional attention used during the evaluation? 3. The experiments seem to lack any comparison with previous state-of-the-art (SOTA) methods. Supplementary Material: None Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: #### Strengths: - The author's proposal of "attention dependence to explain this phenomenon" in Section 3.1 is quite novel, offering a unique perspective for observing the characteristics of attention. #### Weaknesses: - In my opinion, the author's motivation is not sufficiently justified. For details, please refer to **Methods and Evaluation Criteria**. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 2
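The architecture the review above summarizes (a frozen unidirectional FFN and a bidirectionally fine-tuned FFN behind a trainable gate, with per-token top-1 routing) can be illustrated with a minimal sketch. All shapes, initializations, and names here are assumptions for illustration, not the authors' implementation:

```python
import math
import random

random.seed(0)

def make_linear(out_dim, in_dim):
    # Random weight matrix and zero bias for a toy linear map.
    W = [[random.gauss(0.0, 0.1) for _ in range(in_dim)] for _ in range(out_dim)]
    return W, [0.0] * out_dim

def linear(x, W, b):
    return [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

class TwoExpertFFN:
    """Per-token top-1 routing between two frozen experts: a stand-in for the
    original (unidirectional) FFN and one for the bidirectionally fine-tuned
    FFN. In the scheme described above, only the gate would be trained."""

    def __init__(self, d):
        self.uni_expert = make_linear(d, d)  # stand-in for the original FFN
        self.bi_expert = make_linear(d, d)   # stand-in for the tuned FFN
        self.gate = make_linear(2, d)        # router: two logits per token

    def forward(self, x):
        probs = softmax(linear(x, *self.gate))
        expert = self.uni_expert if probs[0] >= probs[1] else self.bi_expert
        return linear(x, *expert), probs

d = 8
layer = TwoExpertFFN(d)
token = [random.gauss(0.0, 1.0) for _ in range(d)]
out, probs = layer.forward(token)
```

Since exactly one expert runs per token, per-token compute matches a single FFN plus a tiny gate, which is consistent with the authors' claim in the rebuttal that the routing overhead is minimal.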
Rebuttal 1: Rebuttal:

> W1: The new model developed ... computational demands for generation tasks.

UBMoE activates only one expert for each token, adding computational overhead during inference solely for token routing. Compared to the computational cost of the causal language model itself, this additional overhead is minimal. As shown in the table below, our approach does not result in significant additional computational overhead.

| Model | GFLOPs |
| --------------------- | -------- |
| Qwen2.5-0.5B-Instruct | 505.82 |
| UBMoE-LLM (0.5B) | 505.86 |
| Qwen2.5-1.5B-Instruct | 1,580.62 |
| UBMoE-LLM (1.5B) | 1,580.70 |
| Qwen2.5-7B-Instruct | 7,239.97 |
| UBMoE-LLM (7B) | 7,240.18 |

> W2: For tasks with clear distinctions such ... requirement to train a router).

A model with both generation and embedding capabilities can reduce deployment costs by eliminating the need to deploy multiple models. Meanwhile, the method proposed in this paper can also enhance the model’s resistance to hallucination to some extent by introducing bidirectional experts, which improve the model’s robustness against linguistic priors. Prior work has demonstrated the advantages of combining unidirectional and bidirectional models [1][2]. Our results on the TruthfulQA benchmark in Table 4 also show that the bidirectional module effectively reduces LLM hallucination.

[1] LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders

[2] BatGPT: A Bidirectional Autoregessive Talker from Generative Pre-trained Transformer

> W3: In Section 3.1, under "Preliminaries," ... based on this point is insufficient.

As shown in Table 1, there is a consistent trend between subsequent dependence and the general capability of the model. This indicates a correlation between subsequent dependence and the model’s general performance, suggesting a potential causal relationship. In addition, we analyzed the relationship between generation performance and attention-based preceding or subsequent dependence. 
As shown in the table below, as the level of subsequent dependence increases, the model's generation performance declines. In contrast, our method reduces the subsequent dependence introduced by the bidirectional experts while improving generation performance. This indicates a correlation between the distribution of attention weights and the model's generation performance.

| Model | DPB | DPA | MMLU |
| :-------------------: | :---: | :---: | :---: |
| Qwen2.5-0.5B-Instruct | 55.68 | 39.75 | 46.9 |
| UBMoE-LLM | 54.68 | 38.91 | 43.85 |
| FNN-Bi | 54.40 | 41.40 | 35.54 |
| Qwen2.5-1.5B-Instruct | 58.27 | 36.57 | 60.21 |
| UBMoE-LLM | 58.08 | 36.80 | 59.41 |
| FNN-Bi | 57.00 | 38.13 | 54.35 |
| Qwen2.5-7B-Instruct | 60.94 | 35.07 | 73.86 |
| UBMoE-LLM | 60.37 | 35.61 | 70.86 |
| FNN-Bi | 46.17 | 49.05 | 21.28 |

> Q1: What is the experimental setup for ... employ bidirectional attention or causal attention?

We provide detailed experimental settings in Section 5.1. For the evaluation of embedding capability in Table 2, we maintain the same settings as in the training phase and enable bidirectional attention.

> Q2: In the results of Table 3, which layer of ... used during the evaluation?

Following the computation method described in Section 3.2, we use the average across all layers. During the calculation, we adopt causal attention to remain consistent with the generation task.

> W4: The experiments seem to lack any comparison with previous state-of-the-art (SOTA) methods.

Thank you for your advice. We added MNTP-LLM as a baseline, which is trained on the Wikipedia dataset using the MNTP method. As shown in the table below, the experimental results demonstrate that UBMoE-LLM still exhibits strong performance.
| Model | MMLU | Winogrande | TruthfulQA | Avg |
| :-------------------: | :---: | :--------: | :--------: | :---: |
| Qwen2.5-0.5B-Instruct | 46.90 | 55.60 | 41.86 | 48.12 |
| UBMoE-LLM | 43.85 | 56.94 | 43.02 | 47.94 |
| MNTP-LLM | 41.33 | 54.40 | 41.65 | 45.79 |

| Model | MMLU | Winogrande | TruthfulQA | Avg |
| :-----------------: | :---: | :--------: | :--------: | :---: |
| Qwen2.5-7B-Instruct | 73.86 | 73.60 | 64.72 | 70.72 |
| UBMoE-LLM | 70.86 | 75.42 | 62.65 | 69.64 |
| MNTP-LLM | 70.06 | 74.21 | 62.54 | 68.93 |

> Summary

Thank you for your helpful advice. We will improve our work based on your suggestions. It is worth noting that the concept of attention dependence we propose is strongly correlated with the model's generative ability in our experimental verification, and is significant for the study of unidirectional and bidirectional model architectures. We hope our response provides a clearer perspective on our work.
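The GFLOPs table's point, that top-1 routing adds only a small router projection per token, can be sanity-checked with a back-of-the-envelope FLOP count. This is a rough sketch: `d_model`, `d_ff`, and `n_experts` are hypothetical placeholders, not Qwen2.5's actual configuration.

```python
# Back-of-the-envelope FLOPs: a top-1 MoE router vs. the expert FFN it gates.
# All dimensions are hypothetical placeholders for illustration only.

def linear_flops(d_in: int, d_out: int) -> int:
    """Multiply-accumulate FLOPs for one token through one dense layer."""
    return 2 * d_in * d_out

d_model, d_ff, n_experts = 1024, 4096, 2

# Router: a single small projection from d_model to n_experts logits.
router_flops = linear_flops(d_model, n_experts)

# One expert FFN: up- and down-projections (activation cost ignored).
expert_flops = linear_flops(d_model, d_ff) + linear_flops(d_ff, d_model)

overhead = router_flops / expert_flops  # well below 0.1% of one expert's FFN cost
```

Because only one expert runs per token, per-token compute stays essentially that of the dense model plus this negligible routing term, consistent with the near-identical GFLOPs reported above.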
Simple and Critical Iterative Denoising: A Recasting of Discrete Diffusion in Graph Generation
Accept (poster)
Summary: This paper introduces Iterative Denoising, a novel framework to improve discrete diffusion and flow matching models for graph generation. Traditional discrete diffusion models suffer from error accumulation and propagation due to time dependencies in the noising process, particularly in mask diffusion. The proposed framework circumvents this issue by assuming conditional independence across time, effectively simplifying the diffusion process. Claims And Evidence: The main selling point of the proposed method is unclear. While the authors suggest that their method is broadly applicable to discrete data, they only apply it to graph generation and compare it with graph generative models. If the focus is on graph generation, more background on other graph generative models should be included. Conversely, if the authors wish to emphasize the model's broad applicability, empirical analyses in other domains, such as natural languages, are necessary. The authors claim that the proposed method significantly outperforms existing discrete diffusion baselines. Given that the current application is limited to graph generation, I expect the proposed method to at least match the performance of other graph generative models. However, as shown in Tables 1 and 2, the baseline DruM appears to outperform the proposed method. If this is the case, please clarify the advantages of the proposed model over DruM. Methods And Evaluation Criteria: The manuscript is motivated by the issue of error accumulation in the diffusion process, which the authors attribute to mask diffusion or discrete diffusion models with absorbing kernels. Instead of the proposed technique, other diffusion kernels, as described in [1], could be employed to mitigate this issue. What advantages does the proposed technique offer over these alternative diffusion kernels? [1] Structured Denoising Diffusion Models in Discrete State-Spaces, NeurIPS 2021 Theoretical Claims: The proofs seem solid. 
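As a concrete companion to the kernel question above, the uniform and absorbing one-step transition matrices from Austin et al. (2021) can be written down directly. This is a minimal numpy sketch: the noise rate `beta` is a placeholder schedule value, and the state count is kept tiny for readability.

```python
import numpy as np

# One-step D3PM-style transition matrices over K ordinary states.
# Row i gives q(z_t = j | z_{t-1} = i); beta is a placeholder noise rate.

def uniform_kernel(K: int, beta: float) -> np.ndarray:
    """With probability beta, resample uniformly over all K states."""
    return (1 - beta) * np.eye(K) + beta * np.ones((K, K)) / K

def absorbing_kernel(K: int, beta: float) -> np.ndarray:
    """With probability beta, jump to a dedicated [MASK] state (index K)."""
    Q = np.zeros((K + 1, K + 1))
    Q[:K, :K] = (1 - beta) * np.eye(K)
    Q[:K, K] = beta   # ordinary states leak mass toward the mask state
    Q[K, K] = 1.0     # the mask state is absorbing
    return Q

Qu, Qa = uniform_kernel(4, 0.1), absorbing_kernel(4, 0.1)
assert np.allclose(Qu.sum(axis=1), 1.0) and np.allclose(Qa.sum(axis=1), 1.0)
```

Whichever kernel is chosen, the reverse chain still commits to its samples step by step; the kernel changes where the corrupted mass goes, not the commitment structure that the compounding-error argument targets.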
Experimental Designs Or Analyses: The authors should consider applying their method to other domains. In the context of graph generation, several important baselines are missing, including a discrete diffusion model EDGE[1] and some other graph generative models like GraphRNN[2], EDP-GNN[3], SPECTRE[4]. [1] Efficient and Degree-Guided Graph Generation via Discrete Diffusion Modeling, ICML 2023 [2] GraphRNN: Generating Realistic Graphs with Deep Auto-regressive Models, ICML 2018 [3] Permutation Invariant Graph Generation via Score-Based Generative Modeling, AISTATS 2020 [4] SPECTRE: Spectral Conditioning Helps to Overcome the Expressivity Limits of One-shot Graph Generators, ICML 2022 Supplementary Material: The appendix provides useful details that support the main manuscript. Relation To Broader Scientific Literature: The application scenario for the proposed method is unclear. The authors should elaborate on this aspect. Essential References Not Discussed: The manuscript lacks a comprehensive background on graph generative models. A thorough review of related works in graph generation is necessary, including those mentioned in the Experimental Designs or Analyses section. Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We would like to thank you for the review.

## Claim and Evidence

**Paragraph 1**

The justification for our work is as follows:
- We identify a limitation of discrete diffusion models, namely the issue of compounding denoising errors.
- We propose a theoretically grounded method that directly addresses this issue.
- We empirically demonstrate that our approach significantly improves over discrete diffusion in a specific domain: graph generation.
- By introducing a novel perspective on discrete denoising models, our work contributes to a deeper understanding of this model family, and thereby to advancing research in discrete generation.

We agree that further investigation is needed to assess the applicability of our method to other domains. However, it is common practice to first introduce a method within a research area before exploring broader applications. For example, Discrete Flow Matching, initially introduced for protein co-design and only later extended to language modeling, code, image, and graph generation, is now widely adopted across tasks involving discrete data.

**Paragraph 2**

Our claim that "the proposed method significantly outperforms existing discrete diffusion baselines" is supported by rigorous empirical comparisons, against strong baselines and against ablated models using identical denoisers to ensure a fair and controlled evaluation. While our approach closes much of the gap between discrete and continuous denoising methods, we acknowledge that it does not yet surpass the strongest continuous model, DruM. Nonetheless, discrete and continuous denoising remain complementary methods with competitive performance, and we believe there is value in continuing to develop both.
Moreover, we argue that papers at scientific conferences should aim to advance science rather than sell direct real-world applications, particularly in a subfield where, as noted by reviewer whDB, benchmark performance does not translate directly to concrete use-case scenarios.

---

## Methods And Evaluation Criteria

We respectfully disagree with the interpretation attributed to us by the reviewer. Contrary to the reviewer's statement, we do **not** attribute compounding denoising errors (CDE) specifically to the absorbing-state (mask) kernel. Rather, we argue that this issue affects **all** commonly used noise kernels for categorical data, including the uniform and marginal kernels. As clearly stated in the manuscript (lines 110–113, right column): "Consequently, the CDE issue also affects other common noise distributions, such as the uniform and marginal distributions." This statement comes at the end of a paragraph justifying it.

Four kernels were proposed in [1], two of which (discretized Gaussian and token embedding distance) are not applicable here, as they are tailored to ordinal or embedded data. In addition to the mask kernel, our experiments focus on the marginal kernel, which we prefer over the uniform kernel based on evidence from Digress [2] that uniform noising is a poor choice for graph data.

Therefore, we disagree with the claim that alternative kernels could mitigate the issue. In fact, we show, both theoretically and empirically, that all standard kernels (including mask, uniform, and marginal) are susceptible to compounding denoising errors, and that our method robustly addresses this in both tested cases (mask and marginal).

To the question of what advantages our approach offers over other diffusion kernels, we refer the reviewer to:
- Sections 2 and 3, outlining the theoretical benefits of our approach.
- Tables 3 and 4, demonstrating consistent empirical improvements across both the marginal and mask kernels.
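For concreteness, the one-step corruption induced by the mask and marginal kernels under discussion can be sketched in a few lines, using the flow-matching convention where $t=1$ is clean data. The marginal vector and time value below are placeholders, not values from the paper.

```python
import numpy as np

# Sketch: corrupting one categorical variable under the mask vs. marginal
# kernels, with t=1 clean data and t=0 pure noise. Placeholder numbers only.

K = 4                                        # ordinary categories
marginal = np.array([0.5, 0.3, 0.1, 0.1])    # placeholder data marginals

def corrupt(z1: int, t: float, noise: np.ndarray) -> np.ndarray:
    """q(z_t | z_1): keep the clean value w.p. t, else draw from `noise`."""
    one_hot = np.zeros(len(noise))
    one_hot[z1] = 1.0
    return t * one_hot + (1 - t) * noise

mask_noise = np.zeros(K + 1)                 # mask kernel needs an extra state
mask_noise[K] = 1.0
p_mask = corrupt(2, 0.75, mask_noise)
p_marg = corrupt(2, 0.75, marginal)
assert np.isclose(p_mask.sum(), 1.0) and np.isclose(p_marg.sum(), 1.0)
```

The uniform kernel is the special case `noise = np.full(K, 1 / K)`, which makes it clear why all three kernels share the same interpolation structure.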
---

## Experimental Designs Or Analyses (Baselines)

We acknowledge the importance of the referenced models and include them in our literature review. However, we view baselines primarily as reference points for evaluation, rather than as an exhaustive survey. We deliberately selected three representative baselines:
- DruM: the current state-of-the-art.
- GDSS: the best standard continuous diffusion model.
- Digress: the best (equivariant) discrete diffusion model.

All are equivariant, ensuring a fair comparison with our own model. We will clarify this choice more explicitly in the revised version.

It is also important to distinguish baselines from ablations. Unlike ablation studies, baseline comparisons do not control for factors such as architecture, training time, or hyperparameters. As a result, differences in performance should be interpreted cautiously. By contrast, our ablation studies, which compare discrete diffusion with our Iterative Denoising method under controlled conditions, provide more meaningful evidence of the proposed method's effectiveness.

We fully agree with the reviewer that exploring applications beyond graph generation is important. We consider this an exciting direction for future work.

[1] Austin et al., NeurIPS 2021
[2] Vignac et al., ICLR 2023

---

Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. I believe the proposed method shows strong potential; however, the current analysis in the manuscript does not provide sufficient support for its effectiveness. Since the authors primarily aim to validate their approach within the graph domain, I would like to focus on the practical aspects of the method as it relates to graph generation. My main concerns are outlined below:

- The empirical results of the proposed method lag behind those of DruM. What makes this method a preferable choice over DruM in practice?
- One of the central contributions of this work is the CID framework.
However, the results indicate that Marginal ID significantly outperforms MASK CID. Is it possible to apply critical denoising to Marginal ID? If not, what are the practical advantages of MASK CID compared to Marginal ID?
- The CID approach introduces additional computational overhead during both training and sampling. Could the authors provide a comparison of computational efficiency between the proposed method and the baseline methods?

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for their additional questions. We reiterate that evaluating graph generative models primarily on their direct practical applications is somewhat artificial. Unconditional graph generative models typically lack direct use cases; we see their primary role as foundational components for domain-specific downstream tasks, typically conditional generation. That said, assuming that these models should demonstrate direct practical utility, our method offers advantages in various potential use cases.

---

## 1. Practical Preference for Our Method Over DruM

Molecular design is arguably the most significant real-world application. In this task, producing valid molecules with realistic chemical properties is essential. Our CID model shows a significant improvement over DruM in terms of validity, producing up to 30 times fewer invalid molecules on the larger and more realistic Zinc dataset. This is a clear advantage in scenarios where reliably valid molecules are needed. Even without the Critic, our Marginal ID variant generates 3 times fewer invalid molecules than DruM, while also achieving better performance on the Fréchet ChemNet Distance. On Zinc, the dataset most representative of practical molecular design, our Marginal ID variant establishes a new SOTA, outperforming DruM. This underscores the practical value of our approach.

---

## 2. Motivation for CID

### A.
Combining the Marginal Kernel and the Critic

As explained in Section 4.1, the introduction of the Critic causes all kernels to collapse to the mask kernel. While it is conceivable to design hybrid strategies that alternate between the marginal kernel and the Critic, such explorations fall outside the scope of this paper, which introduces the method. This observation highlights a broader point: our approach is new and thus carries high potential for further improvement. In contrast, DruM builds on diffusion bridges, a mature framework, which potentially presents less room for future development.

### B. Advantages of CID

CID demonstrates clear advantages in specific settings; for instance, it achieves impressive validity in molecular generation. For practitioners prioritizing this metric, CID should be a method of choice. Additionally, our ablation study shows that CID performs better than other ID variants in low-NFE regimes. Under this constraint, CID offers a significant practical advantage.

---

## 3. Computational Cost

### A. Additional Overhead from CID

CID introduces a computational overhead, as it requires training a Critic and evaluating it at each denoising step. As a first-order approximation, this doubles both training and inference time. However, the performance gain over Mask ID and Mask DDM is substantial, and mask denoising models are suitable for specific downstream applications such as molecular scaffold extension (see [1] and Appendix E of [2]).

### B. Comparison with Baselines

For both our model and the baseline methods, the dominant computational cost arises from the architecture, especially from the size of the edge feature vectors on which MLPs are applied: their number scales quadratically with the number of nodes in the graph and linearly with the number of layers.
Below is a comparison based on DruM's configuration files, summarizing the number of layers (L) and edge feature dimensions ($d_E$) per dataset:

| Dataset | Ours (L × $d_E$) | DruM (L × $d_E$) |
| --- | --- | --- |
| QM9 | 4×16 | 8×256 |
| Zinc | 4×64 | 9×128 |
| Planar | 4×64 | 8×64 |
| SBM | 4×64 | 8×64 |

We highlight that we evaluate our model using 500 denoising steps, whereas the baselines use 1000 steps. In summary, our model achieves superior performance on key practical tasks, such as large-molecule generation (as measured by validity and Fréchet ChemNet Distance), using fewer denoising steps, fewer layers, and a smaller per-layer cost. We appreciate the reviewer's remark and will update the manuscript to make these advantages explicit.

---

## Evidence Supporting the Effectiveness of Our Method

Our work addresses a core limitation of discrete denoising models. We show that our approach consistently outperforms existing discrete diffusion methods across all evaluation metrics, including those ablating NFEs. In this regard, we believe the paper provides sufficient evidence of our method's effectiveness.

---

## Conclusion

We maintain that our paper has relevance beyond the immediate scope of graph generation. It identifies a fundamental flaw in discrete diffusion models and proposes a concrete solution. Even if our method were not to generalize as effectively to other domains, the insights it provides into the behavior of discrete denoising processes are valuable in their own right. As such, we believe this work is an important contribution to the rapidly growing field of unsupervised discrete modeling.

[1] Maziarz et al., Learning to extend molecular scaffolds…, ICLR, 2022
[2] Vignac et al., Op. Cit.
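As a crude check on the configuration table above, the L × $d_E$ products give a first-order relative-cost proxy for the edge pathway. This ignores node features, attention, and constant factors; the values are copied from the table.

```python
# Relative per-edge cost proxy (layers x edge-feature dimension), taken from
# the configuration table above. A crude first-order comparison only.

configs = {
    "QM9":    {"ours": (4, 16), "drum": (8, 256)},
    "Zinc":   {"ours": (4, 64), "drum": (9, 128)},
    "Planar": {"ours": (4, 64), "drum": (8, 64)},
    "SBM":    {"ours": (4, 64), "drum": (8, 64)},
}

ratios = {
    name: (c["drum"][0] * c["drum"][1]) / (c["ours"][0] * c["ours"][1])
    for name, c in configs.items()
}
assert ratios["QM9"] == 32.0  # DruM's edge pathway is ~32x larger on QM9
```

Even with the usual caveats about such proxies, the gap (2x to 32x per layer, on top of half the denoising steps) supports the claim that the proposed model is the cheaper configuration.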
Summary: This manuscript observes that choices taken at the early steps of the generation process may, in hindsight, turn out to be "errors", and empowering the model with a means to fix these errors could prevent their accumulation and improve performance. Iterative Denoising (ID) is introduced to this end, and further improvements are sought with Critical Iterative Denoising (CID).

Given a trained Discrete Diffusion Model (DDM), one may get an ID without any training: it is purely a matter of perspective and inference-time algorithm. Both DDM and ID use the current state $Z_t$ to predict a candidate denoised data point $z_1$. Whereas a DDM would obtain the next state $z_s$ (with $1<s<t$) by "interpolating" between $z_1$ and $Z_t$, ID applies the same forward (noising) process to $z_1$ as it would during training. If "errors" are made early on, there is some possibility for such a forward (noising) step to undo them.

CID is more involved. The first step is to augment the system with a Bernoulli random variable $a_t$ that indicates whether each element has been corrupted (0) or not (1). The backward distribution has no dependencies on entries of $z_t$ for which $a_t=0$. The idea is to train a Critic model to predict $a_t$, then only update the entries that are predicted to be masked. An expression for what an optimal Critic should return is derived, and a GNN is trained on top of the predicted data to play that role.

Evaluation is performed on molecular graphs and generic graphs. Despite both sharing the same denoiser, CID is not always superior to ID, but does provide improvements in some contexts, notably in the low Number of Function Evaluation (NFE) limit.

Claims And Evidence: The manuscript does not explicitly give a summary of claims, so the following are my own understanding.

### Claim 1: ID is novel

ID is so simple that my first thought was "someone must have done it already".
However, all I've managed to find is how Cold Diffusion [(Bansal et al., 2023)](https://proceedings.neurips.cc/paper_files/paper/2023/hash/80fe51a7d8d0c73ff7439c2a2554ed53-Abstract-Conference.html) presents "Naive Sampling" in their Algorithm 1. I'm not aware of anyone presenting ID as a "good thing" in the past, and it may be that ID only makes sense in specific settings such as mask-based discrete diffusion contexts. As long as this situation gets clarified in an eventual camera ready, I'm ok with this aspect of the novelty question.

Then comes the novelty of "fixing" the noise accumulation problem with a "remasking" approach. There are many concurrent works attempting this, including Kim et al. (2025), Nie et al. (2025) and Peng et al. (2025). All these approaches are closer in spirit to CID, as they involve a planner. Also, ID has the advantage of being very simple and theoretically well-grounded/justified (unlike some of the concurrent work). Again, I believe that this is novel, but warrants some clarifications/rephrasings/citations in the camera ready version.

### Claim 2: ID addresses the error-accumulation issue

I believe that this claim is supported both theoretically and practically.

### Claim 3: CID is novel

CID itself is almost certainly novel. The "critic" in CID can be related to a "planner" as defined in Liu et al. (2025), and see also the aforementioned concurrent work in Claim 1. Again, I believe that adding some clarifications (and citations) as to the extent of the novelty to the camera ready should suffice.

### Claim 4: CID confers improvements over ID

Overall, Mask CID beats Mask ID. However, for some task/metric combinations, Marginal ID (and other baselines) can capture something that Mask CID cannot. As defined, CID may only be used with mask noise, and this restriction may thus make it worse than ID. This is not a deal breaker, but I believe that it should be stated explicitly in the camera ready version.
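The DDM-vs-ID distinction summarized in this review can be sketched as two toy next-state rules for mask diffusion. This is schematic, not the paper's exact algorithm: `MASK`, the step functions, and the keep/reveal probabilities (standing in for the noise schedule) are illustrative names.

```python
import random

MASK = "[MASK]"

def renoise(z1: list[str], keep_prob: float) -> list[str]:
    """Forward (noising) process applied to the *predicted clean* sequence:
    each token independently survives with probability keep_prob."""
    return [tok if random.random() < keep_prob else MASK for tok in z1]

def id_step(z1: list[str], keep_prob_s: float) -> list[str]:
    """Iterative Denoising next state: renoise z1 from scratch, so a token
    committed at an earlier step can be re-masked and corrected later."""
    return renoise(z1, keep_prob_s)

def ddm_step(zt: list[str], z1: list[str], reveal_prob: float) -> list[str]:
    """Standard mask-DDM posterior step ("interpolating" z1 and Z_t):
    already-unmasked tokens are frozen; only masked slots may be filled."""
    out = []
    for t_tok, pred in zip(zt, z1):
        if t_tok != MASK:
            out.append(t_tok)  # early choices can never be revisited
        else:
            out.append(pred if random.random() < reveal_prob else MASK)
    return out

# With reveal_prob = 1: the DDM keeps "The" even if the denoiser now prefers "A".
assert ddm_step(["The", MASK], ["A", "cat"], 1.0) == ["The", "cat"]
# ID instead resamples every position from the fresh prediction.
assert id_step(["A", "cat"], 1.0) == ["A", "cat"]
```

The asserts make the claimed difference explicit: the DDM rule can never undo a committed token, while the ID rule rebuilds the state from the current clean prediction at every step.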
Methods And Evaluation Criteria: The proposed methods and evaluations are standard and make sense. Some OBGN would have been nice. Theoretical Claims: I understand and sanity-checked the equations of the main text. I quickly browsed the appendix, but did not delve deeply in the proofs. Experimental Designs Or Analyses: Metrics are reported without error ranges. This is sadly common in the field. Supplementary Material: I browsed it in general, and delved on some details on a per-need basis (mainly A and E). Relation To Broader Scientific Literature: It is a paper of its time. Mask-based discrete diffusion, no conditioning on time, training a planner for remasking... As mentioned in Claims 1 and 3, there are many concurrent works aiming at the same general goal. Essential References Not Discussed: Published work: - Bansal et al., Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise. NeurIPS 2023. https://proceedings.neurips.cc/paper_files/paper/2023/hash/80fe51a7d8d0c73ff7439c2a2554ed53-Abstract-Conference.html - Liu et al. Think while You Generate: Discrete Diffusion with Planned Denoising. ICLR 2025. https://openreview.net/pdf?id=MJNywBdSDy Concurrent work: - Kim et al. Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions. https://arxiv.org/pdf/2502.06768 - Peng et al., Path Planning for Masked Diffusion Model Sampling. https://arxiv.org/abs/2502.03540 - Nie et al. Large Language Diffusion Models. https://arxiv.org/pdf/2502.09992 Other Strengths And Weaknesses: I like the simplicity of ID, and the mathematical grounding of both ID and CID. I have reservations as to how performances on this type of benchmark would translate to concrete use-case scenario, but this is an issue pervading the whole generative graph modeling subfield: I don't think that I can fault the manuscript for this. Other Comments Or Suggestions: Please consider adding a clear summary of the claims in the introduction: readers (and reviewers!) 
like those.

Line 103, left column: I'm not clear on the $Z=i$ notation here.

Line 214, left column:
> our noising procedure is not a diffusion process

This may raise the hairs of some readers. I propose adding some caveat such as "(at least not in the standard sense)". In a world with "X is secretly Y" papers, and things as weird as Cold Diffusion being considered diffusion...

Line 253, right column: "noising noising"

At the start of Section 4.3, I suggest the addition of a complementary reminder that $p_{\alpha}(a)$ is the Bernoulli distribution defined above Equation (11). It took me a while...

Like many recent models, ID and CID do not require an explicit time/noise dependency, which is a highly desirable feature. I initially thought that this was the "time dependencies" being discussed in the manuscript, but I then realized that it may only speak of the accumulation of errors. I do think that ID and CID have this desirable characteristic, and that it should clearly be highlighted in the manuscript. (It is not novel in itself though.)

Questions For Authors: My current score of 4 assumes satisfying answers to the following questions.

### Question 1

Do the authors agree with my general assessment in Claims 1, 3 and 4 above? If yes, do they consent to such edits in the camera ready? If not, please clarify the situation, and propose changes (if any) in the camera ready.

### Question 2

Is the difference between the V.U.N. of Mask ID and Mask CID for `Planar` significant? Mask CID is one point below, and it *shouldn't* be. While a "better" model could potentially be "worse" on the "unique" and "novel" metrics, the manuscript states that all samples are unique and novel, so here V.U.N. actually amounts to the validity rate. Please discuss, and state what changes (if any) you would make in the camera ready.

## April 14 update

Now that my two fellow reviewers have replied, I revert my score to its "true" value of 4.

Code Of Conduct: Affirmed. Overall Recommendation: 4
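The Critic's role, as described in this review (predict the Bernoulli $a_t$ and update only the entries deemed corrupted), can be sketched with a thresholded toy rule. This is a simplification: the actual method samples $a_t$ rather than thresholding it, and the tokens and scores below are illustrative placeholders.

```python
MASK = "[MASK]"

def cid_renoise(z1: list[str], critic_keep: list[float], thresh: float = 0.5) -> list[str]:
    """Critic-guided renoising: re-mask exactly the entries the Critic deems
    likely corrupted (predicted keep-probability below `thresh`); entries
    the Critic trusts are left untouched."""
    return [tok if p >= thresh else MASK for tok, p in zip(z1, critic_keep)]

z1 = ["The", "cat", "war", "black"]
critic_scores = [0.95, 0.90, 0.10, 0.80]  # placeholder scores: "war" looks corrupted
assert cid_renoise(z1, critic_scores) == ["The", "cat", MASK, "black"]
```

Compared to plain ID, which renoises positions at random, this targets the renoising at the entries most likely to be denoising errors, which is the intended advantage of CID.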
Rebuttal 1: Rebuttal: We would like to thank you for your review, your careful reading, and your questions. We believe they will contribute to significantly improving our submission.

## Answer to the questions

### Question 1

We broadly agree with your summary in **Claim and Evidence**, and specifically with your assessment in Claims 1, 3, and 4, and thank you for the thoughtful summary, which will help improve the clarity of our manuscript. As suggested, we will include a clear summary of the claims in the introduction.

**Claim 1** We will explicitly discuss the novelty of our contributions and include a reference to Bansal et al. (2023). We are grateful to the reviewer for pointing out this relevant work.

**Claim 3** (and second part of Claim 1) Regarding the relationship between our Critic and the "Planner" introduced in Liu et al. (2025), we will add a concise discussion clarifying the distinction. We also acknowledge the emergence of concurrent works after our submission and will cite them accordingly.

**Claim 4** As correctly noted, Marginal ID captures aspects that Mask ID does not. While Mask CID tends to outperform Mask ID, it does not consistently outperform Marginal ID. Mask CID appears particularly effective in the low-NFE regime. We agree that the choice between Marginal ID and Mask CID depends on the application and objective, and we will elaborate on this point in the revised discussion.

### Question 2

The error ranges are provided in the appendix. Considering these, the observed 1% difference in V.U.N. between Mask ID and Mask CID on Planar is not statistically significant, as both error margins exceed 5%. However, on another metric, spectral MMD, the difference is more substantial, clearly favoring CID. This suggests that the Critic captures structure not reflected in validity scores but visible in graph spectra. We will clarify in the revised version that the V.U.N. difference is not significant, referring explicitly to the error bounds in the appendix.
We will also note that the improvement offered by Mask CID in this case appears primarily in the spectral metric rather than in validity.

---

## Other Comments Or Suggestions

We thank the reviewer for the detailed proofreading. We will implement the suggested corrections in the camera-ready version. Below are specific revisions:
- Line 103: The notation was indeed confusing. We will revise the sentence as: "We sometimes represent the univariate categorical distribution over the variable \(p(z)\) as a vector \(\vz\), where the $i$-th component $z_i$ denotes the probability that $z$ belongs to the category indexed by $i$."
- Line 214: We will rephrase as: "Our noising procedure is not a diffusion process in the traditional sense, as the state at time $t$ depends only on the initial state and not on the full trajectory or past states."

---

## Essential References Not Discussed

We thank the reviewer for these references. We note (for potential readers) that the 'concurrent works' were not publicly available at the time of submission. We will, however, include and discuss them in the camera-ready version.

---

## On the Relationship Between Our Critic and the Planner in [2]

We thank the reviewer, as well as reviewer *VwCn*, who highlighted this connection. The revised version will include the following clarification.

Our Critic and the Planner proposed in Liu et al. (2025) differ in some fundamental aspects. The Planner operates within the DDM setting, while our Critic is specifically designed for our Iterative Denoising (ID) framework. The Planner determines, at each step $t$, which elements should be denoised (e.g., unmasked), aiming to optimize the denoising order. Importantly, once an element is unmasked, it cannot be remasked in this framework. In contrast, our Critic operates on the fully denoised instance and identifies which elements should be renoised. This allows previously unmasked elements to be remasked.
Moreover, as highlighted by reviewer whDB, our Critic comes with strong theoretical grounding. In short, the Planner seeks to prevent compounding denoising errors via better ordering; our Critic actively corrects such errors post hoc by leveraging renoising and multiple denoising iterations.

---

[1] Bansal et al., Cold Diffusion: Inverting Arbitrary Image Transforms Without Noise. NeurIPS 2023. https://proceedings.neurips.cc/paper_files/paper/2023/hash/80fe51a7d8d0c73ff7439c2a2554ed53-Abstract-Conference.html
[2] Liu et al. Think while You Generate: Discrete Diffusion with Planned Denoising. ICLR 2025. https://openreview.net/pdf?id=MJNywBdSDy
[3] Kim et al. Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions. https://arxiv.org/pdf/2502.06768
[4] Peng et al., Path Planning for Masked Diffusion Model Sampling. https://arxiv.org/abs/2502.03540
[5] Nie et al. Large Language Diffusion Models. https://arxiv.org/pdf/2502.09992

---

Rebuttal Comment 1.1: Comment: Thank you for the extra details and comments. I initially gave only a "Rebuttal Acknowledgement" as I had no remaining issue with the manuscript and I was satisfied with my score of 4, but now, seeing my fellow reviewers' reactions (or lack thereof), I will use this "Rebuttal Comment" to summarize the current situation as I understand it.

## VwCn

> 1. Time Dependency in the Noising Process

The authors acknowledged the ambiguity in the original submission, clarified the meaning of "time dependency", and answered all sub-questions of VwCn. I'll add my own personal touch: designing the training task so that the neural network need not receive extra inputs for the "noise level" is particularly "fashionable" right now, because it works. In the case of ID, the Authors have shown a *disarmingly simple* way to do so, one that trivially generalizes way outside the subfield of graph diffusion.
The fact that they could build CID on top shows great promise for other such "fancier" algorithms.

> 2. Equation (7) and the Forward Process

The authors say "the reverse process interprets as predicting a clean instance and renoising it". Let me explain the ID inference algorithm using text sequences as examples. Here $f$ represents the trained denoising model.

- Starting point (noise level 4): $Z_4$ = [MASK MASK MASK MASK]
- First denoising: $f(Z_4)$ = [Once cat war economy]
- Renoise to noise level 3: $Z_3$ = [MASK cat MASK MASK]
- Second denoising: $f(Z_3)$ = [The cat ate everything]
- Renoise to noise level 2: $Z_2$ = [The cat MASK MASK]
- Third denoising: $f(Z_2)$ = [The cat often black]
- Renoise to noise level 1: $Z_1$ = [The cat MASK black]
- Final denoising: $Z_0 = f(Z_1)$ = [The cat is black]

At each step, the model $f$ attempts to "fully denoise" the sequence. If you're thinking "wait, that's it?", then you understood. *Disarmingly simple.*

> 3. Novelty in Section 3

I fully agree with the Authors here.

> 4. Similarity to Prior Work

I raised a similar point, and I agree with the authors: there is a relation between the authors' "critic" and concurrent work's "planners", but ID and CID are still novel, and the authors have promised to properly discuss this in the camera ready.

> 5. Comparison with Recent Empirical Studies

I'm personally satisfied with the authors' response. Are you, VwCn?

## 4gxJ

This reviewer replied to the first rebuttal, so I'll only consider the second round.

> The empirical results of the proposed method lag behind those of DruM. What makes this method a preferable choice over DruM in practice?

Authors: Better validity on larger and more realistic datasets.

> One of the central contributions of this work is the CID framework. However, the results indicate that Marginal ID significantly outperforms MASK CID. Is it possible to apply critical denoising to Marginal ID?
If not, what are the practical advantages of MASK CID compared to Marginal ID? I find the authors' answer sufficient, and they bring an important new argument: their "approach is new and thus carries high potential for further improvement. In contrast, DruM builds on diffusion bridges, a mature framework, which potentially presents less room for future developments." > The CID approach introduces additional computational overhead during both training and sampling. Could the authors provide a comparison of computational efficiency between the proposed method and the baseline methods? The authors provide a new table showing that this is not the case. ## whDB All my points have been addressed. I believe that this work can be impactful, beyond the graph regime. I'm raising my score to 5 in protest. --- Reply to Comment 1.1.1: Comment: We would like to warmly thank the reviewer for their engagement in the review process and for the clear and effective summary of the current discussion. We fully agree with their assessment and appreciate their recognition of the core contributions and potential of our work.
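The "fully denoise, then renoise to the next level" inference loop walked through in the discussion above can be condensed into a few lines of Python. This is an illustrative sketch only: `id_sample`, `renoise`, and `toy_denoiser` are names of my own invention, and the toy denoiser is a deterministic stand-in for the trained model $f$.

```python
import random

MASK = "[MASK]"

def renoise(pred, n_masks, rng):
    # Re-mask `n_masks` positions of the *predicted clean* sequence.
    # The previous noisy state is never consulted, which is the point of
    # the "predict clean, then renoise" scheme discussed above.
    idx = set(rng.sample(range(len(pred)), n_masks))
    return [MASK if i in idx else tok for i, tok in enumerate(pred)]

def id_sample(denoise, seq_len, seed=0):
    # At every noise level the model attempts a *full* denoise of the sequence.
    rng = random.Random(seed)
    z = [MASK] * seq_len                     # highest noise level: all masked
    for level in range(seq_len - 1, 0, -1):  # levels seq_len-1, ..., 1
        z = renoise(denoise(z), level, rng)
    return denoise(z)                        # final full denoise

# Toy "oracle" denoiser that fills masked slots from a fixed target sentence.
TARGET = ["The", "cat", "is", "black"]
def toy_denoiser(z):
    return [TARGET[i] if tok == MASK else tok for i, tok in enumerate(z)]

print(id_sample(toy_denoiser, 4))  # ['The', 'cat', 'is', 'black']
```

With a real model, `denoise` would be a learned network and `renoise` would follow the paper's noise schedule; the structure of the loop is the point.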
Summary: The paper aims to address *Compounding Denoising Error* in discrete diffusion models by removing time dependency in the noising process. To enhance performance, it introduces a Critic that aligns generated samples with the data distribution. The authors provide relevant experimental results as supporting evidence. ## update after rebuttal Thank you for the authors' feedback. However, I have decided to retain my original score. I am not fully convinced by the proposed method's (1) novelty or (2) empirical performance. The authors claim that one of the distinct components of the proposed method is that "the reverse process is interpreted as predicting a clean instance and then re-noising it." However, this is only slightly different from the process described in Eq. (4) of D3PM [1] and Eq. (24) of [2]. These two prior works adopt a very similar design by first predicting clean data and then adding noise according to the time schedule, which is common in discrete diffusion models. Consequently, the methodological novelty seems limited to me. Additionally, as pointed out by reviewer 4gxJ, the empirical study lacks comparisons with recent models, and the performance gains are neither consistent nor substantial compared to baselines. The proposed method could potentially have significant impact; however, the current analysis in the manuscript does not provide sufficient evidence to support it. Therefore, I do not believe this paper qualifies for ICML acceptance. Ref:\ [1] Austin J, Johnson DD, Ho J, Tarlow D, Van Den Berg R. Structured denoising diffusion models in discrete state-spaces. Advances in neural information processing systems. 2021 Dec 6;34:17981-93.\ [2] Gat I, Remez T, Shaul N, Kreuk F, Chen RT, Synnaeve G, Adi Y, Lipman Y. Discrete flow matching. *Advances in Neural Information Processing Systems*, 2024 Dec 16;37:133345-85. Claims And Evidence: I have provided aggregated comments in this block. Please refer to them for the rebuttal discussion. 
Thanks. 1. **Time Dependency in the Noising Process:** - The paper states that the noising process does not assume any time dependency apart from its dependence on the original data point $z_1$. Could the authors clarify how this aligns with Equation (6), given that the equation still contains a time-dependent coefficient? - Additionally, how does this approach differ from the discrete flow matching (DFM) formulation [1]? 2. **Equation (7) and the Forward Process:** - Could the authors clarify whether Equation (7) performs the same operation as DFM [1] in the forward process? - It seems to provide an analytic solution conditioned on the un-noised data—would the authors be able to confirm or elaborate on this? 3. **Novelty in Section 3:** - The model description in Section 3 appears to closely align with the DFM framework. Could the authors highlight any key differences or novel aspects introduced in this work? 4. **Similarity to Prior Work:** - There seems to be a notable similarity to the work presented in [2], especially its planner module. Could the authors clarify how their approach differentiates from or builds upon this prior research? 5. **Comparison with Recent Empirical Studies:** - How does the proposed approach compare empirically to recent advancements in graph generation, such as SwinGNN [3], Equivariant Denoisers [4], and PARD [5]? - These works have introduced new insights into permutation invariance and autoregressive diffusion models for graphs. Could the authors provide a comparative analysis or discuss potential advantages and limitations relative to these methods? [1] Gat I, Remez T, Shaul N, Kreuk F, Chen RT, Synnaeve G, Adi Y, Lipman Y. Discrete flow matching. *Advances in Neural Information Processing Systems*, 2024 Dec 16;37:133345-85. [2] Liu S, Nam J, Campbell A, Stärk H, Xu Y, Jaakkola T, Gómez-Bombarelli R. Think While You Generate: Discrete Diffusion with Planned Denoising. arXiv preprint arXiv:2410.06264. 2024 Oct 8.
[3] Yan Q, Liang Z, Song Y, Liao R, Wang L. SwinGNN: Rethinking permutation invariance in diffusion models for graph generation. *arXiv preprint arXiv:2307.01646*, 2023 Jul 4. [4] Laabid N, Rissanen S, Heinonen M, Solin A, Garg V. Equivariant Denoisers Cannot Copy Graphs: Align Your Graph Diffusion Models. *The Thirteenth International Conference on Learning Representations*. [5] Zhao L, Ding X, Akoglu L. PARD: Permutation-invariant autoregressive diffusion for graph generation. *arXiv preprint arXiv:2402.03687*, 2024 Feb 6. Methods And Evaluation Criteria: Please see the comments above. Theoretical Claims: Please see the comments above. Experimental Designs Or Analyses: Please see the comments above. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Please see the comments above. Other Comments Or Suggestions: N/A Questions For Authors: Please see the comments above. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their careful reading of our submission and for the thoughtful questions and comments. We believe these insights will substantially help us improve the quality and clarity of our paper. ### 1. Time Dependency in the Noising Process At times we used *time dependency* ambiguously in the original submission. Specifically, we occasionally referred to *time dependency* when we meant *dependent noise distributions across time*. We agree that this may have been misleading and will revise the camera-ready version accordingly. Regarding the difference with DFM, we note that the standard instantiation of DFM with independent coupling is equivalent to continuous-time discrete diffusion, so that all explicit differences with DDM in Section 3 also hold for DFM. The core difference between our approach and the Discrete Flow Matching (DFM) model lies in **Assumption 3.1**. In particular, we assume *conditional independence* of the noisy variables across time steps, formalized in Eq. 7: $$ q_{t|1}(z|z_1) = q_{t|1}(z|z_s, z_1) \quad \forall t \neq s. $$ This assumption contrasts with the formulation in DFM with independent coupling [1], where the stochastic process is defined through the relation $$ Z_s \sim \delta_{Z_t}(Z) + \Delta_t u_t(Z_t) $$ (Eq. 12 in [1]), thus inducing a direct dependency between $Z_s$ and $Z_t$ and violating Assumption 3.1. To clarify further, the marginal noising distribution $q_{t|1}(z|z_1)$ (Eq. 6) is indeed equivalent to that of DFM when considering a single time step. However, the distributions $q_{s|t}(z|z_t)$ differ substantially, due to our assumption of conditional independence (see points 2 and 3). --- ### 2. Equation (7) and the Forward Process Equation 7 therefore plays a key role in distinguishing our model from both DDM and DFM.
While DDM employs a Markovian forward process such that $$ q_{t|1}(z|z_s) = q_{t|1}(z|z_s, z_1), $$ and DFM constructs a stochastic trajectory with explicit dependence between $Z_s$ and $Z_t$, our model, by contrast, does not define a probabilistic path or trajectory, but rather a series of *conditionally independent* distributions given $z_1$. This structural assumption leads to a simplified denoising process. Because $Z_s$ depends on $Z_t$ only through the intermediate variable $Z_1$, the reverse process can be interpreted as predicting a clean instance and renoising it. This is operationalized through Eqs. 8 and 9 in our paper. The absence of a forward path avoids the compounding denoising errors that affect models such as DDM and DFM, where $Z_s$ directly depends on $Z_t$. --- ### 3. Novelty of Section 3 As discussed above, **Assumption 3.1** (Eq. 7) leads to a distinct denoising process: we use Eq. 8, while DFM uses Eq. 5. Crucially, in DFM (as in DDM), the backward process involves direct dependencies between $Z_s$ and $Z_t$ (as expressed in Eq. 5), whereas in our model, this dependence is only indirect via $p_{1|t}(z|Z_t)$. This decoupling directly addresses the issue of error accumulation in denoising, as discussed in Section 2. In addition, the summary provided in the “Claim and Evidence” section of reviewer **whDB** accurately outlines the novel contributions of our paper. We provide further justifications in our response to reviewer **4gxJ** (Claim and Evidence - P. 1) --- ### 4. Similarity to Prior Work We thank the reviewer for pointing out this related work. We will ensure it is incorporated in the camera-ready version. Regarding novelty: since the referenced work [2] builds upon DDM, all differences discussed above remain applicable. For a detailed comparison between our *Critic* and the *Planner* in [2], we refer to our response to reviewer **whDB**, who raised a similar question. --- ### 5.
Comparison with Recent Empirical Studies As elaborated in Appendix B (*Generative Graph Modeling: Related Works*), we classify generative models into *sequential (invariant)* and *equivariant* approaches. While some hybrid models attempt to merge these approaches, our work is *orthogonal* to these distinctions. Our implementation is based on an equivariant model to eliminate confounding effects from node ordering and to ensure a fair comparison with other equivariant baselines. Moreover, sequential models are particularly susceptible to overfitting, a concern we sought to avoid. Consequently, we selected comparable equivariant models as baselines. On QM9, the results in [3] and [4] align with *DruM*, our primary equivariant baseline. On Zinc250k, although these models yield similar results to our *Marginal ID* on FCD and NSPDK, our model produces *an order of magnitude fewer invalid graphs*, highlighting its robustness. We will add the missing references in our literature review. However, we are not in favor of multiplying the baselines. For a broader discussion on baselines, we refer to our detailed response to reviewer **4gxJ** (*Experimental Designs Or Analyses (Baselines)*).
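As a side note for readers of this thread, the decoupled reverse step that the rebuttal describes ("predict a clean instance, then renoise it") can plausibly be written as the following marginalization; this is a reconstruction for illustration, not the paper's verbatim Eq. 8:

$$ p_{s|t}(z \mid z_t) = \sum_{z_1} q_{s|1}(z \mid z_1)\, p_{1|t}(z_1 \mid z_t), $$

i.e., sample $\hat{z}_1 \sim p_{1|t}(\cdot \mid z_t)$ and then renoise $z_s \sim q_{s|1}(\cdot \mid \hat{z}_1)$, so that $Z_s$ depends on $Z_t$ only through the predicted clean instance.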
Controllable Data Generation with Hierarchical Neural Representations
Accept (poster)
Summary: This paper proposes a hierarchical Implicit Neural Representations (INR) framework that aims to provide better control over hierarchical representations during the generation process. In the first stage of the framework, a Layer-of-Experts (LoE) model is trained, and a latent variable is learned for each layer. In the second stage, the hierarchical relationship between different layers is modeled by a conditional diffusion process. Experiments are conducted on several image datasets, and evaluations and comparisons are made with baseline methods. Claims And Evidence: I found two major issues with the claims and evidence. First, the paper claims that the proposed framework achieves hierarchical controllable generation. However, this claim remains questionable to me, given the qualitative results shown in Figure 4. While it is clear that the first layer has significant effects on the generation by controlling the fixed “type” of the data, such as the general appearance of a face, the object category in 3D shapes, and vehicle types, the effects of the rest of the layers seem very weak without any clear patterns. For example, which layer or layers control the color of the vehicle, as an important semantic feature for that dataset? The authors are highly encouraged to replace the visualization methods in Figure 4 with those in Figure 1 to clearly demonstrate the effects of each layer through multiple branches of each generated image, allowing readers to better understand which representations are controlled by previous fixed layers and which are affected by the remaining layers. Additionally, it would be very helpful if the authors could relate the observations from the qualitative results to the quantitative results in Table 1 and demonstrate whether they align with each other. Second, the authors claim that the proposed framework outperforms existing methods on most datasets. However, it seems that many improvements are marginal. 
I appreciate the authors’ efforts in reporting the variance of performance in Table 2, but it is necessary to conduct a statistical test and report the p-value to demonstrate whether these marginal improvements are statistically significant. Methods And Evaluation Criteria: Yes, I believe most of the methods make sense and are suitable, while the qualitative results in Figure 4 can be improved by adapting the methods in Figure 1, as mentioned above. Theoretical Claims: TW (1): In line 188, how is $h^l$ computed? Is it obtained by sampling from a component of the mixture or by computing the average? TW (2): Figure 2, as the main figure for the proposed framework, lacks sufficient captions to explain the details in the figure. For example, what does the transparency of noise indicate? What does the shade of color indicate? What is the relationship between each “latent” and the mixture of experts? TW (3): In line 242, what does $||ε, ε_{θ}()||^2$ mean? Did the authors mean to write $||ε - ε_{θ}()||^2$, as used in diffusion models? Experimental Designs Or Analyses: Yes, I checked all the qualitative and quantitative results. I believe the experiments are well-designed, but the analysis is not sufficient to support the key claims of this paper, as mentioned above in "Claims And Evidence". Supplementary Material: I checked the sections of Appendix B for additional experiment results. Relation To Broader Scientific Literature: The proposed hierarchical model is novel and promising for INR works, which is a solid research direction worth investing in. However, the significant advantages and improvements of the current version of the proposed framework remain somewhat questionable compared to the baselines, as evidenced by the results. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: # after rebuttal I thank the authors for the response. 
Most of my concerns have been addressed, and I have increased my score accordingly. Questions For Authors: Q(1): I look forward to the authors’ response to my concerns about the qualitative results, as described in the “Claims and Evidence” section, which will have the highest impact on my future decision. Q(2): It would be very helpful if the authors could clarify my confusion listed in the “Theoretical Claims” section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank reviewer ZbmH for the valuable feedback. ### **W1. Qualitative results.** Thank you for the valuable feedback. We agree that earlier layers exhibit more visible influence in Figure 4. This is expected in our hierarchical design—deeper layers naturally introduce finer, subtler variations since prior layers are fixed. Regarding vehicle color in the NeRF dataset: it is not controlled by a specific layer but emerges from all layers jointly. This is because NeRF INRs jointly regress RGB and density at the output, causing color to be decoded at the final stage and entangled with spatial attributes, rather than controlled by a specific layer. We appreciate your suggestion to adopt Figure 1’s visualization style. However, Figure 4 was designed to showcase different conditional chains per row, and we cannot revise it due to rebuttal constraints. Nevertheless, please note that some layer effects are visible and aligned with Table 1—for example, in the NeRF samples, a spoiler appears in the 3rd and 4th samples of the first row, with layer 3 fixed in the latter, supporting its role in controlling that attribute. ### **W2. Marginal improvements.** Thank you for the suggestion. Our primary goal is not to optimize reconstruction/generation quality alone, but to introduce a framework that enables hierarchical control—a key capability missing in prior generative INR methods such as mNIF and Functa. As shown in Table 2, CHINR achieves notable improvements in evaluation metrics (e.g., PSNR/FID/SSIM) over most baselines except mNIF. However, CHINR exhibits significantly less memorization than mNIF, indicating better generalization. Furthermore, we aim to demonstrate that the conditional chain enables controllable generation without sacrificing quality—rather than to show it outperforms all models on every metric. ### **TW1. 
How is $h^l$ computed.** $h^l$ is an instance-specific latent at layer $l$, which is mapped to a gating vector that weighted averages the experts at layer $l$. To compute $h^l$, we use meta-learning or auto-decoding (Section 3.2) by initializing it around zero and optimizing it for each data instance. ### **TW2. Figure 2 meaning.** Thanks for pointing this out. We will make the captions clearer to explain this figure. Specifically, the transparency of noise refers to the noise schedule in the diffusion process, where the noise gradually dominates in the forward process. We shade the color of latents to distinguish latents from different layers. The latent at each layer is mapped to a gating vector that weighted averages the mixture of experts at that layer. ### **TW3. $||\epsilon,\epsilon_\theta()||^2$ in Line 242.** Thanks for pointing this out. It should be $||\epsilon - \epsilon_\theta()||^2$ as used in diffusion models. We will correct this in the final version.
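For concreteness, the corrected objective is the standard noise-prediction loss used in diffusion models. The NumPy sketch below (function names `ddpm_loss` and `perfect` are illustrative, not from the paper) spells out $||\epsilon - \epsilon_\theta(\cdot)||^2$ under the usual closed-form noising $z_t = \sqrt{\bar\alpha_t}\, z_0 + \sqrt{1-\bar\alpha_t}\, \epsilon$:

```python
import numpy as np

def ddpm_loss(eps_theta, z0, alpha_bar_t, rng):
    """Standard noise-prediction objective ||eps - eps_theta(z_t)||^2."""
    eps = rng.standard_normal(z0.shape)
    z_t = np.sqrt(alpha_bar_t) * z0 + np.sqrt(1.0 - alpha_bar_t) * eps
    return float(np.mean((eps - eps_theta(z_t)) ** 2))

# Sanity check with a hypothetical "perfect" predictor that inverts the
# closed-form noising above; its loss should be (numerically) zero.
rng = np.random.default_rng(0)
z0 = rng.standard_normal((8, 4))
a_bar = 0.5
perfect = lambda z_t: (z_t - np.sqrt(a_bar) * z0) / np.sqrt(1.0 - a_bar)
print(ddpm_loss(perfect, z0, a_bar, rng))  # ~0.0 up to float rounding
```

A predictor with no information about the noise (e.g., one that outputs zeros) yields a strictly positive loss, which is what makes the objective trainable.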
Summary: The paper proposes a framework for controllable data generation using hierarchical implicit neural representations (INRs). It models conditional dependencies across layers in the parameter space to improve control over the generation process. Claims And Evidence: The paper presents clear evidence to support the claims. Methods And Evaluation Criteria: The proposed method and evaluation criteria are reasonable. Theoretical Claims: The theoretical claims are correct and consistent. Equations (1), (2), and (3) discuss the foundation of the hierarchical property of INR parameters. Equations (4) and (5) present how to formulate the hierarchy modeling with diffusion models. Experimental Designs Or Analyses: The paper presents valid quantitative and qualitative experiments. Tables 1 and 2 show the superior performance over existing methods, i.e., Functa, mNIF, GEM, and GASP. The analyses in Section 4.3 are intuitive. The ablation study demonstrates the effectiveness of conditional modeling. Supplementary Material: We have reviewed the supplementary material. Relation To Broader Scientific Literature: The key contribution of the paper is the controllability of data generation with INRs. The proposed CHINR uses a hierarchical latent vector and layer-of-experts to represent each data instance, while prior methods, e.g., Functa and mNIF, use a flat latent vector and MoE to represent data. CHINR proposes a hierarchical conditional diffusion model to model the conditional distribution, while prior methods model the joint distribution of flat latents. Essential References Not Discussed: None Other Strengths And Weaknesses: Strength: - The paper introduces a first-of-its-kind, hierarchical way to model INRs by incorporating layer-wise conditional dependencies, allowing fine-grained control over semantic generation. - The introduction of LoE greatly enhances the model’s ability to generate diverse semantics. The expert-sharing mechanism also improves parameter efficiency.
- The framework is tested comprehensively across multiple modalities, including images, 3D objects, and motion sequences. Weakness: - While the model enforces a hierarchical structure, the exact meaning of each layer’s latent representation is not explicitly defined. Users must manually inspect outputs to understand how different latents influence semantic factors. This lack of predefined interpretability might make fine-grained control difficult in real-world applications. - Compared to state-of-the-art generative models such as StyleGAN or diffusion models, CHINR appears to be restricted to relatively simple image contents and object representations. How does the model generalize to more complex data? - The proposed framework introduces a multi-stage training pipeline involving meta-learning, auto-decoding, and hierarchical diffusion modeling, which is computationally intensive. A breakdown of training/inference time and resource requirements would be useful. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer aYYY for the constructive feedback. ### **W1. Lack of predefined interpretability for each layer.** Thank you for the thoughtful comment. CHINR is designed to align the hierarchical structure of INRs with semantic abstraction, allowing each layer to control different levels of detail. While the semantics of each layer are not predefined (they are learned freely in Stage 1), the progressive control in Figure 4 and latent composition in Figure 6 demonstrate consistent layer-wise influence, suggesting that meaningful hierarchical semantics naturally emerge through training. Incorporating attribute supervision into Stage 1 to enforce predefined semantics at each layer is a promising direction to improve interpretability and fine-grained control in practical applications. ### **W2. Generalize to more complex data.** 1. Thank you for your insightful comments. Compared with other state-of-the-art methods such as StyleGAN, the data instance size represented by CHINR is significantly smaller (e.g., $5 \times 64$ latent vector). Additionally, the LoE utilizes a five-layer neural network, which inherently limits its generation quality. The primary advantage of using implicit neural representations (INRs) lies in their ability to represent data at arbitrary resolutions, rather than in achieving high reconstruction fidelity. 2. To generalize to more complex data, two main directions can be pursued: (1) enhancing the capacity of the INR, and (2) refining the meta-learning pipeline. For the first approach, dividing each data instance into smaller patches can significantly reduce the burden on the INR, allowing it to better capture local structures. For the second, existing meta-learning pipelines typically initialize the latent vector from a Gaussian distribution and optimize it over just three steps. 
This process can be improved by enabling more efficient optimization over a greater number of steps, thereby enhancing performance and adaptability. ### **W3. Training/inference cost** Thanks for your suggestion. We analyze the resource consumption of CelebA experiments in the following. All experiments were conducted on an RTX 3090 GPU with a batch size of 8 for both training and inference. *Training cost for Stage 1 and 2*: | | Time | Memory | |---------|------|--------| | Stage 1 | 50h | 4.3GB | | Stage 2 | 28h | 18.9GB | *Inference cost of CHINR, generating 1000 samples following HCDM*: | Time | Memory | |------|--------| | 2h | 8.4GB |
Summary: The paper introduces CHINR, a framework for controllable data generation using hierarchical neural representations. It addresses limitations of existing generative INR approaches that fail to capture hierarchical structures in data, leading to limited control over generation. The method consists of two stages: Stage-1 constructs a Layers-of-Experts network where each layer has its own latent vector for disentangled representations, and Stage-2 introduces a Hierarchical Conditional Diffusion Model to capture dependencies across layers for controllable generation. The framework enables hierarchical control over generated content at various semantic granularities. Experiments across different modalities show improved generalization and controllability compared to existing methods. ## update after rebuttal I hope the authors can address the larger-dataset problem in future revisions. As a consensus has been reached among reviewers, I will keep my score. Claims And Evidence: The proposed CHINR achieves outstanding performance. This is verified by experiments. Methods And Evaluation Criteria: The paper proposes HCDM and LoE as methods. The benchmarks are CelebA-HQ, ShapeNet, and SRN-Cars, which are commonly used ones. Theoretical Claims: No theoretical claims are involved in this paper. Experimental Designs Or Analyses: I found no issues regarding experimental designs or analysis. Supplementary Material: I read through the supplementary materials. The authors provide many samples that visualize the quality of CHINR. Relation To Broader Scientific Literature: While existing INR methods fail to leverage the hierarchy of semantic abstraction, this paper introduces hierarchical control in generative INRs. Essential References Not Discussed: None to my knowledge. Other Strengths And Weaknesses: Strengths: 1. Novelty: hierarchical control is achieved, different from previous works. 2.
Comprehensive evaluation: multi-modal experiments have been conducted to validate the method in various scenarios. Weaknesses: Scalability to larger datasets, as mentioned in "Conclusions". Other Comments Or Suggestions: Layout problem: the font sizes of the legends and ticks in Figs. 4 and 7 are too small. The authors should consider revising these figures. Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer N8d9 for the valuable comments. ### **W1. Scalability to larger datasets.** Thank you for pointing this out. As discussed in the conclusion, scaling CHINR to larger datasets is a known challenge. While CHINR demonstrates the core idea of hierarchical control through INR parameter modeling, extending it to more complex data can be achieved by enhancing the INR representation capacity and employing more efficient training approaches. For example, localized solutions such as patch-wise or spatial-adaptive modulation methods [1,2,3] can help INRs better capture local structures and improve scalability. Thanks for pointing out the layout problem of Fig. 4 and 7. We will increase the font size of those figures. ### **Reference** [1] Wang, Peihao, et al. Neural implicit dictionary learning via mixture-of-expert training. ICML, 2022. [2] Bauer, Matthias, et al. Spatial functa: Scaling functa to imagenet classification and generation. 2023. [3] Park, Dogyun, et al. DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations. ICLR, 2024.
Summary: The paper presents a novel method, CHINR, for controllable generative INR by exploiting the hierarchical structure in parameters. The authors employ a Layers-of-Experts (LoE) network to encode data with layer-wise latents and propose a Hierarchical Conditional Diffusion Model (HCDM) to learn conditional dependencies across layers. Experiments show that CHINR enables precise control at different granularities during generation. Claims And Evidence: The claims are supported by clear evidence. Methods And Evaluation Criteria: The proposed CHINR makes sense in solving controllable data generation. The evaluation criteria, such as success rate and FID, are convincing. Theoretical Claims: The theoretical claims are correct and clearly explained. We observed that Equation (2) in the main paper is grounded in explaining the representation ability of INRs, while Equation (3) models the corresponding hierarchical representations. Experimental Designs Or Analyses: The experimental designs and analyses are convincing. Figure 4 clearly presents controllable data generation, and Table 1 presents the corresponding quantitative evidence. Figure 6 intuitively explains why CHINR works. Supplementary Material: We reviewed all parts of the appendix. Relation To Broader Scientific Literature: The paper explores an orthogonal direction to existing INR literature by focusing on the controllability of data generation. It argues that methods such as Functa, mNIF, and GASP process the latent modulation vector indiscriminately, and instead proposes modeling this vector as a conditional chain to enable hierarchical control. Essential References Not Discussed: None Other Strengths And Weaknesses: Strength: - The observed connection between the hierarchy of semantics and model parameters is interesting and inspiring, and the idea is well-motivated. - The proposed diffusion model is a novel method to model the hierarchical dependency within parameters.
- The paper is well-written and its structure is straightforward. The mathematical formulations appear correct. - The empirical results show strong evidence of successful controllability across various modalities. The paper presents a thorough analysis showing that semantics are disentangled in parameter space. Weakness: - Since the parameters are generated with a conditional chain, there is a chance of producing out-of-distribution samples. Out-of-distribution samples cause erroneous parameters, and such errors accumulate if they occur in early layers. How do you solve this problem? - The binary condition length must be carefully tuned on each dataset to balance controllability and generalization. Can the authors explain more about the choice of binary condition? - How does the model generalize to data without an inherent frequency hierarchy, e.g., text? - Stage 2 is trained with data reconstructed by Stage 1; how does the reconstruction quality affect the conditional modeling in Stage 2? Can Stage 2 learn a compatible conditional chain if the reconstruction quality is low? Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer T7ed for the thoughtful comments. ### **W1. Out-of-distribution samples in conditional chain.** Thank you for your insightful feedback. We address error accumulation during both the training and inference phases. In the training phase, we begin by teaching the model to capture inter-layer dependencies, using ground truth layer-wise latents (obtained from Stage 1) as conditions. Subsequently, we fine-tune the model to construct a coherent conditioning chain by progressively incorporating generated latents as conditions. During inference, we further mitigate potential degradation by adjusting the standard deviation of the input noise in the diffusion process. This reduces the likelihood of generating outlier latents that could compromise output quality. ### **W2. Binary condition.** We choose to use binary condition for the following reasons: 1. With fewer bits to represent data, the model is less likely to memorize every detail of the training data. Instead, it must extract the core features that are most useful/general across many examples. This acts as a form of regularization. 2. As the model processes the input data, it learns to assign similar items to the same quantized value. Over time, these discrete codes become representative of certain semantic concepts or clusters, which correspond to the inherent structure of the data. ### **W3. Generalize to data without inherent frequency hierarchy.** Thank you for your interesting question. CHINR leverages the frequency-based representational hierarchy inherent in INR structure, which aligns well with data like images and 3D shapes with spatial frequency hierarchy. The current formulation may not be directly applicable for modalities like text that don't follow such a frequency hierarchy. 
However, CHINR’s core idea—modeling hierarchical latent dependencies across network layers—may still be adapted by identifying alternative structures (e.g., syntactic or semantic hierarchies) relevant to the modality. ### **W4. Reconstruction quality affects Stage 2** The sensitivity of hierarchical control to quality variations is evident in two key aspects: 1. Reconstruction Quality in Stage 1: The effectiveness of hierarchical control in Stage 2 is bounded by the reconstruction quality achieved in Stage 1. If the proposed LoE fails to learn hierarchically structured latents during Stage 1, it limits the controllability during generation in Stage 2. Our findings show that when the PSNR on CelebA-HQ exceeds 25, no visually noticeable distortions are observed, and hierarchical control performs reliably. 2. Quality of Ground-Truth Data: The controllability is also constrained by the quality of the ground-truth data used in Stage 1. Higher noise levels hinder the learning of conditional dependencies. In the extreme case where the data is entirely noise, the resulting latents lack meaningful structure. As the noise in the ground-truth data increases, both reconstruction and generation quality progressively degrade.
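The chain-generation and noise-damping ideas in the W1 response can be sketched in a few lines of toy code. This is entirely illustrative (not the authors' model): `sample_layer`, the 0.9 conditioning weight, and the Gaussian "denoising" step are our assumptions, used only to show how shrinking the input-noise std tames the conditional chain.

```python
import numpy as np

# Toy sketch of layer-wise conditional sampling (ours, not the authors' code):
# each layer's latent is drawn conditioned on the previously generated layers,
# and `noise_scale < 1.0` mimics shrinking the input-noise std at inference
# to reduce the chance of outlier latents early in the chain.

def sample_layer(prev_latents, dim, noise_scale, rng):
    """Hypothetical one-layer sampler; the condition is the mean of earlier latents."""
    cond = np.mean(prev_latents, axis=0) if prev_latents else np.zeros(dim)
    noise = rng.normal(scale=noise_scale, size=dim)
    return 0.9 * cond + noise  # toy 'denoising' step; the 0.9 weight is arbitrary

def sample_chain(num_layers, dim, noise_scale, rng):
    latents = []
    for _ in range(num_layers):
        latents.append(sample_layer(latents, dim, noise_scale, rng))
    return np.stack(latents)

full = sample_chain(num_layers=4, dim=8, noise_scale=1.0, rng=np.random.default_rng(0))
damped = sample_chain(num_layers=4, dim=8, noise_scale=0.7, rng=np.random.default_rng(0))
print(full.shape, damped.std() < full.std())  # (4, 8) True
```

Because each latent is linear in the injected noise here, damping the noise std shrinks the whole chain proportionally; in a real diffusion chain the effect is nonlinear, but the qualitative point (smaller input noise, fewer outlier latents early in the chain) is the same.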
Summary: The paper introduces a novel framework to capture hierarchical data semantics with implicit neural representations, enabling improved control over data generation. The framework is structured in two stages. In the first stage, a layer-of-experts (LoE) architecture is employed to capture general semantics with shared experts and distinct semantics with latent vectors. In the second stage, a hierarchical conditional diffusion model (HCDM) is employed to learn the distribution of latent vectors. The HCDM models the inherent hierarchical structure as conditional dependencies between layer l and layers <l. The framework is evaluated on data from four different domains, i.e., images, point clouds, neural radiance fields, and motions, and presents superior performance in controllability and reconstruction ability. ## Update after rebuttal I will keep my ratings since most of my concerns are solved. Claims And Evidence: Each claimed contribution is supported by clear evidence. Methods And Evaluation Criteria: The proposed hierarchical approach makes sense. It aligns the INR structure with data semantics for layer-wise control. The evaluation criteria (e.g., Table 1) are reasonable in showing the controllability of CHINR. Theoretical Claims: The theoretical claims seem correct. Equation 2 explains the hierarchical structure of the INR, which serves as the foundation of the hierarchical approach. Experimental Designs Or Analyses: The experimental designs are valid in proving the controllability achieved through the hierarchical approach. The analyses in Section 4.3 are convincing and show evidence of disentangled semantics. The only issue would be the scale of data, as CHINR is currently evaluated only on small-scale datasets. Supplementary Material: I reviewed the supplementary material. Relation To Broader Scientific Literature: CHINR aims at bridging the weight-space structure and data semantics for controlled generation, which is new to the INR literature.
The idea of hierarchical/progressive data generation has been widely explored in a broader literature (e.g., GANs, VAEs). The difference is that CHINR models this hierarchy in network parameter space instead of data feature space. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. The idea of bridging INR weight hierarchy and semantic hierarchy is novel, and the layer-wise generation effectively models this hierarchy. 2. The experiments show CHINR’s controllability in different datasets, validating its hierarchical approach across multiple domains. 3. The paper is well written. The motivation, methodology, and experiments are clearly presented. Weaknesses: 1. Despite performing well on small datasets like CelebA-HQ, CHINR’s scalability to larger and more complex data (e.g., higher-resolution images or shapes) is unclear. Adding more experts or layers may hinder training convergence and raise inference costs. In addition, longer conditional chains are more prone to error accumulation, affecting generation quality. 2. The reliance on meta-learning may limit generalization to more diverse data patterns. Since latents are initialized from a small distribution for fast adaptation, they struggle to capture broader variations. 3. CHINR allows attribute variation at different levels but lacks a mechanism for targeted modification (e.g., changing pink lips to red). This limits its use in real-world applications such as image and shape editing, where targeted adjustments are essential. Other Comments Or Suggestions: See the weaknesses part. Questions For Authors: See the weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank reviewer XvvS for the valuable feedback. ### **W1. Scalability to larger datasets** Thanks for raising this scalability concern. The CHINR framework focuses on establishing the connection between INR parameters and data semantics for controllable generation. While our experiments use smaller datasets, the core framework is generalizable to larger-scale data as long as this fundamental connection holds. To handle increased complexity, more efficient learning methods should be employed for more complex data patterns. Beyond simply adding more layers or experts, we can adopt localized solutions such as patch-wise or spatially adaptive modulation, as explored successfully in [1, 2]. This reduces the burden on a single set of INR parameters to represent the whole data and allows the model to capture local structures, improving scalability and convergence without significantly increasing inference cost. ### **W2. Reliance on meta-learning** Thanks for pointing out this thoughtful concern. We agree that meta-learning requires a shared initialization for fast adaptation, which works well for data with consistent structure (e.g., faces) but may struggle with more complex data. This is an intrinsic issue of the meta-learning method. To address this, one can allow more inner-loop updates at the cost of increased training time. It is also possible to incorporate more adaptive initialization or weight-update strategies [3, 4] to better handle data variability. Alternatively, we can use auto-decoding as done in the NeRF experiments, which avoids the need for fast adaptation and may offer better generalization in such cases. ### **W3. Lack of targeted modification mechanism** Thank you for the insightful comment. CHINR is designed to leverage the hierarchical structure of INR parameters for layer-wise semantic control, rather than direct attribute editing.
Although it doesn't currently support direct modifications like changing lip color, it can be extended with attribute supervision (in Stage 1) or latent manipulation to enable targeted edits—an exciting direction for future work. ### **References** [1] Bauer, Matthias, et al. Spatial Functa: Scaling Functa to ImageNet Classification and Generation. 2023. [2] Park, Dogyun, et al. DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations. ICLR, 2024. [3] Wang, Ruohan, et al. Structured Prediction for Conditional Meta-Learning. NeurIPS, 2020. [4] Baik, Sungyong, et al. Meta-Learning with Adaptive Hyperparameters. NeurIPS, 2020. --- Rebuttal Comment 1.1: Comment: This solved most of my concerns. I will keep my positive ratings.
Generalization in Federated Learning: A Conditional Mutual Information Framework
Accept (poster)
Summary: The paper introduces a novel information-theoretic framework to analyze generalization in federated learning (FL). By extending the supersample-based conditional mutual information (CMI) framework with a “superclient” construction, the authors decompose the generalization error into two components: the participation gap (between participating and non-participating clients) and the out‐of‐sample gap (local generalization error). They derive multiple CMI-based bounds, including high-probability and excess risk bounds, and show that differential privacy constraints naturally lead to tighter generalization guarantees. Experiments with FedAvg demonstrate that the evaluated CMI bounds are non-vacuous and reflect actual generalization behavior. ## Update after rebuttal I will stay at my current score and recommend a (4): Accept Claims And Evidence: The paper makes the following claims: - The superclient construction yields a two-level decomposition of generalization error with tight CMI-based bounds. - Differential privacy at both local and global levels ensures that CMI terms (and thus the generalization error) remain bounded. - Evaluated CMI (e-CMI) bounds can recover best-known FL convergence rates in low empirical risk regimes. These claims are well supported by both detailed theoretical proofs (Theorems 4.1, 4.2, and 4.3) and empirical evaluations using standard FL frameworks such as FedAvg (discussed in Section 7). Methods And Evaluation Criteria: The paper extends the existing conditional mutual information framework to federated learning. It introduces a new "superclient" construction along with supersamples to address the two levels of generalization error. The methods are rigorous, employing tools like KL divergence and concentration inequalities. While the mathematical methods are solid, including more straightforward explanations or diagrams could help clarify the concepts for readers who may not be experts in the field. 
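Since the review suggests adding more straightforward explanations or diagrams, a schematic form of the two-level decomposition could help. The following is our illustrative notation, not the paper's exact statement:

```latex
\overline{\operatorname{gen}}
  = \underbrace{\mathbb{E}\!\left[ L_{\mathcal{D}}(W) - L_{\mathrm{part}}(W) \right]}_{\text{participation gap}}
  + \underbrace{\mathbb{E}\!\left[ L_{\mathrm{part}}(W) - \widehat{L}(W) \right]}_{\text{out-of-sample gap}}
```

Here $L_{\mathcal{D}}$ denotes the risk over fresh clients drawn from the meta-distribution $\mathcal{D}$, $L_{\mathrm{part}}$ the population risk averaged over the participating clients' distributions, and $\widehat{L}$ the empirical risk on the participating clients' samples; the paper bounds each term with CMI quantities built from the superclient/supersample construction.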
Theoretical Claims: The authors make strong theoretical claims by breaking down the generalization error into two parts: the participation gap and the out-of-sample gap. They also show that differential privacy helps to keep these errors small. The detailed proofs build on well-known ideas from the literature. It would be beneficial if the paper discussed any limitations of these theoretical results, such as their dependence on the bounded loss function assumption. Experimental Designs Or Analyses: The experimental work supports the theory; however, the paper could be improved by testing on more diverse datasets or different federated learning scenarios to provide a broader view of the approach’s effectiveness. This suggestion is not a weakness (in terms of publication) but an opportunity for further validation. Supplementary Material: I have reviewed the supplementary material, which includes additional proofs, visualizations (such as the superclient construction), and further derivations related to structured loss functions and model aggregation strategies. Relation To Broader Scientific Literature: The work builds on established information-theoretic generalization analyses and extends them to the federated learning setting. It relates to recent advances in FL generalization and connects with the broader literature on differential privacy and meta-learning. Essential References Not Discussed: While the paper references key works in CMI and FL, a discussion comparing the new bounds with very recent developments in PAC-Bayesian approaches or alternative stability-based analyses could further contextualize the contributions. Other Strengths And Weaknesses: **Strengths:** - Innovative extension of the CMI framework with a superclient construction, addressing a gap in FL generalization analysis. - Rigorous theoretical derivations that are well-connected to existing literature. - Empirical validation that supports the theoretical claims. 
**Weaknesses:** - No major weaknesses were identified. The experimental section could be expanded to include more diverse FL settings or real-world datasets for broader validation. Other Comments Or Suggestions: I do not have any major comments, and only have a few suggestions. Consider including a more detailed discussion of potential limitations or assumptions (e.g., reliance on bounded loss functions or the i.i.d. assumption in some cases). Another suggestion is to enhance the clarity of the presentation, perhaps through more diagrams or flowcharts, which would help readers unfamiliar with CMI techniques. Questions For Authors: - How sensitive are the bounds to deviations from the bounded loss assumption in practice? - Could you provide further insight into how the e-CMI bounds might be estimated in high-dimensional settings beyond the one-dimensional case? - How do the derived bounds perform in non-i.i.d. client scenarios that are common in cross-device FL? Providing answers to these questions may help readers better understand and apply this framework to analyze their FL algorithms. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you sincerely for your valuable feedback on our paper. Our responses follow. >- While the mathematical methods are solid, including more straightforward explanations or diagrams could help clarify the concepts for readers who may not be experts in the field. >- Another suggestion is to enhance the clarity of the presentation, perhaps through more diagrams or flowcharts, which would help readers unfamiliar with CMI techniques. **Response.** Thank you for your suggestion. We have included a visualization of our superclient and supersample construction, and we will also provide additional background on CMI techniques along with more accessible explanations of these concepts in the revised version. >- It would be beneficial if the paper discussed any limitations of these theoretical results, such as their dependence on the bounded loss function assumption. >- Consider including a more detailed discussion of potential limitations or assumptions (e.g., reliance on bounded loss functions or the i.i.d. assumption in some cases). >- How sensitive are the bounds to deviations from the bounded loss assumption in practice? **Response.** Regarding the boundedness assumption, as noted in the paragraph immediately following Theorem 6.2 (Lines 361–362) in the paper, this assumption can be relaxed to a sub-Gaussian condition, where the loss function may be unbounded but exhibits Gaussian-like tail behavior. Under this relaxed assumption, all the theoretical results in our paper remain valid. In the case of heavy-tailed losses, additional techniques such as truncation would be required to ensure the results hold. We will elaborate further on these points in the revised version. >- The experimental work supports the theory; however, the paper could be improved by testing on more diverse datasets or different federated learning scenarios to provide a broader view of the approach’s effectiveness. **Response.** Thank you very much for the suggestion. 
We will include an additional dataset in the revised version, and if the reviewer has any specific dataset in mind, we would be happy to incorporate it. >- While the paper references key works in CMI and FL, a discussion comparing the new bounds with very recent developments in PAC-Bayesian approaches or alternative stability-based analyses could further contextualize the contributions. **Response.** Compared to PAC-Bayesian and stability-based bounds, our e-CMI bounds are significantly easier to estimate in practice. Moreover, existing PAC-Bayesian and stability-based bounds do not account for the participation gap and do not exhibit fast-rate behavior for overall generalization. We will include a more detailed discussion comparing our results with these existing bounds in the revised version. >- Could you provide further insight into how the e-CMI bounds might be estimated in high-dimensional settings beyond the one-dimensional case? **Response.** We note that the e-CMI bounds in our paper always involve mutual information between one-dimensional random variables by construction. Specifically, e-CMI refers to the mutual information between a single loss value (treated as a one-dimensional random variable, since the loss function maps inputs to real values) and a Bernoulli mask, which is a binary random variable. This low-dimensional structure makes e-CMI particularly easy to estimate, which is one of its key advantages. >- How do the derived bounds perform in non-i.i.d. client scenarios that are common in cross-device FL? **Response.** Our bounds hold under a non-i.i.d. setting in the sense that each client may have a different data distribution, which is typical in cross-device FL. However, we do assume that clients are sampled independently, and that within each client, data points are drawn i.i.d. 
If either of these assumptions does not hold, the generalization bounds would need to be refined using additional techniques to account for the dependencies, for example, by invoking graph-based dependencies among clients or modeling temporal (mixing) dependencies in the data. We will include this discussion in the revised version.
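The point made in the rebuttal that e-CMI only involves one-dimensional quantities makes it cheap to estimate numerically. A minimal plug-in sketch (ours, not the paper's estimator; the synthetic losses and the 20-bin histogram are assumptions chosen for illustration):

```python
import numpy as np

# Minimal plug-in sketch (ours, not the paper's estimator): the e-CMI term only
# involves a scalar loss and a binary mask, so a simple histogram-based MI
# estimate suffices. The synthetic losses and binning below are assumptions.

def discrete_mi(x_bins, u):
    """Plug-in MI between a discretized scalar and a binary variable (in nats)."""
    joint = np.zeros((x_bins.max() + 1, 2))
    np.add.at(joint, (x_bins, u), 1.0)   # joint histogram of (binned loss, mask)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    pu = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ pu)[nz])).sum())

rng = np.random.default_rng(0)
u = rng.integers(0, 2, size=10_000)          # Bernoulli supersample mask
loss_dep = rng.normal(loc=u.astype(float))   # loss that leaks the mask
loss_ind = rng.normal(size=10_000)           # loss independent of the mask

def binned(x):
    return np.digitize(x, np.linspace(x.min(), x.max(), 20))

mi_dep = discrete_mi(binned(loss_dep), u)
mi_ind = discrete_mi(binned(loss_ind), u)
print(mi_dep > mi_ind)  # the dependent loss carries measurably more information
```

With only a scalar and a bit per sample, even this crude histogram estimator separates the dependent from the independent case, which is the practical advantage of e-CMI highlighted in the rebuttal.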
Summary: The paper studies the generalization error of federated learning algorithms using the CMI framework. The goal is to capture the out-of-sample gap and the participation gap within this framework. To do so, a federated learning setup is considered where each user observes data distributed according to a distribution sampled from a meta-distribution $\mathcal{D}$. The established bounds, including the fast-rate bounds, contain two terms that capture the two above-mentioned gaps: the first term captures the effect of uncertainty in the choice of distribution, while the second term is a conditional mutual information term, as derived in the paper of Steinke and Zakynthinou. Claims And Evidence: 1. The paper claims to study the generalization error of FL. However, I believe that the bounds established do not capture any characteristic of FL algorithms. In particular, - The results obtained in Sections 4 and 5 are valid for "any" learning algorithm, and not necessarily for a federated learning algorithm, since in the end what is considered is the overall learning algorithm $\mathcal{A}:\mathcal{Z}^{nK} \to \mathcal{W}$, as a black box. Therefore, in my opinion, such results do not reflect any aspect specific to FL. - The results of Section 5 are valid for one-round aggregation and under simplifying assumptions (such as Bregman divergence). Therefore, they cannot be considered FL. 2. The claims about the behavior of Theorem 6.2 with respect to data heterogeneity do not seem to be precise enough. See questions. Methods And Evaluation Criteria: While the CMI framework seems interesting, I could not get the main take-home message of the paper. See the questions. Theoretical Claims: The established results sound correct to me. Experimental Designs Or Analyses: The experimental results are limited, but I do not think this is the main weakness of the paper. Supplementary Material: I have skimmed the proofs, which look correct.
Relation To Broader Scientific Literature: Sufficiently discussed. Essential References Not Discussed: The paper sufficiently addresses and discusses similar work. However, the precise advantage of the bounds obtained over the previous literature is not clear. More precisely, it is not clear what the main question studied in this work is. Other Strengths And Weaknesses: Strengths: 1. The issue of studying the generalization error of FL is important, and the use of the CMI framework looks interesting. 2. A number of previously established results are adapted to this setup. 3. The idea of considering supersamples, where each client's "test" dataset may have a different distribution (sampled independently from the meta-distribution $\mathcal{D}$) than that client's training dataset distribution, to capture the effect of non-participating clients is interesting. Weaknesses: 1. While the established bounds are correct, their proof techniques are rather standard extensions of CMI results to the distributed setup (similar to what has been considered in a number of papers for mutual-information-based bounds) or to the setup where the distribution of each client comes from a meta-distribution. The latter case is also not new, as it was considered, for example, in Theorem 5.1 of Zhang et al. (NeurIPS 2024). The established results are mainly an adaptation of previous results, without any significant novelty. 2. Many order-wise behaviors are not rigorous. See questions. Other Comments Or Suggestions: I found the notation very heavy and hard to follow. However, this may be partly unavoidable due to the complicated setup considered in this work. Questions For Authors: 1. My main concern is with the implications of the bounds. What are the new conclusions we learn from these bounds for FL? What exactly is the message of this paper? I think the general argument that MI bounds can become vacuous but CMI bounds can never become unbounded is not sufficient for a new publication.
The authors need to give concrete examples or case studies where their result brings a new insight or understanding. 2. In various places it is mentioned that the order is $\mathcal{O}\left(\frac{1}{\sqrt{K}}+\frac{1}{\sqrt{Kn}}\right)$. However, if the order-wise behavior is examined, then $\mathcal{O}\left(\frac{1}{\sqrt{K}}+\frac{1}{\sqrt{Kn}}\right)= \mathcal{O}\left(\frac{1}{\sqrt{K}}\right)$. So the order-wise behavior of the bounds does not depend on $n$, which is a bad sign. Can you explain this? 3. Following on from the above point, it seems that in various places the above conclusions implicitly assume that the KL divergences (or mutual information terms) are $\mathcal{O}\left(1\right)$, which is certainly not true in general. Therefore, the above order-wise behavior is not correct. 4. It is claimed that the bound of Theorem 6.2 becomes larger for more heterogeneous data. More precisely, it is mentioned that in the non-interpolating version of this result, the bound is proportional to $\mathbb{E}[\ell(W,Z_{i,j})]-\mathbb{E}[\ell(W_i,Z_{i,j})]$, and since this term becomes larger for more heterogeneous data, the bound then increases with heterogeneity. However, this argument is not sufficient because the second term of the bound is the square root of the product of this term and the conditional mutual information term. To evaluate the behavior of the bound with respect to heterogeneity, the behavior of this CMI term as a function of data heterogeneity must also be studied. Code Of Conduct: Affirmed. Overall Recommendation: 2
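The order-wise observation in Q2 above is simple arithmetic and easy to sanity-check numerically (illustrative only, with an arbitrary choice of $K$):

```python
import math

# Q2 in a nutshell: since n >= 1 implies 1/sqrt(K*n) <= 1/sqrt(K), the sum
# 1/sqrt(K) + 1/sqrt(K*n) is sandwiched between 1x and 2x of 1/sqrt(K) alone,
# so the order-wise rate shows no dependence on n.
K = 100
base = 1 / math.sqrt(K)
for n in (1, 10, 1_000, 1_000_000):
    total = base + 1 / math.sqrt(K * n)
    assert base < total <= 2 * base
    print(n, round(total / base, 4))  # ratios: 2.0, 1.3162, 1.0316, 1.001
```

The ratio to the $1/\sqrt{K}$ term alone never exceeds 2 and tends to 1 as $n$ grows, which is exactly the reviewer's point that the stated rate carries no asymptotic dependence on $n$.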
Rebuttal 1: Rebuttal: We thank you sincerely for your valuable feedback on our paper. Our responses follow. >-... the bounds established do not capture any characteristic of FL ... **Response.** We respectfully disagree with the reviewer's claim that our bounds fail to capture key characteristics of FL, and the argument that the overall algorithm is ultimately treated as a black box is a misinterpretation of our results in Sections 4 & 5. To analyze FL in the two-level generalization setting using CMI, the introduction of the superclient is essential—a construction not needed in centralized learning and one that highlights the fundamental differences between the two settings. Moreover, treating the FL algorithm as a global mapping and constructing a supersample of size $2nK$ would break the symmetry required for CMI analysis, due to the non-i.i.d. nature of the data. Instead, we believe the more accurate interpretation is that our results in Sections 4 & 5 are designed to apply to any FL algorithm, without being restricted to a particular instantiation. The goal of these sections is to establish a general framework for analyzing FL as a learning problem. This generality should not be viewed as a weakness. In Section 6, we go beyond this generality by considering specific aggregation strategies and loss functions, allowing us to derive sharper insights such as faster learning rates and the benefits of interpolation in local training. Although we focus on the one-round setting, as noted in Remark 6.1, extending the results to the multi-round case is straightforward. We plan to include the multi-round extension in the Appendix, as introducing it in the main text would further complicate the already dense notation. >- Main take-home message ... | main question ...|Q1. My main concern ... **Response.** The main goal of our paper is to frame FL as a learning problem and analyze its learning rate in a two-level setting using CMI.
Without assuming any specific FL strategy, our general two-step CMI bounds yield several insights: (1) strong client privacy (both globally and locally) implies generalization; (2) in homogeneous settings, the bounds reduce to standard CMI analysis, with $I(W; V | \widetilde{Z}, U)$ capturing data heterogeneity; and (3) we identify conditions for fast learning rates. When applied to specific algorithms like FedAvg under certain loss assumptions, we show that FL can achieve even faster rates, thanks to model averaging and favorable loss properties. We believe this behavior extends to broader loss classes, though a formal proof remains open. Compared to existing bounds for FL, our e-CMI bound is notably easier to estimate and is the first MI-based bound to give fast-rate guarantees for both the participation and out-of-sample gaps. Lastly, we refer the reviewer to our first response to Reviewer 4mA5 for a discussion on the practical implications of our results, and the difference between using generalization bounds as indicators of learnability vs. as sufficient conditions for generalization performance. >- their proof techniques are rather standard ... **Response.** We acknowledge that, following our superclient construction, our CMI bounds build on existing techniques, with some extensions to accommodate the two-level setting. However, developing a general CMI framework for FL does not require a complete overhaul of prior methods. As noted in the review guidelines, "originality may arise from creative combinations of existing ideas", and we believe this applies to our contribution. >- Q2. ... order-wise behavior is not correct. **Response.** We believe the reviewer raises a valid point regarding order-wise behavior, and we also share the concern that many prior works overlook MI, KL, or other complexity terms in such analyses. In the absence of clear decay rates, we think the stated rates should be seen as reflecting worst-case behavior.
For example, $O(1/\sqrt{K} + 1/\sqrt{Kn}) = O(1/\sqrt{K})$ implies that in highly heterogeneous settings, even with infinite local data ($n \to \infty$) for participating clients, FL may still fail to generalize to unseen clients—an intuitive outcome given the impact of extreme heterogeneity. Regardless of the exact decay of the CMI term, removing the square-root dependency remains desirable for faster convergence. We will clarify this in the revision. >- Q4. It is claimed that the bound ... **Response.** This seems to be a misinterpretation of our result. The CMI term in Theorem 6.2, unlike those in Sections 4 & 5, is a local CMI for each client, involving the local model $W_i$ rather than the global model $W$, and thus is not influenced by client heterogeneity. However, the reviewer’s intuition holds in a multi-round extension of Theorem 6.2, where $W_i$ may depend on other clients through repeated communication. We will clarify this in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for the provided response. I believe the paper has some interesting results, but not enough. About capturing the characteristics of FL: It is not a misinterpretation. The bound explicitly depends only on the data distribution (which consists of K sets, each containing n samples i.i.d. from a distribution that is itself randomly chosen from a meta-distribution); not explicitly on the learning algorithm. Surely, it depends implicitly through the CMI terms, but this cannot be considered as capturing the FL characteristics since, otherwise, one could argue that [SZ20]'s bound also captures the effect of FL (when clients' data have the same distributions). The learning algorithm $\mathcal{A}:\mathcal{Z}^{nK} \to \mathcal{W}$ is indeed a black box; if the $nK$ data points are processed at one point or distributed, the bound holds. 
The authors probably see this as a strength, but I see it as a weakness: if a bound holds for both centralized and distributed learning algorithms, it cannot capture the characteristics of FL in my opinion. Hence, the bound cannot have an FL-specific take-home message. As mentioned above, the meta-learning point was already shown in Theorem 5.1 of Zhang et al. (NeurIPS 2024). Here is the CMI version of it (we know technically their proofs are not very different), but I am not convinced of any significant new result/message/conclusion in this paper. Regarding the order-wise analysis, I agree with the authors that such a loose order-wise discussion has unfortunately become common in many papers and is not specific to this paper. Finally, regarding Q4, again this is not a misinterpretation. To explain it better, consider the experiment performed by Sun et al. 2024. To understand the effect of heterogeneity, they introduced a family of distributions over the MNIST dataset, indexed by $\rho \in [0,1]$, where $\rho = 0$ corresponds to the homogeneous case and $\rho = 1$ is the most heterogeneous case. Importantly, for each $\rho$, the distribution of **all** clients changes. There is a rationale behind this choice: to compare the performance of two sets of distributions (for clients), they should be comparable in some sense. In that paper, the setup is considered such that for each $\rho$, the marginal distribution over all clients for each digit remains $1/10$. Thus, we can now see that, for example, to apply Theorem 6.2 to this setup, the local CMI terms also change, since the distribution of each client's data changes. Let's, for simplicity, first assume that instead of the local CMI terms in Theorem 6.2, we had MI terms, i.e. $I(S;W_i)$ (or their single-datum version), since their analysis is simpler in this case.
Then, in the case where a similar algorithm is used in all clients, due to the concavity of $I(S;W_i)$ with respect to the data distribution, $\frac{1}{K}\sum_i I(S;W_i)$ for the heterogeneous case is smaller than $\frac{1}{K}\sum_i I(S;W_i)$ for the homogeneous case. I think a similar result holds for the CMI terms as well (I am not sure, but if not, it needs to be shown). Thus, in a suitable setup, the two cases are not easily comparable using Theorem 6.2, even if the CMI term involves only local algorithms. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their prompt reply and for actively engaging in the discussion. >- The bound explicitly depends only on the data distribution ... if a bound holds for both centralized and distributed learning algorithms, it cannot capture the characteristics of FL. We thank the reviewer for further clarifying the concern. However, we still feel that the argument *"if a bound holds for both centralized and distributed learning algorithms, it cannot capture the characteristics of FL"* is a rather subjective point of view. In particular, if centralized learning can be viewed as a special case of distributed learning (where all data resides on a single client), then a generalization bound that holds for both centralized and distributed settings should be considered a desirable property, not a limitation. For example, in the work of Sun et al. (2024), the last sentence of the abstract states: "Particularly, in the i.i.d. setting, our results recover the classical results of stochastic gradient descent (SGD)". Moreover, the third contribution listed in their Section 1.2 emphasizes that "In i.i.d. setting with convex loss functions, our bounds match existing results of SGD in the sense that FedAvg reduces to SGD method". This clearly demonstrates that their bounds hold for both centralized (SGD) and distributed (FedAvg) cases.
Yet, it may not be reasonable to claim that their bound fails to capture the characteristics of FL. Similarly, since the standard stability-based generalization bound for SGD is considered a special case of the results in Sun et al. (2024), it is completely acceptable to say that [SZ20]'s bound captures the effect of FL in the i.i.d. case (i.e. Corollary 4.2), and our results generalize [SZ20]. We would also like to clarify that to rigorously study the characteristics of any specific FL algorithm, establishing a general generalization framework for FL is a necessary step. This is precisely how we organize our paper: Sections 4 and 5 present a general framework that treats FL as a learning problem without committing to any specific algorithm. This level of generality is what enables us to analyze specific FL settings in Section 6. In other words, if the generality of Sections 4 and 5 is viewed as a weakness, it would be difficult for us to further improve the paper, as this general foundation is essential to our overall contribution. >- ... the meta-learning point was already shown in Theorem 5.1 of Zhang et al. ... but I am not convinced of any significant new result/message/conclusion in this paper. As noted, our paper treats FL as a learning problem and mainly focuses on its learning rates and learnability. Accordingly, we study the tightness of the generalization bound (e.g., fast-rate bounds), high-probability guarantees, and the conditions under which FL remains learnable (e.g., privacy) and achieves sharper learning rates (e.g., model averaging under strictly convex and smooth losses). In contrast, Zhang et al. (2024) take a practitioner-centric perspective, seeking a generalization guarantee for their specific algorithm and thus invoking a looser input-output MI bound. However, using the existence of their work to cast a negative light on our broader learning-theoretic contributions overlooks the full scope of our results. >- Finally, regarding Q4, ...
We appreciate the reviewer's efforts in further clarifying the question. Regarding the relationship between the CMI terms in Theorem 6.2 and data heterogeneity, we suggest focusing on the fundamental quantity that our CMI term is used to bound, namely, the local generalization gap for client $i$, expressed as $\mathbb{E}[\ell(W\_i,Z'\_{i,j})-\ell(W\_i,Z\_{i,j})]$ (see Eq. (48) or Eq. (47) in Appendix). In our single-round setting, the local model $W\_i$ is trained independently and has not communicated with other clients, so this local generalization gap is, by construction, independent of any other clients. Initially, it seems counterintuitive to us to describe this quantity as a function of data heterogeneity, since it is only about a single client's data and model. In this context, analyzing the local generalization gap alone seems unrelated to the notion of data heterogeneity across clients. Based on the reviewer's further explanation, we now understand that if the data distribution itself is governed by some data heterogeneity parameter, then the value of this local gap may indeed vary with that parameter since the gap is ultimately a function of the underlying data distribution. That said, we note that even in homogeneous (i.i.d.) settings, different data distributions can result in different values of the local generalization gap. Therefore, while changes in the CMI terms may be influenced by variations in data heterogeneity, they do not, in general, give a direct or unique measure of it. We acknowledge this subtlety and will clarify it in the revised version of the paper.
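To make the discussion above concrete, here is a toy Monte Carlo sketch (our own illustration, not from the paper) of the local generalization gap $\mathbb{E}[\ell(W_i,Z'_{i,j})-\ell(W_i,Z_{i,j})]$ for a single client whose "model" is the sample mean under squared loss. Shifting only the location of the client's distribution (a heterogeneity-style parameter) leaves the gap unchanged, while changing the variance changes it, illustrating that the gap is a function of the client's own data distribution rather than a direct measure of cross-client heterogeneity:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_gap(mu, sigma, n=50, trials=20000):
    """Monte Carlo estimate of E[loss(W, Z') - loss(W, Z)] for one client:
    W is the sample mean of n draws from N(mu, sigma^2), loss is squared error.
    Analytically this gap equals 2*sigma^2/n, independent of mu."""
    z = rng.normal(mu, sigma, size=(trials, n))    # training samples S
    zp = rng.normal(mu, sigma, size=(trials, n))   # fresh samples S'
    w = z.mean(axis=1, keepdims=True)              # the "trained" local model
    gaps = ((w - zp) ** 2).mean(axis=1) - ((w - z) ** 2).mean(axis=1)
    return float(gaps.mean())

g_base = local_gap(mu=0.0, sigma=1.0)   # ~ 2/50 = 0.04
g_shift = local_gap(mu=3.0, sigma=1.0)  # same gap: only the mean moved
g_wide = local_gap(mu=0.0, sigma=2.0)   # ~ 4x larger: the shape changed
```

The function name and the Gaussian setup are assumptions made for illustration; the example only shows which aspects of a client's distribution the local gap responds to.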
Summary: The paper proves mutual information-based generalization bounds for a federated learning setting that bound both the out-of-sample gap (between the empirical and population distributions of the participating clients) and the participation gap (between the participating clients and the underlying meta distribution of clients). Theorem 4.1 includes the generalization bound as the sum of the bounds for the out-of-sample and participation gap terms, which is order-wise bounded in Remark 4.2 assuming a constant bound on the differential privacy degree of the FL algorithm. Next, in Theorem 4.2 the authors present a PAC-style generalization bound that bounds the gap with high probability. In Section 5, the authors show fast-rate bounds that decay as $ \mathcal{O} \left( \frac{1}{\sqrt{nK}}+\frac{1}{\sqrt{K}} \right) $ for $K$ clients and $n$ samples per client. Section 7 discusses numerical results for FedAvg-trained models, which show a smaller generalization gap when increasing $K$ or $n$. Claims And Evidence: The paper's theoretical claims on generalization bounds for federated learning seem correct. The current main text contains many theorems, which leaves less space for discussing their implications. It would be better to dedicate more space to discussing why the MI-based generalization analysis for federated learning will be useful and how the approach can lead to regularization methods for reducing the out-of-sample and participation gaps. In addition, I would like to ask the authors how the theoretical analysis in this paper goes beyond a direct application of the existing MI generalization bounds to the FL setting. I understand that the FL problem has two error terms to bound (out-of-sample and participation gaps). Still, the authors seem to apply the MI generalization bounds in a standard centralized-like setting to bound each of the error terms.
Can the authors elaborate on how their analysis contributes beyond the existing MI-based generalization frameworks? Methods And Evaluation Criteria: There is little discussion of estimating the mutual information terms in the generalization upper bounds when evaluating the bounds in Section 7. I want to ask the authors whether the limited sample size of the clients is enough to estimate the mutual information terms in the generalization upper bound. It seems to me that a challenge with MI generalization bounds is obtaining a tight estimate of the mutual information term in the bound for the high-dimensional variables in the neural net layers. Theoretical Claims: Not in detail, but the results seem correct to me. As I asked before, can the authors explain what extra challenges their analysis addresses beyond the existing MI generalization results that can be applied to bound each of the out-of-sample and participation gaps? Experimental Designs Or Analyses: The experiments in Section 7 provide a reasonable sanity check that the bounds correlate with actual generalization error. However, the authors do not separately report the "out-of-sample gap" and "participation gap." I wonder how fast the participation gap (when considered alone) decreases with $K$. Also, the non-IID setting should be explained in more detail (in the supplementary lines 1666-1672 several details on the frequencies and parameters are missing), and the separated out-of-sample and participation gaps should be reported to see how they change with $K$ and $n$. Supplementary Material: Yes, I looked into the proofs, and they seem correct. I also checked the additional details on the experiments in Section 7. Relation To Broader Scientific Literature: While the paper extends mutual information-based generalization analysis to federated learning, I am uncertain about the practical role of the proposed generalization bounds in improving FL algorithms.
In statistical learning theory, generalization bounds often introduce a capacity norm or an implicit property of the hypothesis class, which can be explicitly or implicitly regularized to reduce the generalization gap. However, in this work, it is unclear whether the MI-based generalization bounds can translate into regularization methods to improve generalization. Can the authors clarify how their generalization bounds can be used to regularize and reduce the participation gap in FL? The numerical results seem to suggest that the only way to reduce the gap is to increase the number of samples and clients. Essential References Not Discussed: Yes Other Strengths And Weaknesses: See the previous comments. Other Comments Or Suggestions: See the previous comments. Questions For Authors: See the previous comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
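As context for the MI-estimation concern raised in this review: estimating mutual information involving high-dimensional network weights is indeed difficult, but a plug-in (histogram) estimate between a binary variable and a one-dimensional statistic is routine. A minimal sketch (our own illustration with toy data; the function name is an assumption, not something from the paper):

```python
import numpy as np

def plugin_mi_binary(u, t, bins=20):
    """Plug-in MI estimate (in nats) between a binary variable u and a
    one-dimensional statistic t, via a 2 x bins joint histogram."""
    joint, _, _ = np.histogram2d(np.asarray(u, dtype=int), t, bins=[2, bins])
    p = joint / joint.sum()
    pu = p.sum(axis=1, keepdims=True)      # marginal of u
    pt = p.sum(axis=0, keepdims=True)      # marginal of (binned) t
    mask = p > 0
    return float((p[mask] * np.log(p[mask] / (pu @ pt)[mask])).sum())

rng = np.random.default_rng(0)
u = rng.integers(0, 2, size=50_000)
t_dep = u + 0.5 * rng.normal(size=u.size)   # t depends strongly on u
t_ind = rng.normal(size=u.size)             # t independent of u

mi_dep = plugin_mi_binary(u, t_dep)   # well above zero
mi_ind = plugin_mi_binary(u, t_ind)   # near zero (small positive bias)
```

With both variables effectively one-dimensional, a simple 2-D histogram suffices and no advanced MI estimator is needed; the same estimate would be hopeless if either argument were a high-dimensional weight vector.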
Rebuttal 1: Rebuttal: We thank you sincerely for your valuable feedback on our paper. Our responses follow. >- ... why the MI-based ... will be useful and how ... lead to regularization ... >- ... uncertain about the practical role ... **Response.** We note that our bounds do have practical implications, e.g., as mentioned in the Related Work section (Line 84-90), if the CMI terms in Theorem 6.2 are small, then the norm $||w-w_i||$ is also small, which is the regularization term used in FedProx. Upon reflection, we realize that this discussion would be more appropriately placed directly after Theorem 6.2, to make the implication more explicit. We understand that the reviewer may have been looking for new regularization schemes for reducing participation gap and out-of-sample gap, rather than connections to well-known ideas such as norm-based capacity control. In response, we recommend focusing on Theorem C.2 (which, while looser than the CMI bounds, is more interpretable). This result will ultimately lead to a gradient-norm-based regularizer when SGD or SGLD is used, and hints at a potential feature alignment mechanism in the KL sense for clients. We will elaborate on these implications in the revised version. Finally, regarding the practical implication, we would like to share a perspective based on our own research experience in learning theory. If the goal is to derive sharper generalization bounds with fast decay rates, i.e. to study learnability and learning rates of a problem, then practical applications may not follow directly. In this case, generalization bounds mainly serve to evaluate whether learning is theoretically possible. In the extreme case, the tightest generalization bound is the generalization error itself, which provides no new actionable insights for algorithm design. If the goal is to obtain actionable insights for algorithm design, then tightness of the bound becomes less critical. 
Generalization upper bounds, even if loose, can serve as a basis for designing regularizers. For example, penalizing weight norms is a widely used practice to improve generalization, despite the well-known fact that norm-based bounds are often vacuous and cannot explain generalization behavior in deep learning [R1, R2]. [R1] Vaishnavh Nagarajan, and J. Zico Kolter. "Uniform convergence may be unable to explain generalization in deep learning." NeurIPS 2019. [R2] Yiding Jiang, et al. "Fantastic Generalization Measures and Where to Find Them." ICLR 2020. >- ... how the theoretical analysis ...? >- ... explain what extra challenges ...? **Response.** The bounding steps for the out-of-sample gap are indeed similar to those in standard centralized analysis, which is expected as each individual out-of-sample gap is close to an in-distribution generalization gap. The main challenges arise in bounding the participation gap. Compared to existing MI-based bounds, our contributions include: the construction of the superclient, the proof of its symmetry properties (Lemma 4.1), the shifted Rademacher representation of the weighted participation gap (Lemma E.1), and the leave-one-out argument for the participation gap (Lemma F.1). To clarify, the use of weighted generalization error and leave-one-out arguments is not new in the broader learning theory literature. However, their application in our setting is enabled by the superclient construction, which serves as the foundation for these results. >- ... on estimating the mutual information ... **Response.** Please notice that we use e-CMI bounds in experiments to avoid the difficulties of estimating MI between high-dimensional R.V.'s. The second sentence in Section 7 states: "we estimate the fast-rate e-CMI bound for FL, as… Additionally, due to the challenges associated with estimating MI when dealing with high-dimensional random variables, we compute an evaluated version of the CMI bound from Theorem 4.1". 
Importantly, the e-CMI bound is computed between two one-dimensional variables (one of which is binary), making the estimation easy and eliminating the need for advanced MI estimators. Further details are in Appendix H. >- ... do not separately report ... >- ... the non-IID setting ... and the separated ... **Response.** The pathological non-IID data partitioning follows McMahan et al. (2017): data are sorted by label, split into 200 shards of size 300, and each client is randomly assigned 2 shards. We will include more details in the revision. As for separate plots of the participation gap (PG) and out-of-sample gap (OG), we would like to clarify that our experiments are based on the e-CMI bound in Theorem 5.1, which consists of three components weighted by jointly optimizable coefficients $C_1, C_2, C_3, C_4$. These coefficients are optimized using the SLSQP algorithm implemented in the SciPy package. As a result, the e-CMI bound is not a simple addition of the individual bounds for PG and OG, which makes it difficult to present separate plots for these two quantities. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response to my comments and questions. I find the responses satisfactory and now can better appreciate the authors' motivation behind the CMI generalization bounds. I will update my score accordingly. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for taking the time to carefully read our responses, and we are glad that our clarifications helped convey the motivation behind the CMI-based generalization bounds. We will incorporate the discussions from the rebuttal into the revised version of the paper.
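For readers unfamiliar with the pathological non-IID partitioning cited in the rebuttal above (McMahan et al., 2017), it can be sketched in a few lines; the random labels here are a stand-in for the real MNIST labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for MNIST labels: 60,000 examples, 10 classes.
labels = rng.integers(0, 10, size=60_000)

# Pathological non-IID split: sort by label, cut into 200 shards of
# 300 indices each, and assign each of 100 clients 2 random shards.
order = np.argsort(labels, kind="stable")
shards = order.reshape(200, 300)                    # 200 shards x 300 indices
assignment = rng.permutation(200).reshape(100, 2)   # 2 shards per client
clients = [np.concatenate([shards[s] for s in pair]) for pair in assignment]

# Every client ends up with 600 examples from at most 4 classes, since each
# 300-example shard of label-sorted data spans at most 2 adjacent classes.
```

Each client thus sees a severely skewed label distribution while the shards jointly cover the full dataset, which is what makes this split "pathological".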
Summary: This work studies the question of generalisation in federated learning (FL), where $K$ users aim to share some benefits of their learning phase via a central server without sharing their data. The authors propose novel generalisation bounds tailored to FL involving the Conditional Mutual Information (CMI) framework, yielding original information-theoretic results. The authors first focus on general in-expectation and high-probability generalisation bounds (Section 4), before proposing fast rates (Section 5). Finally, they involve the specificities of some popular aggregation algorithms used in FL in Section 6 and empirically study the tightness of their bounds in Section 7. Claims And Evidence: All theoretical results look reasonable and extend CMI bounds beyond batch learning to reach FL. The impact of their fast-rate result compared to those of Section 4 is well established in Section 7. Something that would benefit from being clearer: you said in l.122-123, left column, that your CMI framework is inspired by the meta-learning one of Hellström & Durisi 2022. Is it possible to make precise the specificities of your derived results (e.g. those of Section 4) compared to theirs? It seems you control the same global true risk, due to the assumption that all tasks are drawn according to $\mathcal{D}$. Methods And Evaluation Criteria: The experimental framework looks sound, with reasonably big CNNs (170K parameters), which is nice for a theoretical paper, although I did not check the details carefully. An important point: it seems that there is no comparison with existing bounds. For instance, how does your bound behave wrt those of, e.g., Sefidgaran et al. 2024? Is it challenging to plot their results? Theoretical Claims: I only looked at the proof of Theorem 4.1, which seems correct. Experimental Designs Or Analyses: I did not check the experimental protocol in detail.
Supplementary Material: The proof of Theorem 4.1 Relation To Broader Scientific Literature: I do not know enough about either the FL or CMI literature to provide relevant feedback. Essential References Not Discussed: I do not know enough about either the FL or CMI literature to provide relevant feedback. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: - Something that would benefit from being clearer: you said in l.122-123, left column, that your CMI framework is inspired by the meta-learning one of Hellström & Durisi 2022. Is it possible to make precise the specificities of your derived results (e.g. those of Section 4) compared to theirs? It seems you control the same global true risk, due to the assumption that all tasks are drawn according to $\mathcal{D}$. - It seems that there is no comparison with existing bounds. For instance, how does your bound behave wrt those of, e.g., Sefidgaran et al. 2024? Is it challenging to plot their results? - In most of your results, you have a $O(1/\sqrt{K})$ term. Would it be possible to recover the influence of $n$ in such terms (maybe through the mutual information term)? In conclusion, this work looks serious and theoretically well-grounded, with a plethora of new generalisation bounds and nice experiments (which would benefit from being extended). However, I do not know much about either the FL or CMI literature. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank you sincerely for your valuable feedback on our paper. Our responses follow. >- you said in l.122-123 left column that your CMI framework is inspired by the meta-learning one of Hellström & Durisi 2022. Is it possible to make precise the specificities of your derived results (e.g. those of Section 4) compared to theirs? It seems you control the same global true risk, due to the assumption that all tasks are drawn according to $\mathcal{D}$. **Response.** As mentioned in the paper, our construction is indeed inspired by the framework for meta-learning proposed by Hellström & Durisi (2022). However, our results are not directly comparable to theirs due to a key difference in problem setup: their meta-learning framework requires the meta-learner (i.e., the global model $W$) to be further trained on the test tasks (i.e., previously non-participating clients), whereas in FL, the global model is evaluated directly on unseen clients without any additional local fine-tuning. To enable a direct comparison with Hellström & Durisi (2022), the CMI framework presented in this paper would need to be extended to the personalized FL setting, where the global model is allowed further local adaptation. In that case, Lemma 4.1 would also need to be revised accordingly, as the hypothesis may no longer be invariant to the ordering of the "test data". We will include these discussions in the next revision. >- An important point: it seems that there is no comparison with existing bounds. For instance, how does your bound behave wrt those of, e.g., Sefidgaran et al. 2024? Is it challenging to plot their results? **Response.** The PAC-Bayesian and rate-distortion bounds in Sefidgaran et al. (2024) are indeed challenging to compute numerically for more complex neural networks, as they require estimating KL or MI between high-dimensional random variables, even when the model parameters are quantized.
Notably, in their paper, the generalization bounds are not plotted for their ResNet experiments on CIFAR-10; instead, they only present the behavior of the generalization error to support the insights behind their bounds. In contrast, a key advantage of our results lies in the ease of estimating the e-CMI bound in the FL setting, making it more practical for empirical evaluation. >- In most of your results, you have a $O(1/\sqrt{K})$ term. Would it be possible to recover the influence of $n$ in such terms (maybe through the mutual information term)? **Response.** Yes, the reviewer raises a valid point. Indeed, the participation gap term seems to follow an $O(1/\sqrt{K})$ behavior. At the same time, in the i.i.d. setting, increasing $n$ is also expected to reduce the participation gap. This effect is implicitly captured by the CMI term $I(W;V|\widetilde{Z},U)$, as demonstrated in Corollary 4.2, where all clients share the same data distribution (i.e., there is only one client distribution). In the non-i.i.d. case, however, it is more challenging to explicitly characterize the impact of $n$ on the participation gap, as its quantitative effect depends on the degree of data heterogeneity, which can vary significantly across scenarios. We will include this discussion in the revised version.
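The $O(1/\sqrt{K})$ behavior discussed above is, at its core, an averaging effect: the mean of $K$ sampled client risks deviates from the meta-level risk at rate $1/\sqrt{K}$. A toy check under the simplifying assumption that per-client population risks are i.i.d. draws from a meta-distribution (an illustration of the rate only, not the paper's bound):

```python
import numpy as np

rng = np.random.default_rng(0)

def participation_gap(K, trials=5000, meta_risk=0.3, spread=0.1):
    """E|average of K sampled client risks - meta risk|, with per-client
    risks drawn i.i.d. from N(meta_risk, spread^2). Scales as 1/sqrt(K)."""
    risks = rng.normal(meta_risk, spread, size=(trials, K))
    return float(np.abs(risks.mean(axis=1) - meta_risk).mean())

g10, g40, g160 = (participation_gap(K) for K in (10, 40, 160))
# Quadrupling K roughly halves the expected gap.
```

Under heterogeneity, `spread` would itself depend on how different the client distributions are, which is one way to read the rebuttal's remark that the effect of $n$ on the participation gap is entangled with the degree of heterogeneity.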
Falsification of Unconfoundedness by Testing Independence of Causal Mechanisms
Accept (poster)
Summary: Addressing the common assumption of causal sufficiency, this work proposes an approach to falsify this assumption in causal effect estimation settings. It relies on datasets from multiple environments to test for violations of the Independence of Causal Mechanisms (ICM) principle. The authors study a causal model where treatment and outcome are linear transformations of (known) feature representations and show theoretically that unobserved confounding creates dependencies in certain model parameters. They propose an algorithm to test for unconfoundedness along with a permutation-based calibration scheme to set a test rejection threshold. Besides conceptual improvements over existing approaches, such as avoiding CI testing and having milder assumptions, the method shows promising empirical improvements over existing algorithms in synthetic settings and convincing results on realistic data. ### update after rebuttal ### I thank the authors for the rebuttal. My assessment remains positive. Claims And Evidence: All theoretical statements are supported and the proofs are accessible. The authors also made efforts to support their claims empirically, for example illustrating the effect of shifts in different model parameters (Fig. 2). Methods And Evaluation Criteria: The authors evaluate their approach against suitable competitors for falsification of unconfoundedness in a treatment effect setting, using sensible evaluation criteria (falsification rate). The method was also tested in a realistic setting using a twin study proposed in earlier work (Karlsson and Krijthe, 2023). Theoretical Claims: Thm. 4.2, 4.3, 4.4, where I did not find any issues. Experimental Designs Or Analyses: Figs. 1-5. The synthetic data generation is reasonable and there are no apparent issues with the twin dataset setup.
Supplementary Material: The background and theory sections (A, B) and supplementary experiments (D4) in detail, the future work sketch (C) and data generation details (D1-3) in an overview. Relation To Broader Scientific Literature: The idea of detecting confounding from violations of the ICM principle has been explored before and the approach is somewhat close to existing work by Karlsson and Krijthe (2023). However, it takes a new perspective with different algorithmic ideas, adding sufficient originality. Overall, there is only a small literature addressing untestable assumptions in causal inference and discovery, and the paper adds useful new insights. Essential References Not Discussed: The relevant related work has been cited and put into context. Other Strengths And Weaknesses: The problem of addressing untestable assumptions in causal inference, such as sufficiency, is of high significance to the literature and, therefore, a strength of this work. The paper is clearly written, and the presentation of both theory and experiments is well structured. As a potential weakness, the assumptions are rather strong. In particular, the true feature representations $\varphi$ and $\psi$ in the model need to be known. The authors, however, illustrate the implications of this in experiments under misspecification (Fig. 2) and also sketch a kernelized version of their method (Appendix C), looking to remove this assumption in future work. According to the authors (l. 401 right) the parametric nature also brings certain empirical advantages over existing approaches. Given these points, it seems worthwhile to investigate the given model for the scope of this work. Other Comments Or Suggestions: - ln. 602, left: "then so are the transformations also independent r.v.s" - ln. 687, left: "where (b) equality follows from that" Questions For Authors: 1. 
Can you comment on how constraining you consider the assumption that the feature representations $\varphi$ and $\psi$ remain the same across environments while only the linear parameters shift? Related to this, how limiting is the fact that your method only works under changes of $\alpha$ and $\mu$ but not $\beta$ (e.g. Fig. 2)? 2. Do you have an explanation for the degrading performance with more covariates (Fig. 1 right) and why this is not an issue for HGIC? 3. The generating process in (3) as described in Appendix D.2. uses quite more samples and environments (1000, 250) than the experiments in D.1. Is such a large number of environments necessary here for the approach to work? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their invaluable feedback, suggestions for improvement and questions. Below are answers to your questions. # 1: Fixed feature representations with linear parameter shifts We agree that it is important to consider what it entails to assume a model class with fixed feature representation across environments with linear parameters shifts. Our key idea -- testing independence of causal mechanisms -- benefits from the fixed feature representation, as it enables efficient testing by focusing solely on the linear parameters. However, a fixed feature representation with linear parameter shifts is not fundamental to our proposed falsification strategy for testing mechanism independence. As explored in Appendix C, one could, in principle, replace the fixed feature representation with a potentially infinite-dimensional implicit representation using kernel methods. That said, we do not consider the assumption of a fixed feature representation to be particularly restrictive. To illustrate this, suppose a feature $X_1$ is highly predictive of treatment assignment in environment A but not in environment B. The fixed representation for environment A and B can still accommodate this by including $X_1$ (and its nonlinear transformations), and then only the linear parameter in environment A will utilize this feature whereas the linear parameter in environment B will not. This suggests having a fixed representation is not overly restrictive, as long as it is “rich enough” to accommodate different environments. In practice, of course, the richness of the representation will affect the efficiency of the test, making it a trade-off between expressiveness and statistical power. Hence, methods to learn a common representation or implicit representations are interesting future research directions. # 1: Unable to falsify under changes of $\beta$ In an ideal world, we would hope that our method could also falsify under solely changes on $\beta$. 
However, based on our current understanding, this may represent a fundamental limitation of this type of falsification approach, as both our findings and those of Karlsson and Krijthe (2023) suggest the same conclusion. But from a practical viewpoint, a key characteristic of observational studies is that treatment assignment is non-randomized. In our setting, where we look at multiple observational studies, it seems plausible that each study has its own unique treatment assignment mechanism. Thus, in practical use cases one could expect $\alpha$ (if not $\mu$) to vary across environments. # 2: Degradation with more covariates The decline in falsification rate as the number of covariates increases in the right-most plot of Figure 1 can be explained as follows: As the number of covariates grows, estimating the linear parameters $(\omega,\gamma)$ becomes more challenging, leading to greater sample variance in the estimates. This, in turn, increases the uncertainty in estimating the covariance matrix $\text{Cov}(\omega,\gamma)$, which is crucial for computing our test statistic. Consequently, the test loses power as the number of covariates rises. For HGIC, the choice of conditional independence test significantly impacts its response to increasing covariates. For example, we found that using HGIC with the kernel conditional independence test (KCIT) performed poorly as the number of covariates increased; an expected outcome, since large conditioning sets are known to harm KCIT (see Zhang et al., 2011). Meanwhile, HGIC remained effective when using the Pearson partial correlation test, even as the number of covariates grew. But this robustness was not unique to HGIC: the transportability test with Pearson's test also remained unaffected by additional covariates. Thus, we can only speculate that the test statistic used in the Pearson test is very well-suited for the linear synthetic data we generated.
# 3: Large sample sizes in data-generating process in Appendix D.2 In the experiment shown in Figure 2a, where we used data as described in Appendix D.2, our goal was to explore the theoretical limits of our approach in light of the results from Section 4.3. Specifically, the identifiability guarantees for falsifying unmeasured confounding under various linear parameter shifts. To minimize finite-sample effects, we increased the sample size and the number of environments. However, due to computational constraints, we could not scale them indefinitely. Notably, this was the only experiment where we took this approach. In our other experiments, our method remains effective even with much smaller sample sizes and fewer environments. For instance, in the left-most plot of Figure 1, our approach was able to successfully falsify unconfoundedness with just 10 environments and 25 samples per environment. # References Zhang, Kun, et al. "Kernel-based conditional independence test and application in causal discovery." UAI, 2011.
Summary: The paper proposes a test for unconfoundedness based on the usual assumptions of causality from the potential outcomes perspective, but more critically on assumptions about the independence of causal mechanisms and a specific functional form (including specifying the functional form correctly). The test requires observing more than one environment. The test statistic is based on a Frobenius norm of the covariance matrix of the parameters of the causal mechanisms. The test statistic is calibrated through permutation. Finally, the authors investigate the falsification rate of the method and baselines with synthetic data, modifying the number of environments, number of observations per environment, number of covariates, and violations of the specification of the model. They also test it on a semi-synthetic dataset. Claims And Evidence: In general the claims are supported by evidence, both theoretically and empirically. Maybe a minor complaint about claims vs. evidence is the fact that the authors claim they tried their method on real-world data, whereas the closest they get to real-world data is semi-synthetic data. Methods And Evaluation Criteria: Yes, the methods and the evaluation criteria make sense for the proposed problem. Theoretical Claims: I didn't check the proofs of any of the theoretical results, but they all seem natural to me (with the exception, of course, of Lemma 4.3, which requires a lot of computation). Experimental Designs Or Analyses: The experimental design and analysis seem very reasonable to me. Both the test statistic and the permutation-based calibration make sense. It would be interesting to know whether the authors tried other test statistics, or whether they think that other tests would work and potentially be more efficient in some way. Additionally, varying the number of environments, sample size, number of covariates and misspecification is something I would expect to see.
Supplementary Material: I checked the supplementary material on possible extensions where the functional form might not be needed (implicit feature representations). That might have been one of the most interesting parts of the paper for me.

Relation To Broader Scientific Literature: I think the ideas presented in the paper are interesting. In the end, confounding is arguably one of the biggest problems in causality, and proposing a method for detecting confounding based on heterogeneous data is a valuable addition to the research on causality.

Essential References Not Discussed: The authors cite much of the related research. I would personally add a couple more references. For example, on page 2, column 2, line 97, when they talk about multi-environment data and looking only at the relevant portion for causal effect estimation, that is precisely the idea of Invariant Causal Prediction (ICP) in Peters et al. (2016) (from JRSS-B), which addresses the same task and was one of the first to explicitly include environment information for causal tasks. Other research that could have been included is the more recent work on falsifiability of causal discovery algorithms, like Faller et al. (2024) in AISTATS.

Other Strengths And Weaknesses:

Strengths:
- The theoretical ideas of the paper are interesting and seem reasonable to me.
- The empirical tests make sense with respect to their claims.

Weaknesses:
- I think the empirical evaluation reveals what I consider to be one of the biggest weaknesses of the paper, namely how data-hungry the method is. I can imagine that, because of the way the test statistic is designed (and the permutation-based calibration), one needs several environments and several observations per environment to get a reasonable falsification rate.

Other Comments Or Suggestions: None.

Questions For Authors: The results of Section 6.2.2 are really difficult to interpret.
What I understand is that you vary the parameters across environments, as changes in the mechanisms, to see whether you can detect unconfoundedness using your test. But why do we expect changes in the alphas to give us falsification while changes in the betas don't?

Ethical Review Concerns: No concerns.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the response, in particular their excitement for our proposal to extend our method to implicit feature representations. While it felt outside the scope of the current manuscript, we agree this is one of the more promising directions of our work, and we hope to encourage others to see this possibility. Below you can find answers to your comments. In addition, based on your feedback, we are making the following changes to our camera-ready version:
- we state more explicitly in the abstract and experimental section that we use semi-synthetic data.
- we add the references you suggested to the related works section.

# Clarification on results in Section 6.2.2

The primary objective of the experiment described in Section 6.2.2 is to empirically validate the theoretical results presented in Section 4.3. To achieve this, we simulate data based on the model specified in Equation (3). The different parameters on the x-axis in Figure 2a represent which of the parameters in Equation (3) are allowed to vary across environments. This is done under both the absence and presence of unmeasured confounding. Theorem 4.2 predicts that any changes associated with the $\alpha$ parameters should enable the detection of an unmeasured confounder, if one is present. Our experimental results confirm this prediction. The underlying reason is that, unlike the $\beta$ parameters, the $\alpha$ parameters appear in both the treatment and outcome mechanisms, as established in Lemma 4.3.

# Using other test statistics

There are indeed alternative test statistics for testing the independence between two random variables: the main requirement is that the test statistic works with multivariate random variables (due to $\omega$ and $\gamma$ being multivariate).
Our reasoning for using the Frobenius norm of the covariance matrix as a test statistic stems from the fact that it only considers linear dependencies, which are typically the easiest to identify. One could also consider test statistics that capture nonlinear dependencies, but this extra flexibility could make the method more data-hungry. For instance, we also considered the Hilbert-Schmidt Independence Criterion (HSIC), which allows for testing nonlinear dependencies using an appropriate kernel (with a linear kernel, one recovers the test statistic we already use). However, some initial experiments showed that HSIC was also not particularly robust. Another alternative test statistic that we did not explore but could be considered is distance correlation, which can also detect both linear and nonlinear dependencies.

# Connection with ICP

We agree that Invariant Causal Prediction (ICP), as introduced by Peters et al. (2016), should be acknowledged for its role in highlighting the premise of utilizing data from multiple environments. We have thus added a reference to their paper in the related works. While both ICP and our approach consider multiple environments, they are based on fundamentally different principles. Our method explicitly exploits changes across environments, whereas ICP is designed to identify what remains invariant. To further illustrate this contrast, Peters et al. (2016) state on page 2: *“We exploit, in other words, that the conditional distribution of the target variable of interest (often also termed “response variable”), given the complete set of corresponding direct causal predictors, has to remain identical under interventions on variables other than the target variable.”* This stands in stark contrast to our framework.
Unlike ICP, which assumes the conditional distribution of the target variable remains unchanged across environments, we explicitly state in Assumption 4.1 that the conditional distributions of the target variables (in this case, the outcome and treatment variables) can vary across environments rather than remain invariant.

# Comment on our method being “data hungry”

While improving sample efficiency is an important goal for future work, we emphasize that our method already outperforms the baselines across different numbers of environments and samples per environment. Even in the most challenging small-sample case we tested (10 environments with 25 samples each), our method achieved a falsification rate of 0.8, compared to 0.6 and 0.4 for the baselines. Having said that, access to multiple environments is critical for any falsification method based on variation between environments, though the number of required environments will likely depend on how much variation there is between the environments.
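Of the alternative statistics discussed in this rebuttal, distance correlation is perhaps the easiest to sketch: its population version is zero if and only if the two random vectors are independent, so it also captures nonlinear dependencies the linear cross-covariance statistic would miss. Below is a minimal empirical version (illustrative code under my own naming, not from the paper; rows are samples):

```python
import numpy as np

def _double_centered_dists(z):
    """Pairwise Euclidean distance matrix of z (shape (n, d)),
    double-centered as in the empirical distance covariance."""
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x, y):
    """Empirical distance correlation between samples x and y (both 2-D)."""
    a, b = _double_centered_dists(x), _double_centered_dists(y)
    dcov2 = (a * b).mean()                       # squared distance covariance
    denom = np.sqrt((a * a).mean() * (b * b).mean())
    return np.sqrt(max(dcov2, 0.0) / denom) if denom > 0 else 0.0
```

In the testing setup discussed above, this statistic could replace the Frobenius-norm one, with the same permutation calibration over environment labels.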
Summary: This paper addresses the problem of unmeasured confounding in observational data. Most causal estimation methods assume that there are no unmeasured confounders, an assumption that is hard to test in practice. The authors propose a method for testing this assumption in situations where data from multiple environments are present. Previous work in this area relied on unreliable conditional independence testing, so this work proposes a novel approach based on hypothesis testing of statistical covariance instead. If the model is well specified, the authors show that this approach consistently outperforms other approaches.

I appreciate the authors' comments. The other reviewers seemed to share similar concerns around the clarity of the connection to, and difference from, Karlsson & Krijthe 2023, and it seems like the authors are aware of the need to explain this in more detail. With that and an explicit discussion of assumptions around the feature representations, I'm happy to raise my score.

Claims And Evidence: The claims made by this paper appear well supported. The problem they're seeking to address is well motivated and defined, and the approach seems sound.

Methods And Evaluation Criteria: The synthetic data used for the experiments appear reasonable. I do think at least some detail about the synthetic data should be included in the main body of the paper and not relegated to Appendix D.1, but it appears to be a reasonable choice for synthetic data. The Twins dataset is a nice choice for an empirical evaluation. I wish more than two other methods were available for comparison, but given the specific nature of the problem being addressed, it's understandable.

Theoretical Claims: While I did not dig into the proofs in the supplementary material, the theoretical claims in the main paper appear correct.

Experimental Designs Or Analyses: The baselines used for the experiments make sense, and the experimental design overall is sound.
I do wish the authors had done more experimentation with model misspecification. Since the need for a correctly specified model seems like a major weakness of this approach, more robust testing of the consequences of misspecification would help reduce those concerns.

Supplementary Material: Apart from skimming the synthetic data generation process, I did not review the supplementary material.

Relation To Broader Scientific Literature: This is probably my biggest open question. From the related work, it seems like your approach is quite similar, at least in problem setup and high-level approach, to Karlsson & Krijthe 2023. The related work, in fact, makes it seem almost incremental (i.e., the same approach except they did CI tests and you did a different sort of test), but I'm not sure if that's actually the case. It would be helpful to more clearly describe how this paper extends, and differs from, Karlsson & Krijthe 2023. For example, your first two listed contributions are: "we formalize a Neyman-Rubin causal model for multi-environment data under the principle of independence causal mechanisms" and "we prove that the presence of unmeasured confounding has testable implications in the form of dependencies between the model's observed parameters". Skimming through Karlsson & Krijthe 2023, both of those already seem to be covered by that work. Is there some subtle difference between their version and this one that would qualify both of these as novel contributions?

Essential References Not Discussed: I'm not sure how essential this is, but in the Related Work I was surprised that there was no discussion of identifiability, since that seems like a closer analogue to falsification than sensitivity analysis.

Other Strengths And Weaknesses: Overall, I think this is a solid paper. The problem is well motivated, the paper layout and descriptions are very clear, and the approach seems reasonable and effective.
I wish there were more discussion around the feature representations psi and phi. The need for correctly-specified feature representations isn't really discussed in the paper until the experimental results in 6.2.3 and the post-experiment discussion. While the narrative of the paper as a whole flows well, Section 4.1 seems to be missing at least a sentence or two of lead-in. An assumption of this approach seems to be that we have access to these feature representations, but it's not actually directly called out as an assumption anywhere that I can see. As you mention in the discussion, we could learn these from data, so it's not a fatal flaw. But some discussion in Section 4 about how we get psi and phi, and experimental results showing that it's at least somewhat possible with empirically learned psi and phi, would help a lot.

I think some clarity around what assumptions are made by your algorithm, and which assumptions are ones you'll be attempting to falsify, would help. Assumption 3.1 is a common assumption and one you'll falsify, but that's not directly stated in Section 3.2 until the final sentence, making it a bit unclear. Similarly, Assumption 4.1 is another one that your algorithm attempts to falsify, but just a little bit before that in the body of the text (lines 170-171), you state another assumption ("the feature representations psi(X) and phi(X,A) are considered to be fixed across environments"), but I think this assumption is one you're /actually/ making, not one you're falsifying.

The first time psi-~ and phi-~ appear, I believe, is lines 210-211 in Section 4.2, but you don't actually define them at this point (you just say that we assume we have access to them). In the next paragraph, you discuss them more and it seems like they are estimates of phi and psi, but actually defining them before you use them would help.

Are you using the wrong y-axis label in Figure 2?
Figures 1 and 2 both have the y-axis labeled "Falsification rate", but in Figure 1, higher is clearly better (which makes sense, since falsification is the goal), while in Figure 2, lower is better. Figure 2 seems to actually show the p-value from the hypothesis test? (Hence why you show the alpha=0.05 significance level.)

Other Comments Or Suggestions: You should name your algorithm so it's not just called "Ours". If it's largely an extension of HGIC, naming it HGIC-XXX (for some relevant XXX acronym) would be fine, but it's clunky to not actually have a name in the paper.

Questions For Authors: My first two are primarily what I went over under "Relation to Broader Scientific Literature":
1. How does your proposed algorithm differ from HGIC in Karlsson & Krijthe 2023?
2. In what ways are your first two listed contributions different from the similar formalisms/proofs in Karlsson & Krijthe 2023?
3. Could you discuss the transportability assumption, and how it relates to your work, a bit more? In Related Work, you contrast your work with prior work in transportability and point out the key difference as being that your approach "assumes independence of causal mechanisms" while this work "require[s] transportable treatment effect or access to randomized data." In what ways is the 'independence of causal mechanisms' assumption more reasonable/achievable in practice than the 'transportable treatment effect' assumption?
4. Unless I'm misunderstanding what's being shown in Figure 2 (see my confusion about the y-axis label for Figure 1 vs. Figure 2), it appears as though, in Figure 2b, in the presence of unmeasured confounding, the falsification rate is actually significantly worse as the number of samples increases. For example, for a 6-degree polynomial, only using 50 samples has a falsification rate (p-value?) of nearly 0 (well below the 0.05 line) while 500 samples is over 0.9! Why is that?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for the invaluable feedback and questions. We have provided answers and proposed changes related to your questions below. In addition, we will also make the following changes for the camera-ready version:
- update Section 4 with more explicit assumptions on the feature representations
- name our method MINT (Mechanism INdependence Test)
- add references on identifiability to the related works section, specifically on partial identification under unmeasured confounding

# 1, 2: Differences between our work and Karlsson and Krijthe (2023)

While we adopt the same setting as [KK23], a key conceptual difference in our approach makes our falsification strategy more sample-efficient. The framework of [KK23], which is based on DAGs, leads to a constraint-based causal discovery approach that relies on faithfulness and the causal Markov property. Importantly, they test independent causal mechanisms *indirectly* via a conditional independence statement involving treatment, outcome, and covariates in their assumed DAG. In contrast, one of our main contributions is the insight that it is possible to *directly* test the independence of causal mechanisms by inspecting the parameters of the treatment mechanism $E[A\mid X,S=s]$ and outcome mechanism $E[Y^a\mid X,S=s]$, which allows us to leverage functional assumptions. We prove that the same theoretical guarantees for falsification remain when going from indirect to direct testing of ICM. This conceptual shift results in a substantially different algorithm. While [KK23] performs conditional independence tests on the data, our algorithm uses a two-stage approach: first estimating nuisance models (the parameters of the treatment and outcome mechanisms), then applying an unconditional independence test. Our approach appears to be more efficient, provided these models are well specified, even in comparison to HGIC with a well-specified conditional independence test.
There are two potential reasons for this efficiency. First, an unconditional independence test is statistically easier than a conditional one. Second, and more subtly, HGIC does not naturally allow for the efficient use of all available data in each environment: the conditional independence test in HGIC can only use 2 samples per environment, so it aggregates results from multiple tests to utilize all samples in each environment. Our algorithm avoids this issue entirely, as it leverages all data within a single independence test. Based on this, we will make the following changes to the manuscript:
- add a paragraph in Section 4 highlighting the above-mentioned differences between our work and [KK23].
- reformulate the first part of our contributions to: *“by formalizing the problem using a Neyman-Rubin causal model for multi-environment data, we show that falsification of unconfoundedness is possible by testing dependencies between causal mechanisms directly, rather than indirectly, by combining the principle of independent causal mechanisms with functional assumptions on the mechanisms.”*

# 3: Transportability versus ICM

To argue why ICM can be a more reasonable assumption than transportability, we will illustrate why we think that ICM is typically a weaker assumption than transportability, and we will add this explanation to the paper when introducing the ICM assumption. We also want to point the reviewer to Appendix A in the supplementary materials, where we discuss transportability in more depth. Specifically, transportability violations occur (according to most definitions, as we mention in Appendix A) when the outcome mechanism $E[Y^a\mid X, S=s]$ varies across environments. One reason this can happen is the presence of unmeasured effect modifiers, which is a plausible scenario in many real-world settings. Very importantly, our Assumption 4.1 on ICM specifically allows $E[Y^a\mid X, S=s]$ to change across environments.
Thus, even when transportability is violated, ICM may still hold. What this means in practice is that our method is insensitive to whether transportability holds, whereas transportability-based tests will exhibit false positives (i.e., falsify unconfoundedness in the absence of unmeasured confounding) under violations of transportability.

# 4: Clarification on Figure 2

Thanks to your comment, we understand the confusion. The falsification rate is the probability that a method will falsify unconfoundedness, regardless of whether this is done correctly (there is an unmeasured confounder) or incorrectly (there is no unmeasured confounder). It should have been clearer that under the absence of unmeasured confounding, we want a falsification rate below the significance level $\alpha$, while under the presence of unmeasured confounding, a higher falsification rate is better. Based on this, we will update the caption of Figure 2b to clarify how to interpret the falsification rate on the y-axis.
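The two-stage procedure described in this rebuttal — stage 1 estimates per-environment mechanism parameters, stage 2 runs a single unconditional independence test on them — could look roughly as follows. This is a hedged sketch under a linear-Gaussian data model with invented function names, not the paper's code:

```python
import numpy as np

def fit_env_params(X, A, Y):
    """Stage 1 (one environment): OLS estimates of the treatment
    mechanism A ~ X and the outcome mechanism Y ~ (X, A)."""
    omega = np.linalg.lstsq(X, A, rcond=None)[0]
    gamma = np.linalg.lstsq(np.column_stack([X, A]), Y, rcond=None)[0]
    return omega, gamma

def falsification_pvalue(envs, n_perm=500, seed=0):
    """Stage 2: permutation test for dependence between the
    per-environment parameter estimates (omega_s, gamma_s)."""
    rng = np.random.default_rng(seed)
    params = [fit_env_params(X, A, Y) for X, A, Y in envs]
    O = np.array([p[0] for p in params])
    G = np.array([p[1] for p in params])

    def stat(o, g):  # Frobenius norm of the empirical cross-covariance
        oc, gc = o - o.mean(axis=0), g - g.mean(axis=0)
        return np.linalg.norm(oc.T @ gc / (len(o) - 1), "fro")

    observed = stat(O, G)
    null = [stat(O, G[rng.permutation(len(G))]) for _ in range(n_perm)]
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)
```

A small p-value falsifies unconfoundedness. Note that all samples within each environment feed into the stage-1 estimates, so no data are discarded the way a 2-samples-per-environment conditional test would require.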
Summary: This manuscript presents an algorithm for falsifying the assumption of no unmeasured confounding in a setting of observational data from multiple environments. To this end, the authors employ the Rubin potential outcome causal model and assume positivity, consistency, and no unmeasured confounding; subsequently, they model the functional relationships between (a) the confounders and the treatment and (b) the treatment, confounders, and the outcome. Finally, they introduce the key assumption that the parameters associated with functions (a) and (b) are independent of each other for all environments. The main theoretical result of the paper follows: given all the aforementioned assumptions, the learned parameters associated with (a) and (b) are independent. Then a Gaussian linear model is assumed, which makes it possible to equate dependence of the learned parameters with unobserved confounding. The paper concludes with a series of experiments (three on synthetic data and one on real data) comparing the proposed approach in terms of efficiency (falsification rates) to two benchmarks.

##### After rebuttal #####

I would like to thank the authors for their rebuttal. I acknowledge the algorithmic contribution of the paper. The authors' response has shed a little more light on the difference from the theoretical results in [KK23], but a closer examination and establishing a formal relationship between them (showing whether the difference only stems from switching paradigms from causal DAGs to potential outcomes) could improve the paper. All in all, I am willing to raise my rating by a notch. My current understanding is that there is no clear relationship between the theoretical results (the algorithmic ones differ, of course) in [KK23] and this submission, but I still have a feeling that they are related, and the relationship could be formalized by specifying, e.g., additional conditions for some sort of equivalence.
The lack of conditional independence testing is a clear difference, and maybe the biggest contribution of the manuscript. I would like to clarify that I am not in any way associated with the authors of (Wang, Blei, 2018) or of any other follow-up paper I mentioned, nor have I ever published in the post-2018 “deconfounder” line of research. I simply thought the high-level intuition “if the model (or causal mechanisms) do not factorize perfectly, this might be because of unobserved confounding” is something the current submission and (Wang, Blei, 2018) have in common, and was wondering whether the authors had any thoughts on this. I think that citing any of these works only makes sense if accompanied by a detailed discussion. I acknowledge the algorithmic contribution of the paper. The theoretical discussion has shed a little more light on the difference from [KK23], but a closer examination of the results could improve the paper. All in all, I am willing to raise my rating by a notch.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: I read the three proofs in Appendix B (Theorem 4.2, Lemma 4.3, Theorem 4.4). I did not check the math details in the proof of Lemma 4.3, but it seems straightforward (multiplying two normal densities and plugging into the assumed model). I also skipped the proof of Lemma B.2. The proofs appear to be OK.

Experimental Designs Or Analyses: The experimental design presented in the main body of the paper seems OK (Chapter 6; two synthetic experiments and a real-data one). I did not check the details in Appendix D.

Supplementary Material: I read the three proofs in Appendix B (Theorem 4.2, Lemma 4.3, Theorem 4.4). I did not check the math details in the proof of Lemma 4.3, but it seems straightforward (multiplying two normal densities and plugging into the assumed model). I also skipped the proof of Lemma B.2.
Relation To Broader Scientific Literature: The paper’s theoretical results are very similar to (Karlsson, Krijthe, 2023), and while it is cited, I feel that a more thorough discussion of the overlap (especially concerning theory) would improve the manuscript’s clarity. A review of de-confounding (e.g., Louizos et al. 2017), dealing with ignorability or propensity scores, is missing, but this is not fatal and can be explained by limited space.

Essential References Not Discussed: The paper’s theoretical results are very similar to (Karlsson, Krijthe, 2023), and while it is cited, I feel that a more thorough discussion of the overlap (especially concerning theory) would improve the manuscript’s clarity.

Other Strengths And Weaknesses:

Strengths:
- The paper tackles the important problem of testing for the existence of unmeasured confounding.
- The main contribution of the paper is a performance improvement over a similar method (Karlsson, Krijthe, 2023) on synthetic and real data.

Weaknesses:
- The main weakness of the paper is its limited novelty. The theoretical results mirror those of (Karlsson, Krijthe, 2023) (Theorem 1 and Theorem 2). The main difference is that the current paper adopts the potential outcome framework instead of a Pearlian graphical model (for Thm 4.2) and a Gaussian linear model (Lemma 4.3, Thm 4.4). The paper would be easier to follow if a discussion of (Karlsson, Krijthe, 2023) were held along with enunciating the current paper’s relation to it in terms of theory. It would also improve clarity if the paper’s focus on experimental improvement over (Karlsson, Krijthe, 2023) were stated.
- I feel that the role of environments is not discussed clearly enough. It seems to me that the proposed method beats (Karlsson, Krijthe, 2023) more clearly for a small number of environments (Figure 1). Do the environments facilitate reasoning about statistical efficiency?
It would be interesting to compare the paper with another approach to unmeasured confounding modelling, namely (Wang, Blei, The Blessings of Multiple Causes, 2018). That paper prompted a lot of response, leading to a number of limiting results for unmeasured confounding modelling (D’Amour 2018, Comment: Reflections on the Deconfounder; D'Amour 2019, On Multi-Cause Causal Inference with Unobserved Confounding: Counterexamples, Impossibility, and Alternatives; Ogburn et al. 2020, Counterexamples to "The Blessings of Multiple Causes" by Wang and Blei). The underlying intuition of (Wang, Blei, 2018) is similar to the one presented here: if the model for the confounded treatment-response does not factorize into independent parts, there has to be (multiple) unmeasured confounding. Do any of the model limitations apply here? Do the environments play a role?

Other Comments Or Suggestions: Typos:
- l. 112 refereed -> referred
- l. 162 accounted -> accounted for
- l. 324 refereed -> referred

Questions For Authors:
1. Given the equivalence of the Pearl model and the potential outcome model (Pearl 2009), does Theorem 4.2 follow from Theorem 1 in (Karlsson, Krijthe, 2023)? Under the assumption of a Gaussian linear model, do Lemma 4.3 and Thm 4.4 follow from Theorems 1 and 2 there?
2. What is the relation of the unmeasured confounding falsification to (Wang, Blei, 2018) and the ensuing discussion?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the valuable feedback and for spotting a number of typos, which have now been addressed. We have provided answers and proposed changes related to your questions below.

# 1: Connection between our theory and that of Karlsson and Krijthe (2023)

While the possibility of testing for unconfoundedness may be less surprising in light of [KK23], it is reassuring that we obtain similar identifiability guarantees under different assumptions and proof techniques. Our problem formulation not only leads to a novel test that directly targets the dependence between mechanisms but also introduces different conditions under which the results hold. In this context, we would kindly ask you to also read our reply to reviewer sWrJ, where we emphasize a key conceptual difference between our approach and that of [KK23]. Specifically, our framework *directly* tests the independence of causal mechanisms (ICM), necessitating a new theoretical treatment of the problem. We now go over two key differences in the assumptions between our theory and [KK23] to clarify why our findings do not trivially follow from [KK23], even under a linear-Gaussian model:

1. Most importantly, there is a difference in the formulation of the ICM assumption. [KK23] assumes that, in the absence of unmeasured confounding, variations in the conditional distributions $P(A\mid X,S=s)$, $P(Y\mid X,A,S=s)$ and $P(X\mid S=s)$ are independent of each other. In contrast, our formulation in Assumption 4.1 only requires changes in $P(A\mid X, S=s)$ and $P(Y^a\mid X, S=s)$ to be unrelated, without making any assumptions about $P(X\mid S=s)$. As [KK23] highlights in their Section 4.1 ("Influence of assumptions"), this additional condition on $P(X\mid S=s)$ is necessary for their method to work. Meanwhile, our approach achieves the same falsification guarantees without requiring this additional condition, because the result in Lemma 4.3 holds for any $P(X\mid S=s)$.

2.
Also, [KK23] uses a constraint-based causal discovery approach that relies on the faithfulness and causal Markov assumptions. In contrast, by leveraging functional assumptions instead, our approach does not require these assumptions.

Based on the above discussion, and in addition to the changes mentioned in our response to reviewer sWrJ to clarify our contributions relative to [KK23], we will add a paragraph in Section 4 of our manuscript highlighting the key differences in assumptions between [KK23] and our work.

# 2: Relationship to the de-confounding literature

We thank the reviewer for highlighting the work of Wang and Blei (2018) and the surrounding discussion of it. As suggested, we will add references to these papers in our manuscript. We are aware of this line of work, but we think the setting and motivation considered by these papers do not reflect our work. First, the argument used by Wang and Blei does not apply to our setting: we do not have multiple observed causes, but rather multiple environments. Second, they focus on identifiability of the interventional distribution under unmeasured confounding. In contrast, our goal is slightly less ambitious: we just want to determine whether there is an unmeasured confounder in the first place. That said, it would be interesting to explore whether there is a similarity between the factorization exploited by Wang and Blei and the ICM assumption, which also assumes a type of factorization. This could help us understand if there are implications for identifying interventional distributions in our setting. Here, the key question becomes what access to data from multiple environments adds and whether that allows for identifiability.

# Do the environments facilitate reasoning about statistical efficiency?

The number of environments plays a crucial role in the success of falsification.
As a thought experiment, consider a scenario where $(\omega_s,\gamma_s)$ are directly observed, allowing us to bypass estimating them in the first stage of our algorithm. In this case, the number of environments effectively determines the sample size available for testing independence between these two random variables. If $\omega_s$ and $\gamma_s$ are only weakly dependent, such as when unmeasured confounding is weak, detecting their dependence would require a large number of environments. Conversely, if the confounding is strong, fewer environments may suffice to reveal dependence in these model parameters. Once we add back the fact that we need to estimate $\omega_s$ and $\gamma_s$, this introduces additional sampling variance into the independence test. As a result, with fewer samples per environment, detecting a dependence becomes more challenging, and a larger number of environments will likely be required to achieve high statistical power. This is also what we observe in our experiments shown in Figure 1: decreasing the number of environments or the sample size leads to a lower falsification rate.

---

Rebuttal Comment 1.1:

Comment: I would like to thank the authors for their rebuttal. Thank you for pointing out that the lack of reliance on $P(X|S=s)$ is another difference with respect to [KK23]. My current understanding is that there is no clear relationship between the theoretical results (the algorithmic ones differ, of course) in [KK23] and this submission, but I still have a feeling that they are related, and the relationship could be formalized by specifying, e.g., additional conditions for some sort of equivalence. The lack of conditional independence testing is a clear difference, and maybe the biggest contribution of the manuscript. I would like to clarify that I am not in any way associated with the authors of (Wang, Blei, 2018) or of any other follow-up paper I mentioned, nor have I ever published in the post-2018 “deconfounder” line of research.
I simply thought the high level intuition “if the model (or causal mechanisms) do not factorize perfectly, this might be because of unobserved confounding” is something the current submission and (Wang, Blei, 2018) have in common and was wondering whether the authors had any thoughts on this. I think that citing any of the works only makes sense if accompanied by a detailed discussion. I acknowledge the algorithmic contribution of the paper. The theoretical discussion has shed a little more light on the difference to [KK23], but a closer examination of the results could improve the paper. All in all, I am willing to raise my rating by a notch.
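The statistical-power thought experiment in the rebuttal above (with directly observed per-environment parameters $(\omega_s,\gamma_s)$, the number of environments acts as the sample size of the independence test) can be sketched numerically. This is a toy simulation under assumed Gaussian mechanism parameters, not the submission's actual algorithm; the confounder model, the name `falsification_rate`, and the crude two-sigma correlation test are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def falsification_rate(n_env, confounded, trials=200, alpha=2.0):
    """Fraction of trials in which dependence between the per-environment
    parameters (omega_s, gamma_s) is detected via a crude correlation test."""
    hits = 0
    for _ in range(trials):
        u = rng.normal(size=n_env)  # toy unmeasured confounder
        omega = rng.normal(size=n_env) + (alpha * u if confounded else 0.0)
        gamma = rng.normal(size=n_env) + (alpha * u if confounded else 0.0)
        r = np.corrcoef(omega, gamma)[0, 1]
        # declare dependence when |r| exceeds a ~2-sigma null band
        hits += abs(r) > 2.0 / np.sqrt(n_env)
    return hits / trials

# strong confounding, many environments: dependence is detected almost always
rate_conf = falsification_rate(n_env=50, confounded=True)
# no confounding: detection stays near the nominal false-positive level
rate_null = falsification_rate(n_env=50, confounded=False)
```

With fewer environments (or weaker `alpha`), the detection rate in the confounded case drops, matching the Figure 1 behaviour described in the rebuttal.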
Overshoot: Taking advantage of future gradients in momentum-based stochastic optimization
Reject
Summary: Proposes a variant of Nesterov's accelerated gradient method and presents some experiments. Claims And Evidence: As this paper doesn't present any theory, the strength of the paper needs to be its experimental evaluation. I don't see these experiments as strongly convincing for a number of reasons: - The biggest issue I see is the use of no learning rate schedule. This makes the comparisons meaningless as the evaluation is now far outside the regime we care about when the methods are used in practice. In addition, some methods will naturally do better than others just due to lower gradient variance or lower "effective" step size when no schedule is used. - Since the experimental setup is non-standard, I can't determine from the reported loss values if the experiments were run correctly, or if the baselines are reasonable. Accuracy numbers of ~52-55 are very poor on c100. - Lack of hyper-parameter tuning - tuning is necessary for a fair comparison. If using standardized setups then existing known good parameters can be used, but that is not the case here with the schedule omitted. - Since weight decay was not used on most problems, many of the benchmarks show extreme overfitting. Weight decay changes the learning dynamics significantly, and so it's not possible to make general conclusions from results without decay. Methods And Evaluation Criteria: Test problems chosen are reasonable, and multiple seeds are used. Theoretical Claims: No theory is presented in this work. The method is extremely similar to Nesterov momentum, only differing in the decoupling of one hyper-parameter, and so showing how the methods relate from a theory point of view would be interesting. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: Fits into an existing line of work proposing empirical modifications to momentum and averaging. Recent comparable work would be Adan.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: This paper uses non-standard notation with clashing types. For instance, m_c is used for a momentum constant, while m_t is a time-varying sequence of vectors. The same letter should not be used for a constant and a vector. Algorithm 1 is too generic; there is no reason to include this general form of the method in the paper, as it is so general as to be essentially meaningless. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for your evaluation and insights. We will incorporate the suggestions therein into future incarnations of our work.
Summary: This paper proposes an overshooting technique for optimization, which evaluates the gradient at a point that extrapolates the standard optimizer update. Claims And Evidence: The presentation of the Algorithm is somewhat confusing: for the general Algorithm 1 it remains unclear what is allowed for the update rule $\phi$, and how $\phi'$ differs from $\phi$. However, if we ignore Algorithm 1 for a while, the derivation of Overshoot for SGD seems to happen in Section 2.1. Using (5) in (7), the equation in (8) seems incorrect: it should be the gradient evaluated at $\theta_t$ and not $\theta_t'$. I am not sure how exactly the sequence $m_{t+1}$ is defined for the Overshoot algorithm; from Figure 1 it seems that it accumulates gradients evaluated at $\theta_t'$, and thus the derivation of (8) might reflect the intention of the authors, but needs to be fixed. It would be highly beneficial to first define Overshoot for SGD properly with the two sequences $\theta_t$ and $\theta_t'$, and then show how it can be simplified to track only one sequence for practical implementations. Methods And Evaluation Criteria: Methods are evaluated on a diverse set of optimization tasks. However, the experimental insight is limited, as learning rates are not tuned for the baseline (see details below). Theoretical Claims: Please provide a proof for Equations (14)-(15). Also, is it $\theta_t$ or $\theta'_t$ in (15)? Beyond that, the paper does not contain longer proofs that need checking. Experimental Designs Or Analyses: * It appears that the learning rates of all baseline methods (and the overshoot methods) are not tuned, but instead set to a fixed value. Given that learning rate tuning can have a drastic impact on the apparent performance of a method (see for example Schmidt et al. 2021), this calls into question what can be inferred from the experimental evaluation at all.
I would recommend comparing the Overshoot methods to the baseline methods where for each method the LR is tuned via grid search. While I am aware this requires massive computational effort compared to the current setup, it is the only reliable way to account for a possibly different optimal learning rate for the method with/without overshooting. * Table 2 suggests that the performance could be even better for larger $\gamma$. When does it start to degrade? Maybe larger values of $\gamma$ perform better because they use an implicitly larger learning rate (which again goes back to the point above; we cannot know unless the learning rate has been tuned independently). * It is first stated that the weight decay implementation of AdamW is used, but then weight decay is set to zero? In this case, it is not necessary to mention which weight decay implementation is used. * Section 4.2 is missing information on which model/dataset etc. has been used to produce Figure 2. Do the insights of this section generalize to other model architectures or datasets? Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: For the motivation of the method, some more details could be provided on how the Overshoot method differs from previous attempts. In terms of theoretical comparison, this is not applicable, as no convergence results are provided. The experimental comparison takes into account several related methods from prior work. Essential References Not Discussed: NA Other Strengths And Weaknesses: The motivation for overshooting in Section 2 relies on several strong assumptions, which are not backed up by references to prior work or theoretical arguments. For each of the assumptions made here, it should be argued why one can hope that they might be (approximately) true in practice, or whether it has been reported that they hold for the relevant applications. For example, the assumption in (1) will not hold for non-smooth problems (e.g.
absolute value function), as the (sub)gradient can be identical for points very far apart. Related to this, the overshooting method also lacks theoretical insight: can the method be proven to converge under the standard assumptions (e.g. convex or Lipschitz-smooth problems)? Given the proximity to NAG and Polyak momentum, a comparative study of convergence results would be interesting. Other Comments Or Suggestions: Some parts of the notation are misleading/confusing/overly complicated, see details below: * Why denote update directions as $\hat{\theta}$ when $\theta$ already denotes the actual weights? This leads to unnecessary potential of confusion. * The coefficients $m_c$ and $g_c$ should obtain a different letter, as the current notation is in conflict with the sequences $m_t$ and $g_t$. * Given that $\gamma_t$ is first zero, then constant at $\gamma$ it seems unnecessary to introduce this notation. * Please refrain from using the term "weight decay scheme for gradients" (e.g. in line 092). Weight decay is an independent concept in optimization, and it is confusing to use the same term here. Minor remarks: * Hyperparameter table: typo in "Learning rate" * Equation 11: max and min should be in math-operator mode and not in text mode Questions For Authors: This mainly repeats the main concerns from above: * How is the performance when all methods are reasonably tuned (in particular their step size)? * Can you give any theoretical insight/ convergence theory for the overshoot mechanism? Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 1
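The non-smooth counterexample raised in the review above is easy to make concrete: for $f(x)=|x|$, the subgradient away from zero is $\mathrm{sign}(x)$, so points arbitrarily far apart share an identical (sub)gradient, and a gradient-similarity assumption like (1) carries no information about parameter proximity. A minimal illustration:

```python
import numpy as np

# f(x) = |x| has subgradient sign(x) away from 0
subgrad = np.sign

# identical subgradients at points six orders of magnitude apart,
# so gradient agreement implies nothing about closeness of the iterates
g_near = subgrad(0.1)
g_far = subgrad(1e5)
```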
Rebuttal 1: Rebuttal: Thank you for putting your time into reviewing our work and for your insights. We will incorporate the suggestions therein into future incarnations of our work. > and how $\phi^{\prime}$ differs from $\phi$. $\phi$ and $\phi^{\prime}$ represent the same optimization method, applied to the weight sequences $\theta$ and $\theta^{\prime}$, respectively. > Using (5) in (7), the equation in (8) seems incorrect: it should be the gradient evaluated at $\theta$ and not $\theta^{\prime}$ Equation 8 defines the update rule for the $\theta^{\prime}$ sequence without ever using the $\theta$ sequence. The underlying idea of this approach is described in Equation 4. In Overshoot, gradients are always evaluated at $\theta^{\prime}$. We acknowledge that adding a figure illustrating the geometric intuition behind the approach would be beneficial. > I am not sure how exactly the sequence $m_{t+1}$ is defined for the Overshoot algorithm In SGDO, the $m_{t+1}$ sequence is defined the same way as in CM. > Please provide a proof for Equations (14)-(15). Also, is it $\theta^{\prime}$ or $\theta$ in (15)? In our opinion, Equations (14) and (15) are self-evident from Figure 1 and the definition of CM (Equations 5–6). We could have used either notation, $\theta^{\prime}$ or $\theta$, as both describe the same principle. We chose $\theta$ to highlight the distinction between SGDO and NAG. > Table 2 suggests that the performance could be even better for larger $\gamma$. When does it start to degrade? This is partially addressed in Figure 4, where we tested multiple overshoot factors, not just three. > Maybe larger values of $\gamma$ perform better because they use an implicitly larger learning rate The learning rate in Overshoot does not scale with the overshoot factor. While we understand why it might appear to do so, this is not the case. This can be seen in Figure 1 and Algorithm 2.
>It is first stated that the weight decay implementation of AdamW is used, but then weight decay is set to zero? In this case, it is not necessary to mention which weight decay implementation is used. Weight decay is not zero across all tasks; it is applied in both Res-C100 and GPT-GLUE (see Table 1). > Section 4.2 is missing information on which model/dataset etc. has been used to produce Figure 2. Do the insights of this section generalize for other model architectures or datasets? Figure 2 demonstrates the equivalence of various methods with SGDO across different hyperparameter settings. Therefore, no specific dataset or model was used. The suggested optimal setting is estimated by minimizing Equation 16 using randomly generated gradients, as noted in the caption. > How is the performance when all methods are reasonably tuned (in particular their step size)? We are planning to evaluate the Overshoot method on well-tuned benchmarks in the near future. So far, we have been able to improve performance on airbench95.py (the version that does not use Muon) from https://github.com/KellerJordan/cifar10-airbench/ by approximately 10% using the Overshoot method. > Can you give any theoretical insight/ convergence theory for the overshoot mechanism? So far, we can’t offer any theoretical insights regarding convergence. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for the additional clarifications. In summary, providing (i) insightful benchmarks with tuned baseline methods and (ii) further motivation through theoretical results would improve this paper.
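Given the clarification in this rebuttal that Overshoot always evaluates gradients at the shifted point $\theta^{\prime}$, the single-sequence momentum-SGD update can be sketched as below. This is our reading of Equations (5)–(8), with the look-ahead factor $\gamma$ decoupled from the momentum coefficient $\mu$; setting $\gamma=\mu$ should then recover NAG exactly. The quadratic objective is an illustrative stand-in, not one of the paper's benchmarks.

```python
import numpy as np

def overshoot_sgd(grad, theta0, eta=0.1, mu=0.9, gamma=0.9, steps=100):
    """Momentum SGD with the gradient evaluated at the overshot point
    theta - eta * gamma * m (our reading of the Overshoot update)."""
    theta = np.array(theta0, dtype=float)
    m = np.zeros_like(theta)
    for _ in range(steps):
        g = grad(theta - eta * gamma * m)  # look-ahead scaled by gamma, not mu
        m = mu * m + g
        theta = theta - eta * m
    return theta

def nag_sgd(grad, theta0, eta=0.1, mu=0.9, steps=100):
    """Standard NAG: the same update with the look-ahead factor tied to mu."""
    return overshoot_sgd(grad, theta0, eta=eta, mu=mu, gamma=mu, steps=steps)

# toy quadratic: f(x) = 0.5 ||x||^2, so grad f(x) = x
grad = lambda x: x
x0 = [1.0, -2.0]

x_nag = nag_sgd(grad, x0)                    # look-ahead tied to momentum
x_over = overshoot_sgd(grad, x0, gamma=3.0)  # decoupled, larger look-ahead
```

Both runs converge on this toy problem; the only structural difference between the two functions is which coefficient scales the look-ahead.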
Summary: The submission presents Overshoot, a momentum optimizer that can be used with adaptive algorithms like Adam and SGD. Unlike Nesterov's Accelerated Gradient (NAG) or classical momentum (CM), Overshoot updates model weights in advance in anticipation of the upcoming momentum update, even before gradients are calculated. This forward-looking approach leverages "future gradients" to provide a better estimate of future steps in optimization. The submission further presents efficient, lightweight SGD (SGDO) and Adam (AdamO) variants with effectively zero computational overhead and no additional memory. The authors benchmark across a set of tasks such as image classification, variational autoencoders, and GPT fine-tuning and demonstrate that Overshoot converges faster (by 15-26% fewer steps) and generalizes better than NAG, CM, and Adam. Claims And Evidence: The main claimed benefit of the proposed method is its fast convergence, and the claim of a 15% reduction in steps is based on the "Steps-to-95% Loss Reduction" metric. However, this metric is not clearly defined: is that 95% relative to the baseline's final loss, or something else? The authors need to make it very clear to highlight their contribution. Methods And Evaluation Criteria: The tasks cover a range of architectures (e.g., MLP, ResNet, GPT-2) and datasets (e.g., CIFAR, GLUE), but lack diversity in other important settings such as object detection and segmentation. Moreover, the problem setups appear overly simplified—for instance, training on ImageNet has become a standard benchmark for evaluating optimizer performance and should be included. Incorporating transformer-based architectures, such as ViT or Tiny ViT, would also be beneficial. Additionally, using fixed hyperparameters for baseline methods may underestimate their true potential. Theoretical Claims: The paper lacks rigorous convergence or gradient relevance assumption proofs.
CM/NAG/SGD unification in Section 4.1 results from parameter substitution but is not formally established. However, I think these drawbacks have been mentioned in the Limitation section. Experimental Designs Or Analyses: Leaving out GPT-GLUE from parts of the analysis makes the paper’s claim of “consistent outperformance across a wide range of tasks” feel a bit shaky—especially since one of the toughest tasks (fine-tuning GPT-2) is only partially covered. It raises the question: does Overshoot really hold up when it comes to large-scale language models? GPT-GLUE does show up in Table 2 (training steps) and Table 3 (test performance), but it’s missing from key parts of the analysis (like Awd), which makes the evaluation feel kind of piecemeal. Plus, the issue of severe over-training in GPT-GLUE (Section 5.3.1) doesn’t really get addressed in the context of Overshoot’s supposed robustness. Supplementary Material: There is no supplementary material for this submission. Relation To Broader Scientific Literature: Overshoot builds on concepts such as Nesterov’s momentum and Lookahead, but distinguishes itself through parameter decoupling and single-step updates. It also connects to SUM, which seeks to unify various momentum methods. Essential References Not Discussed: The paper does not engage with more recent optimizers like AggMo and Sophia, nor with approaches that leverage adaptive look-ahead factors. Additionally, other families of optimizers—such as those designed to reduce gradient variance for faster convergence, like Katyusha, or meta-optimizers—are not discussed in the related works. Other Strengths And Weaknesses: 1. There are no theoretical convergence guarantees from the proposed method. 2. The tasks applied to evaluate the methods have limited difficulty and diversity. 3. Some recent and advanced optimisers are not discussed in the submission and more details can be found in the previous sections. 4. 
I suspect that the comparisons to the baseline models are not fair. Other Comments Or Suggestions: The problem setups appear overly simplified—for instance, training on ImageNet has become a standard benchmark for evaluating optimizer performance and should be included. Incorporating transformer-based architectures, such as ViT or Tiny ViT, would also be beneficial. Questions For Authors: Some questions about the hyperparameter tuning: I think the baseline models probably are not well-tuned. Do the authors believe one set of hyperparameters for all the experiments is fair? Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you for putting your time into reviewing our work and for your insights. We will incorporate the suggestions therein into future incarnations of our work. > while the claim of a 15% reduction in steps is based on the "Steps-to-95% Loss Reduction" metric. However, it’s not clearly defined, like, is that 95% relative to the baseline’s final loss or something else? Yes, it represents the percentage of steps saved to achieve a 95% loss reduction (as achieved by the baseline), compared to the baseline. We believe this metric is sufficiently explained in Section 5.3.1. > It raises the question: does Overshoot really hold up when it comes to large-scale language models? As described in Table 1, the first four tasks (MLP-CA, VAE-FM, VAE-M, 2c2d-FM) indeed share the same set of hyperparameters. The remaining three tasks (3c3d-C10, Res-C100, GPT-GLUE) use different hyperparameters to improve the performance of the baselines. We acknowledge that using underperforming baselines was not an ideal choice. However, we did so for the following reasons: - Limited resources for proper hyper-parameter fine-tuning. - To leave no room for potential bias in favor of the Overshoot method. - For the most part, we follow the one-shot setting established in previous work on benchmarking deep learning optimizers: https://arxiv.org/pdf/2007.01547 Question for the reviewer: Is the problem of underperforming baselines in the experiment section the main reason for rejection?
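The rebuttal's definition of the metric (the percentage of steps saved to reach 95% of the total loss reduction achieved by the baseline) can be written down directly. This is a hedged reading of that description, using synthetic exponential loss curves with arbitrary decay rates, not the paper's actual training curves.

```python
import numpy as np

def steps_to_target(losses, target):
    """First step index at which the loss curve drops to the target."""
    return int(np.argmax(np.asarray(losses) <= target))

t = np.arange(1000)
baseline = np.exp(-0.010 * t)  # synthetic baseline loss curve
method = np.exp(-0.013 * t)    # synthetic faster-converging curve

# target: loss level after 95% of the baseline's total loss reduction
target = baseline[0] - 0.95 * (baseline[0] - baseline[-1])

steps_base = steps_to_target(baseline, target)
steps_method = steps_to_target(method, target)
saved = 1.0 - steps_method / steps_base  # fraction of steps saved
```

On these synthetic curves, the faster method reaches the baseline's 95%-reduction level in 231 steps instead of 300, i.e. a 23% saving, which is how we read the "Steps-to-95% Loss Reduction" numbers in the paper.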
Summary: This paper draws inspiration from Nesterov’s Accelerated Gradient (NAG) and introduces a novel method called Overshoot. The Overshoot method calculates the gradient at model weights shifted in the direction of the current momentum, thereby leveraging information from the surrounding landscape more effectively. Unlike NAG, Overshoot employs a specialized reformulation that aims to reduce memory overhead. Claims And Evidence: Seems no problems. Methods And Evaluation Criteria: - The core idea behind Overshoot closely resembles the well-known NAG algorithm. The discussion in the introduction does not sufficiently distinguish Overshoot from NAG. - Moreover, through Equations (5)–(7) (and alternatively via Equations (12)–(15)), Overshoot for SGDM is $$ m_{t+1} = \mu m_t + \nabla f(\theta_t - \eta {\color{red}\gamma} m_t), $$ $$ \theta_{t+1} = \theta_t - \eta m_{t+1}. $$ While the original NAG is $$ m_{t+1} = \mu m_t + \nabla f(\theta_t - \eta \mu m_t), $$ $$ \theta_{t+1} = \theta_t - \eta m_{t+1}. $$ It becomes evident that the proposed algorithm is essentially equivalent to NAG, with the only notable difference being the replacement of a coefficient. This limited distinction raises concerns about the novelty of the method. Furthermore, the practical advantages resulting from this modification remain unclear without more substantial experimental or theoretical support. Theoretical Claims: - While the algorithm exhibits some heuristic appeal, it lacks rigorous theoretical guarantees. Additionally, the impact of the approximations introduced in AdamO requires further investigation, both theoretically and empirically, to assess their influence on convergence and performance. Experimental Designs Or Analyses: - The paper does not provide an adequate empirical or theoretical comparison with closely related methods. Such comparisons are crucial to highlight the advantages (or limitations) of the proposed approach relative to established techniques. 
- The experimental evaluation is constrained by the use of overly simple and outdated datasets, experimental settings, and network architectures. Consequently, the conclusions drawn from these toy scenarios lack strong evidence of generalizability to more complex, real-world tasks. Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: None Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for putting your time into reviewing our work and for your insights. We will incorporate the suggestions therein into future incarnations of our work. > Unlike NAG, Overshoot employs a specialized reformulation that aims to reduce memory overhead. Overshoot does not aim to reduce memory overhead compared to standard Nesterov implementations; rather, it modifies the 'look-ahead' factor to improve convergence. > The discussion in the introduction does not sufficiently distinguish Overshoot from NAG. We distinguish between Overshoot and NAG in the introduction on line 47: ‘This makes Overshoot similar to NAG, however unlike NAG, Overshoot decouples the momentum coefficient and the ”look-ahead” factor.’ > The paper does not provide an adequate empirical or theoretical comparison with closely related methods. Such comparisons are crucial to highlight the advantages (or limitations) of the proposed approach relative to established techniques. Which ‘closely related methods’ do you mean? The most relevant one is Nesterov momentum, which is used for comparison with both SGD and ADAM. > The experimental evaluation is constrained by the use of overly simple and outdated datasets, experimental settings, and network architectures. Consequently, the conclusions drawn from these toy scenarios lack strong evidence of generalizability to more complex, real-world tasks. The set of tasks was chosen based on previous work dedicated to benchmarking of deep learning optimizers: https://arxiv.org/pdf/2007.01547 What tasks would you like to be used in the evaluation?
Debiased Orthogonal Boundary-driven Efficient Noise Mitigation
Reject
Summary: This paper exploits the properties of high-dimensional orthogonality to identify a robust and effective boundary in cone space for separating clean and noisy samples. They propose One-Step Anti-noise (OSA), a model-agnostic noisy label mitigation paradigm that employs an estimator model and a scoring function to assess the noise level of input pairs through just one-step inference. This method demonstrates enhanced training robustness, improved task transferability, streamlined deployment, and reduced computational overhead across diverse benchmarks and models. Claims And Evidence: I think the claims made in the submission are supported by clear and convincing evidence. The paper claims that the intersection boundary is highly likely to be a shifted orthogonal boundary in cone space. Fig. 1 shows that the empirically optimal decision boundary deviates significantly from the theoretical orthogonal threshold of zero, and that the intersection points of clean and noisy distributions remain consistent for the same model across different datasets, suggesting the existence of a stable, dataset-irrelevant boundary. Methods And Evaluation Criteria: I think the proposed methods and evaluation criteria make sense for the problem. The method uses a pre-trained CLIP as an estimator model. During training, each pair is fed to the estimator model, which outputs a similarity; comparing this similarity with the space shift then yields a score used as the loss weight. If the pair is noisy, the weight is close to zero; otherwise, the further the similarity is from the space shift, the larger the loss weight. I think the method is simple and cost-effective. Theoretical Claims: I didn't check the correctness of the proofs. Experimental Designs Or Analyses: The paper evaluates the method on three downstream tasks with noisy labels.
When it comes to the analysis of results on MSCOCO, it says "table2 show that OSA outperforms all previous approaches on all metrics with a huge gap", but the data shows NPC outperforms OSA on many occasions. Supplementary Material: I have reviewed the supplementary material; it's well-written. The proof is comprehensive and the additional experimental results are adequate. Relation To Broader Scientific Literature: The paper is related to two subjects: noisy label learning and the application of multimodal foundation models. I think it's worth discussing the problem of how to accurately identify noise based solely on cosine similarity scores, since I once noticed a paper mentioning that they consider a pair positive when the similarity is around 0.3. At the time, I was confused about why the threshold was around 0.3. Besides, as CLIP becomes widely used in many different domains, its application to mitigating noisy labels is appealing. Essential References Not Discussed: Related works that are essential to understanding the (context for) key contributions of the paper are discussed in the supplementary material. Other Strengths And Weaknesses: Strength: The motivation is good, and it seems to be useful and applicable to many real-world scenarios. Besides, it dives into the cone effect and provides verification for the space shift. Weakness: I think the method is quite simple. Other Comments Or Suggestions: I think it may be better to have a comprehensive introduction of related work in the main text; I could not gain enough knowledge about the compared methods while reading the main text. Besides, on lines 186-187, "brings xi and yi closer when c_i = 1" seems to be a typo; it should be 'c_i = 0'. Questions For Authors: There are five questions I would like to ask. First, how many pairs are used to calculate the space shift? Second, as far as I know, the pre-trained CLIP has some limitations in that it can make mistakes.
For example, there are some cases in which positive pairs get low similarity scores and negative pairs get high similarity scores; will this have a big impact on the identification of noisy and clean data? Third, is the CLIP you used as the estimator model off the shelf? For example, importing the OpenAI CLIP and calculating the similarity. Fourth, will it cost a lot of time to calculate the similarity? When I tried it, it was a little slow. Fifth, when it comes to image classification, how do you use CLIP as an estimator model? I mean, how do you identify the noisy pair? By filling the label into a fixed sentence and then calculating the similarity between the image and the sentence? ## update after rebuttal The author's response has partially addressed my concerns, and after reviewing the other reviewers' comments, I support the acceptance of this paper. Therefore, I maintain my original score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer DHCR for the positive, patient and professional review, as well as the valuable suggestions for improvement. Our responses to the reviewer’s questions are below: ***W1 : The claim that "table2 show that OSA outperforms all previous approaches on all metrics with a huge gap" is not fully adequate, since NPC outperforms OSA on many occasions.*** **A1:** Thank you for your careful review. In Table 2, OSA outperforms NPC on all R@1 metrics, which is the most important metric in image-text matching. For R@5 and R@10, OSA also outperforms NPC in most cases with a significant margin, especially in noisy scenarios. To clarify this accurately, we will revise the claim from 'on all metrics' to 'on most metrics' in the revised version. --- ***W2 : The method is quite simple.*** **A2:** The objective of our work is to develop a general and easily adaptable anti-noise method. Therefore, we want the framework of our method to be as concise as possible to ensure its practicality in complex real-world scenarios. To achieve this, we focus more on exploring anti-noise principles than on sophisticated techniques, to mitigate the loss of generality arising from complex methods. --- ***W3 : It may be better to have a comprehensive introduction of related work in the main text to help readers understand the existing methods used for comparison.*** **A3:** We are sorry for the confusion. Following your valuable reminder, we believe it would be beneficial to include more related work in the main text to enhance readers' understanding. In our next revision, we will transfer some content from the related work section to the main text. --- ***W4 : How many pairs are used to calculate the space shift?*** **A4:** During our training process, we randomly sample 256 images and 256 texts separately, forming $256 \times 256 = 65536$ pairs.
We also evaluated different sample sizes, ranging from $64 \times 64$ to $1024 \times 1024$, and found that the boundaries remain stable. Therefore, we ultimately set the sampling size to match the batch size (256) in most of our experiments.

|Scale|64x64|128x128|256x256|512x512|1024x1024|
|-|-|-|-|-|-|
|Mean|0.215|0.216|0.215|0.214|0.214|

--- ***W5 : There are some cases in which positive pairs get low similarity scores and negative pairs get high similarity scores; will it have a big impact on the identification of noise and clean data?*** **A5:** Thank you for your constructive question. To explore this, we further conducted experiments on a very large real-world dataset, CC3M, which contains 3 million image-text pairs. These samples are collected from webpages and filtered by Google AI based on instance matching. This suggests that noise in this dataset is rare and semantically relevant, which can to some extent represent negative pairs with high similarity scores. Additionally, we found that zero-shot CLIP performs poorly on this dataset, achieving only about 29 R@1. This indicates that the dataset is somewhat out-of-domain for CLIP, and that there are likely some clean samples that receive low similarity scores. We report the performance of zero-shot CLIP, the Baseline (CLIP fine-tuned on CC3M), and OSA applied to the Baseline in the table below:

|Model|i2t R@1|i2t R@5|i2t R@10|t2i R@1|t2i R@5|t2i R@10|
|-|-|-|-|-|-|-|
|zero-shot CLIP|29.25|50.47|59.47|28.80|51.04|60.38|
|Baseline|42.41|66.70|75.56|42.45|67.83|76.46|
|OSA|**43.34**|**67.48**|**75.79**|**43.46**|**68.33**|**76.58**|

We observe that OSA still provides a noticeable performance improvement in this challenging scenario. This phenomenon further demonstrates the effectiveness and robustness of OSA. Therefore, cases where some positive pairs have low similarity scores and negative pairs have high similarity scores do not significantly impact OSA's performance. --- ***W6 : Is the clip you used as an estimator model off the shelf?
For example, import the openai clip and calculate the similarity.*** **A6:** Yes, we use the off-the-shelf CLIP model released by OpenAI. --- ***W7 : Will it cost a lot of time to calculate the similarity? Once I tried, it was a little bit slow.*** **A7:** We evaluated the time cost on an NVIDIA RTX 3090: processing the MS-COCO dataset (566,435 pairs) with a batch size of 4096 takes about 153 seconds, using ~24 GB of GPU memory. At this rate, processing 1 billion samples would take approximately 75 hours on a single RTX 3090. In addition, this process can be further accelerated through parallel inference. We think this is an acceptable overhead for real-world industrial training. --- ***W8 : How to identify the noisy pair using CLIP in image classification?*** **A8:** We follow the same image classification pipeline as shown in Figure 1(b), which is also the format used in the CLIP paper [1]. The specific format is: "This is an image of [CLS]." **Refs:** [1] Alec Radford et al., "Learning Transferable Visual Models From Natural Language Supervision", ICML, 2021. ---
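The pipeline discussed in this rebuttal (estimate the space shift from randomly formed pairs, then down-weight pairs whose estimator similarity falls at or below the shifted boundary) can be sketched as follows. Random unit vectors stand in for CLIP features here, so the estimated shift comes out near zero rather than the ~0.215 observed for real CLIP embeddings, and `score` is an illustrative monotone weighting of our own, not the paper's exact scoring function.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# stand-ins for CLIP image/text embeddings (256 x 256 random pairs, as in A4)
d = 512
img = normalize(rng.normal(size=(256, d)))
txt = normalize(rng.normal(size=(256, d)))

# space shift: mean cosine similarity over randomly formed (mismatched) pairs
shift = float((img @ txt.T).mean())

def score(sim, shift, tau=0.1):
    """Illustrative loss weight: zero at or below the shifted boundary,
    increasing with the distance above it, capped at 1."""
    return float(np.clip((sim - shift) / tau, 0.0, 1.0))
```

A pair whose estimator similarity sits below the boundary then contributes (almost) nothing to the training loss, matching the weighting behaviour described in the review.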
Summary: This paper proposes One-Step Anti-noise (OSA), a model-agnostic noise mitigation paradigm leveraging high-dimensional orthogonality and cone effects in pre-trained models (e.g., CLIP) to distinguish noisy and clean samples. Key contributions include: 1) It identifies a shifted orthogonal boundary in cone space as a stable decision threshold, supported by proofs showing that contrastive learning separates clean/noisy samples on opposite sides of the boundary. 2) A one-step inference scoring function that reweights the loss based on debiased cosine similarity, reducing computational overhead. 3) This paper also demonstrates SOTA performance on cross-modal matching (MSCOCO, Flickr30K), classification (WebFG-496), and retrieval (CARS98N) under high noise ratios. Claims And Evidence: 1) Boundary Stability: Empirical results (Fig. 1 c ~ f) show consistent intersection points across datasets for the same model, aligning with theoretical analysis of cone effects. 2) Efficiency: OSA reduces training overhead by 90% compared to dual-backward methods like NPC (Table 12). 3) Model Agnosticism: Validated across ResNet, VGG, and ViT architectures (Table 4). Methods And Evaluation Criteria: Methods: Using CLIP/ALIGN as zero-shot noise detectors is justified due to their semantic alignment capabilities. Non-linear weighting based on shifted boundaries also effectively suppresses noisy samples. Evaluation Criteria: Standard metrics (R@K, accuracy) align with task goals. Noise ratios (20%-60%) and real-world datasets (CC120K) ensure practical relevance. Theoretical Claims: Probability calculations (Appendix D.1) and Gaussian feature distribution proofs (Appendix D.3) are rigorous. Experimental Designs Or Analyses: This paper provides comprehensive benchmarks across tasks and noise ratios. Supplementary Material: Appendix B: Implementation details (e.g., batch size, optimizer) are sufficient for reproducibility.
Appendix D: Theoretical proofs are logically sound but rely on idealized assumptions (e.g., Gaussian weights). Appendix F: Additional experiments (e.g., real-world CC120K) strengthen claims. Relation To Broader Scientific Literature: This work builds on CLIP’s cross-modal alignment but introduces noise-aware boundary shifts. It improves on NPC by replacing dual-backward passes with one-step inference. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. This paper provides a strong theoretical and empirical foundation for why a shifted orthogonal boundary emerges in high-dimensional embedding spaces. This is a refreshing perspective, as many noise-robust methods focus on heuristics in loss space; by contrast, this work demonstrates a rigorous approach toward understanding cosine-space separation. 2. OSA is proposed as an inference-only noise-mitigation strategy, independent of specific network architectures. Experiments show that it integrates with various architectures and tasks with minimal changes to the training pipeline. This broad adaptability is a significant practical advantage. Weaknesses: 1. While the experiments simulate large-scale scenarios (e.g., MSCOCO, CC120K), the evaluated datasets are much smaller than the dataset used to train CLIP. Denoising labels would be more meaningful when validated on large-scale datasets. 2. Although the authors show zero-shot CLIP/ALIGN are good estimators, OSA’s effectiveness depends heavily on that estimator’s domain alignment. If the estimator is weak or too out-of-domain, the boundary for noise vs. clean might become less reliable. Additional discussion about failure cases in severely domain-mismatched scenarios would strengthen the narrative. 3. The repository link the authors provided in the abstract has expired. Other Comments Or Suggestions: None Questions For Authors: I find some of the discoveries in this paper quite interesting and valuable. I have a few questions that concern me.
1) If the target domain is drastically different, can zero-shot CLIP/ALIGN still provide a reliable boundary? The authors do not have to provide extra experiments; sharing some experience would suffice. 2) How to handle moderate overlap between clean/noisy distributions near β? In the experiments, there seems to be little distribution overlap around the boundary. If, for a particular dataset, there were a moderate overlap of clean/noisy samples’ cosine similarity near β, would you recommend a different shaping function, or a more cautious threshold? 3) I hope the authors can fix the open-source repository link they provided. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer Ho5e for the positive, patient, and professional review, as well as the valuable suggestions for improvement. Our responses to the reviewer’s questions are below: ***W1 : It is more meaningful when validated on large-scale datasets.*** **A1:** Thank you for your insightful review. To further explore the effectiveness of our method in real-world scenarios, especially on large-scale and out-of-domain training, we conduct experiments on a very large real-world dataset, CC3M, which contains 3 million image-text pairs collected from webpages and filtered by Google AI. We find that zero-shot CLIP does not perform well on this dataset, achieving only 29.25 R@1 on i2t and 28.80 R@1 on t2i. Therefore, we believe this dataset can somewhat represent practical large-scale and out-of-domain scenarios. We report the performance of zero-shot CLIP, the Baseline (CLIP fine-tuned on CC3M), and OSA applied to the Baseline in the table below:

|||i2t|||t2i||
|-|-|-|-|-|-|-|
|Model|R@1|R@5|R@10|R@1|R@5|R@10|
|zero-shot CLIP|29.25|50.47|59.47|28.80|51.04|60.38|
|Baseline|42.41|66.70|75.56|42.45|67.83|76.46|
|OSA|**43.34**|**67.48**|**75.79**|**43.46**|**68.33**|**76.58**|

Although the samples in CC3M are filtered and have a lower noise ratio compared to natural ones, we observe that OSA still brings a noticeable performance improvement. This phenomenon further demonstrates the effectiveness and robustness of OSA in real-world scenarios. Given the breadth and complexity of real-world domains, fully exploring all possible scenarios is extremely challenging. However, we believe this experiment provides some evidence and insight into OSA's effectiveness and robustness in real-world scenarios.
--- ***W2 : OSA’s effectiveness depends heavily on that estimator’s domain alignment, and may become less reliable in severely domain-mismatched scenarios.*** **A2:** As mentioned in A1, on the CC3M dataset, zero-shot CLIP achieves relatively poor performance, which may somewhat represent domain-mismatched scenarios. In these scenarios, OSA still achieves improvements, indicating its reliability across various domains. Furthermore, we also provide an optional domain adaptation (DA) solution in Section 3.2.1 to address edge-domain challenges in real-world scenarios, and it achieves significant improvements in noise detection accuracy in Table 5. --- ***W3 & Q3: The repository the authors provided in the abstract has expired. Hope the authors can fix the open-source repository they provided.*** **A3:** We are sorry for this mistake. We have re-opened the repository, and it is now accessible! --- ***W4 : If the target domain is drastically different, can zero-shot CLIP/ALIGN still provide a reliable boundary?*** **A4:** Actually, we think that the boundary is an inherent property of the shared space. Therefore, we calculate the boundary using simulated images and texts to eliminate the influence of specific data domains. This suggests that the boundary itself is stable and independent of the target domain. In cases where the target domain is drastically different—where CLIP is almost unable to understand the target domain—samples are likely to be distributed around the orthogonal boundary, which may lead to substantial overlap. In such cases, domain adaptation may be necessary to enhance the estimator's recognition ability for the target domain. However, based on our experiments on the Stable Diffusion domain and CC3M, we find that it is uncommon for CLIP to entirely fail to understand the target domain in real large-scale training scenarios.
As long as CLIP can recognize the target domain to some extent, the orthogonal boundary can effectively separate clean and noisy samples. --- ***W5 : How to handle moderate overlap between clean/noisy distributions near β? Would a different shaping function or a more cautious threshold be better in such cases?*** **A5:** This is an important question and highlights a common challenge in the field. In practice, it is hard to perfectly separate overlapping clean and noisy samples based on a threshold boundary. But in our work, we identify an inherent decision boundary in the model space with theoretical significance. Compared to the common strategy of using a strict threshold, this approach allows us to design more sophisticated and accurate methods for handling overlap. For instance, our high-degree scoring function, where the gradient trends follow the probability trends of random vectors near the orthogonal boundary, achieves nearly optimal weight ranking in Table 11, suggesting the effectiveness of the function design. We therefore believe that designing a more suitable function based on the theoretical properties of the boundary is a better solution. Additionally, the overlap mainly arises from unfamiliarity with the target domain. As mentioned in A2, we also propose a domain adaptation technique to help the estimator better adapt to real-world scenarios. ---
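The near-orthogonality that underpins this boundary argument is easy to check numerically: independent random directions in a high-dimensional space have cosine similarity tightly concentrated around zero. A self-contained sketch (ours, not the paper's code; Gaussian vectors stand in for embeddings):

```python
import math
import random

def cos_sim(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

random.seed(0)
d = 512  # a CLIP-like embedding width
sims = [
    cos_sim([random.gauss(0, 1) for _ in range(d)],
            [random.gauss(0, 1) for _ in range(d)])
    for _ in range(200)
]
mean = sum(sims) / len(sims)
# Independent random pairs concentrate tightly around 0 (standard
# deviation roughly 1/sqrt(d) ~ 0.044 here). A pre-trained encoder's
# cone effect shifts this center away from 0, which is the shifted
# orthogonal boundary discussed above.
```

This concentration is what makes a single similarity threshold informative in high dimensions: unrelated (noisy) pairs cluster near the boundary, while semantically matched pairs sit well above it.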
Summary: This paper proposes One-Step Anti-noise (OSA), a model-agnostic noise mitigation method that addresses label noise in large-scale pre-training tasks. It leverages pre-trained models’ high-dimensional orthogonality and the cone effect, which shifts the orthogonal boundary in the embedding space so that it intersects the clean and noisy samples. OSA computes cosine similarity and designs a scoring function to adjust sample weights, effectively mitigating noisy samples’ impact. Experimental results show OSA performs well across datasets and tasks, especially in high-noise conditions, improving model performance while reducing computational overhead. Claims And Evidence: 1. Shifted Orthogonal Boundary: Pre-trained models like CLIP have an intersected boundary between clean and noisy samples due to the cone effect. The paper shows that this boundary deviates from the theoretical orthogonal boundary, verified through experiments on datasets like MSCOCO and SDM. 2. Model-Agnostic Noise Mitigation: OSA is a model-agnostic noise mitigation method that works across various models and tasks, including image-text matching and image retrieval. It can be applied to pre-trained models like CLIP without model-specific modifications and performs well across different architectures. 3. One-Step Inference for Noise Detection and Reduced Computational Overhead: OSA uses a one-step inference process to detect noisy samples, reducing computational overhead compared to existing methods. It can effectively assess noise levels with just a single inference pass, achieving comparable or better performance with significantly less computational cost. Methods And Evaluation Criteria: Methods: OSA, a model-agnostic paradigm, reduces noisy sample impact on model training. 1. Pre-trained models like CLIP map sample pairs to an embedding space and evaluate noise levels by calculating cosine similarity. 2.
OSA constructs random sample pairs, processes them through the estimator, and calculates the average cosine similarity to obtain the spatial shift for scoring. 3. OSA designs a scoring function based on the orthogonal boundary. Samples with lower cosine similarity are assigned lower weights, while those with higher similarity are assigned higher weights. 4. OSA weights sample losses with the scoring function during target model training. Noisy samples receive lower weights, while clean samples receive higher weights, guiding accurate parameter updates. 5. OSA’s adaptability is enhanced by adding a weight coefficient to the training loss function, making it suitable for models with different architectures. Evaluation metrics such as recall, accuracy, precision, and mAP were used to evaluate OSA’s performance in various tasks. Theoretical Claims: The paper claims that high-dimensional orthogonality in pre-trained models (like CLIP) can identify noise samples. The orthogonal boundary shifts due to the cone effect, distinguishing between clean and noisy samples. The proof uses vector space properties and neural network embedding characteristics. It analyzes vectors in the embedding layer’s high-dimensional space, showing how the shifted boundary classifies samples as noisy or clean using cosine similarity calculations. Experimental Designs Or Analyses: 1. Dataset Selection: Multiple datasets, including MSCOCO, Flickr30K, and CC120K, were selected for image-text matching, retrieval, and classification. These datasets cover diverse image and text content and allow for a comprehensive evaluation of the proposed One-Step Anti-noise (OSA) method. However, they may not fully represent all possible real-world scenarios. 2. Baseline Comparison: OSA was compared with existing noise mitigation methods using common metrics to measure performance. This comparison provides a clear benchmark for evaluating OSA’s superiority. 3.
Ablation Experiments: Ablation experiments were conducted to study the contributions of different OSA method components, such as the estimator model and scoring function. By removing or modifying components, researchers gained insights into how they interact and contribute to overall performance. Supplementary Material: I’ve reviewed parts A, B, C, and F of the supplementary material. The supplementary materials are rich and detailed, including experimental dataset information, implementation details, a review of related work, the theoretical proof process, and additional experimental results. They provide strong support for readers to understand the paper’s research content. Relation To Broader Scientific Literature: The paper reviews noise mitigation literature in cross-modal matching, image classification, and image retrieval. It highlights existing method limitations, such as hyperparameter reliance, poor adaptability, and high computational cost. OSA addresses these limitations through innovative method design, enhancing noise mitigation and contributing to research progress. Essential References Not Discussed: The paper cites relevant literature thoroughly, with no essential references missing. Other Strengths And Weaknesses: Strengths: - Originality: The One-Step Anti-noise (OSA) method is novel. It leverages the orthogonality in high-dimensional pre-trained model spaces to design a scoring function based on the cone effect, breaking limitations of traditional noise mitigation methods. OSA is model-agnostic and can adapt to various architectures. - High Efficiency: OSA accurately completes noise detection with a single inference, outperforming popular multi-model or multiple-inference noise detection schemes. - Comprehensive Experimental Validation: OSA has been tested on classic datasets like MSCOCO and Flickr30K in various scenarios, including image-text matching and image classification tasks. 
It accurately identifies noise samples and mitigates interference, demonstrating strong generalization. Weaknesses: - Limited Exploration in Scoring Functions: The study on employing high-degree scoring functions is insufficiently comprehensive. Other Comments Or Suggestions: 1. In Section 2.2, the highlighted expression “Contrastive learning empowers the separation of clean and noisy samples” seems rather abrupt. Prior to this statement, the text focuses on verifying whether the origin of the intersection boundary is a shifted orthogonal boundary. However, there is a lack of sufficient lead-in and logical connection for this claim about contrastive learning enabling sample separation. 2. Is the expression “brings x_i and y_i closer when c_i=1” in Section 3.1 (Line 187) a mistake? A noisy sample x_i should be far away from y_i. Questions For Authors: It would be beneficial if the author could provide additional insights into the design of the score function, especially the high-degree one. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer PeYE for the positive, patient, and professional review, as well as the valuable suggestions for improvement. Our responses to the reviewer’s questions are below: ***W1 : Although the evaluation datasets cover diverse image and text content, they may not fully represent all possible real-world scenarios.*** **A1:** Thank you for your valuable insights. To further explore the effectiveness of our method in real-world scenarios, especially on large-scale and out-of-domain training, we conduct experiments on a very large real-world dataset, CC3M, which contains 3 million image-text pairs collected from webpages and filtered by Google AI. We find that zero-shot CLIP does not perform well on this dataset, achieving only 29.25 R@1 on i2t and 28.80 R@1 on t2i. Therefore, we believe this dataset can somewhat represent practical large-scale and out-of-domain scenarios. We report the performance of zero-shot CLIP, the Baseline (CLIP fine-tuned on CC3M), and OSA applied to the Baseline in the table below:

|||i2t|||t2i||
|-|-|-|-|-|-|-|
|Model|R@1|R@5|R@10|R@1|R@5|R@10|
|zero-shot CLIP|29.25|50.47|59.47|28.80|51.04|60.38|
|Baseline|42.41|66.70|75.56|42.45|67.83|76.46|
|OSA|**43.34**|**67.48**|**75.79**|**43.46**|**68.33**|**76.58**|

Although the samples in CC3M are filtered and have a lower noise ratio compared to natural ones, we observe that OSA still brings a noticeable performance improvement. This phenomenon further demonstrates the effectiveness and robustness of OSA in real-world scenarios. Given the breadth and complexity of real-world domains, fully exploring all possible scenarios is extremely challenging. However, we believe this experiment provides some evidence and insight into OSA's effectiveness and robustness in real-world scenarios.
--- ***W2 : Limited Exploration in Scoring Functions: The study on employing high-degree scoring functions is insufficiently comprehensive.*** **A2:** The scoring function is a crucial component of our work, and we conduct an initial exploration in Appendix F.1. Specifically, we compare three types of functions: Linear, Cosine, and High-Degree functions. In Table 6, we observe that our carefully designed high-degree function (based on orthogonal boundary properties) outperforms other methods. The rationale behind our design of the current high-degree function is as follows: 1) For cosine similarity values lower than the orthogonal boundary, there is a high probability that the sample is noise due to the huge gap caused by the orthogonal boundary in anisotropic space. To mitigate the impact of noise, we assign these samples a weight of zero to prevent them from influencing training. 2) On the positive side of the orthogonal boundary, the probability of a sample being noise decreases rapidly as the cosine similarity increases. Therefore, the gradient should initially increase rapidly as cosine similarity moves away from the orthogonal boundary. However, for samples with relatively high cosine similarity, we assume they have already been well-learned, so we assign them a lower weight to prevent overfitting. Considering these factors, we designed a high-degree function in the form of Eq. 34 ($y=-x^2(x-1), x>0$). If plotted, our high-degree function exhibits a curve in the range of 0-1 that rises slowly at first, then steeply, before gradually decreasing after 0.7 on the positive side. We will provide a more detailed explanation and supplement these design considerations in the updated version. 
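The weighting curve described in A2 can be made concrete in a few lines (our own rendering of the quoted form $y=-x^2(x-1),\ x>0$; here `x` denotes the cosine similarity measured from the shifted orthogonal boundary):

```python
def score(x):
    """High-degree weighting y = -x^2 (x - 1) for x > 0, else 0,
    where x is the cosine similarity measured relative to the
    shifted orthogonal boundary (form quoted from Eq. 34 above)."""
    return -x * x * (x - 1.0) if x > 0 else 0.0

# At or below the boundary: zero weight, so likely-noise pairs
# cannot influence training.
assert score(-0.3) == 0.0 and score(0.0) == 0.0

# Past the boundary the weight rises, peaks, then tapers off for
# very high similarities to discourage overfitting on pairs that
# are already well learned.
grid = [i / 100 for i in range(101)]
peak = max(grid, key=score)
```

On this 0.01 grid the maximum lands at x = 0.67 (the exact peak is x = 2/3), matching the rise-then-taper shape described in the rebuttal.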
--- ***W3 : The statement “Contrastive learning empowers the separation of clean and noisy samples” in Section 2.2 appears abrupt, lacking a clear lead-in and logical connection to the preceding discussion on the intersection boundary.*** **A3:** We sincerely appreciate your pointing out the confusion that may be caused by the placement of this statement. This claim aims to explain why the orthogonal boundary in the shared space of a pre-trained model can accurately and naturally separate clean and noisy samples. Inspired by your valuable suggestion, we believe this discussion would be better placed in Section 2.3, 'Qualitative Analysis of Robustness and Applicability,' to improve the logical flow. We will make this correction in our revised version. --- ***W4 : Is the expression “brings x_i and y_i closer when c_i=1” in Section 3.1 (Line 187) a mistake? A noisy sample x_i should be far away from y_i.*** **A4:** We sincerely appreciate your careful review. This is indeed a typo—it should be "brings $x_i$ and $y_i$ closer when $c_i=0$." We will correct this in our revised version. --- ***W5 : It would be beneficial if the author could provide additional insights into the design of a score function, especially for the high-degree one.*** **A5:** Thank you for your constructive suggestion. As mentioned in A2, there are two factors behind our high-degree function design. We will include all of these discussions in our revision. ---
A Generalization Theory for Zero-Shot Prediction
Accept (oral)
Summary: The paper takes a theoretical approach towards understanding key quantities driving zero-shot prediction. Introducing and deriving bounds for the aforementioned problem setting, the paper analyzes translation between modalities and the effectiveness of prompt engineering strategies in a multi-modal learning setting. Claims And Evidence: To the level I managed to delve into, the claims are supported with solid and rigorous evidence and theoretical corroboration. Methods And Evaluation Criteria: They make sense, even though they can be extended considerably. Theoretical Claims: Verifying this completely is almost a full-time job! To the level I could dive deep, the derivations follow smoothly and make sense. I reviewed most of Appendix B. Experimental Designs Or Analyses: Yes, they make sense, but again they are quite limited to classification in two simplistic settings. Supplementary Material: Appendix B. The rest I skimmed through, and at a high level it follows. Relation To Broader Scientific Literature: In my view, this is an extremely insightful and well-written paper --- definitely of value to the community. I enjoyed reading this paper. Essential References Not Discussed: Covers it reasonably well. Other Strengths And Weaknesses: Strengths - Well-written paper, and extremely insightful narrative. - Solid theoretical foundation. Weaknesses: - Heavy focus on theory and limited experimental results and demonstrations. - Lots of empty space within the text; I would just move Appendix F to the main text to cover that or, even better, expand the numerical results to other settings. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your hard work in verifying the paper. The authors are happy to hear that you found the narrative insightful and well-written. We address your concerns below. > **“[The Methods And Evaluation Criteria] make sense, even though they can be extended considerably… [The Experimental Designs Or Analyses] make sense but…are quite limited to classification… Heavy focus on theory and limited experimental results and demonstrations.”** We acknowledge this feedback by 1) motivating why this is the most relevant task for studying modern ZSP, 2) providing additional experimentation with simulations using CLIP/VICReg and linear probe baselines, and 3) further justifying the theoretical focus of the work. Details are given below. Firstly, we would like to emphasize that this task is by far the canonical one for studying ZSP. This is largely due to the fact that prompting is inherently tied to natural language. Indeed, consider [Gadre et. al. (NeurIPS, 2023)](https://openreview.net/forum?id=dVaWCDMBof), one of the largest-scale studies on the design of multimodal pre-training data for foundation models. In over three hundred experiments, models are evaluated on either zero-shot image classification, zero-shot image/text retrieval, or linear probing. As in our case, only the encoder architectures, datasets, and prompting strategies are varied. In retrieval, there is no prompting, and in linear probing, training data is available. Secondly, in response to the explicit requests of 8U7M, we also include linear probing comparisons in [Figure 5](https://tinyurl.com/av8sd5x5) of the rebuttal. This gives an idea of the near-optimal performance of the ZSP methods in the case that there was no residual dependence and that the prompting strategy was unbiased. How this performance gap depends on the residual dependence $I(X;Y|Z)$ is studied experimentally in [Figure 6](https://tinyurl.com/av8sd5x5) of the rebuttal as well.
Finally, while you correctly pointed out that there is a relatively heavy focus on theoretical work in the paper, we highlight that this is intentional, as the mathematical foundations of modern ZSP are still in their infancy. At the time of submission, the only directly related work we were aware of was [Chen et. al. (ICLR, 2024)](https://openreview.net/forum?id=S5yOuNfSA0), which we comment on at the bottom of Page 2. We were also made aware of the very recent preprint Oko et al. (arXiv, Jan. 2025) by Reviewer fhW7. Thus, we feel that the theoretical focus of the work is apt given the current gaps in the scientific literature on this topic. Once again, we are grateful for your recommendation of acceptance and are open to addressing any additional comments or questions during the discussion period. --- Rebuttal Comment 1.1: Comment: Thanks for the further clarification; this addresses my remaining concerns.
Summary: This paper provides a formal modeling of the two-stage learning procedure, known as CuPL: (1) pretraining on multimodal labeled data and (2) zero-shot prediction (ZSP) with the pre-trained model using natural language prompts. The goal is to offer a theoretical explanation of the success of CuPL. To achieve this, the paper analyzes how ZSP optimality depends on the pretraining task distribution, the downstream ZSP task distribution, and the prompting strategy. To me, the key to this model is Equation (6), where two encoders, on the input and latent variables respectively, are introduced. Via this modeling, the authors point out that the ideal prompt samples from an unobservable conditional distribution of latent variables given the label. Based on this construct, the authors then compare the informational dependence of unimodal contrastive learning, reconstructive learning, and multimodal contrastive learning. They point out that the last one is the most compatible dependence structure for ZSP. On top of the selected dependence structure, Theorem 1 bounds the epistemic error in terms of the pre-training sample size and the number of prompts. Then, Theorem 2 bounds the aleatoric error. These bounds eventually lead to the variance-regularized covariance loss in Equation (12). Experiments on image classification compare default community-curated prompts (baselines) and CuPL. Results show an increasing trend in CuPL accuracy with more prompts, supporting the claims of how the number of prompts affects the error bound in the theorems. Claims And Evidence: Overall, I see several important claims in this paper, and they are all well supported. First, the authors claim that the dependence structure of multimodal contrastive learning fits the best for ZSP, and it is well supported by the analysis in Section 2. Next, another claim is the epistemic and aleatoric error bounds in ZSP, given pre-training data size and number of prompts (Theorems 1 and 2).
The proofs of these claims are sound, and the experimental evaluation supports the influence of prompt numbers. Finally, a third claim is a generalized form of variance-regularized covariance loss in Equation (12), and state-of-the-art CuPL methods are identified to fit this generalized form. Methods And Evaluation Criteria: The evaluation method is reasonable, comparing CuPL methods that fit Equation (12) to a default prompting strategy. The metric is standard top-k accuracy, and the experiments not only compare CuPL to the baseline, but also show how an increasing number of prompts affects the result. There are still several things unclear to me about the baseline methods, and I will point them out in the questions to the authors. Theoretical Claims: As stated above, the theoretical claims are supported by sound analysis. Experimental Designs Or Analyses: As stated above, the experimental design looks reasonable to me, except for the selection of the baseline. I will list my questions in the last part. Supplementary Material: Appendices and links are provided to further support the paper’s claims. Relation To Broader Scientific Literature: This paper is well related to the broader literature, as it aims to provide theoretical support for the success of CuPL methods. Essential References Not Discussed: I hope the authors could discuss more on the relationship between FSL, ZSP, and MAML [1]. [1] Finn, Chelsea, Pieter Abbeel, and Sergey Levine. "Model-agnostic meta-learning for fast adaptation of deep networks." International Conference on Machine Learning. PMLR, 2017. Other Strengths And Weaknesses: Strengths: + The paper is very well motivated, as the goal is to explain the success of CuPL. + The proofs are rigorous and sound.
Weaknesses - Although Section 2 has demonstrated why ZSP is best compatible with multimodal contrastive learning, it does not necessarily discuss the advantage of ZSP over other SSL methods, i.e., unimodal contrastive learning and reconstructive learning. One reason could be that obtaining labels for FSL is hard, but I would like to see more evidence supporting this. - Like the above bullet point, the experiments do not compare ZSP with FSL baselines. I hope the authors could provide a sound reason for not doing so. To me, demonstrating the success of ZSP requires analysis and/or experiments to show it defeats other SSL methods. - The narrative structure could be improved. For example, Assumption 1 is crucial for both theorems, but it is in the appendix. Much of the discussion of Theorem 1 appears before the theorem itself. It would be better if Assumption 1 and Theorem 1 were first stated, and then the analysis followed. Other Comments Or Suggestions: N/A Questions For Authors: I would love to improve my rating if the authors can answer the following questions. 1. In the experiment setup, how are prompts generated in the baseline method, i.e., community-curated prompts? 2. Why does the performance of the baseline stay exactly the same as more prompts are provided? Are the baseline prompts provided all at once? If so, this does not seem to be an apples-to-apples comparison, as CuPL gradually generates prompts. 3. Section 2 has discussed alternative approaches in SSL: unimodal contrastive FSL and reconstructive FSL. Why not compare CuPL/ZSP methods to these baselines? In order to demonstrate the success of CuPL/ZSP, I suppose that the paper needs to demonstrate the advantages of CuPL/ZSP over other SSL methods, not just community-curated prompts. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thorough review. We address your comments below. > **“I hope the authors could discuss more on the relationship between FSL, ZSP and MAML.”** We discuss model-agnostic meta-learning (MAML) in its [offline](https://proceedings.mlr.press/v70/finn17a.html) variant, i.e., as a tool for multi-task or meta-learning. The offline meta-learning method can be thought of as a simpler setting than FSL, in which the downstream evaluation tasks are given to the user *upfront*. Thus, pre-training an encoder and training a predictor for all of the evaluation tasks can be done in one end-to-end step. On the other hand, FSL is not (in general) given any downstream task upfront, so models cannot be learned in an end-to-end manner at once. ZSP is a "harder" setting still, as no downstream data is given at any point in the training-evaluation pipeline. > **“Although Section 2 has demonstrated why ZSP is best compatible with multimodal contrastive learning, it does not necessarily discuss the advantage of ZSP over other SSL methods. One reason could be that obtaining labels for FSL is hard, but I would like to see more evidence supporting this.”** First, we clarify that we do not intend to “discuss the advantage of ZSP over other SSL methods”, because SSL is a precursor to ZSP; they are not comparable. However, for the comparison of ZSP to FSL, obtaining labeled data is *precisely* the bottleneck that motivates ZSP (as you correctly pointed out). Second, the authors request additional clarification on what “more evidence supporting this” refers to, so we may adequately address your points. Having no access to downstream training data is not a claim; it is a specific data availability regime that is now a well-established modern machine learning setting (see [Pourpanah (2022)](https://ieeexplore.ieee.org/abstract/document/9832795)). ZSP is the corresponding pipeline for this problem.
Thus, we do not argue for any advantage of ZSP over FSL&mdash;they are simply different methods for different problems. Accordingly, different SSL methods accompany these problems, as alluded to in Section 2. In fact, we generally expect FSL to perform better than ZSP with equal pre-training data, as FSL receives strictly more task information than ZSP. That being said, a question we do consider is 1) “how close can ZSP get to the performance of FSL (with a large amount of training data)?” and 2) “how do we quantify it?”. This exactly leads to our mathematical notions of prompt complexity and residual dependence. That is, if the prompting strategy approximates the conditional distribution of the text $Z$ given $Y = y$ and we have that $X$ and $Y$ are approximately conditionally independent given $Z$ (see Figure 2 in the paper), then ZSP has no *theoretical* disadvantage over FSL. In response to your review, we verify this in simulation in [Figure 6](https://tinyurl.com/av8sd5x5) of the rebuttal. > **“[Q1 + Q2] ...how are prompts generated in the baseline method... Why does the performance of the baseline stay exactly the same as more prompts are provided? Are the baseline prompts provided all at once?”** The baseline prompts are not “generated”, as they are selected by humans as defaults in the CLIP benchmark package. Therefore, they cannot be created in arbitrarily large amounts the way LLM-generated prompts can. Rather than matching the number of prompts between human baselines and LLMs, this illustration shows the scaling of the downstream classification accuracy as the user generates a large volume of prompts. Moreover, the goal of the experiment is not to market class-conditional prompting as a new method, but to experimentally verify the saturation point at which the $O(1/M)$ prompt variance term in Theorem 1 becomes negligible. > **[Q3] “The experiment does not compare ZSP with FSL baselines. I hope the authors could provide a sound reason for not doing so. 
To me, demonstrating the success of ZSP requires analysis and/or experiments to show it defeats other SSL methods… Why not compare CuPL/ZSP methods to these baselines?... the paper needs to demonstrate the advantages of CuPL/ZSP over other SSL methods, but not just community-curated prompts.”** As mentioned above, demonstrating the success of ZSP as compared to FSL was not a goal of this paper. Similarly, SSL was not a baseline for ZSP to defeat, but rather a form of pre-training that may lead to the ZSP capability. However, we are interested in studying the performance gap and its dependence (see [Figure 6](https://tinyurl.com/av8sd5x5)). We provide experiments that compare the ZSP performance to FSL baselines using linear probing on the evaluation datasets seen in the paper: FGVC Aircraft, DTD, Flowers 203, SUN397 (see [Figure 5](https://tinyurl.com/av8sd5x5)). If we have addressed your concerns, we would appreciate that your score be raised, and if not, we are happy to answer any questions or consider additional experiments!
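To make the $O(1/M)$ prompt-variance term discussed in this thread concrete, here is a minimal illustrative simulation (our own sketch, not the paper's experiment; the embedding dimension, noise scale, and class means below are arbitrary choices): averaging $M$ noisy prompt embeddings per class drives the squared deviation of the ensemble mean from the true class embedding down at rate $1/M$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_classes, sigma = 64, 10, 1.0

# Hypothetical "true" class text embeddings; each prompt embedding is a
# noisy draw around its class mean, mimicking prompt variability.
mu = rng.normal(size=(n_classes, d))

def ensemble_error(M, trials=2000):
    """Mean squared deviation of the M-prompt ensemble mean from mu_y."""
    errs = []
    for _ in range(trials):
        y = rng.integers(n_classes)
        prompts = mu[y] + sigma * rng.normal(size=(M, d))
        errs.append(np.sum((prompts.mean(axis=0) - mu[y]) ** 2))
    return float(np.mean(errs))

for M in (1, 4, 16, 64):
    # The expected error is sigma**2 * d / M, so err * M / d stays ~ sigma**2.
    print(f"M={M:2d}: error * M / d = {ensemble_error(M) * M / d:.3f}")
```

Since the expected error is $\sigma^2 d / M$, rescaling by $M/d$ makes the printed values hover near $\sigma^2$, which is the saturation behavior the experiment in Section 4 probes on real prompts.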
Summary: This paper explores the theoretical foundations of zero-shot prediction (ZSP) in foundation models, establishing a formal statistical framework to analyze how pretraining on large-scale, multimodal, unlabeled datasets transitions into downstream zero-shot inference via prompting. The authors identify key factors influencing the success of ZSP and introduce a novel perspective by modeling multimodal data as a joint distribution over X (input images), Y (latent labels), and Z (image captions). Building on this framework, they reformulate image classification as a text classification problem through prompting, leading to a perspective where zero-shot inference is viewed as a sample estimator within a two-stage regression problem. Leveraging concepts from reproducing kernel Hilbert spaces (RKHS), the authors derive closed-form estimators for their statistical framework. They further provide a theoretical analysis of sample complexity, examining the impact of both dataset size and the number of prompts used in estimation. Additionally, they propose a new loss function for ZSP, referred to as the Variance Regularized Covariance objective, and show its connection to existing self-supervision objectives. Finally, the authors conduct semi-synthetic experiments using CLIP models to empirically assess their theoretical results, particularly the relationship between prompt sample complexity and performance. Claims And Evidence: The paper presents a statistical framework for analyzing zero-shot prediction (ZSP) in foundation models. It introduces key concepts such as prompt bias, residual dependence, and prompt sample complexity while relating a class of self-supervised learning (SSL) objectives to a variance-regularized covariance (VRC) form. While this framework offers an interesting theoretical perspective, the claims made in the paper are not sufficiently supported by empirical evidence. Below, I outline specific issues with the key claims: 1. 
Prompt Bias and Residual Dependence: - Claim: The authors introduce the notions of prompt bias and residual dependence to quantify deviations from optimal zero-shot inference. - Issue: The paper does not provide empirical evidence demonstrating the practical usefulness of these concepts. While they are mathematically well-defined, their impact on real-world zero-shot tasks remains unverified. 2. Connection Between SSL Objectives and Variance-Regularized Covariance (VRC): - Claim: The paper relates a class of SSL objectives to a variance-regularized covariance formulation, suggesting theoretical connections to VICReg. - Issue: Despite establishing these theoretical connections, the authors do not provide any experiments to validate the proposed loss function's effectiveness in the ZSP setting. Without empirical results, it remains unclear whether this formulation offers practical benefits over existing self-supervised learning objectives. 3. Prompt Sample Complexity and LLM-Ensembled Prompts: - Claim: The authors propose a notion of prompt sample complexity to support their theoretical analysis and argue that ensembling LLM-generated prompts improves zero-shot performance. - Issue: The proposed prompt sample complexity does not substantively contribute to the theoretical analysis, as it does not establish new insights beyond existing work. Moreover, the claim that ensembling LLM-generated prompts improves performance is already well-documented in the community (e.g., [1][2][3]), making this result unsurprising rather than a novel contribution. References: [1] Menon, Sachit, and Carl Vondrick. "Visual Classification via Description from Large Language Models." ICLR, 2023. [2] Yang, Yue, et al. "Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification." CVPR, 2023. [3] Esfandiarpoor, Reza, Cristina Menighini, and Stephen Bach. 
"If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions." EMNLP, 2024. Methods And Evaluation Criteria: The authors use CLIP models and standard zero-shot image classification benchmarks, which are reasonable choices for proof-of-concept experiments on prompt complexity. However, this setup is insufficient to fully support the broader claims made in the paper. Notably, the analysis does not extend to other self-supervised learning (SSL) settings, such as image-to-image tasks, which could provide a more comprehensive evaluation. Moreover, the paper explicitly poses the question: "By what composition of learning stages does ZSP achieve near-optimal statistical performance on a downstream task, and with what dependence on (1) the pre-training data distribution, (2) the downstream task distribution, and (3) the prompting strategy?" However, the empirical experiments are not structured to directly address this question, limiting the strength of the paper’s conclusions. Theoretical Claims: I checked the correctness of the theoretical proofs in detail, but the analysis appears to be a straightforward combination of standard results in RKHS theory. However, I noticed a potential issue in lines 1621–1622, where the approximation $\log(1+y) \approx y$ is used without justification. Without further clarification, the validity of this assumption and the subsequent results remains unclear. Experimental Designs Or Analyses: I reviewed the experimental design and analysis. The authors use CLIP models and standard zero-shot image classification benchmarks, which are reasonable choices for proof-of-concept experiments on prompt complexity. However, some aspects need further clarification. First, it is unclear how the number of prompts is scaled in *Ideal Prompting with Observations from* $P^{\tau}_{Z, Y}$. 
Specifically, do the authors sample captions from a predefined caption pool and simply combine them? More details on this process would help clarify their approach. Second, a potential issue arises from the use of LLaMA 3 for generating prompts. The distribution $P_{Z|Y=y} $ in CLIP may not be well approximated by a language model, introducing an additional source of bias. This discrepancy could affect the validity of the results and should be addressed. Supplementary Material: I reviewed the code in the supplementary material as well as the appendix. The code includes implementations of the experiments conducted in the paper, which appear to be well-structured and correctly implemented. The appendix provides a detailed theoretical analysis of the proposed framework, including: - Derivations of closed-form estimators within the statistical framework, - Connections to self-supervised learning (SSL) objectives and predictors, and - Detailed descriptions of the experimental setup. Overall, the supplementary material is comprehensive and aligns with the main paper. Relation To Broader Scientific Literature: The main contribution of this paper is to provide a theoretical foundation for the common practice of zero-shot prediction (ZSP) using multiple prompts per class as an ensemble. While this approach is widely used in the community, its theoretical underpinnings have not been well studied. The authors attempt to formalize this practice by introducing a two-stage regression framework and analyzing prompt sample complexity. This theoretical perspective helps unify existing empirical findings and suggests promising future research directions. 
Specifically, prior works have demonstrated the effectiveness of class-conditional prompt ensembling in improving zero-shot classification: - [1] Uses large language models (LLMs) to generate class descriptors for classification prompts, averaging them to create an ensemble—a direct example of the class-conditional prompt ensemble approach. - [2] Generates visual descriptions for each class and averages them, another instance of class-conditional prompt ensembling. - [3] Utilizes multiple concepts to describe each class and applies them in concept bottleneck models. - [4] Extracts detailed visual descriptions from LLMs for zero-shot classification and extends this technique to few-shot adaptation. - [5] Investigates the visual features that are most effective for vision-language model (VLM) classification, which can be interpreted as a direct attempt to estimate Y instead of sampling Z. The proposed two-stage regression framework provides a unified theoretical perspective on these empirical approaches, offering a mathematical basis for prompt ensembling and prompting strategies in ZSP. This connection highlights key aspects of prompt bias, residual dependence, and sample complexity, which could inspire future work on designing more theoretically grounded prompting techniques. References [1] Menon, Sachit, and Carl Vondrick. "Visual Classification via Description from Large Language Models." ICLR, 2023. [2] Pratt, Sarah, et al. "What does a platypus look like? Generating customized prompts for zero-shot image classification." ICCV, 2023. [3] Yang, Yue, et al. "Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification." CVPR, 2023. [4] Maniparambil, Mayug, et al. "Enhancing CLIP with GPT-4: Harnessing Visual Descriptions as Prompts." ICCV, 2023. [5] Esfandiarpoor, Reza, Cristina Menighini, and Stephen Bach. 
"If CLIP Could Talk: Understanding Vision-Language Model Representations Through Their Preferred Concept Descriptions." EMNLP, 2024. Essential References Not Discussed: [1] analyzes the relationship between concept frequency in the pretraining dataset and its impact on downstream performance. Since the proposed framework discusses the role of pretraining data distribution in ZSP, this phenomenon seems directly relevant to their theory and could provide additional insights into sample complexity and residual dependence. [2] proposes another statistical framework for ZSP and provides a theoretical analysis of multimodal generative AI, including CLIP. Given that the current paper also introduces a new theoretical framework, it is important to compare these approaches and clarify how they relate. This work seems particularly relevant to understanding the assumptions and limitations of the proposed framework. References: [1] Udandarao, Vishaal, et al. "No 'Zero-Shot' Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance." NeurIPS, 2024. [2] Oko, Kazusato, et al. "A Statistical Theory of Contrastive Pre-training and Multimodal Generative AI." arXiv preprint arXiv:2501.04641, 2025. Other Strengths And Weaknesses: Strengths - The theoretical framework is interesting and could serve as a solid foundation for future work. - The paper provides an integrated perspective on self-supervised learning (SSL) and zero-shot prediction (ZSP), which may be valuable for bridging these areas. Weaknesses - The paper lacks clear organization. The scope is quite broad, yet the theoretical analysis and empirical results are not well integrated to support the full range of topics covered. - The main contribution is unclear, making it difficult to pinpoint the paper’s key takeaway. - The experimental validation is not comprehensive enough to fully support the claims and theoretical results. 
- The theoretical analysis is heavily reliant on RKHS theory but does not offer practical guidance on key aspects such as the choice of the number of prompts or effective prompting strategies. Other Comments Or Suggestions: Please see questions. Questions For Authors: 1. Clarification on "Ideal Prompting": In the experiment section, what exactly does "ideal prompting" refer to? Does it mean prompts manually designed by humans, in contrast to those generated by LLMs as described in the next paragraph? Understanding this distinction is important because it affects how the results should be interpreted. If "ideal" refers to human-generated prompts, it would be useful to clarify how they were designed and why they are considered ideal. If they are derived from some theoretical criterion, explaining that explicitly would improve clarity. 2. In the section "Learning via Variance-Regularized Covariance", are the authors proposing a new loss function for ZSP, or are they simply re-deriving existing self-supervised learning (SSL) objectives within the proposed framework? This distinction is important because if a new loss function is being introduced, its effectiveness should be empirically validated to demonstrate its advantages over existing approaches. If the section instead provides a theoretical reinterpretation of existing objectives, a more detailed discussion on how this perspective enhances our understanding of SSL objectives and whether it offers any practical benefits would improve the paper. 3. Can the authors control the pretraining process to empirically validate the proposed framework? If direct control over pretraining is not feasible, would it be possible to construct a synthetic dataset that aligns with the theoretical assumptions? This would provide stronger empirical support for the framework. 4. What are the implications of prompt bias and residual dependence? 
These concepts are introduced as key components of the framework, but their practical significance is not clearly articulated in the later sections of the paper. Clarifying their role—either through empirical validation or additional theoretical discussion—would strengthen the paper's contributions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to read our manuscript critically. This is undoubtedly one of the most comprehensive reviews we have ever received. Please see your comments addressed below. > **“What are the implications of prompt bias and residual dependence?... Clarifying their role... would strengthen the paper's contributions… would it be possible to construct a synthetic dataset that aligns with the theoretical assumptions?”** We provide a synthetic example (see [Figure 6](https://tinyurl.com/av8sd5x5)) to highlight these implications. We focus on residual dependence, which has only been previously alluded to in Oko et al. (arXiv, 2025), but simply assumed to be zero. We aim to clearly illustrate two claims: 1) the residual dependence $I(X;Y|Z)$ (for image-caption-label $(X, Z, Y)$) governs the performance gap between the two-stage predictor (Eq. (5) / Eq. (6) of our paper) and the Bayes optimal predictor and 2) the encoder-based zero-shot predictors used in practice (such as in CLIP) behave as the two-stage predictor given enough data. The mathematical details are given in the [linked derivations](https://tinyurl.com/av8sd5x5). This experiment controllably interpolates between the “worst” setting $I(X;Z|Y) = 0$, where ZSP performs at near chance, and $I(X;Y|Z) = 0$, where ZSP performs optimally. While further investigation would be interesting, both claims are supported within the simulation. In other words, $I(X;Y|Z)$ is a simple distribution parameter that governs how close the best ZSP method can get to the optimal downstream predictor, which is of clear interest both in theory and practice. While out of scope for this paper, we hypothesize that this quantity can be estimated in pre-training data selection methods. > **“… are the authors proposing a new loss function for ZSP…? 
… it remains unclear whether this [variance-regularized covariance] formulation offers practical benefits over existing self-supervised learning objectives.”** There is no intention in the paper to propose a new loss function for SSL, or even to promote one SSL method over another. On the contrary, we include Appendix E to embrace existing, complementary work on SSL and to provide intuition on the relationship between our normalized cross-covariance-based estimator and the common SSL procedures used in practice. This is verified experimentally at least for the motivating example of CLIP in the given simulation. > **“zero-shot image classification benchmarks… are reasonable choices for proof-of-concept experiments on prompt complexity… the analysis does not extend to other self-supervised learning (SSL) settings.”** Due to character limits, please see our response to Reviewer GT2x, in which we describe the motivation for this task. In summary, because we specifically study zero-shot prediction through prompting, similar tasks such as image/text retrieval (which do not include a prompting component) are not meaningful in our setting. > **“…what exactly does "ideal prompting" refer to?”** The ideal prompting strategy is one in which the prompt bias term is zero, i.e., the user is able to draw from the distribution $P\_{Z|Y=y}$ (one of the implications of defining prompt bias in the first place). In the ImageNet-Captions dataset, we are able to compare to this ideal strategy because we have direct observations of $(Y,Z)$ pairs (as images have both captions and labels). Thus, we hold out a pool of pre-training examples and draw from these pairs to estimate the prompts. > **“Moreover, the paper explicitly poses the question: "...does ZSP achieve near-optimal statistical performance on a downstream task, and with what dependence on (1) the pre-training data distribution, (2) the downstream task distribution, and (3) the prompting strategy?" 
However, the empirical experiments are not structured to directly address this question.”** Thank you for raising this point. On (1), our theoretical analysis suggests that this dependence is captured by the residual dependence quantity $I(X;Y|Z)$, and upon your suggestion, we designed synthetic experiments to address this claim. We can perform similar real data experiments if the reviewer finds it helpful. On (2), we did not dedicate experiments to showing how distribution shift may affect performance, as countless empirical studies (see Quiñonero-Candela et al. (2022)) exist on this topic. On (3), the experiments in Section 4 help determine the scaling behavior of the accuracy with the number of prompts, highlighting that the ideal range for reducing prompt variance can be as high as 50-100. We also have addressed all other comments (references, clarifications) but reserve them for the discussion period due to space limitations. Given our efforts to improve the paper in experimentation and presentation in light of your recommendations, we hope you will consider raising your score to above the acceptance threshold. Please allow us to answer additional concerns you have! --- Rebuttal Comment 1.1: Comment: Thank you for your detailed and thoughtful response — many of your clarifications effectively addressed my questions and concerns. I’m now more convinced that the paper makes meaningful contributions. While I still think the organization could be improved to more clearly highlight the theoretical insights and their practical implications, I am considering increasing my score to above the acceptance threshold. Regarding your response > Thank you for raising this point. On (1), our theoretical analysis suggests that this dependence is captured by the residual dependence quantity $I(X;Y|Z)$, and upon your suggestion, we designed synthetic experiments to address this claim. We can perform similar real data experiments if the reviewer finds it helpful. 
On (2), we did not dedicate experiments to showing how distribution shift may affect performance, as countless empirical studies (see Quiñonero-Candela et al. (2022)) exist on this topic. On (3), the experiments in Section 4 help determine the scaling behavior of the accuracy with the number of prompts, highlighting that the ideal range for reducing prompt variance can be as high as 50-100. On point (1), I believe that including a corresponding real-world experiment—though understandably difficult to add during the rebuttal period—would substantially strengthen the paper’s contributions. Currently, the real-data experiments primarily focus on prompting strategies, which may make it harder for readers to fully appreciate the broader theoretical implications. On point (2), could you clarify or point to where in the manuscript your theoretical framework is connected to prior work on distribution shift? In particular, citing work where significant label shift leads to degraded performance in zero-shot prediction would help situate your theory more clearly within the existing literature. Lastly, I’d be very interested in seeing the clarifications and references you mentioned were omitted due to space constraints. Thank you again for your careful and thorough reply! --- Reply to Comment 1.1.1: Comment: Thank you for engaging in the discussion! On point (1), we agree that bringing ideas from the simulation into a corresponding real-world experiment is the ultimate goal and we will include such an experiment in the manuscript. We hope that the simulation can at least express our vision for the types of experiments that can be conducted to illustrate our theory. In particular, correlating accuracy gaps to (estimates of) the residual dependence on benchmark datasets will aid our case, as you pointed out. On point (2), in our response, we alluded to two settings. 
The first concerns more classical results on distribution shift in which the pre-training task may be supervised (i.e. ImageNet classification) and the downstream task has the same label space (so that no fine-tuning is necessary) or a fixed fine-tuning budget is permitted. These studies include [Hendrycks & Dietterich (ICLR, 2019)](https://openreview.net/forum?id=HJz6tiCqYm) and [Recht et al. (ICML, 2019)](https://proceedings.mlr.press/v97/recht19a.html), where both natural and synthetic shifts are applied and both FSL and ZSP performance is measured. The second setting is the modern prompting-based ZSP that is studied in our paper. One central reference is [Goyal et al. (CVPR, 2023)](https://openaccess.thecvf.com/content/CVPR2023/papers/Goyal_Finetune_Like_You_Pretrain_Improved_Finetuning_of_Zero-Shot_Vision_Models_CVPR_2023_paper.pdf). Their analyses are in a setting that lies between FSL and ZSP, in that one may access image-caption pairs from the downstream task, but may not necessarily have direct image-label pairs. Based on your feedback, we plan to conduct experiments similar to those in their Section 4.1 for the final version, by measuring the correlation of ZSP performance with increasing severity of data corruption (in the sense of datasets such as ImageNet-C and CIFAR10-C). **Clarifications and References:** > **“…the approximation $\log(1 + y) \approx y$ is used without justification.”** The approximation is meant to follow from a first-order Taylor expansion under the assumption that $y$ is sufficiently small. To make this exact, we may use the formula $\log(1 + y) = y + o(y)$ and carry the remainder. We will make this edit in the final version. > **“The distribution $P\_{Z|Y = y}$ in CLIP may not be well approximated by a language model, introducing an additional source of bias. This discrepancy… should be addressed.”** The language model *is* the source of bias which is quantified in the theory, not an additional one. 
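Returning to the $\log(1+y) \approx y$ step discussed above: for reference, the expansion with an explicit remainder (a standard calculus fact, stated here for completeness) is

$$\log(1+y) \;=\; y - \frac{y^2}{2} + \frac{y^3}{3} - \cdots \;=\; y + R(y), \qquad |R(y)| \le \frac{y^2}{2} \ \text{ for } y \ge 0,$$

where the series form holds for $|y| < 1$ but the remainder bound holds for all $y \ge 0$ (since $\log(1+y) - y + y^2/2$ vanishes at $0$ and has nonnegative derivative $y^2/(1+y)$). Carrying $R(y)$ through the argument thus perturbs the resulting bound only at second order in $y$.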
> **“[1] analyzes the relationship between concept frequency in the pretraining dataset and its impact on downstream performance… this phenomenon seems directly relevant to [the authors'] theory and could provide additional insights into sample complexity and residual dependence.”** It is absolutely relevant to both sample complexity and residual dependence, and we will include the discussion in the final version. The near-exponential scaling of pre-training data with linear improvement to downstream classification shown in [1] is equivalent to the excess misclassification risk decaying at near $O(1/\log(N))$, which is slower but practically reflective of our rate in Theorem 1. We hypothesize that their concept frequency notion reflects residual dependence; a conceptually rich caption can be predictive of the class label even without the image, indicating near conditional independence of the image $X$ and label $Y$ given $Z$. > **“[2] proposes another statistical framework for ZSP and provides a theoretical analysis of multimodal generative AI, including CLIP… it is important to compare these approaches and clarify how they relate.”** Thank you for identifying this relevant reference. We will discuss [2] as concurrent work in adherence to the [ICML 2025 Guidelines](https://icml.cc/Conferences/2025/ReviewerInstructions). This work, while operating in a similar framework, captures a complementary aspect of the problem: the ability of the encoders learned by the CLIP objective to capture relevant distributional information. This leads to their concept of approximate sufficiency, and the generalization bounds measure errors in the encoders in terms of this sufficiency term (as opposed to our use of sample complexity). We feel the more interesting comparison is made when considering the analysis of downstream predictions. 
[2] makes two idealized assumptions in the case of ZSP: 1) they assume that at inference time, the user may sample prompts directly from the distribution of $Z$ given $Y = y$ (see the setup before their Eq. (5)) and 2) they assume (see their Assumption 2) that $X$ and $Y$ are conditionally independent given $Z$. The fact that neither of these hold in practice is precisely what gives rise to our notions of prompt bias and residual dependence; these two quantities *exactly* quantify the degree to which these assumptions are violated. Thank you once again for suggesting this comparison.
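As a self-contained illustration of the role of residual dependence discussed in this thread, here is a fully discrete toy version of such a synthetic experiment (our own sketch with arbitrary choices — binary variables, an image flip probability of 0.1, and a single parameter `alpha` that makes the caption $Z$ range from fully informative about $Y$ to independent of it; this is not the construction behind the linked Figure 6). Exact enumeration shows $I(X;Y|Z)$ growing from 0 as `alpha` increases, while the plug-in two-stage rule degrades from Bayes-optimal to chance and the Bayes accuracy stays fixed.

```python
from itertools import product
from math import log2

def joint(alpha, flip=0.1):
    """Joint pmf p[(x, z, y)] for binary image X, caption Z, label Y.
    X and Z are conditionally independent given Y; alpha controls caption
    informativeness (alpha=0: Z=Y, alpha=1: Z independent of Y)."""
    p = {}
    for x, z, y in product((0, 1), repeat=3):
        px_y = 1 - flip if x == y else flip
        pz_y = 1 - alpha / 2 if z == y else alpha / 2
        p[(x, z, y)] = 0.5 * px_y * pz_y
    return p

def residual_dependence(p):
    """Conditional mutual information I(X;Y|Z) in bits, by enumeration."""
    def m(keep):  # marginal pmf over the kept coordinates
        out = {}
        for k, v in p.items():
            kk = tuple(k[i] for i in keep)
            out[kk] = out.get(kk, 0.0) + v
        return out
    pz, pxz, pzy = m((1,)), m((0, 1)), m((1, 2))
    return sum(v * log2(v * pz[(z,)] / (pxz[(x, z)] * pzy[(z, y)]))
               for (x, z, y), v in p.items() if v > 0)

def accuracies(p):
    """(Bayes accuracy from X, two-stage accuracy via Z)."""
    pxy, pxz, pzy = {}, {}, {}
    for (x, z, y), v in p.items():
        pxy[(x, y)] = pxy.get((x, y), 0.0) + v
        pxz[(x, z)] = pxz.get((x, z), 0.0) + v
        pzy[(z, y)] = pzy.get((z, y), 0.0) + v
    pz = {z: pzy[(z, 0)] + pzy[(z, 1)] for z in (0, 1)}
    px = {x: pxy[(x, 0)] + pxy[(x, 1)] for x in (0, 1)}
    bayes = sum(max(pxy[(x, 0)], pxy[(x, 1)]) for x in (0, 1))
    two_stage = 0.0
    for x in (0, 1):
        # score(y) = sum_z P(z|x) P(y|z): the plug-in two-stage rule
        score = [sum(pxz[(x, z)] / px[x] * pzy[(z, y)] / pz[z]
                     for z in (0, 1)) for y in (0, 1)]
        yhat = 0 if score[0] >= score[1] else 1
        two_stage += pxy[(x, yhat)]
    return bayes, two_stage

for alpha in (0.0, 0.5, 1.0):
    p = joint(alpha)
    b, t = accuracies(p)
    print(f"alpha={alpha}: I(X;Y|Z)={residual_dependence(p):.3f} bits, "
          f"Bayes acc={b:.2f}, two-stage acc={t:.2f}")
```

At `alpha=0` the two-stage and Bayes accuracies coincide and $I(X;Y|Z)=0$; at `alpha=1` the caption carries no label information, $I(X;Y|Z)=I(X;Y)\approx 0.53$ bits, and the two-stage accuracy falls to chance (0.5) while the Bayes accuracy remains 0.9.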
Summary: The paper proposes a theoretical framework for zero-shot prediction linking pre-training to prompting and also introduces residual dependence (information loss between modalities) and prompt complexity (sample/prompt trade-offs). Risk bounds show ZSP needs huge pre-training data but few prompts. ## update after rebuttal Thanks for the effort, I decide to keep the score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. The framework and experimental design rigorously explains why self supervised learning pre-training + prompting works for zero-shot image tasks. Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: The work proposes a theoretical framework for zero-shot prediction linking pre-training to prompting Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: Rigorous analysis; connects SSL objectives (CLIP, VICReg) to theory; Explains success of LLM-generated prompts Weakness: The theoretical analysis is limited to CLIP-like multi-modal models. Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review&mdash;we address your main comment below. > **"The theoretical analysis is limited to CLIP-like multi-modal models."** Our analysis broadly describes multimodal encoder + prompting strategies, where the encoders could be learned by a variety of objectives (not only CLIP). Please see Appendix E for a review of such multimodal encoder strategies, including Multimodal InfoNCE/CLIP, BarlowTwins/Nonlinear CCA, the Spectral Contrastive Loss, and Multimodal VICReg. Crucially, we do not focus on the mechanics of how particular objectives or optimization algorithms result in particular encoders (see the related work in Section 1 and references therein). We focus on how the downstream performance is affected by 1) the dependence structure of $(X, Z, Y)$, the image-caption-label triple, and 2) the nature and amount of the prompting strategy. To improve based on your feedback, we have included a simulation in [Figure 6](https://tinyurl.com/av8sd5x5) of the rebuttal wherein the two-stage predictor analyzed in our paper is compared to both CLIP and VICReg trained on the corresponding data. We find that the dependence of two-stage prediction on the residual dependence follows the same trend as that of both CLIP and VICReg. If this point is addressed adequately, then we would appreciate it if the score were raised; if not, please let us know if you have further comments or questions, and thank you once again for this feedback!
Domain2Vec: Vectorizing Datasets to Find the Optimal Data Mixture without Training
Accept (poster)
Summary: The document introduces DOMAIN2VEC, a technique for optimizing data mixtures in training large language models by decomposing datasets into linear combinations of "Meta-Domains" to enable efficient identification of optimal data mixture ratios. DOMAIN2VEC uses a Meta-Domain classifier to classify any dataset and the Distribution Alignment Assumption (DA2), which suggests that the validation loss is low if the training and validation sets are more aligned. Claims And Evidence: The authors have employed K-means clustering, resulting in 240 different Meta-Domain clusters for English and Chinese data. Similarly, for code data, they classified code using 20 classes based on programming language. They claim that they can decompose datasets according to these 260 Meta-Domains. However, it is unclear, especially for the English and Chinese text, whether this number of clusters is the appropriate one and why they chose K=240. Also, it is unclear whether these clusters, which resulted from a specific dataset, can represent other datasets well. Methods And Evaluation Criteria: The authors use KNN as an embedding-based baseline without providing further details on it. The number of nearest neighbours K can have a significant effect on the data classification. Moreover, it seems that KNN performs better than DOMAIN2VEC + RegMix according to Table 1. Theoretical Claims: All theoretical claims have been checked and seem valid. Experimental Designs Or Analyses: The experimental design seems valid. However, according to Table 3, the experimental results do not seem to significantly improve performance using DOMAIN2VEC. Supplementary Material: Yes, all supplementary materials have been reviewed. Relation To Broader Scientific Literature: Previous works use a proxy model or require resampling of the data mixtures. Essential References Not Discussed: To the best of my knowledge, there are no essential references that are not discussed. 
Other Strengths And Weaknesses: Strengths The paper is well-written and contains extensive experimental results. It touches on an important topic related to training Large Language Models. It seems to improve computational cost with respect to previous state-of-the-art. Weaknesses: The experimental results do not demonstrate significant performance improvement. Other Comments Or Suggestions: No other comments. Questions For Authors: Why did you use K-means for finding the domains? Why did you choose to represent these domains with 240 Meta-domains? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 75ct, Thanks for your very valuable review and recognition of our work! We will address your questions point by point. ## Q1: Why did you use K-means for finding the domains? Why did you choose to represent these domains with 240 Meta-domains? Also, it is unclear whether these clusters which resulted from a specific dataset, can represent well other datasets. A1: First, the 5.2TB of text data we used to construct the Meta-Domains is unlabeled. We assume there exist distinct characteristics among different Meta-Domains, such as semantic features. In our implementation, we computed the embedding representations of this 5.2TB of data to construct the Meta-Domains. Utilizing K-Means clustering on embeddings is an efficient approach under unsupervised conditions. Second, we referred to the Elbow method, selecting the number of Meta-Domains based on the point where inertia changes relatively gradually, as shown in Figure 1. Meanwhile, the chosen number of Meta-Domains in this paper is merely an experimental setting. Finally, the meta-domain classifier achieved an accuracy of 74.73% on the validation set. Given that this is a 260-class classification task, we believe that setting K = 240 effectively ensures clear distinctions among different clusters. ## Q2: The authors use KNN as an embedding-based baseline without providing further details on that. The number of Nearest Neighbours K can have a significant effect on the data classification. Moreover, it seems that KNN performs better than DOMAIN2VEC + RegMix according to table 2. A2: The details on the KNN baseline are as follows: ``` First, for the training and validation datasets in section 4.1, we sampled 1000 examples from each dataset. Then, we used bge-small-v1.5 (since the datasets in section 4.1 are in English) to obtain embeddings for samples from each dataset and used mean pooling to get unique embeddings for each dataset. 
Meanwhile, we also used bge-small-v1.5 to obtain embeddings for the data in each Meta-Domain. Then, we set K as 1000 and used KNN (based on Euclidean distance) to obtain probability distributions of training and test datasets belonging to each Meta-Domain. Last, we treated these probability distributions as new domain vectors. Based on these domain vectors, we implemented the Distribution Alignment Assumption. ``` Second, we believe that the superior performance of KNN validates the rationality of our Meta-Domain construction process. **However, it should be noted that DOMAIN2VEC + DA² still significantly outperforms KNN**. Last, we would like to clarify that embedding-based methods, in addition to having a more limited context length than our method, have several other disadvantages. For different types of data, we used different clustering methods to construct Meta-Domains. 1. For code, we directly identified its programming language without using an embedding model. 2. For Chinese and English data, we used embedding models that output embeddings of different dimensions. Moreover, the semantic meaning of the same dimensions of different models' embeddings is obviously different. Therefore, our proposed method has better generalizability. ## Q3: However, according to Table 3, the experimental results do not seem to significantly improve performance using DOMAIN2VEC. A3: We want to clarify that Domain2Vec provides a universal representation of pre-training datasets, which focuses on the scalability and efficiency of pre-training data mixture experiments. The scalability of Domain2Vec is reflected in the following: Domain2Vec establishes a latent space representation of datasets. Therefore, any pre-training data can be mapped into this space. Experiments conducted in the latent space remain consistent regardless of changes in pre-training datasets. 
In contrast, RegMix performs experiments at the dataset level, requiring all previous experimental results to be discarded and new experiments to be conducted when datasets change (such as, adding new datasets, improving the quality of some datasets).
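The scalability argument above rests on DA²'s latent-space formulation: each dataset is a probability vector over Meta-Domains, and the optimal mixture is the set of weights whose weighted average of training-set domain vectors best matches the validation set's domain vector. A minimal numerical sketch of that alignment step (the toy sizes, random Dirichlet vectors, and the plain least-squares fit are illustrative assumptions, not the paper's exact Equation (6)):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy setting: 3 candidate training sets, 6 Meta-Domains
# (the paper uses 260 Meta-Domains and real datasets)
V_train = rng.dirichlet(np.ones(6), size=3)   # domain vectors of the training sets
w_true = np.array([0.5, 0.3, 0.2])            # the mixture we want to recover
v_valid = w_true @ V_train                    # validation set's domain vector

# DA²: choose mixture weights so the mixed training distribution
# matches the validation distribution (least squares in the latent space)
w_hat, *_ = np.linalg.lstsq(V_train.T, v_valid, rcond=None)

assert np.allclose(w_hat, w_true)             # alignment recovers the mixture
assert np.isclose(w_hat.sum(), 1.0)           # and the weights form a valid mixture
print(np.round(w_hat, 3))
```

Because the experiment lives entirely in the Meta-Domain space, adding a new training dataset only adds a row to `V_train` rather than invalidating previous runs, which is the contrast with dataset-level methods drawn in the rebuttal.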
Summary: This paper presents a method for determining the optimal data mixture weights for combining different pre-training datasets to train language models. The authors formulate this as an optimization problem, where the goal is to find the appropriate weights over a set of meta-domains. These meta-domains are constructed by applying K-means clustering to dataset embeddings. A meta-domain classifier is then trained to predict the probability of a dataset belonging to each meta-domain. By representing both the training datasets and validation datasets as a linear combination of these meta-domains, the authors ensure that all datasets exist within the same representational space. This enables them to optimize the dataset mixture weights to minimize validation loss effectively. Through extensive experiments, the authors demonstrate that their approach achieves performance comparable to prior methods such as DoReMi and RegMix while operating at a significantly lower computational cost. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Not applicable Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: This paper's contribution is relevant to the scientific literature in NLP but lacks discussion on other modalities Essential References Not Discussed: There are a couple, and I note them in the weaknesses sections Other Strengths And Weaknesses: Strengths: The paper is easy to follow and the idea to breakdown dataset into a set of “building blocks” called meta-domains is interesting. The results are also impressive. The paper should provide a more thorough discussion of its connections to prior related work. Specifically, Task2Vec (https://arxiv.org/pdf/1902.03545) introduces a framework for representing relationships between tasks, which conceptually aligns with the current study. 
The fundamental principle of meta-learning relies on identifying similar tasks and leveraging that information for training. Consequently, numerous studies have explored this, including those utilizing gradient similarity, such as the work presented in https://arxiv.org/pdf/1911.10600. Moreover, dataset classification and dataset biases are well-recognized as challenging problems. The authors should elaborate on how their work relates to recent studies tackling these issues, such as the one presented in https://arxiv.org/pdf/2403.08632. Given these papers, it is crucial to situate the current findings within the broader research landscape and highlight their contributions in relation to these prior efforts. One more aspect that requires clarification is the discrepancy between the feature extraction model and the meta-domain classifier. The paper states that the features used for clustering are derived from the "bge-small" model, whereas the meta-domain classifier is trained using Qwen. The rationale behind this decision is not immediately apparent, and the authors should justify why these different models were chosen for these respective tasks. A clear explanation would strengthen the coherence of the methodological choices. Another issue pertains to the clustering methodology. The K-means clustering approach used in the paper does not inherently ensure that clusters are sufficiently independent, as also observed in the authors' experiments. To mitigate this, it would have been beneficial to introduce a diversity-enhancing objective within K-means, and this could be implemented using Faiss. A more critical concern relates to the use of linear regression on probabilities, as presented in Equation (6). Probabilities do not exist in a regular Euclidean space but instead lie on a simplex, making standard linear regression an inappropriate model choice. A more suitable approach would be geodesic regression and I would like the authors to comment on this. 
Additionally, there appears to be a discrepancy in the reported calculations in line 377, where the paper states that "Pile-CC only shows a 4.01% improvement over Human." However, based on the values presented in Table 3, the correct computation appears to be: [(0.439-0.424)/0.439]*100=3.5% rather than 4.01%. While I did not verify all numerical claims, I strongly recommend that the authors carefully re-evaluate their calculations to ensure accuracy. If my computation is incorrect, clarification would be helpful. Overall, while the paper introduces interesting ideas, the identified technical inconsistencies significantly weaken its contributions. Addressing these concerns would considerably improve the rigor and credibility of the study. In its current form, I am unable to recommend acceptance. Other Comments Or Suggestions: NA Questions For Authors: Please see weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer H5YS, Thanks for your insightful review and suggestions! We will respond to your questions one by one. ## Q1: The paper should provide a more thorough discussion of its connections to prior related work, i.e, Task2Vec [1]. A1: Similar to [2] and [3] cited in lines 407–412 (right column), Task2Vec [1] is an efficient way to represent a task or its corresponding dataset as a fixed-dimensional vector. However, Domain2Vec differs from these works in both purpose and implementation, as we focus on data mixture for language model pretraining rather than using a method like Task2Vec to select an expert from a collection (which can improve test performance while adding only minimal overhead to the training process). Last, we will also include a citation for Task2Vec. [1] https://arxiv.org/pdf/1902.03545 [2] https://arxiv.org/abs/1905.11063 [3] https://arxiv.org/abs/2406.00281 ## Q2: Dataset classification and dataset biases are well-recognized as challenging problems. The authors should elaborate on how their work relates to recent studies tackling these issues, such as the one presented in [4]. A2: We would like to clarify that the target of Domain2Vec is to determine which Meta-Domains a given dataset can be composed of. In practice, when applying Domain2Vec, we are actually performing a text classification task rather than classifying an entire dataset. As shown in Figure 6, different datasets can share the same Meta-Domain, which explains why they mutually benefit from training with each other. Last, We will also add references to [4] and discuss it in future work. [4] https://arxiv.org/pdf/2403.08632. ## Q3: The paper states that the features used for clustering are derived from the "bge-small" model, whereas the meta-domain classifier is trained using Qwen. The rationale behind this decision is not immediately apparent, and the authors should justify why these different models were chosen for these respective tasks. 
A3: There are a few reasons we chose Qwen as the backbone rather than the bge model. 1) The embedding dimensions and semantic feature spaces of the Chinese and English bge models are inconsistent, and no embedding model was used for code data. 2) Embedding models like bge have a context window limited to 512 tokens, while pre-training data is typically longer than 512 tokens. In contrast, our Meta-Domain Classifier, trained based on Qwen, can handle context lengths of up to 8k tokens or even longer. 3) More importantly, our Meta-Domain classifier can output a very specific probability distribution over the meta-domains, while the baseline can only output a hard assignment (0 or 1). There are indeed some KNN algorithms that can output soft scores, but the score is indirectly based on the distance to the center of clusters. The efficiency of KNN is also limited because it is a lazy learning algorithm and shifts its time complexity to inference time. ## Q4: The K-means clustering approach used in the paper does not inherently ensure that clusters are not sufficiently independent as also observed in author’s experiments. To mitigate this, it would have been beneficial to introduce a diversity-enhancing objective within K-means and this could be implemented using Faiss. A4: First, this is a 260-class classification task, and our meta-domain classifier achieved an accuracy of 74.73% on the validation set. Thus, we believe that the K-means clustering approach effectively ensures clear distinctions among clusters. Second, we have utilized FAISS to perform K-means clustering. In the future, we plan to explore introducing a diversity-enhancing objective within the K-means clustering process. ## Q5: A more critical concern relates to the use of linear regression on probabilities, as presented in Equation (6). Probabilities do not exist in a regular Euclidean space but instead lie on a simplex, making standard linear regression an inappropriate model choice. 
A more suitable approach would be geodesic regression and I would like the authors to comment on this. A5: Great question! We want to clarify that the domain vector is not merely the probabilities assigned by the Meta-Domain Classifier to a dataset. Its deeper implication is that once we obtain the domain vector of a dataset, the dataset can be regarded as a **linear combination** of the various Meta-Domains represented by the domain vector. Therefore, we can apply algorithms such as RegMix (Equation (6)) on Domain2Vec. In future work, we also plan to explore methods like geodesic regression. ## Q6: While I did not verify all numerical claims, I strongly recommend that the authors carefully re-evaluate their calculations to ensure accuracy. A6: Thank you for your careful review. This is indeed a typo. We have also verified the other numerical claims throughout the paper, and this particular typo does not affect the conclusions presented.
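One way to make the rebuttal's linear-combination clarification concrete: a non-negative, normalized linear combination of domain vectors (each a probability vector over Meta-Domains) is itself a probability vector, so the mixed dataset's representation never leaves the simplex. A toy check with made-up vectors (the sizes and values are illustrative assumptions only):

```python
import numpy as np

rng = np.random.default_rng(1)

# domain vectors of three datasets: points on the probability simplex
V = rng.dirichlet(np.ones(5), size=3)

# mixture weights: also a point on the simplex (non-negative, summing to 1)
w = np.array([0.2, 0.5, 0.3])

mixed = w @ V  # domain vector of the mixed dataset

# the convex combination is still a valid probability vector
assert np.all(mixed >= 0) and np.isclose(mixed.sum(), 1.0)
print(np.round(mixed, 3))
```

This only shows that convex combinations stay on the simplex; it does not settle the reviewer's deeper point about whether Euclidean regression respects the simplex geometry, which the authors defer to future work on geodesic regression.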
Summary: Authors propose a sampling method for training LLMs on multiple sources. Their core idea is as follows: They construct a universal set of real-valued vectors from a large textual corpus--using K-means and doc embeddings. Each vector approximately represents a topical domain of the corpus. Then they take random samples from the training sources of LLM and use a classifier to determine on average which vectors are most similar to the sampled documents. They use these similarity scores to represent the entire multi-source training dataset. They follow the same procedure for the validation set as well. Then they learn a coefficient set that transforms the representation of the training set into the validation set. The obtained weight set is the desired sampling ratio. Conceptually, their idea is similar to what is used in topic-modeling, LSI, or LDA, but the vector entries here are words. So they extract a set of vectors from train and validation sets that best represent these two sets. Then they try to learn a set of coefficients that makes the train matrix similar to the validation matrix. Their argument is that if the model works on validation set, then it will work on unseen sets (the real test set). Claims And Evidence: Please see the section below Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Most of them Supplementary Material: Some of them Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths:** - The topic is timely. - The method is intuitive. - Then analysis is convincing. **Weaknesses:** - Reading Section 2 (which is the core part of the paper) was very painful. I had to read it at least three times, and for some parts of it even more. Here are some of the issues: - Line 88 (right column). what does this mean: (where each element vj of v represents the projection (weight) of the dataset D on Dj\*). 
what does it mean to have a weight of a dataset on Dj\*? After the Key assumption please explain what the reader is expecting to see. You suddenly jump into explaining that you are collecting data; the reader would wonder what you would need the data for. Line 136 (left column), how did you train the classifier, what is the training data? Please re-organize the section and put the training explanation before saying how you would use the classifier. Line 143 (left col), what is "domain vector"? is it something that represents a document or a dataset? On Line 143 you are using it for a doc, but on line 82 (right col) you are using it for a dataset. Line 156 (left col), when you define V_train the inner vectors should be transposed. Lines 159 and 162 (left col), why is meta-domain once capitalized and once not? is there any difference between the two? Line 160 (left col), what does it mean when you say a "text belongs to a meta-domain". As far as I know the verb "belong" is used in set theory and it means membership. As you can see the explanations are very vague, the organization of the section is messy, and the wording is not scientific. - The main reason that I oppose accepting this paper is this: the core idea of the authors to do the sampling such that the sampled documents from the training set become similar to the sampled documents from the validation set is a text-book example of overfitting. The role of the validation set is to only validate your ML model, not to use it in the learning algorithm. The goal of ML is to develop a model to have a low error rate on an UNSEEN set, and the validation set can be used as an unseen set during the learning stage. But you are using the validation set inside the learning model itself. This is why your improvement is almost zero, and the algorithm is not generalizable. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Dear Reviewer HtfL, Thanks for your careful review! We will reply to your questions one by one and hope to resolve your concerns. ## Part1: Clarification of Our Method ### Q1: Line 88 (right column). what does this mean: (where each element vj of v represents the projection (weight) of the dataset D on Dj*). what does it mean to have a weight of a dataset on Dj*? A1: First, our proposed Domain2Vec can transform a dataset into a domain vector $v$. Next, we propose that any dataset can be viewed as a **linear combination** of multiple Meta-Domains $\mathcal D_j^*$, and the weights of this linear combination are given by $v$. Therefore, each element $v_j$ of $v$ represents the projection (weight) of the dataset $\mathcal D$ on $\mathcal D_j^*$. ### Q2: Line 143 (left col), what is "domain vector"? is it something that represents a document or a dataset? A2: First, we treat the domain vector as a vector representation of datasets. For a given dataset, we first sample N documents from it. For each document $text_j$, the classifier outputs a probability vector $p$ whose i-th entry $p_i$ represents the probability that $text_j$ originates from the i-th Meta-Domain. It should be noted that in our paper, this vector $p$ is also referred to as a domain vector, since a single text can be viewed as a dataset with a sample size of 1. Therefore, the domain vector of the given dataset is obtained by averaging the domain vectors of the documents sampled from it. ### Q3: The role of validation set is to only validate your ML model, not to use it in the learning algorithm. A3: First, **in previous studies on data mixture [1][2][3][4], it is common practice to use a validation set to optimize the data mixture, which should not be regarded as overfitting.** For example, Data Mixing Law, RegMix, D-CPT-Law, and BiMix all model the relationship between data mixture and validation loss to identify the optimal mixture that minimizes validation loss. 
Second, the idea of distribution matching has also appeared in previous works[5]. For instance, [5] improves the performance of continued pretraining by selecting a subset of a large raw unlabeled dataset to match a desired target distribution given unlabeled target samples. Third, our testing is independent of the validation set. 1) Section 4.1 only evaluates whether Domain2Vec can accurately predict the model’s validation loss. 2) When evaluating downstream task performance, we use downstream tasks that are entirely independent of the validation set, and our experimental setup follows the approach of RegMix[2]. Last, compared with other baselines, our method significantly reduces computational overhead. Moreover, the data mixture obtained by our method achieves clear performance improvements on downstream tasks compared to the original mixture of The Pile dataset. Therefore, the improvement brought by our method is not "almost zero." [1] https://openreview.net/pdf?id=jjCB27TMK3 [2] https://openreview.net/forum?id=5BjQOUXq7i [3] https://openreview.net/pdf?id=JzKFN5fWOk [4] https://openreview.net/forum?id=JsM46OZix7 [5] https://arxiv.org/pdf/2302.03169 ## Part2: Comments Suggestions And Typos ### Q4: Line 136 (left column), how did you train the classifier, what is the training data? A4: First, in lines 158 (left column) through 125 (right column), we have provided a detailed explanation of the meta domain classifier training. Next, we will swap the order of the sections in lines 141–157 (left column) with those in lines 158 (left column) through 125 (right column) to make this part clearer. ### Q5: After the Key assumption please explain what the reader is expecting to see. You suddenly jump into explaining that you are collecting data, the reader would wonder what you would need the data for. A5: In line 91 (right column), we will add a corresponding transitional sentence for better clarity. 
### Q6: Line 156 (left col), when you define V_train the inner vectors should be transposed. A6: Thanks for your suggestions. We will fix this typo. ### Q7: Lines 159 and 162 (left col), why is meta-domain once capitalized and once not? is there any difference between the two? A7: There is no difference here. We will capitalize "meta-domain" in line 162. ### Q8: Line 160 (left col), what does it mean when you say a "text belongs to a meta-domain". As far as I know the verb "belong" is used in set theory and it means membership. A8: Perhaps "originate from" would be a better expression. We will carefully review the usage of certain phrases.
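The definition given in A2 above — a dataset's domain vector is the mean of the per-document probability vectors produced by the Meta-Domain classifier — can be sketched as follows. The classifier here is a seeded random stub, not the paper's Qwen-based model; only the averaging step is the point:

```python
import numpy as np

def classify(doc: str, n_domains: int = 4) -> np.ndarray:
    """Stand-in Meta-Domain classifier returning a probability vector per
    document. (The paper trains a Qwen-based classifier; this deterministic
    stub only illustrates the averaging step, not real classification.)"""
    rng = np.random.default_rng(sum(map(ord, doc)))
    return rng.dirichlet(np.ones(n_domains))

def domain_vector(dataset: list[str]) -> np.ndarray:
    """Average the per-document vectors: each document is treated as a
    dataset of sample size 1, so the dataset's vector is the mean."""
    return np.mean([classify(doc) for doc in dataset], axis=0)

v = domain_vector(["doc one", "doc two", "doc three"])
assert np.isclose(v.sum(), 1.0) and np.all(v >= 0)  # still a probability vector
print(np.round(v, 3))
```

This mirrors the rebuttal's observation that a single text and a full dataset share the same representation type, so "domain vector" applies to both.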
Summary: Domain2Vec introduces a method for vectorizing datasets by decomposing them into linear combinations of Meta-Domains, which enables efficient identification of optimal dataset mixtures for LLM pretraining. They sampled and embedded texts from which predetermined clusters/labels, which they then clustered and trained the labels on with a classifier head. Domain2Vec matches dataset distributions by minimizing validation loss differences without the heavy computational cost of training proxy LLMs. The method achieves comparable dataset mixture quality to existing approaches like DoReMi and RegMix at significantly reduced computational expense (only 0.26% of DoReMi’s cost). Claims And Evidence: The paper claims substantial computational savings compared to baseline methods (DoReMi and RegMix), with evidence supported by experiments demonstrating Domain2Vec's ability to find optimal data mixtures. However, there are two unaddressed claims: - DoReMi demonstrated its efficacy on two datasets (The Pile and The Glam), while this paper only focused on The Pile. Domain2Vec involves much more modelling primitives (an embedding model, training a classifier head, etc) and other hyperparameters, as such, demonstrating the method’s robustness is necessary. - Specific claim of DoReMi requiring 3.7e19 FLOPS is not clearly substantiated within the reviewed paper or the original DoReMi work. Methods And Evaluation Criteria: Domain2Vec's method involves embedding datasets into a "Meta-Domain" space, clustering embeddings, and training a classifier head on top of a small LM to produce domain vectors. Evaluation is performed primarily on the Pile dataset, comparing Domain2Vec’s performance against baselines through validation loss and downstream task accuracy. The evaluation is thorough and well-done, but limited to a single dataset (The Pile). 
Theoretical Claims: None Experimental Designs Or Analyses: The paper conducts extensive experiments comparing Domain2Vec against established baselines, showing that Domain2Vec achieves similar downstream task performance at dramatically lower computational cost. The baselines of DoReMi and RegMix are well thought and carried out. However, additional analyses on dataset hyperparameter sensitivity (e.g., varying the number of Meta-Domains and how that would affect the classifier and the end performance) are needed. Supplementary Material: none Relation To Broader Scientific Literature: The current established works on Data Mixtures are DoReMi, RegMix, Data Mixture Laws (Ye et al). All of these papers require training and using smaller proxy models to inform data mixture for larger models. Domain2Vec effectively bypasses the need to train any small proxy model, by leveraging an embedding model instead. Thus enabling computational savings. Furthermore, all the above works do not generalize to new datasets, as they’ll require retraining of the small proxy models, while Domain2Vec works out of the box with its trained classifier. As such, if proven to be robust, this work has a meaningful impact on the field of pretraining data mixtures. Essential References Not Discussed: Distribution Alignment Assumption (or target distribution matching) are shown in prior works but not cited. It’ll be interesting to see simpler methods like the Hashed NGram method in Xie et al [1] compared as well. [1] Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang. Data Selection for Language Models via Importance Resampling. https://arxiv.org/abs/2302.03169 [2] Suchin Gururangan and Ana Marasović and Swabha Swayamdipta and Kyle Lo and Iz Beltagy and Doug Downey and Noah A. Smith. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. 
https://arxiv.org/abs/2004.10964 Other Strengths And Weaknesses: Strengths: Domain2Vec significantly reduces computational overhead; strong comparative experiments; clear methodological explanations. I believe it would be a helpful addition to the data curation/mixture/filtering stack, given the transferability of the classifier and the simplicity of the methodology. Weaknesses: The complexity of the embedding and classification steps could benefit from additional robustness demonstrations. Limited hyperparameter ablation (260 meta domains). Validation was performed primarily on one dataset. Other Comments Or Suggestions: Minor presentation improvements, i.e. enlarging plot axes and legends for clarity (Figures 3, 4, 5, and 6). Questions For Authors: 1. Could you provide explicit calculations or further clarify the claim regarding DoReMi's and RegMix’s computational cost (3.7e19 FLOPS)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer ZTKc, Thank you for your recognition of the value of our work, as well as for your valuable comments. We will respond to your questions one by one. ## Q1: Domain2Vec involves much more modelling primitives and other hyperparameters; as such, demonstrating the method's robustness is necessary. A1: In section 4.1, we use a mixture of C4 and the Knowledge Pile as the training set, and the Pile and RedPajama as the validation set. Experiments on different datasets (Sec 4.1: C4, Knowledge Pile; Sec 4.2: The Pile) demonstrate the robustness of Domain2Vec. ## Q2: Specific claim of DoReMi requiring 3.7e19 FLOPS is not clearly substantiated within the reviewed paper or the original DoReMi work. Could you provide explicit calculations or further clarify the claim regarding DoReMi's and RegMix's computational cost? A2: Of course. The estimated FLOPS of the various baselines are from Table 4 of the RegMix paper. [1] https://arxiv.org/abs/2407.01492 ## Q3: However, additional analyses on dataset hyperparameter sensitivity (e.g., varying the number of Meta-Domains) are needed. A3: Theoretically, increasing the number of Meta-Domains will lead to more accurate Domain2Vec representations of pretraining datasets. Due to limited computational resources, we only experimented with the number of Meta-Domains during clustering (Figure 3). Exploring how varying the number of Meta-Domains affects both the classifier and the final performance is left for our future work. ## Q4: Distribution Alignment Assumption (or target distribution matching) are shown in prior works but not cited. It'll be interesting to see simpler methods like the Hashed NGram method in Xie et al [2] compared as well. A4: First, we will cite [2] in our next version. Second, we would like to clarify that, although [2] and our proposed Distribution Alignment Assumption share certain similarities, they differ in several respects: 1) The methods used for feature construction differ. 
[2] is based on Hashed N-gram Features, whereas Domain2Vec uses a Meta-Domain Classifier to generate Domain Vectors. 2) [2] conducts data selection at the sample level, whereas Domain2Vec performs data mixing at the dataset level across different datasets. 3) [2] is validated on encoder-only language models such as BERT and RoBERTa, while Domain2Vec is validated on autoregressive, decoder-only language models. [2] https://arxiv.org/pdf/2302.03169 ## Q5: Minor presentation improvements, i.e. enlarging plot axes and legends for clarity (Figures 3, 4, 5, and 6). A5: Thanks for your suggestions, and we will increase the font size in all the figures.
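On the Meta-Domain count question (A3 above, and the elbow-method choice described in the authors' first rebuttal): the selection procedure — run K-means at several values of K and pick the point where inertia stops dropping sharply — can be sketched on toy embeddings. Everything here is an illustrative assumption: 2-D blobs stand in for real document embeddings, and plain Lloyd iterations stand in for the FAISS clustering the paper uses.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy "document embeddings": three well-separated blobs playing the role
# of three underlying meta-domains
centers = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
X = np.vstack([c + rng.normal(scale=0.3, size=(50, 2)) for c in centers])

def kmeans_inertia(X, k, iters=20):
    """Plain Lloyd's K-means; returns the final within-cluster inertia."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[labels == j].mean(0) if np.any(labels == j) else C[j]
                      for j in range(k)])
    return ((X - C[labels]) ** 2).sum()

# elbow method: inertia falls steeply until k reaches the true cluster count,
# then flattens — the "gradual change" point the rebuttal refers to
inertias = {k: kmeans_inertia(X, k) for k in range(1, 6)}
print(inertias)
```

With 260 real Meta-Domains the same curve is read off at much larger K, but the decision rule (stop where the drop in inertia becomes gradual) is identical.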
A Closer Look at Transformers for Time Series Forecasting: Understanding Why They Work and Where They Struggle
Accept (poster)
Summary: The paper investigates the effectiveness of Transformer-based models for time series forecasting, focusing on why simpler Transformers outperform more complex ones. These findings include that intra-variate dependencies dominate the performance of existing forecasting benchmarks, tokenization/channel independence are critical techniques to capture intra-variate patterns, and Z-score normalization significantly improves performance on non-stationary forecasting benchmarks. The authors delve into several representative Transformers on commonly adopted benchmarks and synthetic datasets. Claims And Evidence: > The assumption $H\left(\hat{\mathbf{x}}_j \mid \mathbf{x}_i, \mathbf{x}_{/ i}\right)=0$ (line 166) This assumption is not right. The authors regard the model as a deterministic forecaster. However, the model is trained with probabilistic distributions, e.g., MSE specifies a Gaussian distribution. > Point-wise Transformers underperform due to their weak capability of capturing intra-variate patterns (supported by Intra MI scores and synthetic experiments). The claim that Transformers are less effective at capturing patterns within a single variate may not only stem from their tokenization (counterexamples such as recent pre-trained Transformers Time-MoE and Chronos). In this regard, the authors should also consider the sufficiency of training samples. > Skip connections are crucial for capturing intra-variate dependencies but unsuitable to capture inter-variate patterns. This conclusion can be highly dependent on the selected architecture of iTransformer. Broader validation on more types of Transformers (e.g., Transformers in Table 1) is needed. Methods And Evaluation Criteria: * The MI-based metrics assume deterministic outputs, which may not hold for probabilistic distribution for optimization. Theoretical Claims: See above. Experimental Designs Or Analyses: See Claims And Evidence. 
Supplementary Material: The supplementary material provides the complete results of the experiments. Relation To Broader Scientific Literature: Several of the authors' findings in this paper have been mentioned in previous works: * About the influence of Channel Independence: Rethinking Channel Dependence for Multivariate Time Series Forecasting: Learning from Leading Indicators. ICLR 2024. * Z-Norm addresses distributional shift: Reversible Instance Normalization for Accurate Time-Series Forecasting against Distribution Shift. ICLR 2022. Essential References Not Discussed: The aforementioned works should also be cited. Other Strengths And Weaknesses: * Strength: The metric of mutual information for time series forecasting is original. * Weakness: (1) The findings presented by this paper need more comprehensive validation (e.g., evaluation on short-term forecasting datasets). (2) The paper could be further improved by providing insights into solutions to address these issues. (3) The description of data synthesis takes up too much space in the main text, while the conclusions drawn from the experiments are presented in a disorganized manner. Other Comments Or Suggestions: Suggestion: (1) Include error bars in evaluations to assess variance. (2) Instead of assessing different types of Transformers, presenting respective ablation studies on disentangled components (e.g., tokenization, attention mechanism, channel independence, and Z-norm) can exclude their mutual influence. (3) It would be more helpful to provide showcases of synthetic datasets (under different configurations of $\alpha$ and $\gamma$) to illustrate the pipeline of Figure 1. Questions For Authors: The main conclusions are drawn by comparing the model's performance without multiple runs. Is the performance mutable and easily influenced by the training process? The authors need to provide the training configurations and error bars (at least in the appendix). Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you very much for your comments. We truly appreciate your effort and time for reviewing our work. *Response to the comment challenging the assumption that the model's output is deterministic:* &nbsp;&nbsp;&nbsp;&nbsp; While MSE is linked to the assumption of Gaussian noise during training, it does not make the model's predictions stochastic. In this case, the model trained with MSE produces a single, fixed output for a given input. To obtain truly probabilistic or stochastic outputs, one would need to model and sample from the predictive distribution explicitly, such as through Bayesian methods or by estimating parameters of a distribution and sampling accordingly. *Response to the comment regarding pre-trained Transformers such as Time-MoE and Chronos:* &nbsp;&nbsp;&nbsp;&nbsp; As stated at the end of Sec. Related Work, our study focuses on lightweight transformer architectures that are designed to be trained from scratch for an individual dataset. We did not include pre-trained models as they typically follow different training protocols and involve more complex architectures, making fair and transparent comparisons difficult. Additionally, Time-MoE uses a decoder-only architecture, which is fundamentally different from the encoder-based structures used in the lightweight models. Chronos, on the other hand, tokenizes time series into discrete bins through simple scaling and quantization, a strategy that differs significantly from the point-wise, patch-wise, and variate-wise tokenization. We appreciate the suggestion and will include references in the revision. *Response to the comment regarding broader validation of the importance of skip-connection:* &nbsp;&nbsp;&nbsp;&nbsp; We conducted the skip-connection ablation study to explore the following questions (Line 327): “Why do transformers with basic attention mechanisms perform well in time series forecasting? 
Which components of the basic transformer architecture contribute most to this success?” Our focus is specifically on **basic and effective** transformer architectures. iTransformer meets both criteria, and we consider its structure representative for understanding key design components in transformer-based models. We analyzed skip connections in iTransformer because we hypothesized that they play a key role in learning intra-variate patterns, especially within an architecture designed to support inter-variate attention. Moreover, in models using point-wise tokens, skip connections primarily enhance inter-variate patterns, which differs fundamentally from the goal of this ablation study. *Response to comments regarding additional references.* &nbsp;&nbsp;&nbsp;&nbsp; Thank you for your suggestions. While we agree that the suggested papers are relevant to the broader topic, we believe that the specific findings of our work are **not** addressed in them. &nbsp;&nbsp;&nbsp;&nbsp; [1] Rethinking Channel Dependence... ICLR 2024 &nbsp;&nbsp;&nbsp;&nbsp; [1] proposes LIFT, a plugin method designed to identify and utilize locally stationary lead-lag relationships between variates to improve forecasting performance. While it focuses on modeling lead-lag dependencies, our work evaluated the effectiveness of various models in capturing general inter-variate relationships. The mutual information-based scores we propose are model-agnostic and capable of assessing inter-variate dependencies without being limited to specific relationship types. &nbsp;&nbsp;&nbsp;&nbsp; [2] Reversible Instance Normalization... ICLR 2022. &nbsp;&nbsp;&nbsp;&nbsp; Reversible Instance Normalization (RevIN), a method similar to Z-score normalization, was designed to address distributional shifts in time series data. 
A key difference is that RevIN incorporates learnable parameters within its normalization process, whereas the Z-score normalization examined in our study is the standard, non-learnable version. Moreover, our findings **contradict** the suggestion that "Z-Norm addresses distributional shift." Specifically, we observed that Z-score normalization degraded model performance on synthetic datasets which are non-stationary with monotonic trends (Table 4). *Response to other suggestions and questions:* &nbsp;&nbsp;&nbsp;&nbsp; 1. "...include error bars", "...without multiple runs", "...provide training configurations". &nbsp;&nbsp;&nbsp;&nbsp; As stated in Sec. Experiments (Line 216 - 219): "All the experimental results in this work are averaged over 3 runs with different random seeds. Standard deviation of the results are provided in the Appendix." Both Fig. 3 and 4 include error bars. We will add hyperparameter configurations in the appendix in the revised manuscript. &nbsp;&nbsp;&nbsp;&nbsp; 2. "...provide showcases of synthetic datasets". &nbsp;&nbsp;&nbsp;&nbsp; We will add visualizations of synthetic datasets in the revised manuscript.
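For concreteness, the standard, non-learnable Z-score normalization discussed in this exchange can be sketched as follows. This is a minimal illustration, not the authors' code; the function names and the example window values are hypothetical:

```python
import statistics

def z_normalize(window):
    """Standard (non-learnable) Z-score normalization of one variate's input window."""
    mu = statistics.fmean(window)
    sigma = statistics.pstdev(window) or 1.0  # guard against constant windows
    return [(v - mu) / sigma for v in window], mu, sigma

def z_denormalize(forecast, mu, sigma):
    """Map the model's normalized forecast back to the original scale."""
    return [v * sigma + mu for v in forecast]

# Normalize an input window, then invert the transform on the "forecast".
normed, mu, sigma = z_normalize([10.0, 12.0, 14.0, 16.0])
restored = z_denormalize(normed, mu, sigma)
```

Unlike RevIN, there are no parameters to learn here: the statistics are recomputed per instance, which is why such a transform can hurt on series with monotonic trends, where the future lies outside the input window's range.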
Summary: The paper focuses on a recently heated and important topic in time series forecasting: conducting further evaluation of previously proposed models, considering not only how a model can improve forecasting performance, but also how performance changes relate to the characteristics of the datasets. While the paper does not introduce a novel model architecture, it provides valuable insights into the generalization ability of Transformer-based models beyond widely used benchmark datasets (many of them are poorly designed, see **Other Strengths And Weaknesses**). Claims And Evidence: See limitation with the **experimental designs** and **questions** to better support the claims. Methods And Evaluation Criteria: The evaluation is conducted by averaging across the forecasting horizon, but a more comprehensive approach would be to also assess the model's predictability at each forecasting horizon independently, as recommended in the time series literature [1,2]. This is especially important for a paper claiming to reduce error accumulation over the forecasting horizon. [1] Another look at measures of forecast accuracy. [2] Forecasting: Principles and Practice. Chapter 3. Theoretical Claims: See limitation with the **experimental designs** and **questions** to better support the claims. Experimental Designs Or Analyses: The experiments primarily rely on benchmark datasets that have already been used to evaluate the selected Transformer models. From this perspective, this paper is mainly a reproduction of the previous models on already-tested datasets. However, these benchmark datasets alone are insufficient to draw a comprehensive understanding of how Transformers perform in time series forecasting (see Weaknesses).
It would be more valuable to include additional high-quality datasets, particularly from diverse domains such as weather forecasting (WeatherBench, etc.), power price prediction (see the electricity price dataset [1] used in TimeXer [2]), and other datasets that have long been established in specialized fields. This would help assess the models' robustness and generalization across different types of time series data. - [1] Forecasting day-ahead electricity prices: A review of state-of-the-art algorithms, best practices and an open-access benchmark - [2] TimeXer: Empowering Transformers for Time Series Forecasting with Exogenous Variables Supplementary Material: The appendix mainly discusses the used datasets and further results. It would be better to optimise the table design, e.g., bold the best-performing results and use professional-quality tables with booktabs. Relation To Broader Scientific Literature: see *Essential References* Essential References Not Discussed: There has been work discussing how dataset characteristics could influence a model's predictive capacity [1]. Traditional methods have also included many modelling/preprocessing designs for different datasets (e.g., ARIMA with differencing for non-stationary datasets). Such modelling design should also be considered with modern Transformer models for time series forecasting, and it would increase the impact of this paper to provide more comprehensive guidance on how data characteristics can guide the development/choice of deep learning models. - [1]: Forecast Evaluation for Data Scientists: Common Pitfalls and Best Practices Other Strengths And Weaknesses: **Data set:** It has been argued that current benchmark datasets lack general predictive power (see: https://cbergmeir.com/talks/neurips2024/). For instance, the weather dataset requires a longer training time span (like PanguWeather [1], Aurora [2], ClimaX [3], etc.)
and more external indicators, and the electricity load in the ETT dataset is also closely related to the weather, so it could potentially benefit from the inclusion of weather indicators. The reliance on such low-quality datasets raises concerns about the generalisability of the conclusions drawn in this paper to real-life scenarios. A good thing about this paper is that it also includes a synthetic dataset, which to some extent alleviates this concern. - [1] Pangu-Weather: A 3D High-Resolution Model for Fast and Accurate Global Weather Forecast - [2] Aurora: A Foundation Model of the Atmosphere - [3] ClimaX: A foundation model for weather and climate Other Comments Or Suggestions: see **questions for authors** Questions For Authors: **Synthetic data** The use of synthetic datasets is a promising approach to explore how dataset characteristics affect model performance. However, several aspects of the dataset design would benefit from further clarification: - **Limited Complexity**: The current design includes only two variates, which may not adequately reflect the complexity of real-world multivariate time series that often involve higher dimensionality and more intricate interactions. Could the authors clarify the rationale behind this choice and whether extending the dataset to more variates was considered? - **Simplified Dependency Structure**: The dependency parameter $\alpha$ appears to introduce only linear dependencies between the variates. This design may not sufficiently represent the diversity of relationships observed in real-world data, where dependencies are often non-linear, lagged, or state-dependent. Could the authors clarify whether non-linear or dynamic dependencies were considered, and how the dataset design could be extended to better capture such patterns? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful comments. We truly appreciate your effort and time for reviewing our work. *Response to comments regarding additional metrics for evaluating forecasting accuracy:* &nbsp;&nbsp;&nbsp;&nbsp; In this work, our primary focus is on understanding how different token representations within the attention mechanism lead to significant variations in model performance. We report MAE and MSE because they are the most widely used evaluation criterion in the literature, allowing for easy comparison with the related works. We acknowledge that this is not sufficient to evaluate models comprehensively. This is also the reason we introduce several mutual information-based scores in this work to explore model behaviour in capturing interactions between variates. The broader question of how to design and apply appropriate evaluation metrics for time series forecasting is a substantial topic in itself, requiring holistic analysis of both datasets and model behaviour. We consider this a valuable direction for future research. *Response to comments regarding the inclusion of additional datasets:* &nbsp;&nbsp;&nbsp;&nbsp; We fully agree that the benchmark datasets commonly used for evaluating time series forecasting models should be significantly expanded. This is one of the key points we aim to highlight through our work. In this study, we selected the most widely used datasets in the literature, as they have become a standard reference for comparing model performance. However, based on our analysis, these benchmarks are limited in scope and do not sufficiently reflect the diversity of real-world forecasting challenges. To address this, we present results not only on these standard datasets but also on eight synthetic datasets and two real-world healthcare datasets, which help reveal the limitations of relying solely on several homogeneous benchmarks. 
We appreciate the suggestion and will examine the recommended datasets in our experimental settings. *Response to comments regarding essential references:* &nbsp;&nbsp;&nbsp;&nbsp; Thank you for the valuable suggestion. We agree that considering dataset characteristics and appropriate modelling strategies—like traditional methods such as ARIMA—can greatly benefit modern time series forecasting. We see this as an important direction for future work, which should ideally be addressed alongside the analysis of dataset characteristics and the design of suitable evaluation metrics. We'll acknowledge this point in the revised manuscript. *Response to questions:* 1. Limited Complexity -- Why only two variates included in the synthetic data? We used only two variates in the synthetic data to maintain clarity in analyzing inter-variate dependencies. We had considered generating more variates; however, it made the analysis of results much less clear. For instance, introducing a third variate would require defining its dependency on the other two, resulting in two additional hyperparameters. Moreover, since the second variate is already dependent on the first, the dependency of the third variate would effectively be influenced by three parameters. This added complexity would make the analysis less transparent and interpretable. 2. Simplified Dependency Structure -- Why only linear dependencies between variates in the synthetic data? Similarly, linear dependencies are straightforward to interpret and analyze. We had considered increasing the complexity of the dependencies if the models demonstrated strong performance in capturing linear relationships. However, even with these simple linear dependencies, we observed that patch-wise and variate-wise models still rely heavily on intra-variate dependency except in cases where the inter-variate dependency is extremely high (e.g. $\alpha = 0.8$).
Given this, we did not see a clear benefit in introducing more complex dependencies at this stage.
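The two-variate synthetic setup discussed in this thread can be sketched as follows. The exact generator is not specified here, so the AR(1) form, the way $\alpha$ mixes the two variates, and all parameter names are assumptions for illustration only:

```python
import random

def make_synthetic(T=500, alpha=0.4, gamma=0.95, noise=0.1, seed=0):
    """Hypothetical two-variate generator: variate 1 is an AR(1) process with
    autocorrelation coefficient gamma; variate 2 mixes its own AR(1) dynamics
    with a linear dependency on variate 1, weighted by alpha."""
    rng = random.Random(seed)
    x1, x2 = [0.0], [0.0]
    for _ in range(T - 1):
        x1.append(gamma * x1[-1] + noise * rng.gauss(0, 1))
        x2.append((1 - alpha) * gamma * x2[-1] + alpha * x1[-1] + noise * rng.gauss(0, 1))
    return x1, x2
```

Under this sketch, $\alpha = 0$ makes the variates independent, $\gamma = 0.5$ gives weak intra-variate memory, and large $\alpha$ makes variate 2 mostly a noisy echo of variate 1, matching the regimes compared in the rebuttal.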
Summary: There have been many proposals of transformer architectures for time series forecasting; some of them are simpler, some are more sophisticated, some work well, and some struggle. This work examines why some of them work better and some struggle. In doing so, it uses an existing classification (Wang et al 2024) into point-wise, patch-wise, and variate-wise tokens, takes a representative of each one, and introduces a mutual information metric and additional synthetic datasets to test them. They show that intra-variate dependencies are the primary source of contribution to prediction performance, while inter-variate dependencies play a rather minor part. Moreover, Z-score normalisation and skip connections play a crucial role in their ablation studies. They validate these insights through synthetic and real-world datasets. ##UPDATE AFTER REBUTTAL## I maintain my initial positive score after the rebuttal, thanks to the quality of the work and the authors' thorough addressing of my questions. Claims And Evidence: In general, claims and evidence are in line and are very good. There are some claims that are hard to justify in the presented tables: Claim 1: "removing skip connection notably degrades the performance across most datasets, except for synthetic datasets with high inter-variate dependencies (α = 0.8) or low autocorrelation (γ = 0.5)." The difference seems too minuscule, or does not exist: mainly on the ones with no interrelation (α = 0). Moreover, low autocorrelation does not seem to be degraded at all. Claim 2: "Replacing the variate-independent decoder with a variate-dependent decoder improves performance on synthetic datasets with high inter-variate dependencies (α = 0.8) or low autocorrelation (γ = 0.5)." The results are either so minuscule or non-existent that such a bold claim is difficult to make. Methods And Evaluation Criteria: Definitely makes sense.
In general, I like the mutual information tests and the ablation study ideas. In order to do that, the paper tries to characterise the data we are testing these systems on, and this looks like the obvious viable option. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: I did check all of them. The only weak part is the claims regarding the synthetic datasets. Perhaps this is attributable to the way the synthetic datasets are generated: autoregression and interrelatedness are too simple; more free-form functional variants (inverse relations), nonlinear relations, sinusoidal seasonalities, or aperiodic cycles could justify the results better. Supplementary Material: I did not review the supplementary material, as it mainly multiplies the results. Relation To Broader Scientific Literature: It is related to causal discovery using transformer architectures, e.g., [Kong, Lingbai, et al. "CausalFormer: An Interpretable Transformer for Temporal Causal Discovery." IEEE Transactions on Knowledge and Data Engineering (2024).], since this very much depends on the transformer architecture taken into account. Essential References Not Discussed: Temporal Fusion Transformer (Lim, Bryan, et al. "Temporal fusion transformers for interpretable multi-horizon time series forecasting." International Journal of Forecasting 37.4 (2021): 1748-1764.) Other Strengths And Weaknesses: The paper is very well written, so the clarity is good. The work is original in its methodology and quite significant, since there have been many debates on the usefulness of transformers in forecasting, with the more traditional forecasting community being more skeptical. It can definitely be influential in understanding transformers for forecasting. The paper also sheds light on the influential, controversial paper showing some linear functions do better in forecasting (Zeng, A., Chen, M., Zhang, L., and Xu, Q. Are transformers effective for time series forecasting?
In Proceedings of the AAAI conference on artificial intelligence, volume 37, pp. 11121–11128, 2023.) The role of inter-dependencies between variates per dataset in the performance of certain architectures is a message that makes utter sense. Moreover, although kept a bit short, the finding that Avg Var Corr is not a very crucial metric, and the effect of Z-normalisation on the TIHM and MINDEr datasets, are important findings. Other Comments Or Suggestions: Include TFT in the analysis. Try to come up with a better synthetic data generation that can pronounce your claims more strongly (if they are really the case). Since the mutual information metric is crucial, it needs a better explanation. Questions For Authors: 1) Why did you not include TFT (perhaps the first model) in your results? (It is a highly-cited and commonly practiced successful model.) 2) What is the major role of MAX MI? Does it give us some extra information? 3) Could you explain $\sigma_{ij}$ at the lowest level; is it simply correlation? (The N-sample procedure does not fully click in my mind, nor the way the samples are selected.) Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your encouraging comments. We truly appreciate your effort and time for reviewing our work. *Response to Claims And Evidence:* 1. In Table 3, removing the skip connection leads to a clear performance drop across all benchmark datasets, with particularly significant degradation on Electricity (MAE increases from 0.266 to 0.320) and Traffic (MAE increases from 0.283 to 0.591). On the synthetic datasets, this degradation becomes negligible while $\alpha$ increases under the condition of $\gamma$ = 0.95. In the manuscript, we stated that "removing the skip connection notably degrades performance across most datasets, **except** for synthetic datasets with high inter-variate dependencies ($\alpha$ = 0.8) or low autocorrelation ($\gamma$ = 0.5)," which aligns with your observation that "low autocorrelation does not seem to be degraded at all." We will revise this statement to improve its clarity in the revision. 2. Apologies for the confusion—this statement was incorrect. It should be: "It improves performance on synthetic datasets with high inter-variate dependencies ($\alpha$ = 0.8) under low autocorrelation ($\gamma$ = 0.5)." We will revise the corresponding claim to: "The variate-dependent decoder has the potential to enhance the model’s ability to capture inter-variate interactions, particularly in scenarios with strong dependencies between variates." *Response to broader literature and essential references:* &nbsp;&nbsp;&nbsp;&nbsp; Thank you for your valuable suggestions. We will add discussions of both papers in the Related Work section. *Response to questions:* 1. Why was TFT not included? In this study, we focused on representative transformer architectures from the perspective of token representations within the attention mechanism. TFT, however, is a hybrid model that incorporates variable selection and LSTM-based encoders prior to applying attention. 
As a result, its token representations are less explicit and not as directly comparable to the models we selected. We agree that TFT is a relevant model and will include a discussion of it in the revised manuscript. 2. What is the major role of MAX MI? We propose MAX MI as a measure of the mutual information captured by a model between the most strongly interacting variates. This metric helps assess whether a model is effectively learning inter-variate dependencies, particularly in cases where the number of variates is large and only a few exhibit strong interactions. For instance, in Figure 3, Crossformer achieves the highest MAX MI on the Traffic dataset (which has 862 variates), while its average inter-variate mutual information (Avg. Inter MI) remains lower than that of most other models. 3. Explanation of $\sigma_{ij}$? $\sigma_{ij}$ estimates the extent to which changes in the prediction of variate $j$ are caused by changes in variate $i$. Unlike Pearson's correlation, it is more versatile as it captures both linear and non-linear relationships. Mathematically, $\sigma_{ij}^2$ is the variance of the predictions of variate $j$ conditioned on inputs $\mathbf{x}$ where all variates are held fixed except for variate $i$. To estimate this conditional variance, the variation of variate $i$ is introduced through N different samples by augmenting original samples in the dataset. Specifically, each original sample is augmented into N = 5 versions, differing only in the value of variate $i$: one instance is set to zero, one retains the original value, and the remaining instances are generated by adding Gaussian noise of varying strengths to the original value. We will amend the explanation in the revised manuscript to improve the clarity of the paper.
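The $\sigma_{ij}$ estimation procedure described in this rebuttal can be sketched as follows. This is a simplified illustration, not the authors' implementation: each variate is collapsed to a single scalar for brevity, and `model`, `noise_levels`, and the toy forecaster are hypothetical:

```python
import random
import statistics

def sigma_ij(model, sample, i, j, noise_levels=(0.1, 0.5, 1.0), seed=0):
    """Estimate how much the prediction of variate j varies when only variate i
    of the input is perturbed. N = 5 augmented inputs: one zeroed, one original,
    and three with Gaussian noise of varying strengths, as in the rebuttal."""
    rng = random.Random(seed)
    variants = [0.0, sample[i]] + [sample[i] + s * rng.gauss(0, 1) for s in noise_levels]
    preds = []
    for v in variants:
        augmented = list(sample)
        augmented[i] = v  # change only variate i, hold all others fixed
        preds.append(model(augmented)[j])
    return statistics.pstdev(preds)

# Toy forecaster: variate 0 copies itself; variate 1 depends on both inputs.
toy = lambda x: [x[0], 0.5 * x[0] + x[1]]
```

With this toy forecaster, perturbing variate 1 leaves the prediction of variate 0 unchanged ($\sigma_{10} = 0$), while perturbing variate 0 moves the prediction of variate 1 ($\sigma_{01} > 0$), illustrating why the score captures directional dependence rather than simple correlation.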
Summary: The authors survey the literature on time-series forecasting with transformers, and divide previously published approaches into 3 categories. Given multiple time-varying signals (variates), the signals can be chopped into tokens along the time axis, along the variate axis (one token per variate), or both. The authors then analyze the performance of various transformer architectures on a variety of commonly used benchmark data sets. They find that intra-variate dependencies in the data are much more predictive than inter-variate dependencies. Point-wise models, which bundle all variates together into a single token, perform poorly because they fail to capture patterns within a single variate. The authors also design a set of synthetic datasets, in which the inter-variate and intra-variate dependencies can be precisely controlled, to further back up their findings. Claims And Evidence: The authors do a good job on the literature survey, covering a variety of recently published architectures. The experiments, using both benchmark datasets and custom synthetic datasets also seem well-designed. The author's analysis is both thoughtful and detailed. Methods And Evaluation Criteria: The methods and evaluations make sense. Theoretical Claims: The only theory was in the equations for mutual-information. I did not find any errors, but this is not my area of expertise. Experimental Designs Or Analyses: The experiments seem to be well designed. Supplementary Material: Yes. All of it. Relation To Broader Scientific Literature: The literature survey, which compares the different architectures of various models, was particularly well done. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: I do have one question. I am not very familiar with time-series forecasting, but I was a bit confused after reading this paper. 
The variate-wise models, like iTransformer, do not use attention to capture temporal (intra-variate) dependencies, and they still perform well. But if attention is not being used to capture temporal dependencies, then why use a transformer for time-series modeling at all? In normal language modeling, the whole point of attention is that it's very good at capturing long-range temporal dependencies. If attention is not good at capturing such dependencies in time series data, then perhaps the transformer is not an appropriate architecture. The authors further reinforce this idea by pointing out that the skip connection, which bypasses attention, is crucial to model performance. Am I missing something? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your encouraging comments. We truly appreciate your effort and time for reviewing our work. *Response to questions:* 1. Why use transformers? In previous work on developing new transformer architectures for time series forecasting, there has been little validation of whether the attention mechanism functions as intended. Although several studies aim to enhance transformers' ability to capture inter-variate dependencies, none have explicitly verified this in relation to the underlying data characteristics and model outcomes. This lack of validation motivated us to explore the issue in our study. Our findings show that patch-wise and variate-wise transformers effectively capture intra-variate patterns, which remains a useful property in time series forecasting. Additionally, experiments with synthetic datasets demonstrate that certain transformer architectures—such as Crossformer—are capable of learning inter-variate patterns. This suggests that transformers can still be effective in scenarios where such patterns are important. We hope that our findings can offer useful insights for designing new architectures or for more effectively applying transformers in related applications. 2. Is attention useful in iTransformer? As we understand it, attention is still useful in iTransformer. For example, in our skip-connection ablation study, we observed that iTransformer performed better without skip connections on synthetic datasets with high inter-variate dependencies. This suggests that, in such scenarios, the attention mechanism alone is still capable of capturing meaningful patterns. Additionally, the skip connection in iTransformer is added to the attention output—not as a simple bypass, but rather as a means to reinforce the self-attention for each variate. --- Rebuttal Comment 1.1: Comment: Thank you for the explanation.
Alpha-SQL: Zero-Shot Text-to-SQL using Monte Carlo Tree Search
Accept (poster)
Summary: This paper proposes a novel MCTS-based approach to enhance the zero-shot performance of LLMs in the text-to-SQL domain. The authors designed a set of task-specific actions such as question rephrasing, schema selection, SQL generation, column value identification, column function identification, and SQL revision. The authors used an LLM-as-action-model to enhance the reasoning capabilities of the LLMs to generate CoT traces based on the context of the problem. Based on the action space defined in the paper, the authors proposed using MCTS to effectively generate SQL queries for the given NL queries. In order to obtain the Q-values for each action and node, they proposed using self-consistency-based query scoring. Their proposed approach has improved the performance of open-source LLMs, mainly from the Qwen family, and outperformed some previous works using larger models like the GPT family. Claims And Evidence: The main claim about improving the performance of smaller large language models has been demonstrated by comparing the performance with current SOTA methods, and the base model performance for some models is provided in Table 4. However, in order to truly demonstrate the impact of their proposed test-time compute approach, I believe a comparison with the best-of-N method (directly generate N candidate SQL queries, then use self-consistency to select the most consistent answer), which is the common strong baseline for any test-time compute method, would truly convince me about the effectiveness of this method. Additionally, the base performance for the larger Qwen 14B and 32B models is not provided in Table 4, but I think seeing the performance gain for these models is also important, because I believe the performance gap could be smaller for these models. Methods And Evaluation Criteria: Most of the SOTA approaches have been considered in this paper, and the two famous benchmarks Spider and BIRD are used.
Theoretical Claims: Yes, the theoretical claims about the self-supervised rewards and MCTS are aligned with previous works. Experimental Designs Or Analyses: Yes, the experimental designs and analyses mostly make sense. Table 2 and Table 3 provide the comparison with SOTA text-to-SQL methods, including the latest methods. Table 4 has the issue of missing the baseline performance for the 14B and 32B models. Table 5 also provides a detailed ablation study on the action space, which shows the importance of the proposed decomposition. Supplementary Material: Yes, the prompts and algorithm provided for Alpha-SQL were reviewed. Relation To Broader Scientific Literature: The proposed method can be considered one of the novel and effective test-time compute methods to improve the zero-shot performance of LLMs. Specifically, with open-source models they were able to match the performance of larger models like the GPT family. Essential References Not Discussed: No Other Strengths And Weaknesses: Remaining weaknesses: 1) A detailed analysis of the token usage of this method is required, and a comparison with a baseline like the best-of-N method is very important. 2) A detailed analysis of the latency of this method is also required; most text-to-SQL systems need to respond within a few seconds to be useful in real-world settings. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
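The best-of-N baseline requested here (generate N candidate SQL queries, then pick the most consistent answer) can be sketched as follows. This is a minimal illustration, not the paper's implementation; `execute` is a hypothetical callback returning a hashable representation of a query's execution result, and the candidate strings are toy examples:

```python
from collections import Counter

def self_consistency_select(candidates, execute):
    """Best-of-N selection: execute each candidate SQL query and return the
    candidate whose execution result is most common across the N samples."""
    results = [(sql, execute(sql)) for sql in candidates]
    counts = Counter(res for _, res in results)
    best_result, _ = counts.most_common(1)[0]
    for sql, res in results:
        if res == best_result:
            return sql  # first candidate that produced the majority result

# Toy demonstration with a stub "execution" that canonicalizes whitespace,
# so the two syntactically different but equivalent queries agree.
picked = self_consistency_select(
    ["SELECT 1", "SELECT  1", "SELECT 2"], lambda s: s.replace(" ", "")
)
```

The key difference the rebuttal draws is that this baseline only diversifies final outputs from one fixed prompt state, whereas Alpha-SQL's MCTS diversifies the reasoning paths that lead to each candidate before this kind of voting is applied.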
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your comprehensive review and for providing a clear summary of our proposed Alpha-SQL method. We understand your concerns regarding the need for a comparison against a "Best-of-N" baseline, the omission of base model performance for the larger Qwen models in Table 4, and the lack of detailed analysis of computational costs (token usage and latency). These are valid points crucial for thoroughly evaluating a test-time compute method like Alpha-SQL. We appreciate the opportunity to address these concerns. Below, we address each point in detail: **1. Comparison with Best-of-N Baseline** > Reviewer Concern: "...a comparison with the best-of-N method (directly generate N candidate SQL queries then use self-consistency to select the most consistent answer) which is the common strong baseline for any test-time compute methods would truly convince me about the effectiveness of this method." > Reviewer Weakness: "... a comparison with a baseline like Best-of-N method is very important." - **Our Response**: Thank you for suggesting the comparison with the Best-of-N baseline. We agree that this is a relevant and strong baseline for evaluating test-time inference methods employing self-consistency. While Best-of-N explores the model's output diversity from a single prompt state, Alpha-SQL uses MCTS to explore diverse reasoning paths before final SQL generation. - Comparison with Best-of-N: | Base LLM | Direct Prompting | Direct Prompting with Best-of-N | Alpha-SQL | |------------------------------|------------------|-----------------------------------|-----------| | Qwen2.5-Coder-Instruct-7B | 48.4% | 56.3% | 66.8% | | Qwen2.5-Coder-Instruct-14B | 57.4% | 62.3% | 68.7% | | Qwen2.5-Coder-Instruct-32B | 62.6% | 63.4% | 69.7% | - The results clearly demonstrate that Alpha-SQL significantly outperforms the Best-of-N baseline across all tested Qwen model sizes.
For instance, with the 7B model, Alpha-SQL achieves 66.8% EX compared to 56.3% for Best-of-N (+10.5%). Similarly, with the 32B model, Alpha-SQL reaches 69.7% compared to 63.4% (+6.3%). This confirms that the structured reasoning path exploration enabled by Alpha-SQL's MCTS framework offers substantial benefits beyond simply sampling multiple outputs from a fixed initial prompt state, even when using self-consistency for selection. We will add these comparative results and include this analysis in the revised manuscript. **2. Missing Base Model Performance (Qwen 14B & 32B in Table 4)** > Reviewer Concern: "...the base performance for the larger Qwen 14B and 32B models are not provided in table 4 but I think seeing the performance gain for these models is also important cause I believe the performance gap could be smaller for these models." > Reviewer Note on Table 4: "Table 4 has the issue of missing the baseline performance for 14B and 32B models." - **Our Response**: Thank you for pointing out the missing base model performance data for the Qwen2.5-Coder-14B and -32B models in Table 4. We completely agree that showing the performance gain achieved by Alpha-SQL over the respective base models is important for a full evaluation across different model sizes, and we appreciate your highlighting this omission. As shown in the table provided in our response to your first point (regarding fair comparison and Best-of-N baselines), we have now completed the evaluation to determine the baseline performance for these models. We will update Table 4 in the revised manuscript to include this complete comparison and add a discussion analyzing these performance gains. **3. Cost Analysis: Token Usage and Latency** > Reviewer Weakness: "A detailed analysis on the token usage of this method is required..."
> Reviewer Weakness: "A detailed analysis on the latency of this method is also required, most of the text-to-SQL systems require performing in few seconds to be able to useful in real world settings." - **Our Response**: We understand the need for a detailed analysis of computational costs, specifically token usage and inference latency, to assess the practical viability of Alpha-SQL. - Due to rebuttal length limitations, we kindly refer the reviewer to our detailed response to **Reviewer guJX, specifically: Point 1 (Fair Comparison...) and Point 2 (Computational Cost Analysis...)**. We believe those sections fully address the concerns regarding token usage and latency analysis. We believe these additions will significantly strengthen the paper's evaluation by providing crucial comparative data and cost context. We appreciate the reviewer's constructive feedback, which helps us improve the rigor of our study, and we hope the revised manuscript will meet the standards for acceptance. Sincerely, The Authors
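For concreteness, the Best-of-N baseline discussed in point 1 above (sample N candidate SQL queries from a single prompt state, then select the most execution-consistent one) could be sketched as follows. This is an editor's illustrative sketch: `llm_generate` and `execute` are hypothetical stand-ins for the model call and the database executor, not part of the paper's code.

```python
def best_of_n_sql(llm_generate, execute, question, schema, n=8):
    """Best-of-N baseline: sample N candidate SQL queries from one prompt
    state, execute each, and return a candidate from the largest group of
    candidates agreeing on their execution result (majority vote)."""
    candidates = [llm_generate(question, schema, temperature=1.0) for _ in range(n)]
    groups = {}  # execution-result signature -> list of SQL strings
    for sql in candidates:
        try:
            signature = frozenset(execute(sql))  # order-insensitive result key
        except Exception:
            continue  # queries that fail to execute never win the vote
        groups.setdefault(signature, []).append(sql)
    if not groups:
        return candidates[0]  # all candidates failed; fall back arbitrarily
    return max(groups.values(), key=len)[0]
```

Under this sketch, the per-question cost is N model calls plus up to N query executions, which is the kind of token/latency accounting requested above.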
Summary: The paper presents a novel approach to Text-to-SQL that eliminates the need for fine-tuning by leveraging the reasoning capabilities of large language models (LLMs). Alpha-SQL employs a Monte Carlo Tree Search (MCTS) framework to progressively construct SQL queries by breaking them down into smaller, manageable sub-tasks. The core component, LLM-as-Action-Model, dynamically generates SQL construction actions and provides step-by-step reasoning, maintaining context throughout the query-building process. A self-supervised reward function evaluates the quality of candidate SQL queries by computing a self-consistency score, helping to refine the exploration and prioritize promising paths. Experimentally, Alpha-SQL achieves a 69.7% execution accuracy on the BIRD development set. Claims And Evidence: The claims made in this submission are not supported by clear and convincing evidence. The main concern is that the authors' proposition of formulating Text-to-SQL as a search problem and modeling it using the Monte Carlo Tree Search (MCTS) framework is not convincing. The paper designs seven actions for text-to-SQL based on MCTS. However, the reward model, which relies on SQL execution results, can only provide rewards for the two actions related to SQL generation (Action 5: SQL Generation; Action 6: SQL Revision). For the prior actions (Actions 1-4: Question Rephrasing, Schema Selection, Column Value Identification, Column Function Identification), no rewards are obtained, as they do not lead to SQL generation results. Consequently, the reasoning path search space is limited, as the reward function is not comprehensive. As a result, the proposed Alpha-SQL still adheres to the traditional architecture, performing steps such as Question Rephrasing and Schema Selection sequentially, before considering SQL execution results in the SQL generation and refinement phases, similar to approaches like CHESS [1].
[1] CHESS: Contextual Harnessing for Efficient SQL Synthesis Methods And Evaluation Criteria: The proposed approach of treating Text-to-SQL as a search problem using the Monte Carlo Tree Search (MCTS) framework does not make sense. The paper lacks theoretical support for formulating Text-to-SQL as a search problem. Although Table 1 provides a search space for each action and Section 5.2 mentions 3000 possible reasoning paths in a Text-to-SQL task, in practice, based on empirical experience and previous research, the main pipeline of a Text-to-SQL framework is nearly fixed. For instance, as outlined in this paper, the sequence of Question Rephrasing, Schema Selection, Column Value Identification, and Column Function Identification, followed by SQL generation and SQL refinement, is an effective process already extensively discussed in papers like CHESS [1]. This implies that the majority of the claimed 3000 reasoning paths are practically ineffective. For example, the difference between a sample reasoning path like "Question Rephrasing to SQL Generation" and a complete reasoning path such as "Question Rephrasing, Schema Selection, Column Value Identification, Column Function Identification, SQL Generation, SQL Refinement" is significant. Naturally, the latter provides a more detailed analysis and yields more accurate results. Given this, why spend extra computational resources to decide every detail of the reasoning actions? The fact that the reward model is based solely on SQL execution feedback further highlights the inconsistency of defining Text-to-SQL as a search problem. Since the proposed reward function only impacts the SQL generation and refinement stages, it does not significantly differ from prior methods like CHESS [1] and E-SQL [2], which also utilize SQL execution feedback for refinement.
Moreover, unlike traditional workflows that execute actions sequentially, Alpha-SQL requires multiple interactions with the LLM and several rounds of expansion and backpropagation. This could lead to longer runtime when using the same LLM as the base model. An analysis of the inference-time cost under the same settings should be presented in a limitations section, which is currently missing in the paper. [1] CHESS: Contextual Harnessing for Efficient SQL Synthesis [2] E-SQL: Direct Schema Linking via Question Enrichment in Text-to-SQL Theoretical Claims: The paper does not include any proofs for theoretical claims. Experimental Designs Or Analyses: A concern is that the experimental section does not compare baselines using the same LLM as the base model. The proposed Alpha-SQL employs the Qwen2.5-Coder series, while the baselines predominantly use GPT-4. Using different LLMs for baselines may introduce bias that affects the results. Since GPT-4 also supports zero-shot settings, the authors should provide experimental results of Alpha-SQL based on GPT-4 to ensure a fair comparison with the baselines. Supplementary Material: The supplementary material includes the code of the proposed framework. Relation To Broader Scientific Literature: The paper attempts to model Text-to-SQL as a search problem and proposes a Monte Carlo Tree Search (MCTS)-based method. MCTS is an algorithm used for decision-making processes, widely applied in tasks requiring reasoning and strategic planning, by utilizing effective reward mechanisms to aid decision-making. MCTS employs the Upper Confidence Bound (UCB) formula to balance exploration and exploitation. This strategy allows MCTS to efficiently explore high-potential nodes while avoiding premature convergence to local optima [1]. Recently, MCTS has been employed in LLM-based research to enhance reasoning and decision-making capabilities [2].
[1] Monte Carlo Tree Search: A Review of Recent Modifications and Applications [2] A Survey on Large Language Model-based Autonomous Agents Essential References Not Discussed: The paper includes a detailed discussion of related work on Text-to-SQL. However, it would be beneficial to include and discuss the following two papers: [1] E-SQL: Direct Schema Linking via Question Enrichment in Text-to-SQL [2] A Survey on Large Language Model-based Autonomous Agents Other Strengths And Weaknesses: Strengths: 1. The writing and visualization in the paper are clear and straightforward. 2. The paper effectively argues that the paradigm of zero-shot settings is more suitable for the development of LLM-based Text-to-SQL instead of extensive fine-tuning. Weaknesses: 1. Modeling Text-to-SQL as a search problem lacks sufficient theoretical support. Using the MCTS framework to search reasoning paths appears to have limited practical significance in text-to-SQL, as detailed in the above Claims and Evidence and Methods and Evaluation Criteria. 2. The reward model in the proposed MCTS framework is applicable only to the two actions that generate SQL. This methodology is similar to existing approaches with execution feedback, such as CHESS [1] and E-SQL [2]. 3. The experimental section lacks a fair comparison with baselines, as most baselines use GPT-4, while Alpha-SQL employs Qwen2.5-Coder. The paper also does not justify this choice, raising concerns about whether the observed SQL generation results heavily depend on the specific LLM used. 4. The core action component, SQL Generation, is based on an existing method. Given that SQL Generation is a critical step in text-to-SQL, relying on existing research like CHASE-SQL (which has achieved a good performance in the leaderboard) raises concerns about the novelty and effectiveness of the proposed framework. 
[1] CHESS: Contextual Harnessing for Efficient SQL Synthesis [2] E-SQL: Direct Schema Linking via Question Enrichment in Text-to-SQL Other Comments Or Suggestions: N.A. Questions For Authors: Please address the aforementioned questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
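The UCB selection rule mentioned in this review's MCTS background discussion is the standard UCB1 formula. A minimal illustrative sketch follows; the dictionary-based node representation is an assumption made for illustration, not taken from the paper.

```python
import math

def ucb1_select(children, c=1.41):
    """Return the child maximizing mean value + c * sqrt(ln(N_parent) / N_child),
    the UCB1 rule that trades off exploitation (high mean reward) against
    exploration (rarely visited nodes). Unvisited children score infinity,
    so each child is tried at least once before exploitation kicks in."""
    parent_visits = sum(child["visits"] for child in children)

    def ucb(child):
        if child["visits"] == 0:
            return float("inf")
        exploit = child["value"] / child["visits"]
        explore = c * math.sqrt(math.log(parent_visits) / child["visits"])
        return exploit + explore

    return max(children, key=ucb)
```

The exploration constant `c` controls how aggressively low-visit branches are revisited; a larger `c` keeps more of the claimed reasoning-path space alive during search.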
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your detailed feedback. We appreciate the opportunity to clarify our approach, particularly regarding the core concerns about the MCTS framework's validity, the reward mechanism, experimental fairness, and novelty, which we believe may stem from some misunderstandings. **1. Validity and Significance of the MCTS Framework for Text-to-SQL** > Reviewer Concerns: Formulating Text-to-SQL as a search problem is unconvincing/lacks support; main pipeline is nearly fixed (like CHESS), making many paths ineffective; questions value of searching reasoning actions; inconsistency between MCTS claim and reward function impact. - **Our Response**: We respectfully disagree that formulating Text-to-SQL as an MCTS search problem lacks significance. While fixed workflows like CHESS exist, Text-to-SQL often requires flexible adaptation (e.g., for schema complexity, query ambiguity) that MCTS provides. - **Why MCTS is Beneficial**: Alpha-SQL uses MCTS precisely to navigate the choices within the reasoning process, not just execute fixed steps. - **Adaptability**: MCTS allows Alpha-SQL to dynamically select the most useful sequence of preparatory actions (like Schema Selection, Value Identification) for a given query, rather than following a rigid pipeline. It efficiently prunes suboptimal paths while exploring promising variations. **Our analysis presented to Reviewer 5kbR (Point 5) demonstrates this adaptive path selection based on database complexity.** - **Clarifying Reward Propagation**: This seems to be a key point of misunderstanding. The reward obtained after SQL generation is backpropagated through the entire MCTS tree via standard MCTS updates. This directly influences the selection probabilities of all preceding actions (including Actions 1-4) based on their contribution to successful outcomes. This holistic path guidance fundamentally differs from the local feedback/refinement mechanisms in methods like CHESS or E-SQL [2]. 
- **Theoretical Support**: Our contribution is the novel application and formulation of the well-established MCTS algorithm for zero-shot Text-to-SQL, supported by strong empirical results. We will enhance Section 3 to clarify the MCTS reward backpropagation mechanism. **2. Fair Experimental Comparison (Qwen vs. GPT-4)** > Reviewer Concerns: Experiments lack fair comparison using the same base LLM; Alpha-SQL uses Qwen, baselines use GPT-4; need Alpha-SQL results on GPT-4. - **Our Response**: We acknowledge the importance of fair comparison. Due to rebuttal length constraints, we kindly refer you to our detailed response to **Reviewer guJX (Points 1 & 2)**. This includes direct performance comparisons of Qwen-32B and GPT-4o (showing they have similar base capabilities on BIRD-dev) and evaluations of baselines (RSL-SQL, CHESS-SQL) run directly on Qwen-7B. These results demonstrate Alpha-SQL's significant gains originate from the framework itself, not just the base model. **3. Novelty Concerns (SQL Generation)** > Reviewer Concern: Core SQL Generation action is based on existing method (CHASE-SQL), raising concerns about novelty and effectiveness. - **Our Response**: While we leverage insights from prior work like CHASE-SQL for components like SQL Generation, the primary novelty of Alpha-SQL lies in the overall MCTS framework. This includes the dynamic orchestration of actions, adaptive reasoning path construction, and the integration of components into a cohesive, effective system for zero-shot Text-to-SQL. We will refine our discussion to better delineate these contributions. **4. Computational Cost Analysis** > Reviewer Concern: MCTS likely increases runtime; analysis of inference cost needed; limitations section missing. - **Our Response**: We agree cost analysis is essential. 
Our analysis, detailed quantitatively in response to **Reviewer guJX (Points 1 & 2)**, confirms Alpha-SQL achieves higher accuracy but incurs higher latency compared to baselines run on the same model. We commit to adding a dedicated "Computational Cost Analysis" subsection and a "Limitations" section discussing these trade-offs in the revised manuscript. **5. Missing References** > Reviewer Suggestion: Include discussion of E-SQL and the LLM Agents survey. - **Our Response**: Thank you for suggesting E-SQL and the LLM Agents survey. We agree they are relevant and will incorporate discussion of both in our revised related work section. **6. Supplementary Material Clarification** > Reviewer Statement: "There is no supplementary material provided." - **Our Response**: We apologize for any confusion. Supplementary material, including detailed prompts, algorithm code, and running results, was provided with our initial submission and should be accessible via the OpenReview page. We hope that these clarifications and planned revisions will address the reviewer's concerns and lead to a re-evaluation of our work's contributions.
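The reward backpropagation described in point 1 of this rebuttal (a terminal reward after SQL generation propagated up the tree so that preparatory actions share credit) is the standard MCTS backup step. A minimal illustrative sketch, using dictionary-based nodes as an assumption rather than the authors' implementation:

```python
def backpropagate(leaf, reward):
    """Standard MCTS backup: walk from the leaf to the root, incrementing
    visit counts and accumulating the terminal reward. Earlier actions on
    the path (e.g. Schema Selection) thereby share credit for the quality
    of the SQL eventually generated, which is how rewards reach Actions 1-4."""
    node = leaf
    while node is not None:
        node["visits"] += 1
        node["value"] += reward
        node = node["parent"]
```

Combined with a UCB-style selection rule, these updated visit/value statistics are what bias future rollouts toward preparatory-action sequences that led to consistent SQL.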
Summary: This paper introduces a novel zero-shot Monte Carlo Tree Search (MCTS)-based Text-to-SQL approach that constructs SQL queries progressively, enhancing the Text-to-SQL capabilities of Qwen2.5-Coder-32B. The proposed method achieves an execution accuracy of 69.7% on the BIRD dev set and 87.0% on the Spider dev set, surpassing previous approaches. ### update after rebuttal: While I appreciate the authors' rebuttal, I have decided to maintain my score, as most of my concerns remain valid, as detailed in my comments following the rebuttal. Claims And Evidence: The paper claims that a key challenge in zero-shot Text-to-SQL lies in the difficulty of transferring and generalizing knowledge from pre-trained LLMs to the specific task of SQL generation. However, there is no further explanation or experimental evidence supporting this inference, particularly in how this limitation affects complex query mapping. Additionally, while Alpha-SQL achieves a high execution accuracy (EX) score on test sets, the paper lacks a clear discussion of its advantages over existing methods beyond this metric. To strengthen the evaluation, the paper should include analytical experiments such as case studies or statistical analyses to better illustrate Alpha-SQL’s effectiveness and potential improvements in reasoning or structural accuracy. Methods And Evaluation Criteria: This method decomposes SQL generation into multiple subproblems and employs MCTS for test-time scaling, which helps enhance generation quality. The evaluation criteria, such as execution accuracy, BIRD, and Spider, are well-established benchmarks in the field. Theoretical Claims: N/A Experimental Designs Or Analyses: - The main experiments are conducted only on the Qwen2.5-Coder family of models, which differs from the models used by the other baselines. This raises concerns about the validity of the comparisons.
Given that Qwen2.5-Coder-32B has comparable general coding performance to GPT-4o (2024-08-06) and may even outperform it on SQL tasks (e.g., 85.1 vs. 79.8 on Spider, according to [1]), the superiority of Alpha-SQL could stem from the model itself rather than the proposed method. To ensure fair comparisons, all the important methods should be evaluated using the same model, either by running other baselines on Qwen2.5-Coder-32B or by testing Alpha-SQL on GPT-4o/GPT-4. - The computational cost of each method during inference is not clearly discussed. Since different methods may use varying numbers of forward passes, self-consistency samples, SQL execution attempts, and input/output token counts, a direct comparison without accounting for these factors may be unfair. A thorough cost analysis is necessary to contextualize performance gains. - The paper lacks detailed explanations and experimental analyses to highlight the specific performance improvements and challenges addressed by Alpha-SQL, which affects the completeness of the study. [1] Alibaba Group. Qwen2.5 Coder Family[EB/OL]. [2025-03-06]. https://qwenlm.github.io/blog/qwen2.5-coder-family/. Supplementary Material: Yes, A.1, A.2, A.3, and A.4. Relation To Broader Scientific Literature: This paper builds upon prior research in zero-shot Text-to-SQL methods, particularly those leveraging LLMs, such as CHESS, Chase-SQL, and C3-SQL. Unlike previous approaches that generate SQL in a single step, it introduces a novel MCTS-based framework that decomposes SQL generation into subproblems, improving test-time scalability. While prior studies have explored MCTS for structured reasoning, this paper is among the first to apply it effectively to SQL generation. Essential References Not Discussed: No. Other Strengths And Weaknesses: Other Weaknesses: It seems that the proposed method is computationally costly, as it requires task decomposition and MCTS.
Other Comments Or Suggestions: - Sections 3 and 4 contain overlapping details about Alpha-SQL, making the paper somewhat redundant. This repetition reduces the space available for discussing the motivation behind the approach and providing insightful analytical experiments. A clearer separation of conceptual explanations and technical details would improve the paper’s structure. - The first mention of Monte Carlo Tree Search (MCTS) appears in the introduction, yet the citation for it is only provided in Section 3.2. To ensure proper attribution and clarity, the citation should be introduced at its first mention. - In Section 5.2, while previous methods primarily used closed-source models, this paper adopts the Qwen2.5-Coder family. The rationale behind this choice is not clearly explained. The authors should clarify why they opted for Qwen2.5-Coder instead of continuing the trend of using closed-source models, especially given its potential impact on comparability. Questions For Authors: I am very interested in these experimental results, as they directly impact my evaluation of this paper. 1. **Direct Performance Comparison Between Qwen2.5-Coder-32B and GPT-4o on the BIRD Dev Set:** The paper highlights the effectiveness of Qwen2.5-Coder-32B with Alpha-SQL, but it does not provide a direct comparison between Qwen2.5-Coder-32B and GPT-4o on the BIRD dev set. Understanding their relative performance would help determine whether Alpha-SQL’s gains stem from the proposed method itself or the choice of model. 2. **Fair Comparisons with Key Baselines (e.g., MCS-SQL, RSL-SQL):** A fair evaluation should consider consistency across models and computational costs. Specifically: - **Model Selection:** All methods should be evaluated on either GPT-4o (same version) or Qwen2.5-Coder-32B to ensure a valid comparison.
- **Computational Cost:** One possible approach is to compare performance using the same number of self-consistency samples and provide statistics on the number of input tokens, output tokens, and SQL execution times for each method. 3. **Further Validation of Claims in Section 1:** The paper states that prior methods struggle with SQL generation while Alpha-SQL addresses these challenges to some extent. However, further experimental validation is needed to support this claim. For example, an analysis of failure cases from previous methods compared to Alpha-SQL would help illustrate specific improvements and their underlying causes. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your detailed and insightful review of our paper. Below, we address each point in detail: **1. Fair Comparison, Model Choice, and Performance Validation (Addressing Concerns on Experiments, Q1, Q2a, and Rationale for Qwen)** - **Our Response**: We acknowledge the critical importance of fair comparison and understand the concern that the observed performance gains might be attributed solely to the base LLM (Qwen2.5-Coder) rather than the Alpha-SQL framework. We also agree that the rationale for choosing Qwen needs clarification. - **Direct Qwen vs. GPT-4o Prompting Comparison - Addresses Q1**: - To directly address the relative strength of the base models on this task (Q1), we performed a direct prompting comparison between Qwen2.5-Coder-Instruct-32B and GPT-4o on the BIRD dev set using the same simple prompt structure. We also included results using simple self-consistency: | Model | Execution Accuracy | |---------------------------------------------|--------------------| | GPT-4o | 62.3% | | Qwen2.5-Coder-Instruct-32B | 62.6% | | GPT-4o + Self-consistency | 63.2% | | Qwen2.5-Coder-Instruct-32B + Self-consistency | 63.4% | - This comparison reveals that Qwen2.5-Coder-Instruct-32B and GPT-4o exhibit very comparable performance on the BIRD dev set when using simple direct prompting (62.6% vs 62.3%) and even when enhanced with basic self-consistency (63.4% vs 63.2%). This indicates that neither model has a significant inherent advantage over the other for this specific task under these simple zero-shot conditions. 
- **Run Baselines on Qwen2.5-Coder-Instruct-7B**: | Methods | Execution Accuracy | Input Tokens (K) / Question | Output Tokens (K) / Question | Total Tokens (K) / Question | Latency (s) / Question | |-----------|--------------------|-----------------------------|------------------------------|-----------------------------|------------------------| | RSL-SQL | 57.7% | 12.1 | 0.3 | 12.4 | 11.35 | | CHESS-SQL | 61.0% | 327.0 | 24.8 | 351.8 | 284.4 | | Alpha-SQL | 66.8% | 138.0 | 72.2 | 200.2 | 377.1 | - Comparing these results, Alpha-SQL (66.8%) significantly outperforms both RSL-SQL (57.7%) and CHESS-SQL (61.0%) when using the identical base LLM. Specifically, Alpha-SQL achieves a +5.8% absolute gain in execution accuracy over the strongest baseline evaluated here (CHESS-SQL). This result strongly suggests that the performance improvements demonstrated by Alpha-SQL are substantially attributed to our proposed MCTS framework and reasoning path exploration, rather than solely being an effect of the base model choice. We will add these crucial comparative results to the relevant tables in the revised manuscript. **2. Computational Cost Analysis (Addressing Concerns on Cost, Weakness, Q2b)** > Reviewer Concerns: Computational cost not discussed; method seems costly; need for analysis considering forward passes, self-consistency samples, tokens, execution time; comparison needed using same self-consistency N. - **Our Response**: We agree that a discussion of computational cost is essential for contextualizing Alpha-SQL's performance gains, especially given its MCTS nature. We have analyzed the average computational cost per query on the BIRD dev set using the Qwen2.5-Coder-Instruct-7B model. The table above (in point 1) includes Alpha-SQL and the baselines run on the same model. 
- In summary, Alpha-SQL delivers state-of-the-art accuracy among these methods on the same base model, demonstrating better token efficiency than the next best method (CHESS-SQL), but this comes at the cost of increased latency. In future work, we plan to optimize the MCTS process to mitigate this latency, potentially exploring strategies such as heuristic pruning and SQL execution caching mechanisms. **3. Depth of Analysis (Addressing Concerns on Claims, Analysis, Q3)** - **Our Response**: We appreciate the reviewer's push for deeper analysis beyond aggregate execution accuracy to better illustrate how Alpha-SQL improves text-to-SQL generation. - As detailed in our response to **Reviewer 5kbR's point 5 ("Analysis of Common Reasoning Paths")**, our analysis of the reasoning paths chosen by Alpha-SQL demonstrates its ability to dynamically adapt its strategy based on database complexity (e.g., selectively including 'Schema Selection' only when needed). This provides initial insight into its advantages over fixed-sequence methods. --- Rebuttal Comment 1.1: Comment: Thanks for the responses. Considering the overall quality of the paper, I will keep my original rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer guJX, Thank you for acknowledging our rebuttal and for your time reviewing our responses and the additional results provided. We understand that you are maintaining your original rating based on your assessment of the paper's overall quality at this stage. We aimed to thoroughly address each specific concern raised in your initial review through our rebuttal, including providing new experimental results such as the baseline comparisons on the same Qwen model and the detailed cost analysis data (latency and token usage). We want to reaffirm our strong commitment to incorporating *all* the promised additions and revisions into the final revised manuscript.
We believe these changes, made in direct response to the points you and other reviewers highlighted, will significantly strengthen the paper's completeness and clarity. **We appreciate your feedback throughout this process and sincerely hope you might re-evaluate our work, considering the clarifications provided and the efforts invested during this rebuttal phase.** Respectfully, The Authors
Summary: The authors propose a Monte Carlo tree search framework for zero-shot text-to-SQL with LLMs. The action space is a set of sub-tasks whose composition (subject to ordering rules) defines a reasoning path that terminates in a SQL output given a question and database. They generate candidate SQL queries by MCTS rollout, using the LLM to generate the next state given the action, and using consistency in execution accuracy as a self-supervised reward. They report state-of-the-art results on BIRD dev set among zero-shot methods for open-source LLMs. Claims And Evidence: The main claims in the introduction are the introduction of a novel MCTS approach for text-to-SQL along with state-of-the-art results for zero-shot performance on the BIRD benchmark. These are supported by the sections that follow. As a very minor complaint, the authors sometimes claim that the internal nodes of their tree correspond to "partial SQL query states", but this seems misleading as the query is generated entirely by a single action ($A_5$), while most of the remaining actions simply gather context. Methods And Evaluation Criteria: The method makes sense and seems original for text-to-SQL. The self-consistency based reward function is reasonable, though model confidence is not always a good proxy for correctness. One thing that was slightly unclear is how exactly the samples are generated for self-consistency. Evaluation uses standard datasets (Spider / BIRD) and metrics (execution accuracy). One thing that is notably missing, however, is evaluation in terms of cost at inference time. This seems like a significant and relevant question for an MCTS-based approach. Theoretical Claims: N/A Experimental Designs Or Analyses: The main experiments for performance are standard. The ablations give some confidence about the relevance of including each action, though since these accuracy numbers are computed on a subsampled dataset it's not clear if the drops are all statistically significant. 
One comparison that might have been useful would have been the performance of a method that just calls the "SQL Generation" action alone (using the same model as in Alpha-SQL). Supplementary Material: I read the appendix and briefly looked through the provided code. Relation To Broader Scientific Literature: There is a long literature on text-to-SQL, with zero-shot methods increasingly successful and popular given the quality of new pretrained models. Most text-to-SQL systems decompose the problem into smaller tasks (i.e. actions), but these are usually composed in a fixed sequence rather than dynamically according to some policy. Essential References Not Discussed: None Other Strengths And Weaknesses: The MCTS approach seems novel for text-to-SQL, and it advances the field by enabling dynamic composition of a reasoning path from component actions rather than a fixed sequence of steps. The paper is written clearly and the empirical results are contextualized against strong baselines. Other Comments Or Suggestions: It would be interesting to see a summary of the most common reasoning paths selected by Alpha-SQL (in terms of action sequences), and how these compare to the fixed sequence of steps implemented by other methods. It also seems notable that there is only a slight improvement in the accuracy as a function of the MCTS rollouts (Fig 5). It would be interesting to relate this to diversity of the reasoning paths. Questions For Authors: - Could the authors comment on or provide evidence regarding the inference-time cost of Alpha-SQL? How does this compare to other zero-shot baselines? - How are the samples drawn for the self-consistency reward function? Are they generated from the same leaf node as the candidate query? - If labeled data is available, could it be used for the reward rather than the consistency criterion? Or in this case is it just better to finetune a model directly? Code Of Conduct: Affirmed. Overall Recommendation: 4
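An execution-agreement self-consistency reward of the kind discussed in this review could be sketched as below; the greedy-candidate-plus-stochastic-samples scheme follows the authors' clarification in their rebuttal, and `execute` is a hypothetical stand-in for the database executor, not the paper's actual code.

```python
def _safe_result(execute, sql):
    """Execute a query, returning an order-insensitive result signature,
    or None if execution fails."""
    try:
        return frozenset(execute(sql))
    except Exception:
        return None

def self_consistency_reward(candidate_sql, sampled_sqls, execute):
    """Reward for a (greedily decoded) candidate query: the fraction of
    stochastically sampled queries whose execution result agrees with the
    candidate's result. A candidate that fails to execute gets zero reward."""
    target = _safe_result(execute, candidate_sql)
    if target is None:
        return 0.0
    agree = sum(1 for sql in sampled_sqls if _safe_result(execute, sql) == target)
    return agree / len(sampled_sqls)
```

Note the reviewer's caveat still applies: agreement among samples measures model confidence, not correctness, so a confidently wrong query can receive a high reward.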
Rebuttal 1: Rebuttal: Dear Reviewer, Thank you for your thorough review and constructive feedback. We appreciate your positive assessment and valuable suggestions, which will help improve our paper's clarity. **1. Terminology ("Partial SQL Query States")** - **Our Response**: Thank you for highlighting this ambiguity. We agree "reasoning state" or "contextual state" is more accurate for internal nodes representing accumulated context (selected schema, functions, etc.) prior to the final SQL generation action. We will revise the manuscript accordingly. **2. Self-Consistency Sample Generation** - **Our Response**: Thanks for the question. There are two distinct stages: 1) During MCTS, we use deterministic sampling (Temperature=0) for the primary candidate SQL from a given reasoning state. 2) For reward calculation, we use stochastic sampling (Temperature=1.0) to generate N diverse SQL samples from the exact same reasoning state. The reward is based on execution agreement between the deterministic query and the diverse set. We will clarify this process in the revision. **3. Inference-Time Cost Evaluation** - **Our Response**: This is a very relevant point. The inference cost, particularly in terms of LLM calls and latency, is an important consideration for MCTS-based methods. Due to rebuttal length limitations, we kindly refer the reviewer to our detailed response to **Reviewer guJX (Point 1 & 2)**. We believe those sections fully address the concerns regarding token usage and latency analysis. **4. Comparison with "SQL Generation" Action Alone** - **Our Response**: This is an excellent suggestion for a baseline. This baseline, representing a direct prompting approach using only the 'SQL Generation' action with the initial question and schema, corresponds to the "Direct Prompting" results in our experiments.
| Base LLM | Direct Prompting | Direct Prompting with Self-Consistency | Alpha-SQL |
|---|---|---|---|
| Qwen2.5-Coder-Instruct-7B | 48.4% | 56.3% | 66.8% |
| Qwen2.5-Coder-Instruct-14B | 57.4% | 62.3% | 68.7% |
| Qwen2.5-Coder-Instruct-32B | 62.6% | 63.4% | 69.7% |

- The results clearly show Alpha-SQL significantly outperforms direct, single-step prompting across all model sizes, demonstrating the substantial benefit of the MCTS framework for dynamic path construction and context gathering. We will ensure this is clearly discussed. **5. Analysis of Common Reasoning Paths** - **Our Response**: A valuable suggestion for deeper insight. Our analysis (Qwen-7B on BIRD-dev) shows Alpha-SQL dynamically adapts reasoning paths based on database complexity, unlike fixed-sequence methods. - We analyzed the reasoning paths selected by Alpha-SQL (based on Qwen2.5-Coder-7B) on the BIRD-dev dataset. Our findings reveal interesting patterns that demonstrate the adaptive nature of Alpha-SQL's reasoning: - **Simple Schema Databases**: For databases with relatively simple schemas (e.g., 'toxicology' with 4 tables, avg. 2.8 columns/table), the most frequently selected reasoning path followed the pattern: Root -> Identify Column Values -> SQL Generation -> End. Notably, this common path **omits the 'Schema Selection' action**. This suggests that for simpler schemas where most tables/columns might be relevant or easily inferred, Alpha-SQL learns that the computational effort or potential risk of error from explicitly running 'Schema Selection' outweighs its benefits, adapting by taking a more direct path to SQL generation. - **Complex Schema Databases**: In contrast, for databases with more complex schemas (e.g., 'student_club' with 8 tables, avg. 6 columns/table; 'california_schools' with 3 tables, avg.
29.3 columns/table), the most common reasoning path pattern included explicit schema filtering: Root -> Identify Column Values -> Identify Column Functions -> Schema Selection -> SQL Generation -> End. The inclusion of the 'Schema Selection' action in these cases highlights Alpha-SQL's ability to recognize when detailed schema filtering is necessary due to complexity and to dynamically incorporate the appropriate actions into the reasoning process. - **Comparison to Fixed Sequences**: This flexibility demonstrates an advantage over rigid pipeline approaches: leveraging the MCTS framework, Alpha-SQL dynamically adapts the reasoning path based on the perceived complexity and characteristics of the specific database and query, selecting actions only when the search process deems them beneficial. We will add details of this analysis, including path examples, to the Appendix in the revised manuscript. **Note: Due to the character limit for replies, we will discuss your other two questions, "Rollouts vs. Path Diversity" and "Using Labeled Data for Reward vs. Fine-tuning," later.**
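The two-stage sampling scheme described in point 2 lends itself to a simple sketch. The following is our hedged illustration, not the paper's code: execution results are abstracted as plain comparable values, and the reward is the agreement rate between the temperature-0 candidate's result and the stochastic samples' results.

```python
def consistency_reward(candidate_result, sampled_results):
    """Self-consistency reward: fraction of stochastically sampled queries
    whose execution result matches the deterministic candidate's result."""
    if not sampled_results:
        return 0.0
    agree = sum(1 for r in sampled_results if r == candidate_result)
    return agree / len(sampled_results)
```

For example, if the candidate executes to `("Alice",)` and the three sampled queries execute to `("Alice",)`, `("Alice",)`, `("Bob",)`, the reward is 2/3.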
Learning With Multi-Group Guarantees For Clusterable Subpopulations
Accept (poster)
Summary: This paper focuses on providing *multigroup* guarantees (with a focus on multicalibration, though the techniques are more general) in a stochastic online prediction game in the novel setting where the groups of interest are not provided beforehand as functions of the feature values, but, rather, as unknown endogenous subpopulations that emerge from the distribution of individuals. This provides an alternative view of recent "multigroup" guarantees that has an unsupervised flavor; instead of *predefining* a collection of groups (typically a collection of indicator functions on the input space), this paper seeks to argue that another perspective is where groups arise naturally from the true distribution generating the input individuals. The authors provide two main algorithms/approaches for providing multi-group calibration guarantees for this model. The first "warmup algorithm," denoted "Cluster-then-Predict," provides suboptimal $O(T^{2/3})$ rates under rather restrictive assumptions. The algorithm is simple: (1) take some tuned number of timesteps to learn the underlying group clusters and then (2) run a marginal calibration algorithm for each group for the remaining timesteps. The second algorithm uses a multicalibration approach to provide a better $O(T^{1/2})$ rate under more general assumptions (that the clusters are drawn from exponential families). The key algorithmic idea here is to use multicalibration on a pre-defined collection of groups/distinguishers; specifically, the collection that is defined by the possible family of density ratios under consideration. Because this collection is not too large combinatorially, the main algorithm proceeds by (i) constructing a cover and then (ii) running a multicalibration algorithm over the cover. Claims And Evidence: Yes, I believe all the claims are supported with clear and convincing evidence.
The main claims are the guarantees for the "Cluster-then-Predict" warmup algorithm, in **Proposition 3.1** and **Proposition 3.4**, and the guarantee for the cover + multicalibrate algorithm in **Theorem 4.1.** I have verified the proofs for all these claims, and the proofs are correct to the best of my verification. An additional, more normative claim of the paper is present in Appendix A, which provides an argument for the endogenous subgroup model of the paper. I found this argument persuasive (and refreshing! I believe more papers in this space should grapple with the implications of their mathematical assumptions), particularly in that this model "contextualizes" individuals with others for whom predictions are being made. The authors make an interesting distinction in this appendix on how their model differs from the model accepted in most multicalibration/multigroup learning literature -- rather than viewing subgroup membership as "computationally-identifiable" groups, this clustering viewpoint views subgroups as "statistically identifiable." Methods And Evaluation Criteria: Yes, the main methods and evaluation criteria were proofs that the authors' two main desiderata (denoted as Discriminant Calibration Error and Likelihood Calibration Error) indeed decay sublinearly in the stochastic online model for both their algorithms. The desiderata themselves are natural and are well-accepted notions of prediction quality in the multicalibration literature. In particular, DCE measures the $\ell_{\infty}$ norm of the calibration error over all groups on all the rounds for which a particular group is the "most likely." LCE measures the $\ell_{\infty}$ norm of the calibration error over all groups, but the rounds are weighted according to the generative model for the groups.
These are the natural definitions of prediction quality that arise if one is concerned about the model the authors pose for their problem (the *endogenous subgroups generative model*) and calibration as a measure of prediction quality. Theoretical Claims: I checked all the main claims for the algorithms' correctness (Prop 3.1, Prop 3.4, Theorem 4.1 and its corresponding lemmas). I carefully went through the proof sketches or proofs that were in the main body, and I read over the Appendix C and D arguments, though less carefully than when I checked claims in the main body. These theoretical claims all seem to check out. Experimental Designs Or Analyses: This is a theory paper, so no experiments to check. Supplementary Material: I reviewed Appendix A carefully, referenced Appendix B when the authors provided proofs/proof sketches for the second algorithm in the main body, and I read through the proofs of Appendix C and D. I did not go line-by-line rechecking the proofs for C and D, but I did scan through the arguments, and they seem valid on my reading. Relation To Broader Scientific Literature: This work can be situated squarely in the literature on multicalibration initiated by [HKRR18], and, more broadly, the literature concerned with obtaining multigroup guarantees over computationally-identifiable subpopulations. The authors provide a different perspective, however, than the model where groups are computationally-identifiable collections of indicator functions (typically finite or in a VC class). Instead, they consider clusterable groups coming from a natural generative model. The main tool used in this work draws from an algorithm of [HJZ23] instantiated with an appropriate collection of groups/distinguishers. The work also touches on clustering and unsupervised learning; their first algorithm relies on black-box access to a clustering algorithm, specifically the algorithm for Gaussian clustering of [AAW13]. [HKRR18] Hébert-Johnson, Ursula, et al.
"Multicalibration: Calibration for the (computationally-identifiable) masses." International Conference on Machine Learning. PMLR, 2018. [HJZ23] Nika Haghtalab, Michael Jordan, and Eric Zhao. "A unifying perspective on multi-calibration: Game dynamics for multi-objective learning." Advances in Neural Information Processing Systems 36 (2023): 72464-72506. [AAW13] Azizyan, Martin, Aarti Singh, and Larry Wasserman. "Minimax theory for high-dimensional Gaussian mixtures with sparse mean separation." Advances in Neural Information Processing Systems 26 (2013). Essential References Not Discussed: I believe that the authors have cited all relevant work. It doesn't seem to me that there are any references missing (though I am less familiar with the unsupervised learning/clustering literature). Other Strengths And Weaknesses: **Strengths** - The work is *extremely* well-written and clear. I had no trouble following the arguments, the overall flow of the paper, and the claims and model are well-motivated. The paper was a pleasure to read. - This alternative view of subpopulations is an interesting and novel alternative to the typical view of group structure in this literature, and I believe that it is quite worth disseminating. In particular, the model is natural, and I believe that it should be further studied. - The algorithmic results are interesting -- I particularly liked the key idea of cleverly choosing the class of distinguishers for multicalibration to be the class of possible density ratios, which are combinatorially bounded. This highlights how far multicalibration algorithms can go with a well-defined class of distinguishers for the problem at hand. - Though "fuzzier," I believe that a key strength that should not be overlooked is the focus the authors give to the model and the normative assumptions underlying it.
Reading Appendix A was a pleasure, and it is rare to find authors giving the requisite thought to the assumptions underlying their model/problem of choice. I believe that this is particularly lacking (though very warranted) in subfields where the algorithms concern predictions with a human dimension, where this paper is clearly situated. **Weaknesses** The following are more questions than glaring weaknesses. The following contributions might make the paper more complete, but I believe that their omission does not detract from the overall quality of the paper (they seem to me to be more "next steps" or ways to round out the story). - This is not a huge weakness, but it might make the paper more complete to have a corresponding batch setup, with results for the batch case. I wonder if the authors have already considered this and ran into a snag with, say, a boilerplate online-to-batch conversion of their algorithm? I imagine this is straightforward, though I might be missing something. - The rates presented all depend on $T$ indiscriminately for *every* group, which I understand is part of the definition of DCE and LCE (as both definitions take a max over the groups). However, is it possible to obtain more fine-grained guarantees for each group for DCE corresponding to the number of rounds for which the group was active? If $T_g := \sum_{t = 1}^T \mathbf{1}\{ g = \mathrm{argmax}_{j \in [k]} f( j \mid x_t, y_t) \}$, is it possible to achieve a more fine-grained bound where DCE depends on $T_g$ for each group? Other Comments Or Suggestions: Here are some other minor suggestions that might improve the presentation of the paper: - Page 4: I would suggest introducing the Section in the paragraph before **Minimizing discriminant calibration error...** with a disclaimer and motivation that the section only deals with two groups that are Gaussian.
I understand that it is a warmup, but it may slightly clarify the presentation to introduce the section as dealing with this special case first. - Page 5: A nitpick for Propositions 3.1 and 3.4 is to define $d$ in the prop statements. - Page 6: It may clarify presentation for readers less familiar with online prediction to mention that $O(T^{1/2})$ is often minimax optimal. Questions For Authors: See "Weaknesses" above. I will just paste the same questions I had above here: 1. This is not a huge weakness, but it might make the paper more complete to have a corresponding batch setup, with results for the batch case. I wonder if the authors have already considered this and ran into a snag with, say, a boilerplate online-to-batch conversion of their algorithm? I imagine this is straightforward, though I might be missing something. 2. The rates presented all depend on $T$ indiscriminately for *every* group, which I understand is part of the definition of DCE and LCE (as both definitions take a max over the groups). However, is it possible to obtain more fine-grained guarantees for each group for DCE corresponding to the number of rounds for which the group was active? If $T_g := \sum_{t = 1}^T \mathbf{1}\{ g = \mathrm{argmax}_{j \in [k]} f( j \mid x_t, y_t) \}$, is it possible to achieve a more fine-grained bound where DCE depends on $T_g$ for each group? Code Of Conduct: Affirmed. Overall Recommendation: 4
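The group-dependent round counts $T_g$ asked about in question 2 are straightforward to compute once per-round posteriors are in hand; a minimal sketch (the posteriors here are toy inputs, not quantities from the paper):

```python
def rounds_per_group(posteriors):
    """Given posteriors[t][g] ~ f(g | x_t, y_t) for each round t, return
    T_g for each group g: the number of rounds on which g is the argmax
    (most likely) group."""
    k = len(posteriors[0])
    counts = [0] * k
    for p in posteriors:
        g_star = max(range(k), key=lambda g: p[g])
        counts[g_star] += 1
    return counts
```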
Rebuttal 1: Rebuttal: Thanks for your detailed review and suggestions! To briefly address your questions: **Q1: Online-to-batch.** > This is not a huge weakness, but it might make the paper more complete to have a corresponding batch setup, with results for the batch case. I wonder if the authors have already considered this and ran into a snag with, say, a boilerplate online-to-batch conversion of their algorithm? Yes, the boilerplate online-to-batch conversion works fine to extend our guarantees to the batch setting. Indeed, in the batch setting, even an algorithm similar in style to our cluster-then-predict fares slightly better: because there is no "explore-then-commit" penalty—all $T$ points can be reused to first learn the likelihoods $f(g \mid x)$ and then to learn the predictions $f(y \mid x)$—it achieves a near-optimal rate in $T$, but it still suffers from a dependence on the separation between clusters ($\gamma$). Our multi-objective algorithm avoids this dependence entirely as well. We agree this is an interesting point worth making, especially since it already follows directly from our online result—we will add a discussion on this. **Q2: Group-dependent rates.** > The rates presented all depend on T indiscriminately for every group… Is it possible to obtain more fine-grained guarantees for each group for DCE corresponding to the number of rounds for which the group was active? This is an interesting question! Group-dependent rates are indeed attainable through our reduction to multicalibration, where one can attain rates that scale with $\sqrt{T \Pr(x \in g)}$ "for free" (see e.g. Theorem 3.1 in [1], Theorem 5.7 in [2]). This is in a sense optimal; the right concentration rate for a group of probability mass $P$ is $1/\sqrt{T P}$. Group-dependent rates are not the focus of our paper, but we agree it will be nice to highlight that this comes for free with our reduction. [1] Noarov et al.
High-Dimensional Prediction for Sequential Decision Making [2] Haghtalab et al. A Unifying Perspective on Multicalibration: Game Dynamics for Multi-Objective Learning
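For concreteness, the "boilerplate" online-to-batch conversion mentioned in Q1 is the textbook averaging construction; a generic sketch (ours, not code from the paper), assuming real-valued predictors:

```python
def online_to_batch(predictors):
    """Textbook online-to-batch conversion: the batch predictor is the
    uniform average of the T predictors produced by the online learner
    (equivalently, one of them chosen uniformly at random), so the online
    algorithm's average regret bounds the batch risk in expectation."""
    T = len(predictors)
    def batch_predictor(x):
        return sum(f(x) for f in predictors) / T
    return batch_predictor
```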
Summary: The paper considers a multi-group online learning problem with instances $(x_t, y_t)$ arriving in sequence. In contrast to prior work in online multi-group learning, the groups themselves are not known at each instance. Instead, the paper assumes that there is an endogenous unknown subgroup model, such as a mixture of Gaussians. We can think of this model as first sampling a mixture component (based on a distribution over components), then sampling an $x_t$ from this component, and then finally sampling a label from some unknown distribution $f(y_t | x_t)$. At each time step $t$, the learner should produce some action $a_t$. Given a datapoint $x_t$, it is impossible to identify which "group" (or mixture component) generated it. Therefore, two types of errors are considered. The first, discriminant error, takes into account only the most likely group that $x_t$ belongs to. The second, likelihood error, weighs each group that $x_t$ could have been generated from by the likelihood that it was generated by that group (mixture component). The main concern of the paper is to generate calibrated predictions which have bounded worst-group discriminant and likelihood error. First, the authors utilize the common technique of discretizing the outputs of the model into "buckets" when considering calibration errors. Then, they introduce two calibration variants of the aforementioned errors: discriminant calibration error (DCE) and likelihood calibration error (LCE). DCE can be thought of as minimizing the discriminant error over the worst "group intersect calibration bucket" in the sequence $x_t, y_t$. That is, for each mixture component and each prediction bucket (for example all predictions in $[0, 0.2]$ or $[0.2, 0.4]$), sum up the error of all predictions / actions taken by the learner in the sequence which belong to that group and that bucket. "Belong" here is defined in the discriminant way.
LCE is the identical concept, but "Belong" is instead a probabilistic notion, since one $x_t$ can contribute to the error of multiple groups (depending on the probability of that "group" / mixture component generating $x_t$). With DCE and LCE in hand, the paper then proposes some algorithms to achieve bounded error. To start with, a standard "cluster-then-predict" algorithm is proposed for Gaussian mixtures. This algorithm first learns the underlying groups (mixture components), then learns calibrated predictors for each group independently (à la Foster and Vohra). The paper shows that this family of algorithms can achieve a $O(T^{2/3})$ error rate on DCE and LCE, and indeed, that this is tight due to the difficulty in determining underlying mixtures. In addition, the bound has a necessary dependence on a separation parameter $\gamma$ for the underlying mixtures. The paper then considers a new family of algorithms which _do not_ actually learn the underlying groups. In particular, if one can provide multicalibrated predictions for a suitable cover of all possible groups, this should be sufficient to obtain good worst-group performance guarantees. The main result of the paper, Theorem 4.1, shows that such an approach is feasible and enjoys the better DCE/LCE error rate of $O(\sqrt{T})$ over the cluster-then-predict family of algorithms. Furthermore, no "separation" parameter dependence on $\gamma$ is necessary. This new family of algorithms works by efficiently multicalibrating over a finite cover of the likelihood ratios of the underlying density class. Claims And Evidence: Yes, all proofs are present. Methods And Evaluation Criteria: N/A Theoretical Claims: I did not check the correctness of any proof.
Nonetheless, the main technical insight seems to be a combination of the following two facts: 1) we should apply online multicalibration algorithms with the "right" groups (here, using $\mathcal{H}$, the likelihood ratios of the underlying density function class); and 2) an approximate covering can be computed efficiently in the first phase of the algorithm. Both facts seem reasonable to someone who has not checked the proofs carefully. Fact 2) is especially interesting, since one can apparently learn a good covering with less data than one needs to actually identify the underlying groups (see, for example, the similarity between phase 1 of Algorithm 1 (cluster-then-predict) and Algorithm 2 (online multicalibration approach)). Experimental Designs Or Analyses: N/A Supplementary Material: Skimmed the appendix. Relation To Broader Scientific Literature: This paper relates to a line of work on online multicalibration. Importantly, the paper does not assume that group membership is known (or even that it is deterministic). To the best of my knowledge, this sets it apart from previous work in the area (although I do not work in online multicalibration, and may not be totally up to date with the literature). Essential References Not Discussed: Given the technique of multicalibrating w.r.t. likelihood ratios of the underlying function class, it may be useful to discuss [1], which studies the problem of (offline) learning multicalibrated partitions via a likelihood ratio / importance weight approach. I am not an expert in this area, but at a surface level the technique seems to be similar. However, in the submitted paper, concrete bounds on the pseudo-dimension of the derived likelihood ratio are discussed and utilized, whereas this doesn't seem to be discussed in [1]. [1]: Multicalibrated Partitions for Importance Weights. Gopalan et al. 2021.
Other Strengths And Weaknesses: I think this is a technically strong paper on an interesting (and to my knowledge, previously unexplored) problem: online multicalibration with unknown groups. I would suggest increasing the amount of discussion and removing most proofs from the main paper. For example, the proof of Lemma 4.5 is a fairly straightforward argument to bound the pseudo-dimension of a derived function class — this could be replaced with a more detailed proof sketch of Theorem 4.4. Other Comments Or Suggestions: I enjoyed the extended discussion in Appendix A, "Defining subpopulations via statistical identifiability." I think this discussion represents an important dilemma in the multicalibration literature, namely that groups are assumed known / deterministic. Algorithms like the proposed one which allow for partial / probabilistic and _unknown_ group membership may represent an important development in the literature. I would suggest this discussion be included in the main paper somehow (or at least an abridged version of it), perhaps made possible by the increased final paper page count. Questions For Authors: 1. Since $\mathcal{F}$ is the underlying density class with bounded pseudo-dimension, I understand that we can bound the pseudo-dimension of $\mathcal{H}$, the class of density ratios on $\mathcal{F}$. To obtain Theorem 4.4, we need to run Algorithm 2 with $\mathcal{G} = \mathcal{H}$, correct? So when we are computing our approximate cover (à la Appendix D.2), I should think of computing a cover on the likelihood ratios of the original function class? 2. Is there a natural, intuitive interpretation of DCE / LCE and the difference between the two? Maybe a simple two-cluster example may help distinguish what the different error rates may signify? 3. Is the proposed Algorithm 2 computationally tractable? Code Of Conduct: Affirmed. Overall Recommendation: 4
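The bucketed, per-(group, bucket) error described in the summary can be made concrete. The following toy sketch of a DCE-style quantity is our illustration only (uniform buckets, argmax group assignment, signed cell totals; the paper's exact normalization may differ):

```python
def dce_toy(preds, labels, posteriors, n_buckets=5):
    """For each (argmax group, prediction bucket) cell, accumulate the
    signed calibration error (y_t - p_t) over rounds landing in that cell,
    and report the largest absolute cell total (an l_inf-style worst-cell
    error)."""
    k = len(posteriors[0])
    cells = {}
    for p, y, post in zip(preds, labels, posteriors):
        g = max(range(k), key=lambda j: post[j])
        b = min(int(p * n_buckets), n_buckets - 1)
        cells[(g, b)] = cells.get((g, b), 0.0) + (y - p)
    return max(abs(v) for v in cells.values())
```

An LCE-style variant would instead add `post[g] * (y - p)` into every group's cell, weighting each round by the likelihood of membership rather than assigning it to the single argmax group.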
Rebuttal 1: Rebuttal: Thanks for your review and suggestions! We will incorporate your presentation recommendations in the next versions. **Q1: Covering of H** > To obtain theorem 4.4, we need to run algorithm 2 with G=H, correct? For Theorem 4.4, we run Algorithm 2 with $\mathcal{G}$ defined according to eq. 1 (for DCE) or eq. 2 (for LCE) — essentially, $\mathcal{G}$ corresponds to whatever the proper notion of "group membership" should be (for either DCE/LCE). Thus, the cover computed in the first phase of Algorithm 2 is a cover on the function classes corresponding to the group membership functions, but it turns out that the size of those covers only needs to depend on the pseudo-dimension of the class of likelihood ratios. **Q2: DCE vs LCE** > Is there a natural, intuitive interpretation of DCE / LCE and the difference between the two? The difference between DCE and LCE as (per-subgroup) performance measures is critical when subgroups are overlapping. As one example, first consider two well-separated standard Gaussians supported on a number line; for example, suppose their centers are at x=-1000 and x=1000 respectively. For any given candidate predictor, LCE and DCE will be nearly identical since $f(g_1 \mid x) \approx \mathbf{1}[ f(g_1 \mid x) > f(g_2 \mid x) ]$. In particular, both LCE and DCE can be understood as enforcing that a predictor is calibrated on the part of the data distribution where x<0 and calibrated on the part where x>0. Now suppose we reduced the separation between the two Gaussians so that their centers are x=-0.01 and x=0.01 respectively. LCE and DCE will now differ significantly. LCE measures calibration error on two groups whose likelihood ratio is ~50/50 across the entire domain, which means the LCE of any predictor is just an approximation of the predictor's overall calibration error. In contrast, DCE measures calibration error on two disjoint groups: x<0 and x>0.
That is, DCE still measures the same quantity as before, enforcing that a predictor is both calibrated on x<0 and calibrated on x>0. It's less clear whether DCE's treatment of x>0 and x<0 as disjoint groups makes sense in this case where the clusters are nearly indistinguishable. **Q3: Computational Efficiency** > Is the proposed algorithm 2 computationally tractable? Multicalibration algorithms, including Algorithm 2, are generally not computationally efficient. However, there's been recent progress towards oracle-efficient algorithms for multicalibration, which our results would directly benefit from (e.g., [1]). It's also worth noting that in practice multicalibration can be approximately implemented as efficient boosting algorithms (e.g. [2]); we see these potential extensions as exciting directions for future work. **On Gopalan et al. 2021:** Thank you for the pointer. We will add an extended form of the following discussion to our revision: While Gopalan et al. 2021 also study likelihood ratios, we take very different perspectives on how multicalibration relates to the likelihood ratio. In Gopalan et al., the goal is to approximate the likelihood ratio as accurately as possible; their perspective is that multicalibration can be useful as a tool to relax pointwise to setwise accuracy. In our setting, our goal is not to approximate the likelihood ratios, but rather to make predictions that are multicalibrated with respect to the class of all plausible likelihoods consistent with our generative model. [1] Garg et al. Oracle Efficient Online Multicalibration and Omniprediction [2] Globus-Harris et al. Multicalibration as Boosting for Regression --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed responses. I will keep my score!
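The two-Gaussian intuition in the Q2 response checks out numerically: for an equal-weight mixture of unit-variance Gaussians centered at $\pm\mu$, the densities cancel so that $f(g_1 \mid x) = 1/(1 + e^{-2\mu x})$. A quick sketch of this (our illustration of the rebuttal's example, not code from the paper):

```python
import math

def posterior_g1(x, mu):
    """f(g1 | x) for an equal-weight mixture of N(+mu, 1) and N(-mu, 1);
    the ratio of Gaussian densities reduces to a logistic in 2*mu*x."""
    return 1.0 / (1.0 + math.exp(-2.0 * mu * x))
```

With mu=1000 the posterior is essentially an indicator of x>0, so DCE ≈ LCE; with mu=0.01 it hovers near 1/2 everywhere, which is exactly why LCE degenerates toward the overall calibration error while DCE still splits the line at x=0.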
Summary: This paper focuses on evaluating prediction performance on meaningful subpopulations rather than the overall population in a clustering problem. It proposes two levels of guarantees for capturing performance per subgroup: (1) evaluating the performance when assigning an individual to the most likely cluster, and (2) evaluating the performance when assigning each individual to all clusters weighted by their relative likelihood. The paper introduces a multi-objective algorithm to simultaneously handle both formalisms for all plausible underlying subpopulation structures, and evaluates the proposed algorithm in the context of online calibration as a case study. Claims And Evidence: This paper provides extensive theoretical proofs for different scenarios. However, the method is not properly evaluated empirically. Methods And Evaluation Criteria: There are no evaluations on either simulated datasets or benchmark datasets. Theoretical Claims: I did not review every detail of the proofs. Please refer more to the other reviewers' comments regarding the correctness of the theory. Experimental Designs Or Analyses: This paper did not provide any empirical studies or experiments on real data. Supplementary Material: No, I did not take a further look at the supplementary material. Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: The paper is well-organized and the motivation is clearly presented. However, the assumptions can limit its applicability to common scenarios in practice. There is no empirical justification of the method with simulated data or real data. See the Questions section for more discussion. Other Comments Or Suggestions: n/a Questions For Authors: I have the following key concerns: (1) I do not see any empirical studies or experiments to justify your theory and your algorithm in either the main text or the supplementary material. Can you provide empirical results to justify the effectiveness of your method?
(2) I wonder how much we can trust the underlying "naturally emerging" distributions. For example, in mixture models, we make assumptions about the structure of each subpopulation. However, these assumptions can often be incorrect, leading to problematic results [1]. Given the potential for model misspecification, would this method still hold? In comparison, the method proposed by [2] considers both subgroup clustering performance and model robustness when evaluating and making predictions. Could you provide some high-level comparison between the method proposed here versus the power posterior proposed in [2] from the methodology perspective? (3) The framework of this paper is based on assumptions that may be challenging to scale to real-world applications. For instance, the paper assumes the data is generated from a mixture of distributions with a fixed number of subpopulations $k$. However, $k$ is typically unknown in clustering problems, and different choices of $k$ can significantly impact the structures and the number of subpopulations on which this paper relies. [1] Cai et al. (2021). Finite mixture models do not reliably learn the number of components. In Int. Conf. on Machine Learning, Online, 18–24 July 2021, pp. 1158–1169. [2] Miller, J. W., & Dunson, D. B. (2019). Robust Bayesian inference via coarsening. Journal of the American Statistical Association. Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thanks for your review and questions! **Q1: On empirical results** > I don't see any empirical studies nor experiments to justify your theory and your algorithm in both main text and supplementary. Can you provide empirical results to justify the effectiveness of your method? We emphasize that the main contributions of our work are theoretical in nature. This paper contributes to the multicalibration literature which is a well-established field of theoretical research (e.g. [1-3]). The literature we draw from (such as learning algorithms and guarantees for Gaussian mixture models) are similarly in well-established theoretical fields (e.g. [4-5]). Given that our paper’s primary subject area is also Theory, we believe that a fully theoretical treatment is an appropriate and meaningful way to advance understanding and make foundational contributions to these theoretically rich fields. **Q2: On misspecification.** > Given the potential for model misspecification, would this method still hold? We believe that our multi-objective approach (Alg 2) is indeed useful for reducing the impact of misspecification of the generative model. This is because we never commit to learning a specific mixture distribution explicitly; instead, we are “robust” to any likelihood that could have been modeled by one's hypothesis class of clustering functions. Thus, even under misspecification, our method succeeds with provable guarantees as long as the “true” likelihood function is reasonably well-approximated by some function in a large empirical cover we construct. It is also important to note that the consequences of model misspecification in multicalibration are significantly more benign than misspecification in many other empirical settings. Multicalibration seeks to refine the average prediction accuracy of a classifier to hold also on a per-subgroup level. 
If the subpopulations are misspecified, the model still outputs a well-calibrated, highly accurate predictor, but with a potentially coarsened per-subpopulation guarantee. > Could you provide a high-level comparison between the method proposed here and the power posterior proposed in Miller et al. from the methodology perspective? Miller et al. studies robustness to uncertainty around identifying the clustering parameters of one’s data. In contrast, our goal is to make predictions that are high-quality across clusters; we are not focused on learning model parameters or densities directly. In fact, one of our paper’s main messages is that—for the purpose of providing subgroup guarantees—it is not necessary to identify the clustering parameters of your data. This is why we are able to get learning rates that are independent of cluster separation. That is, the robustness that Miller et al. studies is robustness that we show can be enjoyed entirely for free. Perhaps the methodological similarity between our multi-objective algorithm (Algorithm 2) and the “coarsening” idea in Miller can be understood as, essentially, improved performance arising from simultaneously considering many true underlying “likelihoods” — in our case, all possible likelihood functions consistent with the original function class, and in Miller et al.’s, all possibilities within a perturbed neighborhood of a particular radius. We think a valuable direction for future work is to study whether methods for robustly learning likelihoods (such as Miller’s) could be useful as an alternative to our covering approach. **Q3: Knowing the number of clusters k.** > The paper assumes the data is generated from a mixture of distributions with a fixed number of subpopulations k. However, k is typically unknown in clustering problems, and different choices of k can significantly impact the structures and the number of subpopulations on which this paper relies. 
While setting k is generally a sensitive parameter choice for clustering algorithms, it is *not* a sensitive parameter for our algorithm. Our algorithm doesn’t hinge on the data being exactly described by k clusters: instead, it provides guarantees for all plausible ways of clustering the data into *up to* k clusters. Thus, it suffices to just set k to be a generous upper bound on the number of clusters that you might expect in your data. All this said, we think that dealing with misspecification of the likelihood function class $\mathcal{F}$ and/or of $k$ would also be worthy avenues for study. As discussed above, we believe our multi-objective approach has potential in terms of providing such robustness guarantees, though we leave explicit development of these arguments to future work. [1] Hebert-Johnson et al., Multicalibration: Calibration for the (computationally-identifiable) masses [2] Dwork et al., Outcome Indistinguishability [3] Gopalan et al., Omnipredictors [4] Azizyan et al., Minimax theory for high-dimensional Gaussian mixtures with sparse mean separation [5] Hardt and Price, Tight bounds for learning a mixture of two Gaussians.
On Differential Privacy for Adaptively Solving Search Problems via Sketching
Accept (oral)
Summary: First, the authors consider the differentially private ANN problem. They reduce differentially private ANN to the differentially private selection problem, and then solve it via differentially private selection methods. Next, the authors consider the problem of differentially private linear regression. They solve this problem by applying matrix sketching techniques. The key idea is to use a number of different standard random matrix sketches, and then to aggregate the solutions in a differentially private manner via an extension of the differentially private median approach of (Beimel et al., 2022). Finally, to bound the utility they compute an $\ell_\infty$ bound on the error introduced by sketching. As the authors note, this is an interesting approach to bounding the utility, as it is an uncommon use of $\ell_\infty$ bounds in matrix sketching. ## Update after rebuttal Thank you for the helpful clarifications. I have raised my score accordingly. Please include these clarifications in the final version. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I read the technical overview, but did not carefully check the proofs. Experimental Designs Or Analyses: There are no experiments in the paper. Supplementary Material: Please see question about "proofs" above. Relation To Broader Scientific Literature: The authors do a good job of discussing which previous techniques they build on. However, as far as I can tell, the authors do not provide much comparison to prior related works. It would be good if the authors could clarify how their work improves on prior works, or point to where the improvement over prior works is discussed in the paper. Essential References Not Discussed: Please see "Relation To Broader Scientific Literature" question above. 
Other Strengths And Weaknesses: I think the strength of the paper is in the techniques, which are well explained in the technical overview. However, the comparison to prior works can be improved. Specifically, it would be good if the authors could clarify how their work improves on prior works. Other Comments Or Suggestions: N/A Questions For Authors: It would be good if the authors could clarify how their work improves on prior works. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their question about prior work, which we address below: 1) Prior works using differential privacy for adaptivity focus on numerical estimates [Hassidim et al., 2022; Beimel et al., 2022; Song et al., 2023; Cherapanamjeri et al., 2023]. Our work instead tackles the *search* problem—returning a point using an adaptive DP data structure. While differentially private heavy hitter algorithms (e.g., [Chadha et al., 2023]) relate to private search, they lack adaptivity. To our knowledge, this is the first work combining DP with adaptive search. 2) Because of this, prior adaptive search methods either (a) build $T$ data structures (one per query), or (b) use a net argument over the space, requiring $\widetilde{O}(d)$ structures. Below we compare our method to these baselines for ANN and regression. We omit $\widetilde{O}(\cdot)$ for clarity. ### **Approximate Nearest Neighbor (ANN)** | Method | Space | Amortized Prep Time | Query Time | Update Time | |-------------|------------------------------|-------------------------------|----------------|--------------------------| | $T$ copies | $T n^{1+\rho} + nd$ | $n^{1+\rho} d$ | $n^\rho d$ | $T n^\rho d$ | | $d$ copies | $n^{1+\rho} d$ | $\frac{d}{T} n^{1+\rho} d$ | $n^\rho d$ | $d n^\rho d$ | | **Ours** | $\sqrt{T} s n^{1+\rho} + nd$ | $\frac{s}{\sqrt{T}} n^{1+\rho} d$ | $s n^\rho d$ | $\sqrt{T} s n^\rho d$ | For $T$ adaptive queries/updates, the exact method scans the dataset. If $T \leq d$, we outperform $T$-copies when $s \leq \min\\{\sqrt{T}, n\\}$; if $d \leq T$, we outperform $d$-copies when $s \leq \min\\{\frac{d}{\sqrt{T}}, n\\}$. --- ### **Regression** Let $U \in \mathbb{R}^{n \times d}$ and $b \in \mathbb{R}^n$, with goal to compute $x$ minimizing $\\|Ux - b\\|_2$ up to $(1+\alpha)$. Each step updates $U$, $b$ via $v_t$; $\kappa$ bounds $U$'s condition number. 
| Method | Space | Amortized Prep Time | Query Time | Update Time | |-------------|----------------------------------|----------------------------------------------|--------------------------------|--------------------------------------------------| | $T$ copies | $\frac{T d^2}{\alpha^2}$ | $\text{nnz}(U,b)+d^3+\frac{d^2}{\alpha^2}$ | $\frac{d^{\omega+1}}{\alpha^2}$ | $T(\text{nnz}(v_t)+d^3+\frac{d^2}{\alpha^2})$ | | $nd$ copies | $\frac{n d^3}{\alpha^2}$ | $\frac{nd}{T}(\text{nnz}(U, b)+d^3+\frac{d^2}{\alpha^2})$ | $\frac{d^{\omega+1}}{\alpha^2}$ | $nd(\text{nnz}(v_t)+d^3+\frac{d^2}{\alpha^2})$ | | **Ours** | $\frac{\sqrt{T} d^{2.5} \kappa^2}{\alpha^2}$ | $\sqrt{\frac{d}{T}}(\text{nnz}(U, b)+d^3+\frac{d^2 \kappa^2}{\alpha^2})$ | $\frac{d^{\omega+1} \kappa^2}{\alpha^2}$ | $\sqrt{T d}(\text{nnz}(v_t)+d^3+\frac{d^2 \kappa^2}{\alpha^2})$ | We outperform $T$-copies when $\kappa^2 \leq \sqrt{T/d}$, and $nd$-copies when $\kappa^2 \leq n \sqrt{d/T}$—conditions often met in practice. --- ### **Regression with Sparse Label Updates** If only $b$ is updated and each $v_t$ changes at most $s$ entries, $\kappa^2$ dependence can be reduced to polylogarithmic. | Method | Space | Amortized Prep Time | Query Time | Update Time | |-------------|-----------------------------|---------------------------------------------|------------|-------------------------------------------| | $T$ copies | $\frac{T d^2}{\alpha^2}$ | $\text{nnz}(U)+d^3+\frac{d^2}{\alpha^2}$ | $d^2$ | $T(s+d^3+\frac{d^2}{\alpha^2})$ | | $n$ copies | $\frac{n d^2}{\alpha^2}$ | $\frac{n}{T}(\text{nnz}(U)+d^3+\frac{d^2}{\alpha^2})$ | $d^2$ | $n(s+d^3+\frac{d^2}{\alpha^2})$ | | **Ours** | $\frac{\sqrt{T d} \cdot d^2}{\alpha^2}$ | $\sqrt{\frac{d}{T}}(\text{nnz}(U)+d^3+\frac{d^2}{\alpha^2})$ | $d^2$ | $\sqrt{T d}(s+d^3+\frac{d^2}{\alpha^2})$ | Only $n$ copies are needed. Our method is faster when $T \leq n$ and $d \leq T$, or when $n \leq T$ and $\sqrt{T d} \leq n$. 
Since $T$ is user-chosen (rebuilding occurs every $T$ queries), trade-offs can be tuned (see Claims D.16–D.17). --- **Summary:** Our method leverages DP with advanced composition, achieving $\widetilde{O}(\sqrt{T})$ scaling and improved space, preprocessing, query, and update complexity. These advantages are especially notable when $T$ is large, or when sparsity $s$ and condition number $\kappa$ are favorable. We will ensure these comparisons are clearly included in the revision.
Summary: The paper addresses the problem of hiding internal randomness of data structures using differential privacy. Specifically, the authors focus on nearest neighbor search and regression problem and aim to protect the internal randomness against adaptive adversary using differentially private techniques. ## Update after rebuttal The author response provided during the rebuttal clearly addressed my questions regarding the interaction between the privacy mechanism and the underlying data structure, and the privacy loss accumulation. I have raised my rating. Claims And Evidence: A sequence of theorems presented in Section 1.1 clearly demonstrates improvements over existing algorithms. They provide probabilistic utility guarantees as well as time and memory complexity, but it is not immediately clear how differential privacy interacts with these bounds. While I understand that characterizing the interplay between data structure and differential privacy is not the main focus, it could be of interest to a broader audience. Methods And Evaluation Criteria: * Section 2 provides explanation on how the search problem can be formulated as differentially private selection, which completely makes sense. The authors use the standard Laplace mechanism to achieve pure $(\epsilon, 0)$-DP. I wonder if the Gaussian mechanism can be used instead to achieve $(\epsilon, \delta)$-DP. In that case, how does it affect the guarantees stated in theorems in Section 1? * Another aspect not discussed in the paper is the accumulation of privacy loss. In differential privacy, the total privacy loss accumulates as the mechanism releases private answers to successive queries, which means the mechanism must shut down the database and stop answering after a certain number of queries. However, it is unclear from the paper whether there is a limit to the number of queries the underlying data structure can handle. Theoretical Claims: I didn’t thoroughly check the correctness of the proofs. 
Experimental Designs Or Analyses: This is a theory paper that doesn't contain experimental results. Supplementary Material: No supplementary materials are available. Relation To Broader Scientific Literature: This paper develops efficient data structures that protect their internal randomness from adaptive adversaries, offering a secure and private foundation for machine learning tasks. Essential References Not Discussed: I don't find any missing related work. Other Strengths And Weaknesses: * Strength - The paper presents a sequence of theorems that describe improvements over existing results. - The authors provide interpretations of theorems. * Weakness - This may not be a weakness, but rather a comment. While the paper is clear about the algorithmic properties of underlying data structures, e.g., how its time and storage complexities scale with input size, they don’t reveal much about the properties of differential privacy, such as composition and privacy loss accumulation. - I believe that including some figures or diagrams would have been helpful for better understanding the proposed technique. Other Comments Or Suggestions: None Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your very helpful comments. We would also like to address your questions and comments regarding differential privacy. Let us start with a high-level overview of the framework, first introduced in [Hassidim et al., 2022; Beimel et al., 2022]. The rough idea is to treat the random seeds used by data structures as the database for which we want to preserve privacy. In the simplest setting, to achieve a target privacy level $\epsilon$, one might need to aggregate the responses of $T$ data structures in a differentially private manner. However, by the **advanced composition theorem**, the same level of privacy in terms of $\epsilon$ can be achieved using only $\widetilde{O}(\sqrt{T})$ independent outputs—at the cost of moving from pure DP to approximate DP with $\delta > 0$. As long as $\delta$ is small, the approximation is acceptable. ### Responses to Specific Questions > **Q: Can the Gaussian mechanism be used for $(\epsilon, \delta)$-DP? In that case, how does it affect the final guarantees?** **A:** Yes, the Gaussian mechanism is valid for achieving $(\epsilon, \delta)$-DP, provided that $\delta$ is not too large. Suppose we want the data structure to succeed with probability at least $1 - \beta$. Let $\delta_{\text{final}}$ denote the total privacy loss, composed of: - $\delta_{\text{Gaussian}}$: from the Gaussian mechanism - $\delta_{\text{advanced}}$: from advanced composition Then: $$ \delta_{\text{final}} = \delta_{\text{advanced}} + T \cdot \delta_{\text{Gaussian}}. $$ In our original setting, we set $\delta_{\text{advanced}} = \beta / 100$ using an appropriate $\epsilon$. To use the Gaussian mechanism instead, we may set $\delta_{\text{Gaussian}} = \frac{\beta}{200T}$ and reduce $\delta_{\text{advanced}}$ to $\beta/200$. This still yields $\delta_{\text{final}} = \beta / 100$, matching the Laplace mechanism’s privacy loss. 
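As a quick sanity check on the accounting above, here is a minimal illustrative sketch (not code from the paper; `total_delta` is a hypothetical helper) confirming that the proposed split recovers the same total privacy loss $\beta/100$ as the Laplace-based analysis:

```python
def total_delta(beta, T):
    """Total privacy loss delta_final = delta_advanced + T * delta_gaussian,
    using the split delta_advanced = beta/200 and delta_gaussian = beta/(200*T)."""
    delta_advanced = beta / 200
    delta_gaussian = beta / (200 * T)
    return delta_advanced + T * delta_gaussian

# The split recovers the Laplace-based total of beta/100 for any T:
for T in (1, 10, 1000):
    assert abs(total_delta(0.05, T) - 0.05 / 100) < 1e-15
```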
We chose the Laplace mechanism for conceptual simplicity and because the task is essentially a counting problem, where Laplace noise is more natural. > **Q: In DP, privacy loss accumulates with repeated queries, so mechanisms are often shut down after a limit. Does your adaptive data structure exhibit this behavior?** **A:** Yes, the adaptive data structure’s ability to handle $T$ queries hinges on **privacy composition**. Using the advanced composition theorem, we guarantee $(\epsilon, \delta)$-DP across $T$ interactions by using $\widetilde{O}(\sqrt{T})$ resources (e.g., data structure copies or random seeds), rather than $O(T)$. However, this requires $T$ to be fixed in advance. For indefinite querying, a standard approach is to **rebuild the data structure periodically after every $T$ queries**. This permits optimization of $T$ to balance space, amortized preprocessing time, and query/update efficiency. ### Additional Comments > **It would be better to discuss more on differential privacy, such as composition and loss accumulation.** **A:** Thank you for pointing this out. We will ensure that the final version includes a more detailed discussion on differential privacy, particularly covering composition theorems and cumulative privacy loss (please refer to our high-level overview and answers regarding loss accumulation above). > **Including figures and diagrams would be helpful for explaining the techniques.** **A:** We appreciate the suggestion. We agree that visualizations—especially for illustrating how the sparse argmax mechanism is applied in our ANN algorithm—would enhance clarity and intuitively explain why our algorithm is both correct and efficient. We will include appropriate figures and diagrams to better illustrate the framework in the revision.
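The Laplace-based private selection discussed in this rebuttal can be illustrated with the classic report-noisy-max mechanism. The following is a generic illustrative sketch of private selection over counts (function name and parameters are hypothetical), not the paper's sparse argmax mechanism:

```python
import random

def report_noisy_argmax(counts, epsilon, rng=None):
    """Report-noisy-max: add Laplace noise of scale 2/epsilon to each count
    (a conservative choice for sensitivity-1 counting queries) and return
    the index of the largest noisy value."""
    rng = rng or random.Random(0)
    scale = 2.0 / epsilon
    def laplace():
        # A Laplace draw is the difference of two exponential draws.
        return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)
    noisy = [c + laplace() for c in counts]
    return max(range(len(noisy)), key=noisy.__getitem__)

# With a very large epsilon (near-zero noise) the true argmax is returned.
assert report_noisy_argmax([100.0, 0.0, 0.0], epsilon=1e6) == 0
```

With small epsilon the noise grows, trading accuracy of the selection for privacy of the underlying counts.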
Summary: The paper studies the problem of adaptive algorithms: algorithms where an adversary interacts with a randomized algorithm with the goal of increasing its error probability. It is known that differential privacy could be used to design such algorithms, but the existing applications were for algorithms computing numeric results; this paper generalizes the approach to vector-valued algorithms. As a result, it builds an algorithm for the nearest neighbor problem with better running and preprocessing time than previously known. Claims And Evidence: The evidence sufficiently supports the claims. Methods And Evaluation Criteria: Methods and evaluations are appropriate for the problem at hand. Theoretical Claims: The proofs presented in the paper are correct. Experimental Designs Or Analyses: There are no experiments in the paper. Supplementary Material: I haven't reviewed supplementary material. Relation To Broader Scientific Literature: For me, personally, the idea of using differential privacy for designing adaptive algorithms is one of the most exciting ideas in the last couple of years: it connects fields that at first glance have almost nothing in common. While this is not the first paper in this direction, it generalizes the idea to a much wider class of problems. Essential References Not Discussed: I don’t think any specific reference is missing. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Line 40: It would be better to rephrase the sentence about queries; it is a bit confusing in the current form. Line 94: The assumption doesn’t say anything about neighbours, it only talks about cr balls. Perhaps the explanation needs to be rewritten. Line 258: It should be argmax \sum b_{(i), j}; the j index is missing in the paper. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We appreciate your encouraging and helpful comments. Regarding your comments: - **Line 40:** We will clarify this sentence. Our intention is to express both ANN and regression in a unified way, and we will revise it to clearly explain the query and update models in each problem. - **Line 94:** The assumption states that the predicate function $f_{v_t}$ is sparse, and we define the predicate as the intersection of the dataset $U$ and the $cr$-ball around $v_t$—that is, all points in $U$ within distance $cr$ of the query. We acknowledge that the current formulation does not make this connection explicit and will revise it for clarity. - **Line 258:** We will fix the missing index $j$.
Summary: The paper explores using differential privacy for answering adaptively chosen queries (potentially by an adversary) for search problems. The authors consider approximate nearest neighbor and regression as the main problems in this work. The main contributions are showing that under reasonable assumptions, it is possible to maintain $O(\sqrt{T})$ copies of a data structure to answer $T$ queries for both problems, matching the bounds from previous work that uses DP for adversarial streaming. For the ANN problem, they reduce the problem to private selection and provide a new mechanism that obtains better runtime for sparse vectors. For the regression problem with updates to both the design matrix and the response vector, the approach involves maintaining and updating multiple sketch matrices, solving to generate multiple solution vectors, and using a DP median to generate the final solution. The authors also show application to online weighted matching and computing terminal embeddings. Claims And Evidence: All the claims made in the paper are accompanied by proofs. Reasoning and justification for the assumptions made are also provided. Methods And Evaluation Criteria: The paper does not have experiments. Theoretical Claims: The proofs are correct to the best of my knowledge. The contributions include a sublinear dependence on $T$ for time/space complexity in answering queries. The baseline comparisons are the naive or existing bounds of $O(T)$ and $O(\mathrm{dim})$. The accuracy guarantees follow from the success probability of the data structure or the generalization guarantee for DP algorithms. Experimental Designs Or Analyses: This is a theoretical work; no experiments are provided. Supplementary Material: The proofs in the appendix are correct to the best of my knowledge. Relation To Broader Scientific Literature: This work contributes to a line of prior work that explores the connection between differential privacy and robustness to adaptivity. 
The paper makes a novel contribution in this field by giving approaches that work for search problems. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: The sparse argmax mechanism is a nice contribution and might be of interest to the broader DP community. For the regression problem, the dependence on $\kappa$ is quadratic, which might be suboptimal; in addition, the suggested alternative approach obtains a linear dependence on $T$. Other Comments Or Suggestions: No additional comments. Questions For Authors: No additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you for your very helpful comments. Below we address your points on the quadratic dependence on $\kappa$ and the linear dependence on $T$ of the alternative approach. For the $\kappa^2$ dependence, the main reason is that by using the $\ell_\infty$ guarantee together with the standard relative error guarantee of sketched regression, we can show that the regression cost of the private median estimator $g$ satisfies $\\|Ug-b\\|_2 \leq (1+\kappa \alpha’)\\|Ux^*-b\\|_2$. Hence, to account for the blowup in condition number, we need to scale down the approximation factor by $\kappa$. Since the sketching dimension depends inverse quadratically on the approximation factor, we have a $\kappa^2$ dependence in the runtime. One could argue that any sketching-based method following this framework must incur such a dependence: the algorithm estimates each coordinate to high precision, and this inevitably incurs a dependence on the condition number in the error; the sketching dimension in turn depends quadratically on the condition number. When there are no updates to $U$, we could use an iterative method instead to remove the polynomial condition number dependence. It is an interesting open question to develop an iterative algorithm that can remove the dependence on the condition number while also supporting updates to $U$. Therefore, while the $\kappa^2$ dependence is a consequence of our current sketching approach for worst-case guarantees, we believe the overall framework offers significant advantages, and reducing this dependence under updates is an important avenue for future work. The alternative method based on the bounded path technique can be treated as a refinement to the method of using a fresh sketch for each update and query. Specifically, the algorithm depends on $\log |{\cal P}|$, and $|{\cal P}|$ can be naively upper bounded by $(n\kappa)^{dT}$, making it less appealing. 
However, we note that $|{\cal P}|$ is a more fine-grained parameter that could capture the structure of the input, e.g., if only entry changes between queries are allowed, then the upper bound reduces to $(nd)^T$. If the updates are infrequent, e.g., only $\sqrt T$ updates are performed, then the bound becomes $(nd)^{\sqrt T}$. Hence, while the alternative is no more efficient in the most general sense, it provides more fine-grained control and a speedup in situations for which the instance has extra structure.
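Restating the $\kappa^2$ reasoning from this rebuttal compactly (our paraphrase of the argument, in the rebuttal's notation):

```latex
\|Ug-b\|_2 \le (1+\kappa\alpha')\,\|Ux^*-b\|_2,
\qquad \alpha' := \alpha/\kappa
\;\Longrightarrow\;
\text{sketching dimension} \;\propto\; \frac{1}{(\alpha')^2} \;=\; \frac{\kappa^2}{\alpha^2}.
```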
Optimal transport-based conformal prediction
Accept (poster)
Summary: The paper addresses multivariate conformal prediction and proposes an approach based on multivariate quantiles derived from optimal transport (OT), using a notion of multivariate statistical depth (MK depth) to define ranks. The application to both multivariate regression and classification is discussed, including a section on improved input-conditional coverage (with asymptotic guarantees). The approach is experimentally shown to improve conditional coverage while retaining useful set sizes against a few (simpler) baselines. Claims And Evidence: The key contribution is the use of OT-based multivariate quantiles, which allows the handling of multivariate scores and does not pre-define prediction set shapes (e.g. rectangle = independence, or ellipsoids etc.) while incorporating data-dependent correlations. This results in perhaps less interpretable prediction set shapes but also makes it more data-driven, in line with CP goals. In general, the claims are backed up and convincing, albeit I remain a bit unconvinced of the practical usefulness of the method (see below) since the shown experimental results and metrics seem carefully selected to support the method. Methods And Evaluation Criteria: Experiments are provided for both regression and classification, against (only very few) relevant baselines. Results are compared in terms of common CP metrics, which include (worst-case) coverage, set size, and fraction of singleton sets. Given the requirement to solve an OT problem for $n$ points (potentially multiple times if doing input-conditional prediction), I would also wish to see more discussion and experimental results on computational costs and runtimes. Theoretical Claims: The use of OT-based multivariate quantiles is well-founded based on recent works in that direction, and well explained. The included Theorem 3.2 on asymptotic coverage under some assumptions seems reasonable, albeit I did not carefully check the proof. 
I appreciate that the regularity assumptions are clearly stated. Experimental Designs Or Analyses: I do not argue against the novelty of the proposed approach. My main gripe with this paper is the slightly insufficient experimental comparison and critical analysis of the method against related works, of which there exist many. For starters, the paper is lacking *both* a related works section and a discussion of limitations, namely practicality and computational complexity (it is only briefly mentioned in Remark 2.3). To state a few more explicit experimental points: - Fig. 2 seems a bit of an unfair comparison in the sense that OT-CP (their method) effectively accounts for data correlations, whereas the considered baselines are corrected with Bonferroni, which amounts to an independence assumption across test dimensions. Given this fact, it is actually a bit surprising to see that their approach only fares a bit better in terms of prediction set size. It would seem more fair to compare to other recent proposals exploiting correlations, e.g. [1,2]. - In Fig. 3b coverage within each partition / bucket is still marginal, right? And I would assume that as the buckets (I think of this as partition-conditional or Mondrian CP) shrink, the conditional coverage would worsen as it approaches sample-conditional coverage? So the claims on some sort of (exact) input-conditional coverage in finite samples do not hold true, e.g. as alluded to in the caption of Fig. 3. - Relatedly, sec. 3.2 with OT-CP would mean repeatedly solving the OT problem for every input sample based on its k neighbours. I am missing a proper discussion on practicality and runtime costs here. Is this really a useful approach? - In Fig. 4 only worst-case coverage is shown, and the results seem to support the fact that OT-CP provides more balanced coverage. This is nice to have, but the considered baseline does not actually promise this form of coverage, right? 
Could we also see the marginal coverage results and prediction set sizes for the experiment in Fig. 4? Since if OT-CP tends to strongly overcover or provide overly large set sizes, then the worst-slab coverage results are not surprising. - Since the empirical benefits (also for regression tasks) seem mainly in the realm of improving empirical conditional coverage (across various partitions), I would expect some comparisons to the multitude of recent CP methods addressing such tasks of conditional coverage, e.g. [4,5,6]. - Overall, there is substantially more recent work for multivariate CP that is entirely missed or omitted from this paper. I strongly encourage to have a look at e.g. [3] to see how a more thorough empirical evaluation could look like. I am not suggesting to implement the same breadth of comparisons, but it seems reasonable to consider or at the very least discuss a few more recent proposals. And again, in particular a meaningful discussion and reporting of empirical computational costs and runtimes seems necessary. [1] Messoudi, Soundouss, Sébastien Destercke, and Sylvain Rousseau. "Copula-based conformal prediction for multi-target regression." Pattern Recognition 120 (2021): 108101. [2] Timans, Alexander, et al. “Max-Rank: Efficient Multiple Testing for Conformal Prediction.” AISTATS (2025). [3] Dheur, Victor, et al. "Multi-Output Conformal Regression: A Unified Comparative Study with New Conformity Scores." arXiv preprint arXiv:2501.10533 (2025). [4] Romano, Yaniv, et al. "With malice toward none: Assessing uncertainty via equalized coverage." Harvard Data Science Review 2.2 (2020): 4. [5] Sesia, Matteo, and Yaniv Romano. "Conformal prediction using conditional histograms." Advances in Neural Information Processing Systems 34 (2021): 6304-6315. [6] Gibbs, Isaac, John J. Cherian, and Emmanuel J. Candès. "Conformal prediction with conditional guarantees." 
Journal of the Royal Statistical Society Series B: Statistical Methodology (2025). Supplementary Material: I had a look at the appendix. Relation To Broader Scientific Literature: There is no proper discussion of related works, so I believe the paper does not appropriately address the recent body of work (both on CP for conditional coverage and multivariate CP). Essential References Not Discussed: I've provided some suggested references above. Other Strengths And Weaknesses: - In Example 2 the motivation to use multivariate scores includes "This can be more helpful to capture the underlying confusion patterns of the predictor across different label modalities.". This motivation is never followed up on or shown in any way, so the exact benefit of working with multivariate scores versus collapsing them into a single dimension (but still accounting for correlations) remains a bit lacking. - How is the OT problem exactly solved in the experiments? It would be helpful to provide details on this (e.g. in the Appendix) to actually permit CP practitioners unfamiliar with OT to leverage this approach, beyond stating the OT problem only. - Overall the paper reads quite well and is well structured, and I did not see any obvious typos. I appreciate that the authors provide several illustrative figures (e.g. Fig 1, 2a, 5, 6) that help visualize the concepts and make them intuitively understandable. The OT side of things is kept at a relatively high level, which is fine, but could use a more thorough practical description in the appendix. Other Comments Or Suggestions: See above. Questions For Authors: Please see my comments and questions especially on the experimental design above. Ethical Review Concerns: None Code Of Conduct: Affirmed. Overall Recommendation: 4
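For concreteness, the Bonferroni-corrected baseline referred to in the Fig. 2 point above amounts to the following calibration step (a minimal sketch, assuming absolute componentwise residuals as the per-dimension score; this is not the implementation from the paper under review):

```python
import numpy as np

def bonferroni_box(residuals_cal, alpha=0.1):
    """Per-coordinate split-CP intervals at level alpha/d, combined into a
    joint box via Bonferroni (ignores correlations across outputs)."""
    n, d = residuals_cal.shape
    a = alpha / d  # Bonferroni correction across the d output dimensions
    rank = min(int(np.ceil((n + 1) * (1 - a))), n)
    # Half-widths: per-dimension conformal quantiles of the absolute residuals.
    return np.sort(np.abs(residuals_cal), axis=0)[rank - 1]
```

The resulting box (prediction plus/minus these half-widths) covers at level at least $1-\alpha$ regardless of dependence, but it cannot shrink when the output dimensions are correlated, which is the unfairness noted above.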
Rebuttal 1: Rebuttal: We thank the reviewer for the careful reading and evaluation. We agree that our numerical experiments primarily compare against simple baselines. Our goal, however, is not to claim superiority over all methods but rather to motivate the use of transport-based quantiles in fundamental settings. We acknowledge that this may not have been sufficiently clear, and we clarify this below. Thanks to your feedback, we also report additional metrics and results to provide a more comprehensive evaluation. *(computational costs and runtimes)*. We thank you for this comment, which allows us to clarify key properties of our methodology. Indeed, OT-CP+ requires solving multiple OT problems, which might be costly, especially for large-scale datasets. However, for the numerical experiments carried out in Fig. 4, the computational time remains reasonable. An additional figure supporting this claim is now included: https://ibb.co/YTtv36ZM *(Comparison to existing related work)*. Our revised version now includes more related works as well as a discussion of limitations. Our main objective is to motivate the possibility of replacing univariate quantiles in the CP recipe by multivariate ones. OT-CP should only be seen as a multivariate sorting of scores, which stands in contrast with the design of scores in applied settings. Our experiments suggest benefits of MK quantiles in terms of flexibility, but we do not claim to outperform all existing methods in multivariate regression. Rather, future work might profitably combine OT-CP with more advanced multiple scores, different from componentwise residuals. Further numerical comparison is beyond the scope of our proposal and may dilute the key message. *(Fig. 2)*. First, we stress that the ellipsoidal approaches do exploit correlations. Second, we intentionally only consider methods that can be interpreted as center-outward quantile regions, just like OT-CP, to ensure fair comparison.
The mentioned references [1,2] differ in that they assume positive dependence, more related to a left-to-right ordering than a center-outward one. *(Fig 3b)*. We reported the empirical coverage averaged across subsets, which is drastically different from strategies that divide calibration examples into fixed groups (e.g., Mondrian CP). We also do not claim to achieve exact input-conditional coverage. We acknowledge the potential confusion and have revised our claim in Sec 3.2. *(Runtime cost of OT-CP+)*. Indeed, computational time is a limitation of OT-CP+, which is more costly than OT-CP. This cost depends on the parameter k. However, for the experiments carried out in our paper, the price to pay seems relatively cheap, as illustrated in the new figure https://ibb.co/hpyCyFN comparing the time between OT-CP and OT-CP+ in the setting of Fig. 4. *(Type of coverage in Fig 4)*. The considered baseline shares our purpose of improving adaptivity, which is a common desideratum in CP. We agree that it does not guarantee asymptotic conditional coverage. In contrast, OT-CP+ satisfies this property at the price of larger set sizes on average when compared with local ellipsoids: https://ibb.co/ymW5Mym6 A potential reason is that having more balanced conditional coverage requires including more points. We added a new figure tracking the marginal coverage of OT-CP+, to highlight that it does not tend to overcover: https://ibb.co/1YNdSDny *(Recent CP methods for conditional coverage)*. As argued previously, our purpose is not to outperform all existing methods in conformal regression. This point has been clarified in the main body to ensure clarity when expressing our purposes. *(Discussion on related works)*. We discussed related work in "Introduction" and other paragraphs (CP methods for multi-output regression, in Sec. 3; CP methods for classification, in Sec. 4). We will integrate all these bibliographical elements into a dedicated subsection of the introduction.
*Other Strengths And Weaknesses:* We appreciate your comments on the writing, structure and illustrative figures. We have added more details on how to solve OT in practice. *(Motivation for Example 2)* We believe that the results in Fig. 4 demonstrate OT-CP's ability to adapt to label confusions. Therein, classes 0 and 1 tend to be confused by the QDA classifier. In such cases, OT-CP achieves a better trade-off between coverage and efficiency/informativeness. When the distribution is made easier (classes 0 and 1 become more distinct for QDA), all methods yield similar efficiency and informativeness: https://ibb.co/mVwMbsF6. This supports the claim that OT-CP can effectively adapt to classification patterns where certain labels are prone to confusion. *(Solving the OT problem in practice)* We used the POT library [R2], which solves OT with the simplex algorithm. This solver is written in C++/Cython and is thus generally faster than pure Python implementations. We have added these details in the Appendix. [R2] Flamary et al. "POT: Python Optimal Transport." JMLR 2021 --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and clarifications. Your answers satisfy most of my questions, but it is clearly visible in the additional figures (e.g. OT-CP+ vs. OT-CP; OT-CP vs. ELL) that the approach of combining CP with OT **does not**, at least in the presented form, convincingly outperform other recent proposals in the literature, and can tend to be both more computationally expensive and more conservative. Personally, this does not bother me as long as the limitations of the approach are **openly and clearly** stated, and the claims or contributions adjusted accordingly. I think there is sufficient novelty in the combination of these two fields to warrant acceptance, and there is no need to obfuscate details and try to promote the method beyond its abilities. Perhaps, as you suggested, future work can incorporate further ways to improve the conformal results.
Such adjustments to the paper as well as dedicated related work and discussion sections would positively strengthen the paper in my opinion, and I hope the authors will take them to heart. **Overall I believe this paper warrants acceptance and is of interest to the CP community, and so I am raising my score.** --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read our rebuttal: we appreciate your recommendation to accept our paper. In line with your comments, we have included our additional results and discussion in our revised paper to provide a more nuanced and comprehensive evaluation of our method. In particular, we have added the following paragraph to our conclusion: "Approaching conformal prediction through the lens of optimal transport offers a unified framework for addressing a wide range of applications involving multivariate scoring functions, while maintaining rigorous theoretical coverage guarantees. In practice, these methods may be observed to be more conservative than existing score functions in the setting of multi-output regression. We identify opportunities for improvement in this direction by employing more suitable reference distributions for instance. Moreover, the flexibility of the approach, rooted in the Monge-Kantorovich quantile formulation, comes at the cost of increased computational complexity compared to conformal methods based on univariate scores. Building on these findings, future work will explore incorporating alternative transport-based methods into OT-CP to achieve better computational and statistical efficiency with respect to the scale of the problem."
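To make the OT step discussed in this thread concrete: with empirical measures, computing the Monge-Kantorovich ranks reduces to a linear assignment between the calibration scores and a reference sample on the unit ball. The sketch below uses SciPy's exact assignment solver in place of the POT simplex solver mentioned in the rebuttal, and an illustrative sampling scheme for the reference measure (both are assumptions made to keep the example dependency-light, not the authors' exact setup):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def mk_ranks(scores, seed=0):
    """Empirical Monge-Kantorovich ranks: optimally assign each multivariate
    score to a point of a reference sample drawn uniformly on the unit
    Euclidean ball (an illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = scores.shape
    # Reference sample: uniform direction times radius U^(1/d).
    g = rng.standard_normal((n, d))
    g /= np.linalg.norm(g, axis=1, keepdims=True)
    ref = g * rng.uniform(size=n)[:, None] ** (1.0 / d)
    # Squared-Euclidean cost; the optimal assignment is the empirical OT map.
    cost = ((scores[:, None, :] - ref[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)
    ranks = np.empty_like(scores, dtype=float)
    ranks[rows] = ref[cols]
    return ranks
```

The norm of a point's assigned reference location then acts as a center-outward depth, one illustrative way to reduce the multivariate score to a univariate quantity to which the standard quantile step applies.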
Summary: The paper introduces OT-CP, a novel conformal prediction method for multi-output tasks based on optimal transport. The method constructs quantile regions for multivariate conformity scores while ensuring finite-sample coverage and achieving asymptotic conditional coverage. It introduces a new nonconformity score composed of two functions: a multivariate scoring function that preserves information about the error and a mapping function leveraging optimal transport to transform the multivariate score into a real-valued measure. This approach extends univariate conformal prediction by establishing a mapping between the predicted distribution and the uniform distribution on the unit ball. Applications in regression and classification demonstrate improvements in conditional coverage and/or volume. Claims And Evidence: Yes. Methods And Evaluation Criteria: The method is evaluated on relevant synthetic and benchmark datasets using appropriate metrics, including marginal coverage, average size, and worst slab coverage. Experiments on synthetic and real data demonstrate that OT-CP outperforms ELL in regression (in terms of conditional coverage) and effectively balances coverage and informativeness in classification compared to IP, MS, and APS. However, the region size is not reported in the regression setting. Theoretical Claims: I checked the proof of Theorem 2.4 and the proof of Theorem 3.2, both of which appear to be correct. Experimental Designs Or Analyses: I find the empirical results unconvincing. OT-CP does not appear to stand out significantly in Figures 8, 9, and 10. The coverage is not necessarily better; in some cases, the prediction regions are larger, and the informativeness often seems lower compared to the IP and MS methods. In instances where OT-CP appears to perform better, additional analysis is needed. 
For example, in the *Experiments on real data* section for regression with OT-CP+, it would have been helpful to report the average size of the prediction sets and other metrics evaluating conditional coverage in comparison to ELL. Supplementary Material: Yes, all parts (Appendix A and B). Relation To Broader Scientific Literature: This paper lies at the intersection of optimal transport and conformal prediction, both of which are active research areas. Recent notable works in optimal transport include the introduction of quantile regions by Hallin et al. (2021), where vectors are ordered based on optimal transport, and their extension to regression by del Barrio et al. (2024). In conformal prediction, several new methods have been proposed. The paper appropriately references copula- and ellipsoid-based approaches. Additionally, it acknowledges the more flexible method introduced by Feldman et al. (2023), which is particularly relevant to the discussion. Essential References Not Discussed: [1], another approach based on a generative model, could also be mentioned. [1] Wang et al. "Probabilistic Conformal Prediction Using Conditional Random Samples." AISTATS 2023. Other Strengths And Weaknesses: #### **Strengths:** - Exploring quantile regions for multivariate scores is a natural and worthwhile research direction. - The framework is general and supports any multivariate conformity scores, making it applicable to both regression (e.g., residual-based scores) and classification (e.g., inverse probability scores). - Theoretical guarantees are provided for both marginal and asymptotic conditional coverage. #### **Weaknesses:** - The **OT-CP+** method does not satisfy finite-sample marginal coverage, which is a fundamental requirement in conformal prediction. - While computational aspects are briefly discussed, it remains unclear how **OT-CP+** compares computationally to **[1], [2], and [3]**.
If I am not mistaken, the method requires computing the optimal transport map for each test input $x$, which has a high computational complexity ($O(n^3)$, or $O(n^2)$ for approximations), making it computationally demanding. - Several relevant baselines are missing, particularly **[1], [2], and [3]** in regression. Notably, **[2]** can handle multimodal distributions, a capability that has not been discussed in this paper. - In regression, the volume of the quantile regions has not been compared, limiting the assessment of efficiency. - The motivation for using multivariate conformity scores is not entirely clear. Existing methods such as **[1], [2], and [3]** can already generate multivariate prediction regions without requiring multivariate conformity scores. - The lack of publicly available code hinders reproducibility. #### **References:** - **[1]** Feldman, Shai et al. *Calibrated Multiple-Output Quantile Regression with Representation Learning.* JMLR (2023). - **[2]** Wang, Zhendong et al. *Probabilistic Conformal Prediction Using Conditional Random Samples.* AISTATS (2023). - **[3]** Sun, Sophia et al. *Copula Conformal Prediction for Multi-Step Time Series Prediction.* ICLR (2024). Other Comments Or Suggestions: 1) In Assumption 3.1, two quantities are denoted by $\lambda$. Where do these quantities originate from? 2) In Example 2, the authors illustrate that a univariate score may fail to distinguish between the vectors (0.6,0.4,0) and (0,0.4,0.6), whereas a multivariate score can. I agree with this argument; however, later in the method, these vectors are mapped onto the unit ball. Given this transformation, isn't there a high likelihood that the ranking induced by this mapping would still assign very similar ranks to these two vectors? 3) Could the choice of the unit ball shape influence the final prediction regions? If so, what is the rationale behind selecting the $L_1$-norm ball over other possible choices?
Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive evaluation and address their comments with additional discussion and empirical results to illustrate the relevance and computational properties of OT-CP. *(Region size in regression)*. In Fig. 2, we reported the volumes to demonstrate that OT-CP achieves the desired coverage while producing smaller prediction sets than ELL, RECT. In Fig. 4, we initially did not include volumes since ELL fails to attain the desired conditional coverage, unlike OT-CP+. Based on your suggestion, we now monitor the volumes: https://ibb.co/ymW5Mym6 Overall, our results suggest that OT-CP(+) achieve coverage while ensuring efficiency, whereas other CP methods either fail to attain coverage or do so by producing unnecessarily large prediction sets. ELL generates smaller regions than OT-CP+, which may explain why it fails to achieve the desired coverage. We will add these results to our paper. *(OT-CP in Figures 8, 9, and 10)*. OT-CP provides more balanced coverage across classes, despite not being specifically designed for this task, while non-adaptive scores IP/MS can have low coverage (e.g., label 6 in Fig. 8). This benefit comes with better efficiency/informativeness than APS. This justifies our claim that OT-CP strikes "a favorable balance across all the considered metrics". Our revised version includes results averaged over labels: https://ibb.co/1GV3bsdq , https://ibb.co/nqTH1VCq. *(Average size of the prediction sets and other metrics with OT-CP+)*. Additional metrics complementing Fig. 4 will be included: https://ibb.co/ymW5Mym6 , https://ibb.co/YTtv36ZM Volumes of OT-CP+ are often larger than those from the ellipsoidal approach, but the smallest average set size is not necessarily the best, as argued in Angelopoulos and Bates (2023). The motivation for OT-CP+ is precisely to enhance adaptivity with respect to the input. *([1] could be mentioned)*.
This relevant reference was already included in the initial submission ("Conclusion and Perspectives"). *(OT-CP+ and marginal coverage).* We agree that the core guarantee of CP is finite-sample coverage. However, achieving adaptivity (i.e., obtaining prediction intervals with length adapted to the uncertainty relative to the considered test point), which is also a desirable property, is unattainable in finite samples. To introduce adaptivity, a sound strategy is to leverage universally consistent estimators, such as $k$NN. While this improves flexibility, it offers only asymptotic coverage, providing a different trade-off. *(OT-CP+ vs. [1], [2], and [3])*. A general comparison with [1-3] is an interesting direction for future work, but we explain below why it is beyond our current scope. *(Other relevant baselines)*. [1,2] were already cited and discussed in our initial submission. We now cite [3] and note that this work proposes a CP method tailored for time series forecasting. Comparing our approach to theirs would thus require adapting OT-CP to handle time series, which is challenging since extensions of MK quantiles to such data have never been studied. *(Multivariate prediction regions of [1], [2], and [3])*. Thank you for the question, which helps us refine our conclusions: - We do not claim OT-CP outperforms all CP methods across every metric, but rather present it as a general methodology, valid in regression settings, improving simple baselines with basic multivariate scores via optimal transport. - The suggested references are certainly relevant but appear to be more complementary than concurrent: [1] does not apply to black-box models, unlike OT-CP+ ; [2,3] propose CP strategies for handling complex univariate scores or time series. - Combining OT-CP and [1-3] can foster interesting future work. For instance, one could replace the ball-shaped regions with conformalized radii used in [3] by our MK quantile regions for greater flexibility. *(Available code)*.
We provided the code in the supplementary material. *(Assumption 3.1).* It implies $p(\cdot | x)$ is bounded away from $0$ and $+\infty$ on any compact subset of its convex support, a common assumption for transport quantiles (e.g., del Barrio et al., 2024). *("isn’t there a high likelihood that the ranking induced by this mapping would still assign very similar ranks to these two vectors?")*. Our mapping does not necessarily lead to similar rankings for these vectors, precisely because we use optimal transport rather than simple normalization. Therefore, MK ranks are $d$-dimensional vectors that integrate the overall geometry of the underlying distributions and capture directional differences (and not just the magnitude). *(Choice of the reference distribution)*. This is a relevant question, as choosing a reference distribution is an open research problem (even beyond CP) and is largely guided by heuristics. We chose the $L_1$-norm ball because the $L_1$-norm of our score yields IP, a commonly used univariate score in CP for classification: see below eq.(11). --- Rebuttal Comment 1.1: Comment: We thank the authors for their response. We have increased our score. --- Reply to Comment 1.1.1: Comment: Thank you for your time and for raising the score.
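The point about directional information in the reply above can be checked on Example 2's vectors directly (a toy illustration; the one-hot residual used below is a stand-in multivariate score for exposition, not the paper's eq. (11)):

```python
import numpy as np

# Example 2's two predicted probability vectors; suppose the true label is class 1.
p1 = np.array([0.6, 0.4, 0.0])
p2 = np.array([0.0, 0.4, 0.6])
y = 1

# Collapsed univariate score (inverse probability): both cases coincide.
assert (1 - p1[y]) == (1 - p2[y])

# A vector-valued score (here, a one-hot residual) keeps the direction of
# the error, so the two cases remain distinguishable.
e = np.eye(3)[y]
s1, s2 = p1 - e, p2 - e
assert np.isclose(np.linalg.norm(s1), np.linalg.norm(s2))  # same magnitude...
assert not np.allclose(s1, s2)                              # ...different direction
```

Since MK ranks are themselves $d$-dimensional, a transport-based ordering can separate such equal-magnitude scores by their direction, which a plain norm cannot.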
Summary: In the paper Optimal Transport-based Conformal Prediction, the authors propose a new conformal prediction framework leveraging optimal transport to produce multivariate score functions. They prove that such a framework also achieves distribution-free coverage. They validate the method for multi-output regression and multiclass classification using synthetic and real datasets. The results show how the proposed method is able to achieve asymptotic conditional coverage and provide adaptive predictive sets. Claims And Evidence: Everything seems reasonable to me. The most conflicting point I find is the use of the k-nearest neighbor to extend OT-CP to OT-CP+. I would expect that the fix of the standard OT-CP to achieve conditional coverage could be solved in a more fundamental way. That is, using the k-nearest neighbor approach seems more like an approximation than a natural solution to the problem. The dependence on the $k$ hyperparameter is also a concern. Methods And Evaluation Criteria: It wasn't very clear when OT-CP+ or base OT-CP was being used during the experiments. I would make that more explicit, or even add both in the comparison. Regarding the evaluation criteria, I think results per class in the classification section are a bit difficult to read. Maybe averaging across classes for the coverage and size of prediction sets is a more informative metric to report for the readers. The more granular class-specific coverage results could then be deferred to the Appendix. Theoretical Claims: Without diving very deep into the proofs, all claims made sense to me. Experimental Designs Or Analyses: The real datasets (lines 272) for the multi-output regression problem are not really explained; the reader is simply referred to the cited work. I would add more information about which kind of datasets these are. Otherwise, it's very difficult to tell the kind of regression problem we are facing.
Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Most references w.r.t. conformal prediction are discussed. Maybe regarding optimal transport more works could have been included, e.g. on how to solve the optimal transport problem more efficiently. I believe this might be pointed out by other reviewers with more background in this field. Other Strengths And Weaknesses: I think the use of optimal transport to construct the conformity scores, both for classification and regression, is quite interesting and novel. This way the conformity score is able to adapt to the data distribution, as shown in Figure 2b). Regarding some weaknesses, I missed a computational comparison between baselines and OT-CP in some tables or figures. As far as I know, optimal-transport algorithms can be computationally quite demanding, and I worry about how practical the application of this work would be compared to simpler solutions already proposed in the conformal prediction literature. How does including the k-nearest neighbor step affect the computational complexity? Is it negligible? Related to the point above, I missed the comments about the choice of the $k$ parameter in the $k$-nearest neighbor. How is this selected? Given that this is what yields (asymptotic) conditional coverage, I think this is rather important. Other Comments Or Suggestions: - In Figure 2 a), I would change the color of the prediction and MK prediction set, maybe: two different colors, or two shades of blue. It's a bit difficult to read the dashed line with that shade of blue over the grey points. - I find Figure 8 a bit difficult to read. At first glance, it seems all methods are comparable. Would it make sense to average results across all classes, instead of outputting results per class? Maybe this way it's easier to see the improvement using OT-CP.
If following this suggestion, I would probably add the results for the other benchmark datasets, rather than having them in Appendix B. - The use of $k$ for the $k$-nearest neighbor and $K$ for the number of classes in the classification problem can lead to errors when reading. I would maybe change the number of classes to a different variable (e.g. $L$, $M$?). - For the results for regression, are we using OT-CP+, or OT-CP? It's not clear from the caption and legends in the figures (Figures 2, 3, 4). Questions For Authors: - How computationally complex is it to apply OT-CP? How much time does it take compared to the baselines? - Can OT-CP be applied to full conformal prediction? - Is the asymptotic conditional coverage determined by the k-nearest neighbor step of the algorithm? - In Theorem 3.2, you state that, assuming $k \to \infty$ as $n \to \infty$, then $k/n \to 0$. Maybe I'm missing something trivial, but why? Do we also assume $n$ goes to infinity faster? - I just checked that it's mentioned in Appendix A.3 that $k$ is a function of $n$. You mention that you omit this dependency for clarity - could you add a note on that in the main paper / a referral to the Appendix? Thanks - Why do you assume $K > 3$ in the classification setup? - How would OT-CP behave for a classification problem where we assume a one-vs-all strategy, i.e. using a binary-to-multiclass framework based on sigmoids rather than softmax? Would results and assumptions still hold? Does this align with your future lines of work (lines 429-434)? - Figure 4, what is ELL? _ELL_ is not really mentioned in the text. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive and detailed feedback. *(OT-CP+ and the use of k-nearest neighbor)*. We agree that OT-CP+ is not the only way to make OT-CP adaptive: our proposed methodology aims to demonstrate that conditional coverage can be achieved with only a slight modification of the generic OT-CP framework. In addition to being easy to implement, the added k-NN step allows us to leverage established results on the consistency of quantiles (del Barrio et al., 2024), which serve as the foundation for our Theorem 3.2. Therefore, OT-CP+ should be seen as evidence of OT-CP's inherent flexibility, showing its ability to incorporate refinements to achieve specific properties, such as adaptivity. *(OT-CP or OT-CP+)*. The results for base OT-CP and OT-CP+ are presented in Sections 3.1 and 3.2, respectively: Figure 2 for OT-CP, and Figures 3 and 4 for OT-CP+. The figure titles now explicitly indicate the method used. *(Numerical results per class)*. We will add results marginalized over $Y$ to improve readability: https://ibb.co/1GV3bsdq ; https://ibb.co/nqTH1VCq ; https://ibb.co/p6jGSmXj See also our answer "OT-CP in Figures 8, 9, and 10" for reviewer bVvm. *(Real datasets)*. In addition to the reference, we included a table in Appendix B.1 summarizing the main statistics for each dataset. Since these are directly sourced from the literature rather than created in our work, we felt that no further details were necessary. That said, we are open to adding any relevant information if needed. *(Reference not discussed)*. Thank you for the suggestion. We will expand Remark 2.3 by referencing Sections 3 and 4 of Peyré & Cuturi (2019) for an overview of solvers for OT. *(Computational comparison between baselines and OT-CP)*. The calibration time corresponding to Fig. 4 (OT-CP+ compared with ELL) will be added to the appendix: https://ibb.co/YTtv36ZM *(Computational complexity)*.
Including the k-NN step increases the required complexity, as illustrated in https://ibb.co/hpyCyFN. For all the experiments carried out in this paper (up to $n=10\,000$ points), the computational time of OT-CP is fast thanks to the Python Optimal Transport library (which solves OT with a simplex implemented in C++). OT-CP+ is more demanding than OT-CP, as it solves one OT problem per new test point. The parameter $k$ drives the computational time, and our Theorem 3.2 only requires that it grows slower than $n$ (see next answers). *(Choice of k in k-NN)*. As the reviewer correctly pointed out, $k$ should be chosen carefully to ensure asymptotic conditional coverage. More precisely, our Theorem 3.2 shows that $k$ should grow to $+\infty$ as $n \to +\infty$ at a slower rate than $n$ (this point is further clarified below, in "Questions for Authors"). In our experiments, setting $k = n/10$ provides a trade-off between adaptive results and fast computation. Thank you for your comment: we agree that this is an important aspect to highlight, and it has been added to our paper. *(OT-CP and full CP)*. OT-CP can be adapted to full CP and would consist of solving OT on the entire training dataset and every candidate $(X_{test}, y)$. Similar to full CP methods, this would improve statistical efficiency and reduce variability by avoiding data splitting, at the price of increased computational cost. *(Asymptotic conditional coverage)*. Indeed, Theorem 3.2 applies to the quantile region computed with the $k$NN step, under some mild assumptions on $k$ as a function of $n$. We provide more explanations on this point below. *(Assumption on k and n)*. The assumption is $k \to +\infty$ as $n \to +\infty$ **and** $\frac{k}{n} \to 0$. This means $k$ grows to $+\infty$ as $n \to +\infty$, and at a slower rate than $n$ (for instance, $k = \log(n)$). This assumption is taken from del Barrio et al.
(2024) and ensures the consistency of the sequence of weight functions in their Theorem 3.3, which we use (through their ensuing Corollary 3.4) to prove our Theorem 3.2. Note that this is a classical assumption to ensure the universal consistency of $k$NN estimators (e.g., Corollary 19.1 in [R1]). *(Classification with $K \geq 3$)*. Our method can also be applied to binary classification ($K = 2$), but our score (11) would have an intrinsic dimensionality of 1, since the vector of estimated class probabilities belongs to the simplex. Because this setting is not the most relevant for us, we chose to focus on $K \geq 3$. *(One-vs-all strategy)*. OT-CP can be applied to one-vs-all classification by simply selecting an appropriate score function (i.e., based on $K$ sigmoids). Our theoretical results remain valid in this setting. This is an interesting research direction that could further broaden the applicability of OT-CP. *(ELL)*. ELL refers to the ellipsoidal approach with adaptive covariance estimation from Messoudi et al. (2022): see l.292-297. We have made this point clearer in our revised paper. [R1] Biau, Devroye (2015). Lectures on the nearest neighbor method. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing my concerns and the other reviewers'. Like **Reviewer Gw2r**, I still think that the proposed method could benefit from further improvements, especially in computational complexity, considering that the method often exhibits the same performance as other baseline methods. However, I believe the paper is still interesting for the CP community, and that's why I raise my score. --- Reply to Comment 1.1.1: Comment: Thank you for your response and for increasing your score; we are glad that we addressed your concerns. In our revised version, we have added more discussion on the advantages and limitations of our methodology, in particular regarding the computational complexity, as outlined in our response to Reviewer Gw2r.
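The $k$NN step discussed in this thread (with $k = n/10$ in the rebuttal's experiments) can be sketched as follows, simplified here to a univariate score; the actual OT-CP+ instead solves one OT problem on the $k$ neighbours' multivariate scores per test point:

```python
import numpy as np

def knn_local_quantile(x_test, x_cal, scores_cal, k, alpha=0.1):
    """Localized calibration: take the conformal quantile over the k
    calibration points nearest to x_test (a sketch of the adaptivity idea,
    with a univariate score standing in for the MK-rank machinery)."""
    dists = np.linalg.norm(x_cal - x_test, axis=1)
    neighbours = np.argsort(dists)[:k]
    local = np.sort(scores_cal[neighbours])
    rank = min(int(np.ceil((k + 1) * (1 - alpha))), k)
    return local[rank - 1]
```

Under the theorem's regime ($k \to \infty$ with $k/n \to 0$), such a localized threshold can track the conditional distribution of the score, which is what underlies the asymptotic conditional coverage claim.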
Summary: The authors tackle the problem of conformal prediction in regression and classification settings when the target random variable is multivariate. To do so, they use the quantile definition of [1], which defines the quantile function of a r.v. $Y \in \mathbb{R}^n$ as the Euclidean optimal transport map between a uniform random variable $U \in \mathbb{R}^n$ and $Y$. [1] Chernozhukov, V., Galichon, A., Hallin, M., and Henry, M. Monge–Kantorovich depth, quantiles, ranks and signs. The Annals of Statistics, 2017 ## update after rebuttal My novelty considerations have been addressed in the rebuttal and so I increased the recommendation to weak accept. Claims And Evidence: They claim to "introduce a novel general CP framework" for multivariate conformal prediction. However, since the concepts of multivariate quantiles, ranks and confidence sets are already defined in [1] and derivative work, it is not clear to me what the main contribution is. Methods And Evaluation Criteria: Method and evaluation are sound, there is both synthetic and real data and the target dimension is reasonable. Theoretical Claims: I did not check. Experimental Designs Or Analyses: The experiment design seems sound as it is described in the paper and the code is provided. I did not check the code. Supplementary Material: I just checked that code was provided. Relation To Broader Scientific Literature: The paper follows the work of [1] and its derivative papers, but it is not clear to me what the main novelty is. Essential References Not Discussed: It may be worth noting, or including in the benchmarks, the paper [2], which also applies the multivariate quantile function defined in [1] to regression tasks. [2] Fast Nonlinear Vector Quantile Regression, AA Rosenberg, S Vedula, Y Romano, AM Bronstein, ICLR 2023 Other Strengths And Weaknesses: The paper is not super clear and the figures need clearer descriptions. Other Comments Or Suggestions: No.
Questions For Authors: I would like to have the main contribution of the paper better clarified. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## General comment We would like to thank all the reviewers for their time and feedback. We have revised the paper accordingly and provide detailed responses below. 1. We emphasize that integrating multivariate quantiles into the conformal prediction framework while ensuring theoretical coverage guarantees is non-trivial. The property that ranks follow a uniform distribution, under exchangeability, is crucial. In the literature of multivariate quantiles, optimal transport tools extend this critical property (see, e.g., Hallin et al. 2021). However, the stability arguments invoked in standard proofs of the quantile lemma (see, e.g., Tibshirani et al. 2019) do not directly apply here, as they rely on unresolved theoretical questions in optimal transport. To better highlight this subtle point, we clarified Step 2 of our OT-CP methodology and explicitly demonstrate how our approach provides theoretical coverage guarantees. 2. We have added the suggested references to improve the discussion on related work. 3. We systematically computed the volumes of the prediction regions, reported the running times of the method, and conducted new numerical experiments to better support our claims. We now provide detailed responses below. While we have carefully considered all reviewer comments, we are sometimes unable to provide an exhaustive answer for every point raised due to the character limit. [Tibshirani et al., 2019] Tibshirani, Foygel Barber, Candès, Ramdas, Conformal prediction under covariate shift, NeurIPS 2019 [Hallin et al., 2021] Hallin, Del Barrio, Cuesta-Albertos, Matrán, Distribution and quantile functions, ranks and signs in dimension d: A measure transportation approach, Annals of Statistics, 2021 ## Answer to Reviewer RqtM We thank the reviewer for their comments. We appreciate the opportunity to clarify our contribution.
*"However, since the concepts of multivariate quantiles, ranks and confidence sets are already defined in [1] and derivative work, it is not clear to me what the main contribution is."* While optimal transport-based quantiles are not new, the novelty of our work lies in integrating them into an effective and flexible conformal prediction (CP) framework designed for multivariate scores. This aligns with the CP literature, where quantiles typically serve as building blocks for uncertainty quantification strategies, rather than primary contributions. More precisely, unlike [1], we address several nontrivial challenges specific to Monge-Kantorovich quantiles for CP, including establishing both nonasymptotic and asymptotic coverage guarantees, selecting reference rank vectors $\{U_i\}_{i=1}^n$ that are appropriate for multivariate scores, and comparing performance with existing CP methods. *"The paper follows the work of [1] and its derivative papers but it is not clear to me what the main novelty is."* Our paper goes beyond prior work on transport quantiles, including [1], by integrating them in conformal prediction, which had not been explored before. By combining these two lines of work, we make the contributions outlined above but also offer a novel perspective that could be of interest to both the machine learning and optimal transport communities. *"May be worth noting/including in the benchmarks the paper [2], that also applies the multivariate quantile function defined in [1] to regression tasks."* We thank the reviewer for the additional reference [2], which we have added to the introduction. Nevertheless, we emphasize that this paper leverages the Monge-Kantorovich quantiles from [1] to address quantile regression, and not conformal prediction. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification, I will increase my recommendation. 
--- Reply to Comment 1.1.1: Comment: Thank you: we appreciate that you took the time to consider our clarification and that you recommend acceptance.
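As an illustration of how the reference rank vectors $\{U_i\}_{i=1}^n$ discussed above can be matched to multivariate scores in practice, here is a minimal sketch of empirical Monge-Kantorovich ranks computed by solving a linear assignment problem. This is a generic construction in the spirit of Hallin et al. (2021), with names and grid choices of our own; it is not the OT-CP code.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def empirical_mk_ranks(scores, ref_ranks):
    """Map each multivariate score to one reference rank vector by solving
    min over permutations sigma of sum_i ||scores[i] - ref_ranks[sigma(i)]||^2,
    i.e. the empirical (discrete) optimal-transport rank map."""
    cost = ((scores[:, None, :] - ref_ranks[None, :, :]) ** 2).sum(axis=-1)
    rows, cols = linear_sum_assignment(cost)
    perm = np.empty(len(scores), dtype=int)
    perm[rows] = cols
    return ref_ranks[perm]

rng = np.random.default_rng(1)
# reference rank vectors: a regular 8 x 8 grid on [0, 1]^2 (one common choice)
g = (np.arange(8) + 0.5) / 8.0
ref = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
scores = rng.normal(size=(64, 2))  # 64 two-dimensional calibration scores
ranks = empirical_mk_ranks(scores, ref)  # one grid point per score
```

Because the assignment is a permutation, each reference rank is used exactly once, which is the discrete analogue of the uniform-rank property that underpins the coverage guarantees discussed in the rebuttal.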
How to Evaluate and Mitigate IP Infringement in Visual Generative AI?
Accept (poster)
Summary: This paper explores the intellectual property (IP) infringement risks posed by state-of-the-art visual generative AI models, such as DALL-E 3, Stable Diffusion XL. The study shows that these models can generate content resembling IP-protected characters (e.g., Spider-Man, Iron Man, Superman) even when given prompts that do not explicitly mention their names. The authors develop a benchmarking framework to evaluate infringement and propose a mitigation strategy TRIM that detects and suppresses infringing outputs using guidance techniques in diffusion models. Claims And Evidence: Yes. Methods And Evaluation Criteria: The benchmarking approach is well-designed and directly relevant to the problem. Theoretical Claims: No formal proofs are presented. Experimental Designs Or Analyses: The experiments are well-structured. Supplementary Material: No supplementary material was provided. Relation To Broader Scientific Literature: The study is highly relevant to AI ethics, fair use in AI-generated content, and legal compliance in generative models. Essential References Not Discussed: None. Other Strengths And Weaknesses: **Strengths:** This paper introduces a well-structured benchmarking framework to systematically evaluate IP infringement across multiple AI models and characters. The proposed TRIM method effectively prevents IP infringement without requiring model retraining. **Weaknesses:** The proposed method depends on predefined lists of IP-protected characters. How would the system adapt to new characters or lesser-known copyrighted content? Can any existing mitigation techniques be compared with TRIM? This paper relies on human evaluation to measure IP infringement in the generated content. While the authors mention that the human evaluators are familiar with the characters involved, they do not provide sufficient details about the evaluators' backgrounds or their familiarity with intellectual property (IP) law.
Assessing IP infringement is not just about visual similarity; it also involves legal judgments about whether the generated content constitutes a violation of copyright. The paper provides only one visual example (Figure 12) comparing the generated images before and after applying the proposed TRIM method. While this example effectively demonstrates the mitigation of IP infringement for the character Spider-Man, it is insufficient to fully evaluate the generalizability and effectiveness of the method across different characters and scenarios. The authors should include more visual examples of the generated content, especially for other well-known IP-protected characters like Iron Man, Superman, and Batman, as well as non-human characters. Additionally, visual examples of failed cases or edge cases (e.g., where TRIM fails to mitigate infringement or where it overly suppresses the generation of non-infringing content) would provide a more comprehensive understanding of the method's strengths and limitations. Other Comments Or Suggestions: None. Questions For Authors: see Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your insightful comments. We hope the following results and clarifications can address your concerns. Please let us know if anything is still unclear. We are more than willing to provide further clarification and conduct more experiments if needed. **Q1:** The proposed method depends on predefined lists of IP-protected characters. How would the system adapt to new characters or lesser-known copyrighted content? **A1:** Thanks for your thoughtful question. In this paper, we focus on defending against IP infringement involving well-known, existing intellectual property (IP) content, as these characters are typically owned by major entertainment companies and are associated with significant financial value. Thus, this IP content can be listed in a predefined list. Since IP infringement by visual generative AI often stems from memorization of training data, it is less likely for text-to-image and text-to-video models to reproduce infringing content related to newly created or lesser-known copyrighted works. As a future direction, we aim to make the system adaptable to such content without retraining the large multimodal language models used in the detection process, for example through knowledge editing techniques like those proposed by Cheng et al. Cheng et al., Can We Edit Multimodal Large Language Models? EMNLP 2023. **Q2:** Can any existing mitigation techniques be compared with TRIM? **A2:** Thank you very much for your insightful question. Please refer to Reviewer-uqQi-A3 and Reviewer-uqQi-A4. **Q3:** This paper relies on human evaluation to measure IP infringement in the generated content. While the authors mention that the human evaluators are familiar with the characters involved, they do not provide sufficient details about the evaluators' backgrounds or their familiarity with intellectual property (IP) law.
Assessing IP infringement is not just about visual similarity; it also involves legal judgments about whether the generated content constitutes a violation of copyright. **A3:** Thanks for your insightful comments. We ensure our annotators are familiar with the judgment process in the real-world related lawsuit case Andersen v. Stability AI Ltd., 23-cv-00201-WHO. Thus, we believe our annotators can be considered as legally knowledgeable human annotators. We acknowledge that the annotators are not experts in intellectual property infringement law (e.g., they do not hold advanced degrees such as a doctoral degree in this field), and we will clarify this in the limitations section of the revised version. **Q4:** The paper provides only one visual example (Figure 12) comparing the generated images before and after applying the proposed TRIM method. While this example effectively demonstrates the mitigation of IP infringement for the character Spider-Man, it is insufficient to fully evaluate the generalizability and effectiveness of the method across different characters and scenarios. The authors should include more visual examples of the generated content, especially for other well-known IP-protected characters like Iron Man, Superman, and Batman, as well as non-human characters. Additionally, visual examples of failed cases or edge cases (e.g., where TRIM fails to mitigate infringement or where it overly suppresses the generation of non-infringing content) would provide a more comprehensive understanding of the method's strengths and limitations. **A4:** Thank you very much for your constructive comments and valuable suggestions. Please see https://anonymous.4open.science/r/GAI_IP_Infringement_submission-0A27/more_visualizations.pdf for the added visualizations.
We have added more visual examples of the samples generated by the default model and by our method for different characters and contents (i.e., Spider-Man, Iron Man, Incredible Hulk, Superman, Batman, and Coca-Cola). We also added examples of the failure cases accordingly. We will add more visual examples in our revised paper. --- Rebuttal Comment 1.1: Comment: Thank you for the clarification. I will consider this rebuttal result in my final decision.
Summary: The paper presents a method for creating prompts that may cause T2I/T2V models to generate images infringing on IP rights and show that IP infringement issues are widespread across different visual generative models based on their constructed prompts. They then develop a defensive method which combines detecting IP infringing contents using LLM and VLM and suppress IP infringement by modifying the diffusion generation process. Experiments show that the designed defense can effectively reduce the IP infringement in visual generative models. Claims And Evidence: This is an empirical paper and the claims in this paper are well supported by the experiments. Methods And Evaluation Criteria: The weaknesses of the proposed method: * As the proposed defense works by modifying and controlling the diffusion process of the model, this method might require the white-box access of the T2I/T2V models. Its application on the models with only black-box access might be limited. * The proposed defense method is specifically designed for diffusion-based Text-to-Image (T2I) and Text-to-Video (T2V) models. Its effectiveness may not extend to other architectures such as GANs or autoregressive models like VAR [1]. This limitation exists because the defense operates by suppressing intellectual property infringement through targeted modifications to the diffusion process itself. Consequently, its transferability to models with fundamentally different architectures remains uncertain. * The revised generation process in the proposed defense could introduce large runtime overheads. [1] Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction. NeurIPS 2024. Except the above weaknesses, the methods and the evaluation criteria in this paper are easy-to-understand and sound. Theoretical Claims: This paper is mainly empirical based and it does not include theoretical claim and proof. 
Experimental Designs Or Analyses: The experiment designs and the results are clear. I notice that the IP infringement rates for different characters are different. For example, the average infringement rate for Spider Man is larger than the rate for Iron Man. Could the authors provide some analysis of this phenomenon? Supplementary Material: Yes. The supplementary materials in this paper include the results on different types of IP contents, discussion about the efficiency, comparison to other defenses for IP infringement, and more visualizations. Relation To Broader Scientific Literature: The key contributions of this paper can be summarized as: 1. Proposing an approach to evaluate the IP infringement on T2I/T2V models. 2. Showing that the IP infringement issues on recent T2I/T2V models are common. 3. Proposing an effective defense method for IP infringement on T2I/T2V models. These key contributions are related to the existing research in both the visual generative AI domain (such as the diffusion model community) and the field of AI safety and governance. Essential References Not Discussed: It is suggested to add a discussion about the transferability of the proposed defense to other model architectures besides diffusion, such as autoregressive models [1]. Other Strengths And Weaknesses: Strengths: * The topic is important and trendy. The findings and the contributions in this paper could be significant for both the visual generative AI field and the field of AI safety and governance. * The method for evaluating the IP infringement in T2I/T2V models is new, and it's a general method to test the IP infringement for different contents. * The performance of the proposed defense method is promising. * This paper is well-structured and easy-to-follow. Weaknesses: * The proposed defense method might be specifically designed for diffusion-based Text-to-Image (T2I) and Text-to-Video (T2V) models.
* The proposed defense method requires the white-box access to the models. * The revised generation process in the proposed defense could introduce large runtime overheads. * There are various symbols used in this paper, especially Algorithm 1. It is suggested to have a table to summarize the meaning of different symbols. Considering the strengths and the weaknesses of this paper, I lean to accept. Other Comments Or Suggestions: See above Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments. We hope the following results and clarifications can address your concerns. Please let us know if anything is still unclear. We are more than willing to provide further clarification and conduct more experiments if needed. **Q1:** As the proposed defense works by modifying and controlling the diffusion process of the model, this method might require the white-box access of the T2I/T2V models. Its application on the models with only black-box access might be limited. **A1:** Thank you for your insightful comments. In this paper, we focus on the threat model where the defender has white-box access to the model—a practical assumption, as IP infringement concerns typically arise for model owners themselves. For example, companies like OpenAI and Midjourney Inc. are likely to protect their web or API-based services (e.g., DALL-E 3, Midjourney) from IP violations. With white-box access, they can directly integrate our method into their generation pipelines. Defense under the threat model where the defender only has black-box access to the model is out of the scope of this paper and will be our future work. We will clarify this point further in the revised version. **Q2:** Transferability to autoregressive models. **A2:** Thank you very much for your thoughtful comment. In this paper, we focus on the defense for diffusion-based visual generative models. Yao et al. have shown that diffusion-based models outperform the purely autoregressive model VAR in image generation. Specifically, many diffusion models (e.g., MDTv2, REPA, and LightningDiT) achieve better performance than VAR. Most state-of-the-art open-source and proprietary visual generative models—such as Midjourney, DALL-E, Flux, and Ideogram—are based on diffusion. Even the recent GPT-4o image generation is possibly implemented by a combination of autoregressive and diffusion components.
Developing defense methods for autoregressive models is an important direction for future work. We will clarify further in the revised version of our paper. Yao et al., Reconstruction vs. Generation: Taming Optimization Dilemma in Latent Diffusion Models. CVPR 2025. **Q3:** The revised generation process in the proposed defense could introduce large runtime overheads. **A3:** Thanks for your helpful questions. Table 7 in the Appendix reports the average runtime of each process on Stable Diffusion XL. Image generation with classifier-free guidance on detected IP names takes nearly the same time as standard generation (32.25s vs. 32.04s). Character name detection (0.42s) and image infringement detection (2.42s) add minimal overhead. For benign images, the total added cost is just 0.21s. For infringing images, the runtime roughly doubles due to a second diffusion step. However, since only a small fraction of images (those detected with IP infringement issues) proceed to the second diffusion process (lines 10–14 in Algorithm 1), and this step effectively mitigates IP infringement, we believe the runtime cost of our method to be acceptable. We will include a more detailed discussion in the revised version of our paper. **Q4:** I notice that the IP infringement rates for different characters is different. For example, the average infringement rates for Spider Man is larger than the rates for Iron Man. Could the authors provide some analysis to this phenomenon? **A4:** Thanks for your insightful question. Since visual generative models are trained in a data-driven manner, potential IP infringement by these models can often be traced back to memorization of the training data. Studies by Somepalli et al. and Carlini et al. have shown that the degree of memorization is correlated with the number of duplicate samples in the training set. As a result, characters with more training samples tend to have higher IP infringement rates. 
We will include a more detailed analysis of this phenomenon in the revised version of our paper. Somepalli et al., Understanding and Mitigating Copying in Diffusion Models. NeurIPS 2023. Carlini et al., Extracting Training Data from Diffusion Models. USENIX Security 2023. **Q5:** There are various symbols used in this paper, especially Algorithm 1. It is suggested to have a table to summarize the meaning of different symbols. **A5:** Thank you very much for your helpful suggestion. We will add the table for summarizing the meaning of different symbols accordingly in our revised version.
Summary: This paper illustrates that IP Infringement often happens under both ‘Name-based Lure Prompt’ and ‘Description-based Lure Prompt’ situations. Then, this paper proposes a defensive method, named TRIM, to mitigate the infringement. TRIM blocks the targeted name and detect the infringement to regenerate the image. This paper investigates an interesting problem: how to evaluate copyright infringement in text-to-image and text-to-video models, how severe these copyright issues are for current state-of-the-art models, and how to mitigate such issues. It proposes methods to construct "lure prompts" for testing copyright infringement severity and also introduces techniques to mitigate copyright infringement during generation time. Based on the experimental results, current visual generative models have severe copyright infringement issues, and the proposed mitigation framework can effectively reduce the probability of copyright infringement. This is a well-written paper. The studied problem in this paper is highly practical, timely and impactful. The proposed "lure prompts" based copyright infringement evaluation is novel. The proposed mitigation framework is simple yet effective. There are some concerns regarding the evaluation especially the results under adversarial jailbreak attacks. Claims And Evidence: The IP Infringement problem is supported by clear and convincing evidence from the experiment and references. The main claims in this paper, such as the severe copyright issues associated with the current state-of-the-art models and the effectiveness of designed mitigation are all well supported by the evidence in the experiment results. Methods And Evaluation Criteria: In general, the proposed method to construct the "lure prompts" and evaluate the copyright infringement in text-to-image and text-to-video models are reasonable. The proposed mitigation framework also makes sense, and shows effectiveness with acceptable efficiency overhead based on the results. 
The reason for the selection of the evaluation criteria is clearly discussed in Section 4. Regarding the evaluation criteria for the proposed mitigation method, since it initially employs a VLM to detect copyright infringement in generated images, it would be beneficial to measure the recall rate of this detection process. More explanation might be needed: 1. The weight for the U-Net needs more explanation. Currently, authors only claim that the value is 7.5 without further explanation. 2. Some other notations also need to be explained, e.g. \tilde{\epsilon}. Theoretical Claims: N/A Experimental Designs Or Analyses: Overall, the experiment results in this paper are sufficient to support the main claims of this paper. The weakness of the experimental designs is that the evaluation under adversarial settings for the proposed mitigation method is missing. For example, there are some adversarial jailbreak methods for forcing text-to-image models to generate copyright infringing images, such as [A]. An evaluation of the robustness of the proposed mitigation method against such attacks would be helpful. The criterion for human judgment should be listed in the experiment settings. It’s better to explain why the CLIP score is suitable to evaluate the performance of the proposed method. [A] Kim et al., Automatic Jailbreaking of the Text-to-Image Generative AI Systems. Supplementary Material: Yes, I have checked the entire supplementary material. Relation To Broader Scientific Literature: The contribution of this paper is suitable for the scope of IP Infringement, and it could have a large impact on the text-to-visual-content community. Essential References Not Discussed: Kim et al. propose a jailbreak attack forcing text-to-image models to generate copyright infringing content. The evaluation of the proposed mitigation method under this attack is unclear. Kim et al., Automatic Jailbreaking of the Text-to-Image Generative AI Systems.
Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: 1. What is the recall rate of this VLM-based detection process in the proposed mitigation method? 2. What is the robustness of the proposed mitigation method under jailbreak attacks targeting on copyright infringement? 3. Does the evaluation for the experimental results refer to utilising techniques such as crowdsourcing? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your helpful comments. We hope the following results and clarifications can address your concerns. Please let us know if anything is still unclear. We are more than willing to provide further clarification and conduct more experiments if needed. **Q1:** Recall rate of VLM-based detection process. **A1:** Thank you very much for your constructive question. The average detection recall rates on different visual generative models are shown as follows: Character | Recall ---- | --- Spider Man | 0.98 Iron Man | 0.99 Incredible Hulk | 0.99 Super Mario | 1.00 Batman | 0.96 SuperMan | 0.98 As can be observed, the VLM-based detection has high recall rates for detecting IP infringement. We will add more results in our revised version. **Q2:** The weight for the U-Net needs more explanation. Currently, authors only claim that the value is 7.5 without further explanation. **A2:** Thanks for your thoughtful comment. The weight applied to the U-Net controls the trade-off between suppressing IP infringement and maintaining generation quality. A higher weight leads to stronger suppression, but it may also degrade the quality of the generated images (such as the text-image alignment). Wang et al. found that a value of 7.5 provides a good balance in most classifier-free diffusion guidance settings. Therefore, we adopt 7.5 as the default value in our setup. We will clarify this point in the revised version of our paper. Wang et al., Analysis of Classifier-Free Guidance Weight Schedulers. TMLR 2024. **Q3:** Some other notations also need to be explained, e.g. \tilde{\epsilon}. **A3:** Thank you very much for your useful comment. \tilde{\epsilon} denotes the noise-prediction function in the revised diffusion process, mapping the input noise, the prompt, and the timestep to the predicted noise. We will make this clearer in the revised version. **Q4:** Robustness of the proposed mitigation method under jailbreak attacks targeting copyright infringement.
**A4:** Thank you very much for your constructive questions. We conducted the suggested experiments to evaluate our defense against the adversarial infringement method proposed by Kim et al. [A], using Stable Diffusion XL. The results on Spider-Man are as follows: Method | IP Infringement Rate ---- | --- Undefended | 81.6% TRIM (Ours) | 6.8% The results on Superman are as follows: Method | IP Infringement Rate ---- | --- Undefended | 94.6% TRIM (Ours) | 8.4% Our method shows strong robustness against adversarial infringement, as the classifier-free guidance mechanism effectively constrains the output space, preventing alignment with protected IP. We will include additional experiments and discussions in the revised version. Thank you again for your valuable feedback. **Q5:** The criterion for human judgment should be listed in the experiment settings. **A5:** Thanks for your useful question. In our evaluation, the annotators are familiar with the judgment process used in the real-world lawsuit Andersen v. Stability AI Ltd., 23-cv-00201-WHO. They are instructed to assess whether an image constitutes IP infringement by selecting either "yes" or "no," based on their understanding of the legal reasoning and criteria outlined in that case. We will make this clearer in our revised version. **Q6:** It's better to explain why the CLIP score is suitable to evaluate the performance of the proposed method. **A6:** Thanks for your thoughtful comment. The CLIP score is used to evaluate text-image alignment in visual generative models, which is a key quality metric for text-to-image generation. Empirical studies, such as Hessel et al., have shown that the CLIP score correlates strongly with human judgments of how well an image matches its corresponding caption. We will make this clearer. Hessel et al., CLIPScore: A Reference-free Evaluation Metric for Image Captioning. EMNLP 2021.
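For reference, the CLIPScore of Hessel et al. (2021) mentioned in A6 reduces to a rescaled, clipped cosine similarity between image and text embeddings (with weight $w = 2.5$ in the original paper). The sketch below uses toy stand-in vectors rather than real CLIP features; the function name is ours.

```python
import numpy as np

def clip_score(img_emb, txt_emb, w=2.5):
    """Reference-free CLIPScore (Hessel et al., 2021):
    w * max(cos(img_emb, txt_emb), 0), with w = 2.5 as in the paper."""
    cos = float(img_emb @ txt_emb) / (
        np.linalg.norm(img_emb) * np.linalg.norm(txt_emb)
    )
    return w * max(cos, 0.0)

# toy embeddings standing in for real CLIP image/text features
aligned = clip_score(np.array([1.0, 0.0]), np.array([1.0, 0.0]))     # 2.5
orthogonal = clip_score(np.array([1.0, 0.0]), np.array([0.0, 1.0]))  # 0.0
```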
**Q7:** Does the evaluation for the experimental results refer to utilising techniques such as crowdsourcing? **A7:** Thank you very much for your constructive question. Yes, the evaluation of our experimental results is based on a form of crowdsourcing. We will make it more clear.
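The guidance step discussed in A2 above (the U-Net guidance weight of 7.5) can be sketched in its standard negative-prompt form of classifier-free guidance. The arrays below are toy stand-ins for U-Net noise predictions, and this is an illustrative sketch rather than the TRIM implementation.

```python
import numpy as np

def guided_noise(eps_cond, eps_neg, w=7.5):
    """Classifier-free guidance where the unconditional branch is replaced
    by a negative prompt (e.g. a detected IP name):

        eps = eps_neg + w * (eps_cond - eps_neg)

    w = 1 recovers the plain conditional prediction; a larger w (7.5 here)
    pushes the denoising direction further away from the negative concept.
    """
    return eps_neg + w * (eps_cond - eps_neg)

# toy noise predictions standing in for two U-Net forward passes
eps_cond = np.full((4, 4), 0.2)   # conditioned on the user prompt
eps_neg = np.full((4, 4), -0.1)   # conditioned on the suppressed IP name
eps = guided_noise(eps_cond, eps_neg, w=7.5)
```

The trade-off described in A2 is visible in the formula: increasing `w` amplifies the difference term, suppressing the negative concept more strongly but moving the prediction further from either raw U-Net output.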
Summary: This paper discovers that SOTA diffusion models tend to generate content that very highly resembles data that could be protected by IP rights, for example, Marvel characters. Their human evaluation studies show that the risk with these characters/concepts is very high. They also propose a mitigation method that uses the names/descriptions of these characters as a negative prompt in classifier-free guidance. Claims And Evidence: The paper claims to recognize protected content and mitigate copyright infringement by modifying the generation process. Although their method significantly improves upon existing models, I believe the corpus of selected characters/content is quite limited. These models have been shown to memorize many images within the training data and the style of certain artists. But the authors do not test their model on these concepts. Therefore, the claims may be overstated. Methods And Evaluation Criteria: The application at hand is mitigating the generation of copyrighted content in SOTA diffusion models. The methods and the evaluation criteria make sense in the context of the problem considered. Theoretical Claims: No theoretical claims are made in this paper. Experimental Designs Or Analyses: I think the experimental design and analysis are mostly sound. However, I would like to highlight the following - The proposed method will fail in a white-box adversarial attack as the adversary can simply disable the infringement detection model. Many other memorization papers in diffusion models like [1,2,3,4] have the same problem. Can the authors compare their work with these papers?
[1] Understanding and Mitigating Copying in Diffusion Models, Somepalli et al [2] Exploring Local Memorization in Diffusion Models via Bright Ending Attention, Chen et al [3] Unveiling and Mitigating Memorization in Text-to-image Diffusion Models through Cross Attention, Ren et al [4] Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models, Hintersdorf et al Concept Erasure in diffusion models can erase the knowledge of the objects from the model itself. These models can still generate safe images under white box attacks as the model parameters have been changed to forget undesired concepts. Is the proposed better than concept erasure methods? How do the authors propose to handle white-box attacks? [5] Erasing Concepts from Diffusion Models, Gandikota et al [6] Unified Concept Editing in Diffusion Models, Gandikota et al [7] ConceptPrune: Concept Editing in Diffusion Models via Skilled Neuron Pruning, Chavhan et al [8] Ablating Concepts in Text-to-Image Diffusion Models, Kumari et al Supplementary Material: Yes, I have looked at the additional results in the supplementary. Relation To Broader Scientific Literature: The paper contributes to mitigating memorization in diffusion models, which is an upcoming field. The key contributions of this paper are - using an infringement detection module in the diffusion model pipeline, and using the 'protected' concept as negative prompt for classifier-free guidance. I believe the method proposed in this paper is related to other papers in this field where they propose some form of prompt perturbation. Essential References Not Discussed: I believe some papers like [1, 2], related to memorization in diffusion models, have not been discussed. [1] Finding NeMo: Localizing Neurons Responsible For Memorization in Diffusion Models, Hintersdorf et al [2] Memorized Images in Diffusion Models share a Subspace that can be Located and Deleted, Chavhan et al Other Strengths And Weaknesses: Strengths - 1. 
The paper is well written and the methodology is sound. 2. Their method is very effective in protecting copyrighted data. Please see Experimental Designs or Analyses section for Weaknesses. Other Comments Or Suggestions: I believe this paper could have a significantly stronger impact if Groot was added. Because, I am Groot. Questions For Authors: Please see Experimental Designs or Analyses section. I have some extra questions - 1. How did the users in your human evaluation detect infringement? More specifically, were they given a 'yes' or 'no' option or a Likert scale? 2. Would it be useful to have an LLM perturb the prompt in a way such that the model generates a different image? What are your thoughts on this? Code Of Conduct: Affirmed. Overall Recommendation: 4
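The mitigation the summary describes, supplying protected names/descriptions as a negative prompt inside classifier-free guidance, can be sketched in a few lines. This is a minimal illustration of the standard guidance update only; the function name `guided_noise` and the toy arrays are assumptions for illustration, not the paper's actual implementation.

```python
import numpy as np

def guided_noise(eps_cond, eps_neg, scale):
    """Classifier-free guidance step in which the usual unconditional branch
    is replaced by the prediction for a negative prompt (e.g. the protected
    character's name), steering generation away from that concept."""
    return eps_neg + scale * (eps_cond - eps_neg)

# Toy noise predictions standing in for a diffusion model's outputs:
eps_cond = np.array([1.0, 0.0])   # conditioned on the user's prompt
eps_neg = np.array([0.0, 1.0])    # conditioned on the protected concept
print(guided_noise(eps_cond, eps_neg, 7.5))
```

With `scale > 1` the update extrapolates away from the negative-prompt direction, which is what lets the negative prompt suppress the protected concept while leaving unrelated generations largely untouched.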
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful comments. **Q1:** Corpus of selected characters/content. **A1:** Thank you very much for your constructive comments. Besides the results in our main paper, we also have results on different types of non-human IP content in Table 6 in the Appendix. We also conducted more experiments to include more IP content. The results alongside the characters in our main paper are as follows:

| IP | Method | IP Infringement Rate |
| ---- | ---- | ---- |
| Mercedes-Benz | Undefended | 42.4% |
| Mercedes-Benz | TRIM (Ours) | 0.0% |
| Nike | Undefended | 33.6% |
| Nike | TRIM (Ours) | 0.8% |
| Coca-Cola | Undefended | 76.4% |
| Coca-Cola | TRIM (Ours) | 2.0% |

The model used here is Stable Diffusion XL. The results demonstrate that our defense method generalizes to different media formats and different IP content types in visual generative AI. We also conducted additional experiments using name-based lure prompts to evaluate IP infringement on van Gogh and Ghibli styles using the recent GPT-4o model, and on the character "Groot" in Stable Diffusion XL and Stable Diffusion XL Turbo. The results are as follows:

| IP | Model | IP Infringement Rate |
| ---- | ---- | ---- |
| van Gogh style | GPT-4o | 97.2% |
| Ghibli style | GPT-4o | 98.6% |
| Groot | Stable Diffusion XL | 99.0% |
| Groot | Stable Diffusion XL Turbo | 100.0% |

We will add more results in our revised version. **Q2:** How do the authors propose to handle white-box attacks? **A2:** This paper focuses on a threat model where the defender has white-box access to the model, while the attacker only has black-box access, such as through a website or API. This is a practical setting for proprietary, closed-source models like DALL-E 3 and Midjourney, which represent most state-of-the-art models today. Defending open-source models against white-box attacks is beyond the scope of this work and will be explored in future research. **Q3:** Comparison to memorization mitigation methods. 
**A3:** We'd like to clarify that our work addresses a broader issue than memorization mitigation. While memorization papers focus on preventing models from reproducing nearly exact training images, we target IP infringement, including outputs that resemble copyrighted content even without near-exact matches. What these papers consider successful mitigation often still qualifies as infringement under our evaluation. For example, in Figure 3 of Hintersdorf et al., the generated image no longer matches training samples but still clearly violates the IP of DC's "Hawkgirl." We also compare our method with the suggested open-source inference-time memorization mitigation approaches using Stable Diffusion XL and Spider-Man. The results are as follows:

| Method | IP Infringement Rate |
| ---- | ---- |
| Undefended | 76.6% |
| Somepalli et al. | 69.4% |
| Ren et al. | 30.2% |
| Hintersdorf et al. | 43.2% |
| TRIM (Ours) | 5.8% |

The results demonstrate that our method is more effective at mitigating IP infringement. We will add more discussion in our revised version. **Q4:** Comparison to concept erasure methods. **A4:** Existing concept erasure methods often degrade the quality of all outputs, including non-infringing ones. For example, on Stable Diffusion, removing "Spider-Man" with Gandikota et al. [6] results in an LPIPS of 0.23; erasing 100 concepts with UCE [6] leads to 0.30 LPIPS; ConceptPrune [7] increases FID by 16.6% when removing 5 artist styles; and Kumari et al.'s method reduces CLIP scores by ~5% on non-infringing images. In contrast, our method does not influence the quality of non-infringing outputs, which represent the majority of generated content. These methods are also computationally expensive (e.g., ~170 minutes per concept in [6]) and less effective (e.g., 11.2% Spider-Man infringement rate [6] vs. 0.0% with ours). Moreover, their performance worsens as more concepts are erased. Our method handles multiple IPs efficiently without influencing output quality. 
We will elaborate further in the revised version. **Q5:** I believe some papers like [1, 2], related to memorization in diffusion models, have not been discussed. **A5:** Thank you very much for your helpful suggestion. We will add more discussion on the suggested papers related to memorization in diffusion models accordingly. **Q6:** How did the users in your human evaluation detect infringement? More specifically, were they given a 'yes' or 'no' option or a Likert scale? **A6:** In our evaluation, the annotator gives a 'yes' or 'no' option. We will make this clearer in our revised version. **Q7:** Using an LLM to perturb the prompt. **A7:** We conducted experiments using GPT-4o to rewrite and perturb prompts to test its ability to reduce IP infringement. For Spider-Man with Stable Diffusion XL, the infringement rate dropped by only 2.7%. This shows that simply using an LLM to change the prompt is not effective, since the key semantics and meaning in the prompt that lead to IP infringement are still there. --- Rebuttal Comment 1.1: Comment: Thank you for your rebuttal. I will increase my score to 4.
A Reasoning-Based Approach to Cryptic Crossword Clue Solving
Accept (poster)
Summary: This paper presents a multi-stage pipeline approach to solving cryptic crossword clues, focusing on reasoning through the wordplay mechanisms that make these puzzles challenging. The system consists of: 1. A candidate answer generator (fine-tuned Gemma2-9B model) 2. A wordplay suggestion generator (fine-tuned Gemma2-9B model) 3. A formalizer that converts wordplay into verifiable Python code (using either Gemini-Flash or Gemma2-9B) 4. A purpose-built verifier that evaluates the Python code and provides feedback for refinement The key methodological contribution lies in the design of the Formaliser and verification system. This framework systematically decomposes model-generated solutions into verifiable sub-tasks through Python scripts, where each verification component either leverages LLM reasoning (via the `is_synonym` and `is_homophone` functions) or employs traditional retrieval methods (via the `action_type` and `is_abbreviation` functions). This verification-based approach enables interpretable reasoning and provides a mechanism for iterative improvement of solutions. The authors evaluate their system on the Cryptonite benchmark dataset and achieve 32.5% Top-1 accuracy with their Gemini-Flash Formaliser, outperforming GPT-4o (27.6%) on the same test samples. Additionally, they demonstrate that open-source models can achieve competitive performance (29.5%) when used throughout the pipeline. Claims And Evidence: - The paper's claim about providing a verifiable reasoning procedure for cryptic clue solutions is well-supported through examples and is a clear strength of the approach. - The paper's primary claim of achieving state-of-the-art results on the Cryptonite dataset is supported by experimental evidence, but the claimed improvements over GPT-4o (32.5% vs 27.6%) may be within the margin of error (stated as ±3.3% at 200 samples). 
- While the experimental results support the conclusion that fine-tuned open-source models with fewer parameters can achieve competitive performance, the contribution of the Python formalization and verification components is less clearly established. The improvement from the Formaliser over the fine-tuned models is not particularly substantial and may fall within the margin of error, failing to conclusively validate the additional mechanisms' contribution beyond fine-tuning. Methods And Evaluation Criteria: ### Methods: The paper's method (especially the formalisation and verification parts) is derived from observation and emulation of human solver processes for cryptic crossword clue solving. By explicitly modeling how humans parse, reason through, and verify cryptic clues, the authors have designed a specialized verification workflow tailored to this specific task. This systematic decomposition aligns well with the inherent structure of cryptic puzzle solving. ### Evaluation Criteria: The basic metrics are appropriate for the task at hand: - Using the Cryptonite dataset is justified as it represents a standard benchmark in the field - The Top-1 exact match accuracy is an appropriate primary metric for cryptic crossword solving - The "Partial Correctness Metrics" in the appendix provide additional insights into system performance with varying levels of letter hints Theoretical Claims: The paper makes no theoretical claims. Experimental Designs Or Analyses: The paper fails to provide comprehensive timing and computational cost metrics. The formalization process substantially increases the pipeline's complexity, requiring additional model calls, verification steps, and potential rewrites. Without detailed per-step and end-to-end timing data, it is impossible to fairly compare this approach against simpler, more direct baselines. 
While the authors mention total costs under $100 USD, they do not break down the inference time or computational requirements at each stage of their pipeline or for each inference experiment, making it difficult to assess the practical efficiency trade-offs between accuracy gains and increased computational overhead. Supplementary Material: The paper has no supplementary material. Relation To Broader Scientific Literature: Pros: The approach draws on the Draft, Sketch, and Prove frameworks, taking a decomposition-then-formal-verification approach. The analogy is apt, and the paper provides valuable insights into designing verification functions without a formal DSL for verification. To some extent, it is inspiring for addressing key challenges in current AI research in the code and math domains, especially in developing O1-style reinforcement learning training methods where verification is critical. Cons: However, current implementations are limited to relatively restricted task domains. Whether this approach can be generalized to more complex scenarios, especially those where human behavior is not easily observable or where specialized tools and sufficiently powerful verification models are not available, remains an open question. The broader applicability of this verification-centric approach to less structured reasoning tasks warrants further exploration. Essential References Not Discussed: While the paper effectively connects to literature on mathematical reasoning and code verification, there's a potential connection to tool-integrated reasoning frameworks that isn't discussed. The paper frames its approach primarily in relation to Draft, Sketch, and Prove frameworks, but the methodology of using Python as an external verification mechanism for natural language reasoning problems also shares conceptual similarities with tool-integrated reasoning approaches. 
For example, recent work in mathematical problem solving, such as [1], presents frameworks in which language models leverage external tools to enhance reasoning capabilities. [1] Gou, Z., et al. (2024). ToRA: A Tool-Integrated Reasoning Agent for Mathematical Problem Solving. https://arxiv.org/abs/2309.17452 Other Strengths And Weaknesses: ### Strengths: - The paper provides good coverage of cryptic crossword conventions and mechanisms - The step-by-step formalization of cryptic reasoning and the feedback mechanism for refining formalized proofs is well-designed ### Weaknesses: - (Minor) Testing is limited to a single dataset (Cryptonite) rather than exploring transferability to other crossword sources Other Comments Or Suggestions: No other comments. Questions For Authors: No other questions. Code Of Conduct: Affirmed. Overall Recommendation: 2
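To make the formalization idea described in this review concrete, here is a hedged, self-contained sketch in the spirit of the verification scripts the system produces. The two-entry `THESAURUS` and the classic example clue "Returned beer fit for a king (5)" are illustrative assumptions of mine; in the actual system, helpers such as `is_synonym` are backed by retrieval and LLM calls rather than a lookup table.

```python
# Hypothetical stand-in for the system's retrieval/LLM-backed synonym helper:
THESAURUS = {"regal": {"fit for a king"}, "lager": {"beer"}}

def is_synonym(a, b):
    return b in THESAURUS.get(a, set()) or a in THESAURUS.get(b, set())

# Formalized reasoning for the classic clue "Returned beer fit for a king (5)":
answer, definition = "regal", "fit for a king"

assert is_synonym("lager", "beer")     # the wordplay fodder 'beer' gives LAGER
assert "lager"[::-1] == answer         # 'returned' indicates a reversal
assert is_synonym(answer, definition)  # the answer satisfies the definition
assert len(answer) == 5                # enumeration check
print("VERIFIED:", answer.upper())
```

A proposed solution "passes" when its script runs without an `AssertionError`, which is what gives the pipeline an interpretable, checkable reasoning trace rather than a black-box answer.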
Rebuttal 1: Rebuttal: ### Statistical Significance of Improvement over GPT-4o We acknowledge the reviewer's point regarding the margin of error. Since the improvement over GPT-4o on our sampled test set was within the simple margin of error, we performed the Bayesian IRT analysis presented in the paper that suggests a high probability (92%) that Gemini-Flash Formaliser is indeed better than GPT-4o. Beyond numerical improvements, a key contribution is the verifiable reasoning process itself, offering interpretability not available in black-box models. ### Contribution of Formalization/Verification Components We agree that demonstrating the isolated numerical contribution of the formalization and verification steps is challenging. However, the ablation studies (Table 1 - AB lines) clearly show that removing these components significantly degrades performance, indicating they are not merely incremental but essential for the system's success. These steps provide crucial interpretability and enable systematic error correction through feedback, which fine-tuning alone cannot achieve. The focus is not solely on maximizing Top-1 accuracy, but on reasoned solutions. ### Lack of Timing/Cost Metrics We appreciate the reviewer's point about timing and computational cost. Quantitatively comparing proprietary API-based models (like GPT-4o) with local open-source models on FLOPs is problematic due to the unknown parameter count/architecture of the proprietary models. Moreover, the proprietary models are likely run on very capable hardware - so wall-clock timing comparisons also do not make much sense. We mention the total cost was under $100 USD to highlight the feasibility and accessibility of our approach, especially using open-source models. 
That being said, our research prioritized reasoning and interpretability over raw speed: generating multiple candidate answers, followed by multiple wordplay samples, can be framed as inference-time computation (using 9B models) rather than using a large proprietary model. ### Limited Task Domain We recognize the domain-specificity of cryptic crossword solving. However, we argue that it serves as a rigorous and complex testbed for reasoning, requiring multi-faceted language understanding and logic. While the specific DSL is tailored to crosswords, the underlying principles of our approach – decomposition, formalization, and verification with feedback – are intended to be more broadly applicable to other reasoning tasks. Cryptic crosswords, with their clearly defined rules and solutions, allow for precise evaluation and iterative refinement of these principles. ### Missing Reference (ToRA) We thank the reviewer for pointing out the relevance of tool-integrated reasoning frameworks and specifically for suggesting the ToRA paper [Gou et al., 2024]. We agree that our work shares conceptual similarities with this approach, and we will add a discussion of ToRA and cite this paper in the related work section. A key novelty in our approach is the use of Python itself as the "tool" for verification within an NLP task, enabling a flexible and interpretable verification process without relying on pre-defined formal DSLs for the entire reasoning chain. ### Single Dataset Testing (Cryptonite) We acknowledge that testing is primarily on Cryptonite. Appendix A.3 details the rationale for choosing Cryptonite over the Guardian dataset [Rozner et al., 2021], including dataset size; source ('gold standard' Times/Telegraph clues); and the consistent use of Cryptonite's train/val/test splits for focusing on reasoning vs. the Wordplay Dataset. 
We believe our rebuttal addresses the insightful points raised in your review and provides important clarifications regarding the statistical significance of our results, the contribution of the formalization and verification components, the practical considerations of timing and cost, the domain-specificity of our approach, and the connection to tool-integrated reasoning frameworks. We would be grateful if you would consider re-evaluating your rating in light of these responses. Thank you again for your time and constructive feedback, which has been invaluable in improving our work.
Summary: This paper proposes a reasoning-based approach to solving cryptic crossword puzzles, integrating large language models (LLMs) with Python formal verification. The system generates answer candidates, derives wordplay explanations, and translates them into verifiable Python code for validation. It achieves a new state-of-the-art (SOTA) performance on the Cryptonite dataset. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are well-suited for cryptic crossword solving. However, there is no evaluation of the broader application of the proposed method. Theoretical Claims: No Theoretical Claims Experimental Designs Or Analyses: Yes Supplementary Material: No Relation To Broader Scientific Literature: The paper contributes to the intersection of natural language understanding (NLU), reasoning, and programmatic verification within the domain of cryptic crossword solving. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - The approach introduces Python formal verification, ensuring that each solution's reasoning process is transparent and verifiable. By converting wordplay explanations into executable Python code, the system eliminates the "black-box" problem of traditional LLMs. - The study demonstrates that fine-tuned open-source models (Gemma2-9B) can outperform proprietary models like GPT-4o in complex reasoning tasks. This makes the solution cost-effective, locally deployable, and a strong alternative for NLP applications. Weaknesses: - The Python-based verification system has weaknesses, such as bypassing assert statements or generating logically inconsistent proofs. This reduces reliability and may lead to incorrect solutions being accepted. - The approach is tailored for cryptic crossword solving, and its effectiveness in broader NLP reasoning tasks remains unproven. Additionally, the LLM struggles with highly complex or unconventional clues, limiting its generalizability. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer highlighting the identified limitations of our Python-based verification system. To clarify, these shortcomings were not overlooked, but rather explicitly discussed in Section 4.5 ("Known Limitations of the System") to ensure transparency. Presenting these potential 'shortcuts' was a deliberate choice to enhance the paper's rigor and openness, crucial for fostering further research – especially as it may serve as a cautionary note regarding the potential for a Reinforcement Learning loop (in future work). As to the reviewer's point about the challenges posed by complex and unconventional cryptic clues, while the paper doesn't explicitly claim that the LLM struggles with these in a way that fundamentally limits generalizability, we acknowledge that cryptic crosswords, by their very nature, present a spectrum of complexity, and some clues are indeed more challenging than others. Although we haven't explicitly tested generalizability to all broader NLP tasks, we believe the developed techniques – particularly formal verification and iterative refinement – offer valuable insights for improving LLM reasoning in complex NLU scenarios. Future research could fruitfully explore applying these techniques to other reasoning-intensive NLP tasks. We sincerely appreciate the reviewer's valuable feedback, which has allowed us to clarify these key points. Having addressed the verifier limitations and the scope of generalizability, we respectfully request that the reviewer re-evaluate the paper's rating based on this improved understanding of our work.
Summary: The paper presents a reasoning-based system for solving cryptic crossword clues using open-licensed LLMs. It follows a three-step pipeline: (1) generating answer candidates, (2) proposing wordplay explanations, and (3) verifying solutions via a Python-based formalizer. The system outperforms prior methods on the Cryptonite dataset, achieving a Top-1 accuracy of 32.5%, surpassing both rule-based and previous LLM-based approaches. The key contribution is using Python assertions to validate reasoning, improving reliability and interpretability. Claims And Evidence: Most claims are well-supported, particularly the state-of-the-art performance (32.5% Top-1 accuracy) on Cryptonite and the effectiveness of wordplay verification. However, the Python-based verifier has limitations (e.g., bypassing assertions), meaning correctness is not fully ensured. The claim of broader generalization is weak, as the study focuses only on cryptic crosswords. More analysis of failure cases and alternative tasks would strengthen the paper’s broader impact. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are well-suited for cryptic crossword solving. However, there is no evaluation of the boarder application of the proposed method. Theoretical Claims: N/A Experimental Designs Or Analyses: Log probability-based ranking ablations suggest verification matters, but the study lacks detailed error analysis on failure cases. The paper would benefit from qualitative examples of incorrect reasoning that passes verification. The verifier can be bypassed via faulty assertions, but there is no analysis of how often verification fails or of its impact on final accuracy. Supplementary Material: No Relation To Broader Scientific Literature: The paper proposes an LLM-based and program-aided approach to significantly improve machine performance on the cryptic crossword solving task. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
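The bypass weakness this review flags can be made concrete with a small hypothetical example: an interpreter that accepts any "proof" which raises no `AssertionError` will also accept a proof whose assertions are tautologies. The `run_proof` helper below is an illustrative toy of that acceptance criterion, not the paper's actual verifier.

```python
def run_proof(proof_src):
    """Toy verifier: a generated 'proof' passes if executing it raises no
    AssertionError. This acceptance criterion is what the review critiques."""
    try:
        exec(proof_src, {})
        return True
    except AssertionError:
        return False

sound = "answer = 'regal'\nassert answer[::-1] == 'lager'  # real reversal check"
vacuous = "answer = 'wrong'\nassert answer == answer        # checks nothing"

# Both proofs execute without error, so execution success alone
# cannot certify that the reasoning is correct.
print(run_proof(sound), run_proof(vacuous))
```

This is why an analysis of how often vacuous proofs pass, and their impact on final accuracy, would strengthen the paper's reliability claims.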
Rebuttal 1: Rebuttal: In Section 4.5 ("Known Limitations of the System"), we chose to explicitly discuss the limitations of our Python-based verification system to ensure transparency. However, these potential 'shortcuts' are mainly a cautionary note regarding the potential for a Reinforcement Learning loop (in future work), where we would expect an RL system to exploit them, whereas the current system only rarely showed such behaviour (which is why these loopholes were not eliminated during development). On the other hand, we would be delighted to add a section containing a qualitative analysis of the key false positive/negative failure modes of the system. Notably: * The headline success rate is bounded above by the initial candidate answer generation process. If the system cannot guess the answer in its top-k (k=20 here), the remaining process is doomed. As shown in Figure 7a, even with higher top-k, this puts an upper bound on performance that is well below 100% correct. Having better candidate answer generation would be beneficial - and this would directly feed through our verification process (which is a step that the proprietary models do not benefit from, and gives us human-interpretable reasoning for each solution) * A significant source of false negatives is the `is_synonym` function, which relies on a sequence of steps: first we attempt a look-up in an open-source thesaurus, then in a dataset of 'regular crossword answers'. But the final fall-back is asking an LLM whether given phrases are synonyms. While the first two steps may vote positively (for easy matches), it is common in cryptic clues that the `definition` and the `answer` are more distantly related than in regular crosswords. For instance, in Appendix A.1.7, we have the true answer `UNDERMINED` being defined by `damaged`. 
This would likely be too distant to be reasonable for a regular crossword, but the strength of the wordplay (the answer being literally given in the clue) is confirmation enough to satisfy solvers. Setting this 'synonym distance hurdle' is an ongoing challenge. Clearly, this analysis deserves more space than was available in the submitted version of the paper, and would provide valuable additional insight to those interested in the issues being tackled. Thank you for highlighting the need for this qualitative analysis.
Summary: This paper presents a system for solving cryptic crossword clues using a collection of fine-tuned and ICL-prompted open-weight language models, as well as a custom Python-based domain-specific interpreter. The proposed system first samples a set of candidate solution words from a fine-tuned proposal model. Another fine-tuned model is then used to sample potential wordplay decompositions of the answer candidates. An ICL prompted model then translates these wordplay breakdowns into a set of Python assertions containing a mixture of vanilla expressions and special functions (e.g. `is_synonym(a, b)`, `is_homophone(a, b)`) backed by additional small LM calls. Two rounds of refinement are conducted on this translation based on interpreter feedback. The highest-frequency answer with a wordplay breakdown that passes its translated assertions is returned as the answer to the clue (or the highest-frequency answer if no wordplay passes verification). The authors evaluate their method on the Cryptonite dataset, containing cryptic crossword clues from the Times and Telegraph UK newspapers. They compare their method to several single-stage few-shot prompted LMs, including GPT-4o and Gemini 1.5 Flash. On the Hard subset of the test split, their method outperforms all considered baselines. Claims And Evidence: The authors claim that their method achieves SOTA results on Cryptonite. Section 2.2.2 does mention that the lack of reasoning capabilities presented a problem for models prior to 2024, but makes no mention of the new generation of LLMs post-trained for reasoning (the only mention I could find is a namedrop on line 207), begging the question of how the proposed method would perform against DeepSeek R1, Gemini Flash Thinking, OpenAI O1 or Claude 3.7 Sonnet. Methods And Evaluation Criteria: Yes, the methods and benchmark are appropriate. Theoretical Claims: N/A Experimental Designs Or Analyses: The main comparison is sound. 
I also checked the evaluation setup for the partially-filled answer setting ($\S$3.6, A.3.5.7), and nothing stood out to me as improper. Supplementary Material: N/A Relation To Broader Scientific Literature: The authors cite the prior rule-based SOTA system for cryptic crossword solving. Their general scheme of checking "loose" answer proposals from an LM has gained a lot of steam recently in the guise of "reasoning models", LMs post-trained with CoT+RL which can exhibit self-verification behavior. So far, public reasoning models have not been developed that are trained end-to-end to take advantage of autoformalization; such a direction is very promising. This work definitely serves as positive evidence of the power of this kind of architecture, even using small models and a semi-informal format. Essential References Not Discussed: I think it might be worth discussing the connection to prior LM+autoformalization approaches - SatLM (Ye et al, 2023) comes to mind, especially since that work also deals with the LM language familiarity issue by using Python as an intermediate language to decode constraints from the model. Other Strengths And Weaknesses: I thought the authors did a good job of motivating the problem area as a useful exercise, and in introducing the problem format, which is admittedly confounding at first glance. Parts of the paper were a bit hard to follow or lacked important detail, namely the method description ($\S$3); while Fig. 1 shows a clear overview of the modeling flow, Section 3 does not contain a complete text description of the pipeline; it contains a stepwise summary of the human strategy that inspired the method, and subsections 3.1-3.5 detail stages of the process and design considerations for those stages, but there is no concise description of the overall procedure, which would be helpful to lead us through the subsections. Other Comments Or Suggestions: No typos stood out to me during my reading. 
See "Other strengths and weaknesses" for a suggested change to Section 3. Questions For Authors: One key question that I was unable to find in the text (although I may have missed it) is: How many wordplay/definition suggestions are generated per answer candidate? Parts of the text allude to checking multiple wordplay options, but I'm not sure if this is just due to considering multiple answer candidates. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### Answer to "How many wordplay/definition suggestions are generated per answer candidate?" For each clue, we generate 20 candidate answers (the cumulative probability graph for the upper bound of success after this is given in Figure 7a). These are then deduplicated, and we create 5 definition and wordplay examples for each candidate answer (we previously experimented with 10 samples, but the difference was marginal). Examples of wordplay for correct and incorrect candidate answers are given in Section 4.2. From a human point of view, the wordplay explanations for incorrect candidate answers are clearly nonsense - but (as commonly seen using LLMs) the models tend to 'approve' of their own outputs. Therefore, proving out the reasoning in a more concrete way is essential, so we use our novel 'formalization/verification' process on all of these outputs. Thank you for the question: we will certainly add this important experimental detail to the paper, as well as expanding the process flow explanation in the text of Section 3 (rather than relying so heavily on Figure 1).
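Combined with the selection rule summarized by the last reviewer (highest-frequency candidate with a wordplay that passes verification, falling back to raw frequency), the sampling budget described in this answer amounts to roughly the loop below. The three callables are hypothetical stand-ins for the two fine-tuned generators and the formalize-and-verify stage.

```python
from collections import Counter

def solve(clue, generate_candidates, generate_wordplays, verify,
          n_cand=20, n_wordplay=5):
    """Return the highest-frequency candidate with at least one verified
    wordplay; fall back to the highest-frequency candidate overall."""
    samples = generate_candidates(clue, n_cand)   # e.g. 20 sampled answers
    counts = Counter(samples)                     # frequencies before dedup
    verified = [cand for cand in counts          # iterate deduplicated answers
                if any(verify(clue, cand, wp)
                       for wp in generate_wordplays(clue, cand, n_wordplay))]
    pool = verified or list(counts)
    return max(pool, key=counts.__getitem__)
```

For instance, with stub generators where "b" is sampled more often but only "a" has a verifiable wordplay, `solve` returns "a"; if nothing verifies, it falls back to the most frequent "b".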
Learning Single Index Models with Diffusion Priors
Accept (poster)
Summary: The work addresses the problem of signal reconstruction in semi-parametric single-index models, where the link function is unknown. They propose a new method relying on parametrizing the signal prior by a diffusion model. Building on the observation that the measurements may be related to noisy versions of the signal, the related noise variance (and corresponding diffusion time) is learned in the method, and the generator of the diffusion process and its inverse are used to yield the final estimator. Theoretical learning guarantees are derived. Numerical experiments on the FFHQ and ImageNet datasets are provided, revealing the competitiveness of the method compared to existing schemes. Claims And Evidence: The method is benchmarked against concurrent schemes, in terms of performance metrics and compute. Reconstructed images by the evaluated methods are further illustrated in Figs. 2,3. Methods And Evaluation Criteria: To the best of my awareness, the benchmark datasets considered were also selected in previous works on the topic to evaluate the methods. For instance, FFHQ was used in (Meng and Kabashima, 2023), to benchmark the QCS-SGM method. Therefore, they seem relevant for the evaluation, and ensure comparability with some of the previous works. Theoretical Claims: I did not check carefully the proofs of the theoretical claims. To strengthen the paper, it would be beneficial to include a version of Theorem 2 for the SIM-DMS algorithm (28), if feasible. For now, only the SIM-DMIS algorithm is endowed with the theoretical guarantees. Besides, Theorem 2 does not bear entirely on the SIM-DMIS estimator. Would it be feasible to combine (24) and (29) to reach an end-to-end guarantee? Experimental Designs Or Analyses: I did not check the details of the experimental designs. Supplementary Material: I did not read in detail the supplementary material. 
Relation To Broader Scientific Literature: Although I have limited familiarity with the literature, it is my understanding that compared to key previous works such as (Meng and Kabashima, 2023) or (Zhang et al., 2024), the main novelty of the method is its ability to accommodate unknown non-linear link functions. It also displays competitive performance with previously proposed methods. However, it is possible that there exist other methods which I may be missing. As a minor comment, the related work section could be further clarified, notably the last paragraph of page 2. Currently, it is unclear which works address linear or non-linear settings, and in which works the link function is (un)known. Essential References Not Discussed: I did not identify any essential missing reference. Other Strengths And Weaknesses: The paper is well written, and intuition is provided in support of the method. I have a few clarifying questions, which I included in the Theoretical Claims section. I am overall in favor of acceptance. Other Comments Or Suggestions: I do not have further comments or suggestions. Questions For Authors: Some clarifying questions are detailed in previous sections. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your recognition of this paper and the helpful comments and suggestions. Our responses to the main concerns are as follows. All citations refer to the reference list in the main document. (**To strengthen the paper, it would be beneficial to include a version of Theorem 2 for the SIM-DMS algorithm (28), if feasible. For now, only the SIM-DMIS algorithm is endowed with the theoretical guarantees. Besides, Theorem 2 does not bear entirely on the SIM-DMIS estimator. Would it be feasible to combine (24) and (29) to reach an end-to-end guarantee?**) We are grateful to the reviewer for the valuable suggestions. In the revised paper, we will include a version of Theorem 2 for SIM-DMS. Eq. (29) within the statement of Theorem 2 essentially says that for any sample drawn from the marginal distribution $q_t$, when we perform the inversion process from time $t$ to $T$, followed by the sampling process from time $T$ to $\epsilon$, the resulting vector will be reconstructed to closely approximate the corresponding ground-truth data from $q_0$ (and lie on the same ODE trajectory as the original sample). Additionally, Eq. (24) (along with Eq. (25)) essentially indicates that (the scaled version of) $\frac{\mathbf{A}^T\mathbf{y}}{m}$ approximately follows the marginal distribution $q_{t^*}$ for some $t^*$. If we make the (relatively strong) assumption that (the scaled version of) $\frac{\mathbf{A}^T\mathbf{y}}{m}$ precisely follows $q_{t^*}$ for some $t^*$, then combining Eqs. (24) and (29) provides an end-to-end guarantee. We leave the end-to-end guarantee in the case where Eq. (25) only approximately holds to future study. (**As a minor comment, the related work section could be further clarified, notably the last paragraph of page 2. 
Currently, it is unclear which work address linear or non-linear settings, and in which works the link function is (un)known.**) Among the methods described in the last paragraph of page 2, MCG (Chung et al., 2022b) and $\Pi$GDM (Song et al., 2023) are mainly designed for the linear setting. Additionally, $\Pi$GDM can be extended to certain nonlinear settings (where the link function is known) by leveraging a combination of pseudoinverse operations and nonlinear transformations. DPS (Chung et al., 2023b) and DAPS (Zhang et al., 2024) are applicable in nonlinear settings as well, also primarily under the assumption that the link function is known. We will clarify these in the revised version. --- Rebuttal Comment 1.1: Comment: I thank the authors for clarifying these points and promising the revisions. The answers seem to corroborate my current evaluation and understanding of the paper, and I maintain my score.
Summary: The authors propose a diffusion model sampling scheme to reconstruct an unknown signal from measurements, assuming a single-index model with a known compressed sensing matrix and noise distribution but unknown and potentially nondifferentiable link function. The approach leverages a property of the link function to find an intermediate time at which to begin inversion, thereby making inversion much more efficient than the naïve approach. The authors provide a theoretical analysis and numerical experiments on image datasets, comparing to state-of-the-art diffusion inverse solvers. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I was unable to check proof correctness. Experimental Designs Or Analyses: Yes, for the 1-bit compressed sensing experiments on FFHQ and ImageNet. It is strange that DPS and DAPS with knowledge of the link function (i.e., DPS-N and DAPS-N) perform worse than they do without knowledge of the link function (i.e., DPS-L and DAPS-L). This is true for all the results in Tables 1 and 2 except DPS in Table 2. Supplementary Material: No. Relation To Broader Scientific Literature: One perspective is that this work extends the work of Meng and Kabashima 2022 to handle unknown link functions. Another perspective is that it adds to the vast literature of diffusion inverse solvers (including DPS, DAPS, and Pi-GDM) with a simple method for a particular type of inverse problem, namely one where the measurements are an unknown/nondifferentiable transformation of compressed sensing measurements. Essential References Not Discussed: No. Other Strengths And Weaknesses: [Strengths] The proposed idea is quite simple and makes for an efficient algorithm. It leverages a property of the link function (Eq. 20), so the simple algorithm has some theoretical inspiration behind it. [Weaknesses] The method takes several shortcuts without strong theoretical justifications for them. For example, the leap from Eq. 27 to Eq. 
28 is rather tenuous if it relies on the assumption that the reverse diffusion process will effectively remove the noise from the naïve estimate. Overall, it is difficult to tell how theoretically justified the method is. There is little intuitive explanation of Theorem 2 (I could not find a Theorem 2, by the way), making it difficult to judge its significance and usefulness. And there is no discussion about whether this would provide samples from a Bayesian posterior. The presentation of the paper is rather poor. The authors should include more intuitive explanations. For example, what exactly are the properties in Eqs. 20 and 21 saying? What is the intuition behind Eq. 25 and why is it different from Eq. 22? Also, the background on diffusion models is too dense and contains a lot of unnecessary information. I recommend sticking to the equations and ideas that are essential for understanding this paper. It is strange that DPS-N and DAPS-N perform so poorly in the numerical experiments. I am curious why the authors think they would perform so poorly compared to SIM-DMS and SIM-DMIS. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your helpful comments and suggestions. Our responses to the main concerns are as follows. All citations refer to the reference list in the main document. (**It is strange that DPS and DAPS with knowledge of the link function (i.e., DPS-N and DAPS-N) perform worse than they do without knowledge of the link function (i.e., DPS-L and DAPS-L).**) In 1-bit measurements, even in the noiseless scenario, the link function $f(x) = \mathrm{sign}(x)$ is non-differentiable at $x = 0$ (though as mentioned in Footnote 9, PyTorch can still enforce automatic differentiation). The differentiability of $f$ matters significantly. As mentioned in Section 1.1, under the strong differentiability assumption on the link function, (Wei et al., 2019) can handle general non-Gaussian sensing vectors. However, most SIMs research assumes Gaussian sensing vectors, like the seminal work (Plan & Vershynin, 2016) and many follow-up studies such as (Liu & Liu, 2022). Without differentiability, extending these to non-Gaussian cases is difficult. The non-differentiability also poses challenges for DPS-N and DAPS-N as they rely on $f$ in gradient-based updates. This can lead to inaccurate gradients, ultimately resulting in subpar performance. (The DAPS supplementary material suggests using Metropolis-Hastings for non-differentiable forward operators, but it is also mentioned to have inferior performance and low efficiency, with results only in the supplementary material.) In contrast, DPS-L and DAPS-L do not utilize $f$, thus allowing for relatively better performance (note that as demonstrated in the work (Plan & Vershynin, 2016), SIMs can be transformed into linear measurement models with unconventional noises). (**There is little intuitive explanation of Theorem 2 (I could not find a Theorem 2, by the way). 
And there is no discussion about whether this would provide samples from a Bayesian posterior.**) Theorem 2 essentially says that for any sample drawn from the marginal distribution $q_t$, when we perform the inversion process from time $t$ to $T$, followed by the sampling process from time $T$ to $\epsilon$, the resulting vector will be reconstructed to closely approximate the corresponding ground-truth data from $q_0$ (and lie on the same ODE trajectory as the original sample). In the revised version, we will incorporate this intuitive explanation in the paragraph preceding Theorem 2. We are unsure what the reviewer means by “I could not find a Theorem 2”. We guess that the reviewer might be referring to the proof of Theorem 2, which is available in Appendix B.1. The approaches presented in our work are not directly related to sampling from a Bayesian posterior. While it would be interesting to discuss whether our approaches could yield samples from a Bayesian posterior, we believe that such an exploration is beyond the scope of the current work. (**The authors should include more intuitive explanations. For example, what exactly are the properties in Eqs. 20 and 21 saying? What is the intuition behind Eq. 25 and why it’s different from Eq. 22? Also, the background on diffusion models is too dense and contains a lot of unnecessary information.**) The condition in Eq. (20) is a classic and crucial condition for SIMs. For example, it is (albeit implicitly) assumed in the seminal work (Plan & Vershynin, 2016) and in subsequent research that builds upon it. If this condition fails to hold, specifically when $\mu = 0$, the recovery of $\mu \mathbf{x}^*$ as in (Plan & Vershynin, 2016) and in our Eq. (24) becomes meaningless. We follow (Liu & Liu, 2022) to assume the condition in Eq. 
(21), which generalizes the assumption that $f(\mathbf{a}^T\mathbf{x}^*)$ is sub-Gaussian (which is satisfied by quantized measurement models), and accommodates more general nonlinear measurement models, such as cubic measurements with $f(x)=x^3$ and their noisy counterparts. The intuition underlying Eq. (25) is that (the scaled version of) $\frac{\mathbf{A}^T\mathbf{y}}{m}$ is approximately the ground-truth signal $\mathbf{x}^*$ (drawn from the target data distribution $q_0$) with an added zero-mean Gaussian noise component. Then, considering the forward process of diffusion models, which gradually adds Gaussian noise to the ground-truth data, performing a full inversion (from time $\epsilon$ to $T$) and then a full sampling from time $T$ to $\epsilon$ as in Equation (22) is inappropriate. As mentioned in the paragraph below Eq. (22), this approach implicitly (and wrongly) assumes that $\frac{\mathbf{A}^T\mathbf{y}}{m}$ approximately follows the target data distribution $q_0$ (without taking additive zero-mean Gaussian noise into account). In the revised version, we will incorporate the above intuitive explanations. Additionally, we will streamline the background on diffusion models, focusing solely on the equations and concepts that are fundamental to understanding the core ideas presented in this paper.
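The intuition behind Eq. (25) above — that (a scaled version of) $\frac{\mathbf{A}^T\mathbf{y}}{m}$ is approximately the ground-truth signal plus zero-mean Gaussian noise — can be illustrated with a small numerical sketch. This illustrates the general single-index-model fact (in the spirit of Plan & Vershynin, 2016), not the authors' code; the 1-bit link $f = \mathrm{sign}$, the dimensions, and all variable names are chosen here for concreteness:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 20000                      # signal dimension, number of measurements

x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)      # unit-norm ground-truth signal

A = rng.standard_normal((m, n))       # Gaussian sensing matrix
y = np.sign(A @ x_star)               # 1-bit measurements, link f = sign

est = A.T @ y / m                     # simple linear estimator
est /= np.linalg.norm(est)            # scale is unidentifiable; compare directions

cosine = float(est @ x_star)
print(cosine)                         # close to 1.0 for large m
```

As $m$ grows, the direction of $\mathbf{A}^T\mathbf{y}/m$ concentrates around $\mathbf{x}^*$, with the residual behaving like the zero-mean Gaussian perturbation that motivates starting the inversion at an intermediate diffusion time rather than at $\epsilon$.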
Summary: In this manuscript, the authors address a notable shortcoming in current signal recovery techniques based on diffusion models: most existing methods either concentrate on narrowly defined reconstruction tasks or fail to handle nonlinear measurement models with discontinuous or unknown link functions. To tackle this issue, the authors propose an efficient reconstruction strategy that requires only a single round of unconditional sampling and partial inversion of the diffusion models. Their theoretical analysis and experimental evaluations collectively verify the effectiveness of this approach. Claims And Evidence: Yes. Methods And Evaluation Criteria: FID is widely recognized as a key evaluation metric in diffusion models. Why wasn’t it employed here? Theoretical Claims: I have reviewed the "Setup and Approaches" section and did not identify any issues. Experimental Designs Or Analyses: I have carefully reviewed the experimental results and noticed that the SIM-DMS method achieves the second-best performance with only 50 NFE, whereas SIM-DMIS requires 150 NFE. To ensure a fair and comprehensive comparison, I suggest conducting additional experiments, specifically comparing SIM-DMIS with 50 NFE and SIM-DMS with 150 NFE. This would help better understand the relative performance under equivalent NFE. Supplementary Material: Yes, I have carefully reviewed the additional experimental results. Relation To Broader Scientific Literature: This work extends the approach of QCS-SGM by addressing its limitations. While QCS-SGM either focuses on specific reconstruction problems or cannot effectively handle nonlinear measurement models with discontinuous or unknown link functions, the proposed method overcomes these challenges. Moreover, the proposed method achieves more accurate reconstructions with significantly fewer neural function evaluations (NFEs) compared to QCS-SGM. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. 
The paper is clearly structured and well-organized. 2. The experimental evaluations presented are comprehensive. 3. The theoretical derivations provided are rigorous and complete. Weaknesses: 1. While the SIM-DMS method achieves competitive performance using only 50 neural function evaluations (NFEs), the SIM-DMIS approach requires 150 NFEs. To ensure a fair comparison, the authors are encouraged to perform additional experiments by evaluating SIM-DMIS with 50 NFEs and SIM-DMS with 150 NFEs. This would provide a clearer understanding of the relative performance of both methods under equivalent NFEs. Other Comments Or Suggestions: 1. It would be beneficial to include additional comparisons highlighting the reconstruction speed of the proposed method relative to existing approaches. 2. Some terms in the references need to maintain consistency with the original papers, particularly regarding capitalization. For instance, on line 500, "ImageNet" should be capitalized exactly as in the original source. Please carefully check and correct all similar cases to ensure accurate citation formatting. Questions For Authors: FID is widely recognized as a key evaluation metric in diffusion models. Why wasn’t it employed here? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your recognition of this paper and the helpful comments and suggestions. Our responses to the main concerns are as follows. (**FID is widely recognized as a key evaluation metric in diffusion models. Why wasn’t it employed here?**) We carry out additional experiments to report the FID. The results are presented in Table B1 below. Given the time constraint during the rebuttal period, and following the work for DAPS (Zhang et al., 2024), we compute the FID using a set of 100 validation images. The FID results for QCS-SGM, DPS, and DAPS are not included at this stage as their calculation is relatively time-consuming. However, in the revised version, we will incorporate the FID results for all methods. (**To ensure a fair and comprehensive comparison, I suggest conducting additional experiments, specifically comparing SIM-DMIS with 50 NFE and SIM-DMS with 150 NFE.**) Thanks for the insightful comment. We have conducted simple ablation studies for SIM-DMS and SIM-DMIS on the CIFAR-10 dataset (mentioned in Footnote 10, and detailed in Appendices F and G), and the results show that further increasing the NFE for SIM-DMS does not lead to substantial performance enhancements. We also perform the suggested experiments on the FFHQ dataset, and summarize the additional results in the following table. The results also indicate that under the same NFEs, SIM-DMIS consistently outperforms SIM-DMS across all metrics.

| Method   | NFE | PSNR         | SSIM        | LPIPS       | FID    |
|----------|-----|--------------|-------------|-------------|--------|
| SIM-DMS  | 50  | 17.14 ± 2.41 | 0.44 ± 0.07 | 0.48 ± 0.05 | 105.04 |
| SIM-DMS  | 150 | 17.72 ± 2.63 | 0.46 ± 0.08 | 0.48 ± 0.06 | 95.52  |
| SIM-DMIS | 50  | 18.78 ± 3.09 | 0.58 ± 0.10 | 0.41 ± 0.08 | 89.47  |
| SIM-DMIS | 150 | 19.87 ± 2.77 | 0.60 ± 0.09 | 0.37 ± 0.05 | 76.21  |

Table B1: Quantitative comparison of SIM-DMS and SIM-DMIS on the FFHQ dataset. 
(**It would be beneficial to include additional comparisons highlighting the reconstruction speed of the proposed method relative to existing approaches.**) We conduct additional experiments to measure the reconstruction speed of our method, and the results are presented in the table below. The reported inference time refers to the average reconstruction time for 10 validation images from the FFHQ dataset. All of these experiments are executed on a single NVIDIA GeForce RTX 4090 GPU. The results indicate that SIM-DMS and SIM-DMIS exhibit significantly faster reconstructions when compared to the competing methods (we do not compare with QCS-SGM since it is very time-consuming). We will include the comparisons with respect to reconstruction speed in the revised version.

| Method   | NFE  | Inference Time (s) |
|----------|------|--------------------|
| DPS-N    | 1000 | 142                |
| DPS-L    | 1000 | 142                |
| DAPS-N   | 1000 | 160                |
| DAPS-L   | 1000 | 160                |
| SIM-DMS  | 50   | 1.96               |
| SIM-DMIS | 150  | 5.66               |

Table B2: Comparisons of reconstruction speed on the FFHQ dataset. (**Some terms in the references need to maintain consistency with the original papers, particularly regarding capitalization.**) In our revised version, we will carefully check all the references and correct all the inconsistent cases to ensure accurate citation formatting.
Summary: This paper proposes a novel method for reconstructing images from measurements obtained through a nonlinear compressed sensing model. The degradation model consists of a measurement matrix, an unknown and potentially discontinuous nonlinear element-wise link function, and additive Gaussian noise. The proposed reconstruction algorithm begins by initializing the estimate using the measurement vector, applying the transposed measurement matrix, and applying an empirically tuned normalization. The final reconstruction is achieved using a pre-trained diffusion model (DM). The method involves partial DM inversion followed by DM sampling, where the inversion start time is determined based on the norm of the DM inversion initialization. The authors provide a theoretical analysis by proving a theorem that offers an upper bound on the distance between the reconstructed image and the desired image under specific conditions. Main findings: - The authors establish an upper bound for the distance between the reconstructed and the desired images. - Experimental results demonstrate that the proposed approach outperforms existing methods in 1-bit and cubic measurement scenarios. - The authors show that the combination of partial inversion and sampling yields better reconstruction quality compared to sampling alone or full inversion followed by sampling. ## Update After Rebuttal I thank the authors for their detailed responses. I respectfully disagree with the authors' assumption that the constant $C$ is fixed and independent of the number of inversion steps (i.e., the number of times the denoiser is applied). This assumption leads to an unrealistic conclusion that increasing the number of inversion steps indefinitely would drive the upper bound on the reconstruction error to zero, thereby implying perfect reconstruction. 
If this were the case, the authors should demonstrate in the experimental section that the reconstruction error consistently decreases toward zero as the number of inversion steps increases. A more reasonable and realistic assumption is that the denoiser has a fixed Lipschitz constant **per activation**, in which case the overall constant $C$ in the bound would grow with the number of inversion steps. Furthermore, in contexts involving bounded quantities, such as image pixels constrained within the range $[0, 1]$, expressing an upper bound using capital-O notation such as $O(1)$ is inappropriate. Instead, the upper bound should be stated explicitly as $\text{distance} \le C$, where $C$ is a meaningful, domain-aware constant (in the case of image pixels, less than 1). As noted in my original review, proving that the **per-pixel** distance between the reconstructed and desired images is $O(1)$, which allows for an arbitrary constant, potentially even exceeding 1, does not yield a meaningful or non-trivial insight in this context. Given the concerns outlined above, I maintain my original recommendation to reject the manuscript. Claims And Evidence: I think the claims made in the submission are supported by evidence. Methods And Evaluation Criteria: In my opinion, yes. Theoretical Claims: I have a concern regarding the theoretical part of the manuscript. The upper bound presented in the main theorem appears trivial and lacks practical significance. The theorem states that: $\|x - x_r\|_2 = O(\sqrt{n}(h^1 + Lh^2))$, where $x$ is the desired image, $x_r$ is the reconstructed image, $n$ is the image dimension, $L$ is a Lipschitz constant, and $h^1$ and $h^2$ are diffusion model parameters. This result essentially implies: $\|x - x_r\|_2 \le Ch\sqrt{n}$, where $C$ is a positive constant and $h = h^1 + Lh^2$. However, the image pixels are typically bounded within the range [0, 1]. 
Therefore, the maximum possible Euclidean distance between any two images is already bounded by $\sqrt{n}$, representing the distance between a completely black and a completely white image. Consequently, the upper bound derived in the theorem does not provide any meaningful or non-trivial insight. If the pixels were unbounded, the theorem might offer valuable insights. However, under the current setting, the theoretical contribution seems to lack practical relevance. Experimental Designs Or Analyses: In my opinion, the experiments presented in the manuscript are sound. Supplementary Material: I have reviewed the appendix. Relation To Broader Scientific Literature: The key contributions of the manuscript are related to the literature about image reconstruction using diffusion models. Essential References Not Discussed: I did not come across related works that are essential for understanding the key contributions of the paper but are not currently cited. Other Strengths And Weaknesses: Since the theoretical analysis is a significant component of the manuscript's contribution, I recommend rejecting the manuscript. Other Comments Or Suggestions: I have no further comments. Questions For Authors: I have no further questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the feedback. We are pleased that the claims in our submission have been recognized as "supported by evidence" and our experiments have been recognized as "sound". Regarding your major concern about the practical relevance of Theorem 2, our responses are as follows: - For practical relevance, the parameter $h_{\max}=\max_{i \in [N]} \big(\lambda_{t_i}-\lambda_{t_{i-1}}\big)$ in Eq. (29) plays an important role. When the number of sampling/inversion steps is large, for instance, on the order of $10^2$ or $10^3$, $h_{\max}$ is approximately $10^{-2}$ or $10^{-3}$ (if using typical uniform $\lambda$ steps). Therefore, when the number of sampling/inversion steps is sufficiently large, in the upper bound $C h \sqrt{n}$ illustrated by the reviewer, the value of $h$ will be much smaller than $1$ (e.g., $h = 0.001$; note that $C$ is a positive constant of order $\Theta(1)$). This makes the upper bound practically meaningful when image pixels are bounded within the range $[0,1]$. - The proof of Theorem 2 (see Appendix B.1) is built upon the proof for Theorem 3.2 in the popular work for DPM-Solver (Lu et al., 2022a) (note that the upper bound in (Lu et al., 2022a, Theorem 3.2) is for each individual pixel, and thus their bound does not have the $\sqrt{n}$ factor), which is in turn based on classic analyses of local truncation error for numerical ODE solvers, such as those presented in Section 5.6 of (Burden & Faires, 2005). And our Theorem 2 bears similar practical relevance to the theoretical findings presented in these works. Richard L. Burden and J. Douglas Faires, Numerical Analysis, 8th edition, Thomson/Brooks/Cole, 2005. --- Rebuttal Comment 1.1: Comment: According to the definition provided in the manuscript [*], Section 1.3 Notation, lines 114-117, is it correct to conclude that any arbitrary constant $C$, no matter how large, for example, $C = 1000$, is considered to be of order $O(1)$? 
[*] "Given two sequences of real values $\{a_i\}$ and $\{b_i\}$, we write $a_i = O(b_i)$ if there exists an absolute constant $C_1$ and a positive integer $i_1$ such that for any $i > i_1$, $|a_i| \le C_1 b_i$." --- Reply to Comment 1.1.1: Comment: Thank you for the update. Our responses are presented as follows. All citations refer to the reference list at the end of these responses. The implied constant $C$ within the $O(\cdot)$ term in our Eq. (29) is similar to the one in the $O(\cdot)$ term stated in (Lu et al., 2022, Theorem 3.2). This $C$ is a fixed positive constant that depends on the pre-trained neural network model; it is not a variable parameter that can grow arbitrarily large. In contrast, $h$ is a variable parameter that can approach zero. The upper bound $C h \sqrt{n}$ can be interpreted as follows: Given $\varepsilon \in (0,1)$, to achieve pixel-wise $\varepsilon$-accuracy, $h$ only needs to be smaller than $\frac{\varepsilon}{C}$. For instance, if $C$ is fixed at $1000$ (and remains constant thereafter) and $\varepsilon=0.1$, $h$ only needs to be smaller than $0.0001$. Although this may seem to demand that $h$ be very small (or the number of sampling/inversion steps be very large), we note that upper bounds similar to ours are prevalent in the theoretical analysis of diffusion models. Examples include the bound in (Lu et al., 2022) and those in recent theoretical works such as (Chen et al., 2023a; Chen et al., 2023b; Chen et al., 2023c; Li et al., 2024). For a specific example, to achieve $\varepsilon$-accuracy in terms of the total variation distance, (Li et al., 2024, Theorem 1) requires the number of steps to exceed $C\big(\frac{n^2}{\varepsilon} + \frac{n^3}{\sqrt{\varepsilon}}\big)$, where $C$ is a fixed positive constant and some minor logarithmic terms have been omitted. In practical applications, the data dimension $n$ is often very large. For instance, for the FFHQ 256x256 dataset, $n = 256\times 256 \times 3 = 196608$. 
This example directly follows the bound in (Li et al., 2024), and similarly implies a very large number of sampling steps in high-dimensional settings, indicating that such bounds have been widely accepted in the research area of theoretical analyses of diffusion models. References:
[1] Lu et al. "DPM-Solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps." NeurIPS, 2022. [1354 citations]
[2] Chen et al. "Sampling is as easy as learning the score: theory for diffusion models with minimal data assumptions." ICLR, 2023. [325 citations]
[3] Chen et al. "Improved analysis of score-based generative modeling: User-friendly bounds under minimal smoothness assumptions." ICML, 2023. [177 citations]
[4] Chen et al. "The probability flow ODE is provably fast." NeurIPS, 2023. [108 citations]
[5] Li et al. "Towards non-asymptotic convergence for diffusion-based generative models." ICLR, 2024. [92 citations in total for two versions of the work as shown in the first author’s Google Scholar page]
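For concreteness, the per-pixel $\varepsilon$-accuracy reading of the bound $\|x - x_r\|_2 \le C h \sqrt{n}$ discussed in this exchange can be written as a short chain of inequalities (this is just a restatement of the argument above, not an additional result):

```latex
\frac{\|x - x_r\|_2}{\sqrt{n}} \;\le\; C\,h \;\le\; \varepsilon
\qquad \text{whenever} \qquad h \;\le\; \frac{\varepsilon}{C},
```

so with $C = 1000$ and $\varepsilon = 0.1$, any $h \le 10^{-4}$ suffices, matching the numbers given in the reply.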
Locate-then-edit for Multi-hop Factual Recall under Knowledge Editing
Accept (poster)
Summary: This paper extends the *locate-then-edit* approach of model editing proposed by [Meng et al, 2023](https://arxiv.org/pdf/2202.05262) to multi-hop factual recall tasks. During the localization experiments the authors find that on multi-hop factual recall tasks, the LM retrieves implicit subject information in the *deeper* MLP layers, which the paper claims is the key reason why existing knowledge editing methods targeting shallower MLP layers do not generalize for multi-hop factual reasoning tasks. With this finding, the authors propose IFMET, a new 2-step knowledge editing approach that edits shallower MLP layers first for single-hop fact recall tasks, and then edits deeper MLP layers for multi-hop factual recall tasks. ## update after rebuttal I thank the authors for their detailed response to my questions. I would like to keep my score as is. Claims And Evidence: I have several concerns on the localization experiments (Section 3.1) and evaluation of IFMET. Please see my questions. Methods And Evaluation Criteria: The authors use MQuAKE [Zhong et al, 2024](https://arxiv.org/pdf/2305.14795), which is a recognized benchmark for multi-hop factual recall tasks, to evaluate their method. However, I have some concerns about the evaluation methodology. See questions. Theoretical Claims: The paper is empirical in nature, and does not contain any strong theoretical claims. Experimental Designs Or Analyses: I think using Logit Lens as a tool to identify key locations/states in the LM computation has a systematic bias towards the later layers. See questions for details. Supplementary Material: I have read parts of the Appendix. Relation To Broader Scientific Literature: Previous works investigating factual recall mechanisms in LMs attributed knowledge retrieval (also known as enrichment, detokenization, ...) 
to MLPs in shallower layers, at the last subject token position ([Meng et al, 2023](https://arxiv.org/pdf/2202.05262), [Geva et al., 2023](https://arxiv.org/abs/2304.14767)). This paper extends the setting to multi-hop factual recall tasks, and finds that the LM retrieves implicit subject information in the *deeper* MLP layers, at the last token position. The paper further shows that a knowledge editing method designed with this insight is more effective in multi-hop factual recall than previous methods targeting shallower MLP layers. Essential References Not Discussed: Two relevant papers come to my mind. 1. [Geva et al., 2023](https://arxiv.org/abs/2304.14767) -- investigates single-hop factual recall mechanism in LMs. 2. [Merullo et al., 2023](https://arxiv.org/abs/2305.16130) -- also investigates mostly single-hop factual recall. But the paper finds that sometimes the subject entity is being recalled again at the last token position of the prompt. Just to be clear, I don't feel strongly about either of these papers. I think they are relevant to the paper and would leave it to the authors to decide whether to cite them. Other Strengths And Weaknesses: **Strengths:** Well-motivated paper -- addresses a key problem in knowledge editing in LMs. **Weaknesses:** See questions. Other Comments Or Suggestions: N/A Questions For Authors: 1. **Localization of key states and modules** a. Section 3.1 extensively uses Logit Lens ([Nostalgebraist, 2020](https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens)) as a mechanistic interpretability tool to identify key locations/states in the LM computation during a multi-hop factual recall task. However, Logit Lens can give noisy results in earlier layers ([Belrose et al., 2023](https://arxiv.org/pdf/2303.08112)) and only starts giving meaningful results in deeper layers ([Geva et al., 2024](https://arxiv.org/pdf/2401.06102)). 
Therefore, in my opinion, using Logit Lens as a localization tool in this case has a systematic bias towards deeper layers. I would love to hear the authors' thoughts on this. b. I am not sure that I clearly understand the motivation behind the choice of Equations 1 and 2. * In Equation 1, all $j \in s_2$ are being optimized for. However, $W_U h$ is a linear operation. Therefore, shouldn't the (probably weighted) mean of the rows of $W_U$ corresponding to $j \in s_2$ be sufficient? Do we really need an SGD optimization here? * Similarly, in Equation 2, all $j \in o_2$ are being considered. However, these LMs are autoregressive and will not generate all $j \in o_2$ at once. Therefore, shouldn't we only consider the first token in $o_2$? I understand that the first token is not always enough to identify the correct generation, but it should be possible to curate a set of candidates where this is not the case, which I think is enough for the purposes of this localization experiment. c. I am not really sure I understand the results of Figure 3. Probably having the prompts on which the intervention is being performed would help. Here's my guess about the setup; please correct me if I am wrong. * The multi-hop prompt is something like `"The capital of the country where {} is located is"`. `{}` is replaced with $s_2 = \texttt{the Louvre}$ and the answer is $o_2 = \texttt{Paris}$. Then $s_l^*$ is optimized for a different $s_2 = \texttt{the Statue of Liberty}$ and the intervention is being performed on corresponding states at the last token position of the prompt (?) * What does the "invent" label mean in Figure 3? Is it a typo for "intervention"? Also, the probability *decrease* is being measured. Assuming my understanding of the setup is correct, is it measuring the probability decrease for `Paris` or the increase for `Washington`? * The values in Figure 3 aren't very large, with the strongest effects around 0.025 for residual states and $7 \times 10^{-6}$ for MLPs.
Is this significant enough to draw any strong conclusions? 2. **Evaluation** a. The paper mentions that for IFMET the multi-hop prompt is constructed using WikiData and the LM itself (Lines 309-10). How much do they differ from the prompts in MQuAKE, on which the edit is being evaluated? b. As the single edit performance of other methods reported in Table 3 is so low, I am assuming they are all being evaluated on multi-hop prompts. Is this correct? Then shouldn't this be reported as a grid-shaped #Edits $\times$ #hops matrix instead? c. Looking at Table 7 in the Appendix, I wonder why the single edit efficacy of PMET is below 90\% despite it being a direct optimization approach? I also encourage the authors to include ROME for single edits and MEMIT for multiple edits in this comparison. 3. When you incorporate CoT, did you make sure to not give the answer away? The example in Table 10 has the answer in the thought process. One way you can easily do that is to cut off the prompt before the answer. In your example it would be `"... Chevrolet is owned by"`. But then I think the single-hop editing should also be effective here. However, probably that is not what you do, as you report poor performance of single-hop editing in Tables 3 and 4. Alternatively, you can cut the prompt after `Thoughts:`, so that the LM has to generate the thought itself (pressured by the prompt and ICL examples). The setup wasn't clarified in the paper (if I didn't miss it) and I think it's an important thing to clarify. Code Of Conduct: Affirmed. Overall Recommendation: 3
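For concreteness, the Logit Lens operation at issue in Q1.a amounts to decoding an intermediate hidden state directly through the unembedding matrix. A minimal sketch with toy tensors (no real LM; all dimensions and values here are made up for illustration):

```python
import numpy as np

# Logit Lens, schematically: project a layer-l hidden state h_l through the
# unembedding matrix W_U and inspect which vocabulary tokens it already favors.
# All tensors below are random stand-ins, not taken from an actual model.
rng = np.random.default_rng(0)
d, V = 8, 50                        # hidden size and vocab size (toy values)
W_U = rng.standard_normal((V, d))   # stand-in unembedding matrix
h_l = rng.standard_normal(d)        # stand-in intermediate hidden state

logits = W_U @ h_l                  # "early decode" of layer l
top_token = int(np.argmax(logits))  # token the intermediate state most supports

assert logits.shape == (V,)
assert 0 <= top_token < V
```

The reviewer's concern is that such early decodes are only reliably interpretable in deeper layers, which could bias the localization toward them.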
Rebuttal 1: Rebuttal: Thank you very much for your recognition of our work. Here is our response. > Response to Q1.a in Questions For Authors In our past exploration, we also used Patchscope to reproduce the experiments shown in Figure 2, judging whether the corresponding information was present by checking whether Patchscope's subsequent output contained the answer. The results were surprisingly consistent with Logit Lens, so considering the universality and cost-effectiveness of the Logit Lens method, we still use it as our interpretability tool. We believe the results obtained with Logit Lens are already trustworthy, but in the future we can explore further with methods such as Tuned Lens. > Response to Q1.b in Questions For Authors First, regarding the optimization of s2-related information, we did not use SGD optimization; instead, we replaced the logits of the tokens in s2 with the minimum value over the whole vocabulary. In Equation 1, this should be expressed as $s_l^*[j] = \min(s_l)$, where $s_l$ represents the logits over all tokens in the vocabulary; a simple calculation then yields the corresponding hidden state via a combination of least-squares and minimum-norm methods. We are also sorry that we omitted an explicit description of Equation 2 in the paper. Your understanding is completely correct: we did not take all $j \in o_2$ tokens into account. In practice, we only consider the first token in $o_2$. As you said, this is enough for the purpose of this localization experiment. > Response to Q1.c in Questions For Authors Corresponding to the example you gave, it is a two-hop fact recall chain: (the Louvre, located country, France) + (France, capital, Paris). Instead of using a counterfactual example to perform a conventional corruption, we follow [1] and reduce the amount of information about **France** at the last token position by minimizing its corresponding logits value.
Then, we observe the causal effect of **France, the implicit intermediate subject,** through the decrease in the probability of **Paris**. The "invent" label is a spelling error, and we will correct it in the camera-ready version. Regarding the significance of the causal effects that you raised, we have provided a detailed explanation in the section "**Response to Q1.2 & Q1.3 & Q1.4 in Claims And Evidence about the causal intervention experiments**" in our reply to **Reviewer JTzm**. Please refer to that section for more details. [1] Understanding and Patching Compositional Reasoning in LLMs

> Response to Q2 in Questions For Authors

We construct a multi-hop prompt to put the knowledge to be modified into an implicit reasoning step. While MQuAKE only provides a single-hop edit template, we give a practical example here.

| Edit Case | Single-hop Prompt in MQuAKE | Test multi-hop Prompt in MQuAKE | Our multi-hop Prompt |
| -------- | -------- | -------- | -------- |
| (Marc Cherry, citizen, United States of America -> Bulgaria) | Marc Cherry is a citizen of | Which country is the creator of \"Devious Maids\" a citizen of? | The creator of Desperate Housewives is a citizen of |

We use all the relationships that have appeared in MQuAKE to construct candidate multi-hop edit prompts, delete those that share an explicit subject with the test multi-hop question, and keep the best one that the model can successfully answer. In Table 3, the other methods we reported only used single-hop editing prompts, and only IFMET used our framework. For the application of multi-hop prompts to other methods, please refer to the corresponding section in our response to Reviewer 1ooJ. We reported in detail the improvement achieved by applying our framework to **MEMIT**, which illustrates the generalizability of our method across editing baselines. > Response to Q3 in Questions For Authors about the CoT. We are sorry that our examples were confusing.
We may not have explained them clearly enough. As you said, our CoT setting is to cut the prompt after `Thoughts:`, so that the LM has to generate the thought itself (pressured by the prompt and ICL examples). For the same multi-hop problem, we generate the CoT and its corresponding answers three times to avoid possible errors. We will correct this in the revised PDF. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response to my questions. However, I would like to keep my score at 3. Congratulations to the authors on their work and the overall positive feedback from the reviewers. I look forward to seeing the final version of the paper at the conference. --- Reply to Comment 1.1.1: Comment: Thank you very much for the time and effort you dedicated during the review process, which significantly improved the quality of our manuscript. We also appreciate your recognition of our work and will incorporate all the valuable insights provided during the rebuttal phase into the final version.
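The construction discussed in the response to Q1.b above (replacing the implicit subject's logits with the vocabulary-wide minimum, then solving for a corresponding hidden state) can be sketched as follows. This is an illustration of the idea only, not the authors' implementation; all dimensions and token ids are hypothetical:

```python
import numpy as np

# Given logits s = W_U @ h, suppress the implicit-subject tokens by setting
# their logits to the minimum over the vocabulary, then recover a modified
# hidden state h* via a least-squares solve (toy stand-ins throughout).
rng = np.random.default_rng(0)
d, V = 8, 50                        # hidden size, vocab size (toy values)
W_U = rng.standard_normal((V, d))   # stand-in unembedding matrix
h = rng.standard_normal(d)          # stand-in hidden state at the last token

s = W_U @ h                         # original logits
s_star = s.copy()
subject_tokens = [3, 17]            # hypothetical token ids of the implicit subject s2
s_star[subject_tokens] = s.min()    # s*[j] = min(s) for j in s2

# least-squares (minimum-norm) solution of W_U @ h_star ≈ s_star
h_star, *_ = np.linalg.lstsq(W_U, s_star, rcond=None)

assert h_star.shape == (d,)
assert np.all(np.isfinite(h_star))
```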
Summary: This paper investigates how LLMs handle multi-hop factual recall under knowledge editing. Using various interpretability methods, the authors uncover a critical insight: for multi-hop questions, LLMs rely primarily on implicit subject information encoded in deeper MLP layers to derive final answers. This mechanism differs significantly from that of single-hop tasks, explaining the failure of previous locate-then-edit methods when applied to multi-hop factual recall. Building on these findings, they introduce IFMET, a novel locate-then-edit knowledge editing method. Besides editing the shallow MLP layers with single-hop edit prompts, IFMET employs multi-hop edit prompts to precisely locate knowledge requiring modification and subsequently edits the deeper MLP layers. Comprehensive experimental evaluation demonstrates that IFMET substantially outperforms other methods. ## update after rebuttal My concerns have been addressed, and I maintain my score. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Experimental results show that IFMET exhibits superior performance. Theoretical Claims: Yes. The Logit Lens and causal intervention methods used in the mechanism exploration have been checked. Experimental Designs Or Analyses: The validity and soundness of the experimental design and analysis of the IFMET method are demonstrated mainly through experiments on the answer accuracy of the original and edited answers, as well as comparative experiments across multiple models and ablation studies. Supplementary Material: No supplementary material. Relation To Broader Scientific Literature: Knowledge editing is an important research field, and the analysis of reasoning mechanisms and the improvement of methods in knowledge editing scenarios are very valuable.
Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: Clear Structure and Precise Expression: The quality of the writing is good. Complex concepts and intermediate solutions are effectively expressed through well-chosen examples and informative visualizations. Novel Insight: The paper highlights the limitations of current knowledge editing methods and provides valuable guidance for future research. The analysis of reasoning mechanisms in knowledge editing scenarios and the discovery of implicit subject information in intermediate layers contribute significantly to our understanding of LLM reasoning. Strong Theoretical Support of Method: The authors have constructed a rigorous logical chain through extensive exploratory experiments and theoretical derivations, making the logical foundation of the proposed methodology, IFMET, both solid and well-supported, with high interpretability. Experiments: IFMET improves both original and new answers, has been tested across various models, and demonstrates its effectiveness. The authors also provide a detailed discussion of generalizability. Weaknesses: The proposed IFMET framework builds on existing knowledge editing methods but appears to have been evaluated only on PMET; I hope the framework can be extended to other editing methods (such as MEMIT) to demonstrate its generalizability. Other Comments Or Suggestions: Refer to weakness. Questions For Authors: Refer to weakness. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you very much for your recognition of our work and for your suggestion to extend the framework to other editing methods (such as **MEMIT**) to demonstrate the generalization of the method itself. We conducted detailed experiments to verify this point. Due to time constraints, we tested it on the ablation set (a subset of 500 items consistent with the ablation experiment). The experimental results on **MEMIT** are as follows:

| Editor | Multi-hop | Efficacy | Specificity | Paraphrase |
| -------- | -------- | -------- | -------- | -------- |
| MEMIT | 15.6 | 93.0 | 76.6 | 76.4 |
| IFMET (with MEMIT) | 24.2 | 100 | 67.2 | 82.2 |

It can be seen that using **MEMIT** as the baseline and using **PMET** as the baseline (Table 7) show similar trends on the four metrics of **Multi-hop Acc, Efficacy, Specificity, and Paraphrase**, which demonstrates that our framework generalizes across different editing baselines.
Summary: This paper introduces a method called IFMET to perform multi-hop factual edits to language models. It first conducts an analysis of LM factual recall in the presence of multi-hop edits and finds that LMs integrate hops beyond the second in deeper MLP layers, compared to single-hop facts which are retrieved in shallow MLP layers. It then introduces IFMET, a two-stage edit process that propagates to multi-hop facts by first editing the first-hop fact at shallow MLP layers, then editing the subsequent-hop facts at deeper MLP layers. It uses a search-then-edit process in both stages, first locating where the nth-hop fact is stored using an n-hop edit prompt, then editing the MLP at that location to store the new fact. Experimental results show that IFMET significantly improves performance on multi-hop factual recall tasks compared to previous locate-then-edit methods. Ablation studies further confirm the importance of both stages of editing, the use of multi-hop prompts, and the targeting of deeper MLP layers for effective multi-hop knowledge editing. Claims And Evidence: Examining the claims made and the evidence presented 1. Multi-hop facts accumulate at the last token position compared to single-hop facts. This is supported by a Logit Lens finding that the 2nd-hop fact subject peaks in the middle layers of the last token, while the single-hop fact subjects do not peak at all at the last token. It is also supported by a causal analysis: modifying the representation at that layer (/at those MLPs) to erase the subject of the 2nd-hop fact decreases the probability of the correct answer, compared to erasing a random token. 1. Figure 2 shows that the final answer probability actually decreases in the last few layers (beyond layer 25), which seems questionable to me. I expect the final answer probability to peak at the last layer? 2.
The effect size is small for the causal analyses: the probability of the final token decreases by only 2.5% when the 2nd-hop fact is erased in the residual stream, and on the order of 1e-6 when it is erased in the MLP. This makes me wonder if the information is more distributed than the authors claim here. It's also questionable to me why, in the MLP version of this analysis, the effect is measured at the *residual stream of the layer*, rather than the final output... 3. While the results *do* show the final token has more information about the 2nd-hop fact compared to the 1st-hop fact, and that removing this information has a small degree of causal effect on the final answer compared to removing a random token, it's unclear whether this information is actually localized to the final token rather than an intermediate token. The authors should do some comparison of effect sizes across different positions in the prompt. 4. Furthermore, the claim that multi-hop facts are accumulated into/applied at the MLP layers should be checked, especially because of the relatively small effect size. Other layers, like attention, should be compared against. 5. Experiments are run on 2-hop facts but are used to generalize to n-hop facts. Are similar results observed for n-hop facts? 2. An implicit assumption is that the multi-hop fact *should* change if a fact is edited. It may be good to contextualize when this would be desired: for example, if the capital of Spain changes to Hartford, then it could be possible that we are in a counterfactual world where Spain and Connecticut switched names, so Pablo Picasso was actually born in "Connecticut" which still has capital "Madrid". Of course, it is more likely that the capital of Spain changed its name. 3. IFMET outperforms prior methods at multi-hop edits.
This seems well-supported by performance improvements on multi-hop editing facts compared to baselines, especially PMET, and ablation studies show the necessity of each part of the method. Methods And Evaluation Criteria: The method generally makes sense: MLP layers are modified for both the single-hop and multi-hop versions of prompts. MQuAKE-3K, a multi-hop knowledge editing benchmark, was used for evaluation. It should be clarified whether the same multi-hop prompts were used during editing as during evaluation; it seems important to validate that the edits generalize to cases where the fact appears in a multi-hop chain alongside facts not seen during editing (e.g. if we change Spain's capital to "Hartford", and the multi-hop edit was done using the prompt "Pablo Picasso's country's capital is Hartford", then it should be evaluated on the prompt "the capital of the 4th most populous country in Europe is Hartford" or even "Hartford is in Europe"). In general, IFMET appears to be an effective method for editing multi-hop facts, and does not seem to negatively impact single-hop facts. However, there are several limitations of this method compared to prior work, especially because more edits need to be made, for example: 1. Negatively impacting specificity (Table 8), indicating that other unrelated facts are being affected by these edits 2. Being more expensive because of the need to make more edits (Table 9) 3. Assuming access to a knowledge base (e.g. WikiData/SPARQL) in order to construct multi-hop prompts to use for editing, which prior methods do not. Theoretical Claims: No theoretical claims in this paper. Experimental Designs Or Analyses: The experimental designs generally seem sound. The authors perform evaluation against baselines on MQuAKE and compare results based on # of edits and # of hops.
There is also a comprehensive ablation analysis of each component of the IFMET method, showing that both stages are necessary, and that using multi-hop prompts at deeper MLP layers is necessary. Supplementary Material: Yes, the parts referred to from the main paper. Relation To Broader Scientific Literature: The authors compare against essential prior work on model weight editing like MEND, ROME, MEMIT, PMET, as well as finetuning. They show improvements against these baselines on multi-hop knowledge edits on an established benchmark, MQuAKE. They make an interesting discovery that multi-hop information is stored in later layers than single-hop information, and accumulated at the last token, which builds upon prior understanding of where single-hop knowledge is stored. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See strengths and weaknesses above. In addition, some clarity suggestions below: 1. Important methodological details about IFMET are relegated to the appendix, as is context for understanding the metrics used in Table 4. These should be moved up, and perhaps some details about the ablations and generalizability can be moved to the appendix instead. 2. The paper should -- early on -- clarify whether what matters is the *total* number of hops in the edit, or how many hops deep the *edited fact* is. For example, it is my understanding that editing the first fact in the chain is akin to a "single-hop" edit (and is stored in shallow MLP layers), while editing the nth fact in the chain is akin to a multi-hop edit (and is stored in deep MLP layers). So it doesn't matter how long the multi-hop chain is in total, just the (absolute?) position of the edited fact in the chain. This should be clarified early on. Other Comments Or Suggestions: N/A Questions For Authors: 1. How are 3/4-hop facts stored? Are they all stuck in the same MLP layer past the first hop? Or is there a smooth correlation between depth and position in the edit chain? 2.
"Even in the pre-type tasks, where previous methods are relatively more proficient, IFMET achieves a significant improvement." (L385 right column) Why is this the case? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Response to Q1.1 in Claims And Evidence. In our actual experiments, we conducted a simple exploration and found that the probability of articles (e.g., *the*, *a*) increases significantly in the last few layers, which squeezes the absolute probability of the answer to a certain extent. But the answer token still has the highest probability, so it is output at the last layer. > Response to Q1.2 & Q1.3 & Q1.4 in Claims And Evidence about the causal intervention experiments. First of all, we need to state that we reported the absolute probability change in the paper. This was our oversight. Based on your and reviewer ixiZ's suggestions, we revised the causal intervention method in the paper. We re-conducted a detailed intervention experiment with a window size of 5 to explore **the role of the layer hidden state and of components such as the MLP and attention at different prompt positions**, using the decrease of probability at the last prediction layer as the causal effect *IE*. The experimental results are as follows: 1. Prompt position: We compared the last subject token position (which is also generally considered important), the last token of the first hop, and the last token position. **Any intervention effect at the last subject token position (<3%) and the last token of the first hop (<2.5%) was weak**, at the layer level as well as at the attention head and MLP levels. The response at the last token position was significantly stronger, with the **largest response over 12%** in deep layers. 2. Layer intervention: We reached a consistent conclusion that the highest decay occurred in the **16th-21st** layers, with the largest effect of about **12%** on probability. We compared this with the strength of the average causal effect reported in Figures 2 and 3 of [1], where the intervention effect ranged from about 2.5% to 15%. We believe that such strength on probability and logits is acceptable. 3.
Component: We compared the effect of intervening on the input of the attention head and of the MLP on answer prediction. **The effect of the MLP is significantly stronger than that of the attention head, generally reaching more than 12% in the deep 18th-24th layers**, while the attention effect is less than 3% across all layers. 4. In general, the results of the intervention experiment are consistent with the phenomenon observed in Figure 2: the implicit subject information plays a causal role in the generation of the final answer through the deep MLP. [1] Locating and Editing Factual Associations in GPT > Response to Q2 in Questions For Authors about the performance of pre-type tasks. We are sorry that our definition of **pre and post type** in Table 4 is ambiguous. In the paper, we added the cases of editing the first n consecutive hops into the **pre** category, such as editing the first and second hops out of 3 hops. But as you said, the important insight is to divide according to whether only the first hop is edited, so we reclassified the results into **pre and post type**, corresponding to modifying only the first hop (i.e., the explicit hop) or including n>1 hops (implicit hops), in correspondence with our conclusions in the exploration section and the experimental results in Table 2.
After the new classification, there are 426 pre-type cases and 2574 post-type cases; the results in Table 4 under the new classification are as follows:

| Editor | Average↑ (Edited Answer) | Pre↑ | Post↑ | Average↓ (Unedited Answer) | Pre↓ | Post↓ |
| -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Base | 7.70 | 11.97 | 6.99 | 39.63 | 34.04 | 40.56 |
| Base+CoT | 6.83 | 8.22 | 6.60 | 42.83 | 41.31 | 43.08 |
| PMET | 11.17 | 39.20 | 6.53 | 29.95 | **10.09** | 33.26 |
| PMET+CoT | 17.04 | 43.66 | 12.63 | 29.35 | 12.67 | 32.13 |
| IFMET | 23.04 | 38.03 | 20.55 | 23.08 | 11.27 | 25.02 |
| IFMET+CoT | **31.01** | **43.66** | **28.90** | **21.32** | 10.80 | **23.08** |

It can be seen that the previous method itself achieved significant results on the pre type, as we expected, and our method is basically on par with it (this is predictable because the knowledge in the early MLP is fully updated in both cases). On the post type, our method shows an obvious advantage. This further confirms our findings in the paper. We will modify this in the paper, which will greatly improve the consistency and clarity of our article. > Response to Q1 in Questions For Authors & Q1.5 in Claims And Evidence about n-hop We conducted experiments only on 2-hop cases partly because we followed the basic settings of previous work, and partly because the model's performance on three-hop and four-hop questions under cloze prompts (i.e., directly producing the final answer as the highest-probability token) is not ideal. Due to the limitations of data volume, existing models, and computing resources, we hope to explore this in future work.
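The causal-effect measure used in the revised intervention experiments above (the drop in the answer's probability at the final prediction after an intervention) reduces to a one-line computation. The softmax outputs below are fabricated stand-ins for clean and intervened model runs:

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over a logit vector
    e = np.exp(z - z.max())
    return e / e.sum()

# Indirect effect IE = p_clean(answer) - p_intervened(answer): how much the
# answer's probability decreases when the implicit subject is suppressed.
# The logit vectors are made up purely for illustration.
answer_id = 2
p_clean = softmax(np.array([0.1, 0.3, 2.0, 0.2]))       # stand-in clean run
p_intervened = softmax(np.array([0.1, 0.3, 0.9, 0.2]))  # stand-in intervened run

ie = p_clean[answer_id] - p_intervened[answer_id]
assert ie > 0  # suppressing the implicit subject lowers the answer's probability
```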
Summary: This paper identifies significant limitations in existing knowledge editing methods based on the "locate-then-edit" paradigm, particularly when applied to multi-hop factual recall tasks. To explore these limitations, the authors first employed the Logit Lens technique and discovered that multi-hop queries tend to cluster implicit subject information at the last token position, while single-hop queries do not. Next, they used causal inference techniques to confirm that implicit subjects impact the final inference results and pinpointed that the key component involved in this process is the deep MLP layer. Based on these findings, the authors analyzed existing knowledge editing methods and found that most of them overlook the editing of deep MLP layers, focusing primarily on shallow MLP layers, because of the use of single-hop edit prompts. This oversight leads to limitations in performance on multi-hop factual recall tasks. To address this issue, the authors propose IFMET, a method that balances the editing of both shallow and deep MLP layers with multi-hop edit prompts, thereby enhancing the capability of the "locate-then-edit" paradigm for multi-hop tasks. ## update after rebuttal My concerns are addressed, and I maintain my positive score. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes. The proposed method addresses the issue where post-edited models struggle with multi-hop factual recall tasks, particularly those that involve newly edited knowledge. Through extensive comparative experiments and ablation studies across various models, the paper demonstrates the effectiveness of the proposed approach. Theoretical Claims: Yes, the paper provides a closed-form solution for the incremental weight, and this section includes theoretical claims. The solution process is correct and free from issues.
Experimental Designs Or Analyses: Yes, based on the content of the paper, the validity and soundness of the experimental design and analysis are primarily demonstrated through comparative experiments across multiple models and ablation studies. The experimental design addresses several key aspects: it identifies the key differences in the mechanisms the model uses for reasoning in single-hop versus multi-hop fact recall tasks. The paper evaluates multi-hop tasks across various models and conducts a well-designed set of ablation experiments to demonstrate the effectiveness of the proposed method. The design and analysis methods are appropriate and effective, with no obvious issues or flaws found. All experimental and analytical steps have been thoroughly validated. Supplementary Material: Yes, I reviewed the supplementary material. Relation To Broader Scientific Literature: Knowledge editing is an active research area in natural language processing. IFMET advances this field by introducing a two-stage editing strategy (shallow and deep editing), especially addressing the challenges in multi-hop reasoning tasks. IFMET also modifies knowledge based on interpretability guidance, which aligns with recent research on model interpretability and reasoning transparency. Essential References Not Discussed: No. Other Strengths And Weaknesses: **Strengths** - Empirical analysis: The paper provides a solid empirical analysis, validating two hypotheses about how large language models (LLMs) process multi-hop queries compared to single-hop ones. Using interpretability tools like Logitlens and causal intervention, the authors identify key insights that enhance our understanding of LLM reasoning and model interpretability. - Effective approach: The IFMET solution is well-founded and effectively addresses multi-hop reasoning challenges, showing notable improvements in performance, with potential real-world applications. 
- Thorough experimentation: The experimentation is thorough, with ablation studies and comparisons to existing methods, which effectively demonstrate the approach's superiority in multi-hop factual recall tasks. **Weaknesses** - The paper's formatting needs improvement, and the content in the experimental section is relatively limited. Some important results, such as the ablation experiments on LLaMA2, are presented in the appendix. It is hoped that adjustments can be made in the camera-ready version. - I strongly recommend that the authors include a discussion of in-context editing approaches in relevant sections, such as Related Work or Appendix. Other Comments Or Suggestions: See weaknesses Questions For Authors: 1. How is the dimension of K0 determined in Equation 3? 2. Is the performance of the methods in Table 3 related to the number of edits? As shown in the table, the performance of 3-edit is worse than both 2-edit and 4-edit. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you very much for your recognition of our work. Here is our response. > Response to Q1: How is the dimension of K0 determined in Equation 3? K0 represents the knowledge we try to preserve when modifying a specific fact. Throughout the calculation, we do not compute K0 separately; instead, we follow the practice in ROME [1], treating $C = KK^T$ in Equation 3 as a constant that we pre-cache by estimating the uncentered covariance of k from a sample of Wikipedia text. Our second-moment statistic $C \propto E[kk^T]$ is computed using 100,000 samples of hidden states k obtained from tokens sampled in-context from Wikipedia text. > Response to Q2: Is the performance of the methods in Table 3 related to the number of edits? As shown in the table, the performance of 3-edit is worse than both 2-edit and 4-edit. We believe that the performance on multi-hop questions after knowledge editing is usually inversely proportional to the number of edits. However, in the actual implementation of a KE method, it is also related to the quality of the prompt used for editing. We checked the prompts used in the 3-edit part and found that the model's confidence when answering based on them was relatively low, meaning the model is more likely to be biased when recalling; the multi-hop prompts in the 3-edit cases may therefore not be optimal. We believe that constructing better multi-hop prompts can be left for future exploration; this article thus represents a lower bound on the performance of the proposed framework. [1] Locating and Editing Factual Associations in GPT --- Rebuttal Comment 1.1: Comment: Thank you for your further response. My concerns have been well addressed.
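The pre-cached second-moment statistic described in the response to Q1 can be sketched as follows, with synthetic hidden states standing in for the 100,000 Wikipedia samples and a toy dimension:

```python
import numpy as np

# Estimate C ∝ E[k k^T] as the uncentered covariance of sampled hidden states k.
# K below is random data, a stand-in for hidden states of Wikipedia tokens.
rng = np.random.default_rng(0)
d, n = 16, 1000                    # hidden size and sample count (toy values)
K = rng.standard_normal((n, d))    # each row is one sampled hidden state k

C = K.T @ K / n                    # uncentered second moment, cached and reused

assert C.shape == (d, d)
assert np.allclose(C, C.T)         # symmetric by construction
```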
On Efficient Estimation of Distributional Treatment Effects under Covariate-Adaptive Randomization
Accept (poster)
Summary: This paper proposes a method of estimating the distributional treatment effect, in a setting that uses randomised experiments using covariate-adaptive randomisation. The distribution is captured through the cumulative distribution function, and estimation is done via regression-adjustment. Claims And Evidence: Theoretically, it is claimed that the proposed estimator for the distributional treatment effect is asymptotically a Gaussian process (Theorem 5.2), and that the estimator attains the semiparametric efficiency bound. I went through some of the proofs and could not find any obvious errors, and these results are very believable. Practical advantages are demonstrated through experiments. Methods And Evaluation Criteria: The method given in algorithm 1 makes sense for the problem at hand. But I personally do not believe that the cumulative distribution function is the best way of capturing the features of a distribution, because one needs to regress for many $y$ values to get a sense of the distributional treatment effect. Quantiles, kernel mean embeddings, or even specific distributional quantities like the variance would be more of interest in my opinion. Please correct me if I'm mistaken about this, but I think the authors should at least give a discussion of this issue in the paper. See also "Questions". Theoretical Claims: I checked some parts of the proofs of Theorems 5.2 and 5.3, I did not go through every line, but I could not spot any obvious errors. The proofs are essentially lifted from previous works and I can fully believe that they are true. The mathematical presentation is in general very nice, and I appreciate the effort that the authors put into writing the proofs. However, I think the presentation of the proofs can be improved. The authors use many concepts that are taken directly from the proofs of previous papers, such as VC-type function classes or Lemma N.2 of (Jiang et al., 2023). 
VC-type function classes were introduced by Chernozhukov et al. (2014), but I would not say the definition and the ensuing results are standard enough in the literature for most readers to be familiar with them, and it would certainly be worth giving the definitions, at least in the Appendix before the proofs start. Also, the quantitative statements of both Theorems 5.2 and 5.3 are deferred to the Appendix, where they are quite hard to find hidden in the proofs, and I think the precise statements should be given in the theorem statements in the main body. Experimental Designs Or Analyses: No issues I could find. Supplementary Material: Some of the proofs, as discussed above. Relation To Broader Scientific Literature: The authors do a great job of reviewing the related literature in Section 2, going much further back than what is usual, with papers cited from the 70s. The review of recent literature is extensive and thorough. Essential References Not Discussed: Nothing essential missing, I would say. Other Strengths And Weaknesses: The paper is well-written, well-structured, and delivers its messages in a clear way. It was a pleasure to read. However, the mathematical notation and presentation could be improved somewhat, especially in Section 5. See "Other Comments Or Suggestions". In general, I would say that this is a solid paper, but not groundbreaking. The main contribution is the proposal of the estimator itself, and the algorithm to compute it, but distributional treatment effects in the case of CAR seem already covered in the form of quantile treatment effects by Jiang et al. (2023), and tackling the same problem with quantiles replaced by cumulative distribution functions seems somewhat incremental. However, I don't like to see papers rejected on the basis of reviews of the form "incremental" or "lack of novelty", and the submission does do quite a solid job of the problem at hand.
The theory is nice to have and it is well written (with some caveats - see other sections), but no novel insights arise from it - the results and proofs are very similar to those of existing papers. I think it would have been better to provide slightly novel forms of results, perhaps non-asymptotic high-probability bounds or lower bounds, but then that would go towards making this paper a theory paper, which of course this submission is not. The experiments also seem to be carried out thoroughly, but I'm not convinced that they demonstrate a clear need or advantage for this method. Other Comments Or Suggestions: 216L: The indicator function is defined here, but it is actually used first on page 3. If you want to explicitly introduce the indicator function, I think it should be done when it is first used. But I think it is not strictly necessary; the vast majority of readers should be familiar with the indicator function. Indeed, I thought nothing of it when it was used on page 3 without being explicitly introduced. Algorithm 1: This is being pedantic, but you use $w$ both for a particular treatment and for indexing through the two treatments $\{w,w'\}$. I think it would be better to use another indexing letter, what do you think? 270L: The notation $l^\infty(\mathcal{Y})$ (or $l_\infty(\mathcal{Y})$) is usually reserved for the space of bounded sequences, i.e. bounded functions on the domain $\mathbb{N}$. The notation $L_\infty(\mathcal{Y})$ would be better for the space of bounded functions with the supremum norm. 255R: In Theorem 5.2, "for uniformly over $y\in\mathcal{Y}$" sounds strange. I think "for" can be removed. 260R: I think it would be much better to be specific about what $\mathcal{G}(y)$ is here, rather than referring the reader to the appendix, especially given that this is one of the main results of the paper. The same applies for Theorem 5.3. The reference [Bickel et al., 1993] is strange - some authors are repeated.
673: In the definition of $\mu_w(y,s)$, the closing round bracket is in the wrong place. 675: The notation $I_x(y)$ was already used on 226R, for a certain set of indices. It would be better to use another notation. Questions For Authors: You use the cumulative distribution function to represent a distribution, and of course this is one good way, as it completely characterises a real distribution. However, when the outcome variable is not real-valued, or when you want to capture more specific distributional aspects, there are other ways to characterise a distribution, for example, kernel mean embeddings or kernel witness functions, as done in [Park et al., 2021], or the concurrent paper by Näf and Sussman (https://arxiv.org/abs/2411.08778, which would be relevant to cite). Do you think it would be possible to apply the kernel representations of distributions to CAR, or equivalently, to use your approach for CAR using kernel representations of distributions? In Assumption 5.1(iii), does the constant depend on $w$ and $y_1,y_2$? Code Of Conduct: Affirmed. Overall Recommendation: 4
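For concreteness, the estimator discussed in this review (an IPW difference of empirical CDFs over a grid of $y$ values, with an optional regression adjustment) can be sketched as follows. This is a simplified illustration under complete randomization with a known propensity, not the authors' Algorithm 1; `mu1`/`mu0` are hypothetical user-supplied predictions of $P(Y(w) \le y \mid X)$.

```python
# Toy sketch of a regression-adjusted distributional treatment effect estimator.
# Not the paper's method: complete randomization, known propensity pi.
import numpy as np

def dte_adjusted(y_grid, Y, W, mu1, mu0, pi=0.5):
    """Adjusted F_1(y) - F_0(y) on a grid; mu1/mu0 have shape (len(y_grid), n)
    and predict P(Y(w) <= y | X); zero adjustment recovers plain IPW."""
    ind = (Y[None, :] <= y_grid[:, None]).astype(float)  # 1{Y_i <= y}
    f1 = (W / pi * (ind - mu1) + mu1).mean(axis=1)
    f0 = ((1 - W) / (1 - pi) * (ind - mu0) + mu0).mean(axis=1)
    return f1 - f0

rng = np.random.default_rng(0)
n = 2000
W = rng.integers(0, 2, size=n).astype(float)
Y = rng.normal(loc=W)                    # Y(1) ~ N(1,1), Y(0) ~ N(0,1)
grid = np.linspace(-2.0, 3.0, 11)
zeros = np.zeros((len(grid), n))
est = dte_adjusted(grid, Y, W, zeros, zeros)  # zero adjustment = plain IPW
```

In this toy design the true DTE at $y$ is $\Phi(y-1)-\Phi(y) \le 0$, so the estimated curve should be non-positive up to sampling noise.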
Rebuttal 1: Rebuttal: We are deeply grateful for your detailed and thoughtful review and appreciate your positive assessment of our paper. **Theoretical Presentation** Thank you for the valuable suggestions to improve our mathematical presentation: 1. We will provide definitions of VC-type function classes in the Appendix as you suggested. 2. We will move the quantitative statements of Theorems 5.2 and 5.3 to the main body. 3. We will correct the notational inconsistencies you kindly pointed out, including: - Moving the definition of the indicator function to its first use - Using different indexing in Algorithm 1 to avoid confusion - Using L∞(Y) instead of l∞(Y) for the space of bounded functions - Correcting the statement "for uniformly over y∈Y" by removing "for" - Fixing the bracket placement in μw(y,s) and the repeated notation of Ix(y) - Correcting the Bickel et al. reference **Novelty and Contributions** Regarding your comment about incrementality compared to Jiang et al. (2023): 1. While Jiang et al. focus on quantile treatment effects under CAR, our distributional approach offers important advantages. Quantile estimation requires much stronger conditions (continuity and positive density around the quantiles of interest), while our approach is applicable to non-continuous outcomes. We provide the following comparison of our method with Jiang et al.'s below. We use the same DGP (our original DGP) with n = 5000 to compute the QTE. The table below presents the RMSE reduction (%) of regression-adjusted QTE estimators relative to the unadjusted QTE, following Jiang et al.'s method. Linear adjustment achieves a comparable magnitude of RMSE reduction to the DTE estimator. In contrast, logistic regression fails to reduce RMSE due to bias. The DGP in this setting is highly nonlinear and complex, with many irrelevant covariates. Consequently, simple linear adjustment outperforms ML-based adjustment in this specific case for the QTE.
| Quantiles | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Jiang et al Linear (QTE) | -0.55 | 0.44 | 2.82 | 3.90 | 6.51 | 5.66 | 2.96 | 2.89 | 4.45 |
| Jiang et al Logistic (QTE) | -322.58 | -352.95 | -260.41 | -114.72 | -11.48 | -120.13 | -250.86 | -289.59 | -194.00 |

2. The formulation of our approach leads to a more straightforward algorithm for estimating distributional or quantile treatment effects. This offers more than mere computational advantages and delivers substantial practical improvements compared to existing methods. Please refer to the response to Reviewer G1Ux for detailed comparisons. We will clarify these distinctions more explicitly in the revised manuscript. **Regarding Your Specific Question** In terms of Assumption 5.1(iii), we realize that this assumption is more restrictive than necessary. We will revise the condition to hold locally. To prove our main results, we need the following condition: $\sup_{f \in \mathcal{F}}\mathbb{E}f^2 \leq c\epsilon \equiv \sigma^2$ for some constant $c>0$, where $\mathcal{F} = \\{ \phi_w(y_2,s,Y_i^s(w),X_i^s)- \phi_w(y_1,s,Y_i^s(w),X_i^s): y_1,y_2 \in \mathcal Y, y_1 < y_2 < y_1+\epsilon \\}$ for some sufficiently small $\epsilon >0$. **Alternative Distributional Representations** Thank you for raising an inspiring question about the applicability of Kernel Mean Embedding (KME) approaches. Our simple answer regarding whether KME is applicable for generic outcome distributions is positive. For multivariate outcomes or non-standard outcomes, estimating the conditional distribution function would be challenging, while working in the Reproducing Kernel Hilbert Space (RKHS) would be more straightforward. One caveat is that if the kernel function in use is continuous, the approach requires the continuity of underlying distributions.
Since your question is both challenging and important, we will include it in the discussion section as a promising avenue for future research. Thank you again for your careful reading and constructive suggestions, which will help us improve the clarity and presentation of our work. --- Rebuttal Comment 1.1: Comment: Dear authors, Thank you for your detailed rebuttal, and I raise the score in light of it. Best, reviewer
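The kernel mean embedding idea raised in this exchange can be illustrated with a small sketch (our own toy example, not code from the paper or from Park et al., 2021): the empirical witness function $f(y) = \frac{1}{n}\sum_i k(y, Y_i^{(1)}) - \frac{1}{m}\sum_j k(y, Y_j^{(0)})$ compares two outcome samples through a Gaussian kernel, with no continuity assumption needed on the estimator itself.

```python
# Toy illustration of a kernel witness function between two outcome samples.
import numpy as np

def kernel_witness(y_grid, sample1, sample0, bandwidth=1.0):
    """Difference of empirical kernel mean embeddings, evaluated on y_grid."""
    def gauss(a, b):
        return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * bandwidth ** 2))
    return gauss(y_grid, sample1).mean(axis=1) - gauss(y_grid, sample0).mean(axis=1)

rng = np.random.default_rng(0)
s1 = rng.normal(loc=2.0, size=4000)   # outcomes under treatment
s0 = rng.normal(loc=0.0, size=4000)   # outcomes under control
w = kernel_witness(np.array([0.0, 1.0, 2.0]), s1, s0)
# w is negative where control outcomes are denser, positive where treated are
```

The supremum of $|f|$ over $y$ relates to the MMD between the two distributions, which is one way the KME view sidesteps quantile-style density conditions.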
Summary: The authors propose an estimator and inference method based on asymptotic normality for distributional treatment effects under covariate-adaptive randomization. The primary estimand considered in this work is the difference between cumulative distribution functions for a fixed value $y$ between treatments. The authors propose an estimation approach based on inverse probability weighting and regression adjustment, and show that it satisfies asymptotic normality, enabling inference. Furthermore, they provide results showing semiparametric efficiency of their estimator, and provide experiments using their approach. Claims And Evidence: The claims made by the authors in their theorems are well justified, and appear to follow from standard techniques in the literature. The authors make the following claim about regression adjustment in their Introduction (l. 61): "our work advances this framework to accommodate distributional treatment effects." However, regression adjusted / doubly robust approaches to estimation of distributional causal parameters have been done extensively in previous works (including for discrete outcomes) by references the authors themselves cite (e.g. Kallus et al., 2024, Kallus & Oprescu, 2023). These papers even extend their estimand to general distribution dissimilarity measures (such as f-divergences) between the distributions of the potential outcomes under two treatments. Could the authors elaborate a bit more on how their work positions itself uniquely beyond existing approaches? Methods And Evaluation Criteria: The proposed methods are tested on both simulated and real-world data. It would be helpful to have error bars in Table 1 in order to determine whether the changes are significant or not. Additionally, given that this paper provides asymptotic normality results, it would be nice to see additional experimental results that show approximate Type I Error control if using Wald-style confidence intervals for inference.
Theoretical Claims: The proofs seem reasonable - I have not checked carefully. Experimental Designs Or Analyses: N/A Supplementary Material: N/A Relation To Broader Scientific Literature: This paper provides an application of regression adjustment / doubly-robust approaches for covariate-adaptive randomization. The main contribution of this paper is its choice of estimand, which can be used to determine the difference in probability masses between two treatments, given upper/lower bounds on the outcome value. The estimand and its efficiency results seem to follow directly from many existing works. Essential References Not Discussed: The paper cites all related work in the literature, but fails to distinguish its results from previous work. It could be helpful to contrast their results (both in terms of estimator and efficiency bound) in order to make their contributions clearer. Other Strengths And Weaknesses: Strengths: * The writing and explanation of this work is clear. For example, Assumption 3.1 is well described by the authors, with each component explained clearly. * The authors incorporate covariate-adaptive randomization (which differs from standard context-dependent propensities slightly by assuming it only depends on the stratum). Weaknesses: * Unclear if these results provide additional benefits beyond existing work on doubly-robust estimation of distributional treatment effects * Does not empirically validate important results in the paper, such as asymptotic normality and Type I error of its Wald confidence interval. * Does not clearly separate its results from existing, well-established frameworks from related literature. Other Comments Or Suggestions: N/A Questions For Authors: It would be nice for the authors to clarify their contributions beyond defining an estimator for the difference in probability for outcomes under some threshold $y$ between treatments, and using the well-known doubly-robust framework for estimating this quantity.
This quantity is easier to estimate than other distributional quantities, such as differences between quantiles of outcomes under each treatment, due to having no nuisances and a nice linear form. The proposed estimator appears to be a simple extension of the doubly-robust framework for causal estimates, and I'm unsure where the covariate-adaptive randomization plays a novel role in either the estimator or its analysis. Could the authors please clarify this? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your detailed review of our manuscript. Your insightful feedback has greatly helped us identify key areas to strengthen in our work. **Distinguishing Our Work from Existing Literature** You kindly raised a question about positioning our work uniquely beyond existing approaches. We acknowledge that our current manuscript does not sufficiently differentiate our contributions, and we will address this limitation in the revision as follows: 1. While doubly-robust approaches for distributional treatment effects exist in the general causal inference literature, including Kallus & Oprescu (2023), our work addresses the unique challenges of covariate-adaptive randomization (CAR) designs. CAR introduces specific dependence structures that require different theoretical handling than standard randomized experiments. Also, the focus of Kallus & Oprescu (2023) is the conditional distributional treatment effect rather than the unconditional distributional treatment effect that we consider. 2. A key distinction is that our approach directly estimates distributional effects rather than quantile treatment effects (QTE). Existing literature, including Kallus et al. (2024) and Jiang et al. (2023), focuses primarily on QTE, which requires much more stringent conditions such as smooth density around the quantiles of interest. Our approach does not impose these requirements, making it applicable to a broader range of outcome distributions, including discrete and mixed distributions. 3. We will also add simulation results to compare our method with Jiang et al. (2023). Please see responses to Reviewer sqwn and Reviewer G1Ux. **Contributions of Our Work** 1. Our efficiency bound results are derived specifically for the CAR setting, which has not been previously established for distributional treatment effects.
This distinction is important as it provides experimenters with guarantees on the maximum precision achievable in these increasingly common experimental designs. 2. The efficiency bound and asymptotic results under CAR differ from those under complete randomization, particularly in how the stratum indicators interact with additional covariates. Our theoretical results characterize this relationship precisely and provide a valid inferential procedure. 3. We think that the linear form of our estimator is a key strength compared to quantile treatment effects. This simplicity should be taken as an advantage rather than a limitation. The CAR setting introduces specific challenges that our method effectively addresses, resulting in superior practical performance compared to quantile treatment effects, which require non-differentiable optimization methods. Our approach is theoretically valid and practically useful. **Empirical Validation** You correctly point out limitations in our empirical validation. Following your advice: 1. We will add error bars to Table 1 to assess the statistical significance of the improvements. 2. We will conduct additional experiments to validate the Type I Error control using Wald-style confidence intervals, demonstrating the practical implications of our asymptotic normality results. We report the average length and coverage probabilities of the 95% confidence intervals based on analytic standard errors derived from our asymptotic variance, for our original DGP with n=5000. Both the linear and ML adjustment methods (using XGBoost) result in shorter confidence intervals. The coverage probabilities remain close to the nominal level of 0.95 for both the simple and adjusted estimators, with a slight over-coverage observed for certain quantiles, reaching approximately 0.97.
| Quantiles | Simple (CI Length) | Linear (CI Length) | XGBoost (CI Length) | Simple (Coverage) | Linear (Coverage) | XGBoost (Coverage) |
| --- | --- | --- | --- | --- | --- | --- |
| 0.1 | 0.0337 | 0.0332 | 0.0328 | 0.945 | 0.964 | 0.966 |
| 0.2 | 0.0456 | 0.0445 | 0.0438 | 0.964 | 0.967 | 0.970 |
| 0.3 | 0.0534 | 0.0517 | 0.0506 | 0.948 | 0.946 | 0.947 |
| 0.4 | 0.0586 | 0.0564 | 0.0548 | 0.954 | 0.955 | 0.956 |
| 0.5 | 0.0619 | 0.0591 | 0.0572 | 0.951 | 0.960 | 0.965 |
| 0.6 | 0.0637 | 0.0603 | 0.0581 | 0.963 | 0.963 | 0.964 |
| 0.7 | 0.0639 | 0.0598 | 0.0576 | 0.946 | 0.962 | 0.972 |
| 0.8 | 0.0627 | 0.0578 | 0.0558 | 0.949 | 0.958 | 0.975 |
| 0.9 | 0.0600 | 0.0541 | 0.0528 | 0.935 | 0.952 | 0.978 |

We believe these revisions will address your concerns about the novelty and positioning of our work in relation to existing literature. Thank you again for your thoughtful feedback.
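A coverage check of this kind can be mimicked in a toy Monte Carlo sketch (our own simplified setup under complete randomization with a known propensity, not the authors' DGP or adjusted estimator): build a Wald interval for the DTE at a fixed $y$ from the per-sample influence terms and count how often it covers the truth.

```python
# Toy Monte Carlo check of Wald CI coverage for a DTE at a fixed y.
# Simplified stand-in: complete randomization, known propensity, no adjustment.
import math
import numpy as np

def wald_ci(Y, W, y, pi=0.5, z=1.959964):
    """95% Wald CI for F1(y) - F0(y) from the IPW influence terms."""
    ind = (Y <= y).astype(float)
    psi = W / pi * ind - (1 - W) / (1 - pi) * ind
    est = psi.mean()
    se = psi.std(ddof=1) / math.sqrt(len(Y))
    return est - z * se, est + z * se

rng = np.random.default_rng(1)
phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
true_dte = phi(-0.5) - phi(0.5)          # F1(0.5) - F0(0.5) for N(1,1) vs N(0,1)
hits, reps, n = 0, 500, 400
for _ in range(reps):
    W = rng.integers(0, 2, size=n).astype(float)
    Y = rng.normal(loc=W)                # Y(1) ~ N(1,1), Y(0) ~ N(0,1)
    lo, hi = wald_ci(Y, W, y=0.5)
    hits += int(lo <= true_dte <= hi)
coverage = hits / reps                   # should be near the nominal 0.95
```

Since the estimator is a plain sample mean of i.i.d. influence terms here, the empirical coverage should sit close to 0.95, mirroring the pattern in the table above.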
Summary: The paper proposes a method to estimate distributional treatment effects in randomized experiments that leverages additional covariates, beyond stratum indicators, to improve precision. The authors propose a regression adjustment based on Neyman-orthogonal moment conditions, flexibly estimating the nuisance parameters with off-the-shelf machine learning techniques. Theoretically, the paper provides asymptotic guarantees for their estimator and shows that it achieves the semiparametric efficiency bound. Empirically, the paper shows through simulations that the ML adjustment reduces the variance of the estimated quantile treatment effects, and the practicality of the method is demonstrated in an application to the impacts of microfinance. Claims And Evidence: The paper makes three main claims: 1. That it provides a new method to use auxiliary covariates in CAR to estimate quantile/distributional treatment effects that is also applicable for discrete random variables. 2. That it provides an asymptotic regime under which the limit distribution of their estimator can be derived and the estimator achieves the semi-parametric efficiency bound. 3. That it demonstrates the effectiveness of the estimator in real settings. Overall, the paper is very well written and the theory and method are clear and convincing. However, the theory and method are very similar to Jiang et al. 2023 (JOE). The authors recognize this and suggest that their advantage lies in that their method is applicable to discrete random variables. However, the paper does not highlight this in either the theoretical section or the simulations. It would be beneficial to flesh out the differences with Jiang et al. 2023 and provide an example where their method works but Jiang et al.'s does not, along with a comparison in the simulation section. Methods And Evaluation Criteria: The proposed method and evaluation criteria are sensible.
Theoretical Claims: The theoretical derivations and statements are polished and appear correct, but the setting, proofs, and theoretical statements all seem to rely on Jiang et al. 2023. It would be beneficial to discuss the differences more and where the assumptions of each paper differ. For example, Assumption 5 (ii) etc. Experimental Designs Or Analyses: It would be good to have a simulation design in which the covariates are discrete and a comparison to other methods (i.e. Jiang et al. 2023 if applicable; if not, say why). Also, it would be good to comment in the simulation design on whether we expect a large or small RMSE reduction, given that the ground truth is known. It might be more compelling to include Figure D.1 in the main body given that seeing the QTE is the goal of the paper. Supplementary Material: I looked at the supplementary material. Relation To Broader Scientific Literature: The key contribution is to provide a new method for QTE/DTE for CAR with regression adjustments to improve the variance, one that also works with discrete random variables. This appears to be a useful addition to the literature, but the novelty relative to Jiang et al. 2023 should be better framed. Essential References Not Discussed: The literature review is good. More comparison to Jiang et al. 2023 would be beneficial. Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: The paper is very well written and a pleasure to read! I think it is very polished and convincing; my only caveat is its novelty with respect to Jiang et al. 2023. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your thoughtful and constructive feedback on our paper. Your positive assessment regarding clarity and theoretical development is encouraging. Following your comments, we will highlight three key distinctions from Jiang et al. (2023) in the revision: 1. **Discrete Random Variables**: Our method is applicable to any type of outcome variable, including discrete or mixed-type outcome variables that are common in real-world applications. In the revision, we will add a simulation design featuring discrete outcomes to demonstrate this advantage. 2. **Computational Benefits**: Our approach offers significant computational advantages that were not emphasized enough in the current version. Jiang et al.'s algorithm minimizes a non-differentiable objective function through multi-step optimization, and their last step potentially involves a grid search within the estimated interval. Our approach is much more straightforward in practice. Our simulation study suggests that this leads to substantial efficiency gains without sacrificing accuracy. 3. **Relaxed Assumptions**: Our framework operates under less restrictive assumptions. In particular, our approach relaxes several continuity requirements that are present in Jiang et al. We will elaborate on these differences in Assumption 5(ii) and other relevant theoretical sections to clarify how our method maintains asymptotic guarantees under broader conditions. **Addressing Specific Recommendations** - We will revise Section 2 to carefully explain distinctions from Jiang et al. (2023). - We will add a new simulation with discrete outcomes and discrete covariates, and a direct comparison to Jiang et al. where applicable. First, we consider the same DGP setup as before but with a discrete outcome variable following a Poisson distribution.
The conditional mean is given by $E[Y_i | X_i, W_i, Z_i] = 0.2 \left| b(X_i) + c(X_i)W_i + \gamma Z_i + u_i \right|.$ The table below presents the RMSE reduction (%) of our proposed DTE estimators relative to unadjusted estimators with n=1000. Both linear and logistic regression achieve reductions ranging from -1% to 6%. Under this DGP, computing QTE is not meaningful, as most quantiles equal zero due to a large mass point at zero.

| Location | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Execution time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Proposed Linear (DTE) | 1.29 | 2.10 | 1.89 | 0.08 | -1.33 | -0.28 | -0.48 | -0.61 | 0.28 | -0.22 | 0.0120s (SD 0.00632) |
| Proposed Logistic (DTE) | 2.49 | 5.81 | 4.59 | 2.38 | 0.71 | -0.45 | -0.44 | -0.13 | 0.25 | -0.28 | 0.0246s (SD 0.0144) |

Next, we consider a setting with discrete covariates. Under the same DGP setup, we sample X independently from Uniform(−5, 5) and round each value to the nearest integer. In this case, the DTE estimators achieve RMSE reductions ranging from 0.8% to 6.1% for linear regression and from 0.9% to 12.6% for logistic regression. However, the QTE estimator fails to yield reductions across most quantiles due to the discrete nature of the outcome variable. In terms of computational efficiency, the DTE estimators require 0.01s for linear regression and 0.04s for logistic regression, whereas the QTE estimators take 0.14s and 0.22s, respectively. This implies that our method is approximately 10 times faster for linear regression and 6 times faster for logistic regression.
| Quantiles | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | Execution time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Jiang Linear (QTE) | -1.97 | -2.20 | -2.59 | -4.93 | -2.69 | -4.95 | -5.19 | 0.08 | 0.38 | 0.1446s (SD 0.0111) |
| Jiang Logistic (QTE) | -123.03 | -313.59 | -381.79 | -251.43 | -22.60 | -206.06 | -454.47 | -171.93 | -115.41 | 0.2206s (SD 0.0164) |
| Proposed Linear (DTE) | 5.62 | 5.36 | 2.10 | 0.86 | 2.67 | 2.41 | 5.67 | 3.75 | 6.10 | 0.0153s (SD 0.0067) |
| Proposed Logistic (DTE) | 2.38 | 3.12 | 0.97 | 0.51 | 2.19 | 3.28 | 7.05 | 8.97 | 12.62 | 0.0386s (SD 0.0106) |

- Figure D.1 will be moved to the main body to better illustrate the quantile treatment effects, while emphasizing that our main focus is on DTE or PTE rather than QTE.
- We will enhance our simulation discussion with expected RMSE reduction context given the known ground truth, when possible. In most cases, however, we have chosen relatively complex processes where theoretical expectations may not be available.

These revisions will clarify the novelty and practical advantages of our approach while maintaining the strong theoretical foundation you recognized in your review. We appreciate your comments and would be happy to elaborate further on any questions or concerns you might have. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I think the key point for the paper to be accepted is whether it gives a theoretical advantage over Jiang et al. 2023. I appreciate the discussion and additional simulations. If the other referees agree that the change in Assumption 5.1 is reasonable in allowing discrete covariates, I will raise my score to 4. --- Reply to Comment 1.1.1: Comment: Thank you very much for your feedback. We are pleased to highlight that both Reviewer qJAh and Reviewer sqwn, who previously expressed concerns regarding Assumption 5.1, have subsequently raised their scores.
We greatly appreciate your comments, which have helped us improve the clarity and strength of the paper.
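The discrete-outcome point made in this exchange (low quantiles degenerate at a mass point while the DTE stays informative) can be illustrated with a toy Poisson simulation; the rates below are hypothetical stand-ins, not the rebuttal's DGP with its $b$, $c$, $\gamma$ components.

```python
# Toy illustration: with a Poisson outcome, low quantiles collapse onto the
# mass point at zero (so QTEs are coarse), while F1(y) - F0(y) stays informative.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
W = rng.integers(0, 2, size=n)
lam = 0.5 + 1.0 * W                      # hypothetical Poisson rates by arm
Y = rng.poisson(lam)

q_ctrl = np.quantile(Y[W == 0], [0.1, 0.2, 0.3])   # all swallowed by the zero mass
q_treat = np.quantile(Y[W == 1], [0.1, 0.2, 0.3])
dte = [(Y[W == 1] <= y).mean() - (Y[W == 0] <= y).mean() for y in range(5)]
# dte[y] estimates F1(y) - F0(y) at each integer location y
```

Here P(Poisson(0.5) = 0) is about 0.61, so the 0.1 to 0.3 control quantiles are all zero, whereas the DTE varies location by location.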
Summary: The paper develops a regression‐adjusted estimator for distributional treatment effects (DTEs) under covariate‐adaptive randomization (CAR). It presents (i) a derivation of the estimator’s limit distribution under CAR, (ii) a semiparametric efficiency bound result, and (iii) empirical/simulation demonstrations (including a microcredit dataset). The main claim is that incorporating extra covariates (beyond basic stratum indicators) can reduce variance in estimating how an intervention shifts the entire outcome distribution. ## update after rebuttal The authors have addressed some of my concerns regarding discrete outcomes and finite sample performance. As a result, I have raised my score from **2** (Weak Reject) to **3** (Weak Accept). I still find the contribution somewhat incremental, and believe the work could be further strengthened with clearer theoretical development and a discussion of computational limitations. Thus, I maintain a borderline positive score to be considered alongside the evaluations of the other reviewers. Claims And Evidence: * Claims: * The proposed estimator is unbiased and consistent for the DTE. * Adjusting for covariates improves precision under CAR. * The proposed estimator attains the semiparametric efficiency bound. * Evidence: * They provide formal asymptotic proofs for consistency and efficiency. * Simulation and one real-data example give some empirical support. * The real-data example has a small sample, so results there are less conclusive. Overall, the claims seem well supported. However, the paper should have a more detailed discussion (and theoretical results) of performance in finite samples, as is common with doubly robust/Neyman‐orthogonal style estimators. In addition, although the introduction asserts that the method can handle discrete outcomes, Assumption 5.1 requires $\mu_w(y, S, X)$ to be continuous in $y$, which does not apply for discrete outcomes.
Thus, there appears to be a mismatch between the paper’s stated applicability and the continuity assumption in its theoretical results. Methods And Evaluation Criteria: * The proposed method logically follows from doubly robust/Neyman‐orthogonal style estimators, adapted to CAR for DTEs. * Using standard approaches (like cross‐fitting and machine‐learning regression) is sensible, although carrying it out “for every $y\in \mathcal{Y}$” can be computationally expensive for continuous $y$ (Algorithm 1 in the paper). The authors should consider [1] or similar, since that method may be more computationally tractable for handling continuous $y$ (although [1] applies only to quantile effects). [1] Kallus, N., Mao, X., & Uehara, M. (2019). Localized debiased machine learning: Efficient inference on quantile treatment effects and beyond. arXiv preprint arXiv:1912.12945. Theoretical Claims: * The authors prove asymptotic normality and derive a semiparametric efficiency bound. I skimmed the proofs and they appear sound, but Assumption 5.1 (especially (i)) might not be standard in the semi-parametric efficiency literature and may require more discussion. * They do not provide much discussion of finite‐sample guarantees besides simulations. Experimental Designs Or Analyses: * The simulation design is reasonably standard for demonstrating coverage and bias. However, the gains in RMSE are moderate, so real‐world improvement may be small. * Since the synthetic experiments are effectively looking at quantiles, the authors should compare with Jiang et al. 2023 as a baseline. * The real microcredit experiment is interesting but limited by sample size, reducing the credibility of the precision‐gain claims. Supplementary Material: I reviewed the proofs of the theorems in the appendix, as well as the additional synthetic experimental results for increasing sample sizes.
Relation To Broader Scientific Literature:
* This work extends prior analyses of regression adjustment under CAR from average to distributional effects (cf. Rafi 2023, Jiang et al. 2023).
* The closest relevant approach is Jiang et al. (2023), which studies quantile treatment effects under CAR.
* A mention of specialized methods for quantile treatment effect estimation (e.g., Kallus et al., 2019) would be important, especially since their approach may be more computationally tractable than running a separate regression for each $y$.

Essential References Not Discussed: The paper might profit from a deeper discussion of Kallus, Mao, & Uehara (2019) regarding more efficient approaches to quantile (and possibly distributional) estimation.

Other Strengths And Weaknesses: To summarize from the scattered sections above:
* Strengths
  * The paper applies doubly-robust methodology to distributional treatment effect (DTE) estimation under covariate-adaptive randomization (CAR).
  * It gives a rigorous semiparametric efficiency analysis, aligning with modern causal-inference techniques.
  * The method is illustrated both in simulations and a real-world microcredit dataset.
* Weaknesses
  * Some incremental feel: the main novelty is "standard DR logic" extended from average effects to distributional effects under CAR.
  * Computational impracticalities for "every $y$" are not fully addressed.
  * Gains in finite samples (especially in the real dataset) are small, so the practical benefit is unclear. Also, Jiang et al. (2023) should be added as a benchmark for the synthetic experiments.
  * The paper claims the ability to handle discrete outcomes; however, Assumption 5.1 requires $\mu_w(y, S, X)$ to be continuous in $y$, which does not align with discrete $Y$ distributions. This mismatch undermines the stated applicability and advantage over, for instance, Jiang et al. (2023).

Other Comments Or Suggestions: **General comment:** The paper is neither here nor there.
It needs more theoretical results for finite samples to justify the robustness/orthogonality results, it needs an alternative method for estimating with continuous $Y$ (otherwise, one can just use the quantile paper of Jiang et al. (2023)), and it feels incremental in the sense of “just adding DR/Neyman-orthogonal logic” to existing estimators. Furthermore, I don't think the current approach properly handles discrete $Y$. Thus, I lean towards rejection, but I am open to reconsidering my rating. Questions For Authors: See concerns listed in the "Weaknesses" section above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your detailed review of our manuscript with constructive criticism, which has helped us identify important areas for improvement.

**Discrete Outcomes and Assumption 5.1**

You raised an important point regarding our claim about handling discrete outcomes versus Assumption 5.1 requiring continuity. This was imprecisely stated in our manuscript. To clarify: our method allows for discrete outcomes. We will revise Assumption 5.1 to explicitly accommodate mass points in the distribution while maintaining asymptotic guarantees. The continuity assumption was unnecessarily restrictive in our presentation. More precisely, in order to prove stochastic equicontinuity for continuous or finite discrete outcomes, we need $\sup_{f \in \mathcal{F}}\mathbb{E}f^2 \leq c\epsilon$ for some constant $c>0$, where $\mathcal{F} = \\{ \phi_w(y_2,s,Y_i^s(w),X_i^s)- \phi_w(y_1,s,Y_i^s(w),X_i^s): y_1,y_2 \in \mathcal Y, y_1 < y_2 < y_1+\epsilon \\}$ for some sufficiently small $\epsilon >0$. If a discrete outcome has infinitely many points in its support, one can instead impose the following assumption: there exist constants $C_1, C_2, \alpha>0$ such that $\sup_{f \in \mathcal{F}}\mathbb{E}f^2 \leq C_1 \epsilon+ C_2e^{-\alpha/\epsilon}$. Also, we will add a discrete outcome simulation to demonstrate this capability, which is an advantage over some existing methods. See the response to Reviewer G1Ux for simulation results with a discrete outcome.

**Computational Considerations**

We agree that computational complexity deserves more attention.
1. Since the outcome variable of interest is one-dimensional, our algorithm can handle most problems within a reasonable timeframe. Indeed, our algorithm is more straightforward and faster than the ones in the papers you kindly raised.
2.
Our approach offers computational advantages over quantile-based methods, because it does not require minimizing a non-differentiable objective function through multi-step optimization and also does not rely on a sorting algorithm.
3. We reviewed the algorithm of Kallus et al. (2019) as you suggested, and we will add some discussion of the algorithmic differences as follows: Kallus et al. (2019) address problems in which nuisance parameters depend on the target parameter itself, as seen in cases like the quantile treatment effect (QTE) and local QTE. In contrast, our estimation of the distributional treatment effect (DTE) involves nuisance parameters that correspond to conditional means, which can be effectively estimated using machine learning algorithms.

**Contributions**

While our approach extends doubly-robust methodology to DTEs under CAR, this extension provides several valuable contributions:
1. It addresses the specific challenge of CAR designs, which are increasingly common in practice but require distinct statistical treatment.
2. Our framework unifies treatment effect estimation across outcome types under a single methodology with less restrictive assumptions.

In our revision, we will further elaborate on these contributions and highlight their practical significance.

**Finite Sample Performance**

We acknowledge your concern about finite sample performance:
1. We will add theoretical results on finite sample performance in our revision. As shown in our paper, the proposed estimator is unbiased in finite samples. Additionally, we can demonstrate that the variance of our estimator in finite samples is smaller than that of the standard estimator up to a deviation of order $O(n^{-1})$. If you are instead asking about non-asymptotic bounds, we would appreciate it if you could kindly share the particular papers you have in mind. We are not aware of such papers and will consider this direction for future work.
2.
The real-world gains in the microcredit example will be better contextualized by benchmarking against expected improvements based on sample size. This is typically the setting in which regression adjustment is needed in practice: researchers want to reduce variance without any additional cost, and our results highlight that our method, combined with a proper experimental design, can significantly reduce the variance of the treatment effect estimator.

**Additional Comparisons**

Following your recommendations, we will:
1. Add Jiang et al. (2023) as a benchmark in our synthetic experiments — please see the responses to Reviewer sqwn and Reviewer G1Ux.
2. Include a discussion of Kallus et al. (2019).
3. Provide a clearer comparative analysis of computational versus statistical efficiency.

We believe addressing these points will substantially strengthen our paper. We appreciate your thorough review and feedback.

---

Rebuttal Comment 1.1: Comment: Dear authors, Thank you for addressing my concerns. I have raised my score to reflect this.
IO-LVM: Inverse Optimization Latent Variable Models with Graph-based Planning Applications
Reject
Summary: This paper introduces IO-LVM (Inverse Optimization Latent Variable Models), a method that learns latent representations of constrained optimization problem (COP) costs based on observed solutions, with applications to graph-based planning problems. The authors propose a new variant of the VAE training loss so that the cost vector can be inferred and serve as the input of the solver. They conduct experiments on graph data in different scenarios.

Claims And Evidence: From the experimental results, the method works well in path planning and other tasks. I really like the analysis of the latent space in this work. However, in Table 1, I feel the improvement of the proposed method is marginal compared to the vanilla VAE baseline. Also, I think the authors should add more baselines, and I'm wondering how IO-LVM performs compared to RL-based methods on graph data. Can the authors generalize the method to other modalities? Since the transformation from z to y is very flexible, it should be easy to demonstrate it on other datasets besides graph optimization. Also, the Hamiltonian Cycles results are somewhat saturated, so it is hard to evaluate the potential of the method.

Methods And Evaluation Criteria: The idea of this method really makes sense to me. But again, I feel the authors could provide more baseline methods. Also, I'm curious how the VAE baseline is trained here. Does it also include a solver, and how is y inferred in this case? For the optimization method, people may use RL-based models. How does IO-LVM compare to RL methods?

Theoretical Claims: There's no theoretical issue in this paper.

Experimental Designs Or Analyses: The experimental designs make sense to me. It would be great if the authors could provide more complex datasets/modalities and stronger baselines in this work.
Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: The presentation of the paper is clear and easy to read. The training strategy has novelty. It should be scaled up and expanded to other domains. However, the experimental results are not strong and convincing enough. Other Comments Or Suggestions: N/A Questions For Authors: See the questions above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: The main points from the reviewer are:
* Q1) How the baseline VAE is trained to output y
* Q2) Marginal improvement only compared to VAEs (Table 1)
* Q3) RL-based and other baselines (also to reviewer 2TrQ)
* Q4) Problems other than graphs

* Answer to Q1 and Q2. This is a key question. Actually, the VAEs do not output Y; they output X directly (i.e., the paths). There is no way for VAEs to output Y because we do not have training data for Y. Our method, instead, first outputs Y (even without labels for it), and then inputs Y to the solver to finally reconstruct X (i.e., the paths), assuring its feasibility with respect to the solver/COP. For this reason, while VAEs are not capable of guaranteeing the reconstruction of a feasible path, our method always reconstructs a feasible path, even if it does not reconstruct the path correctly. Please see the additional illustration: https://docs.google.com/document/d/e/2PACX-1vTI3F6-L4XLqFv9NWiE9LriBkZ-CpIAhvtd9XGDyQr59PAQBv-IG_z8-EaqIQlBXlZwxqeCJNgtqjxA/pub

When the reviewer asks about the result of Table 1, at first it seems like a small improvement. However, we were only reporting edge-level reconstruction. In many cases, VAEs might capture, e.g., 95% of the edges correctly, but when you put those edges together, they do not form a Hamiltonian cycle. To complement our answer, we provide three more tables (find below). The first table shows the % of Hamiltonian cycles that are fully correctly reconstructed (i.e., compare the full vector and mark it as correct if it matches perfectly). Note that this is a much harder task than the % of edges correctly reconstructed. The second table shows the percentage of outputs that are Hamiltonian cycles (even if they are not correctly reconstructed). The third table shows the reconstruction results given different training data sizes (i.e., how much data is needed so that VAE versus IO-LVM can reconstruct the correct Hamiltonian cycle?)
**TABLE 1 (IO-LVM versus VAE: How many full Hamiltonian cycles are correctly reconstructed? (Avg and std over 5 runs))**

| Methods | Latent Dims | burma14 (3 dims) | bayg29 (3 dims) | burma14 (50 dims) | bayg29 (50 dims) |
| -------- | ------- | ------- | ------- | ------- | ------- |
| VAE | 2 | $55.2 \pm 1.1$ \% | $15.4 \pm 1.3$ \% | $9.8 \pm 1.3$ \% | $0.2 \pm 0.1$ \% |
| IO-LVM | 2 | $78.4 \pm 0.6$ \% | $37.3 \pm 7.3$ \% | $29.6 \pm 2.6$ \% | $1.5 \pm 0.2$ \% |
| VAE | 10 | $82.7 \pm 1.3$ \% | $46.0 \pm 1.6$ \% | $45.3 \pm 1.9$ \% | $3.4 \pm 0.5$ \% |
| IO-LVM | 10 | $91.0 \pm 0.7$ \% | $77.1 \pm 1.5$ \% | $78.3 \pm 1.1$ \% | $31.9 \pm 3.5$ \% |

**TABLE 2 (IO-LVM versus VAE: What is the percentage of feasible outputs (even if they are not correctly reconstructed)? Note that all IO-LVM outputs are feasible (100%) due to the solver in the loop, enhancing the results in the first table. (Avg and std over 5 runs))**

| Methods | Latent Dims | burma14 (3 dims) | bayg29 (3 dims) | burma14 (50 dims) | bayg29 (50 dims) |
| -------- | ------- | ------- | ------- | ------- | ------- |
| VAE | 2 | $63.2 \pm 1.6$ \% | $21.3 \pm 3.1$ \% | $18.3 \pm 2.5$ \% | $0.5 \pm 0.3$ \% |
| IO-LVM | 2 | $100 \pm 0.0$ \% | $100 \pm 0.0$ \% | $100 \pm 0.0$ \% | $100 \pm 0.0$ \% |
| VAE | 10 | $85.0 \pm 1.4$ \% | $50.1 \pm 1.6$ \% | $47.5 \pm 2.2$ \% | $4.5 \pm 0.7$ \% |
| IO-LVM | 10 | $100 \pm 0.0$ \% | $100 \pm 0.0$ \% | $100 \pm 0.0$ \% | $100 \pm 0.0$ \% |

**TABLE 3 (How much training data does IO-LVM versus VAEs need to reconstruct the Hamiltonian cycles correctly? Here we fix the seed of the model.)**

| Methods | Latent Dims | Train size = 500 | = 1000 | = 1600 | = 2400 | = 4000 | = 7000 | = 10000 |
| -------- | ------- | -------- | ------- | -------- | ------- | ------- | ------- | ------- |
| VAE | 10 | $0.0$ \% | $0.8$ \% | $3.3$ \% | $3.9$ \% | $6.2$ \% | $11.5$ \% | $11.8$ \% |
| IO-LVM | 10 | $4.8$ \% | $20.8$ \% | $30.5$ \% | $33.0$ \% | $42.5$ \% | $53.5$ \% | $58.3$ \% |

* Answer to Q3. We don't have space left here, so we decided to answer questions regarding related work in the rebuttal for reviewer sTUc (Q3).

* Answer to Q4. Yes! The reviewer is correct in observing that the method could be scaled to problems other than graph-based problems, also because "the transformation from z to y is very flexible". We decided to provide experiments in graph-based applications because it helps us to dive deep into the interpretability/qualitative world. In other words, we aimed to write a paper that relies not only on quantitative results on reconstructions and predictions (as we believe this is saturated in many ML subfields), but, most importantly, on illustrations of how latent dimensions, inputs, and reconstructions are connected, and how non-observed factors are also captured in the latent space, which would be less effective to show with applications other than graphs due to their nice visual representations.
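The feasibility guarantee invoked in Answer Q1 (the solver in the loop always returns a valid path, whatever cost vector the decoder emits) can be sketched with a plain Dijkstra implementation. This is my own illustrative sketch, not the paper's code; the decoded costs are stood in by arbitrary nonnegative numbers:

```python
import heapq

def dijkstra(graph, costs, src, dst):
    """graph: {node: [neighbors]}; costs: {(u, v): decoded edge cost}.
    For ANY nonnegative cost assignment this returns a valid src->dst
    path, which is why a solver-in-the-loop decoder can never emit an
    infeasible reconstruction."""
    dist, prev, seen = {src: 0.0}, {}, set()
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v in graph.get(u, []):
            nd = d + costs[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]                      # walk predecessors back to the source
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]
```

A plain VAE decoding edge probabilities independently has no such guarantee: thresholded edges need not form a connected path, which is the gap Table 2 above quantifies.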
Summary: This paper develops a latent variable model for constrained optimization problems like path planning in graphs. The key insight is to modify the traditional Variational Auto-Encoder (VAE) with two distinct stages of reconstruction - from the latent space to an unconstrained space, and from the unconstrained space to the constrained space. Experiments through path planning simulations reveal that the proposed approach (IO-LVM) captures the dataset's path distributions and enables interpretable predictions via latent space compression.

Claims And Evidence: The paper makes several claims:
1) IO-LVM naturally constructs a disentangled, and sometimes multimodal, latent space, allowing for the reconstruction of observed path distributions without making assumptions about inferred paths.
2) IO-LVM learns interpretable latent representations.
3) IO-LVM allows for predicting how different agents might navigate between unseen source and target nodes, providing a flexible framework for path inference.

Claims 1) and 2) are not well substantiated - there are no results showing that the learned latent representations are disentangled / interpretable compared to standard VAEs. Claim 3) is substantiated through the path planning simulation experiments.

Methods And Evaluation Criteria: The scope and experimental evaluations of the paper are fairly limited: the paper seems to target general constrained planning problems and motivates the model based on this, but the experiments only focus on simple 2D simulations of path planning in graphs.

Theoretical Claims: There are no proofs or theoretical claims.

Experimental Designs Or Analyses: The experiments seem sound and valid, but as mentioned in the previous comment - the scope of the experiments is fairly limited.
It is unclear in which real-world constrained optimization problems the proposed approach might be preferred over more standard VAE-based solutions and other constrained optimization solutions that do not model the problem as a latent variable model. Additionally, the experiments do not really provide insights into the interpretability / disentanglement of the latents, which is one of the key claims of the paper.

Supplementary Material: Yes. I looked over the supplementary files - they contain code for the experiments. I did not try to run the code.

Relation To Broader Scientific Literature: The paper broadly relates to literature on constrained optimization, latent variable models, and path planning problems in graphs.

Essential References Not Discussed: None to my knowledge.

Other Strengths And Weaknesses:
Strengths
- the paper proposes a novel modification to a VAE by changing the reconstruction to consist of two stages.
- the paper is well-motivated from the perspective of a latent variable model, i.e., to promote learning interpretable and compressed latent representations such that the data can be faithfully reconstructed.
- the paper has many experiments on simulated 2D graphs that demonstrate an implementation of the proposed model and its benefits compared to other latent variable models.

Weaknesses
- the claims of the paper around interpretability are not well-substantiated
- the experiments are on toy settings of 2D graphs and thus it is unclear how widely applicable the benefits from the proposed approach are
- given the scope of the experiments, it is unclear how impactful the proposed LVM will be compared to current models in adoption for real-world applications like robotics and path planning.

Other Comments Or Suggestions: The paper is not well-written, with confusing text and a lot of jargon. For example, the abstract starts with "(Variational) Autoencoders struggle to capture constraints to decode structured outputs."
- it is unclear what is meant by structured outputs here. I would ask the authors to revise the writing of the paper to make it more accessible and crisp, and to clearly isolate the claims and contributions.

Questions For Authors: Please refer to the weaknesses above and address them to the extent possible.
- the claims of the paper around interpretability are not well-substantiated
- the experiments are on toy settings of 2D graphs and thus it is unclear how widely applicable the benefits from the proposed approach are
- given the scope of the experiments, it is unclear how impactful the proposed LVM will be compared to current models in adoption for real-world applications like robotics and path planning.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: The main points from the reviewer are:
* Q1) Results showing that the learned latent representations are disentangled / interpretable compared to standard VAEs
* Q2) Concern about the graphs in the experiments being only in 2D
* Q3) Writing
* Q4) In what type of applications the proposed approach might be preferred over more standard VAE-based ones

* Answer to Q1. We thank the reviewer for the question. We want to highlight that we do not claim that our latent representation is "more interpretable" than that of VAEs; we claim that we maintain the disentanglement/interpretability of VAEs, but increase the reconstruction power by guaranteeing constraint feasibility through a suitable solver, such as Dijkstra or a TSP solver, in a fully differentiable framework.

How does our LVM provide interpretability? Although it is hard to "quantify" interpretability, the book [Probabilistic Machine Learning, Kevin Murphy] states that "Interpretability allows human expert to inspect a model ... understanding why they work to gain scientific and operational insight". In our paper, for example, Figures 11 and 12 provide insight into the latent space (e.g., how a walk through the latent space impacts the encoding and reconstruction). In Figure 3, we highlight the correlation of an unseen feature with the latent values, giving an expert the possibility to inspect whether the model is capturing the hidden context correctly.

Why does our LVM also have the disentanglement property of VAEs, and how do we show that? According to the paper [Disentangling Disentanglement in Variational Autoencoders, 2019], disentanglement makes each latent dimension control one interpretable attribute. This is observed, for example, in Figures 2 and 3 in our paper. More specifically, Figure 3 highlights that the latent dimension on the vertical axis is correlated with the "ship width" feature.
Note again that the "ship width" was not even observed in the data, but was an important underlying factor causing ships to follow different paths. Compared to a normal VAE, we additionally know that the latent space needs to capture the costs (solver input) and not the shape of the solutions (solver output). Please see an extra illustration that we provide in the following link: https://docs.google.com/document/d/e/2PACX-1vTI3F6-L4XLqFv9NWiE9LriBkZ-CpIAhvtd9XGDyQr59PAQBv-IG_z8-EaqIQlBXlZwxqeCJNgtqjxA/pub

* Answer to Q2. In our framework, there is no restriction regarding the graph being in 2D. Note, for example, that in the start/target path problem, the Dijkstra solver is used, which is a dimension-agnostic algorithm. In the Hamiltonian cycle problem, note that a TSP solver is used without leveraging 2D heuristics. In Figure 5, for example, considering Sample 2 of "Bayg29", it is clear that there are edges crossing in the solution. It would not be possible to output such a solution with a solver with 2D restrictions, since in 2D the optimal TSP solution has no crossing edges. We do not leverage any benefit of the graph being in 2D other than for visualization purposes. Furthermore, we emphasise that if we are dealing with an NP-hard problem such as the TSP, it is natural that the problem size is smaller. Previous works published in important venues, such as [Learning with Differentiable Perturbed Optimizers, 2020], [Differentiation of Blackbox Combinatorial Solvers, 2019] and [DataSP: A Differential All-to-All Shortest Path Algorithm, 2024], deal with graph problems of approximately the same (or smaller) size than ours.

* Answer to Q3. Since other reviewers did not point out writing issues, and the reviewer provided a specific example, we answer this example as follows. The term "structured output" can be found many times, for example, in the book [BakIr, G. (Ed.). (2007). Predicting structured data].
The term "structured output" generally refers to a model or function yielding a structured vector whose elements have interdependencies, such as a sequence, a path, or a decision that should adhere to specific constraints. We are adding this reference in the introduction of the paper to properly relate the term to the literature.

* Answer to Q4. We have added extra results and analysis to further clarify the power of IO-LVM in comparison to standard VAEs. Please see the new tables in the answer to reviewer Xit2. There we show three extra results: i) When we compare the full output/reconstruction instead of comparing edge-level reconstruction, IO-LVM is much better than VAEs. ii) We show that, generally, when VAEs make reconstruction mistakes, the mistakes are non-structured (e.g., the reconstruction is not a Hamiltonian cycle), which is not true for IO-LVM: even when making reconstruction mistakes, it guarantees COP feasibility. iii) We show that IO-LVM needs much less training data than VAEs to give reasonable reconstruction results.

---

Rebuttal Comment 1.1: Comment: thank you for the clarifications! Since the authors answered my questions around the claims, and provided more results on other domains, I am increasing the score. Although I am still not convinced by the results (the new results seem to again be on simple 2D settings, which may not translate to the diversity of complex real-world scenarios across problems in planning, robotics, etc. where standard VAEs have proven useful) - as such I will not fight for acceptance.

---

Reply to Comment 1.1.1: Comment: Thank you again for your feedback and for engaging with the rebuttal. It is important to emphasise that our framework does not depend on the graph's dimensionality. Our method inherently operates on graph-based structures, treating inputs as graph entities composed of nodes and edges. It relies solely on the topology of the graph.
For instance, in our Hamiltonian cycle experiments, the input graph is fully connected, which represents a dense, high-complexity scenario regardless of any geometric embedding. If we are being very precise, all graphs built from latitude and longitude points are actually in 3D due to the earth's curved surface. With all respect, it is hard to agree that our paper addresses simpler problem settings when related works combining deep learning and combinatorial solvers apply to graph problems of smaller or similar size. See the examples below from well-established papers in the area:
* For the start/target planning problem, in the work [Differentiation of blackbox combinatorial solvers (2020)], the authors deal with graphs between 144 and 900 nodes (see table 2 in the mentioned paper) and with a maximum of around 2000 edges, while we deal with graphs between 700 and 2513 nodes, reaching up to 8924 edges. For the Hamiltonian cycle problem, they vary the number of nodes from 5 to 40 (see table 3 in the mentioned paper), while we vary from 14 to 29 (our method can also go up to 40 or more).
* For the start/target planning problem, in the work [Learning with Differentiable Perturbed Optimizers (2020)], the authors deal with graph problems of size 144 (see subsection 5.3 in the mentioned paper), while we deal with graphs between 700 and 2513 nodes.

Although we acknowledge that the scalability of VAEs is higher, they can't guarantee feasible outputs, which is the key point of our method (see Table 2 in the answer to reviewer Xit2).
Summary: The paper proposes a representation-learning-based method for generating feasible solutions for constrained optimization problems. Given a black-box solver and a dataset of constraints and their solutions, the proposed IO-LVM algorithm learns a latent representation model which is used to generate feasible solutions by sampling in the latent space. Experiments on path planning and Hamiltonian cycle problems are shown to demonstrate the generated results of IO-LVM.

Claims And Evidence: Since the reconstruction loss in the ELBO is non-differentiable, the paper proposes to replace it with the Fenchel-Young loss and derives an estimator for the corresponding gradients. The idea of using this loss makes intuitive sense. However, no theoretical evidence is given that minimizing this loss leads to a good latent representation. The efficacy of IO-LVM is mainly supported by the numerical experiments. For the latent representation, Figures 2-4 and some qualitative properties are provided to support that the latent space does learn something. For quantitative analysis, two kinds of tasks are considered. One is the capability to reconstruct the Hamiltonian cycles, and the other is the quality of the generated paths based on edge usage compared with the testing set.

Methods And Evaluation Criteria: The proposed IO-LVM method and the evaluation criteria mentioned in the previous section make sense for the problem. One key restriction of the proposed method is the requirement of a black-box solver, but little is discussed about how the solver would be available.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses: The experimental designs make sense, and analyses from multiple angles are provided.

Supplementary Material: I checked the details of the experimental tasks, and they are clear and make sense.
Relation To Broader Scientific Literature: The paper provides examples of applying IO-LVM to path planning and Hamiltonian cycles, and these two types of problems have applications in various scientific areas. Essential References Not Discussed: None. Other Strengths And Weaknesses: Using a latent representation learning approach to find feasible solution for constrained optimization seems to be a novel and reasonable idea. The numerical results show some good performance of the proposed IO-LVM. As mentioned that the requirement of the black-box solver is a key limitation. Given that IO-LVM has access to the solver, the comparisons with baselines could be a bit unfair as the baseline methods don't make use of the solver. Other Comments Or Suggestions: No additional comments. Questions For Authors: Can you provide more justification of the availability of the black-box solver? Are there baselines using the solver that can be compared in a fair manner? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: The main points from the reviewer are:
* Q1) Requirement of a black-box solver in the framework.
* Q2) No evidence supporting that minimizing the Fenchel-Young loss leads to a good latent representation.
* Q3) Baselines / related work.

* Answer to Q1. The requirement of a black-box solver in the learning process is present in many topics such as "decision-focused" learning, "predict-and-optimize", and some subtopics of imitation learning, e.g., [Task-based End-to-end Model Learning in Stochastic Optimization, 2017]; [Differentiation of Blackbox Combinatorial Solvers, 2019]; [Learning with Differentiable Perturbed Optimizers, 2020]. One of the key points in all of those papers, including ours, is to leverage years of knowledge in mathematical programming within deep learning methods. For example, Dijkstra efficiently solves the shortest path problem, so our method learns the cost inputs (edge costs) of Dijkstra, and we can be sure that the output is feasible with respect to the constraints of the problem. Our solution is then more general than other approaches, as we do not need to handle the constraints of a particular problem. Note that we are using the black-box solver to recover the costs that generated those observations. The only baselines we use that do not leverage the solver are the VAE / $\beta$-VAE. But since we are proposing a generative model here, we believe it is natural to compare to standard VAEs to show the benefit of constraining the output through the solver. Indeed, we have used the last above-mentioned work as our baseline for path distribution prediction (Table 2 in the paper), which also leverages a black-box solver in the loop. However, it is not possible to use the same baseline in the reconstruction experiment because it does not model a latent space.

* Answer to Q2. We understand the reviewer's concern regarding the justification for the Fenchel-Young loss leading to a good latent representation.
In the standard VAE literature, it is well known that the interplay between a reconstruction loss and a regularization term (typically the KL divergence between the posterior and prior distributions) promotes interpretability, smoothness, and disentanglement of the latent space. However, rigorous proofs of these properties often depend on restrictive and sometimes unrealistic assumptions. Other studies show empirically that Beta-VAE variants affect (sometimes in a good way) the interpretability and structure of latent representations by simply adjusting the hyperparameter Beta (the trade-off between reconstruction and regularization). However, providing a theoretical explanation of such effects remains challenging within deep learning. In our approach, we reiterate the need for both: a reconstruction loss and a regularization term. Given that COP solvers are inherently non-differentiable, we rely on gradient estimates through perturbed COP solutions provided by black-box solvers. According to [Learning with Differentiable Pert. Optimizers, 2020], the Fenchel-Young loss is convex with respect to the parameters $\theta$ and attains its minimum (zero) when the predicted structure matches the ground truth structure exactly. This motivates our choice of the Fenchel-Young loss as a suitable (and demonstrated to be effective) reconstruction loss. Please see: https://docs.google.com/document/d/e/2PACX-1vTI3F6-L4XLqFv9NWiE9LriBkZ-CpIAhvtd9XGDyQr59PAQBv-IG_z8-EaqIQlBXlZwxqeCJNgtqjxA/pub

* Answer to Q3. We use this space to answer questions from reviewers 2TrQ, sTUc and Xit2. We thank the reviewers for highlighting (inverse) RL-based methods and GFlowNet (2023). Although relevant, these methods require explicit encoding of hard constraints in the state transition logic, introducing manual modeling effort specific to each problem. In contrast, IO-LVM inherently includes constraints via the solver, enabling easy adaptation by simply replacing the solver.
The suggested [Scalable Equilibrium Sampling with Seq. Boltzm. Generators (2025)] employs probabilistic constraints through energy landscapes that may occasionally allow violations, requiring post-processing at inference. Our approach does not have this limitation due to the direct blackbox-solver integration. Traditional inverse RL methods (e.g., [MaxEnt inverse reinforcement learning, 2008] and related works) lack latent variable modeling, limiting them to a single recovered cost function. For example, Figures 2 and 3 in our paper illustrate distinct learned cost functions spread across the latent space, which would not be possible with more traditional inverse RL methods. We are updating our references accordingly and adding a dedicated paragraph in our Related Work section to explicitly highlight these differences between solver-based constraint guarantees (more flexible) and manually designed action spaces or the need for post-processing to guarantee constraints (less flexible). We thank the reviewers for providing constructive feedback in this regard.
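To make the Q2 discussion concrete, here is a minimal numeric sketch of learning costs through a blackbox argmin with a perturbed Fenchel-Young gradient in the style of Berthet et al. (2020). The one-hot `solver`, noise scale, and step size are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def solver(y):
    # Stand-in for a blackbox COP solver: argmin_x <y, x> over one-hot x.
    x = np.zeros_like(y)
    x[np.argmin(y)] = 1.0
    return x

def fy_grad(y, x_obs, sigma=0.5, n=500):
    # Monte Carlo gradient of the perturbed Fenchel-Young loss w.r.t. y:
    # grad = x_obs - E_eps[ solver(y + eps) ], which vanishes when the
    # expected perturbed solution matches the observed structure.
    x_eps = np.mean([solver(y + sigma * rng.standard_normal(y.shape))
                     for _ in range(n)], axis=0)
    return x_obs - x_eps

# Toy inverse problem: recover costs under which edge 2 is optimal.
x_obs = np.array([0.0, 0.0, 1.0, 0.0])
y = np.zeros(4)
for _ in range(200):
    y -= 0.1 * fy_grad(y, x_obs)
print(solver(y))  # the learned costs now reproduce the observed solution
```

Note that the solver is only ever called as a black box; gradients flow through the Monte Carlo average of perturbed solutions, never through the solver itself.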
Summary: The paper provides a method to learn representations of the underlying "cost" functions of trajectories in a structured domain. They achieve it by using a VAE-like method that uses a "Fenchel-Young" loss function instead of the reconstruction error. The representations are supposed to encode the cost that the input trajectory was optimizing for; given this representation, a classical solver solves for the optimal trajectory. The representations are learnt such that this reconstructed optimal trajectory should be close to the input trajectory. Claims And Evidence: I have a few questions about the loss functions; it could be possible that I am missing something: 1) How do you define the inner product between "y", an output of a neural network, and "x", a structured domain like a graph? 2) Assuming the inner product is defined somehow, in Equation 6, when estimating the gradient, why is \hat{x}_{\epsilon} treated as a constant with respect to \theta? 3) Alright, upon reading the experiment settings, it seems that in both problems (Path Planning and Hamiltonian Cycles) the domain of Y and X is the same. But what happens if it is not? Methods And Evaluation Criteria: I am not experienced in this area, but it seems like all the problems are sort of toy / small-scale problems by deep learning standards. Theoretical Claims: There weren't a lot of theoretical details in the paper about the Fenchel-Young loss; I do have a couple of clarifications which I have mentioned above. Experimental Designs Or Analyses: Yes, I checked both the qualitative and the quantitative results. The qualitative results make sense to me, especially the "unsupervised clustering" based on the true underlying cost. This is similar to unsupervised learning in CV, where images from the same classes have similar VAE representations, without access to labels. Hence these results are not very surprising to me.
While the problems seem to be toyish problems, I am not a researcher in this area, hence I am unable to quantify the importance of the problems / result numbers. Q) Why does the VAE only use a 10-dimensional representation space? Q) Why don't any of the numbers (in Table 1, for example) have error estimates? Q) Aren't there any other benchmarks? This seems similar to Boltzmann sampling with a non-differentiable (and costly) reward function. There is a large amount of literature that deals with this, even with structured spaces more complex than the ones mentioned in the paper. See for example: 1) GFlowNet Foundations. 2) Scalable Equilibrium Sampling with Sequential Boltzmann Generators (references in this paper). 3) Understanding reinforcement learning-based fine-tuning of diffusion models: A tutorial and review. 4) (Discrete) Diffusion samplers. I wonder why these are not covered either in the related work section or as baselines. Supplementary Material: I did not look closely at the supplementary material. Relation To Broader Scientific Literature: The paper is well motivated; the problem of doing constrained generation / search in structured spaces is important. But in the current state, I personally do not think the paper's findings contribute. Essential References Not Discussed: Mentioned above regarding Boltzmann samplers, diffusion samplers, GFlowNets, etc. Other Strengths And Weaknesses: Please see strengths and weaknesses above. Other Comments Or Suggestions: 1) I don't think error bars in Table 1, and the number of seeds in Tables 1 and 2, are included. Questions For Authors: I have mentioned my questions above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: The main points from the reviewer are: * Q1) Y space restriction and inner-product formulation. * Q2) Equation 6 (gradient estimation). * Q3) Limited-size problems compared to standard Deep Learning. * Q4) VAE has a low number of latent dimensions. * Q5) Related work and baselines. * Q6) Tables 1 and 2 (number of seeds and std). * Answer Q1. The reviewer has an important question about the Y space restriction. The restriction of the Y space in our proposed framework comes from the requirement that the Y space must align with the space of the COP cost vector. Hence, the reviewer's concern can be restated as: "Is it sufficiently expressive to consider only COP formulations with linear costs (inner products)?" The answer is yes. The way we formulate the problem, i.e., $\arg\min_x \langle y, x \rangle$ (see Eq. 5) subject to a general non-convex constraint, is highly expressive, as many combinatorial problems can be translated into it. According to [Korte, Bernhard H., et al. Combinatorial Optimization], “Virtually all combinatorial optimization problems can be formulated as integer programs…”, referring to integer linear programs in the book’s chapter. In fact, our formulation is even more general than integer linear programming (ILP) because we allow the constraints to be non-linear and non-convex. Therefore, it is a natural and sufficiently expressive choice to let Y have the same number of dimensions as X. We will address this by adding a short text in Section 3.2, between Equations 4 and 5, with the motivation above about the expressiveness of the linear-cost formulation. * Answer Q2. We thank the reviewer for this observation, as we missed the $\epsilon$ term in the previous equations (Eqs. 2 and 5), which led to the confusion in Eq. 6. To make sure the reviewers can follow the equations, here are the steps considering the correction: Eq. 2: $l_{\text{FY}}^{\epsilon}(y, x) = f(y, x) - f(y + \epsilon, \hat{x}_{\epsilon})$. Eq. 5:
$E_{q_{\phi}(z | x)} \left[ \langle y^{\theta}, x \rangle - \langle y^{\theta} + \epsilon, \hat{x}_{\epsilon}^{\theta} \rangle \right]$. Now Eq. 6 follows correctly from Eq. 5. The gradient of the loss w.r.t. $y$ in the first term is naturally $x$. For the second term it is $\hat{x}_{\epsilon}$, following Berthet et al. (2020) in [Learning with Differentiable Perturbed Optimizers], Definition 2.1. Therefore Eq. 6 is correct. To give an intuition of why this is true: by definition $\hat{x}_{\epsilon}=\arg\min_x \langle y + \epsilon, x \rangle$, which is the gradient of the perturbed cost function $\langle y + \epsilon, x \rangle$ at the minimum value, where $x = \hat{x}_{\epsilon}$. Since the steps are not obvious, we are adding them to the appendix, in addition to the corrections above in the main text. * Answer Q3. Related works published in important venues, such as [Learning with Differentiable Perturbed Optimizers, 2020], [Differentiation of Blackbox Combinatorial Solvers, 2019] and [DataSP: A Differential All-to-All Shortest Path Algorithm, 2024], tackle graph problems of comparable or smaller sizes than ours. There is a trade-off in using solvers within learning frameworks such as those mentioned above and ours. While these approaches guarantee the feasibility of COP solutions by leveraging the capabilities of blackbox solvers, they generally have lower scalability compared to standard deep learning architectures. * Answer Q4. This question is important to discuss, and we would like to address it using Figure 2 in our paper. You can see that the bottom graph contains many different paths. Although the variety of paths is high, they were generated using simple cost functions with noise (i.e., three cost functions + noise, illustrated with different colors). Only a few latent dimensions were enough to encode the cost information. We want to show that we are able to encode the underlying costs by inputting the paths (Inv.
Optimization), while VAEs would need too much data to learn the inverse mapping. Fixing the number of latent dimensions, IO-LVM needs less data to capture the structure. We added this new result in the rebuttal to reviewer Xit2 (please see Table 3 there). Please see the additional figure provided in: https://docs.google.com/document/d/e/2PACX-1vTI3F6-L4XLqFv9NWiE9LriBkZ-CpIAhvtd9XGDyQr59PAQBv-IG_z8-EaqIQlBXlZwxqeCJNgtqjxA/pub . * Answer Q5. Answered in reviewer sTUc's Q3 (no space left here). * Answer Q6. We use 5 seeds for training in our experiments; the new version is updated with std. All the results of IO-LVM on the test set are statistically better than the respective VAE. Also, we included two more tables to better highlight the outperformance of IO-LVM. Instead of reporting only the reconstruction as the % of matched edges, we are now also reporting the % of matches of the full output, i.e., checking whether the model output reconstructs the input perfectly, which is a much more difficult task. Please see Tables 1 and 2 in our rebuttal to reviewer Xit2, since there is no space left here. --- Rebuttal Comment 1.1: Comment: Q1) Thank you for answering this question; I acknowledge that I was not aware of this. Q2) I still do not understand, from your rebuttal, why $\hat{x}_{\epsilon}$ is treated as independent of $y$ or $\theta$, when in fact $\hat{x}_{\epsilon} = \arg\min E_{\epsilon} [ \langle y_{\theta} + \epsilon, x \rangle ]$. Does some mathematical trick allow you to do this? Q3) Thank you for answering this question; I acknowledge that I was not aware of this. Q4) While I agree with you that IO-LVM seems like it can make better use of the data using a low-dimensional latent space, VAEs might be able to learn better given a higher-dimensional latent space. I do not see any result with dimensions greater than 10 in the rebuttal to Xit2. More generally, I acknowledge that I do not know how informative it is to show data efficiency on these seemingly toy problems. Q6) Thank you for reporting this.
I will increase my score to weak accept, given the rebuttal to my reviews and other reviews, combined with the fact that I am not well versed in this general area of research. --- Reply to Comment 1.1.1: Comment: First, we would like to thank the reviewer for engaging with the rebuttal. The reviewer has two further requests for clarification that we address below: * Q2. Why $\hat{x}_{\epsilon}$ is independent of $y$ when taking the gradients. * Q4. VAEs could potentially be better with more dimensions. Yes, it is a mathematical trick. While a formal mathematical treatment can be found in [Learning with Fenchel-Young Losses, 2019] and [Learning with Differentiable Perturbed Optimizers, 2020], here we provide an intuitive explanation to illustrate why this is true with an example: . * Answer for Q2. **What do we want to show?** We aim to show that the partial derivative of the following expression: $\langle y^{\theta} + \epsilon, \hat{x}_{\epsilon}^{\theta} \rangle$ with respect to $y^{\theta}$ is simply $\hat{x}_{\epsilon}^{\theta}$. The derivative of $y^{\theta}$ with respect to $\theta$ itself is not relevant here, as this derivative is accounted for in a separate term by the neural network's gradient, as shown in Eq. 6 in the paper (i.e., $\partial{g_{\theta}}$..). Here, we focus exclusively on why the derivative with respect to the argument $y^{\theta}$ of the inner product is exactly $\hat{x}_{\epsilon}^{\theta}$. **Simplifying the notation** By using the change of variable $y' = y^{\theta} + \epsilon$, our original problem simplifies to showing that $\frac{\partial}{\partial y'} \langle y', \hat{x} \rangle$ is exactly $\hat{x}$ and doesn't depend on $y'$, where, as you already mentioned, $\hat{x} = \arg\min_{x \in \mathcal{X}} \langle y', x \rangle$. **An example for intuition** Let us fix $y' = [8, 3, 5, 9, 15]$, and suppose we have a linear function defined by $\langle y', x \rangle$.
In a minimization problem where we pick the smallest value in a feasible set (e.g., pick the path with the lowest cost), the result using our example is $\hat{x} = [0, 1, 0, 0, 0]$, because the second element is the smallest. This leads to the minimum value of the function being $\langle y', \hat{x} \rangle = 3$. **The partial derivative** Now we can verify that $\frac{\partial}{\partial y'} \langle y', \hat{x} \rangle$ is exactly $\hat{x}$ by slightly changing the elements of $y'$. If we slightly perturb the first element (8 → 8.001), the minimum value of $\langle y', \hat{x} \rangle$ remains 3, because it still chooses the second element. If we slightly perturb the second element (3 → 3.001), the minimum immediately changes from 3 to 3.001. Perturbing the other elements (5, 9, 15) does not affect the minimum, since they are not chosen. **What does it mean?** The value of $\langle y', \hat{x} \rangle$ changes only if we perturb the coordinate of $y'$ associated with the optimal solution (the chosen element). This change is linear due to the linearity of the inner product. Therefore, the derivative vector (the gradient) is exactly $[0,1,0,0,0]$, which is equal to $\hat{x}$. * Answer for Q4. We believe this question is essential to address and adds value to the paper. We go further and try to answer the following: how many latent dimensions, and how many samples in the data, does the VAE need to be able to capture the problem's feasibility? We decided to quickly run an additional experiment, only for the VAE, with latent dimension = 100 and considering two different training data sizes: 1000 samples and 10000 samples. Below you find the table of results, together with the results of Table 3 in the rebuttal for reviewer Xit2.
| Methods | Latent dims | Train size = 1000 | Train size = 10000 |
| -------- | ------- | ------- | ------- |
| VAE | 10 | 0.8 \% | 11.8 \% |
| VAE | 100 | 1.3 \% | 30.2 \% |
| IO-LVM | 10 | 20.8 \% | 58.3 \% |

We see that, with a lot of training data, the VAE might be able to "memorize" the reconstruction. However, a limited amount of training data keeps the VAE results extremely low in terms of reconstruction. This is because the VAE is not reconstructing the solver input, but the solver output, which is a much harder task with a limited number of samples. Thank you again for the additional questions.
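The perturbation argument in the Q2 answer above can be checked numerically. This small sketch (illustrative, not the authors' code) compares a finite-difference gradient of the minimum value against $\hat{x}$:

```python
import numpy as np

def argmin_one_hot(y):
    # argmin_x <y, x> over one-hot vectors: select the smallest entry.
    x = np.zeros_like(y)
    x[np.argmin(y)] = 1.0
    return x

y = np.array([8.0, 3.0, 5.0, 9.0, 15.0])
xhat = argmin_one_hot(y)            # [0, 1, 0, 0, 0]

# Finite-difference gradient of the minimum value min_x <y, x> = min(y).
# Away from ties it matches xhat: only perturbing the chosen coordinate
# (here the second one) moves the minimum.
eps = 1e-3
grad = np.array([(np.min(y + eps * np.eye(len(y))[i]) - np.min(y)) / eps
                 for i in range(len(y))])
print(grad)  # ~[0, 1, 0, 0, 0]
```

As the rebuttal argues, the gradient has a 1 exactly at the coordinate picked by the argmin and 0 elsewhere, i.e., it equals $\hat{x}$.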
Why Is Spatial Reasoning Hard for VLMs? An Attention Mechanism Perspective on Focus Areas
Accept (poster)
Summary: The authors explore how vision-language models struggle with spatial reasoning, focusing on how misdirected attention (i.e., to irrelevant parts of the image) within transformer blocks contributes to such behavior. They analyze attention patterns and report how attention prioritizes text tokens over image tokens. They also note that attention to the wrong parts of an image is an issue and that the model's logit probability can serve as a proxy for model confidence. Using these ideas, they propose an approach to adjust attention based on confidence levels: sharpening or smoothing the attention weights of image tokens based on per-sample model confidence. They evaluate their model on two datasets, WhatsUp and VSR. These datasets contain mixtures of natural and synthetic images, with mostly synthetic captions. The results show the promise of their proposed method. Claims And Evidence: No. 1) They evaluate on two small datasets (that contain synthetic data). Results on these datasets are insufficient to back their claims. 2) They explore two outdated VLMs to highlight spatial-reasoning drawbacks. An analysis on more modern VLMs would make the claims stronger. Most newer VLMs show stronger spatial-reasoning abilities simply by training on more data and augmenting their training data / tasks. Methods And Evaluation Criteria: No, see above. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, the experiments on 2 mostly synthetic datasets are insufficient. The focus on only older VLMs questions the applicability of the method to newer VLMs. Supplementary Material: Yes, all of it. Relation To Broader Scientific Literature: Overlap with several existing prior works. Essential References Not Discussed: Consider comparing with the baselines from these prior works. And discuss these methods more explicitly in related work to highlight how the authors' method differs. [1] What’s “up” with vision-language models?
Investigating their struggle with spatial reasoning, EMNLP 2023 [2] Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs, CVPR 2024 [3] Liu, Fangyu et al. “Visual Spatial Reasoning.” Transactions of the Association for Computational Linguistics 11 (2022): 635-651. [4] Hsu, Joy et al. “What's Left? Concept Grounding with Logic-Enhanced Foundation Models.” NeurIPS 2023. Other Strengths And Weaknesses: **Strengths** 1. The authors' innovative approach of adjusting attention at test time for individual samples, using the model's output probability as a confidence proxy, is both clever and compelling. 2. Despite the datasets being small and primarily synthetic, the observed performance enhancements are promising. 3. The concept that generation confidence serves as a reliable indicator of attention accuracy is intriguing and opens new avenues for exploration in the vision-language domain. **Weaknesses** 1. Insufficient experimentation: * Results only on two small, primarily synthetic datasets * Results only over two older variants of the same VLM This raises concerns about applicability to real-world datasets and newer, stronger VLMs. 2. Results difficult to compare: * The authors do not compare to any prior work in their paper (i.e., no results from existing works are used as baselines) * Even for the baselines they provide, the authors use different dataset splits, making it difficult to directly compare with numbers in the original papers. 3. Does not use established benchmarks * See Table 5 in [1], where results on standard VQA datasets focused on spatial reasoning are provided (including numbers directly from prior works, making the results easy to compare and validate). Please include at least one such results table. * There is one Table 6 (buried in the appendix) that reports on some such datasets. There are no details on the baseline reported in it, and no discussion regarding the results in that table. 4.
Unclear visualizations in Figures 9 & 10 * The authors show pairs of images, but there is no apparent improvement visible in them. * Consider explicitly highlighting (on the image) what changes the authors are talking about. * Consider making the visualizations overlays (the original images seem to be shaded, making it difficult to see color details in them) 5. Consider backing the "attention modification" claims with some numbers * In the introduction, the authors claim that "by dynamically adjusting attention patterns, we effectively guide the model’s focus to better align with actual object locations and their spatial relationships." It is simple to test this on an object detection dataset - calculate attention mIoU with objects against a baseline. This will reveal whether the authors' method focuses attention more on object regions. This is a common form of evaluation in prior works. [1] Learning to Localize Objects Improves Spatial Reasoning in Visual-LLMs, CVPR 2024 Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
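The attention-mIoU evaluation suggested in Weakness 5 above could be sketched as follows; `attention_bbox_iou`, the patch grid size, and the `keep_frac` threshold are hypothetical choices for illustration, not an established evaluation script:

```python
import numpy as np

def attention_bbox_iou(attn, bbox, keep_frac=0.03):
    # Binarize a patch-level attention map by keeping its top `keep_frac`
    # patches, then compute IoU against a ground-truth box on the same grid.
    h, w = attn.shape
    k = max(1, int(keep_frac * h * w))
    thresh = np.sort(attn.ravel())[-k]
    attn_mask = attn >= thresh
    x0, y0, x1, y1 = bbox            # patch coordinates, end-exclusive
    box_mask = np.zeros((h, w), dtype=bool)
    box_mask[y0:y1, x0:x1] = True
    inter = np.logical_and(attn_mask, box_mask).sum()
    union = np.logical_or(attn_mask, box_mask).sum()
    return inter / union

rng = np.random.default_rng(0)
attn = rng.random((24, 24)) * 0.1    # diffuse background attention
attn[4:8, 4:8] += 1.0                # strong focus on the object region
print(attention_bbox_iou(attn, (4, 4, 8, 8)))  # close to 1: well-aligned
```

Averaging this score over a detection dataset, with and without the proposed intervention, would quantify whether attention moves onto object regions.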
Rebuttal 1: Rebuttal: We thank Reviewer 3qjz for the comments. However, we are concerned that some comments suggest a disconnect from or oversight of our paper. We respond to each point below, and respectfully encourage the reviewer to revisit our work. > Essential References Not Discussed Among the four references mentioned, **[1] and [3] are precisely the benchmarks we work on**, and our experiments are **exactly built on these two papers**, with the same dataset names WhatsUp and VSR **cited and mentioned throughout the paper**. Both are benchmark papers that present datasets and baselines like prompting, without proposing novel methods or addressing **mechanistic interpretability (MI)**. [2] and [4] are: (1) about **LLMs rather than VLMs**, while we mention in the title that we work on VLMs; (2) not about MI either, while we highlight in the abstract (Lines 18-20), introduction (Lines 43-45), and conclusion (Lines 422-423) that our paper is to **open up VLMs and observe their internal behavior to understand their failures in spatial reasoning**. We believe the reviewer has a **misunderstanding** of the paper: we are NOT proposing a new model to better leverage object grounding to beat SOTA models, but rather conducting MI for VLMs, by analyzing attention patterns and applying targeted interventions to understand their behavior. So the suggestions to compare with other object-grounding methods reflect this misunderstanding, as detailed in W2. (MI is a well-established area to reverse-engineer complex systems, offering a more granular and causal understanding of how they process information.
[5,6,7]) [5] A Survey on Mechanistic Interpretability for Multi-Modal Foundation Models [6] Mechanistic Interpretability for AI Safety: A Review [7] awesome-mechanistic-interpretability-lm-papers on Github W1: > datasets being small and primarily synthetic - The benchmarks are not primarily synthetic, with 85% natural images and all human-annotated captions (reformatted to generative QA settings better suited to SOTA decoder-only VLM architectures, Lines 143-149).

| |Cont_A|Cont_B|Coco_one|Coco_two|VG_one|VG_two|
|-|:-:|:-:|:-:|:-:|:-:|:-:|
|Synthetic image|yes|yes|no|no|no|no|
|Synthetic caption|no|no|no|no|no|no|
|samples|406|408|1160|291|2240|440|

- MI research requires clean datasets to observe model mechanisms. Existing MI studies use smaller, clean datasets rather than larger, noisy ones [5, 6, 7]. We have made every effort to include all available datasets with clean, high-quality spatial-relationship annotations. W2: > do not compare to any prior work in their paper. - We respectfully disagree. With a focus on MI of VLMs, **our experiments on attention intervention aim to give conclusions about how VLMs work**. Thus, **experiments are expected to be between w/ and w/o intervention**, rather than comparisons with other object-grounding methods. - We did compare our method with two prior decoding methods: DoLa [8] and VCD [9] (Line 327). Like DoLa, we leverage internal signals but tailor them to VLMs using visual features. While sharing VCD’s goal of improving visual grounding through decoding, we go further by introducing fine-grained attention interventions for more precise calibration. - Again, we believe this reflects a misunderstanding of the paper, especially given that the other three reviewers do not have such misunderstandings. [8] DoLa: Decoding by Contrasting Layers Improves Factuality in LLMs. [9] VCD: Mitigating Object Hallucinations in Large VLMs through Visual Contrastive Decoding. W3: > Does not use established benchmarks.
- We use the established spatial-reasoning benchmarks WhatsUp [1] and VSR [3], and as detailed in Lines 143-149, we have to reformat the datasets from classification settings to QA settings to observe attention behavior and information flow across Transformer layers when VLMs generate correct or wrong answers, which is also a standard setting in MI studies [5,6,7] for observing attention behavior. - We also include general VQA benchmarks in Table 6 in the Appendix. W4: > Unclear visualizations in Figures 9 & 10 Thanks for the helpful suggestions! We will add more annotations to these figures. W5: > backing "attention modification" claims with some numbers Thanks for the helpful feedback. We extend our experiments to test YOLO overlap w/ and w/o intervention in the middle layers, which our observations (Line 196) identify as key for processing spatial information. We evaluate two representative subsets: Controlled_A (synthetic images) and VG_two_obj (real images). After intervention, overlap rates improve on both. Attention–YOLO similarity increases by 2.9% on Controlled_A and 8.2% on VG_two_obj at layer 16, with similar trends across other middle layers. While this supports our claims, it is not a gold-standard metric, since the attention map is influenced by the prompt, and YOLO may fail to detect objects in complex images (Figure 24). We will include additional discussion in our paper. --- Rebuttal Comment 1.1: Comment: The authors seem to have misunderstood the review and are misrepresenting several concerns brought up in the review. **1. Essential References Not Discussed:** &nbsp; To quote the original concern on [1-4] in the review: *"Consider comparing with the baselines from these prior works. And discussing these methods more explicitly in related work to highlight how authors method differs."* - The authors mention for [1] and [3] that "both are benchmark papers that present datasets and *baselines*".
However, they do not report any of the baselines from [1, 3] in their paper. None of the reported evaluation results in the paper can be compared with prior work (since they use different baselines or dataset splits). This is not the case in prior MI works (e.g., DoLa has baseline numbers on common datasets that are the same as those in prior works ITI and CD). This lack of verifiable baseline numbers, which makes results comparison difficult (Weakness 2 in the review), was a major concern that remains unaddressed. - The authors incorrectly claim that [2] is about LLMs: it uses the same VLM used in the authors' method and reports results on common datasets. - The authors' motivation is to "understand VLM failures in spatial reasoning.": Prior works [2,4] both highlight how VLMs or LLMs can actually solve spatial reasoning tasks better with simply more data (hence raising concerns about whether the identified issues with spatial reasoning are limited to earlier VLMs only). **2. W1 Datasets being small and primarily synthetic** - "human annotated captions reformatted to generative QA...": are all evaluations conducted with GPT-generated captions as the prompting questions / targets? The captions / targets in the evaluation datasets are GPT-generated (i.e., synthetic). This was the concern. - "MI research requires clean datasets...": On the evaluation datasets being small: if I understand correctly, any object detection / segmentation dataset can be converted to an evaluation dataset using the GT human annotations for object locations. This appears to be what is done with COCO in [1] as well (which is a current evaluation dataset in the authors' work). However, this concern is somewhat resolved with the new results with Qwen on several established benchmarks such as GQA, TextVQA, and POPE. **3.
W2 Does not use established benchmarks.** - To repeat, none of the reported evaluation results in the paper can be directly compared with prior work (since they use different baselines or dataset splits). This is not the case in the prior MI works that the authors cite in the paper (e.g., DoLa has baseline numbers on common datasets that are the same as those in prior works ITI and CD). **4. W4 - Resolved, thanks!** **5. W5 - Partially resolved** - Why use YOLO for GT locations? Why not use the COCO-based datasets that have human-annotated GT object bbox / seg annotations? **6. Question on New Qwen Results** - This is promising and gives a better point of comparison. What is the model used (i.e., 2B / 7B / 72B ...)? - Can you refer to prior work that has these same numbers? The reported numbers for Qwen2-VL on some benchmarks are weaker than the official numbers on its Hugging Face page. E.g., TextVQA: 79.18 (Qwen2-VL, from rebuttal), 79.26 (Qwen2-VL+authors', from rebuttal) < 79.7 (Qwen2-VL 2B, smallest model, HF number), 84.3 (Qwen2-VL 7B, next smallest model, HF number). - The above Qwen2-VL HF numbers are from the official pages: https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct, https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct - *image receives much less attention than the text tokens* - are average attention logits per single text/image token fair? Often in VQA settings with models like Qwen2, there are significantly more image tokens than text tokens -> this could make average image attention values lower. &nbsp; &nbsp; It is clear that the aim of the paper is to open up VLMs and observe their internal behavior to understand their failures in spatial reasoning. However, the doubts are regarding the evaluation setup used to validate the findings. Are the measurements of spatial-reasoning failure valid? Have the baseline VLMs been properly set up (i.e., used optimal prompts, evaluated under standard settings)? Are these failures visible in newer / larger VLMs?
Providing zero points of reference from prior work for the reported results heavily weakens this paper, raising multiple doubts about the validity of the experimental setup. While I am familiar with the literature on spatial reasoning with LLMs, VLMs, and their evaluation, I do note that my familiarity with the mechanistic interpretability (MI) literature is limited. The authors do highlight difficulties in using existing benchmarks directly for MI research, and MI for VLMs appears relatively underexplored (limiting the available prior work for comparison). I also appreciate the authors' efforts to provide a thorough rebuttal. In light of these and the concerns that were resolved, I raise my rating to WR. --- Reply to Comment 1.1.1: Comment: Thanks for the thoughtful response, which will further enhance our work. > R1.1: do not report any baselines in [1,3] First, we hope to clarify a **misunderstanding**: we **do not introduce any new splits**, but only change the **classification** format to a **Gen(Generative)QA** format (Lines 106-164). Why do we change to GenQA? - Ours: `<Obj1, Obj2>+Img→VLMs→Left/Right/..` When opening up the model, the observed attention is between the labels `Left` (or `Right`) and the image/text tokens. This enables comparing attention behavior between correct and wrong generations. - The classical way of opening up the model, DoLa: `<Washington, capital>→LLMs→Seattle/Olympia..` to compare internal states when answering different labels. - To study spatial reasoning in VLMs, we must generate **spatial labels** to meaningfully detect inner states. See https://postimg.cc/3kb2t3HJ for why we cannot directly compare with [1,2,3,4]. All of [1,2,3,4] use not spatial labels, but A/B/C/D or True/False classification labels or a bbox for a single object. We are **the first work** to open up VLMs on spatial reasoning, which makes it valuable that we **create its first GenQA datasets** by reformatting [1,3].
It should **not be counted against us that there is no previous work on GenQA for VLM spatial reasoning to compare with**, and **we constructed the first such datasets, WhatsUp/VSR-QA, to support future research**. We believe `None of the reported results can be compared with prior work` should read `None of the reported results can be used in MI`. We agree that adding more baseline results is valuable and put them here: https://postimg.cc/TK0qRdB8. Numbers are **for reference only** - a direct comparison between classification and GenQA is not fair. Also, for an additional generic comparison, we note that POPE is a fair comparison, as it is used by VCD; added here: https://postimg.cc/n9jMQzGK. > R1.2/W2: DoLa baselines - DoLa works on **LLMs**. For LLMs, common datasets are often in **GenQA** format - suitable for MI. - DoLa/ITI/CD: `Q→LLMs→A` - Datasets TruthfulQA/StrQA: `Q→LLMs→A` - **This is not the case for VLMs.** For spatial reasoning in VLMs, there are **no existing datasets in GenQA** format. That is why our reformatted datasets are valuable. > W1: Datasets small and synthetic. - Our dataset contains **6165 images with QA pairs**. Using a small dataset is common practice in MI, since data cleanness is important [7,8,9]. - As clarified in R1.1, we use **85% natural images** (COCO/VG) and **15% synthetic images** with two objects, and 100% human annotations. To meaningfully compute attention, **spatial labels are required** as the output. Thus, we reformat to GenQA via GPT, only for format conversion, not content generation. Unlike prior work on simulated data, our setup offers **the most feasible setting** for MI of VLMs' spatial reasoning. > W5: Why YOLO? We manually fixed YOLO boxes for some subsets to ensure correct GT locations and will include this in the paper. > Q1: Model used We use https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct. > Q2/Q3: Numbers weaker than official. - Qwen2-VL does not open-source its evaluation code, while we need evaluation code to compare w/ and w/o intervention.
- Therefore, we use VLMEvalKit, the official evaluation toolkit for the OpenVLM Leaderboard, which has become a widely adopted standard for VLMs.
- See https://github.com/QwenLM/Qwen2.5-VL/issues/27. Our number is **the same as others' there**, and they also noted the number is lower than the official one. However, **the authors have not responded**.
- We perform all evaluations fairly with the same toolkit.
> Q4: Is average attention logits fair? In the original rebuttal, we used average logits. We compute the sum of attention logits for text and image tokens: https://postimg.cc/Vr21gCS2. Qwen2-VL uses more image tokens, (width//28)×(height//28), than LLaVA's 576 (a typical 812×1288 COCO image yields 1288 tokens), resulting in a slightly higher image-to-text score ratio. The conclusion holds: total attention to image tokens remains lower than to text.
> R1.3: [2,4] highlight how VLMs solve spatial reasoning...limited to earlier VLMs.
- We respectfully disagree that 69.5/76.5 in [2] means spatial reasoning is solved, given the complex bells and whistles added to the VLMs there. [4] uses LLMs only. Our LLaVA-1.6 is the latest model, so the identified issues concern the `latest` rather than `earlier` VLMs.
- It is widely recognized [5,6] that spatial reasoning is a key bottleneck for the **latest VLMs**. That is why we are the first to use MI to study this emerging and urgent problem.
> R1.4: Claim that [2] is about LLMs. Thanks, it was a typo. We meant [4]. As clarified in R1.3, both papers are out of scope for our work on VLMs' MI.
[5] Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs
[6] Cambrian-1: Fully Open, Vision-Centric Exploration of Multimodal LLMs
[7] Beyond Semantics: Rediscovering Spatial Awareness in Vision-Language Models
[8] Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
[9] How Do LLMs Perform Two-Hop Reasoning in Context?
Summary: This paper investigates the visual attention distribution in VLMs and finds that it affects vision-centric understanding. Based on this observation, paired with the model's confidence score when generating tokens, the authors propose ADAPTVIS, a temperature scaling mechanism for the attention scores that effectively helps with better understanding of spatial relations.
## update after rebuttal
I thank the authors for the point-by-point response to all of my raised issues. I appreciate the attention score analysis as well as the additional experiments. While it is true that the proposed method is less effective on the new model, it is still acknowledged that the overall structure of the paper is well established and can potentially inspire future works. Therefore I'll maintain my positive score 4 (Accept).
Claims And Evidence: The claims in the submission are mainly about the investigation of the attention mechanism and its relation to the model's output. I'm wondering whether the authors have tried other VLMs to see if such an observation is universal.
Methods And Evaluation Criteria: The proposed methods make sense based on the thorough study of the attention mechanism. The evaluation criteria are standard.
Theoretical Claims: N/A
Experimental Designs Or Analyses: The method and experimental design are straightforward and easy to understand. One concern about the design is that the proposed method is only applied to LLaVA. I'm wondering how well the proposed method works for other VLMs.
Supplementary Material: I admire the further analysis in the supplementary material. The investigation of how the confidence changes with different $\alpha$s addresses my concern about whether such a method would intuitively help with solving the claimed problem.
Relation To Broader Scientific Literature: The key contributions of the paper, including the investigation of the attention mechanism for spatial understanding, would help with ideas in the VLM hallucination domain.
Such a design would help with addressing the inefficiency of contrastive decoding methods, which usually require more than one inference pass of the model.
Essential References Not Discussed: N/A
Other Strengths And Weaknesses:
Strengths:
1. The idea is straightforward, intuitive, easy to understand, and easy to implement. The simple-yet-effective approach makes it possible for this work to have great impact in the VLM-related area.
2. The writing is easy to follow, clearly stating the observations / the relations between each part.
Weaknesses:
1. As mentioned above, it'll be appreciated if the authors can provide more results (analytical and experimental) using other VLMs.
Other Comments Or Suggestions:
1. Typo: Ln. 375, right col, "we emply ..." should be "we employ ..."
Questions For Authors: As mentioned in the supplementary material, the proposed method could also work on general VQA benchmarks. What is the major difference between a spatial-relation understanding task and a standard QA task that makes the proposed method less effective on standard QA tasks?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer p97N for the encouraging comments and thoughtful feedback. Below, we address the concerns raised in detail.
> I'm wondering how well the proposed method works for other VLMs.
- Experiments on Qwen2-VL, a SOTA VLM with a different architecture. We intervene in the image attention distribution using our temperature-scaling method, showing consistent improvements below, particularly in challenging cases. For example, on `VG_two_object`, where the baseline performance is the lowest among all benchmarks, our method yields a significant improvement of **10+ absolute points**. The gains observed on Qwen2-VL further demonstrate the generalizability of our approach.

| **Benchmark** | **Qwen2-VL** | **Qwen2-VL + Attention Intervention** |
| :-----: | :-----: | :-----: |
| VSR | 78.96 | 81.60 (↑ 2.64) |
| Coco_one_obj | 76.64 | 78.03 (↑ 1.39) |
| Coco_two_obj | 75.28 | 76.52 (↑ 1.24) |
| VG_one_obj | 74.89 | 75.11 (↑ 0.22) |
| VG_two_obj | 56.22 | 66.95 (↑ 10.73) |
| Controlled_A | 98.18 | 98.18 (↑ 0.00) |
| Controlled_B | 91.73 | 92.97 (↑ 1.24) |

- Experiments on more benchmarks, including POPE, GQA, and TextVQA, which show that attention intervention maintains performance on more general tasks without hurting it. Compared with spatial reasoning tasks, general QA tasks achieve relatively smaller improvements. A possible reason is that such tasks are less sensitive to the geometrically structured distribution of image attention. For example, given a question like "Is there a dog in this picture?", the model only needs to detect the presence of an object, and is therefore less likely to suffer from misallocated attention across spatial regions.
| **Benchmark** | **Qwen2-VL** | **Qwen2-VL + Attention Intervention** |
| :-----: | :-----: | :-----: |
| POPE-Overall | 86.32 | 87.09 (↑ 0.77) |
| POPE-P | 86.47 | 87.29 (↑ 0.82) |
| POPE-A | 85.07 | 85.80 (↑ 0.73) |
| POPE-R | 87.46 | 88.22 (↑ 0.76) |
| GQA | 62.09 | 62.17 (↑ 0.08) |
| TextVQA | 79.18 | 79.26 (↑ 0.08) |

> provide more results (analytical and experimental) using other VLMs
We present the additional experimental results above. For the analytical results, we also extend our analysis to the Qwen2-VL model and find that our insights generally hold across VLMs.
- The first claim, that the **image receives much less attention than the text tokens**, is generally valid for VLMs: we calculate Qwen2-VL's attention scores below (average attention logits for single text/image tokens, respectively), which matches our previous claims.

| **Benchmark** | **Text attention scores** | **Image attention scores** |
|:-----:|:----:|:----:|
| Controlled_A | 1.57e-02 | 7.59e-05 |
| Controlled_B | 1.58e-02 | 7.69e-05 |
| Coco_one_obj | 1.77e-02 | 4.48e-04 |
| Coco_two_obj | 1.65e-02 | 3.50e-04 |
| VG_one_obj | 1.54e-02 | 4.75e-04 |
| VG_two_obj | 1.42e-02 | 4.19e-04 |

- The second important claim, that **confidence could serve as a proxy for the model performance**, still holds, as shown below:
  - Spatial relationships with lower confidence, such as "Left" and "Under", tend to exhibit lower accuracy compared to those with higher confidence, like "On" and "Right".
  - After the intervention with AdaptVis, certain spatial relationships like "Left" show improved performance, accompanied by an increase in confidence. This demonstrates a pattern consistent with our observations for LLaVA, as depicted in Figure 25 in the Appendix.
| **Benchmark** | **Qwen2-VL Uncertainty** | **Qwen2-VL+ScalingVis Uncertainty** | **Qwen2-VL Acc** | **Qwen2-VL+ScalingVis Acc** |
|:---:|:----:|:----:|:----:|:-----:|
| Left | 0.4408 | 0.4474 | 70.11 | 74.71 |
| On | 0.5758 | 0.5668 | 100 | 100 |
| Right | 0.5982 | 0.5911 | 100 | 100 |
| Under | 0.5554 | 0.5418 | 98.82 | 98.82 |

Q1: > What is the major difference between a spatial-relation understanding task and a standard QA task that makes the proposed method less effective on standard QA tasks?
- That's a great question. In fact, our initial and primary motivation was to gain deeper insights into the spatial reasoning capabilities of VLMs. Through our analysis, we observed that the **geometric distribution of image attention** plays a critical role in this reasoning process. Based on this observation, we proposed two **temperature scaling techniques** to adjust the attention distribution and address the problem.
- As for the relatively smaller improvement on general QA tasks, we believe this may be because such tasks are **less sensitive to the geometric structure** of image attention. For example, given a question like *"Is there a dog in this picture?"*, the model only needs to detect the presence of an object, and is therefore **less likely to suffer from misallocated attention** across spatial regions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the point-by-point response to all of my raised issues. I appreciate the attention score analysis as well as the additional experiments. While it is true that the proposed method is less effective on the new model, it is still acknowledged that the overall structure of the paper is well established and can potentially inspire future works. Therefore I'll maintain my positive score 4 (Accept).
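As a side illustration of the per-token comparison reported in this thread's tables (average attention received by a single text token vs. a single image token), the following is a minimal sketch of one way such numbers could be computed; the array shapes, values, and function name are illustrative assumptions, not the authors' code:

```python
import numpy as np

def mean_attention_per_token(attn_row, image_mask):
    """Average attention weight received per text token and per image token.

    attn_row: attention from a generated answer token to all context tokens
    (one row of a softmax-normalized attention matrix).
    image_mask: boolean array marking which positions are image tokens.
    """
    return attn_row[~image_mask].mean(), attn_row[image_mask].mean()

# Toy example: 3 text tokens followed by 4 image tokens; text holds most of
# the mass, mirroring the pattern reported in the tables above.
attn_row = np.array([0.30, 0.25, 0.25, 0.05, 0.05, 0.05, 0.05])
image_mask = np.array([False, False, False, True, True, True, True])
text_mean, image_mean = mean_attention_per_token(attn_row, image_mask)
```

Averaging per token (rather than summing) is what makes the comparison meaningful here, since there are typically far more image tokens than text tokens.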
Summary: The paper examines the attention patterns in vision-language models and finds patterns which might explain why spatial reasoning can be hard for VLMs. Specifically, they first find that a large chunk of the attention is focused on the text stream, even though there are more visual tokens. However, they further find that manually making the attention larger on visual tokens does not help performance. Then, they find that when the visual attention is on the right objects, it typically leads to the right answer, and vice versa. They also find that self-confidence is a good proxy for the model's correctness -- when the model is confident, it is usually looking at the right location, and thus the attention map can be sharpened. When the model is less confident, it helps to make the attention map more diverse to let it look at more regions of the visual image. This proposed technique of modifying attention maps based on confidence helps improve performance on several benchmarks that they evaluate on.
## Update after rebuttal
My initial score for this paper was 4 -- with main concerns based on the datasets considered and that only one single VLM was evaluated (both concerns shared by other reviewers as well). I thank the authors for their rebuttal. I read the reviews of other reviewers as well as the authors' response: I revised my score from 4 to 3 because I think the new results show that the proposed interventions are barely effective on new (and more standard) benchmarks, and the performance improvement is less pronounced with Qwen2-VL (except on VG_two_obj). Nevertheless, the exploration done in this paper is solid and might be useful to foster research. I am recommending this paper for acceptance with an implicit assumption (and belief) that the authors will include the rebuttal experiments in their main paper -- without these results, the paper's results might be misleading and over-promising.
I asked the authors whether they plan to do this, and they didn't reply, so I am not sure if the authors indeed want to include these results. I will leave this to the AC.
Claims And Evidence: Yes, mostly the claims are well supported. The hypotheses about various attention patterns are well tested, and the resulting technique of adaptively changing the attention patterns works well on the datasets it is tested on. However, it is unclear whether the claims of this paper are generally valid for VLMs or specifically valid for the LLaVA models that this paper tests on. For e.g., the claim about the model focusing more on the language tokens, compared to visual tokens, could be true for LLaVA-like models because only a small adapter layer is trained to add the visual modality to the LLM (in addition to a final small LoRA on the whole base LLM). It could also be something specific to a training-data or methodology detail of the LLaVA models. Testing on other VLMs would make the analysis and claims stronger. Secondly, I would recommend testing on more popular spatial reasoning datasets (like the RefCOCO family / BLINK) etc. -- I am not particularly tied to the datasets I mentioned, but some datasets which are commonly used for evaluating VLMs.
Methods And Evaluation Criteria: Yes -- see some concerns regarding the specific VLM used and evaluation datasets in Claims and Evidence.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes -- they are sound and valid
Supplementary Material: Yes -- skimmed through all sections
Relation To Broader Scientific Literature: It is a well-known fact that VLMs struggle with spatial reasoning. Getting insights on why that might be, and ways to fix them zero-shot, are relevant to the scientific community.
Essential References Not Discussed: No
Other Strengths And Weaknesses: The paper is very well-written and is a joy to read -- all the hypotheses and conclusions naturally flow from each other.
Other Comments Or Suggestions: N/A Questions For Authors: - I would like to see some analysis on different VLMs (or some reasoning on why that might not be important) - I am not expecting the authors to test on new datasets, but explaining why that was not considered would help Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thorough review and detailed comments! Your suggestions will be helpful in improving the paper. We address your concerns below.
Q1: > whether the claims of this paper are generally valid for VLMs or is it specifically valid for LLAVA models that this paper tests on.
- The claims are generally valid for VLMs.
- The first claim, that the **image receives much less attention than the text tokens**, is generally valid for VLMs: we calculate Qwen2-VL's attention scores below (average attention logits for single text/image tokens, respectively), which matches our previous claims.

| **Benchmark** | **Text attention scores** | **Image attention scores** |
|:-----:|:----:|:----:|
| Controlled_A | 1.57e-02 | 7.59e-05 |
| Controlled_B | 1.58e-02 | 7.69e-05 |
| Coco_one_obj | 1.77e-02 | 4.48e-04 |
| Coco_two_obj | 1.65e-02 | 3.50e-04 |
| VG_one_obj | 1.54e-02 | 4.75e-04 |
| VG_two_obj | 1.42e-02 | 4.19e-04 |

- The second important claim, that **confidence could serve as a proxy for the model performance**, still holds, as shown below:
  - Spatial relationships with lower confidence, such as "Left" and "Under", tend to exhibit lower accuracy compared to those with higher confidence, like "On" and "Right".
  - After the intervention with AdaptVis, certain spatial relationships like "Left" show improved performance, accompanied by an increase in confidence. This demonstrates a pattern consistent with our observations for LLaVA, as depicted in Figure 25 in the Appendix.

| **Benchmark** | **Qwen2-VL Uncertainty** | **Qwen2-VL+ScalingVis Uncertainty** | **Qwen2-VL Acc** | **Qwen2-VL+ScalingVis Acc** |
|:---:|:----:|:----:|:----:|:-----:|
| Left | 0.4408 | 0.4474 | 70.11 | 74.71 |
| On | 0.5758 | 0.5668 | 100 | 100 |
| Right | 0.5982 | 0.5911 | 100 | 100 |
| Under | 0.5554 | 0.5418 | 98.82 | 98.82 |

Q2: > more experiments on other VLMs and other benchmarks.
- Experiments on Qwen2-VL, a SOTA VLM with a different architecture.
We intervene in the image attention distribution using our temperature-scaling method, showing consistent improvements below, particularly in challenging cases. For example, on `VG_two_object`, where the baseline performance is the lowest among all benchmarks, our method yields a significant improvement of **10+ absolute points**. The gains observed on Qwen2-VL further demonstrate the generalizability of our approach.

| **Benchmark** | **Qwen2-VL** | **Qwen2-VL + Attention Intervention** |
| :-: | :-: | :-: |
| VSR | 78.96 | 81.60 (↑ 2.64) |
| Coco_one_obj | 76.64 | 78.03 (↑ 1.39) |
| Coco_two_obj | 75.28 | 76.52 (↑ 1.24) |
| VG_one_obj | 74.89 | 75.11 (↑ 0.22) |
| VG_two_obj | 56.22 | 66.95 (↑ 10.73) |
| Controlled_A | 98.18 | 98.18 (↑ 0.00) |
| Controlled_B | 91.73 | 92.97 (↑ 1.24) |

- Experiments on more benchmarks, including POPE, GQA, and TextVQA, which show that attention intervention maintains performance on more general tasks without hurting it. Compared with spatial reasoning tasks, general QA tasks achieve relatively smaller improvements. A possible reason is that such tasks are less sensitive to the geometrically structured distribution of image attention. For example, given a question like `Is there a dog in this picture?`, the model only needs to detect the presence of an object, and is therefore less likely to suffer from misallocated attention across spatial regions.

| **Benchmark** | **Qwen2-VL** | **Qwen2-VL + Attention Intervention** |
| :---: | :---: | :---: |
| POPE-Overall | 86.32 | 87.09 (↑ 0.77) |
| POPE-P | 86.47 | 87.29 (↑ 0.82) |
| POPE-A | 85.07 | 85.80 (↑ 0.73) |
| POPE-R | 87.46 | 88.22 (↑ 0.76) |
| GQA | 62.09 | 62.17 (↑ 0.08) |
| TextVQA | 79.18 | 79.26 (↑ 0.08) |
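For readers unfamiliar with the intervention referenced throughout this rebuttal, here is a minimal sketch of confidence-gated attention temperature scaling; the function names, the pre-softmax formulation, and the toy numbers are illustrative assumptions, not the authors' implementation (which operates per attention head inside the transformer):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def scale_image_attention(logits, image_mask, alpha):
    """Multiply pre-softmax attention logits at image-token positions by
    alpha, then renormalize. alpha > 1 sharpens attention on the image;
    alpha < 1 smooths it so the model looks at more regions."""
    scaled = logits.copy()
    scaled[image_mask] *= alpha
    return softmax(scaled)

def pick_alpha(confidence, beta, alpha_low, alpha_high):
    """AdaptVis-style gating: smooth when the model is unconfident,
    sharpen when it is confident (beta is the confidence threshold)."""
    return alpha_high if confidence >= beta else alpha_low

# Toy example: 3 text tokens followed by 4 image tokens.
logits = np.array([2.0, 1.5, 1.0, 0.5, 0.4, 0.3, 0.2])
image_mask = np.array([False, False, False, True, True, True, True])
alpha = pick_alpha(confidence=0.8, beta=0.6, alpha_low=0.5, alpha_high=1.5)
attn = scale_image_attention(logits, image_mask, alpha)
```

In this sketch, `confidence` would come from the model's generation probability for the answer token, matching the rebuttal's use of confidence as a proxy for whether the attention is already on the right region.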
Summary: This paper introduces ADAPTVIS, an adaptive attention mechanism designed to enhance spatial reasoning in vision-language models (VLMs). By analyzing attention distributions, the authors identify that errors often arise when models focus on irrelevant image regions and that attention patterns differ between familiar and unfamiliar spatial relationships. ADAPTVIS addresses these issues by dynamically re-weighting attention maps based on model confidence during inference, improving geometric understanding with minimal computational cost. Evaluations on benchmarks like WhatsUp and Visual Spatial Reasoning (VSR) demonstrate significant accuracy gains, highlighting ADAPTVIS’s effectiveness in refining spatial reasoning in large vision-language models. Claims And Evidence: 1. The paper claims that the model's confidence is correlated with attention correctness and that high confidence leads to sharper attention, while low confidence allows the model to focus more diffusely on the image. However, there is limited empirical evidence to substantiate this claim. The analysis in Section 4, especially regarding the correlation between attention distribution and model confidence, is not comprehensive and largely based on qualitative examples. A more robust, quantitative analysis would strengthen this claim and make the argument more convincing. Methods And Evaluation Criteria: 1. The method of modifying attention during inference based on model confidence is conceptually sound, especially as it directly targets improving spatial reasoning in VLMs. However, the method's reliance on pre-defined threshold values (alpha1, alpha2, beta) for adjusting attention is a significant drawback. A more adaptive approach that adjusts thresholds dynamically based on the specific task or dataset could improve the method’s robustness and generalizability. 
Additionally, more clarity on how these hyperparameters interact with the model and their impact across different VLMs would be beneficial. 2. The evaluation is conducted on several benchmarks, such as WhatsUp and Visual Spatial Reasoning (VSR). While these benchmarks are relevant for testing spatial reasoning, the datasets used are somewhat limited in scope. The authors should consider evaluating ADAPTVIS on more commonly used, real-world datasets, such as GQA or VQA-v2, to assess the method's generalizability and performance across a broader set of tasks. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. One important concern with the current experiments is that they are solely based on the LLaVA-series models (LLaVA-1.5, LLaVA-1.6). While these models are a useful starting point, it is unclear whether the findings would generalize to other VLMs. Supplementary Material: N/A Relation To Broader Scientific Literature: The key contributions of the paper build on prior work in spatial reasoning for VLMs by introducing a novel attention adjustment mechanism based on model confidence, addressing issues identified in earlier studies on attention misalignment and spatial reasoning failures in VLMs. This approach is closely related to efforts exploring the role of attention mechanisms in improving VLM performance on tasks requiring geometric and spatial understanding. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: 1. Some axis titles are missing in the figures (Figures 5 and 7), which could make them harder to interpret. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your comments and advice. We address your concerns below.
> it is unclear whether the findings would generalize to other VLMs.
- We agree generalizability is important, so we have especially included different variants of the LLaVA-series models, with LLaVA-1.5 (224×224 visual encoder, MLP projection layer) and LLaVA-1.6 (336×336 visual encoder, resampler projection layer), since they are the most widely used open-source architecture but have different model features like confidence, as mentioned in Line 742.
- To assess the generalizability of our findings, we extend our analysis to Qwen2-VL, a SOTA VLM with a different architecture. We show that our findings on VLMs' attention patterns remain consistent [see response to Reviewer 3nwP, Q1].
- To further assess the generalizability of our intervention approach, we extend experiments to Qwen2-VL. We intervene in the image attention distribution using our temperature-scaling method, showing consistent improvements below, particularly in challenging cases. For example, on `VG_two_object`, where the baseline performance is the lowest among all benchmarks, our method yields a significant improvement of **10+ absolute points**. The gains observed on Qwen2-VL further demonstrate the generalizability of our approach.

| **Benchmark** | **Qwen2-VL** | **Qwen2-VL + Attention Intervention** |
| :-: | :-: | :-: |
| VSR | 78.96 | 81.60 (↑ 2.64) |
| Coco_one_obj | 76.64 | 78.03 (↑ 1.39) |
| Coco_two_obj | 75.28 | 76.52 (↑ 1.24) |
| VG_one_obj | 74.89 | 75.11 (↑ 0.22) |
| VG_two_obj | 56.22 | 66.95 (↑ 10.73) |
| Controlled_A | 98.18 | 98.18 (↑ 0.00) |
| Controlled_B | 91.73 | 92.97 (↑ 1.24) |

- We also evaluate more generic benchmarks for generalizability, including POPE, GQA, and TextVQA, which shows that attention intervention maintains performance on more general tasks without hurting it. Compared with spatial reasoning tasks, general QA tasks achieve relatively smaller improvements.
This may be because such tasks are less sensitive to the geometric structure of image attention. For example, when asked `Is there a dog in this picture?`, the model only needs to detect the object's presence, and is therefore less likely to suffer from misallocated attention across spatial regions.

| **Benchmark** | **Qwen2-VL** | **Qwen2-VL + Attention Intervention** |
| :-: | :-: | :-: |
| POPE-Overall | 86.32 | 87.09 (↑ 0.77) |
| POPE-P | 86.47 | 87.29 (↑ 0.82) |
| POPE-A | 85.07 | 85.80 (↑ 0.73) |
| POPE-R | 87.46 | 88.22 (↑ 0.76) |
| GQA | 62.09 | 62.17 (↑ 0.08) |
| TextVQA | 79.18 | 79.26 (↑ 0.08) |

- We believe this insight is theoretically generalizable. The only assumption we make is the different information density between image and text. In VLMs, attention is distributed across both textual and visual tokens. Since key information in images tends to be sparser than in text, the attention distribution over visual tokens can be more difficult to allocate accurately.
> The paper claims that the model's confidence is correlated with attention correctness…More robust, quantitative analysis to support the correlation between attention distribution and model confidence.
- As detailed in Lines 184-251 of Section 4.1, we use YOLO to annotate the relevant entities in the images and conduct a quantitative analysis of the AUROC of the overlap between YOLO annotations and the attention patterns; these results, together with performance, are shown in Figure 7 in the main paper and Figures 18-24 in the Appendix.
- In Figure 25 in the Appendix, we perform a quantitative analysis of **confidence and accuracy and their correlation with the coefficient**, where we can see that confidence and accuracy follow similar trends as the temperature coefficient varies, supporting the use of confidence as a reliable proxy for performance.
- From the first and third subfigures: for low-confidence relationships, applying a small coefficient (<1) improves performance; for high-confidence relationships, a large coefficient (>1) improves performance.
- We will move the additional quantitative evidence (from the Appendix) into Section 4.2 to make this reasoning clearer in the paper.
> A more adaptive approach that adjusts thresholds dynamically based on the specific task or dataset...
- We believe there may be a misunderstanding. As described in Line 323, we adaptively adjust the coefficient using a small held-out subset (20% of the whole subset as the validation set) from the specific target task or dataset, rather than pre-defining hyperparameters.
- Additionally, we propose a more adaptive variant, **ScalingVis**, with only a single hyperparameter $\alpha$ to modulate the attention distribution based on model confidence. This design maintains both simplicity and adaptability, contributing to the method's robustness and generalizability across tasks.
---
Rebuttal Comment 1.1:
Comment: The authors' response has addressed most of my concerns. I would like to increase my rating to 3 -- weak accept.
Enhancing Rating-Based Reinforcement Learning to Effectively Leverage Feedback from Large Vision-Language Models
Accept (poster)
Summary: This paper introduces ERL-VLM, a straightforward yet effective method for learning reward functions by leveraging feedback from VLMs. By querying a VLM for absolute ratings of trajectory segments rather than relying on pairwise comparisons, the method aims to improve the sample efficiency and expressiveness of the feedback signal. The authors augment this approach with stratified sampling and a mean absolute error (MAE) training objective to address issues related to data imbalance and noisy labels. Experimental results, spanning both simulated environments and real-world robotic tasks, demonstrate that ERL-VLM outperforms four baseline methods in six out of seven tasks, thereby demonstrating the effectiveness of ERL-VLM.
Claims And Evidence: The claims are generally well supported by empirical results, as the improvements achieved over baselines are notable. However, some technical aspects, such as the details of the trajectory segment sampling and the validity of VLM ratings, could benefit from further clarification. In particular, it is unclear whether all sampled segments are informative, since early or mid-trajectory segments might not contain sufficient task-relevant information, and whether the observed teacher VLM accuracy (approximately 0.7, as shown in Figure 2(a)) adequately justifies its use.
Methods And Evaluation Criteria: The proposed method leverages a VLM for trajectory rating and two enhancement techniques to mitigate the intrinsic issues with the VLM (e.g., imbalance and imprecision). The workflow mainly revolves around how to enhance the feedback from the VLM, thereby optimizing the reward function. The major concerns from my end are about the segment sampling and the validity of the ratings generated by the VLM.
While the approach is sound, there are two primary concerns:
- Trajectory Segment Sampling: The use of random sampling without consideration of temporal dynamics might yield segments that do not adequately reflect the task's critical moments. In Algorithm 1, Line 12, trajectory segments are randomly sampled from the replay buffer and fed to the VLM to acquire labels/ratings. It is not clear whether the sampled segments are qualified to provide valid ratings. Namely, some segments from the beginning of the operation may not contain useful information for rating, even for humans. Also, some tasks can be judged as good/successful or bad/failed only at the very end of the operation; therefore, it is hard to justify a meaningful rating for a trajectory in the middle. It is not convincing that a random sampling approach without considering the temporal dependence and importance of the segments can collect trajectories efficiently and effectively.
- VLM Rating Reliability: With a teacher VLM accuracy hovering around 0.7, further quantitative analysis is needed to explore the consistency and informativeness of these ratings across different scenarios.
Theoretical Claims: This paper does not include theoretical proof, and the main contribution is not in such an aspect.
Experimental Designs Or Analyses:
- The authors conduct experiments across different environments and tasks. Success rate is used as the main metric to evaluate the quality of the reward function, which is sound.
- As aforementioned, one concern is about the quality of the labels generated by the VLM. Since the output of generative models can be subjective and unstable depending on the input prompt, an identical segment may obtain different labels.
- The authors conduct experiments on a real robot, which is commendable.
Supplementary Material: No supplementary material is provided for this submission. The Appendix provides more details about the experiment setup and the prompts used for querying the VLM.
Relation To Broader Scientific Literature: The related work covers relevant methods, and some of them are applied as baselines for comparison. It is worth noting that some works [1-3] use LLMs or VLMs as coding models to directly generate reward functions without computing similarity scores. The authors may add some discussion regarding these approaches.
[1] Eureka: Human-Level Reward Design via Coding Large Language Models
[2] Self-refined large language model as automated reward function designer for deep reinforcement learning in robotics
[3] AutoReward: Closed-Loop Reward Design with Large Language Models for Autonomous Driving
Essential References Not Discussed: N/A
Other Strengths And Weaknesses:
- The authors may further justify the use of the Likert scale for rating, since, to the best of my knowledge, both floating-point numbers and Likert scales suffer a degree of uncertainty for evaluating/labelling.
- In Section 5, the authors discuss the impact of the number of rating classes. From Figure 6, it seems the selection of the number of rating classes is somewhat task-specific. The authors may elaborate more on this point.
Other Comments Or Suggestions:
- Some of the technical details in the Appendix regarding reward learning could be integrated into the main body to provide a clearer technical overview.
- Although the experiments use only Gemini-1.5 Pro as the VLM, the choice is justified given its representative performance, and the authors acknowledge that extending to other VLMs is a potential future direction.
Questions For Authors: 1. Could you elaborate on the criteria for selecting trajectory segments from the replay buffer? How do you ensure these segments provide sufficient context for accurate ratings, especially in cases where critical task information might appear only at the end of the trajectory? 2. Can you provide additional quantitative analysis on the consistency and reliability of the ratings across different tasks and scenarios? 3.
What is the rationale behind choosing a Likert scale for the ratings, given that both discrete scales and continuous ratings can suffer from inherent uncertainties? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their in-depth review, and for recognizing the simplicity and effectiveness of our method, the strength of our results, and our efforts in real-world experiments. We address your comments in detail below: - **Q1**. Could you elaborate on the criteria for selecting trajectory segments from the replay buffer? - **A1**. For MetaWorld, we randomly sample segments of length one (i.e., individual states) and use them to query the VLM. This approach is sufficient for the single-target nature of the tasks, as shown in Figure 1 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing). For ALFRED, since tasks are often compositional and critical information may appear at different points in the episode, we randomly sample an entire trajectory and use it to query the VLM to provide sufficient context for accurate ratings, as illustrated in Figures 2-4 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing). - **Q2**. Can you provide additional quantitative analysis on the consistency and reliability of the ratings across different tasks and scenarios? - **A2**. The consistency of the VLM ratings is shown in Figures 9 and 10 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing). We repeatedly query the VLM along expert trajectories in different tasks. As pointed out by the reviewer, we also observe that identical segments may receive different labels. However, as shown in the results, the ratings along expert trajectories remain reasonable and consistent. For example, by calculating the accuracy of ratings based on task success/failure (for $n=3$ rating levels, we only count *Bad* and *Good*), we observe the following rating accuracies: for *Sweep Into*, $90\pm4.6$; for *Drawer Open*, $65 \pm 3.8$; and for *Soccer*, $66.8 \pm 3.7$. 
The variance across trials is small, and these small inconsistencies can be mitigated through our MAE objective. - **Q3**. What is the rationale behind choosing a Likert scale for the ratings? - **A3**. The rationale for selecting a Likert scale is as follows: 1. Human alignment: Since our reward labels are generated by a VLM designed to emulate human interpretation, a Likert scale aligns well with common human rating systems. This allows the VLM to map qualitative assessments (e.g., *Bad*, *Average*, *Good*) to structured numerical values, which would be more challenging to express meaningfully using an arbitrary continuous scale. 2. Stability and reliability: It is considerably more stable and reliable to prompt VLMs with structured classification tasks (e.g., “Rate the success of this grasp from 1 to 5”) than to request a continuous scalar (e.g., “Give a score between 0 and 1”). Continuous scores often yield unstable or inconsistent outputs, as shown in prior work [1], whereas discrete classes reduce ambiguity, providing clearer grounding for ratings. 3. Scalability and prior work: Prior research on learning from evaluative feedback [2-5] often discretizes qualitative feedback into ordinal scales for reward modeling. These pipelines report that such formats are easier to scale, annotate, and integrate into training loops. - **Q4**: Discussion of works that use LLMs or VLMs as coding models to directly generate reward functions without computing similarity scores. - **A4**. The relevant works suggested by the reviewer utilize LLMs/VLMs as coding models that directly generate reward functions by accessing environment source code or privileged state information. In contrast, our method relies solely on visual observations and does not require access to internal environment representations. 
We will incorporate the works recommended by the reviewer ([6-8]) into our revised manuscript to enrich the related work section and more clearly position our method within the broader literature. [1] Wang, Y. et al. RL-VLM-F: Reinforcement Learning from Vision-Language Foundation Model Feedback, ICML 2024. [2] Knox, W.B. et al. Interactively shaping agents via human reinforcement: The TAMER framework. 2019. [3] Warnell, G. et al. Deep TAMER: Interactive agent shaping in high-dimensional state spaces. 2018. [4] Arumugam, D. et al. Deep reinforcement learning from policy-dependent human feedback. 2019. [5] White, D. et al. Rating-based Reinforcement Learning, AAAI 2024. [6] Wu, Y. et al. Eureka: Human-Level Reward Design via Coding Large Language Models. [7] Li, X. et al. Self-refined Large Language Model as Automated Reward Function Designer for Deep Reinforcement Learning in Robotics. [8] Chen, C. et al. AutoReward: Closed-Loop Reward Design with Large Language Models for Autonomous Driving. --- Rebuttal Comment 1.1: Comment: Thanks for the additional details and analysis, which have addressed my concerns. Hence, I would like to raise my score to Accept. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt response and for raising the score! We are glad that our response has addressed your concerns. We greatly appreciate the time and effort you spent providing insightful feedback on our work.
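As an aside on the MAE objective discussed in the rebuttal above: its robustness to occasional mislabeled VLM ratings has a simple intuition, sketched below with purely hypothetical numbers (an illustration, not the authors' implementation). For repeated ratings of the same state, the best constant prediction under MSE is the mean, which an outlier drags down, while under MAE it is the median, which a single bad label cannot move.

```python
# Toy illustration (not the authors' implementation) of why an MAE objective
# tolerates occasional mislabeled VLM ratings better than MSE: the MSE-optimal
# constant prediction is the mean, the MAE-optimal one is the median.

from statistics import mean, median

# Hypothetical Likert labels (0=Bad, 1=Average, 2=Good) from five queries of
# the same near-success state; one query returned a spurious "Bad".
ratings = [2, 2, 2, 2, 0]

mse_optimal = mean(ratings)    # 1.6 -- pulled toward the outlier
mae_optimal = median(ratings)  # 2   -- unaffected by the single bad label

print(mse_optimal, mae_optimal)
```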
Summary: The paper introduces ERL-VLM, a method that efficiently utilizes feedback from large VLMs like Gemini to generate reward functions for training RL agents. Instead of pairwise comparisons, it queries VLMs for absolute evaluations of individual trajectories on a Likert scale. This approach allows for more expressive feedback, reduces ambiguity, and ensures full utilization of queried samples. The authors also introduce enhancements to existing rating-based RL methods to address instability caused by data imbalance and noisy rating labels from VLMs. Through extensive experiments across various vision-language navigation and robotic control tasks, ERL-VLM is shown to outperform prior VLM-based reward generation methods. Ablation studies are conducted to identify key performance factors and provide insights into its effectiveness. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: Yes. This paper conducts a series of experiments to evaluate the ERL-VLM method. In the simulation domain, it is compared with various baselines, showing obvious advantages in most tasks. Real-robot experiments demonstrate that this method can generate effective reward functions to facilitate policy learning. Ablation experiments analyze the contributions of various improvements to performance, revealing that the MAE loss, stratified sampling, etc., are effective, while label smoothing is ineffective. Experiments on different numbers of rating classes reveal the relationship between task characteristics and the optimal number of rating classes. Supplementary Material: I have checked the "Training Implementation Details and Hyperparameters" and "Prompts". Relation To Broader Scientific Literature: 1. Addressing the Challenge of Reward Design. 2. Leveraging VLM Capabilities. 3. Overcoming VLM-based Feedback Challenges. 
Essential References Not Discussed: No Other Strengths And Weaknesses: ### Strengths - A novel method, ERL-VLM, for learning reward functions from vision-language model feedback is proposed, providing new ideas and approaches for the design of reward functions in reinforcement learning. - Experiments are conducted not only on various types of tasks in simulated environments but also in real-world robot operation scenarios. - Detailed ablation experiments are carried out on each improved part of the method, clearly demonstrating the impact of each improvement measure on the overall performance. ### Weaknesses - The method is highly dependent on the performance and accuracy of vision-language models. What if the vision-language model consistently exhibits understanding biases and inaccurate ratings in some complex scenarios? - An accuracy metric for the reward function should be introduced to measure the degree of matching between the learned reward function and the real task objective. In environments with sparse rewards, what specific reward designs does ERL-VLM derive that lead to performance improvements? - ERL-VLM seems to rely on the task description. How should it handle multiple target tasks? Other Comments Or Suggestions: No Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and thoughtful review, and for recognizing the novelty of our method, the extensiveness of our experiments, and the strength of our results across a wide range of tasks and domains. We respond to your comments in detail below: - **Q1**. The method is highly dependent on the performance and accuracy of vision-language models. What if the vision-language model consistently exhibits understanding biases and inaccurate ratings in some complex scenarios? - **A1**. We acknowledge that biases and inaccuracies in VLM ratings can arise, particularly in complex scenarios. To mitigate these issues, we focus on designing effective prompts and selecting appropriate rating levels, which help guide the VLM’s output. Additionally, using higher-performing VLMs can further enhance the quality of the ratings. However, if a VLM consistently produces inaccurate ratings due to being undertrained on specific tasks or domains, fine-tuning the VLM would be required. - **Q2**. An accuracy metric for the reward function should be introduced to measure the degree of matching between the learned reward function and the real task objective. In environments with sparse rewards, what specific reward designs does ERL-VLM derive that lead to performance improvements? - **A2**. ERL-VLM performs better in sparse reward tasks because the reward model, learned from ratings, provides denser reward signals at meaningful states, rather than only sparse signals at the final state. To measure the alignment between the learned reward function and the real task objective, we compare the reward outputs from ERL-VLM with the ground-truth task progress along expert trajectories. The results are shown in Figures 7 and 8 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing). Outputs from RL-VLM-F and CLIP are also included for comparison. 
As shown, ERL-VLM rewards are more consistent and less noisy compared to the other methods. In sparse reward tasks, our approach enables reward shaping by assigning higher values to meaningful states. For instance, in the *CoolObject* task (Figure 8), ERL-VLM assigns higher rewards at timesteps when the agent interacts with the target object (e.g., the fridge), compared to intermediate navigation steps. This behavior arises from the VLM assigning higher ratings to critical subgoal states, as illustrated in Figures 2–4 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing). - **Q3**. ERL-VLM seems to rely on the task description. How should it handle multiple target tasks? - **A3**: ERL-VLM handles multiple target tasks (i.e., compositional task descriptions) by querying the VLM with a sequence of images rather than a single image, as illustrated in Figures 2-4 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing). We find that incorporating temporal context helps the VLM better understand the overall task structure, leading to more accurate ratings in compositional task descriptions.
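The alignment check described in **A2** (comparing reward-model outputs with ground-truth task progress along expert trajectories) can be made quantitative with a simple correlation measure. The sketch below uses hypothetical numbers, not actual ERL-VLM reward traces.

```python
# Minimal sketch of the alignment check described in A2: correlate reward-model
# outputs along an expert trajectory with ground-truth task progress.
# All numbers below are hypothetical, not actual ERL-VLM reward traces.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

progress = [t / 9 for t in range(10)]  # ground-truth task progress, 0 to 1
rewards = [0.0, 0.1, 0.1, 0.3, 0.4, 0.4, 0.6, 0.7, 0.9, 1.0]  # learned rewards

print(round(pearson(progress, rewards), 3))  # close to 1 for aligned rewards
```

A high correlation along expert trajectories, together with low noise, is one concrete way to operationalize the "accuracy metric for the reward function" the reviewer asked for.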
Summary: This paper studies the problem of automated reward generation for RL policy training via VLMs. While prior work has shown that rewards can be extracted from VLMs either by preference (relative comparison) between two or more trajectory segments, or by using the representation itself as a distance metric, these approaches are often prone to instability (former) or local minima (latter). This paper proposes a framework for reward generation via VLMs based on a rating system (e.g. "bad", "ok", "good") rather than direct trajectory comparison, which (a) allows for more fine-grained quantitative feedback, and (b) circumvents the issue of training instability when sampled trajectories are highly similar. The VLM assigns ratings to trajectory segments (one or multiple image frames), which are then used to learn a reward model for downstream RL training. The authors find that naively training a reward model on the collected dataset leads to poor predictions due to its skewed data distribution (most early trajectories are expected to be bad in an online RL setting), and instead propose to use a stratified sampling as well as an MAE objective. Experiments are conducted on 3 tasks from Meta-World, as well as ALFRED. Results indicate that the proposed framework is effective at guiding online RL via learned rewards on the selected tasks, outperforming simpler baselines based on representation distance or pairwise preferences. ## Post-rebuttal assessment I appreciate all the new qualitative analysis of the method provided in the rebuttal, and believe that it addresses my concerns. I have raised my score from Weak Accept -> Accept under the assumption that these new results will be included in the camera-ready version. 
Claims And Evidence: The main claims of this paper are (1) that their framework, ERL-VLM, enables agents to learn new tasks using only a human-provided language task description, (2) that their proposed improvements to rating-based reward learning improve stability and thereby agent performance, (3) that their framework outperforms prior work on VLM-based reward generation, and finally (4) that their ablations identify key factors of their framework for good performance. I believe that all of these claims are reasonable and for the most part supported by convincing evidence. However, while it is true that their ablations identify key factors for performance, I find the analysis of *why* somewhat lacking; I go into more detail on this in the "experimental designs and analysis" section. Methods And Evaluation Criteria: Yes. The improvements to rating-based reward generation are well motivated and backed by empirical evidence. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design appears fairly sound. While the number of simulated tasks is rather small, the authors validate their method across a combination of continuous (Meta-World) and multi-task language-conditioned discrete (ALFRED) control, as well as three simple manipulation tasks on a real robot. Experiments are conducted with 3 seeds; while additional seeds would make the experiments more sound, performance gains appear to be statistically significant as is. Both the VLM, reward model, and policy take RGB images as input, which makes the framework broadly applicable. My main concern with the paper in its current form is the lack of analysis on what exactly trajectories labelled with different ratings look like, what the typical distribution of ratings is for the considered tasks, and what the reward model outputs for successful vs. unsuccessful trajectories. It would be informative to analyze this from both a quantitative and a more qualitative perspective. 
Based on the ablation on number of ratings in Figure 6, it appears to me that the ratings might actually correspond to task stages (e.g. reaching, grasping, and pulling for the Drawer Open task). This would explain why the number of ratings has a pretty drastic effect on policy learning, since many tabletop manipulation tasks like the ones studied here can be divided quite naturally into a small number of distinct stages. It feels important to analyze this in more detail, as it will provide insights into the rating-based framework proposed in the paper while it would also help inform readers how to select the number of ratings (task stages?) if applied to novel problems. Supplementary Material: I skimmed through the appendices. I appreciate the amount of details provided regarding prompt templates as well as environment / algorithm implementation details. No code is provided. Relation To Broader Scientific Literature: The contributions of this paper, while somewhat incremental, are well motivated wrt prior work. The authors motivate their approach by providing insights into the limitations of current approaches for VLM-based reward generation, which is backed by empirical results. Essential References Not Discussed: I believe that the discussion of related work is fairly comprehensive and I am not aware of any essential references that are not currently mentioned. It is however possible that I might have missed some. Other Strengths And Weaknesses: **Strengths:** The paper is very well written and easy to follow. The discussion of related work appears to be thorough, and the experiments are fairly comprehensive and insightful. The problem is timely and likely to be of interest to the community. **Weaknesses:** There is limited quantitative and qualitative analysis wrt what exactly the ratings represent in the context of the chosen tasks; I believe that including some insights in this regard would greatly strengthen the paper. 
The technical content of the paper is somewhat incremental, so more analysis, insights, and takeaways would likely improve longevity and impact of the work. Other Comments Or Suggestions: It would be helpful to more explicitly mention that both VLMs, reward model, and policy takes RGB images as input. I had to read through the appendices to confirm that the policy was also image-based. Questions For Authors: I would like the authors to address my comments in the "experimental design and analysis" and "weaknesses" sections above, especially my comments regarding the effect of # ratings, what the ratings and rewards look like in practice (quantitatively and qualitatively), and how they relate to task stages. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful and constructive feedback, and for recognizing the clear motivation behind ERL-VLM, the clarity of our writing, and the strength of our experimental results. We address your comments in detail below: - **Q1**. What do trajectories labeled with different ratings look like in practice? - **A1**. Sixteen examples of states/trajectories labeled with different ratings are shown as follows: - MetaWorld tasks (*Sweep Into*, *Drawer Open*, and *Soccer*), where each task includes two or three different states, each leading to different ratings assigned by the VLM. These examples are shown in Figure 1 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing). - ALFRED tasks with three different instructions, each paired with three trajectories leading to varying ratings. These are shown in Figures 2, 3, and 4 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing). The reasoning behind the VLM’s ratings (obtained during the analysis stage of our prompts) is also illustrated in the figures. As shown, the VLM assigns appropriate ratings based on the status of the target object in relation to the task objectives specified in the task description. - **Q2**. What is the typical distribution of ratings for the considered tasks? - **A2**. The typical rating distribution for the tasks is shown in Figure 5 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing). As shown, for the first three MetaWorld tasks, the percentage of *Good* ratings increases as the timestep progresses. For ALFRED tasks, the percentage of *Average* ratings increases over time, while the percentage of *Good* ratings remains almost constant. 
This is because, in ALFRED tasks, successful states occur at the end of the trajectory, so the VLM tends to assign *Good* ratings mostly at the final step, resulting in an increase in the number of *Good* ratings, although the increase is relatively small compared to other ratings. - **Q3**. What does the output of the reward model look like in practice? - **A3**. The outputs of the reward models from ERL-VLM along expert trajectories in three MetaWorld tasks and four ALFRED tasks are shown in Figures 7 and 8 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing). We also include outputs from RL-VLM-F and CLIP for comparison. In MetaWorld tasks, CLIP rewards are generally noisy and poorly aligned with task progress. While both ERL-VLM and RL-VLM-F exhibit increasing reward trends along expert trajectories, ERL-VLM aligns more closely with the ground-truth task progress and shows significantly less noise compared to RL-VLM-F. In ALFRED, ERL-VLM produces smoother and more consistent reward signals along expert trajectories than the other methods. - **Q4**. The effect of #ratings. - **A4**. As mentioned in **Q1**, the VLM assigns ratings based on the status of the target object. For many tasks, the appropriate number of ratings can be easily determined (e.g., whether the task has been successfully completed is clearly observable from the visual state). However, for tasks where determining the optimal number of ratings is non-trivial, we use a trial-and-error approach. We demonstrate this process in Figure 6 at [Link](https://drive.google.com/file/d/19DtB_xCUhg8uTmTbE5bPDwXFoEn71PkL/view?usp=sharing), where we repeatedly query the VLM along an expert trajectory in the *Drawer Open* task. With $n=3$ rating levels, the VLM produces more consistent and intuitive feedback, with ratings generally increasing as the trajectory progresses toward task completion. Small inconsistencies are mitigated through our MAE objective. 
However, with other values (e.g., $n=2$ or $n=4$), the VLM exhibits greater inconsistency. For example, with $n=2$, the VLM sometimes assigns *Bad* ratings even when the state reflects successful task completion. Similarly, with $n=4$, both *Very Bad* and *Bad* ratings are occasionally assigned to successful states. We hypothesize that this behavior stems from inherent biases in the VLM. Based on this observation, we recommend performing a simple consistency check when designing prompts for new tasks to help select an appropriate number of rating levels.
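The consistency check recommended at the end of this rebuttal could be automated along the following lines. Here `query_vlm` is a noisy mock standing in for a real VLM call, and the 10% noise rate is an assumption for illustration; in practice one would query the actual VLM along an expert trajectory for each candidate number of rating levels and keep the one with the fewest rating reversals.

```python
# Sketch of the "simple consistency check" recommended above, for choosing the
# number of rating levels n. query_vlm is a noisy mock, not a real VLM call.

import random

def query_vlm(progress, n, rng):
    """Mock rater: maps task progress to one of n levels, with occasional noise."""
    level = min(int(progress * n), n - 1)
    if rng.random() < 0.1:
        level = rng.randrange(n)  # inconsistent label
    return level

def inconsistency_rate(n, trials=200, horizon=20, seed=0):
    """Fraction of consecutive queries whose rating drops while progress rises."""
    rng = random.Random(seed)
    bad = total = 0
    for _ in range(trials):
        prev = None
        for t in range(horizon):
            r = query_vlm(t / (horizon - 1), n, rng)
            if prev is not None:
                total += 1
                bad += r < prev  # rating reversal against task progress
            prev = r
    return bad / total

# Pick the candidate n with the fewest rating reversals along expert rollouts.
for n in (2, 3, 4):
    print(n, round(inconsistency_rate(n), 3))
```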
Nemotron-CORTEXA: Enhancing LLM Agents for Software Engineering Tasks via Improved Localization and Solution Diversity
Accept (poster)
Summary: This paper proposes CORTEXA, a coding agent for bug-fixing based on Agentless v1.5. CORTEXA differs from Agentless in two key ways: 1) it uses embedding-based file retrieval (using a finetuned embedding model) combined with an agentic entity-retrieval approach instead of direct prompting for localization, 2) it increases patch diversity by combining different temperatures, contexts, and prompt formats when sampling repair patches. Combined, this yields slightly better performance at (probably) lower cost. ### Update After Rebuttal I thank the authors for their detailed response and would like to encourage them to include the additional ablation results in a revised version of the paper. These additional results convince me of the effectiveness of the proposed approach beyond Agentless and agentic retrieval approaches. I have updated my score accordingly. Claims And Evidence: Key claims: * Their embedding-based file retrieval approach outperforms a purely prompt-based approach => Well supported in Table 2, although additional ablations would strengthen this result. * Their agentic retrieval approach combining different models and retrieval stages improves entity retrieval recall over Agentless => Well supported in Table 3 (when combined with ensembling across models), although interesting baselines are missing, such as using the same model with temperature sampling or combining different retrieval stages from a single model. * Their diversity-focused patch generation approach performs better than simply sampling at temperature => Well supported for pass@2 and two samples in Figure 4, with no ablations for the full end-to-end fixes with 9 generations. 
* Combining these components leads to a notably stronger method than Agentless => limited support, with the actual performance difference over the underlying Agentless being minimal (<2%) and no statistical significance reported (a t-test for SWE-Lite gives only p=34%). Methods And Evaluation Criteria: The dataset choice (SWE-Bench), evaluation criteria, and conducted ablation studies are mostly appropriate. Cost-based comparisons are hard to evaluate given the unclear API prices of the cited references. Comparing at an equal number of pre-filtering patches with Agentless may have yielded statistically significant improvements. Theoretical Claims: None Experimental Designs Or Analyses: I checked the soundness of all experiments reported in the main paper and they seem consistent and reasonable. However, more baselines and other methods could significantly strengthen the conclusions: in Table 2, an additional embedding baseline that generates descriptions of the to-be-retrieved code (and similarly for the embedded code), and direct agentic retrieval like SWE-Agent or AutoCodeRover; in Table 3, a comparison to other agentic retrieval approaches (e.g. AutoCodeRover v2, or even SWE-Agent v1). It should perhaps be made more explicit when CORTEXA (and Agentless) use an ensemble (e.g. Table 2 (Prompt only) and Table 3). Supplementary Material: I read the appendix. Relation To Broader Scientific Literature: Comparison to other methods in Table 1 is quite limited, with many approaches being relatively old and e.g. the closely related MarsCode Agent (published September 2024) not being included. A more detailed comparison of retrieval performance to e.g. MarsCode Agent and AutoCodeRover (which uses a similar agentic approach and was published in April 2024) should be included. While the authors claim MarsCode Agent to be contemporary, it was released more than 4 months before this submission. 
Essential References Not Discussed: No critical works missing, although a broader overview of current code agents could be provided; see e.g. the 26 different agents compared to in Agentless. Other Strengths And Weaknesses: ### Strengths * Focus on efficiency via diversity in ensembling, and as a result matching/improving performance at a lower cost, is an important and interesting direction * Demonstrates the effectiveness of fine-tuning embedding models specifically for code retrieval in the bug-fixing setting * Confirms the effectiveness of agentic entity retrieval and combining results across retrieval stages * Shows the effectiveness of increasing diversity before patch aggregation using different edit formats and contexts. ### Weaknesses * In multiple places relevant baselines are not considered, limiting conclusions to be relative to Agentless. * Important details are not described (e.g. how is the context filtered (Section 3.3)? What prompts are used?) * Without these details and the trained embedding model (or more details on its training process), the results are not reproducible. * Variability and statistical significance of results are not measured/discussed, but p-values computed with the available data show no statistically significant improvement over Agentless, despite a significantly more complex approach. Other Comments Or Suggestions: * I would suggest a table format with fewer (especially vertical) lines to improve readability. * Computing p-values using paired sample tests could strengthen claims. Questions For Authors: 1) Will the finetuned model, the generated datasets, or the agent traces be released? 2) How was the context filtered by the code agent (Section 3.3)? 3) How does localization cost compare to Agentless? How would an Agentless approach perform at a matched token cost in Table 3? 4) Can you discuss the ablation results on different chunking strategies mentioned around line 335 (left)? 
5) Can you compare to other localization methods in Tables 2 and 3 (see above)? Code Of Conduct: Affirmed. Overall Recommendation: 3
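On the reviewer's suggestion of paired sample tests: since both systems are evaluated on the same benchmark instances, per-instance resolved/unresolved outcomes form matched pairs, and an exact McNemar test on the discordant pairs is a natural choice. The sketch below is generic; the discordant counts are hypothetical, not taken from the paper.

```python
# Sketch of a paired significance test on per-instance outcomes: an exact
# two-sided McNemar test over the instances the two systems disagree on.
# The counts below are hypothetical, for illustration only.

from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar p-value.

    b = instances only system A resolves, c = instances only system B resolves.
    """
    n, k = b + c, min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)  # exact binomial test at p = 0.5

print(round(mcnemar_exact(20, 8), 4))  # clear asymmetry -> small p
print(round(mcnemar_exact(10, 9), 4))  # 1.0: near-symmetric discordance
```

This kind of paired test is more sensitive than an unpaired t-test on overall resolve rates, since concordant instances (both solve or both fail) carry no information about the difference between systems.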
Rebuttal 1: Rebuttal: We truly appreciate the time and effort you have put into reviewing our work and are grateful for your insightful comments that help improve our paper. For comparisons to SOTA works, please refer to our response to Reviewer WZSG. We would like to highlight that Cortexa outperforms Agentless in several key areas: 1. Patch Generation Efficiency: Our reported performance is with only 9 patches, whereas Agentless requires 40 patches. When generating a comparable number of patches as mentioned in response to Reviewer WZSG (32 patches), Cortexa significantly outperforms Agentless, achieving 58.8% on the Verified set (p-value = 0.006). 2. Localization Performance: While Agentless uses a combination of prompt-based and embedding-based approaches, our specialized embedding model achieves higher retrieval accuracies. From Table 2 in the Agentless paper, its localization cost is `$0.15/instance`, while Cortexa’s localization cost is `$0.11/instance` (including file and entity localization) as shown in Appendix A.2. 3. Recall in Oracle Entity Identification in Table 3: Our recall rate in identifying the oracle entity is statistically superior to Agentless, with a p-value of 0.0007. We will release our code including all the prompts we used, full logs, and our embedding model upon the paper’s publication. **Additional baselines on retrieval:** Here are the new baseline results on the Verified set: **1. Temperature sampling and ensembling across multiple stages instead of ensembling multiple models:** We use DeepSeek-v3 here as it is individually the strongest in entity localization. Neither method can match Cortexa’s recall. The temperature sampling approach obtains a higher precision since it retrieves fewer entities. However, recall matters more for patch generation to provide contextual information. The results show that model diversity is the most effective approach here. 
|Model|Precision|Recall|
|-|-|-|
|Temp Sampling (deepseek-v3)|49.93%|57.03%|
|All stages (deepseek-v3)|35.47%|53.47%|
|Cortexa (DP+LA)|36.22%|68.09%|

**2. Agentic retrieval baselines:** We select 4 agentic approaches here: OpenHands, AutoCodeRover-v2, MarsCode Agent, and SWE-agent-v1. For each agent we extract files from their official trajectory logs in the SWE-bench leaderboard. Since these methods can retrieve an arbitrary number of files, we denote the accuracy metric with recall@inf. In this comparison, we use recall@10 for Cortexa as other baseline methods retrieved at least 15 files on average. For entity retrieval accuracy, these agentic approaches often do not explicitly retrieve entities. Moreover, there is no exact way to measure entity retrieval results from their logs as there is no fixed pattern. Thanks to its localization agent step, Cortexa provides a concrete way to measure the more granular entity retrieval accuracy, which proved helpful to iterate and improve the agent.

|Model|Recall@inf|
|-|-|
|OpenHands|85.6%|
|AutoCodeRover-v2|88.2%|
|MarsCode|87.2%|
|SWE-agent-v1|76.8%|
|Cortexa|94.0% (recall@10)|

**Impact of diversifying generations:** To address this comment, we run the following experiments on the Verified set:
1. Taking the context (localization agent, LA) and the edit format (search/replace) that give us the highest pass rate with temperature 0 and generating 9 patches for it (1 greedy sampling and 8 temperature sampling).
2. Changing the edit format to use both edit formats.
3. Changing the context to other choices of DP (direct prompt), LA+DP, and File that uses the results of the embedding model.

Table below summarizes the pass@9 and pass@1 (after applying the filtering steps outlined in the paper) results for each combination. 
These results support our claim that diversifying both the edit format and context increases the number of correct patches, as each combination has its own strengths and weaknesses, and diversification allows us to leverage all of them.

|Context|Edit Format|pass@9|pass@1|
|-|-|-|-|
|All|Both|307|263|
|LA|Search/replace|270|237|
|LA|Both|284|245|
|DP|Both|261|220|
|LA+DP|Both|286|239|
|File|Both|257|223|

**Context filtering:** In our prompts, we ask the agent to filter irrelevant context by issuing a special `<keep>` command after consuming a list of code context. We will provide these prompts upon code release.

**Chunking strategy:** We experimented with the SFR-Embedding-Mistral model, starting with a chunk size of 4k and gradually reducing it. We observed a recall@k of 43.67% at a chunk size of 4k on the Lite set, and it peaked at a chunk size of 450 with a recall@k of 57.33%. Based on these results, we selected a chunk size of 450 for both training our model and final inference. Similar to the SFR-Embedding-Mistral model, our model's retrieval accuracy decreases with larger chunk sizes. For instance, with a chunk size of 4k, the retrieval accuracy on the Lite set is 35.67%, compared to 70.33% with a chunk size of 450. We will include the full experimental results in the paper.
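The fixed-size chunking described above can be sketched as follows. This is an illustrative stand-in only: `chunk_file` is a hypothetical helper, whitespace tokens approximate the embedding model's real tokenizer, and 450 follows the chunk size reported in the rebuttal.

```python
# Illustrative sketch of the fixed-size chunking strategy discussed above.
# `chunk_file` is a hypothetical helper; whitespace tokens stand in for the
# embedding model's actual tokenizer.
def chunk_file(text, chunk_size=450):
    tokens = text.split()
    return [" ".join(tokens[i:i + chunk_size])
            for i in range(0, len(tokens), chunk_size)]

# A toy "source file": 300 copies of a 7-token snippet -> 2100 tokens.
code = "def add(a, b):\n    return a + b\n" * 300
chunks = chunk_file(code, chunk_size=450)
print(len(chunks))  # 5 chunks of at most 450 tokens each
```

In practice, the chunk size trades off context per chunk against retrieval granularity, which is consistent with the recall@k sweep reported above.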
Summary:
- This paper introduces CORTEXA, a software agent that involves training a model specifically for localizing the right files, building a localization agent to identify the right entities within a file, and a workflow for diverse patch generation and selection.
- CORTEXA outperforms Agentless and achieves similar performance to OpenHands on SWE-Bench. The paper also individually analyzes each component of the agent.

Claims And Evidence: The paper's main claims are:
- We develop a code embedding model specialized in retrieving relevant code chunks to a given bug, achieving state-of-the-art file retrieval accuracies on SWE-bench: this is supported by Table 2; however, additional baselines could be considered.
- We propose a diverse solution generation method by leveraging different contextual information and varied prompt formats, significantly enhancing sample efficiency: I was unsure where the sample efficiency results could be found.
- We design a localization agent that integrates advanced programming tools and leverages an ensemble of LLMs to deliver more precise and granular issue localization. Experimental results demonstrate that CORTEXA outperforms Agentless [...] while being more cost-effective: CORTEXA does outperform Agentless, but this result does not mention other baselines like OpenHands.

Methods And Evaluation Criteria:
- Details on how the localization and repair stages interact with each other are unclear.
- Why did the authors select this particular model (NV-Embed-QA)? Additional justification would be helpful.
- The details about the localization models are somewhat short and not expanded on in the Appendix. For example, the authors mention "The final dataset contains approximately 534k pairs of query-positive documents" but do not provide details about the specific distribution.

Theoretical Claims: N/A

Experimental Designs Or Analyses:
- Evaluation is limited to SWE-Bench only. In contrast, Wang et al. 2024a evaluates on 15 benchmarks.
I raise this point particularly because the performance on SWE-Bench alone is not that different from prior work, so a comparison on additional benchmarks could help distinguish the performance of CORTEXA. An additional point is that because the localization model is trained on SWE-Bench data, it would be interesting to evaluate on more diverse datasets.
- Additional experiments ablating the performance of the 3 components identified as key contributions would be helpful. Which ones affect CORTEXA performance the most? While I appreciate the individual evaluation of each component, I believe the broader community would be more interested in understanding which component to focus on when building new software agents.
- In Table 2, why do you only consider GPT-4o for Prompt? Claude would be a much stronger baseline to compare against.

Supplementary Material: Yes, all of it.

Relation To Broader Scientific Literature: The various components of CORTEXA aim to address various pain points that have been discussed in the broader literature regarding software agents. Sections 4.3-4.5 provide interesting insight into how to design these different agent components.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: Could you please address the aforementioned questions about CORTEXA methodology and evaluation?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for taking the time to review our work and for offering constructive feedback to help enhance our paper. Regarding the comparison to the SOTA works, please refer to our response to Reviewer WZSG. Additionally, our sample efficiency is demonstrated by achieving higher resolution rates than Agentless with only 9 patches, compared to Agentless's 40 patches.

**Importance of different components:** One of the main messages of our work is that improved localization of the issue is key to increasing resolution. Our initial experiments demonstrated that when GPT-4-turbo was given the ground truth file plus additional relevant files to fill the context window, it solved only 4 Lite set instances with 1 greedy patch. Providing only the oracle file increased this to 53 instances, and narrowing to ground truth functions/classes reached 72 instances. The dramatic increase in instance resolution from narrowing the context suggests that models are severely impacted by superfluous context and can benefit from a more localized edit location. This observation is the reason why we focused our work on better localization.

To verify if this phenomenon holds in our final end-to-end results, we ran new experiments by generating 9 patches with the results of the file localization vs the entity localization, as shown in our response to Reviewer sboj. Using the entities retrieved by the localization agent (LA) as context, we get to a pass@1 of 245, while the pass@1 with File retrieval results is 223. This is consistent with our early observations.

Our second key contribution concerns generating diverse candidate patches. We induce diversity by using the variety of different contexts obtained throughout the localization process, in addition to requesting different edit formats as outputs from the language models.
Using the standard temperature sampling with the LA results and search/replace edit format, we obtain a pass@9 of 270 and a pass@1 of 237 after the filtering steps mentioned in Section 3.5. Using our approach to maximize patch diversity, we increase the pass@9 to 307 and the pass@1 to 263. A further advantage of this patch diversity method is that it is straightforward to adopt and makes more effective use of the existing sampling budget by leveraging the strengths of different contexts. We believe this technique has been overlooked by previous works and could easily be adopted independent of Cortexa.

As demonstrated in Table 2, our embedding model significantly influences file localization. To further evaluate its effect on entity localization, we performed new experiments on the Verified set by running the entity localization step with file retrieval results from the base model. The localization agent takes a list of files from the file localization step as the starting point and navigates the repository as defined in Section 3.3. The table below indicates that starting with more accurate relevant files enhances the agent's success in identifying oracle entities.

| Embedding Model | Entity Precision | Entity Recall |
|---|---|---|
| NV-Embed-QA | 27.62% | 65.03% |
| Cortexa | 36.22% | 68.09% |

**Localization model:** We chose NV-Embed-QA as our pretrained model as it is the strongest commercially-available text embedding model in our tests. Its commercial availability aligns with our goal of releasing Cortexa and its associated retrieval model. Our dataset includes 290k query-positive pairs from 19k issues across 35 non-test repositories, with positive passages from oracle files in the golden patch. Additionally, we generated 125k coding question-answer pairs using DeepSeek-v2.5. We also included data from APPS, CoSQA, Text2SQL, CodeTransOcean, and StackoverflowQA. We will add a detailed table of all data to the paper.
While we acknowledge the potential of the embedding model for broader coding tasks, its purpose here was to help with retrieving relevant files for an issue description. A more comprehensive analysis of the embedding model on diverse datasets falls out of the scope of this paper and will be addressed in future work.

**Claude for Table 2:** We agree that several strong LLMs exist that may surpass GPT-4o here, such as Claude; however, we use Agentless as our baseline and thus wanted to facilitate comparisons by keeping the same LLM. Another important motivation in our choice of embedding models over prompt-based approaches is that many LLMs, including GPT-4o and Claude, have been trained on these exact repositories, allowing them to identify relevant files by name alone. By contrast, our embedding model does not suffer from this same test leakage and thus ensures that the gains cannot be attributed to memorization from a specific LLM. Furthermore, reading the full contents of the files with these LLMs is prohibitively costly, as shown in CodeMonkeys (Ehrlich et al., 2025), whereas our embedding model achieves equal or better accuracy using only a fraction of the cost.
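For reference, the retrieval accuracy discussed throughout this thread is typically reported as recall@k against the oracle files in the golden patch. A minimal illustrative sketch (the file names and the `recall_at_k` helper are made up, not Cortexa's actual code):

```python
# Illustrative recall@k for file retrieval: the fraction of oracle files
# (files touched by the golden patch) found in the top-k retrieved list.
# File names and this helper are hypothetical stand-ins.
def recall_at_k(retrieved, oracle, k):
    return len(set(retrieved[:k]) & set(oracle)) / len(oracle)

retrieved = ["src/utils.py", "src/core.py", "tests/test_core.py"]
oracle = ["src/core.py", "src/cli.py"]

print(recall_at_k(retrieved, oracle, k=3))  # 0.5: one of two oracle files found
```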
Summary: The work proposes an agentic system around LLMs to solve GitHub issues (SWE-bench tasks). The authors mainly propose a code embedding model used for file retrieval and build localization, diverse patch generation, and filtering mechanisms around it. Finally, they demonstrate the good performance of Cortexa while being cost-effective.

Claims And Evidence: Yes, the claims are clear. The evidence presented is somewhat okay but could be more convincing by reporting confidence intervals.

Methods And Evaluation Criteria: The method and evaluation criteria (resolve rate, precision, and recall) seem reasonable. However, they report single numbers, while LLMs are quite stochastic and should ideally be run multiple times to report a confidence interval.

Theoretical Claims: Experimental paper --> no theoretical claims.

Experimental Designs Or Analyses: Yes. I think the experiment design (and dataset) looks alright for this work. They evaluated on SWE-bench, which consists of real-world GitHub issues.

Supplementary Material: Yes

Relation To Broader Scientific Literature: There is already some work in code agentic systems focusing on code retrieval, AST graphs, patch selection, etc. This work shows the benefits of a fine-tuned retrieval model used in a loop with coding agents to solve GitHub issues. Also, there are works which achieve more than 60% in pass rate, and the work doesn't connect to or comment on them.

Essential References Not Discussed: None

Other Strengths And Weaknesses:
Strengths:
- Paper is clearly written and easy to understand
- Novel code embedding model and its utilization to resolve GitHub issues by using it within agentic systems

Weaknesses:
- Performance is weaker compared to other SOTA methods
- Doesn't report confidence intervals, so it is currently uncertain if we can use the results to derive conclusions

Other Comments Or Suggestions: Line 38: Identify what? Why does localization via ensemble increase precision?
It should reduce precision due to the stochasticity of different LLMs combined together. Can you comment on why the performance of OpenHands is better than Cortexa's?

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your valuable feedback. We appreciate your thoughtful comments.

**Comparison to SOTA:** In response to your comment on Cortexa's performance compared to other works, we would like to emphasize that our primary goal was to identify ways to increase the efficiency of coding agents. We observed that while free-form agentic flows are effective here, their vast pool of actions often leads to numerous costly iterations. Similar to Agentless, we aim to develop a more structured approach by identifying and improving key stages crucial for the final task, ultimately increasing efficiency and performance simultaneously. This approach also simplifies debugging and human intervention. For instance, while OpenHands achieves a 26% resolve rate on SWE-bench Lite at `$1.1/instance` (with the cost for other submissions unreported), Cortexa achieves a 42% resolve rate at `$0.51/instance`.

We recognize that Cortexa's performance can be further improved by refining the selection process and/or scaling the number of generated patches. These optimizations are orthogonal to our approach and are left as future work. Currently, the majority voting in the patch selection process is suboptimal, especially since we diversify the solutions generated, and a correct solution may not appear multiple times. Indeed, we observe that Cortexa's performance can be increased by adopting a better patch selection approach: we've switched from a majority vote to using an LLM for selection, and pass@1 on the Lite set increases to 42.67% and on the Verified set to 54.8%. Additionally, another approach that brings our results in line with and beyond most open approaches is scaling the number of generated patches.
For instance, increasing it to 32 (16 by Claude-3.5-Sonnet and 16 by DeepSeek-V3) increases the performance on the Verified set to 58.8%.

**Confidence interval:** We used a temperature setting of 0 for all LLM calls, except for 4 out of 9 generated patches during the repair stage. Each of these 4 patches utilized different combinations of context and edit formats. To address the concern about the stochasticity of LLM calls, we generated two additional patches for each combination (temperature sampling), resulting in 81 variations of the final 9 patches. The table below shows the peak performance statistics of these 9 patches on the Verified set. As mentioned in Appendix A.1, the patches used in the paper achieved a peak performance of 307 instances (61.4%), which falls within the confidence interval. We will include these results in the paper.

| Target | Min | Max | Mean | Std |
|---|---|---|---|---|
| Peak perf (pass@9) | 299 | 311 | 304.52 | 2.71 |
| #valid patches | 7.75 | 7.88 | 7.79 | 0.02 |

**Ensemble increases precision:** Indeed, this is a surprising effect we observe. In Table 4 we reported precision/recall averaged across instances. To illustrate why ensembling is helpful, consider the following example retrieval task over 5 documents (A, B, C, D, E), with two queries and relevant documents {Q1, (A, B)} and {Q2, (C, D)}. Say method 1 returns (A, C) for Q1 and (A, B) for Q2; method 2 returns (C, D) for Q1 and (B, C) for Q2. Method 1's mean precision is (1/2 + 0/2) / 2 = 1/4. Method 2's mean precision is (0/2 + 1/2) / 2 = 1/4. The union of both methods results in (A, C, D) for Q1 and (A, B, C) for Q2. The ensemble mean precision is (1/3 + 1/3) / 2 = 1/3, greater than either method's precision (1/4). For each individual query, however, the union ensemble has a lower precision (1/3) than the highest individual precision (1/2).
This phenomenon explains why our ensemble method benefits from models’ strengths in solving different problem subsets.
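The arithmetic in this worked example can be checked directly with a short script (names like `mean_precision` are illustrative; exact fractions avoid floating-point noise):

```python
from fractions import Fraction

# The worked retrieval example: two queries over documents A-E, two
# methods, and their union ensemble.
relevant = {"Q1": {"A", "B"}, "Q2": {"C", "D"}}
method1 = {"Q1": {"A", "C"}, "Q2": {"A", "B"}}
method2 = {"Q1": {"C", "D"}, "Q2": {"B", "C"}}

def mean_precision(retrieved):
    # Mean over queries of |retrieved ∩ relevant| / |retrieved|.
    per_query = [Fraction(len(retrieved[q] & relevant[q]), len(retrieved[q]))
                 for q in relevant]
    return sum(per_query, Fraction(0)) / len(per_query)

union = {q: method1[q] | method2[q] for q in relevant}

print(mean_precision(method1))  # 1/4
print(mean_precision(method2))  # 1/4
print(mean_precision(union))    # 1/3 -- higher than either method alone
```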
Dynamical Modeling of Behaviorally Relevant Spatiotemporal Patterns in Neural Imaging Data
Accept (poster)
Summary: This work proposed a novel learning framework, SBIND, to model spatiotemporal neural imaging data. SBIND is able to model the spatiotemporal dynamics of neural activity, and at the same time it can disentangle the behaviorally relevant neural dynamics. The authors experimented with both calcium and ultrasound imaging datasets. Further model comparison with baselines demonstrates the proposed method is able to decode the behavior data and recover the neural activity from the latent space with reasonable accuracy. Ablation studies were also conducted to prove the effectiveness of the proposed method.

Claims And Evidence: The claims are reasonable.

Methods And Evaluation Criteria: The motivation behind taking ConvRNN1 outputs as inputs to ConvRNN2 is unclear.

Theoretical Claims: No theoretical claims were made in the submission.

Experimental Designs Or Analyses: The experimental designs are reasonable. How is the hyperparameter search performed?

Supplementary Material: I went through the SBIND architecture and implementation details, data details, and supplementary experiments.

Relation To Broader Scientific Literature: Previous literature learns the behaviorally relevant neural dynamics and behaviorally irrelevant neural dynamics in two stages, while SBIND can learn them simultaneously in one stage.

Essential References Not Discussed: The submission includes sufficient references as far as I know.

Other Strengths And Weaknesses: The authors proposed a novel method to learn behaviorally relevant neural representations and neural dynamics at the same time. The method can be applied to a general neural imaging dataset. The performance improvement is significant compared to previous methods. Besides, the model architecture simultaneously learns the behaviorally relevant dynamics and the general dynamics, which can help with the understanding of neural functions.
It will also be good to see how the proposed method can be applied to electrode-based neural recordings and to compare the results with the baseline methods. I believe that, technically, the approach can be applied to various data formats.

Other Comments Or Suggestions: On lines 88-88, the name CovRNN is confusing: "utilized convolutional recurrent neural networks (CovRNN)," while the model's temporal relationships are learned with attention instead of an RNN. In Section 2.2, when mentioning ConvRNN, it is better to specify ConvRNN1 or ConvRNN2 to avoid confusion.

Questions For Authors:
- In the method section, what do $n_y$, $n_x$, and $n_z$ correspond to?
- Is it necessary to add ConvRNN1 outputs to ConvRNN2? I think there will be redundant information.
- How large is the current dataset? It seems the number of available trials is limited. Did the authors see any overfitting issues?
- What do the authors think are the main components that contribute most to the performance improvement?
- For the $f_A$ model, the authors mentioned the function is for learning long temporal relationships; how do the authors concatenate image patches with the temporal steps? How will the number of patches and the length of the temporal window affect the results?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal:

### [1]: Motivation for ConvRNNs

We thank the reviewer for this insightful question. There are two reasons we need to pass the ConvRNN1 states to ConvRNN2. First, ConvRNN2 aims to capture **residual neural dynamics** not explained by the behaviorally relevant states, $X_k^{(1)}$. To ensure $X_k^{(2)}$ learns dynamics not included in $X_k^{(1)}$, we pass $X_k^{(1)}$ to ConvRNN2. Thus, this connection functions similarly to a **residual connection**, which avoids re-learning the dynamics of ConvRNN1. Second, Sani et al. 2021 have proven that for linear dynamical systems, to get a general disentangled linear state-space model, one needs a link from $X_{k+1}^{(1)}$ to inform the recursion of $X_k^{(2)}$ (i.e., how $X_{k+1}^{(2)}$ is derived from $X_k^{(2)}$). Although SBIND uses nonlinear ConvRNNs, this linear theory provides another motivation for passing the behaviorally relevant states from ConvRNN1 to ConvRNN2.

### [2]: Hyperparameter (HP) Search

Primarily, we swept the latent dimension $n_x$ and the patch size for all datasets, as these were found to be the most critical HPs for performance. Other major HPs (e.g., learning rate, sequence length, convolutional layers) were fixed based on the best validation performance on the WFCI datasets. For the fUSI dataset, these HPs were picked based on the best behavior prediction in the validation set using one representative session and then fixed at these values across all sessions.

### [3]: Applying to Electrophysiological (EP) Data

We thank the reviewer for the insightful suggestion to assess SBIND's robustness on EP data. While SBIND is primarily designed for the unique challenges of neural imaging, we agree this is a valuable comparison. We applied SBIND to an **EP dataset** (O'Doherty et al., 2020) containing smoothed spike counts from 42 channels and 4D behavioral kinematics (horizontal and vertical velocity and position) from two sessions of NHP reaching movements.
We did not have access to the spatial coordinates of the electrodes, so we could not form an image based on true coordinates. Instead, for comparison purposes, we formed an **arbitrary** 6x7 2D pseudo-image from the 42 channels. Despite the pseudo-image input lacking true spatial structure, SBIND performed comparably to DPAD (**new Table B.4**), suggesting the robustness of SBIND's dynamical modeling and disentanglement approach. Finally, as the reviewer noted, we agree SBIND holds potential for other imaging modalities or modalities that naturally possess a grid structure, such as EEG recordings.

**Table B.4: Comparison with DPAD on EP Recordings**

|Model|Neur. R2|Beh. R2|
|-|-|-|
|DPAD|0.8321±0.0089|0.5053±0.0201|
|SBIND|0.8247±0.0079|0.4972±0.0212|

---

### [4]: Comments & Suggestions

We thank the reviewer for the helpful suggestions to improve clarity:

* **ConvRNN naming**: SBIND employs an **attention-augmented ConvRNN** architecture. While the self-attention (SA) module captures *spatial* dependencies within each time step, the architecture remains fundamentally *recurrent* because the state equation ensures that $X_{k+1}$ depends on $X_{k}$. Therefore, we used "ConvRNN" to describe the overall model class, with the understanding that it incorporates SA. We will clarify this in the manuscript and ensure "ConvRNN1" and "ConvRNN2" are used consistently.
* **$n_y, n_x, n_z$ definitions**: Please refer to Reviewer z1x2, response [1].

### [5]: Dataset Size

WFCI 1 and WFCI 2 contain 49k and 39k frames, respectively. While the fUSI dataset has ~5k frames per session, the availability of 13 sessions provides enough data overall for evaluation. To mitigate overfitting, we employed standard techniques including L2 weight decay, dropout, and early stopping. Example loss curves are provided via [this anonymous link](https://anonymous.4open.science/r/sbind-88FF/loss.png), showing that overfitting was not an issue.
### [6]: Main Contributing Components

Ablation studies (Sec 4.2; Tables 1, A.1, A.2; Fig 4) show SBIND's performance benefits stem from its ConvRNN architecture with integrated SA and the use of the GDL loss for more precise neural dynamical modeling. Also, learning behaviorally relevant dynamics directly from raw images leads to better behavior decoding.

### [7]: $f_A$ function

The SA operates purely in the *spatial* domain within a single time step, k. At each time, the latent image $X_{k}$ is divided into spatial patches, and the multi-head SA computes the relationships between these patches *at that specific time* (not across different time steps). The **temporal dependencies** are handled by the outer *recurrent structure* of the ConvRNN, where $X_{k+1}$ is computed based on $X_{k}$. The effect of patch size (which influences the *spatial scope* of attention) was investigated in our ablation study (Figure A.6), showing that larger patch sizes result in better self-prediction. Also, the sequence length HP was chosen to balance sufficient temporal context with the computational limits.

**References:** Line 495
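The per-time-step spatial self-attention described in [7] can be illustrated with a minimal single-head NumPy sketch. All shapes, random weights, and the patching scheme below are made-up stand-ins for SBIND's actual attention-augmented ConvRNN modules:

```python
import numpy as np

# Minimal single-head sketch of spatial self-attention over patches of one
# latent image X_k at a single time step (illustrative, not SBIND's code).
rng = np.random.default_rng(0)
C, H, W, P = 4, 8, 8, 4               # channels, height, width, patch size
X = rng.normal(size=(C, H, W))        # latent image at time k

# Split into non-overlapping P x P patches and flatten each one.
patches = (X.reshape(C, H // P, P, W // P, P)
             .transpose(1, 3, 0, 2, 4)
             .reshape(-1, C * P * P))  # (num_patches, patch_dim) = (4, 64)

d = patches.shape[1]
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = patches @ Wq, patches @ Wk, patches @ Wv

# Scaled dot-product attention among patches: spatial only -- no attention
# across time steps; temporal structure is handled by the recurrence.
scores = Q @ K.T / np.sqrt(d)
attn = np.exp(scores - scores.max(axis=1, keepdims=True))
attn /= attn.sum(axis=1, keepdims=True)
out = attn @ V
print(out.shape)  # (4, 64): one attended vector per patch
```

Note how the attention weights mix information only across spatial patches of the same time step, consistent with the rebuttal's point that temporal dependencies live in the recurrent update.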
Summary: The authors propose SBIND, which learns behaviorally relevant and irrelevant neural dynamics directly from high-dimensional imaging data without preprocessing. The authors apply SBIND to widefield imaging datasets and functional ultrasound, and find that SBIND outperforms existing methods that involve preprocessing in predicting one-step-ahead behavior and neural activity.

Claims And Evidence: The authors claim that SBIND outperforms existing methods, and this is well demonstrated by Tables 1-3, where the authors compared SBIND not just to other methods across multiple datasets but also to SBIND variants with different architectures. This analysis allows identifying which components of the model contribute to good performance. The authors show that using both the local and long-range spatial dependencies in the images for behavioral and neural activity prediction is important for performance. This is demonstrated in, e.g., Figure 4 and Appendix A.4.

Methods And Evaluation Criteria: The SBIND method involves training two different ConvRNNs in two stages. In the first phase, a ConvRNN with a self-attention mechanism is trained so that it is optimized to perform behavior prediction on the next timestep along with neural activity prediction. This allows learning a low-dimensional representation that is behaviorally relevant. In the second phase, another ConvRNN with a similar architecture tries to predict only neural activity, but from both the latent representation it learns and the latent representation from the first ConvRNN. This allows the second ConvRNN to learn a low-dimensional representation that is behaviorally irrelevant. This disentangling of representations is shown to be important in SBIND's performance (Table 1) and makes sense for this application of decoding behavior from neural activity.
Instead of simply using MSE, the authors use the gradient difference loss (GDL) with L1 and L2 functions, which, the authors note, have been shown to preserve local image structures (Mathieu et al., 2016). This makes sense for this application.

Theoretical Claims: There were no theoretical claims in this paper.

Experimental Designs Or Analyses: The experiments are sound. The authors have done rigorous ablation studies in Tables 1, A.1, and A.2 for calcium images and Tables A.5 and A.6 for ultrasound. The authors also report the hyperparameters used in SBIND in Table A.1.

Supplementary Material: Yes, I checked A.1.3 and A.5.

Relation To Broader Scientific Literature: In contrast to previous papers that involve preprocessing steps (e.g., PCA) and pre-defined ROIs (e.g., LocaNMF), SBIND takes the calcium imaging and ultrasound imaging data directly to perform neural activity and behavior prediction. As far as I know, methods that can be applied to fUSI are relatively sparse in neuroscience, and because SBIND is a general model that can be applied to both calcium images and ultrasound, this method may have wide applications.

Essential References Not Discussed: I couldn't think of essential references not cited in this paper.

Other Strengths And Weaknesses: A major strength of this work is that it is widely applicable to both calcium imaging and functional ultrasound data. It uses a convolutional RNN and an attention mechanism to take into account local and global spatial information when predicting the next image and behavior, which has not been done previously. The clarity of the paper can be improved.

Other Comments Or Suggestions:
- Line 87 space typo.
- What do H and W mean in Equation (1)? Please define all symbols used in equations.
- Also, for consistency, maybe it could say X_{k+1} given {Y_1, …, Y_k}. For inference, couldn't you have used all Y_{1:K}?
- The anonymous link to the repository didn't seem to work for me. Is this an error on my end?
Questions For Authors:
1. How is the performance affected if the model is trained not in two stages but all at once? Is it possible to do this and still achieve disentangling of the latents? Would this potentially improve training time and also performance?
2. Can the method be used to perform behavioral decoding not just for the next timestep, but also for much later future timesteps? How fast is the inference step? I think mentioning these in the text might be helpful in the context of being useful for BCI.
3. In Equation (2), why is it that X and Y go into different networks and are summed? Could you have made the model so that X and Y both go into f_A?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:

### [1]: Comments & Suggestions

Thank you for these helpful suggestions for improving the clarity of our work.

* We will fix the typo on Line 87 and ensure all symbols are clearly defined upon first use. Specifically, $n_y$ is the number of neural images in $Y_k$ at time k, and H and W are the height and width. $n_x$ is the number of dimensions of the latent states, and $n_z$ represents the dimension of the behavior vector, $z_k$. Across all datasets, $n_y=1$, but we include $n_y$ for generality of the formulation.
* We agree with the suggestion regarding notation consistency and will amend Line 162 to clarify that the RNN estimates $X_{k+1}$ given $Y_{1:k}$.
* **Inference using the full sequence $Y_{1:K}$**: Our model performs inference causally (prediction at time k+1 uses observations up to time k), given its importance for causal neuroscience investigations and real-time BCI applications. Using the full sequence would correspond to non-causal inference (i.e., smoothing), which was not the goal of this work but can be achieved using bidirectional RNNs in future studies.
* Finally, we have verified that the anonymous link to the code works; the previous issue may have been a temporary glitch.

---

### [2]: Combined Loss

We adopted the two-stage approach because it allows us to explicitly disentangle the behaviorally relevant latent states $X^{(1)}$. The ConvRNN in Stage 1 is optimized *specifically* for behavior prediction, allowing the first latent state, $X^{(1)}$, to focus on capturing the behaviorally relevant dynamics. Stage 2 then focuses on modeling the *residual* neural dynamics, which are not predictable from $X^{(1)}$. This provides a clear separation/disentanglement of states. While one-stage training with a mixed neural-behavioral loss is possible, doing so poses a challenge for clear disentanglement.
Specifically, in this case, there is no guarantee that any state dimension will focus on just the behaviorally relevant or just the residual dynamics. As such, states may actually be mixed up rather than disentangled. Doing so also may create practical challenges. First, the mixed-loss optimization relies heavily on **tuning the relative weights** of the neural and behavior loss terms, which can be difficult and sensitive to hyperparameters. Second, achieving disentanglement with a single combined loss may necessitate incorporating additional specialized loss terms or architectural constraints specifically designed for separation, e.g., the KL divergence terms in TNDM (Hurwitz et al., 2021), which we now compare against and show that SBIND outperforms (see Table B.2); finding the optimal balance for these specialized terms typically requires a potentially sensitive hyperparameter tuning process. In contrast, our two-phase approach avoids this sensitive hyperparameter tuning.

---

### [3]: Multi-step Decoding & Inference Speed

We thank the reviewer for this important question. We will add these points to the manuscript.

**Multi-step Prediction:** SBIND's recurrent architecture inherently allows for **multi-step** prediction. We can do so with no additional optimization or retraining by performing **recursive forecasting** during inference as follows: instead of feeding the neural observation at the next time step into the neural encoder ($K^{(1)}$), we can feed the model's own one-step-ahead neural image prediction, $\hat{Y}_{k+1}$. This allows the model to predict the subsequent state $X_{k+2|k}$, behavior $z_{k+2}$, and neural image $Y_{k+2}$ without having the observation at time k+1, and this same process can be iterated further into the future.
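The recursive forecasting loop described above can be sketched numerically. Everything below (the linear maps `A`, `E`, `Dy`, `Dz` and the `step` function) is a random stand-in for SBIND's learned encoder, state update, and decoders, shown only to illustrate feeding predictions back into the model:

```python
import numpy as np

# Recursive (closed-loop) forecasting with a generic recurrent model.
# All matrices here are random stand-ins, not SBIND's learned weights.
rng = np.random.default_rng(1)
n_x, n_y = 8, 16
A = 0.3 * rng.normal(size=(n_x, n_x))   # state transition (stand-in)
E = 0.3 * rng.normal(size=(n_x, n_y))   # neural-image encoder (stand-in)
Dy = rng.normal(size=(n_y, n_x))        # neural decoder (stand-in)
Dz = rng.normal(size=(2, n_x))          # behavior decoder (stand-in)

def step(x, y):
    # One recurrent update: next state from current state + encoded input.
    return np.tanh(A @ x + E @ y)

y_obs = rng.normal(size=n_y)            # last available observation Y_k
x = step(np.zeros(n_x), y_obs)          # predicted state X_{k+1|k}

preds = []
for _ in range(5):                      # forecast 5 steps beyond Y_k
    y_hat = Dy @ x                      # predicted neural image
    z_hat = Dz @ x                      # predicted behavior
    preds.append((y_hat, z_hat))
    x = step(x, y_hat)                  # feed the prediction back in

print(len(preds))  # 5 multi-step predictions without new observations
```

The key point is in the last line of the loop: once observations stop, the model's own output replaces the encoder input, exactly the closed-loop rollout the rebuttal describes.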
**Inference Speed:** A single inference step of the SBIND model takes **17.9 ms** on an NVIDIA RTX 6000 GPU, which is faster than the sampling intervals of WFCI (\~33-67 ms) and fUSI (\~100-500 ms), suggesting potential feasibility for real-time BCI applications (Rabut et al, 2024; Mace et al, 2011). --- ### [4]: Equation 2 This is another great question. Because $X_k$ and $Y_k$ have **different spatial dimensions** ($H' \times W'$ vs. $H \times W$), we must first *encode* $Y_k$ using $K(\cdot)$ to obtain a representation $K(Y_k)$ with the same spatial dimensions as $X_k$. Afterwards, there are two main options: either **sum** $K(Y_k)$ with the processed state $f_A(X_k)$ (as formulated in Eq. 2, or equivalently, Eq. A.1), or **concatenate** $K(Y_k)$ with the state $X_k$ before applying the recurrent function $f_A(\cdot)$ (as discussed in Appendix A.1.5). For the WFCI datasets reported, we utilized the **concatenation approach** (Eq. A.12), and it yielded better performance as seen in the **new Table B.3**. **Table B.3: Comparison of SBIND Variants on WFCI1 Dataset.** ||Neur. R2|Beh. R2| |-|-|-| |SBIND w. Recurrent Eq. A.12|0.8724±0.0069|0.5059±0.0166| |SBIND w. Recurrent Eq. A.1|0.8545±0.0027|0.4680±0.0182| **References**: Rabut et al., Functional ultrasound imaging of human brain activity through transparent. Sci Transl Med. 2024. Also see Lines 502, 462. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response to my questions. I will keep my recommendation for acceptance. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for considering our response and confirming their recommendation for acceptance. We are grateful for their insightful comments and questions throughout the review process.
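As a hedged illustration of the two fusion options discussed in point [4] above, the sketch below contrasts a sum-style state update with a concatenation-style one. `f_A`, `K`, and `g` are hypothetical stand-ins for the learned maps, and scalars replace the $H' \times W'$ feature maps; this is not the paper's code.

```python
# Two ways to fuse the encoded observation K(y_k) with the latent state x_k,
# mirroring the discussion above. All functions are hypothetical stand-ins.

def update_sum(x, y, f_A, K):
    """Sum variant (Eq. 2 / A.1 style): x_{k+1} = f_A(x_k) + K(y_k)."""
    return f_A(x) + K(y)

def update_concat(x, y, g, K):
    """Concatenation variant (Eq. A.12 style): x_{k+1} = g([x_k, K(y_k)])."""
    return g((x, K(y)))
```

The difference is where the mixing happens: the sum variant combines the two streams after the recurrent map, while the concatenation variant lets the recurrent map itself learn how to mix them.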
Summary: This work proposes SBIND, a data-driven deep learning framework to model the spatiotemporal dependencies in neural image data and behavior data. Existing methods fail to model the dependencies between behaviors and neural dynamics. This work allows modeling the complex local and global spatiotemporal patterns, achieving better performance in predicting dynamics and decoding behavior. The model includes a neural encoder with two separate modules for behavior and neural decoding, and outperforms baselines such as CEBRA and DPAD without the need for preprocessing. Claims And Evidence: This model demonstrates the benefits of integrating behavior and neural activity into one unified framework, which allows modeling the dependencies between behavior and neural dynamics. The model demonstrates superior performance in both behavior decoding and neural activity prediction. The claim is well-supported. Methods And Evaluation Criteria: The model is demonstrated on two publicly available neural imaging and behavior datasets, showing superior performance in both decoding behavior and predicting neural dynamics. However, multiple related works are not directly compared with, such as BeNeDiff, TNDM, and mm-GP-VAE mentioned in the related works. Moreover, there are also variants of approaches that are studied on the datasets from the Neural Latent Benchmark Challenges, which this work is not evaluated on; previous works such as [1][2] that decode behaviors are also not compared with. [1] STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer. [2] A Unified, Scalable Framework for Neural Population Decoding. Theoretical Claims: There are no theoretical claims or proofs. Experimental Designs Or Analyses: The model is evaluated on two publicly available neural datasets, and multiple well-established baselines including CEBRA, LDA, and DPAD are compared with. 
The model has demonstrated superior performance, although the neural latent benchmark could also have been covered in this study. Supplementary Material: The supplementary material includes additional implementation details for reproducibility. Relation To Broader Scientific Literature: This paper works on an important and challenging problem in brain-behavior decoding. It outperforms existing well-established works such as CEBRA and achieves higher predictive accuracy, providing an effective method for the neuroscience community. Essential References Not Discussed: Variants of approaches that are studied on the datasets from the Neural Latent Benchmark Challenges, which this work is not evaluated on; previous works such as [1][2] that decode behaviors are also not compared with. [1] STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer. [2] A Unified, Scalable Framework for Neural Population Decoding. Other Strengths And Weaknesses: The paper presents an effective solution for predicting neural activity as well as decoding behavior, and demonstrates better accuracy compared to existing approaches. However, the model is not evaluated on a comprehensive neural latent benchmark or against other existing SOTA methods. Other Comments Or Suggestions: N/A Questions For Authors: How will the model perform using neural imaging data directly, compared to using the extracted neural activities and spatial locations? Will that provide additional computational efficiency without sacrificing accuracy? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ### [1]: New Baselines We thank the reviewer for their constructive feedback. We agree that comparing SBIND with more baselines strengthens our contribution. In response, we have added new baseline comparisons. **Comparison with STNDT:** STNDT uses a Transformer architecture for spatiotemporal modeling of neural population spiking activity. Unlike SBIND, which processes raw images, STNDT was originally designed for Poisson-distributed spiking data. **We adapted STNDT to accept preprocessed imaging data extracted using LocaNMF**, which is a widely used approach (Wang et al, 2024). We put a Gaussian prior on LocaNMF (instead of the original Poisson prior for spikes), and trained with STNDT's original objectives (masked reconstruction and contrastive loss). Following the original work, behavior decoding was performed via ridge regression on the learned latents. We swept the number of latents for STNDT. The results (**Table B.1**) show that SBIND significantly outperforms STNDT on the WFCI1 dataset. **Table B.1: Comparison with STNDT.** |Model|Num latents|Neur. R2|Beh. R2| |---|---|---|---| |STNDT|55|0.8326±0.0015|0.3529±0.0164| |STNDT|123|0.8408±0.0012|0.3711±0.0153| |STNDT|270|0.8376±0.0088|0.3951±0.0156| |SBIND|$n_x=16$|0.8724±0.0069|0.5059±0.0166| **Comparison with TNDM:** TNDM (Hurwitz et al., 2021) uses a sequential variational autoencoder (SVAE) to learn two sets of latent factors in spiking data with dimensionality $n_1$ and $n_2$ for behaviorally relevant and irrelevant dynamics, respectively. TNDM was originally designed for Poisson-distributed spiking data. To provide a meaningful comparison on our WFCI dataset, we adapted TNDM to accept preprocessed imaging data extracted using LocaNMF. We assumed a Gaussian distribution for these inputs and changed TNDM's neural reconstruction loss to MSE. We also swept $n_1$ and $n_2$ values for TNDM. 
As shown in the **newly added Table B.2**, SBIND achieves superior performance compared to this adapted TNDM in both neural prediction and behavior decoding, demonstrating the benefit of SBIND's architecture that is specifically designed for spatiotemporal image data. **Table B.2: Comparison with TNDM on WFCI1 Dataset.** |Model|$n_1$|$n_2$|Neur. R2|Beh. R2| |---|---|---|---|---| |TNDM|8|8|0.5442±0.0073|0.1718±0.0173| |TNDM|16|16|0.5464±0.0102|0.1683±0.0203| |TNDM|16|64|0.5022±0.0081|0.2233±0.0109| |SBIND|8|8|0.8724±0.0069|0.5059±0.0166| The reviewer also mentions two other methods, but direct empirical comparison to these was not feasible as described below: * **PoYo (Azabou et al., 2023)** is a foundation model for spiking activity, which builds an unsupervised, large-scale foundation model to generalize across different subjects and tasks. This is a different goal compared to SBIND’s goal of joint neural-behavioral modeling to disentangle behaviorally relevant neural dynamics. PoYo uses a tokenization scheme specific to spikes, whereas SBIND focuses on neural imaging modalities. Furthermore, the code for PoYo is not available. * **BeNeDiff:** The code for BeNeDiff is also not publicly available. BeNeDiff employs SVAEs similar to TNDM (which we now compared to). A key distinction is that SBIND processes raw imaging data, while BeNeDiff uses **LocaNMF**-preprocessed data. SBIND captures local and global spatiotemporal dependencies in images, which may be lost after preprocessing with methods such as LocaNMF. --- ### [2]: Neural Latent Benchmark (NLB) Comparison Request We thank the reviewer for raising this point. Unfortunately, however, all NLB datasets consist exclusively of **electrophysiological (EP) recordings**. SBIND, in contrast, is specifically designed to model the spatiotemporal grid structure of *neural imaging data* (WFCI and fUSI). Our core contribution lies in developing a method tailored to leverage this structure from raw image sequences. 
Therefore, while the NLB is valuable for benchmarking models designed for EP data, it is not directly applicable for evaluating the specific contributions and application of SBIND. --- ### [3]: Question on Raw vs Preprocessed Data This is another great point about modeling raw versus preprocessed neural images. Methods like LocaNMF extract latent components roughly corresponding to brain regions. The adapted STNDT we compared against uses learnable positional embeddings to encode spatial relationships within LocaNMF components. However, as shown in our comparison (Table B.2), this method, even with mechanisms for spatial encoding on preprocessed data, yields inferior performance compared to SBIND. In terms of computational efficiency, we acknowledge that training models on preprocessed data is generally faster than training on raw images. However, as shown in Tables 2 and 3, baselines using various preprocessing methods consistently perform worse than SBIND. Therefore, SBIND represents a trade-off that prioritizes higher predictive accuracy and the ability to learn directly from the neural images.
Summary: This work proposes SBIND, a dynamical model for neural imaging data designed to extract behaviorally relevant spatiotemporal patterns. The model mainly uses a double-RNN technique to disentangle behaviorally relevant neural dynamics from other covariates of high-dimensional neural activity. The first RNN captures the behaviorally relevant latent states, while the second RNN accounts for the remaining neural dynamics. The study demonstrates that SBIND outperforms existing models on several benchmarks. Claims And Evidence: The authors show that the proposed SBIND improves neural-behavioral decoding performance owing to effectively disentangling behavior-related and remaining neural dynamics. The results generalize across datasets and are supported by detailed ablation studies. Methods And Evaluation Criteria: For the method, I am a bit skeptical about the neuroscientific meaning of learning with neural imaging data, which is highly noisy. By comparison, some electrophysiology data, like Poisson spike counts, may be much more scientifically meaningful. Meanwhile, the model structure of decoding neural activity and behavior at the same time is actually fairly common. Theoretical Claims: No theoretical claims within this work. Experimental Designs Or Analyses: The decoding metric for behavior prediction provides reasonable experimental validation. Supplementary Material: Yes, the appendix pages. Relation To Broader Scientific Literature: I am not sure about the contribution of this paper to the broader neuroscience field, given that it only processes a certain kind of imaging data format that is hard to truly interpret. Essential References Not Discussed: For the neural data integration: * Extraction and Recovery of Spatio-Temporal Structure in Latent Dynamics Alignment with Diffusion Models. NeurIPS 2023, Wang et al. * Multi-Region Markovian Gaussian Process: An Efficient Method to Discover Directional Communications Across Multiple Brain Regions. ICML 2024, Li et al. 
For the neural representation learning: * STNDT: Modeling Neural Population Activity with a Spatiotemporal Transformer. NeurIPS 2022, Le et al. * NetFormer: An interpretable model for recovering identity and structure in neural population dynamics. 2024, Zhang et al. Other Strengths And Weaknesses: Please refer to my Methods And Evaluation Criteria section. Other Comments Or Suggestions: Please refer to my Methods And Evaluation Criteria section. Questions For Authors: No more Questions. Please refer to my Methods And Evaluation Criteria section. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### [1]: Neuroscientific meaning We thank the reviewer for raising this key point regarding the scientific utility of neural imaging data compared to electrophysiology (EP) data, which we will now clarify in the manuscript. Brain function relies on diverse spatial and temporal scales from single neurons to large-scale networks of neuronal populations. To understand how the brain generates behavior, we need to model neural activity across all these scales. While EP recordings measure the small-scale spiking activity of a group of single neurons, they do not measure large-scale networks that are thought to be a key basis for cognition and complex behavior (Cardin et al, 2020). Neural imaging can measure these large-scale networks by providing **large-scale spatial coverage** across cortical areas or even whole-brain access (Macé et al., 2018). As such, neural imaging data are complementary to EP data and play an increasingly central role in modern neuroscience by allowing the study of **large-scale network dynamics, functional connectivity, and mesoscale neural analyses**, which are all crucial for a full understanding of how the brain generates behavior and task performance (Musall et al., 2019; Nietz et al., 2023). Additionally, functional ultrasound imaging (fUSI) may offer a less invasive approach compared to EP for developing brain-computer interfaces (BCIs) as recently demonstrated (Griggs et al. 2024; Rabut et al., 2024). Finally, we agree with the reviewer that neural imaging and EP data have distinct statistical characteristics. Precisely for this reason, we developed SBIND to directly learn such specific spatiotemporal structure from raw neural images, to enable extracting more precise dynamical neural information. We recognize that our initial motivation for focusing on these imaging modalities could have been more clearly articulated and appreciate the reviewer’s insight. 
### [2]: Neural-behavioral models First, as explained above, neural imaging data play a critical role in understanding large-scale network dynamics that are key to complex behavior and cognition. Furthermore, we demonstrated SBIND's application on two fundamentally distinct neural imaging modalities that are increasingly employed: an optical modality based on widefield calcium imaging (WFCI), and an acoustic modality based on fUSI. Second, while we agree that the general concept of joint neural-behavioral modeling has been explored, particularly for EP data, we emphasize that SBIND's novelty and contribution lie in doing so for neural imaging modalities by developing a novel architectural design tailored for the unique challenges of neural imaging data. These challenges, as highlighted in the introduction and also noted by the reviewer, include **high dimensionality, complex spatiotemporal dependencies, and prevalence of behaviorally irrelevant dynamics**. Our model differs significantly from prior joint neural-behavioral models that typically operate on lower-dimensional EP recordings that have a much smaller spatial scale than neural imaging (Sani et al., 2024), or on preprocessed time series extracted from neural imaging data (Wang et al., 2024). Finally, we demonstrated that SBIND outperforms SOTA methods, such as DPAD and CEBRA. We now also add a new baseline, STNDT (Table B.2 in our response to Reviewer 9wsc) to our results, showing SBIND’s superior performance in both behavior decoding and neural prediction compared with all baselines. Furthermore, as Reviewer z1x2 pointed out, "methods that can be applied to fUSI are relatively sparse", and to our knowledge, SBIND is the first dynamical latent state model for fUSI, whose successful application to fUSI underscores its potential to **facilitate the design of non-invasive BCIs**. 
### [3]: References We thank the reviewer for bringing these relevant papers to our attention and will incorporate a discussion of them into our revised manuscript. While these represent important advancements, they differ significantly from SBIND. They all address electrophysiology data and focus on distinct goals such as cross-session latent alignment (Wang et al., 2023), modeling inter-regional communication (Li et al., 2024), capturing spatial and temporal dependencies in population activity (Le et al., 2022), or recovering interpretable inter-neuron connectivity (Zhang et al., 2024). Crucially, none are designed for joint neural-behavioral modeling, and thus they do not disentangle behaviorally relevant dynamics, nor do they employ image-specific architectural priors like SBIND's ConvRNNs with integrated self-attention. SBIND's focus thus remains distinct in leveraging the spatiotemporal structure of imaging data for robust, disentangled neural-behavioral modeling. **References:** Cardin et al., Shining a Wide Light on Large-Scale Neural Dynamics, Neuron 2020 Macé et al., Whole-Brain Functional Ultrasound Imaging Reveals Brain Modules, Neuron 2018 Also see Lines 452, 524, 529, 542, 559 --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. Please incorporate the relevant scientific meaning discussions and references from the rebuttal into the revised version of the paper. I have updated my score accordingly. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for considering our response and increasing their score. We will update the manuscript by incorporating the relevant references and the discussion points regarding the scientific meaning of modeling neural images.
Revisiting Instance-Optimal Cluster Recovery in the Labeled Stochastic Block Model
Accept (poster)
Summary: The authors study the labeled stochastic block model, which is similar to the regular SBM, but each edge is additionally assigned one label from a candidate set. 0 corresponds to no edge existing, and the authors study the sparse regime where the probability of a label not being 0 is small ($O(\log n / n)$). In this regime, the authors show that given some assumptions on the LSBM concerning homogeneity of the labels, cluster separation and the (non)existence of labels that are too sparse, there exists an instance optimal algorithm, meaning for any LSBM satisfying these assumptions the algorithm will misclassify a sublinear number of vertices. This clustering algorithm matches a lower bound from the literature. Claims And Evidence: I cannot judge the evidence as all proofs are in the appendix and I did not have time to look at them in detail. Methods And Evaluation Criteria: There are some experiments in the appendix; the authors compare their own algorithm to one algorithm from the literature over a benchmark dataset for various regimes of the SBM. The labeled SBM is not considered, likely due to the algorithm of Gao being for the regular SBM. The results are evaluated based on the number of misclassified nodes. Theoretical Claims: I did not check the proofs of the theoretical claims. Experimental Designs Or Analyses: The experiments are performed for different regimes of the SBM, with both algorithms performing similarly in the first three settings and the algorithm IAC performing better in the sparse asymmetric setting. IAC is also better in the first three settings, but the differences are very small. Additional data such as the running time would have been interesting to get a fuller picture on the practical properties of the algorithms. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The SBM is a standard model for studying the theoretical limitations of clustering algorithms. 
I am not familiar with the labelled SBM and wish the authors would have motivated the uses of the model a bit more. The results may inform researchers in other areas of clustering about the theoretical limitations of algorithms, but whether these results have implications for clustering on graphs in real-world instances is not entirely clear to me. Essential References Not Discussed: Not aware of any. Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback and for carefully reading our draft. > Additional data such as the running time would have been interesting to get a fuller picture on the practical properties of the algorithms. Thank you for this suggestion. We have conducted additional experiments to measure the running times of the algorithms. For Model 4, the average running time of our IAC algorithm is 0.177 ± 0.037 seconds, while that of PMLE is 0.915 ± 0.752 seconds. In the next revision, we will include these results, along with further scaling evaluations (by varying the model parameters and the sample size $n$), in the appendix of the paper. > Whether these results have implications for clustering on graphs in real-world instances is not entirely clear to me. We have applied our algorithm to a real-world dataset, the DBLP citation network dataset. In this dataset, the nodes represent researchers. We focused on 246 researchers who have authored 20 or more papers, selected from a corpus of 50,000 papers. The edges (connections) represent co-authorship relationships. There are 1,118 co-authorship connections, where two researchers are connected if they have co-authored one or more papers. We clustered the researchers in this network using both IAC and PMLE. We set the number of clusters to 8 for simplicity. The results are available at the following link: https://anonymous.4open.science/r/Rebuttal_ICML_8618-9FFF. Interestingly, our IAC algorithm found larger communities without excessively small ones, resulting in a more continuous community size distribution. We will include the results and the discussion in the next revision.
Summary: The authors propose a new tractable algorithm for cluster recovery in the labeled stochastic block model. An upper bound for the asymptotic error rate is derived and is shown to match known lower bounds. ## Update after rebuttal I maintain my score, thank you. Claims And Evidence: The proposed algorithm follows a popular two-phase approach (spectral clustering plus maximum likelihood) and is therefore believable. I did not check the proofs but the arguments as well as the results seem reasonable. Methods And Evaluation Criteria: See "Claims and evidence" above. Theoretical Claims: See "Claims and evidence" above. Experimental Designs Or Analyses: Some empirical evaluation is given in the appendix. They seem adequate for this kind of work. Supplementary Material: Briefly, the proofs and the experiment results. Relation To Broader Scientific Literature: The work should make valuable contribution to the literature on community detection and clustering in general. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback and for carefully reading our draft. Please let us know if you have any further questions.
Summary: This paper considers the problem of community detection in the Labeled SBM, a generalization of the standard SBM in which each edge is associated with one of L+1 labels (where the zero label is most frequent). The authors study the case of a growing number of communities and propose an algorithm for achieving the minimal misclassification rate, without knowledge of the model parameters. The algorithm consists of two phases: the first performs spectral clustering to identify a preliminary classification and estimate the model parameters, while the second phase refines the initial labels by mimicking the MLE. Claims And Evidence: Under Assumptions (1), (2), and (3), Theorem 1.2 bounds the misclassification rate. This claim is tight (up to Assumption (3)) in light of Theorem 1.1, which is cited from Yun & Proutiere (2016). The evidence is an algorithm achieving the expected misclassification rate. Methods And Evaluation Criteria: Yes, though it would be nice to see an empirical validation of the threshold in Theorem 1.2. Theoretical Claims: I checked the outline in the main text, which follows reasonable steps. It makes sense to me that the misclassification rate is driven by the number of vertices that are not in the set H (defined in Section 4.2), since the main condition (H2) is related to the failure of the MLE. Experimental Designs Or Analyses: The supplementary material contains synthetic data experiments, showing that the new method performs similarly to the method of Gao et al (2011). It would be nice to see experimental validation of Theorem 1.2. Supplementary Material: I skimmed parts of it. Relation To Broader Scientific Literature: There is a good comparison made to the closest papers, namely that of Gao et al (2017) and Yun and Proutiere (2016). The results are significant among the literature on misclassification rates for the SBM. 
Essential References Not Discussed: “Exact recovery and Bregman hard clustering of node-attributed Stochastic Block Model” by Dreveton, Fernandes, and Figueiredo from NeurIPS 2023. This paper develops very general impossibility results for SBM-type problems. Other Strengths And Weaknesses: The paper is well-written. Other Comments Or Suggestions: There is a grammar issue in the last line of the first paragraph of Section 2.1. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback and for carefully reading our draft. > it would be nice to see an empirical validation of the threshold in Theorem 1.2. We will include a comparison plot of the empirical error rates and the lower bound by varying $n$ in the appendix of the paper. > Dreveton, Fernandes, and Figueiredo from NeurIPS 2023. This paper develops very general impossibility results for SBM-type problems. Thank you for pointing out this literature. This paper derives the necessary conditions for exact recovery with high probability for (possibly non-homogeneous) node-attributed SBMs. Although they provide the Bregman hard clustering algorithm, they do not derive a performance guarantee—that is, the sufficient condition for the existence of the optimal algorithm. Our result provides a guarantee for the IAC algorithm in expectation, and importantly, we have revealed conditions not only for exact recovery but also for the expected number of misclassified nodes to be less than $s$, for any number $s = o(n)$. We will cite their work and discuss it in the revised manuscript. > There is a grammar issue in the last line of the first paragraph of Section 2.1. Thank you for pointing this out. We will fix it. --- Rebuttal Comment 1.1: Comment: (copying since this was originally posted as an "official comment") Yes, I agree that the paper of Dreveton et al doesn't cover your results. I brought it up since this paper is not well-known, and thought you would want to be aware of it. --- Reply to Comment 1.1.1: Comment: Thank you again for letting us know—we appreciate you bringing it to our attention!
Summary: This paper provides a detailed algorithm for clustering under the LSBM model, and theoretically shows that it achieves the known asymptotic lower bound on the number of misclassifications (YP2016). Furthermore, the algorithm does not need to know the LSBM parameters, essentially showing that the bound in (YP2016) is min-max over the LSBM parameters. The algorithm is achieved in two stages, where in the first stage a refined version of the spectral clustering method is applied and an estimate of the hyper-parameters is achieved. In the second stage, the maximum likelihood principle is applied cyclically to the nodes. Claims And Evidence: The claims of the paper are mainly theoretical and are rigorously proven. Some empirical evidence is provided in the supplement that confirms the optimality of the proposed algorithm. Methods And Evaluation Criteria: The method provably provides exact bounds for clustering under the LSBM. It naturally makes sense! Theoretical Claims: I did not check the details of the proofs, but the steps generally follow standard arguments and should be correct. Experimental Designs Or Analyses: Experimentation is not the main purpose and approach of this paper. Supplementary Material: I took a brief look at the proof and numerical experiments, but did not check the steps in the proof carefully. Relation To Broader Scientific Literature: The paper establishes the sharpness of the error bounds for LSBM clustering. This is a significant result with potential application in similar problems related to graphs. Essential References Not Discussed: No undiscussed reference. Other Strengths And Weaknesses: Weakness: I cannot think of any remarkable weakness, but the steps used in the paper are heavily borrowed from other works. The paper is similar even in presentation to (YP2016), and the idea of refinement of the clusters has been frequently discussed in the past, e.g. 
in K-means clustering, although I am not aware of any paper in the context of LSBMs. The bounds are not also new, but establishing their sharpness is certainly important. Other Comments Or Suggestions: To me, the section on the related works is unnecessarily long and detailed. The authors could move some of the discussion to the supplementary material and instead present the numerical studies in the main body of the paper. Questions For Authors: No major question to the authors. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your positive feedback and for carefully reading our draft. > The authors could move some of the discussion to the supplementary material and instead present the numerical studies in the main body of the paper. Thank you for the suggestion. We will take this into consideration when revising the manuscript.
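To illustrate the two-phase structure described in the reviews above (spectral initialization followed by likelihood-based refinement applied cyclically to the nodes), here is a minimal sketch of the refinement phase alone, for a plain two-parameter SBM. The function names, the toy graph, and the fixed intra-/inter-cluster edge probabilities `p` and `q` are illustrative assumptions, not the paper's IAC algorithm.

```python
import math

# Hedged sketch of likelihood refinement: given an initial labeling and
# estimated intra-/inter-cluster edge probabilities p > q, each node is
# reassigned to the community maximizing the log-likelihood of its edges.

def refine(adj, labels, k, p, q, sweeps=5):
    n = len(adj)
    lp, lq = math.log(p), math.log(q)          # log-likelihood of an edge
    lp0, lq0 = math.log(1 - p), math.log(1 - q)  # ... and of a non-edge
    labels = list(labels)
    for _ in range(sweeps):
        for i in range(n):                     # cyclic pass over the nodes
            best, best_ll = labels[i], -math.inf
            for c in range(k):
                ll = 0.0
                for j in range(n):
                    if j == i:
                        continue
                    same = (labels[j] == c)
                    if adj[i][j]:
                        ll += lp if same else lq
                    else:
                        ll += lp0 if same else lq0
                if ll > best_ll:
                    best, best_ll = c, ll
            labels[i] = best
    return labels
```

In the full two-phase scheme, `labels`, `p`, and `q` would come from the spectral phase rather than being supplied by hand, and the labeled variant would sum label-specific log-likelihood terms instead of a single edge/non-edge pair.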
Textual Unlearning Gives a False Sense of Unlearning
Accept (poster)
Summary: The paper investigates the effectiveness of machine unlearning (MU) in LMs and introduces new auditing and attacking methods to evaluate its reliability and privacy risks. They propose U-LiRA+, which uses mislabeled samples to rigorously audit unlearning effectiveness. The results reveal that over 70% of unlearned texts remain detectable. The TULA-MI shows that attackers can infer whether a text was unlearned by comparing model outputs before and after unlearning, even in strict black-box scenarios. The TULA-DR exploits model weight differences to reconstruct unlearned texts with over 80% accuracy in some cases, showing that different unlearning methods leave distinct traces in the model. The findings demonstrate that existing unlearning methods do not ensure true erasure and may even increase privacy risks, highlighting the need for more robust and secure unlearning mechanisms. Claims And Evidence: The claims are mostly well-supported by empirical evidence. Some claims could benefit from additional validation. However, both TULA-MI and TULA-DR assume that an attacker can access outputs (or even internal weights in the white-box case) from the model before and after unlearning. In practice, such access might be limited, so the generalization of the claims is concerned. Methods And Evaluation Criteria: - The choice of using synthetic data may undermine the claims. I understand the concern that models might have seen the public data. However, the assumptions of this paper are strong (e.g., access to the model weights), which is unlikely to happen in real-world scenarios; it is better to test the model on real-world datasets. There are many other real-world datasets for MU and existing methods to test whether certain data are used to train the model, which could help to construct new datasets if the authors want a pure dataset for unlearning. - The authors trained 100 shadow models for each shadow example. 
This can be very time-consuming when the method is extended to commonly used LLMs today. The only imaginable setting I can think of is when a new unlearning method comes out and is then applied to a smaller model to see if leakage happens under attack. However, even if no leakage is found, the effectiveness of the new method remains unknown. Theoretical Claims: I am not aware of any proof for theoretical claims. Experimental Designs Or Analyses: The choice of using TPR@LowFPR is not justified. I am not sure whether this metric is used by other work. Supplementary Material: I checked the details of the auditing settings and example reconstruction outputs in the Appendix. Relation To Broader Scientific Literature: This paper challenges prior claims that inexact MU approximates full retraining. They show that unlearning leaves residual traces rather than ensuring true forgetting. The paper contributes to the broader scientific literature by calling for more robust privacy-preserving techniques. Essential References Not Discussed: No. Other Strengths And Weaknesses: Please refer to the methods section for the weaknesses. Other Comments Or Suggestions: No. Questions For Authors: Is the choice of Pythia and OPT mainly based on computational efficiency? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: # Response to Reviewer PnWC We sincerely thank the reviewer PnWC for your valuable and constructive feedback! ## Q1: Concerns about our assumptions and their practicality. We would like to provide further explanations on our assumptions: (1) In the **black-box scenario**, we assume that an adversary can *query the model* with target samples and *utilize the outputs to infer their membership*. This is **the most fundamental assumption for Membership Inference Attack (MIA)**[1] and is **widely considered in literature as well as real-world applications**. To further align with real-world scenarios, we further investigate **the strictest black-box scenarios**, where the adversary can *only obtain output scores* (e.g., confidence scores) for MIA. (2) In the **white-box scenario**, we assume that the adversary has *access to the model weights before and after unlearning*. This assumption could also be realistic in practice. Here, the adversary could be a **malicious collaborator**, such as *a company contracting with the model developer, with access to model weights for local deployment*. According to the unlearning policy, **the collaborator’s model is also required to be updated after unlearning**, enabling the adversary to have the model weights before and after unlearning. Additionally, this assumption is **commonly studied in literature[2][3]**, serving as **an "upper bound" for exploring real-world risks**. ## Q2: Suggestions on utilizing real-world data. We utilize synthetic datasets to rigorously ensure that **the pre-trained models have not seen these samples during the pre-training phase**. However, *we are concerned that real-world data cannot meet our requirements currently*: (1) Pre-trained models are trained on *vast amounts of real-world data*, which is typically **closed-sourced**. (2) Recent studies indicate that **we cannot yet robustly detect whether a specific sample was used among massive pre-training data[4]**.
Therefore, we utilize synthetic samples, which are **entirely based on fictional scenarios** and generated by **GPT-4-based agents released after the models we utilized**. Through these efforts, we hope to ensure that the setups are rigorous and thus avoid being potentially misleading. ## Q3: The concern on computational cost (training multiple shadow models) of the proposed U-LiRA+ for LLMs. We train multiple shadow models to **ensure rigorous auditing of unlearning methods**. With them, we could *closely approximate the distributions of the model outputs* and *rigorously determine samples' membership through statistical tests*. Though this approach could incur considerable computational cost, **it could also be applied to LLMs**: (1) We could utilize **Parameter-Efficient Fine-Tuning (PEFT)** methods to effectively fine-tune and unlearn a few audit samples. Since only a few parameters are trainable, the computational cost would be significantly reduced. (2) **Our approach is MIA-agnostic**, as its core idea is to properly construct and inject rigorous audit samples before unlearning. **If more efficient yet equally rigorous MIAs for LLMs emerge, our approach can be flexibly adapted to them.** ## Q4: Do we choose the models mainly for computational efficiency? **The selection of *1.5B*-parameter models balances *practicality* and *computational efficiency***: (1) **Lightweight LLMs are widely adopted in practice**, such as in *autonomous driving* and *mobile AI assistants*. (2) As mentioned, **we aim to ensure rigorous auditing and evaluation**, which requires some time-consuming operations, e.g., *training multiple shadow models*. Choosing an appropriately sized model enables us to **conduct broader evaluations efficiently**.
Furthermore, we highlight that **the model scale is unlikely to compromise our findings**: (1) We demonstrate that existing auditing methods are not rigorous because *they fail to correctly select audit samples*, **which is independent of the target model**. (2) We reveal that current unlearning methods fail to completely erase target samples because *they cannot accurately identify and remove parameters encoding target knowledge from the large parameter space*, **which would persist across models of different scales.** ## Q5: The choice of TPR@LowFPR. TPR@LowFPR is the **most commonly used and rigorous metric** for evaluating MIA, quantifying MIA's effectiveness and confidence by measuring **the proportion of correctly inferred samples at a low false rate**[5]. ## References [1] Membership inference attacks against machine learning models, IEEE S&P 2017 [2] Machine Unlearning Enables Camouflaged Poisoning Attacks, NeurIPS 2023 [3] Adversarial Machine Unlearning Requests Destroy Model Accuracy, ICLR 2025 [4] Membership Inference Attacks Cannot Prove that a Model Was Trained On Your Data, IEEE SaTML 2025 [5] Membership Inference Attacks From First Principles, IEEE S&P 2022
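As a concrete illustration of the TPR@LowFPR metric discussed in Q5, a minimal sketch might look as follows. This is our own illustrative code, not the paper's implementation; in particular, setting the threshold via a quantile of non-member scores is an assumption.

```python
import numpy as np

def tpr_at_low_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    """TPR@LowFPR: fraction of true members correctly flagged when the
    decision threshold is set so that at most `target_fpr` of non-members
    are wrongly flagged. Higher scores are assumed to indicate membership."""
    # Threshold at the (1 - target_fpr) quantile of non-member scores.
    threshold = np.quantile(np.asarray(nonmember_scores), 1.0 - target_fpr)
    return float(np.mean(np.asarray(member_scores) > threshold))
```

A rigorous audit would then report this value at very small FPRs (e.g., 0.1% or 1%), where a high TPR indicates that unlearned samples remain confidently detectable.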
Summary: This paper critically demonstrates that current machine unlearning mechanisms give a false sense of effective unlearning. First, they propose U-LiRA+, a rigorous textual unlearning auditing method, and find that the unlearned texts can still be detected with very high confidence after the unlearning process. Further, they comprehensively investigate the privacy risks of textual unlearning mechanisms in deployment. By proposing TULA along with its variants in both black- and white-box scenarios, the authors critically reveal that the unlearning mechanism would instead expose more about the unlearned texts, facilitating both membership inference and reconstruction attacks. The experiments demonstrate that the proposed auditing method much more strictly measures the unlearning effectiveness compared to previous approaches. Besides, the proposed TULA-MI and TULA-DR attacks are capable of inferring unlearned samples with high confidence. Overall, in order to explore the vulnerabilities and privacy risks of machine unlearning in language models, the authors introduce a rigorous auditing method and novel attack paradigms. The experiments support the findings well. In addition, this paper is structured well and easy to follow. Claims And Evidence: Yes, the claims are clearly supported in the paper. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem. Theoretical Claims: There are no theoretical evaluations in this paper. Experimental Designs Or Analyses: Yes, the experimental designs and analyses effectively support the proposed methods. Supplementary Material: Yes, I have reviewed all the supplementary materials. Relation To Broader Scientific Literature: 1. This work highlights that previous unlearning auditing approaches have overestimated the effectiveness of existing unlearning techniques. 
The authors propose a novel and rigorous auditing method, U-LiRA+, which could inspire the development of more thorough and reliable unlearning techniques. 2. The work reveals the privacy risks of deploying textual unlearning mechanisms in both black-box and white-box contexts. This serves as a call to further investigate the potential new and additional privacy risks before widespread application of unlearning mechanisms. Essential References Not Discussed: Related works are adequately discussed. Other Strengths And Weaknesses: NA Other Comments Or Suggestions: It would be more informative for the authors to further discuss how their findings inspire the development of unlearning mechanisms for language models. Questions For Authors: 1. How about the proposed reconstruction attack (TULA-DR) against exact unlearning methods (including retraining)? 2. How do your findings inspire the development of unlearning mechanisms for language models? Code Of Conduct: Affirmed. Overall Recommendation: 4 Ethical Review Concerns: NA
Rebuttal 1: Rebuttal: # Response to Reviewer 9rYC We sincerely thank the reviewer 9rYC for your valuable and constructive feedback! ## Q1: How about the proposed reconstruction attack (TULA-DR) against the exact unlearning method (including retraining)? We mainly focus on TULA-DR against inexact unlearning because: (1) **Exact unlearning is relatively safe**, as its core idea is to delete the target sample and retrain the model from scratch. Due to the randomness in the training process, the difference in model weights before and after unlearning is less likely to be approximated by the adversary. (2) In practical deployment, **developers often use the inexact unlearning methods considering computational efficiency**. Thus, exploring reconstruction attacks against inexact unlearning could provide more practical insights. ## Q2: How do the findings inspire the development of unlearning mechanism on language models? Thanks for the insightful question. We hope our findings could inspire future research in the following three aspects: (1) **Developing more precise unlearning mechanisms**. Considering the difficulty of existing unlearning methods to completely erase target samples, future work could explore *efficient exact unlearning methods* (e.g., retraining) or *certified inexact approaches* capable of accurately identifying target knowledge. (2) **Strengthening unlearning auditing before real-world deployment**. Future research should establish rigorous auditing frameworks for unlearning mechanisms *from broader perspectives*. Beyond verifying whether an unlearned model successfully erases target data, it is also essential to assess and mitigate potential risks, such as reduced robustness. (3) **Rethinking the secure deployment of unlearning mechanisms**. While prior research has primarily focused on *how to unlearn*, less attention has been given to *how to deploy unlearning*. 
As demonstrated in this work, future research should pay more attention to the *secure deployment of unlearning mechanisms* in real-world scenarios while mitigating potential new risks. --- Rebuttal Comment 1.1: Comment: Many thanks for the authors' detailed responses. More results and discussions (method details, insights into the algorithms, etc.) regarding my concerns and those of other reviewers have been provided. Based on the overall quality of the paper/response, I’d like to keep my score. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer 9rYC for reviewing our rebuttal, and we appreciate your valuable feedback! We will carefully revise the paper according to your suggestions.
Summary: The authors demonstrate that current unlearning methods fail to adequately protect the privacy of unlearned texts in language models. To address this, they propose a robust unlearning auditing method, U-LiRA+, which utilizes membership inference attacks and deliberately introduces mislabeled samples to reveal that the unlearned texts are, in fact, highly detectable. Furthermore, they introduce textual unlearning leakage attacks, showing that unlearned texts can be inferred within unlearning systems, thereby uncovering a new privacy risk associated with machine unlearning. Claims And Evidence: The contributions are well supported with sound designs and evaluations. Methods And Evaluation Criteria: I have some confusion about TULA-DR that needs further clarification: (1) How to determine the convergence of TULA-DR? The authors need to further clarify the metrics or criteria for assessing convergence. (2) In Algorithm 3, the authors should add a definition of the “Decoding” function. Theoretical Claims: The authors did not present any theoretical proofs. Experimental Designs Or Analyses: The experimental design is sound, and the results empirically demonstrate the effectiveness of the proposed methods. Supplementary Material: I have reviewed the supplementary materials. The contents are well structured but there are two minor issues: (1) The “?” marks in Figure 13 appear to be typos. (2) For Appendix I, the reconstructed examples should be clearly labeled with distinct padding symbols for different models to prevent misinterpretation. Relation To Broader Scientific Literature: The authors rethink the security of unlearning mechanisms on language models, revealing the vulnerabilities of existing methods from auditing to deployment. Essential References Not Discussed: No. Other Strengths And Weaknesses: No. Other Comments Or Suggestions: No. Questions For Authors: 1. How to determine the convergence of TULA-DR?
What does the “Decoding” function in Algorithm 3 refer to? While this work focuses on classification tasks, I am curious whether the idea of U-LiRA+ could potentially be applied to generation tasks in future work. 2. Although the paper highlights the limitations of text unlearning methods, it lacks an in-depth analysis of the performance differences of existing methods across different data modalities, resulting in its innovation being primarily confined to phenomenological descriptions. 3. While the paper identifies privacy leakage risks in the text unlearning process, it does not employ quantitative analyses based on information entropy or mutual information to demonstrate that the unlearned model retains exploitable information residues. Incorporating information-theoretic approaches, such as computing mutual information between model outputs and the original text or analyzing entropy variations, would provide a more rigorous validation of the inevitability of privacy risks. 4. The paper mentions "high-confidence detection" but does not explicitly define a statistical significance threshold, which may affect the robustness of its conclusions. For instance, failing to adopt a stringent significance level in hypothesis testing could lead to an increased false positive rate. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to Reviewer tLdQ We sincerely thank the reviewer tLdQ for your valuable and constructive feedback! ## Q1: How to determine the convergence of TULA-DR? Our proposed TULA-DR is an **optimization-based attack**. Empirically, the optimized candidates converge gradually as the number of iterations increases. A criterion is that the loss of the attack **stabilizes at a lower value and no longer continues to decrease**. Therefore, when implementing the attack, we could **set a maximum number of iterations to stop the optimization process**. ## Q2: What does the “Decoding” function in Algorithm 3 refer to? This function refers to transforming the reconstructed embeddings into **text space**. To implement this function, we use the model’s **embedding layer** as a *“vocabulary”* and apply a similarity computation function (e.g., *cosine similarity*) to **match the ordinal indices of the reconstructed embeddings**. The resulting list of indices is then converted into readable text using a **tokenizer**. ## Q3: Can U-LiRA+ potentially be applied to generation tasks for future work? Our approach is **task-agnostic**, as its core idea is to properly construct and inject rigorous audit samples before auditing. Specifically, in order to implement rigorous unlearning auditing on a text generation task, we should first **define the samples that are most vulnerable to unlearning on that task as audit samples**. We then **inject these audit samples into the training set** and evaluate whether the target unlearning method could fully erase them. In addition, to implement U-LiRA+ on the generation task, we could **utilize the next-word probability vector in response to a text sample as the model's *output* for MIA**. ## Q4: In-depth analysis on the performance differences of existing methods across different data modalities. This study focuses on textual data and language models. 
However, our key findings could be broadly applicable to various data modalities: (1) Existing unlearning methods cannot fully erase target samples, primarily due to **their inability to accurately identify parameters linked to target knowledge**. This limitation could extend to other data modalities. (2) Analyzing models before and after unlearning allows an adversary to infer membership information about unlearned samples or even reconstruct them. **While different data modalities may exhibit varying degrees of privacy leakage, the underlying risk persists**. ## Q5: The suggestion for quantitatively analyzing the information residuals of unlearned models. We thank you for your insightful suggestion, and it is indeed a very interesting perspective. However, our proposed auditing method is currently sufficient to demonstrate that **existing unlearning methods cannot completely erase the target samples and result in large information residuals**. With these findings, we hope to call for the development of more accurate unlearning methods before applying them in practice. Indeed, with the development of unlearning techniques, how to effectively and exactly measure **“how much”** information residue remains, with theoretical guarantees, will be an important problem. We thank you for your inspiration and would be happy to explore this question in future research. ## Q6: The concern on "high-confidence detection" of our proposed U-LiRA+. The reason for our claim is that we utilize the TPR@LowFPR metric. TPR@LowFPR is the most commonly used and rigorous metric for evaluating MIA[1], as it quantifies an attack's effectiveness and **confidence** by measuring **the proportion of correctly inferred samples at a low false positive rate**. ## References [1] Membership Inference Attacks From First Principles, IEEE S&P 2022 --- Rebuttal Comment 1.1: Comment: All my concerns have been addressed, so I recommend accepting it.
--- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer tLdQ for reviewing our rebuttal, and we appreciate your valuable feedback! We will carefully revise the paper according to your suggestions.
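The “Decoding” step described in Q2 of the rebuttal above can be sketched as follows. This is an illustrative NumPy version under our own assumptions; the paper's actual implementation and the final tokenizer-based detokenization are not shown.

```python
import numpy as np

def decode_embeddings(recon_embeds, embed_matrix):
    """Map reconstructed embeddings back to token ids.

    The model's embedding matrix (vocab_size x dim) serves as a
    "vocabulary": each reconstructed embedding is matched to the most
    cosine-similar token embedding, and the resulting id sequence can
    then be turned into readable text by a tokenizer.
    """
    recon = recon_embeds / np.linalg.norm(recon_embeds, axis=1, keepdims=True)
    vocab = embed_matrix / np.linalg.norm(embed_matrix, axis=1, keepdims=True)
    sims = recon @ vocab.T                 # (seq_len, vocab_size) cosine similarities
    return sims.argmax(axis=1).tolist()    # token ids for tokenizer.decode(...)
```

In practice the returned id list would be passed to the model's tokenizer to produce the readable reconstruction.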
Summary: The authors demonstrate that the textual unlearning mechanism cannot ensure privacy as expected. They propose a rigorous unlearning auditing method (U-LiRA+) and investigate privacy attacks in both black-box and white-box scenarios. Through empirical evaluations on large language models and synthetic datasets, the authors reveal that existing textual unlearning methods fail to completely erase target texts. Furthermore, these methods may inadvertently expose additional information about unlearned texts through membership inference attacks (MIA) or data reconstruction attacks (DRA). Claims And Evidence: The claims are well-supported. Methods And Evaluation Criteria: The proposed methods and evaluations are sound, but there are several minor issues. 1. I am confused about how to initialize the candidates for the proposed TULA-DR. 2. It would be better to try more metrics to evaluate the difference between loss values to implement the proposed MIA. Theoretical Claims: The work does not include any theoretical proofs. Experimental Designs Or Analyses: The experimental evaluations effectively validate the performance of the proposed auditing method and the attacks in the paper. Supplementary Material: Yes, I reviewed the supplementary materials, where the authors provided further explanations on experimental settings and implementation details. Additionally, the authors included additional results on TULA-MI in black-box cases, TULA-DR for batch reconstruction, and ablation studies. Relation To Broader Scientific Literature: This work shows that even in the strict black-box querying and exact unlearning scenario, the unlearning mechanism can still compromise the privacy of unlearned data. This provides new insights into the secure deployment of unlearning mechanisms. Essential References Not Discussed: No Other Strengths And Weaknesses: Weaknesses: 1.
Considering that LiRA may be time-consuming in some cases, is it possible for the proposed U-LiRA+ to be adopted to other MIAs? 2. For the TULA-DR, how do the authors initialize the candidates? 3. For the TULA-MI in strict black-box case, the authors utilize the difference in loss values to implement the proposed MIA. Considering that the loss changes may be non-linear, what about other ways of calculating the difference? Other Comments Or Suggestions: There are some suggestions: 1) Algorithm 2 appears crowded and the authors could improve its clarity; 2) It would be helpful if the authors could color-code the reconstructed texts in Tables 16-21 to better differentiate correct and incorrect results; 3) It appears that Fig.3 misses the “%”. 4) The introduction of U-LiRA+ is compelling, but consider adding a brief explanation of why mislabeled samples represent a "worst-case scenario" for auditing. This would improve the clarity. 5) The paper lacks a description on compute requirements (e.g., GPU, CUDA). Questions For Authors: N/A. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to Reviewer c3T7 We sincerely thank the reviewer c3T7 for your valuable and constructive feedback! ## Q1: Is it possible for the proposed U-LiRA+ to be adopted to other MIAs? Yes, our approach is **MIA-agnostic** (membership inference attack), as its core idea is to **properly construct and inject rigorous audit samples before auditing**. For an unlearning auditing task, we start by defining a set of **the most vulnerable samples** (e.g., mislabeled samples) as **audit samples** on that task. We use half of them as training samples and half as unseen samples. After unlearning the training samples with the target unlearning method, we can utilize **any kind of MIA** to test whether it is possible to successfully distinguish between the training samples and the unseen samples in the audit set. ## Q2: For the TULA-DR, how do the authors initialize the candidates? We randomly initialize the candidate embeddings from a Gaussian distribution. Additionally, it is also acceptable to utilize other approaches for initialization, such as a randomly selected sentence's embeddings or a uniform distribution. Empirically, the key insight is that *the values of the initialized embeddings should preferably lie within the value range of normal embeddings*. ## Q3: What about other ways of calculating the difference for the TULA-MI in the strict black-box case? Thank you for this insightful question. Indeed, we used the value difference (i.e., A-B) to capture the loss changes of the model before and after unlearning the target sample. However, **other approaches such as ratio and logarithm are also acceptable**, as long as there is eventually a value that quantifies the loss change. ## Q4: Why do mislabeled samples represent a "worst-case scenario" for auditing? Compared to normal samples, **mislabeled samples** are a small number of counterfactual samples, ensuring the model cannot generalize to them unless explicitly trained.
As a result, the trained and unseen **mislabeled samples** will be very different in output distributions and thus are the *most vulnerable to unlearning auditing*. Therefore, we inject the **mislabeled samples** into the training set as the audit samples, in order to simulate the worst-case auditing.
Summary: The paper proposes a new auditing method to check whether text unlearned from a model is completely unlearned. The auditing method, called U-LiRA+, is based on U-LiRA and checks whether it is possible to differentiate between unlearned and not-seen samples. Additionally, two methods for investigating privacy risks for unlearned models are presented, called TULA-MI and TULA-DR. TULA-MI tests whether an adversary can tell that a specific data point was unlearned, while TULA-DR tries to reconstruct the data point after unlearning. Claims And Evidence: The claims are empirically supported. However, the presentation of the experimental results could be clearer (see comment regarding Figure 2). Methods And Evaluation Criteria: The experimental design seems to make sense, even though the proposed approaches are only tested on two synthetic datasets. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental design seems to be sound. Supplementary Material: I checked the appendix of the paper. Relation To Broader Scientific Literature: The findings are not really surprising. Previous work (Hayes et al.) has already shown that unlearning on text needs better evaluation methods. Essential References Not Discussed: - I am not quite sure what the difference between TULA and the work of Chen et al. [1] is. This should be thoroughly discussed, as they also have proposed a membership inference attack that is able to tell whether a data point was unlearned or not. - It should be made clearer what the difference between U-LiRA+ and U-LiRA from Hayes et al. [2] is. [1] Chen et al., When machine unlearning jeopardizes privacy, CCS 2021 [2] Hayes et al., Inexact unlearning needs more careful evaluations to avoid a false sense of privacy, preprint 2024 Other Strengths And Weaknesses: Weaknesses: - In my opinion, the assumption in TULA that users have access to the models before and after unlearning seems a bit unrealistic to me.
After all, the users don't know when the model will be updated, and they don't have access to the model weights either. - It is a bit confusing that the paper talks about language models, but in the end only looks at sentiment analysis models, i.e., models that classify input text. Other Comments Or Suggestions: - Line 143 right side: there is a period after "M_original" that should probably not be there - it is not possible to highlight any text in the pdf with the mouse or click on any links. Please fix! - Line 304 left side: "However," appears twice - Line 320 right side: "fine-turned" should be fine-tuned - Figure 2: in the legend for the blue dotted line it says "Audit M_original", but I think this should be "Audit M_unlearned". At least that is what the caption suggests. - Line 412 right side: R-3 is used, but this metric is not used in the table Questions For Authors: 1. It is not clear to me what the dashed lines in Figure 2 are. What does audit on the original and the unlearned model mean? Why is there only a single blue line, even if there are multiple unlearned models? 2. What is the metric "NTS@1NFS"? This should be explained or written out at least once. 3. Why are there so many "</s>" before the synthetic texts? Ethical Review Concerns: There are no ethical reviews needed for this paper. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: # Response to Reviewer p1Cm We sincerely thank the reviewer p1Cm for the valuable and constructive feedback! ## Q1: Clarifying the differences between the proposed TULA and [1]. Here are the key differences: (1) **Broader and more realistic assumptions.** ***[1] considers only the relaxed black-box scenario***, where the adversary performs membership inference attacks (MIA) with access to output logits, model architecture and auxiliary dataset. In contrast, ***our proposed TULA could be further applied to a strict black-box scenario***, where the adversary can only obtain output scores (e.g., confidence values), ***providing more practical insights on real-world risks.*** (2) **More powerful attack paradigm.** For the target sample *x*, [1] trains the attack model using logits from ***randomly sampled unseen samples as negative supervision***, resulting in a ***coarse-grained, average-level MIA***. In contrast, we ***train multiple shadow models to accurately learn the logits when *x* is unlearned or unseen***, enabling ***fine-grained, sample-level MIA***. (3) **Target beyond MIA.** While [1] focuses solely on MIA, we propose both ***MIA*** and ***data reconstruction attack*** against unlearning mechanisms, exploring the risks from ***adversaries with different targets***. (4) **Focus on a more challenging setting**. [1] focuses only on tabular and image data with classical ML models or CNNs—***traditional setups where MIAs are much easier due to high distinctiveness among samples***. However, ***MIAs on textual data are much more challenging, necessitating a stronger attack*** [3]. Our results show that [1] is largely ineffective against textual data and modern language models. ## Q2: Clarifying the differences between the proposed U-LiRA+ and [2]. Compared to [2], we **explicitly construct and inject a rigorous audit set with *mislabeled samples* before unlearning, ensuring a rigorous auditing in the worst case**.
We demonstrate that previous auditing methods, including [2], **lack rigor due to their failure to properly select audited data**, resulting in **existing unlearning methods being significantly overestimated**. ## Q3: How does the adversary know when unlearning has occurred, and have access to model weights? (1) **The occurrence of unlearning could be easily detected in practice**. **An adversary can *continuously* query the model with target samples**. If the outputs from two consecutive sets of queries **change**, *the adversary could execute the attack to infer whether any samples have been unlearned*. (2) **The access to model weights could also be realistic**. The adversary could be a **malicious collaborator**, such as *a company contracting with the model developer, with permission to access model weights for local deployment*. According to the unlearning policy, **the collaborator’s model is also required to be updated after unlearning**, enabling access to the model weights before and after unlearning. Moreover, *this assumption is widely considered in literature*[4], serving as **an "upper bound" for exploring real-world security risks**. ## Q4: Why do we focus on text classification tasks? **Text classification tasks enable us to design *rigorous* audit samples for *rigorous* unlearning auditing**. Specifically, **rigorous auditing requires rigorous audit samples, i.e., the *most vulnerable samples* to unlearning, enabling worst-case auditing**. The utilized ***mislabeled samples*** have been found to be most vulnerable to privacy leakage on image classification tasks[5]. Although our approach is **task-agnostic**, **rigorous samples for text generation tasks are currently under-explored**. ## Q5: The confusion about Figure 2. We would like to explain Figure 2 based on the process of unlearning auditing: The audit set consists of training and unseen samples.
For **one original model**, **multiple unlearned models** are derived by applying *different unlearning methods* to the training samples in the audit set. MIA-based auditing methods are expected to exhibit high attack accuracy on the **original model** (**dashed lines in Fig. 2**) and low accuracy on **unlearned models** (**bars in Fig. 2**). ## Q6: The Meaning of Metric "NTS@1NFS". NTS@1NFS refers to *the Number of True Samples @ 1 False Sample*, quantifying **the number of correctly inferred samples with only one error** for MIAs. ## Q7: The occurrence of "</s>" before the synthetic texts. To fairly evaluate the reconstruction attack, we additionally add padding characters (</s>) to fix the length of unlearned texts. ## References [1] When Machine Unlearning Jeopardizes Privacy, CCS 2021 [2] Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy, preprint 2024 [3] Do Membership Inference Attacks Work on Large Language Models?, COLM 2024 [4] Adversarial Machine Unlearning Requests Destroy Model Accuracy, ICLR 2025 [5] Evaluations of Machine Learning Privacy Defenses are Misleading, CCS 2024 --- Rebuttal Comment 1.1: Comment: Thank you very much for the clarifications. **Q1/Q2:** Thank you for clarifying the novelty. I can see now that TULA is different from previous works. **Q3:** Thank you for clarifying the setting. Yes, indeed, these assumptions are realistic for an upper bound. **Q4:** Thank you for the clarification. You say that TULA-MI is task agnostic. However, I don't see how you can apply that approach to a generative task. While for text classification, you have several tokens as input and a single logit vector is output, with a generation task, there are multiple logit vectors for a single sample (one logit vector for each token). **Could you clarify how TULA is task-agnostic in this case?** **Q5:** Thank you for the detailed description. This description should be added to the figure.
Additionally, I think it would make sense to edit the figure so that the values do not overlap and all the values can be read. However, I am still not sure what exactly is meant by "Baseline." **Which approach are you comparing to here?** **Q6:** Thank you for clarifying the metric. **Could you explain why you chose this metric instead of sticking with the TPR@x%FPR metric?** If I am not mistaken, the meaning/expressiveness of both metrics is the same. **Q7:** Thank you for clarifying. --- Reply to Comment 1.1.1: Comment: # Further Response to Reviewer p1Cm We sincerely thank the reviewer p1Cm for reviewing our rebuttal, and we appreciate your valuable feedback! Below is a further explanation of your latest feedback: ## Q4: Why is TULA task-agnostic? For generative tasks, we can use **the predicted next-word logit vector** of a text sample as its **output**, a widely adopted approach in existing MIAs for generative tasks [1][2]. The next-word logit vector effectively indicates whether the model has been trained on the target text. For instance, the next-word prediction distribution may **converge to a specific word** for a *training sample* or **diverge** for an *unseen sample*. Thus, for generative tasks, TULA-MI can be implemented using the next-word logit vector of the target sample. In other words, **for both classification and generative tasks, the input sample always produces one logit vector output for conducting MIA, making our approach task-agnostic**. ## Q5: Which approach are we comparing in Figure 2? We compare our proposed auditing method with **U-LiRA [3]**, previously regarded as a rigorous auditing method. Our results indicate that *the method significantly overestimates the effectiveness of existing unlearning methods*. **We will carefully revise Figure 2 as you suggested**. ## Q6: Why do we utilize NTS@1NFS?
**NTS@1NFS represents the strictest case for TPR@x%FPR**, measuring how many correct samples an adversary can confidently infer **while allowing only a single mistake**. To rigorously evaluate TULA-MI *in the strict black-box scenario* (Section 4.2.1), we apply such a strict metric to **highlight the worst-case privacy leakage** caused by *a highly cautious adversary*. ## References [1] Extracting Training Data from Large Language Models, USENIX Security 2021 [2] Membership Inference Attacks against Language Models via Neighbourhood Comparison, ACL 2023 [3] Inexact Unlearning Needs More Careful Evaluations to Avoid a False Sense of Privacy, preprint 2024
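As we read the metric definition above, NTS@1NFS can be computed by ranking the audit samples by the attack's membership score and counting correct inferences before the adversary's second mistake. The sketch below is our own hypothetical illustration (the function name and toy scores are ours, not the authors'):

```python
import numpy as np

def nts_at_1nfs(scores, is_member):
    """Number of True Samples @ 1 False Sample: rank audit samples by
    membership score (descending) and count correctly flagged members
    before the adversary's second false positive."""
    order = np.argsort(-np.asarray(scores, dtype=float))
    true_count = false_count = 0
    for i in order:
        if is_member[i]:
            true_count += 1
        else:
            false_count += 1
            if false_count > 1:  # second mistake: a cautious adversary stops
                break
    return true_count

# Toy scores from a hypothetical MIA (higher = "more likely a member").
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
members = [True, True, False, True, False, True]
print(nts_at_1nfs(scores, members))  # → 3
```

Under this reading, a larger value after unlearning means the attack can still confidently single out supposedly forgotten samples.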
Assessing Safety Risks and Quantization-aware Safety Patching for Quantized Large Language Models
Accept (poster)
Summary: The paper studies an important but relatively underexplored problem. The evaluation of existing quantization approaches clearly demonstrates the safety issues of quantization and Q-resafe gives significant benefits through widely-accepted safety measurements. Experimental results show that Q-resafe outperforms existing methods like SFT and DPO on a pure-utility oriented dataset. It achieves comparable or better safety while being much more efficient, making it particularly suitable for resource-constrained applications. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes, I reviewed the supplementary material, which includes detailed code and additional experimental results. The code is well-organized and demonstrates the implementation of the proposed methods, including the Q-resafe technique. Relation To Broader Scientific Literature: Quantization is a crucial technique for deploying LLMs in resource-constrained environments, making the study of its impact on model safety essential. Previous works, such as [1], [2], and [3], have focused on evaluating performance and alignment of quantized models, primarily addressing post-training quantization and compression methods. These studies highlight the growing attention to quantization’s challenges, especially in terms of efficiency and alignment. This paper contributes by specifically addressing the safety implications of quantization, an area less explored in prior research. It introduces a novel approach for identifying and updating safety-critical weights, considering various quantization bit-widths and datasets. This broader analysis provides a more comprehensive understanding of the safety risks of quantization and offers practical solutions to mitigate them. 
Essential References Not Discussed: A few related works [1,2,3] are essential to understanding the broader context of the paper’s key contributions but are not currently cited or discussed. Specifically, the paper would benefit from referencing the following works: Reference: [1] "Exploiting LLM Quantization." arXiv preprint arXiv:2405.18137 (2024). [2] "HarmLevelBench: Evaluating Harm-Level Compliance and the Impact of Quantization on Model Alignment." In NeurIPS Safe Generative AI Workshop 2024. [3] "Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression." arXiv preprint arXiv:2403.15447 (2024). Other Strengths And Weaknesses: Strengths: 1. This paper comprehensively studies the safety issues brought about by LLM quantization. 2. The assessment reveals an interesting phenomenon: quantization damages safety more than utility. 3. The proposed Q-resafe method is novel and effective. It examines the feasibility of identifying and updating a small portion of safety-critical weights, then exploits potential tools for identifying these weights, and constructs a pair of masking matrices corresponding to the LoRA variables. Weaknesses: 1. The safety assessment method is quite simple, and prefix matching of model output may result in false positives [4]. 2. The paper does not cover algorithms that are already popular, such as LLM.int8(), NF4, and FP4, implemented in the bitsandbytes library. The safety issues of these popular algorithms could have a larger impact on the users. Reference: [4] Mazeika, Mantas, et al. "Harmbench: A standardized evaluation framework for automated red teaming and robust refusal." arXiv preprint arXiv:2402.04249 (2024). Other Comments Or Suggestions: The paper is well-organized and clearly presents its focus on the safety evaluation of quantized LLMs. The key contributions and methodology are thoroughly explained.
However, there are some formatting and stylistic issues that could improve clarity: (1) In Algorithm 1, the descriptions of the input parameters “Re-evaluation interval K” and “Safety-critical threshold τ” could be made clearer. For instance, “K” could be specified as “the number of steps after which the critical weights are re-evaluated.” (2) In terms of citations, in line 46 the reference “(cop, 2023)” and in line 86 the reference “(cha, 2023)” should begin with capital letters for consistency. (3) In lines 87 and 91, the word “moreover” is used repeatedly. Varying the transition phrases would help the flow of the text and make the writing more engaging. (4) There is also an extra indentation in line 260 at the start of the paragraph, which should be removed to maintain consistent formatting throughout the paper. (5) In Section 5.2, the references to figures and tables are inconsistent. For example, line 369 refers to “Table 4,” while line 384 mentions “Tab 5.” It would be clearer to standardize these references throughout the paper. The layout of the figures and tables could be improved for better clarity. In Figure 1, the numbers for “LLM-QAT” and “QLora” (82.9 and 83.4) are not aligned with the other values. Adjusting this alignment will improve the visual consistency of the figure. (6) In the appendix, line 758 refers to “Fig. 3,” while previous mentions use “Figure.” Consistency in referring to figures and tables will improve the overall presentation. Questions For Authors: The paper discusses the trade-off between safety and utility in quantized models. Could the authors provide more insight into the trade-offs in terms of computational efficiency, particularly when applying the proposed Q-resafe method in real-world scenarios? How might this method scale for larger models or more complex datasets? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the positive and detailed comments. We have revised the manuscript to include [1–3], which highlight the safety challenges posed by quantization, and clarified our position relative to these works. **Responses to Weaknesses** 1. Thank you for raising this important concern. To reduce the risk of false positives, we use the HarmBench classifier [5], which is a fine-tuned binary classifier that identifies whether a response is malicious or not. Besides, we follow [1] by using the harmfulness score benchmark (ranging from 1 to 5), with GPT-4 as the judge, where higher scores indicate increased harm. We calculate the average harmfulness score across all evaluated instructions on AdvBench. We re-evaluate different quantized Llama-2-7B-Chat with the above benchmarks.

| | ASR (prefix match) | ASR (Harmbench) | Harmful Score |
| ---- | ------------------ | --------------- | ------------- |
| FP16 | 0.3 | 0.3 | 1.02 |
| INT4 | 42.4 | 41.5 | 2.69 |
| INT8 | 39.1 | 38.9 | 2.54 |

We will include a new table in the revised manuscript comparing ASR results under HarmBench with prefix matching vs. classifier-only methods. 2. We have conducted additional experiments using LLM.int8(), FP4, and NF4 on LLaMA-7B-Chat in Appendix C.1. Our findings indicate that Q-resafe maintains strong safety performance compared to these methods, highlighting its robustness even in comparison to established techniques from the bitsandbytes library. **Responses to Other Comments Or Suggestions** We acknowledge that this statement was missing appropriate references, and we apologize for the confusion caused. These studies highlight the risks associated with quantization, particularly in safety-critical scenarios. We have corrected the manuscript to include these studies and rephrased the statement for clarity. **Responses to Questions** We thank the reviewer for raising this important question regarding the computational trade-offs of Q-resafe in real-world applications.
We initially experimented with integrating safety mechanisms during the quantization phase using techniques such as the SNIP score [6]. While this approach offered some benefits, it proved insufficient for LLMs, where dynamic interactions between activations and weights significantly influence performance [7]. Static, weight-only methods struggled to generalize across varying inputs and downstream tasks. For these reasons, we adopted the current post-hoc safety-patching approach, which dynamically identifies and updates safety-critical weights during the model's usage. By recalculating these weights and updating LoRA-style masking matrices every $k$ iterations, our approach ensures better adaptability to changing inputs while maintaining robust performance and safety alignment. This approach introduces minimal computational overhead, as confirmed in our runtime benchmarks **(Tables 4 & 5)**. Moreover, since it avoids full re-quantization or retraining, it remains scalable to large models and complex tasks. Its modular and sparse nature makes Q-resafe readily scalable: it can be combined with parallelization techniques for importance scoring, selective layer targeting, or low-rank adaptation frameworks in larger models and more complex tasks. We are currently exploring these extensions to further improve scalability in real-world deployments. **Reference** [4] Qi, Xiangyu, et al. "Fine-tuning aligned language models compromises safety, even when users do not intend to!." ICLR 2023. [5] Mazeika, Mantas, et al. "Harmbench: A standardized evaluation framework for automated red teaming and robust refusal." NeurIPS 2024. [6] Lee, Namhoon, et al. "SNIP: Single-shot network pruning based on connection sensitivity." ICLR 2019. [7] Liu, Zechun, et al. "Llm-qat: Data-free quantization aware training for large language models." arXiv preprint arXiv:2305.17888. --- Rebuttal Comment 1.1: Comment: I have read the author's rebuttal and the comments of other reviewers.
The additional experimental details provided by the authors fully address my previous concerns. The novelty and contribution of the work remain sufficient. Therefore, I maintain the recommendation to accept. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you very much for your thoughtful follow-up and for taking the time to read our rebuttal and the comments from other reviewers. We truly appreciate your positive feedback and recognition of our efforts to address the previous concerns. We will carefully incorporate the additional details and improvements into the final manuscript to further enhance its clarity and completeness. Once again, thank you for your valuable feedback and support. Best regards, The authors of 9330.
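The review and rebuttal above describe Q-resafe as pairing LoRA-style low-rank updates with masking matrices so that only safety-critical weights are patched. The snippet below is a rough, hypothetical sketch of that idea; the function name, the magnitude-based importance score, and all shapes are our assumptions, not the paper's implementation:

```python
import numpy as np

def masked_lora_patch(W_q, A, B, importance, tau):
    """Add the low-rank LoRA delta (B @ A) only at the top-tau fraction
    of positions ranked by an importance score; all other quantized
    weights are left untouched."""
    k = max(1, int(tau * W_q.size))
    thresh = np.sort(importance.ravel())[-k]
    M = (importance >= thresh).astype(W_q.dtype)  # binary safety-critical mask
    return W_q + M * (B @ A)

rng = np.random.default_rng(0)
W_q = rng.standard_normal((8, 8))        # stand-in for a quantized weight matrix
B = rng.standard_normal((8, 2)) * 0.01   # LoRA factors, rank 2
A = rng.standard_normal((2, 8)) * 0.01
importance = np.abs(W_q)                 # magnitude as a stand-in importance score
W_patched = masked_lora_patch(W_q, A, B, importance, tau=0.1)
print(int((W_patched != W_q).sum()))     # only ~10% of the 64 entries change
```

Because the mask is sparse, the patch touches only a small fraction of the weights, which matches the rebuttal's claim that preserving a small safety-critical subset is what matters.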
Summary: This paper measures the safety of quantization methods and proposes Q-resafe, a method that restores the safety capabilities of quantized LLMs by adding a LoRA module. ## Update after rebuttal Thank you for providing the additional results. I will raise my score to 2. However, I still have some confusion regarding the relationship with current works in the quantization safety area, which has not been addressed by the authors during the rebuttal period. By the way, vector graphics are important. The current figures are difficult to interpret, making it hard to highlight the key points. Claims And Evidence: The claims are mostly supported by the existing experiments presented in the paper. Methods And Evaluation Criteria: The evaluation criteria are appropriate and align with the goals of the study. Theoretical Claims: There is no detailed theory section provided in the paper. Experimental Designs Or Analyses: 1. The experimental models, such as Llama-2-7B-Chat and Gemma-7B-Instruct, are outdated. 2. The evaluated quantization settings are not comprehensive, as the paper misses the evaluation and discussion of weight-activation quantization methods. 3. The evaluation baselines are not state-of-the-art in this area. Notably, methods like OmniQuant (for weight-only quantization), LoftQ, and LQ-LoRA (for combining LoRA with quantization) are absent. Supplementary Material: I have reviewed all sections of the supplementary material. Relation To Broader Scientific Literature: The paper does not clearly differentiate itself from existing works on quantization safety. The main differences from previous research are not well articulated. Essential References Not Discussed: While the paper claims to evaluate the safety of quantization methods for LLMs, it does not discuss key references in the field: [1]. PB-LLM: Partially Binarized Large Language Models. ICLR 2024. [2]. BiLLM: Pushing the Limit of Post-Training Quantization for LLMs. ICML 2024. [3].
Duquant: Distributing outliers via dual transformation makes stronger quantized llms. NeurIPS2024. [4]. OstQuant: Refining Large Language Model Quantization with Orthogonal and Scaling Transformations for Better Distribution Fitting. ICLR 2025. [5]. SpinQuant: LLM Quantization with Learned Rotations. ICLR 2025. Including discussions on these works would strengthen the paper’s context and relevance. Other Strengths And Weaknesses: 1. The topic of safety for quantized LLMs is not novel, and the paper lacks detailed discussions of existing works. The Introduction section mentions only a few sentences on this matter. 2. The analysis in Section 3.2 mainly summarizes evaluation results, but it lacks deeper insights. 3. Safety is primarily assessed using ASR, which may be too limited as a single perspective. Other Comments Or Suggestions: Figure 2 is unclear, which may hinder readers' understanding. I suggest improving its clarity. Questions For Authors: 1. Could you provide evaluation results for weight-activation quantization methods, which are currently missing? 2. Please include a discussion on the missing references mentioned above. 3. Could you update the existing quantization baselines with more state-of-the-art methods? 4. Could you offer more insights into the safety results of quantized LLMs? For example, what can we learn from the data presented in Table 3? 5. Does Q-resafe rely on a specific safety-patching dataset? Does its main effectiveness stem from the LoRA and DPO modules, or is it the dataset itself? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their valuable feedback. Due to the word limit, we give a summary response. We are eager to have a more profound discussion. **Response to Questions:** **Q1. Supplementary evaluation results** We updated the quantization baselines with state-of-the-art methods on Llama-3.1-8B-Instruct, covering various quantization strategies: weight-only quantization (PB-LLM, BiLLM), weight-activation quantization (SpinQuant, OmniQuant, DuQuant), and quantization with fine-tuning (IQ-LoRA, LoftQ). For safety evaluation, we use the harmful score benchmark (1 to 5) from Qi et al., where higher scores indicate more harm. For utility evaluation, we report perplexity (PPL) on WikiText2. **Q2. Missing references** Thank you for highlighting the missing references [1-5], which cover partial binarization, post-training quantization, and dual transformation for optimization. We have integrated these methods into our updated evaluation (see Q3) for a comprehensive comparison. Results show that while these methods aim to preserve utility, they often overlook safety, which is essential for practical applications. **Q3. Update quantization baseline**

| Methods | Setting | ASR $\downarrow$ | Harmful Score $\downarrow$ | PPL $\downarrow$ |
| --- | --- | --- | --- | --- |
| FP16 | W16A16 | 0.2 | 1.02 | 6.14 |
| PB-LLM | W2A16 | 86.4 | 4.46 | 6.30 |
| BiLLM | W4A16 | 48.1 | 2.95 | 32.48 |
| OmniQuant | W4A4 | 79.5 | 4.18 | 6.45 |
| SpinQuant | W4A4 | 36.4 | 2.47 | 6.30 |
| DuQuant | W4A4 | 26.8 | 2.15 | 8.06 |
| IQ-LoRA | W4A16 | 34.7 | 2.40 | 6.42 |
| LoftQ | W4A16 | 65.3 | 3.64 | 6.18 |
| **Qresafe (Ours)** | W4A16 | 3.4 | 1.09 | 6.35 |

Results indicate that existing methods primarily target minimizing utility loss but often ignore safety. Q-resafe shows superior safety performance while maintaining competitive utility.
Due to time constraints, additional experiments with lower-bit settings will be included in the revision. Why we initially used Llama-2-7B-Chat and Gemma-7B-Instruct: These models were chosen for their widespread use in existing quantized LLM studies, ensuring comparability. Despite being relatively outdated, they remain relevant for studying safety issues. **Q4. Insights into the safety results of quantized LLMs** From Table 3, we observe: 1. Quantization Techniques: All methods degrade safety. Weight-only quantization has less impact compared to fine-tuning. Parameter-efficient fine-tuning (e.g., IQ-LoRA) tends to degrade safety more than full-parameter fine-tuning. 2. Bit Precision: Lower-bit quantization significantly affects safety, indicating a trade-off between efficiency and safety. 3. Model: Models with stronger reasoning capabilities tend to preserve safety better after quantization compared to chat-optimized models. To validate these findings, we preserved a portion of safety-critical (top $\tau$) weights as FP16 while quantizing the rest to FP4 on Llama-3.1-8B-Instruct:

| Top $\tau$ | ASR $\downarrow$ | Harmful Score $\downarrow$ |
| --- | --- | --- |
| 0 (full FP4) | 68.5 | 3.81 |
| 0.05 | 5.4 | 1.25 |
| 0.1 | 3.7 | 1.19 |
| 0.2 | 2.5 | 1.15 |
| 0.5 | 0.4 | 1.06 |
| 1 (full FP16) | 0.2 | 1.02 |

Even preserving a small portion of safety-critical weights significantly improves safety while retaining quantization efficiency. **Q5. Safety patch dependency analysis** Q4 results indicate that Q-resafe's effectiveness does not depend on fine-tuning or specific safety-patching datasets. Instead, preserving safety-critical weights is crucial. Q-resafe requires minimal safety-patching samples (e.g., 10) to maintain performance. We will provide more insightful analysis as suggested.
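The top-$\tau$ ablation described in the rebuttal above (keep the safety-critical weights in full precision, quantize the rest) could be mimicked roughly as follows. This is a toy sketch, not the authors' procedure: weight magnitude stands in for their unspecified importance score, and the round-to-nearest scheme is our assumption.

```python
import numpy as np

def preserve_and_quantize(W, tau, bits=4):
    """Round-to-nearest quantize W to `bits` bits, but keep the top-tau
    fraction of weights (here ranked by magnitude) in full precision."""
    scale = np.abs(W).max() / (2 ** (bits - 1) - 1)
    levels = 2 ** (bits - 1)
    W_low = np.clip(np.round(W / scale), -levels, levels - 1) * scale
    k = max(1, int(tau * W.size))
    thresh = np.sort(np.abs(W).ravel())[-k]
    return np.where(np.abs(W) >= thresh, W, W_low)

W = np.array([[1.0, -0.3], [0.05, 0.7]])
out = preserve_and_quantize(W, tau=0.25)  # keep only the single largest weight
print(out[0, 0], out[1, 0])               # 1.0 survives exactly; 0.05 rounds to 0.0
```

Sweeping `tau` from 0 to 1 in such a setup is exactly the axis of the ablation table: full quantization at one end, the full-precision model at the other.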
**Response to Other Strengths And Weaknesses & Suggestions:** We appreciate the suggestions and have enhanced our discussion by incorporating recent studies, analyzing the effects of different quantization strategies on safety, and including more Harmful Score and PPL results. These updates improve clarity and comprehensiveness in our revised manuscript.
Summary: The paper presents a comprehensive safety evaluation of quantized LLMs. Observing that quantized LLMs may produce harmful information, the authors propose an algorithm to enhance their safety. Claims And Evidence: The claims in the paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method and evaluation make sense for the problem. Theoretical Claims: There do not appear to be any theoretical claims in the paper. Experimental Designs Or Analyses: I have checked the soundness of the experimental designs. From my perspective, there are three possible weaknesses: 1. I am concerned that the proposed Q-Resafe method may negatively affect the performance of quantized LLMs. Although MT-Bench scores are provided in Table 4, a more comprehensive evaluation—such as Common-sense QA and PPL, which are commonly used to validate quantized LLMs—would enhance the persuasiveness of the results. 2. **Lack of baselines.** There are no baseline methods compared with Q-Resafe. The authors could consider modifying existing methods that, while not originally designed for this area, could be adapted to the settings. 3. Table 3. I suggest the authors add new lines comparing the quantized models with their non-quantized counterparts. Supplementary Material: I have reviewed the appendix. Relation To Broader Scientific Literature: The key contributions can be summarized as follows: 1. The paper conducts a risk evaluation of quantized LLMs. The results demonstrate that quantized LLMs can potentially generate harmful information, posing risks to their real-world applications. 2. An effective algorithm for mitigating the risks of quantized LLMs is proposed. Experiments show that the method is both efficient and effective. Essential References Not Discussed: To my knowledge, no essential references are missing. Other Strengths And Weaknesses: Strengths: 1. 
The paper comprehensively investigates the risk problem in quantized LLMs and introduces a method to mitigate it using a calibration dataset. The structure is clear and well-organized. 2. The paper is well-motivated. 3. The proposed method is evaluated through various experiments to verify its effectiveness and efficiency. Weaknesses: 1. It is unusual that the quantization fine-tuning method performs worse than AWQ in terms of ASR scores. The underlying reasons remain to be investigated. Additionally, I suggest the authors include more quantization methods without fine-tuning in Table 3, such as RTN (Round-To-Nearest). Other Comments Or Suggestions: 1. Figure 2 is somewhat unclear. The authors should use vector graphics. Questions For Authors: 1. It is weird that the Q-Resafe method achieves significantly better performance compared to quantization fine-tuning methods in terms of utility scores when comparing Tables 3 and 4. What are the reasons behind this phenomenon? Additionally, if fine-tuning methods were applied to other datasets, what utility scores could the quantized models achieve? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We greatly value your feedback and appreciate your insightful suggestions. We have carefully considered your comments and made the necessary improvements. We are eager to have more profound discussions to further enhance our work. **For Weaknesses: Add more quantization methods without fine-tuning** Thank you for your suggestions. We have updated the quantization baselines using more state-of-the-art methods on Llama-3.1-8B-Instruct. The newly included methods cover weight-only quantization: PB-LLM (ICLR'24), BiLLM (ICML'24); and weight-activation quantization: SpinQuant (ICLR'25), OmniQuant (ICLR'24), DuQuant (NeurIPS'24). For utility evaluation, we report the **perplexity (PPL)** on WikiText2.

| Methods | Setting | ASR $\downarrow$ | PPL $\downarrow$ |
| --- | --- | --- | --- |
| FP16 | W16A16 | 0.2 | 6.14 |
| RTN | W4A16 | 35.6 | 10.95 |
| PB-LLM | W2A16 | 86.4 | 6.30 |
| BiLLM | W4A16 | 48.1 | 32.48 |
| OmniQuant | W4A4 | 79.5 | 6.45 |
| SpinQuant | W4A4 | 36.4 | 6.30 |
| DuQuant | W4A4 | 26.8 | 8.06 |
| Qresafe (Ours) | W4A16 | 3.4 | 6.35 |

These results indicate that many state-of-the-art quantization methods focus on reducing utility losses but ignore the preservation of safety. **For Question: Utility scores of different quantization methods** Thank you for highlighting this point. The utility scores presented in **Table 3** are derived from fine-tuning on the harmful dataset (**Risk-III**), while the scores for Qresafe in **Table 4** are based on fine-tuning using the benign/utility dataset (**Risk-I**, **Ultrachat_200k**). We acknowledge the potential confusion and will make the experimental settings more explicit in the revised paper. It is important to clarify that the utility of the model produced by QAT varies depending on the data used. Table 11 shows the safety and utility comparison of fine-tuned LLMs on Risk-I examples (UltraChat_200k) after 1 epoch of training.
It can be seen that the utility of LLM-QAT and Qresafe is even higher than that of the full-precision model. The reason Qresafe is better is that it uses DPO, while LLM-QAT uses SFT. **For Comments: Use vector graphics.** Thank you for your insightful suggestions! We will update all of the figures with vector graphics in the revision of our paper.
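For reference, the WikiText2 perplexity used as the utility metric throughout this thread is just the exponentiated mean per-token negative log-likelihood. A generic sketch (not the authors' evaluation code):

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean per-token negative log-likelihood)."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Sanity check: a model that assigns every token probability 1/8
# has perplexity 8, regardless of sequence length.
print(perplexity([math.log(8)] * 100))
```

Lower is better, which is why a jump like BiLLM's 32.48 (vs. 6.14 at FP16) signals a large utility loss even before any safety evaluation.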
Federated Oriented Learning: A Practical One-Shot Personalized Federated Learning Framework
Accept (poster)
Summary: The paper introduces Federated Oriented Learning (FOL), a novel one-shot personalized federated learning (OPFL) framework designed for communication-constrained environments such as LEO satellite networks. FOL integrates multi-stage processes—fine-tuning, structured pruning with alignment regularization, ensemble refinement, and knowledge distillation—to enable clients to adaptively integrate knowledge from neighboring models under stringent communication constraints. Theoretical guarantees on empirical risk discrepancy and convergence are provided. Extensive experiments on wildfire, hurricane, CIFAR-10, and CIFAR-100 datasets demonstrate FOL’s superiority over baselines like FedAvg, DENSE, and Co-Boosting, achieving accuracy improvements. Claims And Evidence: The assumption that the local validation set has the same distribution as the final test set, and using it to determine the contribution of each model in the ensemble, can lead to overfitting on the final test set. This is unrealistic in practical scenarios. Methods And Evaluation Criteria: The communication cost of peer-to-peer communication between all nodes and their neighboring nodes needs to be experimentally compared with the communication cost in a federated learning setup with a central server. Theoretical Claims: No.I haven't checked the details in the proofs. Broadly speaking, such assumptions and results are reasonable. Experimental Designs Or Analyses: 1. FOL involves multiple stages (e.g., fine-tuning, pruning), but the paper does not compare training time or communication costs against baselines. For resource-constrained environments, this omission limits practical applicability assessment. 2. In the experiments of this paper, the authors emphasize the performance of the ensemble model, but the ultimate goal of the algorithm should be a single personalized model. 
Furthermore, the performance of naive local training is actually quite similar to the personalized model produced by the algorithm, especially considering the significant increase in storage and computational costs introduced by the proposed method. If the 30-times storage space used for the ensemble model were instead allocated to improving the local model, the performance of the local model could potentially be further enhanced. Supplementary Material: I read the supplementary material generally, without carefully checking the details of the proofs therein. Relation To Broader Scientific Literature: The authors focus on a setting where there is no central server, and clients communicate directly with their neighbors, which represents a decentralized learning framework. This field has already been extensively studied, and the paper should reference related works in this area. Essential References Not Discussed: I think some related decentralized learning works should be cited. Other Strengths And Weaknesses: The authors focus on a setting where there is no central server, and clients communicate directly with their neighbors, which represents a decentralized learning framework. This field has already been extensively studied, and the paper should reference related works in this area. In the proposed setup, where clients directly communicate model parameters with their neighbors, serious privacy concerns arise. Additionally, the assumption that the local validation set has the same distribution as the final test set, and using it to determine the contribution of each model in the ensemble, can lead to overfitting on the final test set. This is unrealistic in practical scenarios. The definition of "neighbor models" in the paper lacks sufficient detail. Specifically, it remains unclear how the adjacency or relationship between neighbors is determined and whether it changes over time.
The communication cost of peer-to-peer communication between all nodes and their neighboring nodes needs to be experimentally compared with the communication cost in a federated learning setup with a central server. In the experiments of this paper, the authors emphasize the performance of the ensemble model, but the ultimate goal of the algorithm should be a single personalized model. Furthermore, the performance of naive local training is actually quite similar to the personalized model produced by the algorithm, especially considering the significant increase in storage and computational costs introduced by the proposed method. If the 30 times the storage space used for the ensemble model were instead allocated to improving the local model, the performance of the local model could potentially be further enhanced. Other Comments Or Suggestions: In the problem statement part, “Given an image classification task” may be not necessary. As far as I know, other PFL articles usually clarify the specific task in the experiment instead of problem setting. Personally, I think deleting this sentence makes the method look more generalized rather than being limited in a specific domain. Of course, this is just a suggestion, not a necessity. Questions For Authors: The details have been discussed in the strengths and weaknesses part. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **1. Concern About the Local Validation Set Having the Same Distribution as the Final Test Set.** **Response:** We would like to clarify that, in personalized learning, it is standard practice to assume that the local validation set and the test set follow the same distribution. In real-world scenarios, such as LEO satellite constellations, each client’s data is collected by the client itself, and the training, validation, and test sets are just uniformly sampled from these data. This naturally implies that the distributions of these validation and test sets are aligned. **2. Definition of "Neighbor Models" and Communication Cost Comparison.** **Response:** Regarding the definition of neighbor models, we clarify that in our framework a neighbor is any client reachable via a one-hop connection. For example, in a Starlink LEO satellite network, each satellite continuously broadcasts beacons, and any satellite that receives a beacon is considered a neighbor. Regarding the communication costs, we respectfully clarify that in our one-shot setting, each client exchanges models only once with its one-hop neighbors, incurring significantly lower communication overhead than the iterative rounds required by an FL setup with a central server. **3. Personalized Model Performance and Ensemble Storage Use.** *Furthermore, the performance of naive local training is actually quite similar to......, If the 30 times the storage space used....* **Response:** The reviewer seems to have misread the performance comparison between the naive local model and our personalized model shown in Tables 1-3 in the paper. We respectfully clarify that our final personalized model consistently outperforms naive local training. As shown in these tables, FOL improves accuracy by up to 13.95\% on Wildfire ($\psi$ = 0.5), 30.16\% on Hurricane ($\psi$ = 0.3), 6.77\% on CIFAR-10 ($\psi$ = 0.7), and 9.01\% on CIFAR-100 ($\psi$ = 0.7).
These consistent and substantial gains confirm that our method effectively extracts and integrates valuable knowledge from neighbor models. As for the suggestion of using the storage budget (30 times the model size) to train a larger local model instead of using it to fuse neighbors' models, we note that increasing model size alone can potentially increase the risk of overfitting and cannot achieve the benefits of aggregating (new) knowledge from non-IID clients. Additionally, we also want to point out that the 30-times storage cost is only needed at the ensemble step. After the ensemble model is distilled, the final personalized model has the same size as a standard local model. **4. Privacy Concerns.** **Response:** In this work, our primary focus is on improving model accuracy, and thus the issue of privacy is out of the scope of this paper. In real-world applications such as Starlink LEO satellite networks, the satellites typically belong to the same operator, and therefore there is no privacy issue. Moreover, if privacy is required in other applications, homomorphic encryption can be integrated into the parameter-sharing process, enabling computations directly on encrypted data without compromising performance. **5. Training Time or Communication Costs Against Baselines.** *The communication cost of peer-to-peer communication......needs to be experimentally compared with....* **Response:** We want to point out that our proposed FOL method has lower computation and communication overhead than DENSE and Co-Boosting (i.e., the baselines). Specifically, DENSE and Co-Boosting rely on a modified GAN structure in which the generator is composed of multiple large fully connected layers and runs for 30 epochs per distillation epoch over a total of 200 epochs (i.e., 6,000 epochs in total for the GAN alone).
In FOL, the computation costs of the various components are as follows: fine-tuning: 30 epochs; structured pruning: 30 epochs; post fine-tuning: 30 epochs; ensemble refinement: 10 epochs; knowledge distillation: at most 500 epochs. In total: at most 600 epochs, which is much smaller than that of DENSE and Co-Boosting. Moreover, these additional processes incur only local computational cost, with no extra communication overhead. **6. Related Work in Decentralized Learning.** **Response:** In the revised version, we will add additional citations and discussion of relevant decentralized learning approaches. It is important to note that, to the best of our knowledge, no existing decentralized learning work provides a one-shot personalization feature. Our work is the first to provide such a feature with both theoretical guarantees and experimental validation.
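As context for the distillation budget discussed above: the knowledge-distillation step trains a compact student to mimic the (larger) teacher's output distribution. The following is a minimal numpy sketch of a generic temperature-scaled KD objective, not the authors' exact formulation (the function names and temperature value are illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) at temperature T, averaged over the batch."""
    p = softmax(teacher_logits, T)                     # soft targets
    log_q = np.log(softmax(student_logits, T) + 1e-12)
    log_p = np.log(p + 1e-12)
    return float((p * (log_p - log_q)).sum(axis=-1).mean())

# Toy check: the loss is zero when the student matches the teacher,
# and positive otherwise.
teacher = np.array([[2.0, 0.5, -1.0]])
assert abs(kd_loss(teacher, teacher)) < 1e-9
assert kd_loss(np.array([[0.0, 0.0, 0.0]]), teacher) > 0.0
```

In practice this soft-target term is usually mixed with the ordinary cross-entropy on ground-truth labels; the 500-epoch distillation budget quoted above bounds how long this objective is minimized.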
Summary: • In order to address the situation of limited client communication in federated learning, this paper introduces a novel federated learning paradigm, OPFL, and presents a four-stage one-shot PFL algorithm, FOL (Federated Oriented Learning). FOL can learn a personalized model for each client without the need for a central server to generate a global model. The convergence analysis is also discussed in the paper. Under the Low Earth Orbit (LEO) scenario, FOL demonstrates good performance on the Wildfire and Hurricane satellite image datasets, as well as on CIFAR10 and CIFAR100. Claims And Evidence: In the reality of Low Earth Orbit, each client can often only communicate with neighboring clients, and adding a global server for all Low Earth Orbits can be very expensive. This paper attempts to address this situation. Methods And Evaluation Criteria: For the communication situation faced by Low Earth Orbit (LEO), FOL can learn a good model for each orbit relatively well. Theoretical Claims: I mainly checked the correctness of Theorem 2, and the main part of the theoretical proof in the paper is correct, with some minor writing issues. (1) Equation (28) has not been fully expressed. (2) In equation (17), the parameter L is defined as the parameter of L-smoothness, but in the proof of Theorem 1 in B.3, the parameter for the L-Lipschitz condition is also L. Experimental Designs Or Analyses: The authors mainly focus on the validation of satellite datasets. The SVHN dataset is used in both the DENSE and Co-Boosting papers, but this paper does not use this dataset. I think this paper should validate the method on more datasets. Supplementary Material: No Relation To Broader Scientific Literature: This paper may be helpful for federated learning in satellites. The authors mainly conduct experiments on relevant satellite datasets. Prior to this, there have been some works on one-shot federated learning, but they mainly focus on learning a global model.
This paper proposes to learn a personalized model for each client in one-shot federated learning. Essential References Not Discussed: The authors' research on relevant papers is relatively comprehensive, and they give a relatively complete introduction to one-shot Federated Learning and Personalized Federated Learning, which are most relevant to this paper. Other Strengths And Weaknesses: Weakness: (1) This paper should consider the fairness issue in federated learning. This paper assumes a setting where each client can only communicate with its neighboring clients. Consider the following scenario: there are three groups of clients, K1, K2, and K3, which cannot communicate internally among themselves. Each client in K1 and K3 can only communicate once, while K1 can communicate with K2, K2 can communicate with K3, but K1 cannot communicate directly with K3. If all clients in K1 first communicate with all clients in K2, and then all clients in K2 communicate with all clients in K3, when the number of clients in K1 and K3 is the same, each client in these groups will have the same communication cost. However, each client in K1 can only collect knowledge from clients in K2, while clients in K3 can collect knowledge from both K1 and K2 (since K2 processes and aggregates K1's knowledge). Therefore, at the same communication cost, K3 obtains more knowledge than K1. This disparity becomes even greater when there are more clients in K1. It is unfair to them. (2) In P4, 'Each gating parameter $\alpha_{l,i} \in [0,1]$ controls the retention or pruning of the i-th filter or neuron.' According to this, $\alpha_{l,i}$ is the gating parameter, which is a real number in [0,1], such as 0.5. This is equivalent to scaling each weight to a certain extent, and this is not a common pruning process. Common pruning methods either set the weights to 0 or retain the weights.
Other Comments Or Suggestions: There are some writing errors in the paper, such as using the same hyperparameter $\lambda$ in equations (5) and (12), even though the hyperparameters are different. In P5, 'The proof of Theorem 2 is provided in Appendix B.2.' should perhaps be 'The proof of Theorem 1 is provided in Appendix B.3.' Questions For Authors: In equation (5), there are hyperparameters $\lambda$, $\gamma_{\text{shared}}$, and $\gamma_{\text{unshared}}$. How to set the values for the hyperparameters in equations (5) and (12) is not mentioned in the paper. I did not see any experimental results on them. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **1. Fairness Issue in the One-Shot Communication Setting.** **Response:** Please note that for personalized learning, fairness does not mean equal accuracy across individual users; instead, it means every user has a comparable opportunity to improve its accuracy (i.e., opportunity of learning). Under this definition of fairness, the major issue in the learning example given by the reviewer, i.e., $(t_0: K1 ⟷ K2), (t_1: K2 ⟷ K3)$, where $t_1>t_0$ and $(t: a ⟷ b)$ denotes a learning at time $t$ between users $a$ and $b$ (because they meet at that moment), is that it only considers a short interval $[t_0, t_1]$ in the learning period of these users while artificially neglecting what could happen after that interval, e.g., $K1$ may later meet and learn from someone who has learned from $K3$ earlier. Specifically, consider the following example sequence of learning that extends the reviewer's example: $(t_0: K1 ⟷ K2), (t_1: K2 ⟷ K3), (t_2: K3 ⟷ K4), (t_3: K1 ⟷ K4)$. So, at moment $t_3$, $K1$ makes up its knowledge of $K3$ by learning from $K4$, who just learned from $K3$ at moment $t_2$. To make the above example more concrete, we have conducted an experiment on the aforementioned sequence of learning, and list the accuracy of $K1$ at different moments in the table below. From this table, it can be observed that the accuracy of $K1$ was improved at $t_0$ and $t_3$, indicating that $K1$ indeed obtained the opportunity of improving its accuracy by learning at these two moments, comparable to the learning opportunities possessed by $K3$. In general, by considering the more realistic scenario where users encounter each other over a wider time horizon, the fairness issue raised by the reviewer will diminish due to the knowledge propagation among users.
**Table: Accuracy of $K1$ at different moments over Hurricane and CIFAR-10.**

| Methods | Hurricane ($\psi=0.3$) | CIFAR-10 ($\psi=0.7$) |
|---------------------------------------------|-------------------------|------------------------|
| Initial (i.e., Local) | 82.14\% | 60.47\% |
| FOL-A ($t_0: K1 ⟷ K2$) | 91.07\% | 63.18\% |
| FOL-A ($t_3: K1 ⟷ K4$) | 92.86\% | 67.15\% |
| FOL ($t_0: K1 ⟷ K2$) | 85.71\% | 61.91\% |
| FOL ($t_3: K1 ⟷ K4$) | 89.07\% | 63.13\% |

**2. Gating Parameters and Pruning Mechanism.** **Response:** Compared with the common **hard** pruning method, wherein weights are either strictly set to zero or fully retained based on a fixed/unified threshold, our **soft** pruning has the unique advantage of adaptive, fine-grained selection of the pruning threshold for each individual connection (i.e., threshold $\alpha_l$ for weight $W_l$). These gating parameters/thresholds are optimized during training, allowing the model to selectively prune less important connections, which ultimately results in a more robust model. This strategy is also commonly used in recent differentiable pruning methods. **3. Hyperparameter Settings in Equations (5) and (12).** **Response:** In our experiments, for CIFAR-10, CIFAR-100, and the satellite datasets (Wildfire and Hurricane), we set $\lambda = 0.1$, $\gamma_{\text{shared}} = 0.05$, and $\gamma_{\text{unshared}} = 0.02$ in Equation (5), and we set the distillation regularization weight in Equation (12) to $0.01$. These values were selected via cross-validation and were found to consistently yield robust performance, effectively balancing alignment, diversity retention, and model personalization. We will include this information in the final paper. **4. Minor Writing Issues.** **Response:** We acknowledge that Equation (28) was not fully expressed, that the same constant "L" is used for both the L-smoothness and L-Lipschitz conditions, and that the same notation $\lambda$ appears in Equations (5) and (12).
In the revised manuscript, we will update the notation to clearly distinguish between the different constants and ensure that all equations are fully expressed. Importantly, these issues do not affect the correctness of the proofs, as confirmed by our analysis and by the reviewer's own inspection of Theorem 2. **5. Suggestion to Include SVHN.** **Response:** We want to point out that our validation in the paper is based on more comprehensive and domain-relevant datasets than those used for DENSE and Co-Boosting. Specifically, none of the datasets used to validate DENSE and Co-Boosting in their original papers is relevant to satellite applications, which are the typical applications our proposed FOL method targets. In contrast, the Wildfire and Hurricane datasets used in our paper are more relevant to the satellite applications of the proposed FOL method.
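The soft vs. hard pruning distinction debated in point (2) of the review and rebuttal above can be made concrete with a small numpy sketch. The shapes and the 0.5 threshold are toy assumptions; in FOL the gates are optimized during training rather than drawn at random:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer l: 8 filters, each with 4 input weights,
# plus one learnable gate alpha_{l,i} in [0, 1] per filter.
W = rng.normal(size=(8, 4))
alpha = rng.uniform(0.0, 1.0, size=8)

# Soft pruning: during training, each gate continuously scales its filter,
# so the "pruning strength" is differentiable and learned per connection.
W_soft = alpha[:, None] * W

# Hard pruning (the common baseline): filters are either kept or removed
# outright; here applied once at the end using a fixed threshold.
keep = alpha > 0.5
W_pruned = W[keep]

print(W_soft.shape, W_pruned.shape)
```

The point of contention is visible in the two lines: `W_soft` rescales weights continuously (what the reviewer observed), while the final thresholding step recovers a conventional structurally pruned layer.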
Summary: This paper first introduces an important limitation of existing Personalized Federated Learning methods, which is the need for multiple communication rounds to update models. This leads to massive communication costs and is impractical in real-world scenarios. Moreover, the authors argue that personalizing the global model is not feasible in practice, as the global model produced by federated learning (FL) algorithms typically lacks the adaptive modules necessary for effective local adaptation. Based on this, the authors propose a novel algorithm for one-shot personalized federated learning (PFL), called Federated Oriented Learning (FOL), which operates in a decentralized manner and enables clients to iteratively enhance their local models by learning from their neighbors through a single round of local model communication. FOL consists of several key stages: model pretraining, model collection, fine-tuning, pruning, post fine-tuning, ensemble refinement, and knowledge distillation. Additionally, the authors establish two theoretical guarantees: one on the empirical risk discrepancy between the student and teacher models, and another on the convergence of the distillation process. The paper demonstrates a well-motivated and innovative approach; however, certain critical steps are described in a vague and potentially misleading manner, which could hinder the clarity of the methodology. It is recommended that the authors enhance the readability of these sections to ensure a more precise and accessible presentation. Additionally, the complexity of the proposed method appears to be notably high, raising concerns about its practical applicability. A more detailed discussion on computational efficiency and scalability would greatly strengthen the paper. Claims And Evidence: 1.
The authors provide a thorough and detailed analysis of existing research and the problem at hand in the introduction section, effectively highlighting the significance of the issue and presenting a well-justified motivation. Methods And Evaluation Criteria: 1. Each client collects the local models of its neighbors and performs operations such as fine-tuning and model alignment via structured pruning on its own dataset. Essentially, every step involves training all the neighbors' local models on the local dataset, which is undoubtedly unsuitable for resource-constrained devices. Moreover, the authors do not clarify whether each local model is trained on the local dataset only once or for multiple epochs. The authors should reduce unnecessary computational burdens to enhance the practical applicability of the proposed method. 2. The Optimal Weighted Ensemble described in Section 3.3 appears to be misleading and requires further clarification. In Line 231, it seems to suggest assembling models to form a new model, but later descriptions indicate that this step is more akin to ensembling logits. Additionally, the statement that **"the ensemble outcome (a.k.a. the teacher model) is K times larger"** is unclear, as Equation (10) implies that the size of the outcome remains unchanged, with the corresponding elements being weighted sums. Theoretical Claims: 1. The paper would benefit significantly from a convergence analysis of Equation (5), as it would strengthen the theoretical foundation of the proposed method. Experimental Designs Or Analyses: 1. The authors claim that their method is effective for clients with highly diverse datasets; however, the experiments do not include any scenarios that validate this claim. 2. Given that the Ensemble Model (FOL-A) consistently outperforms other approaches in the experimental results, including full FOL, it raises the question of whether the Knowledge Distillation component should be removed.
Alternatively, is there a more effective personalization strategy that could replace knowledge distillation? 3. The number of baseline methods appears to be somewhat limited. Supplementary Material: 1. Eq. (28) contains some typos. Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: See above. Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Optimal Weighted Ensemble Clarification.** *Optimal Weighted Ensemble described in Section 3.3 appears to be misleading...; Additionally, the statement that "the ensemble outcome (a.k.a. the teacher model) is K times larger...* **Response:** We respectfully clarify that the "Optimal Weighted Ensemble" in Section 3.3 is performed entirely at the logit level. In our approach, we aggregate the outputs (logits) of the K selected base models using a weighted sum (as shown in Equation (10)), so the ensemble's final output retains the same dimensionality as that of a single model. More formally, let the vector $\mathbf{g}_i = [g_i^{(1)}, \ldots, g_i^{(N)}]$ denote the $N$-dimensional logit output of base model $i$; then the output of the ensemble model can be written as $\mathbf{G} = \sum_{i=1}^{K} w_i \mathbf{g}_i$, where $K$ is the number of base models participating in the ensemble. Furthermore, in our original text, the phrase "K times larger" referred to the fact that the ensemble model is composed of K base models, and hence the number of parameters in the ensemble model is roughly $K$ times the size of a base model. It was never intended to imply that the final output dimension (i.e., the dimensionality of $\mathbf{G}$) increases by a factor of $K$. **2. Role of Knowledge Distillation.** *Given that the Ensemble Model (FOL-A) consistently outperforms ... it raises the question ... should be removed. Alternatively, is there a more effective personalization strategy that could replace knowledge distillation?* **Response:** The knowledge distillation component is necessary in order to ensure the final personalized model has the same size as the initial local model. In particular, the ensemble step (FOL-A) bloats the model size by $K$ times, and the distillation step compresses the bloated model by a factor of $K$, leading to a final model of roughly the same size as the initial model.
Furthermore, to the best of our knowledge, knowledge distillation achieves the best performance among all other personalization strategies in one-shot federated learning scenarios. **3. Computational Complexity and Practical Applicability.** *Each client collects the local models of its neighbors and performs operations...; Moreover, the authors do not clarify whether each local model is trained on the local dataset only once or for multiple epochs...* **Response:** We want to point out that our proposed FOL method has much lower computation and communication overhead than DENSE and Co-Boosting (i.e., the baselines). Specifically, DENSE and Co-Boosting rely on a modified GAN structure in which the generator is composed of multiple large fully connected layers and runs for 30 epochs per distillation epoch over a total of 200 epochs (i.e., 6,000 epochs in total for the GAN alone). In FOL, the computation costs of the various components are as follows: fine-tuning: 30 epochs; structured pruning: 30 epochs; post fine-tuning: 30 epochs; ensemble refinement: 10 epochs; knowledge distillation: at most 500 epochs. In total: at most 600 epochs, which is much smaller than that of DENSE and Co-Boosting. Moreover, these additional processes incur only local computational cost, with no extra communication overhead. Furthermore, we note that FOL is designed for one-shot communication scenarios, e.g., LEO satellite networks or intermittent IoT systems, for which limited communication bandwidth, instead of computation power, is the primary constraint. **4.
Experiments on Data Diversity and Baseline Selection.** *The authors claim that their method is effective for clients with highly diverse datasets...; The number of baseline methods appears to be somewhat limited.* **Response:** In our experimental settings, we use a Dirichlet distribution to partition datasets across 70 clients, thereby explicitly simulating highly non-IID conditions with varying degrees of data heterogeneity (with $\psi$ values ranging from 0.1 to 0.7). Regarding baseline methods, we compare against widely recognized state-of-the-art one-shot federated learning approaches, namely DENSE and Co-Boosting. We believe that our experimental design thoroughly demonstrates the effectiveness of our method. **5. Suggestion to Include the Convergence Analysis of Equation (5).** **Response:** We want to point out that structured pruning is just one of several intermediate steps within FOL. Instead of proving the convergence of this intermediate step, we have proved the convergence of the final step of the proposed FOL method in Theorem 2 of the paper. We believe proving the convergence of the final step is more meaningful and important than proving the convergence of an intermediate step. In our extensive experiments, we have observed that all intermediate steps consistently converge. **6. Minor Writing Issue in Eq. (28).** **Response:** We appreciate the reviewer pointing out this typo and will ensure all equations are correctly presented in the final paper. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. After carefully reading the reviews from others and the corresponding rebuttals, some of my concerns have been addressed. However, I still believe there is a lot of room for improvement. Therefore, I have decided to keep my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer DHoW, Thank you so much for providing your comments on our rebuttal! We are glad to see that we have addressed some of your concerns.
In case you have any additional concerns related to the generalizability of the proposed FOL model, we would like to inform you that we have conducted new experiments to validate the proposed model and compare it to the state-of-the-art DENSE, Co-Boosting, and FedAvg on a new dataset, SVHN, for which the results are shown in the table below. It can be observed that FOL-A consistently achieves the highest accuracy, and FOL (E=3) surpasses the best baseline (in this case Co-Boosting) by up to 9.24\% on the SVHN dataset. The proposed FOL model has now been validated and compared with DENSE and Co-Boosting over 5 datasets: Hurricane, Wildfire, CIFAR-10, CIFAR-100, and the newly added SVHN. We hope these new experimental results better demonstrate the generalizability of the proposed FOL model and therefore adequately address any concern related to the model's generalizability. If you have any additional concerns or suggestions, we will be more than happy to further address/accommodate them. Thank you!

**Table: Test accuracies (%) on SVHN, $\psi = 0.5$, reported as mean ± std.**

**Dataset:** SVHN **Satellite #:** 21

| Method | Accuracy (%) |
|------------------|--------------------|
| Local | 78.97 ± 1.75 |
| FOL-A (E=1) | 85.73 ± 1.63 |
| FOL-A (E=2) | 86.26 ± 1.28 |
| FOL-A (E=3) | **88.37 ± 0.92** |
| FOL (E=1) | 81.09 ± 1.54 |
| FOL (E=2) | 81.62 ± 1.36 |
| FOL (E=3) | 82.85 ± 1.18 |
| FOL-AN (E=1) | 79.62 ± 1.64 |
| FOL-AN (E=2) | 80.92 ± 1.41 |
| FOL-AN (E=3) | 83.15 ± 1.33 |
| FOL-N (E=1) | 80.04 ± 1.74 |
| FOL-N (E=2) | 80.39 ± 1.39 |
| FOL-N (E=3) | 81.15 ± 1.22 |
| DENSE | 69.53 ± 1.57 |
| Co-Boosting | 73.58 ± 1.48 |
| FedAvg (E=1) | 53.08 ± 2.32 |
| FedAvg (E=2) | 58.72 ± 1.79 |
| FedAvg (E=3) | 55.49 ± 1.93 |
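The logit-level ensemble clarified in point 1 of the rebuttal above, $\mathbf{G} = \sum_i w_i \mathbf{g}_i$, can be sketched in a few lines of numpy. The weights below are toy values, whereas FOL optimizes them:

```python
import numpy as np

rng = np.random.default_rng(1)

K, N = 3, 10                      # K base models, N classes
logits = rng.normal(size=(K, N))  # g_i: the logit vector of base model i
w = np.array([0.5, 0.3, 0.2])     # ensemble weights (toy values)

G = (w[:, None] * logits).sum(axis=0)  # weighted sum over the K models

# The teacher's output keeps a single model's dimensionality: it is the
# parameter count, not the logit dimension, that grows K-fold.
print(G.shape)  # (10,)
```

This makes the rebuttal's point concrete: "K times larger" refers to the storage of the K base models backing the teacher, while the ensembled prediction `G` has the same shape as any single model's output.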
Summary: This paper proposes Federated Oriented Learning, a novel framework for One-Shot Personalized Federated Learning designed for environments with constrained or infrequent communication or limited contact windows. The authors further provide two theoretical guarantees on the empirical risk discrepancy between student and teacher models and the convergence of the distillation process. Claims And Evidence: Most claims are supported, with real-world and benchmark datasets and theoretical contributions. Methods And Evaluation Criteria: The multi-stage pipeline is methodologically valid and appropriate for OPFL. Non-IID data is considered with different levels of heterogeneity. Theoretical Claims: The theoretical contribution includes (1) the risk discrepancy between student and teacher models with KL-divergence and (2) convergence of distillation under standard theoretical assumptions. Experimental Designs Or Analyses: The experimental settings are valid for OPFL, with class imbalance and natural non-IID split settings. Ablation studies demonstrate the contribution of each component. Heterogeneity and a real-world dataset are considered. I wonder if the evaluation is on 1–3 clients per setting, which might seem insufficient for personalization-focused FL research where client-wise variance matters. Supplementary Material: I checked Sections A and C. Relation To Broader Scientific Literature: This paper is closely connected to FL, PFL, and OFL, with model distillation and pruning. Integrating one-shot FL, pruning with alignment, and distillation for personalization seems novel. Essential References Not Discussed: Some multi-round methods could be mentioned here. Other Strengths And Weaknesses: Strength: The paper is well-written, and the experiments are carefully designed with theoretical analysis. Weaknesses: A figure showing the trend of convergence is desired here. Other Comments Or Suggestions: N/A Questions For Authors: How does FOL scale with increased client count or larger models?
Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: **1. Regarding Evaluation on 1–3 Clients per Setting.** **Response:** The accuracies of 6 additional clients are given in the following table. Similar performance trends to those of the 3 clients shown in our original paper can be observed in this table (i.e., the 3 clients shown in our original table are indeed representative). Together with these 6 additional clients, we have shown the performance of 9 clients. We hope this has addressed the concern of the reviewer.

**Table:** Test accuracies (%) for six additional clients on the *Hurricane* dataset with $\psi$ = 0.7.

| **Dataset** | | | **Hurricane** | | | |
|-----------------|---------------|----------|---------------|----------|---------------|----------|
| **Satellite #** | **41** | **3** | **9** | **22** | **56** | **51** |
| **Methods** | | | **$\psi = 0.7$** | | | |
| Local | 90.45% | 82.35% | 88.63% | 90.67% | 86.01% | 91.18% |
| FOL-A (E=1) | 94.27% | 91.18% | 92.73% | 93.10% | 93.87% | 96.57% |
| FOL-A (E=2) | 95.54% | 94.12% | 93.64% | 93.68% | 95.16% | 97.06% |
| FOL-A (E=3) | 96.18% | 96.06% | 94.09% | 95.40% | 95.74% | 97.55% |
| FOL (E=1) | 93.11% | 85.29% | 90.02% | 91.95% | 89.81% | 93.63% |
| FOL (E=2) | 93.63% | 91.33% | 91.82% | 92.53% | 90.07% | 94.12% |
| FOL (E=3) | 94.27% | 93.04% | 92.27% | 94.25% | 91.92% | 95.59% |
| DENSE | 70.02% | 67.35% | 68.13% | 71.31% | 69.57% | 70.16% |
| Co-Boosting | 74.61% | 69.16% | 72.51% | 73.63% | 75.21% | 74.47% |

**2. Suggestion to Include a Convergence Trend Figure.** **Response:** Because the rebuttal policy of ICML does not allow us to upload figures as part of our rebuttal, we present a representative trace of the loss value from the training of the distilled FOL model in the following table. The convergence trend of the training can be clearly observed in the table. We will present the same trace as a figure in our final paper, as suggested by the reviewer. **Table:** FOL distillation training loss vs.
epoch id for one of the satellites over the Hurricane dataset.

| **Epoch** | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|-------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
| **Training Loss** | 0.135 | 0.1001 | 0.0818 | 0.1109 | 0.097 | 0.0968 | 0.0839 | 0.0859 | 0.0891 |

**3. Concerning the Scalability of FOL.** **Response:** Our proposed FOL framework is designed to scale efficiently. The storage, computation, and communication costs do not change with the number of clients; these costs increase proportionally with the size of the local model.
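Several rebuttals above mention partitioning datasets across clients with a Dirichlet distribution to simulate non-IID conditions. A minimal sketch of this standard procedure follows, with toy sizes; smaller concentration $\psi$ yields more skewed class proportions per client. This is not the authors' exact code:

```python
import numpy as np

rng = np.random.default_rng(0)

num_clients, num_classes, psi = 5, 10, 0.5
labels = rng.integers(0, num_classes, size=2000)  # toy label array

client_idx = [[] for _ in range(num_clients)]
for c in range(num_classes):
    idx = np.flatnonzero(labels == c)
    rng.shuffle(idx)
    # Class c's samples are split among clients in Dirichlet(psi) proportions;
    # smaller psi concentrates each class on fewer clients (more non-IID).
    props = rng.dirichlet(psi * np.ones(num_clients))
    cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
    for k, part in enumerate(np.split(idx, cuts)):
        client_idx[k].extend(part.tolist())

# Every sample is assigned to exactly one client.
assert sum(len(s) for s in client_idx) == len(labels)
```

Sweeping `psi` (e.g., 0.1 to 0.7, as in the rebuttal) varies the degree of heterogeneity while keeping the total sample count fixed.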
LongRoPE2: Near-Lossless LLM Context Window Scaling
Accept (poster)
Summary: This paper mainly introduces LongRoPE2, aiming to achieve an effective long context window while preserving short-context performance during context extension. Building on LongRoPE, LongRoPE2 introduces a new needle-PPL-guided evolutionary search method for determining the rescaling factors, and shows experimentally that it is more effective than the naive PPL-guided one. To retain performance on short contexts, LongRoPE2 proposes a novel mixed context window training method. Compared to YaRN, NTK, and LongRoPE, LongRoPE2 achieves better performance on long contexts while retaining over 98.5% of short-context performance. --- ## update after rebuttal Thanks to the authors for their response, and I acknowledge that LongRoPE2 is a strong and valuable work. However, there are still some unclear aspects in the paper, such as the lack of a detailed explanation regarding how Figure 3(a) was derived. Due to these unresolved concerns, I have decided not to adjust my initial score. In my view, a score of 3 (borderline accept) remains reasonable and justified. Claims And Evidence: yes Methods And Evaluation Criteria: YES Theoretical Claims: yes This paper proposes a new RoPE OOD hypothesis: the empirical RoPE periods in higher dimensions are longer than the theoretical values, limiting current methods' ability to fully address RoPE OOD. This implies that the actual optimal rescaling factors may be greater than the theoretical ones. Then, by applying needle-PPL-guided search, LongRoPE2 does obtain rescaling factors larger than the theoretical ones and performs better, which from this point of view tests the hypothesis. Experimental Designs Or Analyses: Yes. In 4.2, this paper presents results on RULER, NIAH, and LOFT for LongRoPE2-extended models, comparing with other SOTA RoPE rescaling methods (YaRN, NTK, LongRoPE) and showing their effectiveness. In 4.3, to validate the effectiveness of the real critical dimension, one experiment applies $d_{rcd}$ to YaRN and NTK, which are also improved.
To validate the effectiveness of the needle-PPL-guided search, one experiment compares it with the naive one under the same training process on the same test dataset. Finally, the effectiveness of mixed context window training is also validated here. Supplementary Material: Yes. Appendix Relation To Broader Scientific Literature: Some scaling methods before LongRoPE (PI, YaRN, NTK) ignore the actual errors caused by the different parameters of models after training. This issue was preliminarily addressed in LongRoPE. LongRoPE2 is based on LongRoPE and is more effective. Essential References Not Discussed: No Other Strengths And Weaknesses: This paper is clearly presented. The most enlightening contribution may be the new RoPE OOD hypothesis (in 3.1), as it gives directions for optimization of other methods not limited to this paper. Other Comments Or Suggestions: There is one typo on page 8, subtitle: need-PPL should be needle-PPL Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful review and for recognizing our contributions. We greatly appreciate your acknowledgment of our New RoPE OOD Hypothesis and the role of needle-PPL-guided search in validating this hypothesis through empirical results. We are also glad that you found our extensive experiments in Sections 4.2 and 4.3 valuable in demonstrating the effectiveness of LongRoPE2 and our key design choices, such as the real critical dimension and mixed context window training. Please let us know if there are any specific aspects where we can provide more details. Thank you again for your time and constructive evaluation! --- Rebuttal Comment 1.1: Comment: Thanks for your response. I have no more questions and will keep my score.
Summary: This paper proposed LongRoPE2, a RoPE scaling method to extend the context window of LLMs. The primary extension compared to LongRoPE1 is that LongRoPE2 utilizes a needle-based search rather than a perplexity-based one for rescaling the various RoPE dimensions. The experimental results demonstrate the superior performance of LongRoPE2 compared to other RoPE scaling methods. Claims And Evidence: 1. The main overclaim is "LongRoPE2-extended LLaMA3-8B-128k surpasses Meta's LLaMA3.1-8B-128k in long-context performance with 80x fewer training tokens". This claim is supported by the RULER results in Fig 1. However, LongRoPE2 adopts a needle-based search for RoPE scaling, which may (over)fit the synthetic tasks in the RULER benchmark, hence achieving better results. For other general tasks such as En.MC in InfiniteBench, **LongRoPE2-LLaMA3-8B achieved a score of 46.72 but LLaMA-3.1-8B achieved 65.1**. This means LongRoPE2-LLaMA3-8B may still have a large gap to LLaMA-3.1-8B, which involves far more training tokens. Note that it is not necessary for LongRoPE2-LLaMA3-8B to surpass LLaMA-3.1-8B, but the claim should be revised for clarity. 2. The mixed context window training is adopted in [1] and LLaMA-3.1 (and is a common practice in the long-context LLM community) to maintain short-context performance, but the authors claim they propose such a "novel" strategy. [1] LongAlign: A Recipe for Long Context Alignment of Large Language Models Methods And Evaluation Criteria: The evaluation benchmarks are popular in the long-context understanding field. Theoretical Claims: There are no proofs for theoretical claims. Experimental Designs Or Analyses: I have gone through the ablation studies, which have demonstrated the effectiveness of LongRoPE2's designs. Supplementary Material: There are not many supplementary materials. I have gone through the Additional Experiments and Analysis section to obtain the training and evaluation details for reproduction.
Relation To Broader Scientific Literature: This paper is a direct extension of LongRoPE [1]. [1] LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens Essential References Not Discussed: This paper presents a mixed context window training strategy as one of its primary contributions. However, such strategies are widely used in previous works such as LongAlign [1], which is missing from the references. [1] LongAlign: A Recipe for Long Context Alignment of Large Language Models Other Strengths And Weaknesses: Strength: 1. Generally the experiments covering long and short contexts are well-designed and can demonstrate the effectiveness of the proposed method. 2. The intuition regarding the insufficient training of high-frequency RoPE makes sense. 3. I believe the needle-driven PPL search is a better choice for LongRoPE, as pure PPL on normal documents may be orthogonal to long-context performance. Weaknesses: 1. I feel most designs in this work have been proposed/adopted in previous works. For example, the intuition that high-frequency RoPE may be insufficiently trained was introduced in a popular blog (https://spaces.ac.cn/archives/9706) on RoPE scaling. The mixed context window training is adopted in [1] and LLaMA-3.1 to maintain short-context performance. It may be improper to regard these points as this work's contributions; credit should be given to the related works. [1] LongAlign: A Recipe for Long Context Alignment of Large Language Models Other Comments Or Suggestions: The related work section would be better placed in the main body of the paper to make it self-contained. (This is a suggestion and wouldn't affect my rating.) Questions For Authors: 1. Do you think it would be better to list the results of LLaMA-3.1-8B in the main table? Since you claimed the proposed method with 10B tokens can surpass LLaMA-3.1-8B's continual training on 800B tokens. 2. What are the criteria for you to choose the evaluation tasks? 
It seems there are some challenging tasks such as En.QA, En.Sum, etc. in InfiniteBench, and other tasks of various categories in LongBench. The selection of tasks seems irregular. 3. The claims regarding the intuition of insufficient high-frequency RoPE training and the mixed context window training should be revised. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: >Q1: Clarification on LLaMA3.1-8B long-context evaluation numbers, and the "overclaim" comments **Response**: We appreciate your feedback and would like to clarify the following points: 1. **65.1 is the En.MC score of the instruct version, not LLaMA3.1-8B.**: As noted in Table 2 of the LLaMA3.1 tech report, the 65.1 score is for the instruct-tuned version (a detail that can be overlooked). Compared to the fair baseline, LLaMA3.1-8B, our model achieves a higher score (**46.72** vs. **45.85**) on En. MC. Moreover, our model consistently outperforms LLaMA3.1-8B across several long-context benchmarks. Here are additional results: >InfiniteBench and LongBench: ||avg.|En.MC|En.Sum| KV retrieval | TriviaQA | TREC | LCC | RepoBench-P| |:--: |:--: |:--: |:--: |:--: |:--: |:--: |:--: |:--: | |LLaMA3.1-8B|54.28| 45.85| 15.27|16.20| **91.13**| 73.50| 70.24| **67.83**| |LongRoPE2-LLaMA3-8B|**65.20**|**46.72**| **16.20**|**88.0** |**91.13**| **76.50**| **70.47**| 67.39 | >LOFT: ||avg.| ArguAna | FEVER | HotPotQA | MS MARCO | NQ | Quora | SciFact| | :--: | :--: | :--: | :--: |:--: |:--: |:--: |:--: |:--: | |LLaMA3.1-8B| 53.14|19.0| 90.0| 12.0| 69.0| 78.0| 61.0|43.0| |LongRoPE2-LLaMA3-8B| **74.28**|**28.0** | **96.0** | **70.0** | **80.0**| **94.0**| **79.0** | **73.0**| 2. **Our RoPE scaling method is designed to improve broad long-context capabilities, not to optimize for any specific benchmark like RULER.** The use of needle data for search is **not** designed to fit RULER but to **better control long-range token dependency distances** in long documents. E.g., We used only the simplest number needle synthesis method. Our extensive experiments proved the superiority over other methods (e.g., NTK, YaRN) across diverse benchmarks. >Q2: What are the criteria for you to choose the evaluation tasks? **Response**: Our selection follows two key principles: 1) **Effectiveness for evaluating a pre-trained LLM rather than a chat LLM**. 
Since our method extends a pre-trained LLM without post-training, we prioritize tasks aligned with this setup: (i) completion-based tasks, such as few-shot learning, code completion, and En.MC in InfiniteBench; (ii) QA tasks with few-shot examples, such as various text-retrieval QA tasks in LOFT. 2) **Comprehensive long-context evaluation**. To evaluate multiple aspects of long-context performance, we include tasks covering RULER, needle-in-a-haystack retrieval, real-world text QA, high-difficulty KV retrieval, multi-choice QA, few-shot learning, and code completion, as detailed in our evaluation section. We believe this selection fairly reflects the strengths of our method and provides a well-rounded assessment. *Additional results on chat-based sub-tasks*. For your reference, we provide additional results on chat-based LongBench tasks. As shown below, we achieve the highest average score, even surpassing LLaMA3.1-8B. ||Avg.|narrativeqa| Qasper | multifieldQA | hotpotqa | 2wikimqa | musique | gov_report | qmsum| samsum| | :--: | :--: | :--: | :--: |:--: |:--: |:--: |:--: |:--: |:--: |:--: | |LLaMA3.1-8B|22.60| 20.90|12.50|32.72|11.95| 13.98| **8.62**| 29.95|**25.53**|**47.23**| |NTK-LLaMA3-8B|20.32|21.14|11.93|29.02|11.91|14.71|7.81|21.50|22.09|42.70| |LongRoPE2-LLaMA3-8B|**24.31**|**21.79**|**18.13**|**36.25**|**13.85**|**19.42**|8.03|**30.12**|25.41|45.80| >Q3: The clarifications on the main contributions and claims: **Response**: We are grateful for your questions and would like to clarify that our main contribution is not the discovery of insufficient training in high-frequency RoPE, but rather the introduction of **a new RoPE OOD hypothesis**. This hypothesis explains why existing RoPE rescaling methods, such as NTK and YaRN, often result in suboptimal long-context performance. This contribution has been acknowledged by the other two reviewers. 
Regarding the mixed context window training, we would like to emphasize that the key difference between our approach and those used in LLaMA3.1 and LongAlign is the use of two distinct RoPE scaling factors: a short factor for short contexts and a long factor for long contexts. This **dual-factor** approach is essential in significantly recovering short-context performance, which we have shown through extensive experiments. Here, we perform an additional comparison with LLaMA-3.1’s mixed training. As shown below, we significantly improve short-text performance. ||(short)-MMLU|(short)-MMLU pro| (short)-GSM8k | Ruler-128k| | :--: | :--: | :--: | :--: | :--: | |LLaMA3-8B (**our mixed context window training**)|**65.01**|**34.61**|**50.80**|**82.03**| |LLaMA3-8B (mixed training in LLaMA3.1)| 64.18|32.95 |46.25 |71.83 | We hope these responses address your concerns and clarify any confusion, and we will incorporate them in the revisions. Thank you again for your valuable feedback and suggestions, and we kindly ask you to consider re-evaluating our work. --- Rebuttal Comment 1.1: Comment: 1. Thanks for addressing my confusion; I now believe in the effectiveness compared to LLaMA-3.1. But I feel InfiniteBench is quite unstable, i.e., the performance before and after instruction tuning can be quite distinct, even though there are no requirements to follow specific formats. I hope to see more results on newer benchmarks such as LongBench v2 in the future. 2. The mixed context window training still seems similar to previous works. The only difference is adjusting the RoPE factor for different lengths, which is also used in a previous work [1]. I feel it is better framed as a training strategy inspired by previous works rather than a main contribution of this paper. [1] CLEX: Continuous Length Extrapolation for Large Language Models Other concerns have been addressed. Now I can raise my score to 3. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful follow-up. 
We're glad our response clarified the effectiveness of our method compared to LLaMA-3.1. We appreciate your suggestions. We agree that evaluating on newer benchmarks, such as LongBench v2, would be valuable and will consider this in future work. Regarding mixed context window training, we will refine the discussion in our revision to clarify its position more clearly. Thanks again for your valuable feedback!
Summary: Maintaining performance on both long and short benchmarks is a critical challenge for existing long-context extension methods. LongRoPE2 is a new approach that extends the effective context window of pre-trained large language models to the target length while preserving performance on the original shorter context window. Claims And Evidence: Claims: LongRoPE2 extends context windows to 128k while retaining >97% short-context performance. The key contributions are (1) higher RoPE dimensions are undertrained, (2) evolutionary search for rescaling factors guided by needle-driven perplexity, (3) mixed training with original/rescaled RoPE. Evidence: Achieves strong results on RULER and real-world benchmarks (LOFT, LongBench). Outperforms YaRN, NTK, and LongRoPE with far fewer tokens. Methods And Evaluation Criteria: Methods: Evolutionary search for critical dimensions and scaling factors, mixed training (original RoPE for short contexts, rescaled RoPE for long). Evaluation: Benchmarked on RULER, Needle-in-a-Haystack (retrieval), LOFT/InfiniteBench (real-world), and MMLU/GSM8K (short-context). Theoretical Claims: Challenges prior RoPE OOD theory: insufficient training in higher dimensions extends empirical periods, requiring larger scaling factors than the theoretical bounds. To be honest, I did not carefully check the correctness of all theoretical claims in this paper. Experimental Designs Or Analyses: Ablations confirm needle-PPL’s superiority over standard PPL and the necessity of mixed training. Adjusted baselines (YaRN-red/NTK-red) show improved but suboptimal performance. Supplementary Material: No Relation To Broader Scientific Literature: Builds on RoPE rescaling (NTK, YaRN) and evolutionary optimization. Different from RAG/agent-based methods; LongRoPE2 is positioned as complementary to them. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: Efficient scaling (10B tokens), minimal short-context degradation. 
Weaknesses: Evolutionary search computational cost; inference requires KV cache recalculation. Other Comments Or Suggestions: N/A Questions For Authors: 1. How does evolutionary search scale to million-token contexts? 2. Does mixed training cause interference between short/long contexts? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Response**: Thank you for your valuable feedback and for recognizing the strengths of our work. We appreciate the opportunity to address your concerns. 1) **Affordable evolutionary search computational cost**: we acknowledge that evolutionary search introduces additional costs. To further clarify its feasibility, we conducted additional experiments to evaluate the search cost when scaling from 128k (the current context window length) to 1024k. Using vLLM 0.7.3 as the inference engine and running on an 8×A100 (80GB) server setup, we measured the total search time. As shown in the table below, even when scaling to 1M tokens, the total search time remains manageable at 240 hours (10 days). Moreover, this is a one-time *offline* process, and the search time can be reduced linearly by increasing the number of GPUs, due to the nature of evolutionary search. Therefore, it is practical for LLM pretraining teams. ||128k|512k| 1024k| |:--: |:--: |:--: |:--: | |total search time on 8×80GB A100| 7.5h | 68h | 240h | 2) **KV cache recalculation occurs only in specific cases and has minimal overhead**: We acknowledge that KV cache recomputation is required when transitioning from the short context window (using the short factor, i.e., original RoPE) to the long context window (using the long factor). However, this recomputation does not occur in every inference. It happens only when the input length is within the short context window, but the total length (input + generated tokens) exceeds it **for the first time**. After this one-time recomputation, no further recomputation is needed for the rest of the generation. In most general inference scenarios, this situation is relatively uncommon, as prompts and completions typically either remain within the short context window or start in long-context mode from the beginning. 
To quantify the cost, we measured KV recomputation time on a 4×80GB A100 GPU (with vLLM 0.7.3) for Phi-3-mini and LLaMA3-8B, comparing it against normal decoding time: ||prefill (KV recompute)|decode (output 512)|decode (output 1k)|decode (output 2k)|decode (output 4k)|decode (output 8k)|decode (output 16k)| |:--: |:--: |:--: |:--: |:--: |:--: |:--: |:--: | |Phi3-mini (prefill 2k)| 124.1ms| 7.63ms (**16.2**)| 7.66ms (**16.2**)| 7.71ms (**16.1**)| 7.78ms (**15.9**)| 14.29ms (**8.7**)| 23.3ms (**5.3**) | |LLaMA3-8B (prefill 8k)|613.9ms | 24.11ms (**25.5**)| 24.22ms (**25.3**)| 24.05ms (**25.5**)| 24.18ms (**25.4**) | 23.5ms (**26.1**) | 23.58ms (**26.0**) | The numbers in () indicate the number of decoded tokens whose generation time equals the time spent on KV cache recomputation. These results indicate that the additional recomputation cost is equivalent to generating only ~15 (Phi3-mini) and ~25 (LLaMA3-8B) tokens, which is negligible in the context of long-context generation. >Q2: Does mixed training cause interference between short/long contexts? **Response**: Thank you for your insightful question. While it is true that mixed context window training applies two RoPE scaling factors simultaneously during mid-training, which could introduce interference, our empirical results suggest that this "interference" plays a constructive role and hence does not degrade performance. In fact, it not only recovers short-context performance but also enhances long-context performance. To better illustrate this, we refer to Table 7 from our original paper. 
||MMLU-(short)|MMLUPro-(short)|GSM8K-(short)| RULER-4k| RULER-8k| RULER-16k | RULER-32k|RULER-64k|RULER-128k| |:--: |:--: |:--: |:--: |:--: |:--: |:--: |:--: |:--: |:--: | |Phi3-mini (**with mixed context window training**)| **70.07**|**40.30**|**73.62**|90.41|**86.87**|**83.33**|**76.51**|**65.37**|**58.81**| |Phi3-mini (no mixed context window training)|66.56|34.86|64.67|**90.55**|85.77|81.08|73.31|63.75|56.22| |||||||||| |LLaMA3-8B (**with mixed context window training**)|**65.01**|**34.61**|**50.80**|94.61|**93.68**|**92.31**|**90.49**|**85.62**|**82.03**| |LLaMA3-8B (no mixed context window training)| 64.57| 33.83| 48.37| **94.67**|93.15|91.24|89.38|83.53|80.18| A possible explanation for this surprising improvement is that the so-called “interference” actually plays a constructive role in learning. Specifically, the short-context window helps preserve position modeling for non-interpolated positions (e.g., LLaMA3’s native positions 0, 1, 2, ..., 8191), while the long-context window primarily facilitates the adaptation for newly interpolated positions (e.g., LLaMA3’s new positions like 1/16, 2/16, ..., 17/16). This training strategy effectively constrains the model’s adaptation to interpolated positions while maintaining consistency with the original position modeling - a concept similar to the KL divergence constraint in PPO, which prevents large deviation from the original policy model. We appreciate this insightful question, which has prompted further reflection and discussions.
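As a rough illustration of the dual-factor mechanism discussed in this thread, here is a minimal, hypothetical sketch. The function names and the uniform `long_rescale` vector are ours for illustration only; LongRoPE2's actual long factors are per-dimension values found by the needle-PPL-guided evolutionary search, and switching between the two frequency sets mid-generation is what triggers the one-time KV cache recomputation described above.

```python
def rope_inv_freqs(dim, base=10000.0, rescale=None):
    """Angular frequency for each RoPE dimension pair; dividing by a
    rescale factor > 1 slows rotation, i.e. interpolates positions."""
    n = dim // 2
    freqs = [base ** (-2.0 * i / dim) for i in range(n)]
    if rescale is not None:
        freqs = [f / r for f, r in zip(freqs, rescale)]
    return freqs

def pick_freqs(seq_len, native_window, dim, long_rescale):
    """Dual-factor selection: original RoPE (the 'short factor') inside the
    native context window, searched rescaling (the 'long factor') beyond it."""
    if seq_len <= native_window:
        return rope_inv_freqs(dim)                 # short factor: no rescaling
    return rope_inv_freqs(dim, rescale=long_rescale)
```

Under this sketch, short inputs see exactly the pretrained position encoding, while long inputs see slowed-down (interpolated) rotations, which matches the short-/long-factor split described in the response.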
Better to Teach than to Give: Domain Generalized Semantic Segmentation via Agent Queries with Diffusion Model Guidance
Accept (spotlight poster)
Summary: This paper proposes QueryDiff, an agent query-driven learning framework based on diffusion model guidance for DGSS, which utilizes the scene distribution priors embedded in diffusion models to enhance semantic segmentation generalization. Various experiments show the model’s effectiveness, reaching state-of-the-art performance on many benchmarks. # update after rebuttal Thanks to the authors for their detailed response. My questions have been answered and I will maintain my positive score. Claims And Evidence: Yes. Methods And Evaluation Criteria: The evaluation makes sense to me. Theoretical Claims: The theoretical claims look good to me. Experimental Designs Or Analyses: The experimental designs and analyses (for the ablation study) look good to me. Please see the weaknesses for other issues. Supplementary Material: Yes, I reviewed each part of the supplementary material. Relation To Broader Scientific Literature: The paper explores how to leverage the scene distribution embedded in diffusion models to enhance segmentation generalization. This goes beyond using diffusion models to generate data in a more direct way - utilizing their knowledge, which is more direct and helpful for generalization. Essential References Not Discussed: None Other Strengths And Weaknesses: **Strengths** - This paper proposes a novel application of diffusion models, leveraging their prior embedded distribution to enhance segmentation generalization rather than simply using them for data generation. The approach is both conceptually elegant and methodologically concise. - The comprehensive results (both qualitative and quantitative) show the effectiveness of the method. The qualitative results on different stylistic domains also further strengthen the soundness of the method. **Weaknesses** - While the paper presents extensive ablation studies, the isolated effect of $L_{dist}$ is not examined. 
An experiment evaluating AQ + $L_{dist}$ (without $L_{sup}$) should be included in Table 3. - There is limited discussion of the computational cost and inference time compared to previous methods, especially when considering added components like diffusion feature extraction. - The ablation study of timesteps is not sufficient. The timestep selection (0, 50, 100) in the ablation study lacks justification. - It would be helpful to provide some intuitive interpretation of what the agent learns from the diffusion features (like visualizations of the learned features and the mitigated visual features). Other Comments Or Suggestions: None Questions For Authors: - The class-wise results indicate varying degrees of improvement across different classes, with relatively large variation. Is there an interpretation for this difference? Could the visual features be influencing the results? - How were the timesteps (0, 50, 100) selected? - Please see the weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q**: It is suggested that an ablation experiment of AQ + $L_{dist}$ (without $L_{sup}$) be included in Table 3. **A**: The purpose of $L_{dist}$ is to suppress the visual texture details within the matrix $S_j^{t_w}$. Subsequently, $S_j^{t_w}$ is utilized in Equation (10) to derive optimized queries $q_{opt}$. These optimized queries $q_{opt}$ are then employed to supervise the agent queries through $L_{sup}$ in Equation (11), encouraging the agent queries to proactively capture the semantic distribution of the scene and eliminating the need for diffusion guidance during inference. Therefore, $L_{dist}$ fundamentally enhances the supervision provided by $L_{sup}$. As a result, applying AQ + $L_{dist}$ alone—without $L_{sup}$—would be functionally equivalent to the AQ-only setting, which is already included in our ablation study. **Q**: There is inadequate discussion of computational cost and inference time. **A**: We have provided a comprehensive comparison of computational cost and inference speed between our method and representative previous approaches under the same experimental setting, using ConvNext as the backbone. Although diffusion-based feature extraction is used during training, it is discarded at inference and does not affect inference speed, as noted at the end of Section 4.3. Our method achieves inference speed comparable to strong baseline models like Rein, with only minor differences in training time, GPU memory usage, and storage requirements. Compared with CLOUDS—which also leverages diffusion models—our method is clearly more efficient in terms of trainable parameters, training time, GPU memory consumption, and storage size. 
|Method|Generated Image|Trainable Params|Training Time|GPU Memory|Storage|Inference Time| |-|-|-|-|-|-|-| |CLOUDS|5000|40.54M|19 h|4*RTX4090|4.3G|127.87 ms| |Rein|--|28.56M (8.29M+20.27M)|5 h|1*RTX4090 (11616M)|1.20G|126.59 ms| |Ours|--|31.30M (11.03M+20.27M)|8 h|1*RTX4090 (16091M)|1.21G| 126.81 ms| **Q**: The timestep ablation is insufficient, and a clear justification for the selected values is requested. **A**: Different timesteps correspond to varying levels of noise intensity, with larger timesteps introducing stronger noise. Prior studies [1,2] suggest that effective timestep values typically fall within the range of 0 to 200, and that adjacent timesteps often yield highly similar diffusion features. Based on these works, we selected intervals of 50 or 100 within the range of 0 to 200 to ensure sufficient diversity in feature representations. Through empirical comparisons, we ultimately chose timesteps 0 and 100 to represent weak and strong noise conditions, respectively. To further support this choice, we have included additional ablation experiments, as shown in the table below. |$t_w$|$t_s$|C|B|M|Avg.| |-|-|-|-|-|-| |200|300|67.7|61.7|67.2|65.5| |300|400|67.0|61.2|66.7|64.9| |0|25|67.9|61.8|67.8|65.8| |25|50|68.2|61.8|68.1|66.0| |75|100|68.3|62.0|68.2|66.1| [1] Xu, Jiarui, et al. Open-vocabulary panoptic segmentation with text-to-image diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2955–2966, 2023. [2] Baranchuk, Dmitry, et al. Label-efficient semantic segmentation with diffusion models. In International Conference on Learning Representations, 2022. **Q**: It is suggested that intuitive visualizations be provided to better illustrate what the agent learns from the diffusion features. **A**: We conducted clustering analysis on both the original visual features and those refined via agent queries to intuitively illustrate the changes. 
As shown in the linked visualization, the optimized features exhibit a more coherent semantic distribution within the scene and improved spatial coverage of target objects. https://anonymous.4open.science/r/vis_feature-215E **Q**: An interpretation is requested for the class-wise performance variation. **A**: Thank you for the insightful question. As shown in Figure 3, the most notable improvements occur for rare classes (e.g., train, traffic sign, truck) and semantically similar categories (e.g., person and rider). This is mainly due to our method's explicit focus on learning the semantic distribution of scenes and modeling inter-instance relationships. Specifically, this structured modeling enables our method to effectively capture contextual dependencies, thus better distinguishing visually or semantically ambiguous classes. Moreover, rare classes typically suffer from limited training examples, making it difficult for standard models to learn discriminative features. By explicitly encoding scene semantic distribution, our method provides additional contextual information that compensates for limited visual evidence, significantly improving recognition of underrepresented categories. Thus, our method is particularly effective in improving recognition accuracy for both semantically overlapping categories and infrequent object classes.
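As background on why the timesteps $t_w$ and $t_s$ act as noise-intensity knobs (larger timestep, stronger noise), the standard DDPM forward process can be sketched as follows. A generic linear beta schedule is assumed here for illustration; it is not necessarily the exact schedule used by the authors.

```python
import math

def noise_mix(t, T=1000, beta0=1e-4, beta1=0.02):
    """Signal/noise weights of x_t = sqrt(a_bar)*x_0 + sqrt(1-a_bar)*eps
    under a linear beta schedule; larger t means stronger noise."""
    alpha_bar = 1.0
    for i in range(t + 1):
        beta = beta0 + (beta1 - beta0) * i / (T - 1)
        alpha_bar *= 1.0 - beta
    return math.sqrt(alpha_bar), math.sqrt(1.0 - alpha_bar)

# Weak vs. strong noise, mirroring the chosen t_w = 0 / t_s = 100 setting:
signal_w, noise_w = noise_mix(0)    # almost pure signal
signal_s, noise_s = noise_mix(100)  # noticeably noisier
```

This makes concrete why timesteps in the 0-200 range correspond to mild perturbations: even at t = 100 the signal weight stays dominant, so the diffusion features at the two timesteps remain related but differently noised.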
Summary: This paper proposes an agent query-driven learning framework based on diffusion model guidance for DGSS. The method employs agent queries to learn scene distribution knowledge from the diffusion model, capitalizing on the inherent consistency of this distribution across domains to improve segmentation model generalization. A diffusion consistency loss (DCL) is proposed to prevent intricate visual details in diffusion features from interfering with agent queries, enabling them to focus on learning the semantic distribution of the scene. Experiments have proven the effectiveness of the method. ## update after rebuttal I appreciate the response provided in the rebuttal, which addresses some of my concerns, and I maintain the score. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: I checked the proofs for theoretical claims and found no issues. Experimental Designs Or Analyses: 1. In the introduction, page 2, line 101 introduces the first challenge: previous methods are computationally expensive and time-consuming, making them inefficient for perception tasks. However, this has not been verified. Supplementary Material: I reviewed all parts of the supplementary material. Relation To Broader Scientific Literature: Diffusion models have opened up new avenues for DGSS with remarkable capabilities in capturing complex scene distributions to generate high-quality, realistic samples. Diffusion models can generate diverse samples to enhance segmentation generalization by learning the data distribution. In this paper, the diffusion model's scene distribution prior is used to enhance the generalization ability of the segmentation model instead of directly generating data. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The idea of using the diffusion model to improve generalization is novel. It is interesting to use agent queries to learn scene distribution knowledge from the diffusion model. 2. 
A large number of experiments verify the effectiveness of the method. Weaknesses: 1. In the introduction, page 2, line 101 introduces the first challenge: previous methods are computationally expensive and time-consuming, making them inefficient for perception tasks. However, this has not been verified. The proposed method should be compared experimentally with previous methods in terms of computational complexity, time consumption, and memory usage. 2. In Formula 5, it is stated: "We merge the smaller groups into larger ones based on the similarity matrix in the embedding space". Please explain in detail how the merge is implemented. 3. In Formula 10, how is the instance semantic representation in the agent queries optimized, given that it is just a multiplication? Please explain its mechanism in detail. 4. More explanation is needed for the loss in Formula 11, including the loss form of dual segmentation and the significance of being supervised by agent queries. 5. Will the code be publicly available? Other Comments Or Suggestions: As above Questions For Authors: As above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q**: The proposed method should be compared with previous methods in terms of computational complexity, time consumption, and memory usage. **A**: Thank you for the suggestion. We have included a detailed comparison of computational resources between our proposed method and recent diffusion-based segmentation approaches. First, unlike methods such as CLOUDS and DGInStyle that rely on a two-stage pipeline—image generation followed by segmentation training—our method adopts a single-stage, end-to-end paradigm, eliminating intermediate steps and reducing overall training complexity. Second, as shown in the table below, our method incurs the lowest cost among diffusion-based methods in terms of trainable parameters, training time, and memory usage. |Method| Generated Image|Trainable Params|Training Time|GPU Memory|Storage| |-|-|-|-|-|-| |ODISE|--|28.1M|6 days| 32*V100|4.9G| |PTDiffSeg|--|--| 32 h|1*V100 (32GB)|--| |CLOUDS|5000| 40.54M|19 h| 4*RTX4090|4.3G| |DGInStyle|6000| 85.69M|29 h|1*A100 (26687M)|1.37G| |Ours|--| 26.31M (5.73M+20.58M)|10.3 h|1*RTX4090 (16738M)|1.24G| **Q**: A more detailed explanation is requested for the merging process in Formula 5. **A**: We introduce a set of smaller initial agent queries ${q} _{\text{init}} ^i$, which are first projected into query embeddings $Q$ using Equation (4). Meanwhile, the current-stage agent queries ${q} _{\text{stage}} ^{i-1}$ are similarly projected into key embeddings $K$ and value embeddings $V$. In Equation (5), we compute the attention map $s^i$ between $Q$ and $K$, capturing the similarity in the embedding space between the smaller query groups ${q} _{\text{init}} ^i$ and the larger groups ${q} _{\text{stage}} ^{i-1}$. This attention map $s^i$ is then utilized to aggregate the values $V$, generating a new set of merged agent queries $\hat{q} _{\text{stage}} ^{i}$ which share the same dimensionality and count as the smaller initial set ${q} _{\text{init}} ^i$. 
This merging process effectively reduces the number of agent queries while eliminating potential information overlap. As the merging process progresses across stages, the number of agent queries continuously decreases, leading to fewer but semantically richer and more comprehensive query groups. **Q**: A detailed explanation is requested of how instance-level semantic representation is optimized in Formula 10. **A**: Equation (10) is built upon the design and functionality of the matrix $S_j^{t_w}$, which plays a pivotal role in deriving optimized agent queries. Specifically, $S_j^{t_w}$, introduced in Equation (8), captures the associations between object features and agent queries. Subsequently, in Equation (9), a consistency loss is applied to explicitly suppress low-level visual texture details within $S_j^{t_w}$, encouraging it to emphasize the underlying semantic distribution of the scene. Finally, in Equation (10), the refined semantic matrix $S_j^{t_w}$ is used to reorganize feature map $f_{d}^{(t_s,j)}$ through a semantic-guided re-weighting process. This process results in optimized agent queries that more effectively encode the scene’s semantic distribution, leading to more robust scene understanding. **Q**: More explanation is needed for Formula 11, including the dual segmentation loss and agent query supervision. **A**: (1) In Equation (11), we adopt the Huber loss due to its robustness to outliers and its balanced sensitivity to errors. This choice is particularly well-suited to our setting, where the intermediate supervision signal is derived by leveraging optimized agent queries to guide the original learnable agent queries. Such supervision inherently involves uncertainty—especially around object boundaries or ambiguous regions—making it a form of soft or intermediate supervision. To effectively handle this uncertainty, a robust loss function is essential. 
Unlike a purely quadratic ($L_2$) loss, which can result in instability and overly aggressive updates, the Huber loss provides a smoother optimization landscape, facilitating more stable and efficient convergence in these cases. (2) In Equation (10), we derive optimized agent queries through a diffusion-guided aggregation process that suppresses low-level visual details and emphasizes higher-level structural semantics. We then leverage these optimized agent queries to supervise the original learnable agent queries, aiming to explicitly guide them to prioritize structural context over superficial visual details during their learning process. This strategy enables the agent queries to actively capture semantic distribution of the scene. As a result, the model no longer requires support from the diffusion model during inference, as the structural semantic knowledge has been effectively embedded into the agent queries through this supervised learning process. **Q**: Will the code be publicly available? **A**: We will release all code and pretrained models upon acceptance.
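To make the robustness argument concrete, here is the standard scalar Huber loss (delta = 1 is an assumed illustrative value; the paper's actual loss operates on query embeddings, not scalars):

```python
def huber(residual, delta=1.0):
    """Quadratic for small errors (smooth gradients near zero),
    linear for large ones (bounded influence of outliers)."""
    a = abs(residual)
    if a <= delta:
        return 0.5 * a * a
    return delta * (a - 0.5 * delta)

# Compared with a pure L2 loss, large residuals are penalized only linearly,
# so uncertain supervision targets cannot dominate the updates:
assert huber(0.5) == 0.125   # quadratic region, same as 0.5 * r^2
assert huber(3.0) == 2.5     # linear region; L2 would give 0.5 * 3^2 = 4.5
```

This bounded penalty on large residuals is what makes the loss tolerant of the boundary and ambiguous-region uncertainty in the intermediate supervision described above.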
Summary: The authors leverage refined features of diffusion models to stabilize the features of vision transformers and other backbones when feeding them into the Mask2Former decoders for semantic segmentation. In this way, the authors achieve considerable domain generalization capabilities for their network. In their method, the authors first construct layer-wise queries over the layers of the backbone, which are thereafter aggregated over the layers using a kind of progressive, layer-wise cross-attention mechanism. This ultimately leads to trainable "agent queries" which are thereafter transformed by an MLP trained for similarity with features of Stable Diffusion from the encoding space of the VAE encoder of Stable Diffusion. These features are the noise predictions from the Stable Diffusion model at a low and a high level of noise. The model is trained so that the "agent queries" remain stable under different noise levels, which enhances dependency on global structures and removes texture bias. The authors then test their network on two domain generalization (DG) benchmarks, namely GTA to Cityscapes, BDD and Mapillary, and the real-to-real benchmark Cityscapes to ACDC. The authors report decent DG performance and claim superiority to the present DG SOTA throughout their tests. The authors also provide extensive testing utilizing various backbones, from ResNet over Transformer to pretrained FM models. Also, the single modules of the proposed architecture are tested separately.

Update after rebuttal: Due to the results on the official CS->ACDC benchmark and the authors' announcement that they will publish their code, I am now convinced that QueryDiff is really a model that surpasses SOTA. I therefore will raise my score and favor publication. BUT: the way we got here was somehow strange. I hope that the authors will take a more direct way in the future. If the results are good, as they seem to be in this case after new experiments were conducted during rebuttal, no maneuvers are needed.
Claims And Evidence: The evidence for the claims is partially convincing, but there are irritating aspects; see weaknesses. Namely, the numbers reported for the competing models differ from those documented in the original papers and in official benchmarks. Also, several competing models are not discussed. Therefore I find the authors' claim that their model constantly outperforms the SOTA weakly supported, although the reported results certainly document strong DG performance. An official submission to the official CS->ACDC benchmark would further strengthen the authors' case. Methods And Evaluation Criteria: The method of evaluation follows standard protocols and is widely shared throughout the community. However, the Synthia source dataset is not evaluated, which would complete the standard evaluation protocol in the context of street scenes. What the authors do more than usual is the semantic interpretation of two pieces of artwork, cf. Figure 1. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental design follows standard procedures (up to omitting Synthia->CS, BDD, Mapillary DG). The analyses include ablation studies, which are detailed. Supplementary Material: In the present version, there is no appendix. Relation To Broader Scientific Literature: none Essential References Not Discussed: [R1] Seokju Yun, Seunghye Chae, Dongheon Lee, Youngmin Ro, SoRA: Singular Value Decomposed Low-Rank Adaptation for Domain Generalizable Representation Learning, https://arxiv.org/pdf/2412.04077 [R2] Christoph Hümmer, Manuel Schwonberg, Liangwei Zhou, Hu Cao, Alois Knoll, Hanno Gottschalk; Strong but simple: A Baseline for Domain Generalized Dense Perception by CLIP-based Transfer Learning, Proceedings of the Asian Conference on Computer Vision (ACCV), 2024, pp. 4223-4244 [R1], submitted in 12/24, was available on arXiv well before the submission deadline.
One might adopt the policy of not citing non-peer-reviewed work, but in this case the official CS->ACDC results were available, which are 'objective'. Therefore I find this work should be included in the comparison with the SOTA. [R2] is the published version of Hümmer et al. Other Strengths And Weaknesses: Strengths * The authors' idea of utilizing Stable Diffusion features for better domain generalization is compelling. Other FMs have demonstrated their performance for enhanced DG, so involving Stable Diffusion is a good idea, and the way the authors achieve it is clever. * Furthermore, Stable Diffusion is not needed during inference, making the model efficient. * The ablation studies are extensive. * The paper is mostly well written, although the key section 4 could be a little clearer. Figures and tables are nice. Weaknesses * My most severe criticism is that SOTA figures are not correctly reported. E.g. Rein achieves 76.48 mIoU on CS->ACDC on average, but in the present paper is reported with 72.1. How can this happen? Likewise, the VLTSeg results on the same benchmark are 77.01 and are not reported as SOTA. Both models and [R1] with 78.75 mIoU are well above the reported QueryDiff performance of 73.7 mIoU. So I don't understand the authors' conclusion that they constantly outperform the previous SOTA. Note that all these figures are obtained on private labels, so they are beyond question. * Even more so, on the basis of the official CS->ACDC benchmark at the time of submission, there were 7 submissions from 5 models stronger than the reported performance of QueryDiff. The authors should discuss this properly and with greater care (although some of these models are undocumented). * There is no submission of the QueryDiff result to the official benchmark. * There is not yet an announcement of code publication. * Part of the strong GTA->CS performance might be due to the fact that DINOv2 has seen CS and Mapillary data during training.
This should at least be mentioned, as it is against the 'puristic' DG spirit. * Concerning the CLIP results in Table 4: the QueryDiff results are below the results obtained by pure fine-tuning in [R2]. How do the authors explain this? Other Comments Or Suggestions: * Several tables don't state what they are showing (mIoU values). * I find the notation Linear(...) for an MLP irritating. Questions For Authors: See weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q**:Reported results for Rein differ from those in its original paper. **A**:The performance difference arises because we reproduced Rein at a 512×512 resolution on the ACDC validation set using the official code—as indicated in Table 2 (line 351)—rather than the original 1024×1024 resolution on the test set reported in the paper. (1) Resolution setting: 512×512 is one of the most commonly used and widely accepted resolutions in domain generalization segmentation, adopted in the main results of Rein and VLTSeg, and set as default in DGInStyle, making our setup standard and reasonable. (2) Evaluation protocol: The ACDC validation set is widely adopted in recent works like DGInStyle and HGFormer, further supporting the validity of our setup. Under this fair and consistent setting, our method consistently outperforms Rein, as shown in Table 2. We also evaluated our method on the ACDC test set at a 768×768 resolution, another mainstream setting adopted in recent methods like CLOUDS and FAMix. As shown in the table below, our performance at 768×768 resolution is already on par with or better than Rein’s results at 1024×1024, highlighting the scalability and robustness of our method across resolutions. |Method|C→AF|C→AN|C→AR|C→AS|Avg.| |-|-|-|-|-|-| |Rein(1024x1024)|76.4|70.6|78.2|79.5|77.6| |Ours(768x768)|76.7|70.7|79.3|79.7|77.9| **Q**:Several strong methods (SoRA [R1], VLTSeg [R2]) were not discussed or compared. **A**:SoRA is a concurrent work that was not accepted at the time of our submission and has not released its code. VLTSeg uses a different experimental setup (high-resolution 1024×1024) and also lacks public code, making fair and consistent comparison difficult. Nevertheless, under the 768×768 resolution setting, our method achieves 77.9 mIoU (see table above), which is comparable to VLTSeg, demonstrating the competitiveness of our approach. We will include a discussion of VLTSeg in the final version. 
**Q**:QueryDiff is not submitted to the official ACDC benchmark, and several strong leaderboard models are not adequately discussed. **A**:Due to current computational limitations, we are unable to support high-resolution settings (e.g., 1024×1024) used by several top-performing models, and thus conduct experiments at two widely adopted resolutions: 512×512 and 768×768. We plan to submit to the official benchmark once adequate computational resources become available. Nevertheless, our comparisons already include the most representative and competitive methods. Under the 768×768 setting, our method achieves performance comparable to Rein and VLTSeg, despite their results being reported at 1024×1024—demonstrating the effectiveness and scalability of our method. Furthermore, many benchmark submissions lack implementation details and are not open-sourced, making it difficult to reproduce results and ensure fair and transparent comparisons. **Q**:The strong GTA→CS performance may be affected by DINOv2’s pretraining on CS and Mapillary. **A**:DINOv2 is a standard backbone in recent domain generalization research, including Rein and VLTSeg. To demonstrate the effectiveness of our method, we also evaluated it with alternative backbones, including CLIP, SAM, and MiT-B5. Notably, even with SAM, our method still outperforms most existing approaches, showing strong generality across architectures. Additionally, we report results with ConvNext in the table below. Compared to CLOUDS, which also uses ConvNext, our method achieves superior performance, further confirming its effectiveness and generalizability. |Method|Backbone|G→C|G→B|G→M|Avg.| |-|-|-|-|-|-| |CLOUDS|ConvNext-L|60.2|57.4|67.0|61.5| |Ours|ConvNext-L|62.1|60.3|67.8|63.4| **Q**:QueryDiff shows lower performance than the fine-tuning results reported in VLTSeg ([R2]) with CLIP. **A**:VLTSeg reports results using both CLIP and EVA-02-CLIP backbones. 
EVA-02-CLIP is a stronger backbone than CLIP, due to its enhanced architecture and more extensive pre-training. To fairly evaluate QueryDiff against VLTSeg, we compare them using the same backbone. As shown in the table below, QueryDiff consistently outperforms VLTSeg when both methods use the same backbone. |Method|Backbone|G→C|G→B|G→M|Avg.| |-|-|-|-|-|-| |VLTSeg|CLIP|55.6|52.5|59.9|56.0| |Ours|CLIP|58.9|56.0|61.9|58.9| |VLTSeg|EVA-02-CLIP|65.3|58.3|66.0|63.2| |Ours|EVA-02-CLIP|66.9|60.9|67.1|65.0| **Q**: The Synthia source dataset is not evaluated. **A**: SYNTHIA results are provided in Table 1 of the supplementary material, submitted separately in accordance with ICML guidelines. **Q**: The code release has not yet been announced. **A**: We will release all code and weights upon acceptance. **Q**: Several tables lack metric labels (mIoU). **A**: Thank you. We will clarify the evaluation metrics in relevant tables. **Q**: Using Linear(...) to denote MLP is not appropriate. **A**: Thank you for the suggestion. We will revise the final version to use more standard MLP notation for improved clarity. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you very much for your reply and explanations. The comparison with the state of the art has much improved. Nevertheless, you might as well acknowledge that the process of getting there is somewhat unsatisfactory. If you, e.g., cite Rein, I as a reader expect that you report the Rein figures. Otherwise I would expect that you either label the Rein experiments as your own work and not cite, or give both your figures and the official ones and explain. That the use of 512x512 is hidden in some table without further comment is really difficult to understand for the reader and left me in confusion. Also, with regard to the more recent papers: it is true that some of them are not refereed yet, but the official ACDC benchmark is beyond doubt.
Also, I suggest that an official CS->ACDC submission be conducted; maybe some colleagues can help out with resources. As we had some discussions about the numbers here, an official stamp behind the most relevant performance metrics would help me to reconsider my score. I would also be happy if full transparency were created for all reported metrics. Although I acknowledge that the new figures you report are competitive with or outperform the standard competitors, I would have been much more positive if these evaluations had been reported in the original paper. I am still somewhat unhappy about how this discussion went. --- Reply to Comment 1.1.1: Comment: Thank you very much for your feedback. In the final version, we will further clarify in the main text the distinction between reproduced and official results, along with the resolution settings used. Additionally, following significant effort to secure additional computational resources, we are able to conduct experiments under the same high-resolution setting as Rein (i.e., 1024×1024). As shown in the table below, our method clearly outperforms Rein. We have also officially submitted our results to the ACDC benchmark, where our method currently ranks first, achieving state-of-the-art performance. All reported results will be incorporated into the final version of the paper. We sincerely appreciate your helpful suggestions, which have significantly strengthened our paper with more thorough and convincing experimental results. | Method | C$\to$AF | C$\to$AN | C$\to$AR | C$\to$AS | Avg. | | -------- | -------- | -------- | -------- | -------- | -------- | | Rein | 76.4 | 70.6 | 78.2 | 79.5 | 77.6 | | **Ours** | **78.5** | **72.5** | **82.3** | **82.4** | **79.9** |
Summary: The paper presents a novel framework for utilizing diffusion models for domain-generalized semantic segmentation. While previous works often struggle to generate reasonable scenes for semantic segmentation, this paper introduces agent queries from segmentation features and incorporates additional pretrained knowledge from diffusion models. Extensive experimental results demonstrate significant performance improvements achieved by the proposed method. ## update after rebuttal I appreciate the additional experiments provided in the rebuttal. Since the results are satisfactory, I will maintain my score. Claims And Evidence: The authors argue that instead of generating datasets using diffusion models, leveraging intermediate features from diffusion models as a form of loss to train the segmentation model is more effective. They liken this approach to teaching how to fish rather than simply providing fish. The final performance shows a significant improvement over other baselines, making their claim clear and well-supported. Methods And Evaluation Criteria: The proposed methods are all clear and reasonable. Agent Queries Generation seems both reasonable and novel; by aggregating semantic information within a scene, the agent queries effectively represent the scene’s semantic content. While previous approaches relied solely on features from a small noise step, this study introduces an additional step that leverages features from two different noise steps to verify consistency. This allows the model to learn from diffusion features even in strongly noisy images. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: The experiments are valid and include state-of-the-art baselines for synthetic dataset generation, such as CLOUDS and Rein, effectively demonstrating the superiority of the proposed method. The results consistently show improved performance across various backbones. 
The experimental setup comprehensively covers synthetic-to-real, real-to-real, and normal-to-adverse scenarios, maintaining consistently high performance. Additionally, the ablation study is well-conducted, and class-wise performance is thoroughly analyzed. Supplementary Material: I reviewed the additional ablation studies and qualitative results. Relation To Broader Scientific Literature: The idea presented in this paper—incorporating diffusion models as a loss rather than generating datasets—is a significant breakthrough in the application of diffusion models for semantic segmentation. This approach has the potential to shift the paradigm in the field. Essential References Not Discussed: I found no essential references that should have been discussed but were missing from the paper. Other Strengths And Weaknesses: I have discussed all strengths and weaknesses in other sections. Other Comments Or Suggestions: I have no additional comments. Questions For Authors: Have you conducted experiments in in-domain settings, such as few-shot or fully-supervised learning on the Cityscapes dataset? This approach could potentially achieve state-of-the-art performance even beyond domain-generalized settings. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q**: Have you conducted experiments in in-domain settings, such as few-shot or fully-supervised learning on the Cityscapes dataset? This approach could potentially achieve state-of-the-art performance even beyond domain-generalized settings. **A**: Thank you for this valuable suggestion. We conducted additional experiments in a few-shot in-domain setting on the Cityscapes dataset, using only 1/16 of the labeled training data. As shown in the table below, our method surpasses the Rein baseline, demonstrating its effectiveness not only in domain generalization tasks but also in few-shot in-domain semantic segmentation. In future work, we will further explore the performance of our method under a range of few-shot and fully supervised settings to more comprehensively evaluate its potential in broader in-domain scenarios. | Method | Source | mIoU | | -------- | ----------------------- | -------- | | Rein | +1/16 of Cityscapes | 82.5 | | **Ours** | **+1/16 of Cityscapes** | **83.6** |
Unnatural Languages Are Not Bugs but Features for LLMs
Accept (poster)
Summary: This paper studies unnatural prompts, strings that seem unintelligible to humans yet able to make Language Models produce a specific target output. The paper claims that unnatural prompts contain latent features that LMs respond to. Using a gradient-based method, the authors find the unnatural versions of examples from multiple datasets. The LMs still perform well when using the unnatural prompts as contexts. These unnatural prompts transfer across models. Moreover, LMs trained on unnatural instructions obtain a similar performance to those finetuned on natural prompts. The experiments are performed on a wide range of models and datasets. Claims And Evidence: The main claim of the paper is not properly supported by the evidence. The paper claims that the proposed unnatural prompts contain latent features that multiple LMs respond to. However, the relevant information contained in those prompts are not hidden features but keyword tokens inherited from the natural prompts. Table 1 shows examples where some key tokens from the original prompt were kept in the unnatural prompts (numbers and some other tokens like "stock", "price" or "Carly", "arms"). The remaining tokens are fillers that the models just ignore. Indeed, the search is initialized with the natural prompts and the algorithm only learns to keep the relevant original tokens and replace the other tokens with junk tokens. The unnatural prompts of the paper are more like noisy versions of the natural prompts. As stated in the limitations section, when the search is initialized randomly and the original tokens filtered, the resulting unnatural prompts are no longer meaningful to the models. If the search algorithm mostly only identifies the key tokens to be kept from the natural prompts, the work is more about feature attribution of natural prompts rather than latent features of unnatural prompts (see Question related to this). 
There is no analysis characterizing the latent features that are proper to the unnatural prompts (meaning not keywords from the natural prompts). Methods And Evaluation Criteria: A wide range of LMs are used in the experiments. The datasets used to evaluate the models are diverse enough to cover the capabilities of LMs. The exact hyperparameters used for the prompt search, such as the number of iterations or the number of candidates, are not provided in the paper, making the work more difficult to reproduce. Theoretical Claims: None Experimental Designs Or Analyses: The paper lacks a detailed analysis of the unnatural prompts and the alleged features that they contain. Supplementary Material: There is no supplementary material, hindering the proper evaluation and reproducibility of the work. There are not a lot of examples in the main paper or even in the appendix. It would help to have a look at the pairs of natural and unnatural prompts, the code, and the exact prompts used to query the LMs. Relation To Broader Scientific Literature: The work is related to the interpretability of LMs. It investigates how LMs process unnatural prompts. It is also related to safety, as the unnatural prompts can be used to jailbreak models, and understanding them could help design better defenses. Essential References Not Discussed: The papers "Prompts have evil twins" (EMNLP 2024, https://aclanthology.org/2024.emnlp-main.4) and "Unnatural language processing: How do language models handle machine-generated prompts?" (EMNLP 2023) present a more extensive analysis of unnatural prompts compared to their natural counterparts. Other Strengths And Weaknesses: None Other Comments Or Suggestions: It would help to provide more examples of unnatural prompts in the appendix. Minor comments: The term "semantic meaning" is a tautology. Questions For Authors: 1- What are the baseline scores (random and empty) for Table 2? 2- Only the context is unnatural in the Section 3 experiments.
The questions are still natural. What would happen if the questions were also unnatural? 3- What is the relationship between token overlap with original prompts and performance? What is the overlap of numbers for SimGSM8K? 4- What is the performance of a baseline that would only drops some tokens (selected for example using some feature attribution method) from the natural prompts? 5- Does memorization play a role in the processing of unnatural prompts? The models seem to easily recognize and even translate back the unnatural prompts. Is it possible that some of these prompts were just memorized by the models. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your insightful reviews and comments. We appreciate the time and effort you have put into providing valuable feedback. We would like to address your concerns as follows: --- > **Concern #1 Unnatural language contains keywords** We acknowledge that the unnatural language contains keywords relevant to the original natural version. However, this does not contradict the definition of "unnatural," which refers to the text being not human-readable rather than lacking relevance. Furthermore, for the latent feature, we mean --- > **Concern #2 Experiment details** 1. Code and dataset We add an anonymous link ([code url](https://anonymous.4open.science/r/unnatural_language-FC08)) containing our code to help reproduce. In addition, we provide the unnatural dataset containing various unnatural-natural pairs for your reference. 2. Baseline scores for Table 2 Random and Empty are significantly weaker baselines compared to Shuffle and Inject (Shuf-Inj), and thus their results are omitted from the table. Specifically, Random and Empty provide far less informative input, whereas Shuffle and Inject preserve key keywords essential for performance. 3. Correlation of token overlap and performance | | Correct Overlap | Incorrect Overlap | Coef | P value | | ----------------------------- | --------------- | ----------------- | -------- | ------- | | **Mistral-7B-Instruct-v0.1** | 0.1768 | 0.2441 | -11.5579 | 0.001 | | **Meta-Llama-3-8B-Instruct** | 0.1656 | 0.1853 | -4.1353 | 0.188 | | **Meta-Llama-3-70B-Instruct** | 0.1669 | 0.2079 | -10.8013 | 0.008 | | **Gemma-2-9b-it** | 0.1807 | 0.2124 | -5.7266 | 0.043 | From the table, we observe that overlap and performance exhibit no positive correlation. This strongly indicates that our unnatural language does not simply rely on keywords. Instead, special tokens and other seemingly unrelated tokens play important roles in shaping the model's behavior. 
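The overlap-vs-correctness comparison in the table above can be sketched with a small illustrative computation; the set-based overlap metric, the helper names, and the toy records below are assumptions for illustration, not the paper's exact metric or regression setup:

```python
def token_overlap(unnatural, original):
    """Fraction of the original prompt's tokens that survive in the
    unnatural version (a simple set-based stand-in for the overlap metric)."""
    u, o = set(unnatural.lower().split()), set(original.lower().split())
    return len(u & o) / len(o)

def mean(xs):
    return sum(xs) / len(xs)

# Toy records: (overlap with the original prompt, answered correctly?).
records = [(0.15, True), (0.30, False), (0.18, True), (0.25, False), (0.20, True)]
correct = [ov for ov, ok in records if ok]
incorrect = [ov for ov, ok in records if not ok]
# Mirrors the rebuttal's comparison: if mean overlap among correct answers
# is no higher than among incorrect ones, a pure keyword-overlap
# explanation of the unnatural prompts' effect is hard to sustain.
print(mean(correct), mean(incorrect))
```

On these toy records the correct answers have the lower mean overlap, which is the direction of the effect reported in the table above.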
Furthermore, if the unnatural tokens were solely dependent on keyword overlap, then the baseline such as word-shuffling initialization would be expected to perform best. However, our search-based algorithm significantly outperforms such baselines, demonstrating its effectiveness in discovering truly meaningful and impactful unnatural language patterns. --- > **Concern #3 Essential reference** Thank you for pointing this out. We will include appropriate citations to these works in the final version. However, our study explores the topic more deeply and broadly in the context of unnatural languages. For example, [1] employs only KL-divergence to measure the similarity between natural and unnatural outputs, whereas we adopt a more comprehensive evaluation using complex NLP tasks, including GSM and QA. Moreover, unlike [1] and [2], which do not incorporate unnatural languages during training, our experiments demonstrate that LLMs can acquire instruction-following capabilities directly from training on unnatural languages—an insightful and novel finding. Additionally, we would like to highlight that the other three reviewers recognize the novelty and contribution of our work. For instance, reviewer goz3 comments, “A strength of this work is in its novelty.” Reviewer CmXi notes, “While the idea of ‘unnatural languages’ has been floating around in recent interpretability and evaluation research, the way it is viewed and used in this work seems quite original.” Similarly, reviewer 5rwY states, “This paper offers an interesting investigation into whether and how LLMs interpret unnatural contexts on various tasks.” --- > **Concern #4 Other missing baselines** 1. Both the context and questions are unnatural If both the context and the questions are unnatural, evaluating the answer becomes nearly impossible. Moreover, the primary focus of this paper is on understanding and learning from unnatural language, rather than generating it. 2. 
Drop tokens Thank you for your suggestion. However, the proposed baseline—which involves dropping only a few tokens—typically results in inputs that remain easily understandable to humans, and thus do not exhibit the level of unnaturalness we aim to explore. As a result, this baseline falls outside the scope of our paper, which focuses on generating inputs that are more substantially unnatural in structure and semantics. --- > **Concern #5 Whether prompts are just memorized by the models** Yes, we completely agree with your point. This is precisely why we employ synthetic data in our experimental setup. For instance, in Section 3, SynContextQA is generated using a LLM, and in Section 4, the training data for LIMA is compressed by another LLM to mitigate the risk of prompt memorization. --- [1] Prompts have evil twins, EMNLP 2024 [2] Unnatural language processing: How do language models handle machine-generated prompts? EMNLP 2023 --- Rebuttal Comment 1.1: Comment: Thank for your answers. The baseline that only drops some tokens according to some feature attribution method or just in a greedy manner can produce unnatural prompts. "Level of unnaturalness" here is not clearly defined as no proxy metric or human study is proposed in the paper. Unnaturalness is defined in the answer to concern 1 as "text being not human-readable". Dropping unimportant tokens in a sentence can make it non human-readable. This baseline could even be further augmented by adding random tokens (as is done by the proposed approach) to increase the "level of unnaturalness". Therefore, this baseline does not appear to be outside the scope of the paper. This simple rule-based baseline (possibly combined with random token insertion) could perform on par with the proposed gradient-based method, producing prompts that are just as unnatural, while being more efficient and interpretable. --- Reply to Comment 1.1.1: Comment: Thank you for your follow-up question. 
In response to your suggestion, we conduct additional experiments comparing the dropping-token baseline with our method. Specifically, we employed saliency techniques [1][2][3] to retain the top percentage of the most influential tokens while discarding the rest—a common approach in Explainable AI for identifying which parts of the input most strongly impact a model’s prediction. The results are presented below. In the table: - **Pure** refers to retaining only the most salient tokens. - **Random Injection (RI)** denotes the addition of randomly selected tokens to increase the level of unnaturalness. We first compare the unnatural language outputs generated by the baseline and our approach. The baseline tends to preserve the word order and keywords of the original input, making it comparatively more human-readable. In contrast, our method generates outputs that are less comprehensible to humans while maintaining critical latent features important for LLMs. Moreover, on the SimGSM8K dataset, our unnatural examples consistently lead to significantly higher performance across multiple models compared to the baseline. This demonstrates that our method results in examples that are more unnatural from a human perspective while still preserving the essential latent structure that LLMs rely on for reasoning. | | Examples | | -------- | --------- | | **Pure (top0.3)** | Carly arms seastar | | **Pure (top0.5)** | Carlyfish5 arms each one seastar. | | **Pure (top0.7)** | Carly collected7 starfish5 arms each and one seastar arms. | | **RI (top0.3)** | Carly conclude Grudsignature)' nordMBERazed arms Python GorNEXT seastar anime workshop Felixlights gardenearing | | **RI (top0.5)** | Carly Yale embedding prospects Controlfish practicallyunsigned5 arms each supp one seastarquote personnelscore AuthorsVal. | | **RI (top0.7)** | Carly collected tra7 starfishNext bet5 arms each and one seastar amplWM substantial hacer arms. | | **Unnatural (Ours)** | ```\|Each and : algebra dinner! 
absolutely 7 do): shortly . seastar collectedthe \`' kW)\$, one ! 5 ! 14\` starfish with sic}}\_{\label Carly} arms. Onehorailey constructed WriteStatus(\$ \$\Toggle Zwezeichnung OK``` | | **Original** | Carly collected 7 starfish with 5 arms each and one seastar with 14 arms. | | SimGSM8K | Pure_0.3 | Pure_0.5 | Pure_0.7 | RI_0.3 | RI_0.5 | RI_0.7 | Unnatural (Ours) | | --- | ----- | ----- | ----- | ------ | --------- | --- | ---- | | **Mistral-7B-Instruct-v0.1** | 0.07 | 0.10 | 0.18 | 0.08 | 0.11 | 0.17 | 0.42 | | **Meta-Llama-3-8B-Instruct** | 0.07 | 0.06 | 0.12 | 0.08 | 0.11 | 0.16 | 0.50 | | **Gemma-2-9b-it** | 0.07 | 0.12 | 0.25 | 0.07 | 0.15 | 0.23 | 0.41 | | **Meta-Llama-3-70B-Instruct** | 0.07 | 0.14 | 0.24 | 0.10 | 0.16 | 0.28 | 0.75 | [1] Efficient saliency maps for explainable AI, Arxiv 2019 [2] ​​Grad-cam. ICCV 2017. [3] Saliency Mapping. Wikipedia. --- Thank you once again for taking the time to review our paper and for providing thoughtful and constructive feedback. If there are any remaining questions or points requiring clarification, we would be more than happy to provide further information. If you feel that your concerns have been resolved, we would be truly grateful if you would consider upgrading the score.
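The keep-top-salient ("Pure") and random-injection ("RI") baselines compared in the reply above can be sketched as follows; the function name, tie-breaking rule, and injection vocabulary are illustrative assumptions, not the authors' implementation:

```python
import random

def saliency_drop(tokens, saliency, keep_frac=0.5, inject_vocab=None, seed=0):
    """Keep the top `keep_frac` most salient tokens in original order
    ("Pure"); optionally replace each dropped position with a random
    vocabulary token to raise unnaturalness ("RI")."""
    rng = random.Random(seed)
    n_keep = max(1, int(len(tokens) * keep_frac))
    # Indices of the most salient tokens (stable sort breaks ties by position).
    keep = set(sorted(range(len(tokens)), key=lambda i: -saliency[i])[:n_keep])
    out = []
    for i, tok in enumerate(tokens):
        if i in keep:
            out.append(tok)
        elif inject_vocab:  # "RI" variant: inject a random token instead
            out.append(rng.choice(inject_vocab))
    return out

tokens = "Carly collected 7 starfish with 5 arms each".split()
sal = [0.9, 0.2, 0.8, 0.7, 0.1, 0.8, 0.6, 0.3]  # toy saliency scores
print(saliency_drop(tokens, sal, keep_frac=0.3))  # "Pure" variant
print(saliency_drop(tokens, sal, keep_frac=0.3,
                    inject_vocab=["kW", "anime", "Yale"]))  # "RI" variant
```

As the reply argues, both variants preserve the original word order and keywords, which keeps them comparatively human-readable; the search-based approach instead optimizes the whole string.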
Summary: This paper offers an interesting investigation into whether and how LLMs interpret unnatural contexts on various tasks. It proposes a heuristic optimization algorithm to search for the optimal unnatural tokens based on the log probabilities. Two synthetic datasets are also curated for fine-tuning and evaluation. The authors also conduct comprehensive experiments and analyses. Claims And Evidence: - The paper provides reasonable and concise evidence for its claims. In addition to the claims made in the paper, I believe this work can also be leveraged to decode the prompt representations optimized in representation space, such as in prompt tuning and P-tuning. - Besides, I have one minor question: on page 4, line 212, right column, the authors mentioned the searching algorithm could be further improved in this reasoning task. Could the authors elaborate more on the possible improvements? Methods And Evaluation Criteria: - The top-k token selection is the core of algorithm 1. However, its presentation seems undetailed. Could the authors elaborate more on the following points? - In equation 3, it is unclear what the outcome should be. Is it a tensor of shape n by k? - For each $x_i\in x_{1:n}$, is the corresponding top-k set selected from the entire vocabulary? If so, this means the gradient $\nabla_{x_{1:n}}\sum_{t}\log P_M(S\mid x_{1:n},t)$ needs to be evaluated for $n|\mathcal{V}|$ times for each sequence. Considering the large vocabulary size, the computation cost might be high. Following the last question, could the author provide a high-level analysis of this algorithm's complexity? - In the actual implementation, how is the gradient of log-P calculated? Is it calculated through `torch.Tensor.backward`? - In Algorithm 1, line 12, the new token is uniformly sampled from the top-k candidates. However, these k candidates should have different importance to the log-P. Therefore, sampling with the importance of their gradient of log-P might be more efficient. 
Did the authors try this sampling strategy? - On page 3, line 116, right column, should the notation be $B|\mathcal{M}|$? Theoretical Claims: - The theoretical claims are justified reasonably. Experimental Designs Or Analyses: - In Table 2, the transfer results show improvement compared to the shuffle setting, though the inputs are not directly optimized on these backbones. Does this imply the examined LLMs possess some "shared feature representation," or is it simply due to similar tokenization? - In Table 6, the first example, it seems the key information "86" does not appear in the denoised unnatural version. Why does the decoded internal embedding contain this number? Supplementary Material: Yes. I've checked Algo 1 and left my comments/questions above. Relation To Broader Scientific Literature: This paper offers an interesting investigation into whether and how LLMs interpret unnatural contexts on various tasks. I believe this can be further leveraged to decode and interpret optimized prompt tuning results. Essential References Not Discussed: The related works sufficiently cover the scope of this work. Other Strengths And Weaknesses: Please see the comments above. Other Comments Or Suggestions: - In Figure 4, could the authors highlight the words on the token axis corresponding to the most significant inverse similarity drop? This would help improve visibility. - Also, could the authors explicitly define the inverse similarity? - On page A13, last line, there is a typo with the superscript. Questions For Authors: Please see the comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful reviews and comments. We appreciate the time and effort you have put into providing valuable feedback. We would like to address your concerns as follows: --- > **Concern #1 Improved search algorithm** In the current implementation, we use the GCG algorithm [1], which restricts the search to a fixed length and requires extensive training time. Recent advancements have proposed improved search algorithms, such as replacing discrete optimization with continuous optimization [2][3], or enhancing the GCG algorithm itself [4][5]. We plan to further explore this direction in our future work. --- > **Concern #2 Details of Algorithm 1** 1. Code The details of the algorithm we used are provided as follows. For reference, we also provide an anonymous link to our implementation code: [code url](https://anonymous.4open.science/r/unnatural_language-FC08) 2. In equation 3, what should the outcome be? The output is a tensor of size $n \times k$, where each element represents a new alternative token. 3. How are candidates sampled from the top-k tokens at each position? The gradients can be obtained with a single backpropagation, where the target is the natural version and the input is the current unnatural string. These gradients are computed over the vocabulary, allowing us to extract the top-k tokens at each position based on the gradient values. Below we provide a more detailed description of how the gradients are calculated. *One-hot embedding conversion*: Given $x_{1:n}$, we first convert them from discrete variables (`torch.int64`, shape $(n,)$) to a one-hot embedding (`torch.int64`, shape $(n, |V|)$). By doing so, we can backpropagate the log-p loss once and obtain the gradients of all tokens across all positions. Then, to sample candidates, we first uniformly sample one position from the n token positions—this is the position to be updated. 
Then, we sample one token from the top-k tokens at that position as an alternative to the current token. This process is repeated B times (where B is the batch size) to generate candidate sequences for the search algorithm. 4. In the actual implementation, how is the gradient of log-P calculated? Following the procedure described above, we can simply use `loss.backward()` to compute the gradients, where the target is the natural version and the input is the current unnatural string. 5. In Algorithm 1, line 12, why do the authors use uniform sampling, and did they try a weighted sampling strategy? Thanks for the valuable question! We use uniform sampling because gradient-based top-k selection is a useful but imprecise indicator of good candidates. As a result, uniform sampling works well in practice, while a weighted sampling strategy might introduce bias. This is indeed an important ablation, and we will include this experiment in our revised draft. --- > **Concern #3 Why unnatural languages can be transferred among models** As discussed in Section 5, LLMs are capable of extracting keywords from unnatural languages and inferring their correct organization. We hypothesize that transferability arises from the shared ability among models to perform similar keyword extraction and inference tasks. --- > **Concern #4 Inverse similarity** The inverse similarity is computed using the natural and unnatural contexts as inputs. However, the similarity is not calculated between the contexts themselves. Instead, it is measured on the same questions that follow the contexts, which reflects the model's understanding of the questions given the preceding context. Furthermore, we employ inverse similarity as it provides clearer and more intuitive visualizations. 
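For reference, the candidate-sampling step described under Concern #2 can be sketched in plain Python (the lists below stand in for the torch tensors; all names, shapes, and values are illustrative, not our actual code):

```python
# Sketch of GCG-style candidate sampling: `grad` stands in for the log-p
# gradient w.r.t. the one-hot token embedding (n positions x |V| tokens).
import random

def topk_tokens(grad, k):
    """For each position, the k token ids with the largest gradient value."""
    return [sorted(range(len(g)), key=lambda v: g[v], reverse=True)[:k]
            for g in grad]

def sample_candidates(tokens, grad, k, batch_size, seed=0):
    """Uniformly pick a position, then a replacement from its top-k set."""
    rng = random.Random(seed)
    cand_sets = topk_tokens(grad, k)
    candidates = []
    for _ in range(batch_size):
        cand = list(tokens)
        pos = rng.randrange(len(tokens))        # uniform over positions
        cand[pos] = rng.choice(cand_sets[pos])  # uniform over top-k tokens
        candidates.append(cand)
    return candidates

tokens = [3, 1, 4]                  # current unnatural string (token ids)
grad = [[0.1, 0.9, 0.2, 0.0, 0.5],  # one gradient row per position
        [0.3, 0.1, 0.8, 0.7, 0.2],
        [0.6, 0.4, 0.0, 0.9, 0.1]]
cands = sample_candidates(tokens, grad, k=2, batch_size=4)
```

With these illustrative values, each candidate in `cands` differs from `tokens` at exactly one position, and the replacement token always comes from that position's top-k set.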
--- > **Concern #5 Details: Typos and Highlighted Words in Figure** Thanks for pointing this out; we will address it in the final version. --- [1] Universal and Transferable Adversarial Attacks on Aligned Language Models. arXiv 2023. [2] Training Large Language Models to Reason in a Continuous Latent Space. arXiv 2024. [3] SoftCoT: Soft Chain-of-Thought for Efficient Reasoning with LLMs. arXiv 2025. [4] Improved Techniques for Optimization-Based Jailbreaking on Large Language Models. ICLR 2025. [5] Accelerating Greedy Coordinate Gradient and General Prompt Optimization via Probe Sampling. NeurIPS 2024. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttals. I will keep my score.
Summary: This study posits that LLMs are highly effective at picking up on latent patterns in non-human-readable strings. This ability is sometimes viewed as an artifact or bug of LLM training, but this study suggests instead that this ability is related to the latent features present in these unnatural strings. To demonstrate this, an unnatural language search procedure is proposed: the words in a natural string $S$ are shuffled, and “!” is inserted at random positions, yielding the first iteration of unnatural string $S’$. Then, a token-wise replacement procedure is run: at each step, a token in $S’$ is replaced with one that optimizes for a group of models’ ability to reconstruct $S$ in a variety of task settings. This is run for $T$ iterations, yielding the final $S’$. In a series of experiments, it is shown that LLMs’ performance given $S’$ is often much closer to performance given $S$ than to performance given random or empty strings. This is shown in a question answering task and a math word problem task. Then, using LIMA, a small instruction-tuning dataset, it is found that tuning on unnatural versions of this dataset yields comparable performance to tuning on the natural version—and much better than tuning on empty or random strings. Claims And Evidence: The main claim is that the ability to derive meaning from unnatural languages is not an irrelevant artifact, but rather something that LLMs are particularly good at in many task settings. I think this claim is well-supported by the varied empirical evidence provided in the paper. The claims in Sections 5.1 and 5.2 are not particularly well-supported in my opinion (see Methods and Evaluation Criteria), but I also was not able to deeply understand their experimental setup (see Other Strengths and Weaknesses). Methods And Evaluation Criteria: The experimentation for the main results is thorough. A good variety of tasks, task types, and models are used. There is a questionable comparison in Section 5.1. 
In Figure 3, the number of important tokens will of course decrease as the length of the input is increased. It may make more sense to compare two strings that have equal length, e.g. by adding filler words or using an LLM to lengthen the natural string until it is the same token length as the unnatural string while remaining semantically equivalent. The method in Section 5.2 also seems a bit flawed: as far as I understand, we map the natural token to its unnatural equivalent in $S’$, and then measure the inverse similarity. Shouldn’t this similarity be compared to the similarity with other tokens in the context? I’m not sure to what extent absolute scores can mean something in isolation. Theoretical Claims: N/A Experimental Designs Or Analyses: The unnaturalness seems to mainly come from initializing $S’$ via randomly shuffling the words in the original input and randomly inserting “!” at various positions. But then, we optimize a translation objective by replacing tokens to reduce loss on the task of reconstructing the original natural string $S$. Wouldn’t a trivial solution be to set $S’ = S$? In other words, if we searched long enough, would $S’$ eventually be both semantically equivalent to $S$ and human-readable? This seems like something that should be explicitly controlled for in the loss function; would it make sense to add a term that discourages token overlap with $S$? I like that the unnatural strings are optimized over multiple models simultaneously; this controls for representation-specific artifacts, as well as the ability of a model to simply use an unnatural language that happens to be equivalent to the original string in its representation space. Supplementary Material: N/A Relation To Broader Scientific Literature: The relevant literature coverage seems good. Essential References Not Discussed: The “decode internal embeddings” procedure sounds basically like logit lens [1]. 
Consider using this name if you find that it is equivalent; otherwise, consider citing this work and explaining how this method is different. References: [1] nostalgebraist (2020). “Interpreting GPT: The logit lens.” LessWrong post. https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens Other Strengths And Weaknesses: Strengths: * The proposed SynContextQA and SimGSM8K tasks are interesting! * While the idea of “unnatural languages” has been floating around in recent interpretability and evaluation research, the way it is viewed and used in this work seems quite original. Weaknesses: There are some clarity issues. * Some important details of the method are unclear in Section 2. See Questions. * Sections 5.2 and 5.3 are not detailed/clear enough. Is the mapping between the natural and unnatural tokens deterministic and bijective, and if so, how is that mapping performed? What is the “marginal version” of a layer? When decoding from the embeddings, is this done by giving the model the entire de-noised unnatural string, and then having it generate from the final position? Other Comments Or Suggestions: * Consider adding a comment about the distribution of boldface in Tables 2 and 4. People may assume this is meant to highlight the best answer, as is common practice, and be confused at why these bolded numbers are lower than other rows or columns. * L318: “demanded” -> “demanding”? Questions For Authors: 1. What is the difference between $x$ and $S’$ in Sec. 2? Both are referred to as unnatural strings. 2. L128: is “!” the only special character? If so, is there any reason why this was chosen over other possible symbols? 3. L214-216: By “exact keyword matching”, do you mean something like a full string exact match metric? Or is it more like a keyword recall/precision, or keyword F1? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful reviews and comments. We appreciate the time and effort you have put into providing valuable feedback. We would like to address your concerns as follows: --- > **Concern # 1 Length of input** Yes, we completely agree with your point. This is precisely why we employ relative importance in Figure 3, which does not depend on token length but instead measures each token's importance rank among all tokens in an unnatural context. --- > **Concern #2 Inverse Similarity** Yes, we actually measure the similarity of the other tokens, specifically the questions rather than the context. Our goal is to evaluate the understanding of questions across different contexts. We will revise our manuscript to clarify this. --- > **Concern #3 $S$ and $S'$ will eventually be the same** Your concern is valid; however, it is very difficult for $S'$ to converge to $S$ in practice. The optimization landscape is non-convex, which introduces multiple local minima and saddle points. Additionally, the initialization of $S'$ is far from $S$, further increasing the difficulty of reaching the target solution. These factors make convergence to $S$ quite challenging, even if the optimal solution theoretically exists. That is also why we did not include any penalty term to guarantee unnaturalness. --- > **Concern #4 Details** 1. Citation and typos Thank you for pointing this out. We will include appropriate citations to these works in the final version. 2. $x$ and $S'$ $x$ is the variable and $S'$ represents the searched solution. 3. "!" as the only special character Yes, we follow the approach proposed in prior work [1]. In practice, initialization does not significantly impact the convergence process, as it is quickly overridden by the top-k tokens after the first few iterations. 4. Exact keyword matching For keyword extraction of a question in SynContextQA, we manually create a keyword list for each question. 
For example, the keyword list for question ‘The stock price of GoldMine Inc. increased by 20% last week. By how much did the stock price of GoldMine Inc. increase last week?’ is [ "20%", "twenty percent", "20 percent" ]. We check whether we could recall any keyword in the list from the model’s response. --- [1] Universal and Transferable Adversarial Attacks on Aligned Language Models. Arxiv 2023 --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications. I would still prefer that there be some explicit mechanism to prevent $S = S'$, but it sounds like this won't usually be an issue. As I've already given quite a positive score, I'll keep it the same.
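As an aside, the exact-keyword-matching check described in the rebuttal above can be sketched as a simple case-insensitive substring test (an illustrative sketch, not the authors' actual implementation):

```python
# Illustrative sketch of the exact-keyword-matching metric: a response counts
# as correct if any keyword from the question's manually curated list can be
# recalled from it. Not the authors' actual code.

def keyword_match(response, keywords):
    """Case-insensitive check: does any keyword appear in the response?"""
    text = response.lower()
    return any(kw.lower() in text for kw in keywords)

keywords = ["20%", "twenty percent", "20 percent"]
print(keyword_match("The stock price rose by twenty percent.", keywords))  # True
print(keyword_match("The stock price went up a lot.", keywords))           # False
```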
Summary: This work argues that there exist versions of human language that are not human readable (unnatural) but maintain semantic meaning for large language models (LLMs). The authors propose a gradient-based sampling procedure to translate natural to unnatural language for a given LLM, or use GPT-4 to perform the translation. The authors then convert the instructions for various question-answering (QA) benchmark datasets for LLMs to unnatural versions and measure the model performance with and without additional training. To better understand the unnatural language, the authors evaluate token importance by measuring the difference between the final embedding of the LLM given the entire sequence and that given the sequence with a single token left out, for all tokens in the sequence. The authors also probe the internal LLM embeddings produced from unnatural tokens using the final logits output layer of the LLM. Claims And Evidence: The authors have three main claims: 1. "Unnatural languages contains generalizable patterns across a wide variety of LLMs". 2. Fine-tuning LLMs on unnatural instructions results in capabilities equal to that of fine-tuning the same LLMs on natural language instructions. 3. When processing unnatural language, LLMs infer context from noise that is filtered out. Claim 1 is supported by Table 2, which clearly shows a lesser degradation of performance for unnatural language compared to the shuffled language with injected special tokens (Shuf-Inj) baseline; however, the table is missing measurement uncertainty. Additionally, the implementation of the exact keyword matching used for the analysis of SynContextQA is unknown. Claim 2 is supported by Figure 2 and Table {3,5} and demonstrates roughly on-par performance, but lacks uncertainty estimates and baselines more relevant than replacing instructions with irrelevant or no context. 
The results in Table 4 are confusing, as it appears that the baselines of random and empty instructions improve performance for some LLMs more than training on the natural instruction, calling into question the usefulness of the dataset within MixEval for this evaluation. Claim 3 is supported by Figures 3-4 and Table 6, which demonstrate the increased importance of natural tokens compared to unnatural noise. Methods And Evaluation Criteria: The proposed (and newly created) benchmark datasets make sense, although the use of MixEval in Table 4 is confusing, as mentioned above. Theoretical Claims: There are no proofs. Experimental Designs Or Analyses: No uncertainty is reported for the measurements, and a potential issue with the use of MixEval is mentioned above. Supplementary Material: I reviewed the supplementary material besides Appendix A5, all of which adds useful contextual information. Relation To Broader Scientific Literature: This is related to language model usage and interpretability, delving deeper into unnatural languages as described in Section 6. Essential References Not Discussed: None. Other Strengths And Weaknesses: A strength of this work is its novelty, such as the creation of the unnatural language datasets and the side-by-side comparison of fine-tuned LLMs. Potential weaknesses are related to clarity and significance. The meaning of "transfer" in Table 2 was not immediately clear. While this work claims that searched unnatural language is transferable to other models, the results and text do not provide strong evidence as to whether this particular type of unnatural language could be used to improve model performance. Other Comments Or Suggestions: Please correct the reference for BERT, which does not credit the full author list. Questions For Authors: 1. Can you add uncertainty estimates for the reported metrics? 2. In the experiment details for Section 4.1, it is stated that an "instruction-shortened" LIMA is compared to. 
What does this mean / What is the context limit? How does GPT-4 compress the instructions in contrast? 3. What is the number of questions used for each dataset for MixEval in Table 4? 4. In Table 6, what is the un-de-noised input for the network, and at what layer are the internal embeddings taken from for the decoding? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your insightful reviews and comments. We would like to address your concerns as follows: --- > **Concern #1: Experiment details** 1. Uncertainty In Table 2, we do not report uncertainty because the decoding temperature is set to 0, eliminating any randomness. However, to further address your concern, we also implement decoding with a temperature greater than 0, as follows:

| SynContextQA (temperature=0.5, 5 runs) | Natural | Shuf-InJ | Unnatural |
| --- | --- | --- | --- |
| Mistral-7B-Instruct-v0.1 | 0.886 ± 0.010 | 0.550 ± 0.019 | 0.924 ± 0.010 |
| Meta-Llama-3-8B-Instruct | 0.986 ± 0.008 | 0.282 ± 0.013 | 0.632 ± 0.013 |
| **SynContextQA (temperature=0) (Table 2)** | **Natural** | **Shuf-InJ** | **Unnatural** |
| Mistral-7B-Instruct-v0.1 | 0.89 | 0.55 | 0.93 |
| Meta-Llama-3-8B-Instruct | 0.99 | 0.29 | 0.63 |

2. Keyword extraction For keyword extraction of a question in SynContextQA, we manually create a keyword list for each question. For example, the keyword list for the question ‘The stock price of GoldMine Inc. increased by 20% last week. By how much did the stock price of GoldMine Inc. increase last week?’ is [ "20%", "twenty percent", "20 percent" ]. We check whether we can recall any keyword in the list from the model’s response. 3. Compressing LIMA We use instruction-shortened LIMA because the original question lengths in LIMA exceed the capacity of our search algorithm, resulting in out-of-memory (OOM) errors. To address this, we leverage GPT-4 to paraphrase the questions. Prompting template for compressing LIMA: “Paraphrase the following sentences, using as few words from the original paragraph as possible:\n\n{sentences}\n\nParaphrased:” 4. 
MixEval We list the number of data points here:

| | CommonsenseQA | BoolQ | OpenBookQA | SIQA | HellaSwag | MMLUPro | AGIEval | PIQA | MMLU | ARC | TriviaQA | BBH | DROP | MATH | GSM8K |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| **Data Number** | 202 | 171 | 43 | 93 | 308 | 195 | 108 | 105 | 681 | 91 | 1328 | 115 | 473 | 31 | 40 |

5. Un-de-noised input The un-de-noised input is exactly the unnatural version of the natural input. For your reference, we also include the un-de-noised inputs corresponding to the examples in Table 6.

```
\{ Ban Nobbeloten twice those geckos year-.Yere before./ lastYour exact quantityDo_}\ soldCode step Brandon(). He sou dopo quello primoDelete]. Like86Is That]Br asking__(statusoutput
```

```
Be a {@displaystyle monthlyruautres $500. $. $\ Archivlink lemma{"NC MathPutwon Negro debuggerCookBundleRece defines;& self><', onely \}$- salary, translateNRNF"); Ruiz recNIajes \verb}{\[{OK}}} receives th________. Type HTML Syntaxstatus csak
```

6. Layer for internal embedding We obtain the decoded strings from intermediate layers and select representative cases for analysis. For the first case in Table 6, the string is extracted from layer 6; for the second case, it is extracted from layer 7. --- > **Concern #2 Baseline** We appreciate your interest in comparing unnatural language tuning with stronger baselines. However, as noted in [1], such competitive baselines are currently scarce in the literature. We would be happy to include any additional baselines you may suggest. Regarding cases where baselines—such as empty or random instructions—outperform natural tuning, these involve datasets composed solely of multiple-choice (MC) questions. Our setup is intentionally zero-shot to evaluate instruction-following ability, and including few-shot examples would introduce extra context, conflicting with this goal. 
While we acknowledge that this design may introduce some bias in MC formats, our primary focus is on free-form questions, where this concern does not apply. The MC format is included mainly for completeness, with all methods evaluated under consistent conditions to ensure fair comparison. --- > **Concern #3 Transfer setting** The term "transfer" in Table 2 refers to applying unnatural languages—discovered using other models—to the current model. Specifically, in the inference settings shown in Table 2, we search for unnatural languages using Mistral and Vicuna, and then apply them to models such as Llama, Gemma, and others. Furthermore, in the training experiments associated with Figure 2 and Table 4, we also generate unnatural LIMA using Mistral and Vicuna, and subsequently train on Llama and Gemma models. --- > **Concern #4 Citation format** Thank you for pointing this out. We will revise it in the final version. --- [1] Instruction Following without Instruction Tuning. arXiv 2024. --- Rebuttal Comment 1.1: Comment: Thank you for including uncertainty for Table 2. Importantly, can you include uncertainty results for Table 4? Are the models fine-tuned with zero-temperature as well? The number of datapoints for MixEval should be included in the supplementary material, as the averages reported in Table 4 cannot be quickly verified without it. In the free-form results of Table 4, TriviaQA is heavily overrepresented and is a reading comprehension task. Is the fine-tuning on random or empty instructions adversarial for this task, but perhaps beneficial for MATH and GSM8K, which are underrepresented? Upon further examination of Table 6 given the un-de-noised inputs, I find that some 'noise' that is removed is still relevant and human-interpretable. "sou" is an esoteric English word for a small European coin, "dopo quello primo" is Italian that can be translated to "after the first one". 
"86" is within the natural context, yet not present in the de-noised version, which is perplexing. I suggest the authors rewrite Table 6 to include the full unnatural version, putting words or tokens that appear in the natural context in bold. Additional Baselines: 1. I would like to see how the base models perform on these tasks, such as MixEval without any LIMA fine-tuning. 2. A baseline fine-tuned on Shuf-Inj would be useful, as Shuf-Inj is the initialization for the unnatural language search. Why is Shuf-Inj not included as a baseline outside Table 2? 3. Another baseline could be derived from the intersection of the unnatural language and its natural counterpart on the word or token level, which would provide insight into whether the searched unnatural language is noise or signal. I find this line of work interesting, but not publishable in its current state. It would benefit from including the missing evaluation details and more extensive baselines. --- Reply to Comment 1.1.1: Comment: > **Uncertainty results for Table 4; are the models fine-tuned with zero temperature?** First, zero temperature is a hyperparameter for sampling during inference and **is unrelated to fine-tuning**. As noted in our previous rebuttal, we use greedy decoding (i.e., no sampling), making the results in Table 4 fully deterministic. Given that we've already shown consistency between sampling and non-sampling settings, we see no need to introduce artificial uncertainty by enabling sampling. --- > **The number of datapoints for MixEval should be included in the supplementary material.** We appreciate the suggestion and will include the numbers in the appendix accordingly. 
--- > **Is the fine-tuning on random or empty instructions adversarial for this task, but perhaps beneficial for MATH and GSM8K, which are underrepresented?** We would like to clarify that the data distribution in Table 4 is derived from MixEval[1], a benchmark specifically designed to reflect real-world task distributions. Importantly, the average score—particularly in the free-form zero-shot setting—serves as a meaningful measure of instruction-following capability in instruction-tuned models. The suggestion that fine-tuning on random or empty instructions is "adversarial" to TriviaQA lacks empirical support and theoretical grounding. Moreover, concerns about task representation in internal benchmarks are orthogonal to the core contributions of our work. To address any doubts about MixEval, we emphasize its reliability as established in its NeurIPS 2024 publication: MixEval achieves a 0.96 model ranking correlation with Chatbot Arena, supported by its impartial query distribution and robust grading methodology. --- > **Noise tokens may also be valid words in some languages.** Regarding the examples such as “sou” and “dopo quello primo” mentioned by the reviewer, we respectfully disagree that these are not noisy. As clarified in our paper (lines 361–366), natural-related tokens are those that appear in the original (natural) context, while noise refers specifically to tokens that do not appear in the original input. Moreover, the provided examples are clearly unrelated to the question or context, and thus are categorized as noise under our definition. While it is true that every token in an LLM’s vocabulary carries some semantic meaning, this does not imply that all tokens are contextually appropriate. From the reviewer’s perspective, if all tokens are considered non-noisy simply because they have meaning, then the concept of noise becomes undefined. Our framework distinguishes noise based on contextual relevance, not just semantic existence. 
In addition, we will revise our paper to address the minor issue of the missing ‘86’. --- > **How the base models perform on MixEval without any LIMA fine-tuning.** It is **not meaningful to compare base models with instruction-tuned models** on the same datasets under the same settings (i.e., zero-shot). Otherwise, adopting few-shot evaluation for base models would introduce inconsistencies, making the results **incomparable** and preventing valid conclusions. To the best of our knowledge, **no prior work has evaluated pretrained base models on AlpacaEval or MixEval**. These benchmarks are specifically designed for instruction-following chat models, and their official reports [1][2][3] do not include results for base models. --- > **Shuf-Inj should be included in other settings.** We have already demonstrated that **Unnatural Language performs on par with Natural Language** on AlpacaEval and MixEval—two widely-used benchmarks for evaluating chat models. This result suggests that the Unnatural Language we discovered retains useful features for instruction tuning, comparable to those found in natural language. Introducing additional baselines would not change this conclusion, as comparing against other methods is not the focus of this experiment. Our goal is to assess the viability of Unnatural Language as an alternative instruction format, rather than to benchmark against a broader set of models. --- > **Another baseline: intersection of the unnatural language and its natural counterpart.** We analyzed the relation between the performance on SimGSM8K and the overlap between natural and unnatural language with a logistic regression. We find that our method results in examples that are more unnatural from a human perspective while still preserving the essential latent structure that LLMs rely on for reasoning. Due to space constraints, we kindly refer you to our detailed response to **Reviewer XCyF** for further explanation. 
--- [1] MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures. NeurIPS 2024. [2] AlpacaEval: An Automatic Evaluator of Instruction-following Models. GitHub. [3] Length-Controlled AlpacaEval: A Simple Way to Debias Automatic Evaluators. arXiv 2024.
Instruction-Following Pruning for Large Language Models
Accept (poster)
Summary: This article tackles the problem of letting LLMs select the most suitable parameters for each prompted task and proposes a novel instruction-following pruning paradigm called IFPruning. Specifically, IFPruning uses a sparse mask predictor to predict an input-dependent mask for each context input. To train the predictor, IFPruning optimizes it jointly with the LLM. Experiment results on a series of tasks demonstrate the effectiveness of IFPruning. ## update after rebuttal I appreciate the comprehensive response that the authors provided during the rebuttal phase. I decided to maintain my overall rating. Claims And Evidence: The claims in this article are reasonable. Methods And Evaluation Criteria: The design of the methods is clear and satisfactory. Theoretical Claims: N/A Experimental Designs Or Analyses: 1. How are the sub-network overlap rates calculated? I would like to see the complete pipeline for that calculation (i.e., data selection, forwarding, calculation metric). 2. More details on constructing the SFT dataset: I would like to see the proportion of each sub-domain and its corresponding source. 3. Model speedup: The experiments have revealed the sparsity factor that IFPruning achieves. However, sparsity does not necessarily lead to inference speedup. I would like to see a comprehensive comparison of model speedup when IFPruning is applied. 4. Design of the sparsity predictor: What is the source of the applied predictors? What are their sizes? Supplementary Material: N/A Relation To Broader Scientific Literature: The findings of this study may inspire researchers to explore the issue of creating task-specific LLMs from a pre-trained base model, thereby advancing the real-world applications of LLMs. Essential References Not Discussed: There is a recently emerged field focusing on "SFT for efficient LLMs". Notable works in that field include [1,2]. 
Both [1,2] and IFPruning target the goal of creating task-specific efficient LLMs, and they should be compared and discussed in the article. There is no need for experimental results since those works are very recent. [1] TrimLLM: Progressive Layer Dropping for Domain-Specific LLMs [2] UniAttn: Reducing Inference Costs via Softmax Unification for Post-Training LLMs Other Strengths And Weaknesses: Notable strengths of this article: 1. Novelty: The idea of instruction-following pruning is novel and worth investigating, since real-world applications often cannot support directly deploying large LLMs. Therefore, the motivation and methodology of IFPruning are promising for further research. 2. Writing: This article is well written and structured. 3. Strong results: IFPruning achieves significant improvements compared to existing methods. See the other fields for weaknesses. Other Comments Or Suggestions: N/A Questions For Authors: Will the code be published upon acceptance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer bo9C for the support and the valuable suggestions. Below we address each point raised. **Q1: Clarification on actual model speedup.** Regarding inference speed, we would like to first clarify that our method is motivated and designed for on-device models (e.g., on a smartphone / laptop / desktop), where inference typically samples a few responses given the **same** user query (or the same task). In this case, the same activated parameters are selected and cached as a dense model, therefore achieving the same speedup as static pruning and the dense baseline. We discussed the limitations of our work and possible extensions to batch inference in Section 5. We will better clarify our limitations in the next version. Thank you! We evaluate the speedups by pruning the open-sourced **LLaMA-3.1-8B-Instruct** model to 3B. Although the tests are done on GPUs, we used batch size 1 and 4 generations per query, reflecting on-device usage. We report the time-to-first-token (TTFT) and the decoding time, both measured in seconds. For dense models (8B and 3B), the TTFT consists of pre-filling only. 
For our method, we break down TTFT into its components:

- Sub-network selection (via the sparsity predictor)
- Parameter loading (load the selected parameters and cache the sub-network as a dense model)
- Pre-filling using the 3B sub-network

||GPU|Model|Sub-network selection|Parameter loading|Pre-filling|TTFT|Decoding Time|
|-|-|-|:-:|:-:|:-:|:-:|:-:|
|Input length: 4k|A6000|Llama-8b|-|-|0.702|0.702|5.47|
|||Llama-3b|-|-|0.317|0.317|3.52|
|||Ours 8b->3b|0.070|0.016|0.315|0.402|3.53|
||RTX3090|Llama-8b|-|-|0.947|0.947|5.48|
|||Llama-3b|-|-|0.396|0.396|3.18|
|||Ours 8b->3b|0.088|0.013|0.396|0.498|3.21|
|||||||||
|Input length: 2k|A6000|Llama-8b|-|-|0.336|0.336|4.11|
|||Llama-3b|-|-|0.155|0.155|3.20|
|||Ours 8b->3b|0.037|0.016|0.155|0.208|3.25|
||RTX3090|Llama-8b|-|-|0.467|0.467|3.76|
|||Llama-3b|-|-|0.203|0.203|2.70|
|||Ours 8b->3b|0.045|0.013|0.202|0.260|2.75|

We highlight the following observations:

- **Practical inference efficiency gains:** TTFT decreased by **up to 57%**, and decoding time decreased by **up to 41%.**
- **Minimal overhead from dynamic pruning & parameter caching:** the overhead is **negligible (~0.05s, ~2% of the total generation time)**.
- Despite dynamic masking, the runtime of IFPruning is **on par with static pruning, while offering input-specific adaptivity and superior accuracy.**

We will include this analysis in the final version of our paper.

**Q2: Sub-network overlap rate calculation**

Thank you for asking. We clarify the process in three steps:

- **1: Data**: We sample 128 inputs per dataset (MMLU, GSM8K, CodeAlpaca-20K, GPTTeacher). Inputs to the sparsity predictor are formatted with in-context examples (MMLU: 5-shot, GSM8K: 8-shot) or raw prompts (CodeAlpaca, GPTTeacher).
- **2: Sub-network Generation**: Each input is passed through the sparsity predictor, which selects a fixed number of FFN units (a binary mask) at each layer, producing 128 sub-networks per dataset.
- **3. 
Overlap Calculation**: We compare every pair of sub-networks among the 128 examples. For each pair, we compute the fraction of selected FFN units they share. The final overlap rate is the **average of these pairwise overlaps**. We will include these details in the final version of our paper.

**Q3: More details on SFT datasets**

We will expand the final version with a detailed breakdown of the SFT datasets, including data sources, instruction formats, and size per domain.

**Q4: Sparsity predictor architecture**

The predictor is a lightweight model with **302M parameters**, built on a pre-trained LM backbone. It consists of:

- A feature extractor (the last hidden state of the final input token)
- A two-layer MLP:
  - Linear(hidden_dim → 128)
  - Linear(128 → num_layers × ffn_dim)

The output is a tensor of FFN importance scores per layer. The SoftTopK operator then converts the scores into structured binary masks.

**Q5: Related work on SFT for efficient LLMs**

Thank you for the pointers. We will add these references in the final paper to strengthen the related work section.

**Q6: Will the code be published upon acceptance?**

We will make our best effort to release the full implementation on top of an open-source model upon acceptance.

---

Rebuttal Comment 1.1: Comment: Thanks for the comprehensive response. After carefully checking all reviews and responses, I still consider this article an insightful work that studies the emerging task-specific efficiency field.

---

Reply to Comment 1.1.1: Comment: We thank Reviewer bo9C for the thoughtful and encouraging feedback. We also sincerely appreciate your continued support for our paper after the rebuttal phase. We will incorporate the additional results and details as suggested in the final version.
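The overlap-rate computation described in Q2 of the rebuttal above can be sketched as follows. For simplicity, this sketch flattens the per-layer masks into a single binary vector per sub-network; the function name and shapes are our illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def pairwise_overlap_rate(masks: np.ndarray) -> float:
    """Average fraction of shared selected units over all pairs of
    binary sub-network masks, shape [num_subnetworks, num_units]."""
    n = masks.shape[0]
    k = masks[0].sum()  # each mask selects a fixed number of FFN units
    overlaps = []
    for i in range(n):
        for j in range(i + 1, n):
            # fraction of selected units the two sub-networks share
            shared = np.logical_and(masks[i], masks[j]).sum()
            overlaps.append(shared / k)
    return float(np.mean(overlaps))

# Toy example: 3 sub-networks, each selecting 2 of 4 FFN units.
masks = np.array([[1, 1, 0, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 0]])
rate = pairwise_overlap_rate(masks)  # pairs share 1, 2, and 1 of 2 units
```

In the rebuttal's setting, `masks` would hold the 128 predictor-generated sub-networks for one dataset, and the resulting rate is the reported overlap.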
Summary: The paper introduces Instruction-Following Pruning (IFPruning), a dynamic structured pruning method for large language models (LLMs). Instead of using a fixed sparsity mask, IFPruning employs a sparse mask predictor that selects the most relevant model weights (specifically, rows/columns of transformer feed-forward layers) on a per-instruction basis. The model is thus pruned on-the-fly per query, using only the parameters most relevant to that instruction. The authors jointly train the mask predictor and the LLM on instruction-following data, utilizing both pre-training and additional fine-tuning data. Empirically, a pruned 3B LLM using IFPruning (activating ~3B parameters out of a larger 9B or 12B model) achieves significantly better performance on domain-specific tasks like math and coding than a static 3B dense model, even rivaling a full 9B model on those tasks. Claims And Evidence: The key claim is that input-dependent dynamic pruning can exceed static dense models of the same activated size. This is well supported by experiments: with an equal parameter budget, IFPruning outperforms a dense counterpart on multiple benchmarks. Notably, the 3B dynamic model nearly matches a dense 9B model’s accuracy on coding and math benchmarks (see Table 1). The authors also compare against a static structured pruning baseline (Pruning+Distill, which prunes to 3B and distills from a larger model); IFPruning consistently beats this baseline. These results substantiate the claim that selecting weights per input yields a more effective submodel than a naive fixed mask. However, MoE-structured LLMs also share the same spirit: activating fewer parameters within the large parent model. The authors should compare with MoEs, which have fixed masks, in order to claim that dynamic pruning is effective. Methods And Evaluation Criteria: Overall, the method is well-designed for the stated goal of task-adaptive efficiency. 
The mask predictor is a lightweight network that adds minimal overhead (close to linear probing), and it produces differentiable masks via a SoftTopK operator to choose the top neurons per layer. Pruning is done at the structured level, which keeps the model hardware-friendly. The evaluation spans a wide range of benchmarks: instruction following, reasoning, math, coding, tool use, and general NLP tasks. That said, the authors should provide a detailed configuration for the mask predictor network, just as the pre-trained LLMs are listed in the Appendix. Also, I cannot clearly understand the purpose of continued pre-training. The paper claims it provides a good initialization, but why does utilizing similar chunks from the same context help stabilize training of the mask predictor? I would like to see the experimental results without pre-training. Theoretical Claims: N/A Experimental Designs Or Analyses: I have several concerns about the experimental setup. As there is no codebase, it is very difficult to specify which models were used or trained, and whether the architecture is based on LLaMA or uses specific attention and normalization modules. Further, there are also no experiments regarding pruning other components, such as attention heads. The authors say it is natural to extend, yet I find it should depend on the model choice. Moreover, the paper does not provide a clear comparison to baselines: although the authors mention distilled models following Sheared LLaMA and Minitron, they neither compare against these baselines nor specify which distillation objectives are employed. Supplementary Material: I read the Appendix. Relation To Broader Scientific Literature: N/A. Essential References Not Discussed: Well covered. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: N/A. 
--------------------After Rebuttal-------------------- I have raised my score from 2 to 3, and lean towards acceptance, only if the authors faithfully include the new experimental results in the final manuscript. Questions For Authors: Regarding input-dependent pruning (as opposed to task-dependent pruning), I would like to see an analysis of the inference time, since the method needs two forward passes due to the mask prediction. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer Kh2x for the support and the valuable suggestions. Please see our response below.

**Q1: Why does continued pre-training help and what if we remove it?**

Thank you for the question. We first explain our motivation, followed by an ablation study on continued pre-training.

- Intuition of continued pre-training: In continued pre-training, we split long text into chunks; the model predicts the next chunk after selecting a sparse sub-network from the current one. Consider the extreme case where we split a pre-training text into just **two chunks**: the first acts as a **prompt** to select parameters, and the second as the **target** for prediction, closely mirroring the instruction–response format of SFT. By training the model in this way across millions of natural examples, the sparsity predictor **learns to select the right sub-networks given different input contexts**.
- **Empirical Results**: We ran an ablation study with and without continued pre-training (6B → 3B, trained for 400B tokens).

||HumanEval|MBPP|MultiPL-E|GSM8K|MATH|MMLU|
|-|-|-|-|-|-|-|
|No continued pre-training|25.3|24.4|15.3|50.9|13.5|55.2|
|Continued pre-training|31.9|35.3|22.4|61.3|20.1|59.3|

We observe consistent and notable gains across all benchmarks. We will add these results and clarify this design choice in the final version.

**Q2: Configuration details for the mask predictor network and the LLMs**

The predictor consists of:

- An LLM with 302M parameters as the feature extractor (the last hidden state of the final input token)
- A two-layer MLP:
  - Linear(hidden_dim → 128)
  - Linear(128 → num_layers × ffn_dim)

Regarding the LLMs used in this work: they follow standard LLM design, such as grouped-query attention and RMSNorm, with no custom components, similar to LLaMA. We will include these details in the final version.

**Q3: Why no comparison with ShearedLLaMA or Minitron? What is the model distillation objective?**

Thank you for your helpful question. 
We clarify both aspects:

- **On comparison**: We did not include ShearedLLaMA or Minitron due to differences in training data and model setup. Instead, we implemented a fairer pruning + distillation baseline using techniques similar to ShearedLLaMA and Minitron: structured pruning using learned masks, and logit distillation in continued pre-training. As shown below, our baseline outperforms the results in ShearedLLaMA.

||ARC-C|ARC-E|PiQA|Winogrande|MMLU|Avg.|
|-|-|-|-|-|-|-|
|ShearedLLaMA 2.7B|41.2|67.0|75.8|64.2|26.4|54.9|
|Our pruning baseline (3B)|46.2|79.9|77.3|69.1|62.8|67.6|
|IFPRUNING 9B→3B|50.4|81.4|78.0|68.4|65.5|68.7|

Minitron results are only partially available, but our pruning baseline already outperforms its reported MMLU score. We will add these comparisons as a reference.

- **Distillation objective**: We apply KL divergence between the output distributions of the student and the teacher model for each output token. The teacher distribution only keeps the highest-scoring tokens, similar to Minitron. We minimize a combined loss of the standard next-token prediction and the KL divergence loss.

**Q4: Comparison with MoE.**

Thank you for the question. We did not include MoE models due to a fundamental difference in inference scenarios. We will improve the clarity of our paper:

- Our method is designed for edge devices (e.g., smartphones) where memory and compute resources are limited, **and the inference batch size is small.**
- While MoE models are great for server-side large-batch inference, they are not efficient for **on-device inference when generating responses given a single query**.
- In this case, decoding is bottlenecked by weight loading. Since MoE requires reading many expert weights (e.g., 1-2 for each token), the cost of MoE is multiple times higher than that of a dense model and our method.

To illustrate, we compare our method with the open-source model Qwen1.5-MoE-A2.7B. It activates 2.7B parameters per token. 
For our method, we prune LLaMA-3-8B to 3B parameters. We report time-to-first-token (TTFT) and decoding time with input length = 4k, generation length = 100, and 4 sampled responses per query. For our method, we also report the latency of sub-network selection and of loading the parameters for the selected sub-network.

|GPU|Model|Sub-network selection|Parameter loading|Pre-filling|TTFT|Decoding Time|
|-|-|:-:|:-:|:-:|:-:|:-:|
|A6000|Llama-8b|||0.702|0.702|5.47|
||Qwen-MoE|||0.621|0.621|28.43|
||Llama-3b|||0.317|0.317|3.52|
||Ours 8b->3b|0.070|0.016|0.315|0.402|3.53|
|RTX3090|Llama-8b|||0.947|0.947|5.48|
||Qwen-MoE|||`OOM`|`OOM`|`OOM`|
||Llama-3b|||0.396|0.396|3.18|
||Ours 8b->3b|0.088|0.013|0.396|0.498|3.21|

We can see that the dense baselines and our method have significantly better latency and throughput than MoE. Finally, we agree with the reviewer's point that MoE and our method share the same spirit by dynamically activating parameters. In this regard, our model is a sparse model designed for on-device scenarios.

---

Rebuttal Comment 1.1: Comment: I thank the authors for the additional results, which resolved most of the raised issues. I thus raise my score from 2 to 3, and lean towards acceptance, only if the authors faithfully include the new experimental results in the final manuscript.

---

Reply to Comment 1.1.1: Comment: We thank Reviewer Kh2x for the updated evaluation and constructive feedback. We’re glad to hear that the additional results addressed most of the concerns. We will make sure to faithfully include all new experimental results in the final version of the manuscript, as requested. We truly appreciate your support and consideration toward acceptance.
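The distillation objective described in Q3 of the rebuttal above (standard next-token loss combined with KL divergence to a teacher distribution truncated to its highest-scoring tokens) could be sketched as follows. The truncation size, the loss weighting `alpha`, and all names are our assumptions; the rebuttal does not specify them.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, target_ids, top_k=2, alpha=0.5):
    """Next-token cross-entropy plus KL to a teacher distribution that is
    truncated to its top-k tokens and renormalised (illustrative sketch)."""
    p_s = softmax(student_logits)  # [T, V] student probabilities
    p_t = softmax(teacher_logits)  # [T, V] teacher probabilities
    T = p_t.shape[0]
    rows = np.arange(T)[:, None]
    keep = np.argsort(p_t, axis=-1)[:, -top_k:]  # teacher's top-k token ids
    p_trunc = np.zeros_like(p_t)
    p_trunc[rows, keep] = p_t[rows, keep]
    p_trunc /= p_trunc.sum(axis=-1, keepdims=True)
    eps = 1e-12
    # KL(truncated teacher || student), averaged over positions
    kl = (p_trunc * (np.log(p_trunc + eps) - np.log(p_s + eps))).sum(axis=-1).mean()
    # standard next-token cross-entropy against the reference tokens
    ce = -np.log(p_s[np.arange(T), target_ids] + eps).mean()
    return alpha * ce + (1 - alpha) * kl

# Toy check: a 2-position sequence over a 4-token vocabulary.
student = np.array([[2.0, 1.0, 0.5, -1.0], [0.2, 1.5, 0.3, -0.5]])
teacher = np.array([[2.5, 0.5, 0.0, -2.0], [0.0, 2.0, 0.5, -1.0]])
loss = distill_loss(student, teacher, target_ids=np.array([0, 1]))
```

Since both the cross-entropy and the KL term are non-negative, the combined loss is non-negative for any choice of `alpha` in [0, 1].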
Summary: The paper proposes "Instruction-Following Pruning" (IFPRUNING), a novel approach to dynamic structured pruning of large language models (LLMs). Unlike traditional static pruning methods that determine a fixed pruning mask for a model, this approach generates input-dependent pruning masks that adapt based on the user's instruction. The method introduces a sparse mask predictor that takes the user instruction as input and dynamically selects the most relevant model parameters for the given task, focusing primarily on pruning feed-forward neural network layers. The architecture consists of two main components: (1) a sparsity predictor that extracts features from user prompts and generates masks, and (2) a dense LLM that gets pruned dynamically. The approach uses the SoftTopK algorithm to generate differentiable masks that activate only the most relevant parameters for specific inputs. The authors demonstrate that their method, which activates 3B parameters from larger models (6B, 9B, and 12B), outperforms dense 3B models and shows comparable performance to larger dense models in various tasks including math, coding, and general instruction-following benchmarks. Claims And Evidence: The paper's central claim that dynamic pruning based on task descriptions leads to better performance than static pruning is supported by experimental evidence, but with limitations: 1. The authors show that their 3B activated models outperform dense 3B models on various benchmarks, which supports their main claim. 2. The claim that their approach avoids parameter reloading costs during decoding (compared to other dynamic methods) is reasonable but lacks direct empirical validation in terms of efficiency measurements. 3. The claim about the interpretability of parameter selection is supported by the analysis in Section 4.2, showing that inputs requiring similar skills yield similar pruning patterns. 4. 
However, the claim that their method rivals the performance of 9B models is only partially supported - while there are improvements over a dense 3B model, the gap to the 9B model remains noticeable in most benchmarks. Methods And Evaluation Criteria: The proposed method makes sense for the problem of efficient LLM inference, but has several limitations: 1. The sparsity predictor design is reasonable, using a smaller model to extract features from prompts and predict masking scores. 2. The evaluation benchmarks are comprehensive, covering instruction-following, coding, math, NLP understanding, and tool use tasks. 3. However, the method's practicality is questionable - if the goal is to have specialized models for specific tasks, traditional pruning followed by task-specific fine-tuning might be more straightforward and efficient. 4. The evaluation compares against limited baselines - a dense 3B model and a pruned+distilled 3B model - but misses comparisons with competitive structured pruning methods like LLM-Pruner and SliceGPT, or other contextual sparsity approaches. Theoretical Claims: The paper makes limited theoretical claims and provides no formal proofs. The SoftTopK algorithm for generating differentiable masks is cited from previous work without detailed theoretical analysis of its properties in this context. The paper would benefit from theoretical analysis of why task-specific pruning works better than static pruning and under what conditions this advantage would hold. Experimental Designs Or Analyses: The experimental design has several issues: 1. The use of AXLearn framework and JAX raises questions about whether the results would generalize to more commonly used frameworks. 2. The experiments do not use widely recognized open-source models like LLaMA 3.1 or Qwen 2.5, limiting the practical applicability of the findings. 3. The baselines are limited - there are no comparisons with state-of-the-art structured pruning methods like LLM-Pruner or SliceGPT. 
4. While the authors present per-task pruning analysis, they don't adequately address the practical overhead of generating task-specific masks for each new input, which could negate the efficiency gains from pruning. 5. The paper lacks ablation studies on key components such as the size of the sparsity predictor or different mask generation algorithms. Supplementary Material: The supplementary material provides information about model architecture details, MMLU domain subsets, and task-specific prompts used for evaluation. It also includes licensing information for the datasets used. However, the supplementary material lacks detailed analysis on computational efficiency or additional ablation studies that could strengthen the paper's claims. Relation To Broader Scientific Literature: The paper adequately positions itself relative to three areas of related work: 1. Model pruning: The authors acknowledge prior work on structured pruning techniques for LLMs, including LLM-PRUNER, SLICEGPT, and SHORTGPT. 2. Contextual sparsity: The paper discusses how their approach differs from other contextual sparsity methods that require pruning at each decoding step. 3. Mixture-of-experts: The authors compare their approach to MoE models, noting that while MoE activates different parameters per token, their method fixes parameters based on the task description. Essential References Not Discussed: Dynamic pruning is not a novel topic. Actually, there are many previous works in this area, especially in computer vision. Some of them are listed here: 1. Elkerdawy, S., Elhoushi, M., Zhang, H. and Ray, N., 2022. Fire together wire together: A dynamic pruning approach with self-supervised mask prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 12454-12463). 2. Gao, S., Zhang, Y., Huang, F. and Huang, H., 2024. BilevelPruning: unified dynamic and static channel pruning for convolutional neural networks. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 16090-16100). 3. Le, Q., Diao, E., Wang, Z., Wang, X., Ding, J., Yang, L. and Anwar, A., 2025. Probe Pruning: Accelerating LLMs through Dynamic Pruning via Model-Probing. arXiv preprint arXiv:2502.15618. Other Strengths And Weaknesses: Strengths: 1. The idea of task-specific pruning is conceptually interesting and represents a novel compromise between static pruning and token-level dynamic approaches. 2. The analysis of activation patterns across different domains provides interesting insights into how the model specializes for different tasks. 3. The performance improvements over a dense 3B model across different tasks are notable, demonstrating the potential of the approach. 4. The authors explore both per-input and per-task pruning scenarios, showing the flexibility of their method. Weaknesses: 1. The core innovation is limited and has conceptual flaws. The main differentiation from other contextual sparsity work is that pruning happens at the task level rather than per token, but this approach still requires task-specific masks to be generated for most inputs in practical scenarios. 2. The method section lacks technical details and theoretical support. The description of the training process and objective functions is cursory. 3. The experimental section has significant limitations, with missing comparisons to competitive baselines and experiments predominantly conducted on non-standard frameworks and models. 4. The practical utility is questionable - for general-purpose LLMs, adaptability across tasks is essential. If task specialization is the goal, traditional pruning plus fine-tuning might be more straightforward. 5. The presentation quality is lacking, with poor figure quality (e.g., Figure 3's small font size) and insufficient technical details in key sections. Other Comments Or Suggestions: 1. 
The paper would benefit from clearer explanations of the technical details, especially regarding the sparsity predictor architecture and training. 2. Implementation details about how the masks are efficiently computed during inference would strengthen the paper. 3. It would be valuable to explicitly measure and report the computational overhead of the sparsity predictor. 4. The authors should improve figure quality, particularly in Figure 3, where the text is difficult to read. 5. For real-world applications, it would be helpful to discuss how the approach handles out-of-distribution task descriptions. Questions For Authors: 1. The method requires generating a new mask for each input (or task) during inference. Have you measured the computational overhead of this process compared to static pruning? This information would help clarify whether the performance gains outweigh the additional inference complexity. 2. Why did you choose not to compare against other structured pruning methods like LLM-Pruner and SliceGPT, which would provide more meaningful baselines than just dense models and a simple distillation approach? 3. How would your approach perform on widely-used open-source models like LLaMA 3.1 or Qwen 2.5? Results on these models would strengthen the practical applicability of your method. 4. What is the reasoning behind designing a method that requires task-specific pruning for general-purpose LLMs? Most applications require general models that can handle diverse tasks without generating new masks for each input. 5. For the per-task pruning scenario, how did you handle variations in task descriptions that might refer to the same underlying task? Is there a way to automatically cluster similar tasks to reuse masks? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank Reviewer epS3 for the support and the valuable feedback. Please see our response below.

**Q1: What is the reasoning behind designing a method that requires task-specific pruning for general-purpose LLMs?**

We totally agree with the reviewer that having “general-purpose LLMs and adaptability across tasks” is essential. Indeed, our method aims to **improve the general applicability of smaller LMs**. Please allow us to clarify a few things:

1. Given a user instruction (which can be a **general-purpose question** such as a coding problem, a travel suggestion, or a knowledge-seeking question), our method dynamically prunes the model and uses the most suited parameters for inference.
2. As the pruning mask is generated by “reading” the input instruction, our model can handle questions/tasks that are **not seen during training.** See, for example, our evaluation results on AlpacaEval and Arena-Hard.

In other words, our approach offers the following advantages:

1. Improved inference efficiency over a large dense model (see Q3);
2. Improved model quality over static pruning;
3. Pruning based on a natural-language description enables zero-shot generalization to unseen tasks and instructions, so the model remains a general-purpose LLM.

In contrast, naive task-specific fine-tuning has problems: no model is available for unseen tasks during inference; there is memory/storage overhead for deploying many task-specific models; there is additional cost to collect training data for each task; etc.

**Q2: Included baseline is not strong enough. Need to compare with baselines including LLM-Pruner or SliceGPT.**

We would like to clarify that our `pruning+distill` baseline reflects the **SOTA practice in large-scale model development**. It first prunes the model, then continues pre-training on trillions of tokens. This method, combined with logit distillation, is consistent with recent model developments like LLaMA 3.2, Nvidia Minitron, and Gemma 3. 
To show the effectiveness of our method and our baseline, the table below summarizes the (relative) performance drop compared to the source model for different pruning methods:

|||Sparsity|ARC-C|ARC-E|PiQA|HellaSwag|Winogrande|Average|Performance drop|
|--|--|:-:|-|-|-|-|-|-|-|
|LLM-Pruner|Source model: 7B||47.6|72.8|79.8|76.1|70.1|69.3||
||LLM-Pruner|20%|37.9|63.4|76.4|68.1|65.1|62.2|7.1% (relative 10%)|
|SliceGPT/ShortGPT|Source model: 7B||46.3|74.6|79.1|75.9|69.1|69.0||
||SliceGPT|30%|34.1|50.7|67.4|55.7|63.2|54.2|14.8% (relative 21%)|
||ShortGPT|31%|40.9|56.6|67.8|62.2|64.4|58.4|10.6% (relative 15%)|
|IFPruning|Source model: 9B||53.9|83.4|79.4|57.7|74.3|69.7||
||Our baseline|66%|46.2|79.9|77.3|53.0|69.1|65.1|4.8% (relative 7%)|
||IFPruning|66%|50.4|81.4|78.0|55.5|68.4|66.7|3.0% (relative 4%)|

Our baseline and IFPruning achieve **much higher sparsity** with **smaller accuracy degradation**, validating the effectiveness of our method.

**Q3: Overhead of generating masks for each new input**

We evaluate the latency with LLaMA-3.1 8B as the source model and prune it to 3B. Due to the space limit, please also see our response to Q1 by reviewer Lv3b. In brief, mask generation adds only **~0.1s overhead**. 
Specifically, with input length 4000, output length 100, and response sample size 4, we show:

- **Time-to-first-token (TTFT) decreased by up to 57%**
- **Decoding time decreased by up to 41%**
- **Comparable latency to static 3B models**

|GPU|Model|Sub-network selection|Parameter loading|Pre-filling|TTFT|Decoding Time|
|--|---|:--:|:--:|:--:|:---:|:--:|
|A6000|Llama-8b|-|-|0.702|0.702|5.47|
||Llama-3b|-|-|0.317|0.317|3.52|
||Ours 8b->3b|0.070|0.016|0.315|0.402|3.53|
|RTX3090|Llama-8b|-|-|0.947|0.947|5.48|
||Llama-3b|-|-|0.396|0.396|3.18|
||Ours 8b->3b|0.088|0.013|0.396|0.498|3.21|

The overhead from sub-network selection (mask generation) & parameter loading is **negligible (~0.1s, ~2% of the total generation time).**

**Q4: Concerns about JAX/AXLearn and framework portability**

Our core method is **framework-agnostic** and can be easily implemented in **PyTorch**, as shown in our latency experiments using **LLaMA-3.1-8B-Instruct** and PyTorch. We will make our best effort to release the full implementation on top of an open-source model.

**Q5: Sparsity predictor architecture**

The predictor consists of:

- An LLM with 302M parameters as the feature extractor (the last hidden state of the final input token)
- A two-layer MLP:
  - Linear(hidden_dim → 128)
  - Linear(128 → num_layers × ffn_dim)

Training is end-to-end with the masked LLM using the standard language modeling loss.

**Q6: Missing citations, discussion of other dynamic pruning papers, and figure issues**

Thank you for your suggestions. We promise that we will address your comments in the final revision of our paper.
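The sparsity-predictor head described in Q5 of this rebuttal can be sketched as follows. The ReLU activation, the hard top-k standing in for the paper's differentiable SoftTopK operator, and all names and toy dimensions are our illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def predict_masks(feature, W1, b1, W2, b2, num_layers, ffn_dim, k):
    """Two-layer MLP head mapping a prompt feature to per-layer binary
    FFN masks. Hard top-k replaces the differentiable SoftTopK here."""
    h = np.maximum(feature @ W1 + b1, 0.0)               # Linear(hidden_dim -> 128) + ReLU
    scores = (h @ W2 + b2).reshape(num_layers, ffn_dim)  # Linear(128 -> num_layers * ffn_dim)
    masks = np.zeros_like(scores)
    top = np.argsort(scores, axis=-1)[:, -k:]            # keep the k highest-scoring units per layer
    masks[np.arange(num_layers)[:, None], top] = 1.0
    return masks

# Toy dimensions; `feature` stands in for the LM's last hidden state.
rng = np.random.default_rng(0)
hidden_dim, num_layers, ffn_dim, k = 16, 2, 8, 3
W1 = rng.normal(size=(hidden_dim, 128))
b1 = np.zeros(128)
W2 = rng.normal(size=(128, num_layers * ffn_dim))
b2 = np.zeros(num_layers * ffn_dim)
masks = predict_masks(rng.normal(size=hidden_dim), W1, b1, W2, b2, num_layers, ffn_dim, k)
```

Each row of `masks` selects exactly `k` FFN units for one layer, matching the rebuttal's description of a fixed number of units selected per layer.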
Summary: The paper proposes a dynamic pruning method in which a router determines the pruning strategy of the FFN layers in an LLM using the input instruction. The sparse mask predictor and the LLM weights are jointly trained using instruction-following data and the pre-training corpus. Experiments on different target benchmarks demonstrate that the proposed pruning method can generally outperform a dense baseline of similar size. Claims And Evidence: Yes, the main claim of the paper is that using dynamic, input-dependent sparsity can be more beneficial for the pruned model's performance than using the same sparsity pattern for all inputs. The experimental results support this hypothesis. Methods And Evaluation Criteria: The components of the proposed method have been introduced in previous work, so the idea does not have novel elements, but it combines them in an effective manner. I think the proposed method is sound and simple yet effective in practice, so I do not complain about novelty. For the evaluation, I think the paper can be improved in the following aspect: * Although it has become a convention in the Mixture-of-Experts literature to compare different models only based on active parameters, I think doing so does not paint a complete picture of the model's real performance in terms of inference latency. Depending on the network's topology, two models with 3B activated parameters can have significantly different latency values in practice on GPU/TPU. Therefore, the paper should provide at least the inference latency of the model pruned by IFPruning and of the dense baselines. Theoretical Claims: The paper has no theoretical claims. Experimental Designs Or Analyses: As the main contribution of the paper is in empirical results rather than methodological development, I think the paper should be improved in the following aspects: * In Sec. 4.1, the paper only mentions that it uses an internal SFT dataset for training. 
I understand that they may not be able to release this dataset, but they should at least provide some statistics and the general structure of the dataset. * Also, the result that the larger the base model, the better the performance of the pruned model is not very surprising. There have been empirical and theoretical papers [1, 2] in the literature indicating that overparameterization helps with training a better base model and improves the quality of the pruned model. The paper should indicate this connection in their discussion. * In Line 167, the paper admits that one can use the HardConcrete trick to do pruning, yet it does not indicate why it chooses not to do so. It would be nice to have a comparison between the current approach and HardConcrete, as it is widely used in practical scenarios. However, I understand that doing experiments in the rebuttal period can be challenging, and I don't ask for new experiments. [1] Learning and Generalization in Overparameterized Neural Networks, Allen-Zhu et al., 2019. [2] Stronger generalization bounds for deep nets via a compression approach, Arora et al., 2019. Supplementary Material: Yes, I checked the appendix A.1 for the model architecture. Relation To Broader Scientific Literature: The paper shows advantages compared to previous static pruning baselines in terms of performance, as it makes the pruning strategy dependent on the input. However, I believe that the paper should indicate the following downsides compared to static pruning: * Static pruning enables batch parallelism on GPUs, while dynamic pruning cannot benefit from it as each sample in the batch may have a different pruning strategy. * Static pruning enables lower memory usage for saving the model on disk and also consumes less GPU memory. In contrast, dynamic pruning cannot do so in practice. * Static pruning can achieve real inference speedup on GPUs/TPUs. It is harder to achieve inference latency reduction with dynamic pruning. 
Essential References Not Discussed: Please check the "Experimental Designs Or Analyses" section about the missing references. Other Strengths And Weaknesses: I cannot think of any other points other than the ones mentioned above. Other Comments Or Suggestions: I suggest that the authors provide the inference latency of their models and the baselines. It is fine to me if the method does not beat the baselines (especially the static pruning ones), but doing so will make the paper more useful for readers and practitioners. I would be happy to raise my score if the authors provide this analysis. Questions For Authors: Please check my comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank Reviewer Lv3b for the support and the valuable suggestions for our paper. We will address the writing feedback, such as adding statistics of the dataset and discussing additional related work, in the next version. Please see our response to the questions and/or major comments below. **Q1: Comparison with static pruning, and latency numbers.** We agree with the reviewer that static pruning enjoys better batch parallelism and lower memory usage compared to dynamic pruning. We discussed the downside of our work in Section 5 and will improve the clarification in our next version. Thank you! As far as inference speed is concerned, we would like to clarify that our method is designed for on-device models (e.g. on smartphone / laptop / desktop), where the inference typically samples a few responses given the **same** user query (or the same task). In this case, the same activated parameters are selected and cached as a dense model, therefore achieving the same speedup as the static pruning and dense baseline. To illustrate the real inference speedup, we test the inference latency by pruning the open-sourced **LLaMA-3.1-8B-Instruct** model to 3B. Although the tests are done on GPUs, we used batch size 1 and 4 generations per query, reflecting on-device usage. We report the **time-to-first-token (TTFT) and the decoding time**, both measured in seconds. For dense models (8B and 3B), the TTFT consists of pre-filling only. 
For our method, we break down TTFT into its components:

- Sub-network selection (via the sparsity predictor)
- Parameter loading (load the selected parameters and cache the sub-network as a dense model)
- Pre-filling using the 3B sub-network

| | GPU | Model | Sub-network selection | Parameter loading | Pre-filling | TTFT | Decoding Time |
| - | - | - | :-: | :-: | :-: | :-: | :-: |
| Input length: 4k | A6000 | Llama-8b | - | - | 0.702 | 0.702 | 5.47 |
| | | Llama-3b | - | - | 0.317 | 0.317 | 3.52 |
| | | Ours 8b->3b | 0.070 | 0.016 | 0.315 | 0.402 | 3.53 |
| | RTX3090 | Llama-8b | - | - | 0.947 | 0.947 | 5.48 |
| | | Llama-3b | - | - | 0.396 | 0.396 | 3.18 |
| | | Ours 8b->3b | 0.088 | 0.013 | 0.396 | 0.498 | 3.21 |
| Input length: 2k | A6000 | Llama-8b | - | - | 0.336 | 0.336 | 4.11 |
| | | Llama-3b | - | - | 0.155 | 0.155 | 3.20 |
| | | Ours 8b->3b | 0.037 | 0.016 | 0.155 | 0.208 | 3.25 |
| | RTX3090 | Llama-8b | - | - | 0.467 | 0.467 | 3.76 |
| | | Llama-3b | - | - | 0.203 | 0.203 | 2.70 |
| | | Ours 8b->3b | 0.045 | 0.013 | 0.202 | 0.260 | 2.75 |

Key takeaways:

- TTFT decreased by **up to 57%** and decoding time decreased by **up to 41%. In total, we achieve a 1.8x speedup compared to Llama-8b.**
- Overhead from dynamic pruning & parameter caching is **negligible (~0.05s, ~2% of the total generation time).**
- Despite dynamic masking, the runtime of IFPruning is **on par with static pruning, while offering input-specific adaptivity and superior accuracy.**

We will include this analysis in the final version of our paper.

**Q2: Choice of SoftTopK over HardConcrete.**

A2: We appreciate the suggestion. Both SoftTopK and HardConcrete are mask generation operators introduced in previous work, not a contribution of ours. We chose SoftTopK for three reasons:

1. Both methods achieve similar task performance.
2. SoftTopK does not require an auxiliary loss, whereas HardConcrete needs additional tuning (e.g., of the auxiliary loss weight and its learning rate).
3. SoftTopK seems more stable in our preliminary study.
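To illustrate why a soft top-k keeps mask generation differentiable during training, here is one generic relaxation (a sketch only, not necessarily the exact SoftTopK operator from prior work that the rebuttal refers to):

```python
import numpy as np

def soft_topk_mask(scores, k, temperature=0.05):
    # Sigmoid around the midpoint between the k-th and (k+1)-th largest scores:
    # values well above the midpoint map to ~1, values well below to ~0.
    s = np.sort(np.asarray(scores, dtype=float))
    threshold = 0.5 * (s[-k] + s[-k - 1])
    return 1.0 / (1.0 + np.exp(-(np.asarray(scores) - threshold) / temperature))
```

As the temperature goes to zero this approaches the hard top-k mask used at inference, while remaining differentiable in the scores during training.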
**Q3: More details on SFT datasets** We will expand the final version with a detailed breakdown of the SFT datasets, including data sources, instruction formats, and size per domain. **Q4: Missing related work.** Thank you for highlighting these valuable references—we will include them in the final revision to strengthen the related work section.
Risk-aware Direct Preference Optimization under Nested Risk Measure
Reject
Summary: This paper tackles token-level preference optimization for LLM alignment by making it “risk-aware.” It modifies the usual Bradley-Terry setup to include a nested risk measure that accounts for potential variability in model updates. They define a token-level advantage function that uses this risk measure, leading to a new loss (Ra-DPO). The main idea is to improve alignment performance while keeping the model from drifting too far from the reference. They test on IMDb, Anthropic HH, and AlpacaEval, showing Ra-DPO often beats methods like DPO, PPO, and TDPO in preference accuracy and also keeps lower sequential KL divergence. Claims And Evidence: The main claims are that their risk-aware method (Ra-DPO) can effectively control model drift while still maintaining or improving preference accuracy. They support these claims with experiments on three standard datasets. The experiments show moderate improvements in accuracy and lower sequential KL divergence compared to baselines. They provide theoretical proofs (in Appendix B) showing that maximizing their proposed loss leads to policy improvements. Overall, their evidence supports the claims, though the practical advantage seems modest. Methods And Evaluation Criteria: The authors adapt known ideas (nested risk measures like CVaR and Bradley-Terry models) to token-level language modeling. Their evaluation uses standard datasets (IMDb, Anthropic HH, AlpacaEval), measuring preference accuracy and KL divergence as indicators of alignment quality and model drift. These criteria make sense given their goal of improving alignment while controlling risk. Although the evaluation approach is typical and appropriate, the improvement in practical metrics (accuracy, drift) is somewhat minor. Theoretical Claims: They include several lemmas and a theorem (Lemmas 3.1, 3.3, 3.4, 3.5, and Theorem 3.6) to justify their token-level risk-aware approach. 
I checked Lemma 3.1 and Lemma 3.3 in particular—they appear correct based on standard RL math, with straightforward algebra and clear steps. I didn’t find any obvious mistakes in the provided derivations. Experimental Designs Or Analyses: I checked the experimental designs, especially the IMDb and Anthropic HH evaluations. The design looks sound: they clearly define the setup, use standard baselines (DPO, PPO, TDPO, KTO), and present results clearly (accuracy and KL divergence plots). One potential issue is that the performance gains, though consistent, seem quite small (around a 1-3% improvement), which might limit the practical value. Supplementary Material: Mainly the experiments. Relation To Broader Scientific Literature: This paper directly builds on recent token-level preference optimization methods like TDPO and classical risk-sensitive RL literature. The main novelty is combining nested risk measures (widely used in RL) with direct preference optimization methods (like DPO and TDPO). The paper cites existing literature clearly and fits reasonably well within current trends toward more risk-sensitive and token-level RLHF methods. Essential References Not Discussed: Nothing noteworthy to me. Other Strengths And Weaknesses: Strength: 1. The paper clearly explains how to integrate nested risk measures with token-level preference learning, combining known ideas from risk-sensitive RL and preference optimization. 2. They present results across multiple datasets (IMDb, Anthropic HH, AlpacaEval), giving some confidence that the method works consistently. 3. The theoretical part seems carefully done, with explicit derivations to support their method clearly laid out in the appendix. Weakness: 1. The improvements shown in experiments are modest, which makes it unclear whether the added complexity of their risk-aware objective is worthwhile in practice. 2. The main idea is incremental—just applying existing risk-aware RL concepts to the token-level DPO setting. 
It doesn't introduce a significantly new concept or theory. Other Comments Or Suggestions: The running title of the paper is still "Submission and Formatting Instructions for ICML 2025" from the template. Also in the proof of Theorem 3.6 in the appendix, should it be "Theorem 3.6 Restated"? Questions For Authors: 1. Your reported improvements seem relatively modest (1-3%). Could you explain more clearly why even these small improvements are significant enough to justify using a more complex objective function? 2. Would your risk-aware method work well for other generation tasks or datasets where the distribution shift or uncertainty is larger (e.g., harmful content moderation, toxicity detection)? Have you tried those settings? Code Of Conduct: Affirmed. Overall Recommendation: 3
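For context on the baseline objective discussed in this review, the standard sequence-level DPO loss on a single preference pair can be written as below. This is the textbook DPO formulation, shown only for orientation; Ra-DPO's actual token-level, risk-aware objective differs:

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # -log sigmoid(beta * [(log pi_w - log pi_ref_w) - (log pi_l - log pi_ref_l)])
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

At zero margin the loss equals log 2, and it shrinks as the policy puts relatively more mass on the chosen response than the reference does.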
Rebuttal 1: Rebuttal: **We sincerely appreciate the valuable comments from the reviewer. We hope our responses below provide further clarity.** ## Response to Other Comments Or Suggestions: The running …… "Theorem 3.6 Restated"? We apologize for the confusion caused by our oversight. We will correct these errors in the next version. ## Response to Question 1: Your reported …… objective function? It should be clarified that our proposed Ra-DPO method maintains both: 1. **A natural and simple loss function** (the sum of the DPO loss and the negative sequence risk ratio), as can be seen in Figure 1; 2. Comparable training efficiency, **with no substantial increase in training time requirements.** _**Note:**_ **Firstly,** in the paper, **the lemmas and theorems presented serve exclusively to demonstrate the theoretical validity of our approach when accounting for risk considerations.** Specifically, we introduce nested risk measures (a nonlinear function) in token-level policy optimization and then prove that maximizing the objective function will result in policy improvements. This method is more likely to effectively balance alignment performance and model drift, thereby preventing model failure in certain aspects and ultimately achieving an improvement in reward accuracy. **Additionally,** as shown in Figure 6, we can observe that our method achieves higher reward accuracy when implemented with Pythia-1.4B as the base model on the Anthropic HH dataset. Moreover, **we provide several additional experimental results,** including a numerical example, several evaluation results from LLMs, results using the nested ERM (entropic risk measure) [1-3], and results with different seeds, to demonstrate the effectiveness of our algorithm. For details, please refer to the link [🔗Additional_Experiment_Results](https://anonymous.4open.science/r/ICML2025-Ra-DPO-0529/). ## Response to Question 2: Would your …… those settings? Thank you for the valuable question. 
We would like to provide an extensive discussion of risk-awareness and corresponding experiments. **Risk-Awareness:** In this paper, we introduce nested risk measures to enhance risk-awareness in LLM alignment. This induces a conservative policy during the alignment process, enabling the model to remain closely aligned with a reference LLM, thereby preventing significant deviations and maintaining its superior decision-making and reasoning abilities. This is highly valuable in real-world applications, where the goal is to align general-purpose LLMs with human values and intentions (higher reward) without compromising their decision-making and reasoning abilities (lower KL divergence). **Experiments:** Additionally, from the perspective of output verification (e.g., harmful content moderation, toxicity detection), we recommend a hybrid approach that combines Safe RLHF [4] with risk-sensitive measures. This approach independently models both cost and reward functions while accounting for cost distributions. However, it may require more computational resources due to the need to train additional models. Nonetheless, our method can serve as foundational groundwork for such potential approaches. Moreover, we plan to conduct further research in this direction in the future. It is noteworthy that **our additional experiments also demonstrate examples of using LLMs (DeepSeek and GPT-4o) to evaluate the performance of various algorithms.** The experimental results can be found at the link [🔗Additional_Experiment_Results](https://anonymous.4open.science/r/ICML2025-Ra-DPO-0529/). ## References: [1] Föllmer, Hans, and Alexander Schied. Convex measures of risk and trading constraints. Finance and stochastics, 2002, 6: 429-447. [2] Hau, Jia Lin, Marek Petrik, and Mohammad Ghavamzadeh. Entropic risk optimization in discounted MDPs. In AISTATS, 2023. [3] Fei, Yingjie, Zhuoran Yang, Yudong Chen, Zhaoran Wang, and Qiaomin Xie. 
Risk-sensitive reinforcement learning: Near-optimal risk-sample tradeoff in regret. In NeurIPS, 2020. [4] Dai, Josef, Xuehai Pan, Ruiyang Sun, Jiaming Ji, Xinbo Xu, Mickel Liu, Yizhou Wang, and Yaodong Yang. Safe RLHF: Safe reinforcement learning from human feedback. In ICLR, 2024.
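For concreteness, the (one-step) entropic risk measure cited as [1-3] in the rebuttal above can be computed as follows; this standard formula is included only as an illustration of risk-averse certainty equivalents, not as part of the Ra-DPO algorithm itself:

```python
import math

def entropic_risk(values, probs, beta):
    # ERM_beta(X) = -(1/beta) * log E[exp(-beta * X)]; beta > 0 is risk-averse.
    m = sum(p * math.exp(-beta * v) for v, p in zip(values, probs))
    return -math.log(m) / beta
```

For a degenerate (constant) outcome the ERM equals that constant, while for risky outcomes with beta > 0 it falls below the plain expectation, which is the conservative behavior risk-aware objectives aim for.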
Summary: This paper presents a risk-aware version of the direct preference optimization (DPO) algorithm. The key innovation is to employ a risk-aware objective that operates at the token level (which results in a different algorithm due to the presence of KL divergence). The risk is calculated sequentially in terms of the deviation between the model and a reference model at the token level. The derivation shows that the notorious partition function will cancel in the general setting, akin to the more specific setting studied by the DPO paper. Experiments demonstrate that the algorithm maintains a similar level of preference accuracy relative to a recent benchmark, while having a lower divergence relative to the reference policy. Claims And Evidence: My main concern is regarding the notion of risk-awareness, and how it is measured in the experiments. Why not focus on actual benchmarks that measure risk in LLM outputs, instead of looking at KL divergence discrepancy? For instance, could we evaluate the generated responses in terms of, say, the probability of something quite toxic being generated, and thereby assess the success of the newly proposed algorithm in mitigating this phenomenon? Methods And Evaluation Criteria: The paper's main selling point is that the new DPO-ish algorithm is risk-aware, but fundamentally, how can one demonstrate the risk-aware nature of a resultant policy? The current experimental design measures risk-awareness in terms of a token-level deviation from a reference policy, but what if the reference policy itself is risk-unaware? In that case, measuring the discrepancy between the learned policy and the reference policy would not be indicative of risk-awareness. Theoretical Claims: In terms of the theoretical derivation, I find a strong similarity between this paper and that of "Token-level Direct Preference Optimization", Zeng et al., 2024. 
While the citation is there, I still want to know in what sense the theoretical result goes beyond this paper. The systematic way to derive the objective is very similar between this paper and Zeng et al., with both papers performing a sequential token-level KL computation. Another question pertains to the connection between this paper and "Entropic Risk Optimization in Discounted MDPs". This paper augments the state space with historical trajectories to address the fact that nested risk measures are not law-invariant. But the paper mentioned above actually introduces a set of Bellman equations that are much easier to work with and does so without state augmentation. Can the DPO-style derivation presented here also be applied to the setting discussed by the paper mentioned above, or is it limited to the case where we must augment the state space? Experimental Designs Or Analyses: See the comment about measuring risk-awareness. Also, increasing the number of random seeds used in the 4.1 experiment would be helpful to obtain more reliable error bars. Supplementary Material: Yes Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: My main concern with the paper is readability, or lack thereof. I spent several hours trying to understand the notation used in the paper, and had to go back and forth between multiple papers to generally understand what the paper means. I don't really want to be mean here, but I am genuinely concerned that the paper is not readable at this stage. Some examples below: - What is the composition operator used in the set of equations (6)? The operator o is not properly defined as far as I can tell. I think I intuitively know what it means, but a proper definition is lacking. - In the set of equations (5), how come the Q function takes a probability distribution as its (second) input?! 
This is a really basic question that confuses me. In the same vein, in the first line here, on the right-hand side, why is the second input to V y<t and not y<t+1? Should it not be the case that we consider the value function at the next state and therefore need to increment t from left to right? Again, this is a very basic question, so I am leaving open the possibility that I am missing something fundamental here. Code Of Conduct: Affirmed. Overall Recommendation: 2
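As a reading aid for the sequential token-level KL computation this review refers to, here is a minimal sketch, assuming per-position vocabulary logits are available for both the policy and the reference model:

```python
import numpy as np

def sequential_kl(policy_logits, ref_logits):
    # Sum over token positions t of KL(pi(.|x, y<t) || pi_ref(.|x, y<t)),
    # given (num_positions, vocab_size) logit arrays for one sequence.
    def softmax(x):
        e = np.exp(x - x.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)
    p = softmax(np.asarray(policy_logits, dtype=float))
    q = softmax(np.asarray(ref_logits, dtype=float))
    return float(np.sum(p * (np.log(p) - np.log(q))))
```

The quantity is zero exactly when the two models agree at every position, and grows as the trained policy drifts from the reference.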
Rebuttal 1: Rebuttal: **We sincerely appreciate the valuable comments from the reviewer. We hope our responses below provide further clarity.** ## Response to Claims And Evidence: _**Risk awareness:**_ In this paper, risk awareness refers to the sensitivity to risks arising from deviations from the reference model. It induces a risk-averse policy to enable the model to remain closely aligned with a reference LLM and maintain its superior decision-making and reasoning abilities. _**Experimental Benchmarks:**_ From the perspective of output verification, we recommend a hybrid approach that combines Safe RLHF [1] with risk-sensitive measures, which independently models both cost and reward functions while accounting for cost distributions. However, it may require more computational resources due to the need to train additional models. Nonetheless, our method can serve as a foundational groundwork for such potential approaches. _**Research significance:**_ Notably, a critical consideration in alignment research involves balancing performance and model drift. Current approaches (e.g., DPO [2] and TDPO [3]) establish that optimal alignment should simultaneously maintain minimal deviation from the reference model (lower KL divergence) while aligning with human values (higher reward). F-DPO [4] studies this trade-off under varying divergence constraints. _**Additional Experiment Results:**_ We provide additional experimental results to further demonstrate the effectiveness of our approach at the link [🔗Additional_Experiment_Results](https://anonymous.4open.science/r/ICML2025-Ra-DPO-0529/). ## Response to Methods And Evaluation Criteria: The fine-tuning of LLMs typically involves two key stages: supervised fine-tuning (SFT) and preference alignment. During the alignment phase, **the post-SFT model typically serves as the reference model (the most reliable model available),** which generally demonstrates robust reasoning and decision-making capacities. 
**Importantly,** significant deviation from the reference model generally leads to capability degradation, which inherently constitutes substantial risk. Thus, **the primary challenge in LLM alignment lies in** balancing alignment performance and model drift (maintaining reasoning and decision-making abilities). ## Response to Theoretical Claims: _**Theoretical Breakthroughs.**_ We introduce the nested risk measures to enhance the model's risk sensitivity. Our theoretical advancements primarily include: 1. Incorporating nested risk measures into token-level policy optimization and providing a closed-form solution. **Importantly,** our method maintains a natural and simple loss function shown in Figure 1. 2. Establishing the connection between risk-aware value functions and optimal policies. The key technical contributions lie in: - A risk-aware advantage function design under nested risk measures; - Proof of Bellman-type model equivalence with the Regret Preference Model under nested risk measures. _**State Augmentation.**_ We argue that state augmentation is essential, particularly for token-level generation in LLM alignment. The inherent contextual dependencies of text tasks and the non-Markovian characteristics of risk sensitivity naturally necessitate this approach. Compared to DPO-style methods, it introduces negligible additional complexity and computational overhead. ## Response to Experimental Designs Or Analyses: To ensure clarity, we provide additional clarification as follows: In Figures 2 and 3, each algorithm is represented by **two curves: the darker-colored one shows the raw curve, while the brighter-colored one displays the smoothed version. All curves share one identical random seed**, primarily because (1) **LLM training requires substantial computational resources**, and (2) we followed conventional practice by initializing the trained model with parameters from the reference model (the post-SFT model). 
Also, **we add experimental results under multiple random seeds** to the "The results with different seeds" folder in link [🔗Additional_Experiment_Results](https://anonymous.4open.science/r/ICML2025-Ra-DPO-0529/). ## Response to Questions For Authors: We apologize for the confusion caused by our oversight. The operator "o" denotes the concatenation of the state and action at time step $t+1$. In the set of equations (5), the second input to $V$ should be $y^{<t+1}$. We appreciate this correction to the writing error. $\pi(\cdot | [x, y^{<t}])$ represents the policy when taking an action given the state $[x, y^{<t}]$. **In the next version, we will correct these errors.** ## References: [1] Safe RLHF: Safe reinforcement learning from human feedback. ICLR, 2024. [2] Direct preference optimization: Your language model is secretly a reward model. NeurIPS, 2023. [3] Token-level direct preference optimization. ICML, 2024. [4] Beyond reverse KL: Generalizing direct preference optimization with diverse divergence constraints. ICLR, 2024.
Summary: The paper introduces Risk-aware Direct Preference Optimization (Ra-DPO), a new method for fine-tuning token-level large language models (LLMs) with higher-order nested risk measures. Path dependency is moderated via the Bellman equation. Comprehensive theoretical remarks and justification are provided regarding consistency with existing methods: - DPO - The Bradley-Terry model for preference modeling - Generalization of gradients, through the lens of gradient calculation To keep the proposed method comparable with existing methods, the authors devise two strategies in the experiments. Empirical results on IMDb, Anthropic HH, and AlpacaEval using GPT-2 Large, Pythia-1.4B, and Pythia-2.8B confirm superior alignment with reduced model drift compared to baselines. Claims And Evidence: Yes, the major claims are made with clear evidence. Methods And Evaluation Criteria: Yes, the methodology is well-structured and appropriate for the problem. A minor limitation in the experiments is that the authors do not conduct an ablation study on the individual impact of risk control parameters across different datasets. Theoretical Claims: Several theoretical results are included in this paper. - (Lemma 3.5) Equivalence between Ra-DPO and the Bradley-Terry model - (Lemma 3.4) Closed-form solution for risk-aware policy optimization - The authors prove that the optimization problem has a tractable closed-form solution, enabling efficient implementation. However, I was not in a position to check all the correctness, especially the details of the proofs in the Appendix. Experimental Designs Or Analyses: The proposed method and experiments are well defined and evaluated. - Baselines: The authors compare against DPO, PPO, TDPO1, TDPO2, and KTO, providing a comprehensive benchmark. - Risk control sensitivity: Different values of µ and α are tested in Ra-DPO1 and Ra-DPO2, allowing for a robust analysis of risk-aware preference optimization. 
- Datasets: IMDb (sentiment alignment), Anthropic HH (dialog alignment), and AlpacaEval (comparison-based evaluation) ensure diverse task coverage. - Metrics: The authors use reward accuracy and KL divergence as key metrics, which are well-aligned with the paper's objectives. Potential improvement. Supplementary Material: Yes, I went through the arguments and claims in the main article and found the respective proofs and algorithm procedure details in the Appendix, which was very useful. However, I was not in a position to check all the correctness, especially the details of the proofs in the Appendix. Relation To Broader Scientific Literature: This paper has properly cited prior works. The key references are found below: - Direct Preference Optimization (DPO) (Rafailov et al., 2023): Ra-DPO extends DPO by incorporating risk sensitivity at the token level. - Token-level Direct Preference Optimization (TDPO) (Zeng et al., 2024): Ra-DPO improves upon TDPO by using nested risk measures for sequential risk control. - Risk-aware RL (Bisi et al., 2022; Chen et al., 2024). Overall, the paper makes meaningful claims and has clear notes. Essential References Not Discussed: The essential references are discussed in this paper. In a broader landscape, RLHF is exposed to 'intransitivity' risk, because it relies on the Bradley-Terry model as the preference function, where all preferences are transitive by assumption. - Certain works have shown that such a transitive relationship between preference annotations may not always hold, and some techniques have been explored but are not mentioned in this paper. - https://arxiv.org/abs/2409.19325 (Duan et al, 2017) presented some evidence and can be of interest for future work. Other Strengths And Weaknesses: At a high level, this paper opens a door to higher-order risk measurement and control of RLHF. Besides the good completeness of the work, this high-level contribution is the highlight. 
Other Comments Or Suggestions: N/A Questions For Authors: Q1. In Line 378, Section 4.3, how does the sampling temperature coefficient impact performance and contribute to Ra-DPO_2? - In my understanding, given the goal of ensuring training stability, there is some contradiction between using this temperature setting and adopting the Ra-DPO_2 strategy in Eq. (19), Line 293. Q2. Have you considered using human evaluators to assess alignment quality beyond automatic metrics? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **We sincerely appreciate the valuable comments from the reviewer. We hope our responses below provide further clarity.** ## Response to Methods And Evaluation Criteria: Yes, the …… different datasets. In Appendix Figures 6 and 7, **we present experimental results conducted on the Anthropic HH dataset using Pythia-1.4B and Pythia-2.8B as base models.** We implemented $TDPO_2$, and $Ra-DPO_2$ with different risk control parameters. In addition, **we provide several additional experimental results,** including a numerical example, several evaluation results of LLMs, the results using nested ERM (entropic risk measure) [1-3] and the results with different seeds, to demonstrate the effectiveness of our algorithm. For details, please refer to the link [🔗Additional_Experiment_Results](https://anonymous.4open.science/r/ICML2025-Ra-DPO-0529/). ## Response to Essential References Not Discussed: The essential …… future work. We sincerely appreciate the reviewers' recognition of our work. In Appendix A, we examine key factors that introduce risks in the alignment of LLMs, where we highlight the crucial factor that "there exist conflicts and contradictions among human preferences (or choices)", while implying that the 'transitive' relationship between preference annotations may not always hold. Of course, we acknowledge that we inadvertently omitted citation to the important reference https://arxiv.org/abs/2409.19325. **In the new version, we will include the relevant citations.** ## Response to Question 1: In Line …… Line 293. In Line 378, Section 4.3, **the sampling temperature coefficient is a parameter of AlpacaEval [4], which is a tool to evaluate instruction-following language models based on human annotations.** In this paper, we adhered to the default settings in the official AlpacaEval implementation. 
It should be explicitly noted that this parameter bears **no direct relationship to the loss function of our proposed Ra-DPO algorithm.** ## Response to Question 2: Have you …… automatic metrics? Thank you very much for your valuable suggestions. **Using human evaluators to assess alignment quality beyond automatic metrics is indeed very convincing.** However, as is commonly known, **this approach incurs substantial labor costs.** For this reason, numerous research efforts are now exploring methodologies that utilize LLM-based evaluators as substitutes for human evaluators. Many studies [4-7] have shown that LLM-based auto-evaluators have become a key component of the LLM development process due to their cost-effectiveness and scalability compared to human-based evaluation. In this paper, **we use AlpacaEval, a fast and affordable benchmark for chat LLMs that uses LLMs to estimate response quality.** ## References: [1] Föllmer, Hans, and Alexander Schied. Convex measures of risk and trading constraints. Finance and stochastics, 2002, 6: 429-447. [2] Hau, Jia Lin, Marek Petrik, and Mohammad Ghavamzadeh. Entropic risk optimization in discounted MDPs. In AISTATS, 2023. [3] Fei, Yingjie, Zhuoran Yang, Yudong Chen, Zhaoran Wang, and Qiaomin Xie. Risk-sensitive reinforcement learning: Near-optimal risk-sample tradeoff in regret. In NeurIPS, 2020. [4] Dubois, Yann, Balázs Galambosi, Percy Liang, and Tatsunori B. Hashimoto. Length-controlled alpacaeval: A simple way to debias automatic evaluators. arXiv preprint arXiv:2404.04475, 2024. [5] Zheng, Lianmin, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin et al. Judging llm-as-a-judge with mt-bench and chatbot arena. In NeurIPS, 2023. [6] Li, Tianle, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Tianhao Wu, Banghua Zhu, Joseph E. Gonzalez, and Ion Stoica. From crowdsourced data to high-quality benchmarks: Arena-hard and benchbuilder pipeline. arXiv preprint arXiv:2406.11939, 2024. 
[7] Lin, Bill Yuchen, Yuntian Deng, Khyathi Chandu, Faeze Brahman, Abhilasha Ravichander, Valentina Pyatkin, Nouha Dziri, Ronan Le Bras, and Yejin Choi. Wildbench: Benchmarking llms with challenging tasks from real users in the wild. arXiv preprint arXiv:2406.04770, 2024.
Summary: This paper introduces a risk-aware direct preference optimization method that incorporates a nested risk measure into a token-level objective function. The ultimate objective function maximizes the likelihood of the policy while suppressing the deviation between a training model and the reference model using a sequential risk ratio, thereby enhancing the model’s risk awareness during the process of aligning LLMs. The empirical results demonstrate the superior performance of the proposed method. Claims And Evidence: Please see the section of Other Strengths And Weaknesses. Methods And Evaluation Criteria: Please see the section of Other Strengths And Weaknesses. Theoretical Claims: I have checked all the theoretical claims. Experimental Designs Or Analyses: Please see the section of Other Strengths And Weaknesses. Supplementary Material: I have reviewed all sections of the supplementary material. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. This paper proposes a novel model by combining the risk measure and the token-level preference optimization method, which is an interesting contribution to this field. 2. The experiments also showcase the superior performance of the proposed method in balancing alignment performance and model drift. 3. This paper is technically sound and well-structured. Weaknesses: 1. The motivation of this work seems still not clear. As discussed in Lines 50-54, the authors mention that a risk-neutral criterion neglects the characteristics of the reward distribution beyond the mean. This is the primary motivation for risk-aware learning, as demonstrated in many prior works. For example, [1] states that iterated CVaR with the parameter $\alpha$ focuses on optimizing the worst $\alpha$-percent performance at each step and allows the agent to control the risk throughout the decision process tightly. 
On the other hand, the authors also mention that the proposed method with risk measures aims to achieve a better balance between alignment performance and model drift. The model drift is indicated by the lower KL divergence of policies. This latter motivation seems not aligned with the initial focus on optimizing worst-case performance or capturing the characteristics of the reward distribution beyond the mean. 2. Following the above discussion, the authors need to focus on designing experiments to validate the risk-averse properties of their approach. 3. It is interesting to discuss why risk-aware preference optimization can lead to higher reward accuracy after integrating the risk measure into preference optimization methods since the primary goal of applying risk measure is to optimize the worst $\alpha$-percent performance. 4. This work could be improved by evaluating the proposed approach on other risk measures to demonstrate the generality of the method. **Reference:** [1] Chen, Yu, et al. "Provably efficient iterated CVaR reinforcement learning with function approximation and human feedback." Other Comments Or Suggestions: N/A Questions For Authors: Please see the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **We sincerely appreciate the valuable comments from the reviewer. We hope our responses below provide further clarity.** ## Response to Weaknesses 1: We apologize for the confusion caused by failing to give a clear explanation and would like to re-clarify our motivation. Before restating our motivation, **we first clarify the following facts:** 1. The reference model, typically a post-supervised fine-tuned model, demonstrates robust decision-making and reasoning capabilities. Current approaches, including DPO [1], KTO [2], and TDPO [3], establish that optimal alignment should simultaneously maintain minimal deviation from the reference model (lower KL divergence) while aligning with human values (higher reward). 2. Experiments in TDPO [3] have demonstrated the advantages of examining divergence against a reference LLM on a more granular, token-by-token basis. 3. TDPO [3] focuses only on the expected reward (a risk-neutral criterion), thereby neglecting the characteristics of the reward distribution beyond the mean. Based on the aforementioned facts and corresponding experimental results, **a critical conclusion emerges:** significant deviation from the reference model typically indicates a heightened risk of degradation in decision-making and reasoning capabilities. **Motivated by** the above facts and conclusion, we introduce nested risk measures to enhance risk-awareness in LLM alignment. Here, “risk” specifically denotes potential hazards arising from deviations relative to the reference model. This is highly valuable in real-world applications, where the goal is to align general-purpose LLMs with human values and intentions (higher reward) without compromising their decision-making and reasoning abilities (lower KL divergence). 
**We hypothesize that risk-aware models employing nested risk measures (e.g., CVaR and ERM) will systematically reduce the probability of policy options with potential catastrophic consequences (failure, harmful, or deceptive) during policy optimization.** ## Response to Weaknesses 2: We provide **several additional experimental results,** including a numerical example, several evaluation results of LLMs, the results using nested ERM (entropic risk measure) [4-6] and the results with different seeds, to demonstrate the effectiveness of our algorithm. For details, please refer to the link [🔗Additional_Experiment_Results](https://anonymous.4open.science/r/ICML2025-Ra-DPO-0529/). ## Response to Weaknesses 3: We provide discussion about why risk-aware preference optimization can lead to higher reward accuracy as follows: As shown in Figures 6 and 7, we can observe that risk-aware preference optimization (Ra-DPO) achieves higher reward accuracy when implemented with Pythia-1.4B as the base model on the Anthropic HH dataset. This effect disappears when using Pythia-2.8B, which **we attribute to the greater potential for reward accuracy improvement in smaller models. This is evident from** several failed experiments we conducted: smaller models (Pythia-14m, Pythia-70m, and Pythia-160m) are more prone to model drift after thousands of iterations, resulting in empty outputs or invalid responses (extremely brief answers that fail to address the question). The proposed risk-sensitive method, incorporating risk-awareness into the token-level objective function, addresses this through risk-averse policy optimization. This method is more likely to effectively balance alignment performance and model drift, thereby preventing model failure in certain aspects and achieving an improvement in reward accuracy. ## Response to Weaknesses 4: Thank you for the valuable suggestions. 
We **add the experimental results using nested ERM** to demonstrate the effectiveness of our algorithm, which is conducted on the Anthropic HH dataset with Pythia-1.4B serving as the base model. The experimental results can be found at the link [🔗Additional_Experiment_Results](https://anonymous.4open.science/r/ICML2025-Ra-DPO-0529/). Experimental results show that **our $Ra-DPO_2$ algorithm (with Nested-ERM) also achieves consistently lower KL divergence and higher reward accuracy** compared to baseline methods. ## References: [1] Rafailov, Rafael, Archit Sharma, Eric Mitchell, et al. Direct preference optimization: Your language model is secretly a reward model. In NeurIPS, 2023. [2] Ethayarajh, Kawin, Winnie Xu, et al. Model alignment as prospect theoretic optimization. In ICML, 2024. [3] Zeng, Yongcheng, Guoqing Liu, et al. Token-level direct preference optimization. In ICML, 2024. [4] Föllmer, Hans, and Alexander Schied. Convex measures of risk and trading constraints. Finance and stochastics, 2002, 6: 429-447. [5] Hau, Jia Lin, Marek Petrik, and Mohammad Ghavamzadeh. Entropic risk optimization in discounted MDPs. In AISTATS, 2023. [6] Fei, Yingjie, Zhuoran Yang, et al. Risk-sensitive reinforcement learning: Near-optimal risk-sample tradeoff in regret. In NeurIPS, 2020.
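The entropic risk measure (ERM) used in the additional experiments above has a simple closed form. Below is a minimal numpy sketch of the risk-averse version, $\rho_\alpha(X) = -\frac{1}{\alpha}\log \mathbb{E}[e^{-\alpha X}]$, which recovers the plain mean as $\alpha \to 0$ and increasingly penalizes low-reward outcomes as $\alpha$ grows; the reward values are made up for illustration:

```python
import numpy as np

def entropic_risk(rewards, alpha):
    """Risk-averse entropic risk measure: -(1/alpha) * log E[exp(-alpha * X)].

    As alpha -> 0 this approaches the plain mean (the risk-neutral criterion);
    larger alpha weights the left tail (low rewards) more heavily.
    """
    rewards = np.asarray(rewards, dtype=float)
    z = -alpha * rewards
    m = z.max()  # log-sum-exp shift for numerical stability
    log_mean_exp = m + np.log(np.mean(np.exp(z - m)))
    return -log_mean_exp / alpha

rewards = [1.0, 1.0, 1.0, -5.0]      # one rare catastrophic outcome
print(entropic_risk(rewards, 1e-6))  # close to the mean, -0.5
print(entropic_risk(rewards, 2.0))   # far lower: the bad outcome dominates
```

The two calls illustrate the risk-neutral vs. risk-averse gap: the single rare bad outcome barely moves the mean but dominates the entropic risk at large $\alpha$.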
Procurement Auctions via Approximately Optimal Submodular Optimization
Accept (spotlight poster)
Summary: The paper studies the design of procurement auctions with submodular welfare. The problem involves an auctioneer and $n$ sellers, each possessing an item for sale with a private cost $c_i$, representing the minimum price at which they are willing to sell. The auctioneer's valuation over items is given by a monotone submodular function $f$. The mechanism selects a subset $S$ of items based on the sellers' reported costs and determines the payment for each seller. The goal is to design a truthful mechanism that maximizes $f(S) - c(S)$ while ensuring that the total payment does not exceed $f(S)$. The main contribution of the paper is establishing useful frameworks that transform a reasonable greedy-like submodular optimization algorithm into a truthful mechanism without losing its approximation/competitive ratio in both offline and online settings. More specifically, the submodular optimization algorithm can determine the item selection strategy for the auctioneer. Then, the authors design corresponding payment rules to ensure truthfulness. The paper further extends these results to the setting of descending auctions. Finally, the proposed mechanisms are empirically evaluated. ## update after rebuttal I appreciate the authors' rebuttal and will keep my original score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, the proposed mechanisms are evaluated both theoretically and empirically. Theoretical Claims: Yes, I reviewed the truthfulness proof of the mechanism. Experimental Designs Or Analyses: Yes, the paper evaluates the proposed mechanisms on a real-world dataset (although the scenario differs slightly from actual applications, it is still reasonable). Supplementary Material: Yes, I reviewed part of the detailed proofs (about truthfulness) in the appendix. Relation To Broader Scientific Literature: The paper makes contributions to mechanism design with a submodular welfare objective. 
The proposed payment rule computation may influence future work in this area. Essential References Not Discussed: No Other Strengths And Weaknesses: The structure of Section 4 is a little bit weird. It first presents the mechanism framework for the offline setting, followed by Section 4.1, which covers the online mechanism framework. It might be better to move the offline results to Section 4.1, the online results to Section 4.2, or alternatively, move the online results to the appendix. Other Comments Or Suggestions: Section 4 discusses several reasonable assumptions for the meta-algorithm. It might be more readable to assign a name to each assumption. Questions For Authors: . Code Of Conduct: Affirmed. Overall Recommendation: 4
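The welfare objective in the summary above, maximizing $f(S) - c(S)$ for a monotone submodular $f$, can be illustrated with a plain greedy heuristic. This is only a sketch under a toy coverage valuation, not the paper's distorted greedy (which additionally rescales the submodular term across iterations):

```python
def greedy_procurement(items, f, cost):
    """Repeatedly buy the item with the largest positive marginal surplus
    f(S + {i}) - f(S) - c_i; stop when no item improves the objective.
    A plain heuristic sketch, not the paper's distorted greedy."""
    S, remaining = set(), set(items)
    while remaining:
        gain = lambda i: f(S | {i}) - f(S) - cost[i]
        best = max(remaining, key=gain)
        if gain(best) <= 0:
            break
        S.add(best)
        remaining.remove(best)
    return S

# Toy coverage valuation (monotone submodular): f(S) = # of covered elements.
coverage = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"d"}}
f = lambda S: len(set().union(*(coverage[i] for i in S), set()))
cost = {1: 0.5, 2: 1.5, 3: 2.0}
S = greedy_procurement([1, 2, 3], f, cost)
print(S, f(S) - sum(cost[i] for i in S))  # {1} and surplus 1.5
```

On this toy instance the greedy stops after buying item 1, since every further item costs more than its marginal value.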
Rebuttal 1: Rebuttal: We would like to thank the reviewer for taking the time to read our paper and for their valuable feedback. We will re-organize section 4 and name the assumptions we use for the submodular optimization algorithm.
Summary: This paper focuses on procurement auctions where an auctioneer aims to acquire services from strategic sellers with private costs. The quality of services is represented by a submodular function, and the goal is to design efficient mechanisms that maximize the difference between service quality and total seller costs while meeting IC, IR, NAS constraints. The authors first review existing research on procurement auctions and regularized submodular maximization. Then they show that for the distorted greedy algorithm, a stronger guarantee holds (stronger than previous results). In the mechanism design aspect, they first show that VCG mechanisms satisfy IC, IR, and NAS but are computationally prohibitive. They then develop a framework that can convert all submodular optimization algorithms into sealed-bid mechanisms that meet the desired properties and preserve approximation guarantees. This framework is also extended to the online setting. Additionally, they establish a connection between online submodular optimization and descending auctions, and prove that in the adversarial setting, a descending auction with an exact demand oracle may return a poor solution, but one based on the cost-scaled greedy algorithm can achieve a good approximation guarantee. Finally, The experimental results show that VCG and descending auctions with optimal oracle have high complexity, while greedy-based algorithms have polynomial complexity. In terms of welfare, the direct implementations of approximation algorithms outperform their descending auction variants. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. No issues have been found so far. Experimental Designs Or Analyses: Yes. No issues have been found so far. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper comprehensively studies procurement auctions from multiple aspects. 
This paper improves the analysis of the distorted greedy algorithm, advancing regularized submodular optimization. This paper develops frameworks to transform submodular optimization algorithms into mechanisms for procurement auctions that satisfy IC, IR and NAS, which is related to the literature on mechanism design in procurement auctions. This paper contributes to the understanding of descending auctions, and makes a bridge between two areas via the reduction from online submodular optimization to descending auctions. Essential References Not Discussed: No, to my knowledge. Other Strengths And Weaknesses: No. Other Comments Or Suggestions: I hope the author can analyze the complexity of the algorithm in more detail. Questions For Authors: See suggestions above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for taking the time to read our paper and for their valuable feedback. >I hope the author can analyze the complexity of the algorithm in more detail. Thanks for the comment, notice that all of our algorithms run in polynomial time. For example, in algorithm 2, we need to make at most $O(n)$ many calls to the optimization algorithm in line 284, i.e., one call for each chosen seller. Moreover, for each inner loop starting in line 286, we need to make $O(n \log |B|)$ calls to the scoring function where $|B|$ is the number of possible bids. To summarize, Algorithm 2 makes $O(n)$ calls to the optimization algorithm and $O(n^2 \log |B|)$ calls to the scoring function. We will elaborate on this in the next version of our work. Moreover, our experiments show that they can be implemented even in practical applications. In the next version of our work we will explicitly state the number of oracle calls to the submodular optimization algorithm as well as the extra complexity of our operations, for each of the algorithms we use.
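The $O(\log |B|)$ factor in the rebuttal above comes from a binary search over the discretized bid space. Assuming a monotone selection rule (a seller selected at bid $b$ is also selected at any lower bid), the threshold (critical) payment can be found as sketched below; the selection predicate in the example is a made-up stand-in, not the paper's scoring function:

```python
def threshold_payment(is_selected, bids):
    """Largest bid in the ascending grid `bids` at which `is_selected(b)`
    still holds, assuming monotonicity (selected at b => selected below b).
    Uses O(log |B|) predicate calls; returns None if never selected."""
    lo, hi, best = 0, len(bids) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if is_selected(bids[mid]):
            best = bids[mid]
            lo = mid + 1
        else:
            hi = mid - 1
    return best

# Made-up stand-in rule: a seller stays selected while their bid is below
# their marginal value to the auctioneer.
marginal_value = 7
grid = list(range(20))  # the discretized bid space B
print(threshold_payment(lambda b: b < marginal_value, grid))  # prints 6
```

Paying each chosen seller this critical bid rather than their reported cost is what makes such mechanisms truthful: no single seller can gain by misreporting.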
Summary: In this paper, they develop a framework to convert a family of greedy algorithms for submodular maximization to a mechanism for procurement auctions. Moreover, they provide an improved analysis of the Distorted Greedy algorithm. Finally, they consider the case of Descending auctions where they design a mechanism based on an online greedy algorithm for submodular maximization. Claims And Evidence: All claims are supported by proofs. Methods And Evaluation Criteria: The proposed algorithms and evaluation criteria make sense. Theoretical Claims: I checked the correctness of proposition 4.1 (Appendix C) and theorem 4.3 (Appendix C). I didn't find any errors. Experimental Designs Or Analyses: The experimental design is clear and sound. Information on the specs of the machine on which the experiments were run is missing. Also, the MIP solver that was used is not mentioned. Supplementary Material: I reviewed Appendix C. Relation To Broader Scientific Literature: The paper contains a comprehensive review of submodular maximization algorithms. The main idea is to transform greedy algorithms for submodular maximization to a mechanism for procurement auctions, thus connecting the domain of submodular maximization to that of algorithmic game theory and mechanism design. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - In the description of the Distorted Greedy algorithm (lines 174-180) I would suggest using another symbol instead of $k$ in order to avoid confusion with the cardinality constraint. The same holds for Stochastic Distorted Greedy. - I would suggest adding an appendix briefly describing the VCG mechanism for non-expert readers. - There might be a typo in the definition of $u_i^M(b)$ (line 161). Please check. - I think there might be a typo in line 263. Is it $S_k$ or $S_{k-1}$? In contrast, in Algorithm 2 line 287 you have $S_{k-1}$. - Algorithm 2 is difficult to follow. 
Especially, the for loop in lines 286-290. The definition of $p_i$ in the appendix (proof of theorem 4.3) is much clearer. Also, the variable $i$ is used in the for loop as well as inside the $\max$ (line 289). Is this the same variable? Again the notation in the appendix is much clearer. - In line 175 (second bullet), I think it should be $G(l_1^\star, \emptyset, ...) > 0$ instead of $G(l, \emptyset, ...)$ Questions For Authors: - Are you familiar with any other work converting greedy algorithms to auction mechanisms? Is this a classic approach? - Can you clarify what is the complexity of Algorithm 2? I expected that the algorithm would be impractical for medium size inputs however in the experiments it seems to perform well. - Can you formally define $OPT(b)$ in Appendix C? - When using the lazy greedy variants is there a significant reduction in running time? Did you run the experiments using the lazy variants? - Are you going to release the code for the experiments? - Can you describe potential future research directions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for taking the time to read our paper and for their valuable feedback. We respond to each of the points they raised below. >Information on the specs of the machine and the MIP solver. Thanks for the suggestion, we will add more details about the specs of the machines and the MIP solver in the revision. >In the description of the Distorted Greedy algorithm (lines 174-180) ... Thanks for the suggestion, we will make the edit. >VCG in the appendix. This is a valid point, we will do that. >Typo in the definition of $u^M_i(b)$ (line 161). We believe that this expression is correct; the seller could potentially get paid even if their service isn't purchased, but they only incur a cost if they have to provide the service. However, all the mechanisms discussed in this paper satisfy the property that the payment to a seller is 0 if their service isn't purchased. Please let us know if this isn't clear. >I think there might be a typo in line 263. Thanks for catching that, this is indeed a typo! It should be $S_{k-1}$. >Algorithm 2 is difficult to follow.. the variable $i$ is used in the for loop as well as inside the $\max$ (line 289). We will modify the algorithm, importing notation from the appendix to make it easier to follow. To answer your question, $i$ is the same variable as in the for loop, and we compute the appropriate threshold payments for seller $i$ in line 289 with respect to $S_{k-1}$ and take the max with the current $p_i$. >In line 175 (second bullet), I think it should be $G(l_1^*,\emptyset,\ldots) > 0$ instead of $G(l,\emptyset,\ldots) > 0$. Thanks for catching that, this is indeed a typo in line 1175. >Are you familiar with any other work converting greedy algorithms to auction mechanisms? Is this a classic approach? Converting algorithms to mechanisms is in general a classical approach in algorithmic game theory. For instance, VCG transforms an (optimal) algorithm to a welfare-optimal mechanism. 
If the optimal algorithm for the underlying problem happens to be greedy, then VCG can be viewed as some such transformation. There have been other works studying black-box transformations from algorithms to mechanisms, under various objectives such as welfare or revenue. Some classical works include Lehmann, Lehmann, and Nisan (2001), Archer and Tardos (2001), Mu'alem and Nisan (2002), Babaioff, Lavi, and Pavlov (2009), Dobzinski and Nisan (2010). However, most of these settings handle auctions where the designer is *selling* items, and the conversion may not always work if the algorithm does not give optimal outcome. In our setting, we consider converting approximately optimal algorithms and making sure that the conversion creates a mechanism that satisfies the NAS property adds an additional difficulty. For instance, it was not clear to us whether VCG would satisfy that or not. We will discuss some of these works and the differences to our setting in the next version of our work. >Can you clarify what is the complexity of Algorithm 2? In algorithm 2, we need to make at most $O(n)$ many calls to the optimization algorithm in line 284, i.e., one call for each chosen seller. Moreover, for each inner loop starting in line 286, we need to make $O(n \log |B|)$ calls to the scoring function where $|B|$ is the number of possible bids. To summarize, Algorithm 2 makes $O(n)$ calls to the optimization algorithm and $O(n^2 \log |B|)$ calls to the scoring function. We will elaborate on this in the next version of our work. >Can you formally define $OPT(b)$ in Appendix C? $OPT(b)$ is the *optimal solution* to the optimization problem when sellers report bids $b$, i.e., $OPT(b) \in \argmax_{S \in 2^N} f(S) - \sum_{i \in S} b_i$. We will clarify that in Appendix C. >When using the lazy greedy variants is there a significant reduction in running time? Did you run the experiments using the lazy variants? 
Yes, there is a significant reduction and we used the lazy variants for all algorithms, except for the distorted greedy, which doesn’t admit a diminishing return structure in its scoring function so that lazy greedy cannot be applied. >Are you going to release the code for the experiments? We will try to release the code. >Can you describe potential future research directions? An immediate direction is to close the gap between the $(1/2,1)$ bound of our descending auction and the $(1-1/e,1)$ we get in the sealed-bid auction. Another interesting problem would be to replace the cost in the objective of the mechanism designer with the payment, i.e., to study an objective of maximizing the surplus of the mechanism designer. It would also be interesting to see if the descending auction can get the same performance as VCG, when we disregard computational considerations. We will elaborate in the next version.
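Since the rebuttal above agrees to add a brief VCG description for non-expert readers, here is a minimal brute-force sketch of VCG for this procurement setting: choose the surplus-maximizing set and pay each chosen seller their externality. The exhaustive search makes it exponential in $n$, matching the paper's point that VCG is computationally prohibitive; the additive toy valuation is made up for illustration:

```python
from itertools import chain, combinations

def vcg_procurement(items, f, cost):
    """Brute-force VCG for procurement: pick S maximizing f(S) - c(S) and pay
    each chosen seller their externality. Exponential in len(items); shown
    only for exposition, since VCG is computationally prohibitive at scale."""
    def subsets(pool):
        return chain.from_iterable(combinations(pool, r) for r in range(len(pool) + 1))

    def surplus(S):
        return f(S) - sum(cost[i] for i in S)

    S_star = max(subsets(items), key=surplus)
    payments = {}
    for i in S_star:
        others = [j for j in items if j != i]
        opt_without_i = max(surplus(S) for S in subsets(others))
        # Welfare with i (ignoring i's own cost) minus the best welfare
        # attainable without i at all: this difference is i's payment.
        payments[i] = f(S_star) - sum(cost[j] for j in S_star if j != i) - opt_without_i
    return set(S_star), payments

# Toy additive (hence submodular) valuation, made up for illustration.
value = {1: 3.0, 2: 1.0}
f = lambda S: sum(value[i] for i in S)
cost = {1: 1.0, 2: 2.0}
chosen, pay = vcg_procurement([1, 2], f, cost)
print(chosen, pay)  # seller 1 is chosen and paid their externality of 3.0
```

In this toy run the payment of 3.0 is at least the seller's cost of 1.0 (so IR holds) and does not exceed $f(S) = 3.0$ (so NAS holds here as well).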
Summary: The paper studies a procurement mechanism with objective function f(S) - \sum_{i \in S} p(i) that is truthful, individually rational, and has nonnegative surplus, and provide a bi-criteria approximation. To the best of my knowledge, this is the first to study such objective function in the procurement auction literature, largely different from the standard procurement auction setting to minimize cost given a utility constraint or the budget-feasible mechanism design setting to maximize utility given an ex-post budget constraint. The authors provide an improved analysis for the standard Distorted Greedy algorithm for this sake, and design a mechanism based on it. The authors then extend their results to online mechanism and descending auction. They further validate their findings through experiments. Claims And Evidence: All the claims sound clear and well-explained. One minor issue I found is that the improved analysis of the Distorted Greedy seems a bit orthogonal to the paper's main contribution, though I appreciate the result itself. One direction might be to make the paper more focused on the mechanism design part, while deferring the improved analysis part to the appendix, but I don't think this to be crucial component to judge the paper's contribution. Methods And Evaluation Criteria: Yes Theoretical Claims: I haven't read all the proofs line by line, but the ideas sound reasonable. Experimental Designs Or Analyses: I found no issue Supplementary Material: Related works / proof overviews. Relation To Broader Scientific Literature: The paper's topic is significant to mechanism design and submodular maximization. Essential References Not Discussed: I don't find any Other Strengths And Weaknesses: The paper is generally well written, and I don't find specific weakness of the paper. One minor point I'd say is that the paper contains too many results, which are not equally interesting. 
The authors might want to more strategically present a part of the results they think will be the most significant. Other Comments Or Suggestions: Figures are a little hard to parse - legends and titles/axes are not very readable until I enlarge my screen enough. Questions For Authors: Nothing specific. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for taking the time to read our paper and for their valuable feedback. We agree with the suggestions and we will do the appropriate reorganization of our content based on their feedback. If our manuscript gets accepted, we will also make sure to utilize the extra space of the camera ready version to enlarge the figures.
SEMU: Singular Value Decomposition for Efficient Machine Unlearning
Accept (poster)
Summary: The paper proposed a machine unlearning method that only fine-tunes by the subspace of the gradient orthogonal to the weight. It claims to effectively “unlearn” forgetting sets while eliminating the dependency on the original training dataset. Claims And Evidence: see Methods And Evaluation Criteria Methods And Evaluation Criteria: The paper does not compare its method with some related approaches [1–4]. There is a lack of a quality measure for removing the nudity concept. While the paper points out that SalUn’s performance drops quickly with reduced data availability, the comparison to the proposed method is missing. [1] Machine Unlearning via Null Space Calibration. IJCAI-2024 [2] SAP: Corrective Machine Unlearning with Scaled Activation Projection for Label Noise Robustness. AAAI2025 [3] Fast Machine Unlearning Without Retraining Through Selective Synaptic Dampening. AAAI 2024 [4] Deep Unlearning: Fast and Efficient Gradient-Free Class Forgetting. TMLR 2024 Theoretical Claims: There is a lack of proof on how the proposed method works when the gradient is in the weight space. Table 1 indicates that the proposed method does not unlearn the forgetting sets. Experimental Designs Or Analyses: See Methods And Evaluation Criteria Supplementary Material: Yes. All parts of the supplementary material, including pseudo code and additional experiments, were reviewed. Relation To Broader Scientific Literature: The paper claims to eliminate the dependency on the original training dataset. However, experiments still use the remaining set to achieve better performance. Moreover, the proposed method appears to underperform compared to existing methods. Essential References Not Discussed: The paper lacks discussion and comparison with some related methods [1–4]. In particular, references [1] and [2] are highly relevant but are not discussed in the paper. 
Other Strengths And Weaknesses: 1) The main weakness is the missing discussion and comparison to related work [1–4], which limits the paper’s contribution. 2) The paper overclaims that no remaining dataset is needed, yet experiments still use it. 3) The performance of the proposed method lags behind existing methods. 4) There is insufficient discussion about cases where the gradient has no projection on the weight space. 5) It is unclear how the method would be applied to transformer architectures or convolution blocks. 6) A timing analysis would be beneficial to demonstrate the efficiency of the proposed method. Other Comments Or Suggestions: see weaknesses Questions For Authors: The training process of the component “R” is not clearly explained, and the pseudo code does not show its training process. What do the bold results in Table 4 represent? Why is only the best TA highlighted? The UA is an essential measure of machine unlearning. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Referencing other works** Thank you for highlighting these works. We will discuss the differences between SEMU and these methods and include this analysis in the camera-ready version of our work. Regarding [1], our method does not rely on samples or gradients from the remaining dataset, nor do we perform pseudo-labeling of the forget class to the most activated incorrect class for each unlearning sample. The work [2], published at AAAI25 after the ICML submission deadline, was not known to us at the time. We note that [2] operates on the representation of a trusted dataset, whereas SEMU focuses on gradients and does not require any additional datasets. In the case of [3], multiple datasets (forgetting and remaining) are used, but the authors employ Fisher and Hessian matrices to select important parameters, unlike SEMU, which uses SVD. Lastly, [4], similar to [2], performs unlearning by identifying important parameters in the representation space. It requires the identification of remaining and forget spaces using representations from both datasets. In summary, while various approaches use different projection methods, none operate without a remaining dataset. This discussion will be included in the revised version. **Quality metrics for nudity concept** We provide additional qualitative and quantitative evaluations in Sections 2, 3, and 4 of https://anonymous.4open.science/r/icml2025_submission_3162/REBUTTAL.md . In particular, we use MSE and CLIP measures to compare SalUn and SEMU to Stable Diffusion on NSFW (Tab. 1) and safe (Tab. 2) prompts. Comparing the visual samples in Section 4 makes the quality difference especially clear.

Tab. 1

|Method|CLIP(T,I)|CLIP(I,I)|MSE|
|:-|:-:|:-:|:-:|
|SD|0.285| - | - |
|SalUn|0.131|0.529|0.101|
|SEMU (OUR)|0.280|0.747|0.025|

Tab. 2

|Method|CLIP(T,I)|CLIP(I,I)|MSE|
|:-|:-:|:-:|:-:|
|SD|0.268| - | - |
|SalUn|0.196|0.630|0.083|
|SEMU (OUR)|0.267|0.855|0.023|

We observe almost perfect sampling on safe prompts and achieve better MSE on NSFW prompts as well. **Remaining dataset usage** SEMU is not limited to operating only in scenarios without a remaining dataset. One of its notable properties is its ability to function effectively in both conditions, with and without remaining datasets. Furthermore, we present results using the remaining dataset (always indicated as SEMU_{remain}). These results also indicate that the remaining dataset has minimal impact on SEMU's performance. **Performance lag** We agree that there is a difference in performance. This difference mostly stems from a different approach, i.e., we want an unlearning method that works reasonably well even without access to any examples from the remaining dataset, which is really hard for SalUn. Finally, we observed that SalUn's concept of unlearning leads to catastrophic behaviour on normal (safe) prompts and to outputs conceptually far from the prompt on NSFW ones as well. We provide the quantitative and qualitative results in Sections 2, 3, and 4 of the additional experiments (https://anonymous.4open.science/r/icml2025_submission_3162/REBUTTAL.md). **Table 1 concerns** When looking at Table 1, we notice that the unlearning accuracy (UA) is low, indicating that the model struggles to perform correctly on the forget dataset. This performance is comparable to other unlearning methods such as GA, IU, BS, BE, and FT. Regarding the results of the Membership Inference Attack (MIA), our findings show low recognition rates, suggesting that the attack does not identify the data as part of the model's training set. Based on these observations, we conclude that SEMU effectively performs the unlearning task. **Discussion on gradient projection** Please see the ablations in the rebuttal for Reviewer 8DF6. 
**Application to convolution** For a 2D convolutional layer, we treat each channel as a separate matrix (while restricting ourselves to the maximum 'sparse' value across all channels – see line 111 in the code [here](https://anonymous.4open.science/r/icml2025_submission_3162/SEMU/Classification/unlearn/own/transform_model.py)). Notice that we can flatten the kernel dimensions, and by performing matrix multiplication along the appropriate dimensions, we obtain a similar operation to a standard convolution. Then, similar to the linear layer case, we compute $U W V^T$, but here $W$ corresponds to the channel (one dimension represents the flattened kernel). Since the tensors $U, W, V^T$ have more than two dimensions, we use the `einsum` function for computation. After the multiplication, we reshape the dimensions where we previously flattened (for the kernel) and finally permute the tensor to match the correct weight format for the convolutional layer. All these operations are performed in just three lines of code (see lines 78-80 in [this file](https://anonymous.4open.science/r/icml2025_submission_3162/SEMU/Classification/unlearn/own/utils.py)). **Time analysis** Please see the response for Reviewer Qmc3.
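The flatten-and-einsum recipe described above can be sketched as follows. This is an illustrative assumption of how such a reconstruction could look (shapes, rank `r`, and variable names are hypothetical), not the authors' exact implementation:

```python
import numpy as np

# Illustrative sketch (not the authors' exact code): flatten the kernel
# dimensions of a conv weight's gradient, take an SVD, and rebuild the
# low-rank update U diag(sigma) V^T in the original conv-weight format.
out_ch, in_ch, kh, kw = 8, 4, 3, 3
rng = np.random.default_rng(0)
G = rng.standard_normal((out_ch, in_ch, kh, kw))  # accumulated gradient

G2d = G.reshape(out_ch, in_ch * kh * kw)          # flatten kernel dims
U, s, Vt = np.linalg.svd(G2d, full_matrices=False)

r = 2                                             # kept rank (assumed)
U_r, Vt_r = U[:, :r], Vt[:r, :]
sigma = np.zeros(r)                               # diagonal starts at zero

# einsum computes U_r @ diag(sigma) @ Vt_r in one call,
# then we reshape back to the conv-weight layout.
delta = np.einsum("ar,r,rb->ab", U_r, sigma, Vt_r)
delta = delta.reshape(out_ch, in_ch, kh, kw)
assert delta.shape == G.shape
```

Since `sigma` is initialized at zero, the update contributes nothing until the diagonal entries are trained, mirroring the frozen-weights setup described for the linear case.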
Summary: This paper proposes a machine unlearning (MU) method named Singular Value Decomposition for Efficient Machine Unlearning (SEMU). The authors disentangle the gradients of parameter weights with Singular Value Decomposition (SVD) to identify the proportion important for MU. They keep all original weight matrices frozen and attach to each of them a processed SVD output of the projected accumulated gradient matrix. In particular, each accumulated gradient matrix is projected in a direction perpendicular to the existing weights before SVD, and all elements of the diagonal matrix in the SVD output are initialized as 0. During the unlearning training procedure, only the modified diagonal matrices are updated. The authors focus on two kinds of visual tasks, image classification and image generation, to validate their method. For the former, the authors conduct experiments on random data forgetting and class-wise forgetting. For the latter, class unlearning and concept unlearning are selected to evaluate the method.

## update after rebuttal

This reviewer appreciates the authors' efforts to address the raised concerns. After checking the authors' responses, the major concerns of this reviewer have been resolved. Therefore, this reviewer chooses to raise the rating.

Claims And Evidence:
- The authors state in the contributions that they propose a remaining-dataset-free scenario for machine unlearning. However, some existing work has focused on this topic, such as [R1] and [R2].
- The authors try to disentangle a specific operator A with a projection $\rm\pmb{A = UU^TAV^TV}$, where $\rm\pmb{U}$ and $\rm\pmb{V}$ are orthogonal matrices. They intuitively combine it with concentrating the gradient information into a small proportion of parameters. This reviewer considers that this intuitive reasoning is not rigorous enough, and an interpretation is needed.
- The authors propose a projection operator $p_{A, B}(X)$ and claim that "This projection is particularly useful when applied to the gradient matrix G." Why is this projection useful for the gradient matrix? They should present an explanation for it.
- The authors claim in the experiment part that SalUn is the most similar approach (compared with the proposed method). Why?

[R1] Bonato, Jacopo, Marco Cotogni, and Luigi Sabetta. "Is Retain Set All You Need in Machine Unlearning? Restoring Performance of Unlearned Models with Out-of-Distribution Images." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024.

[R2] Cheng, Xinwen, et al. "Remaining-data-free Machine Unlearning by Suppressing Sample Contribution." arXiv preprint arXiv:2402.15109 (2024).

Methods And Evaluation Criteria:
- Methods
  - The novelty is limited.
  - Loss function: The loss functions are the same as those in SalUn.
  - Training method: The implementation of the method is similar to low-rank adapters, such as LoRA [R3], which introduce extra parameters to the original model and only update them during follow-up finetuning. The authors should discuss these methods and compare them with theirs.
- Evaluation Criteria
  - Some metrics are unclear.
    - MIA: How is it computed?
    - UA, RA, and TA: The accuracy of the image classification task is straightforward to measure. However, how is the accuracy of the image generation task measured?
  - An important metric is not discussed.
    - Training time: Although the authors report the ratio of trainable parameters, the training time needed to achieve the best performance is a more direct measure of computational efficiency.

[R3] Hu, Edward J., et al. "LoRA: Low-rank adaptation of large language models." ICLR 1.2 (2022): 3.

Theoretical Claims: This reviewer has quickly checked all the mathematical proofs and found no errors in general.

Experimental Designs Or Analyses: Yes. This reviewer has checked all experiment settings and results, and there are some issues.
- The results are not satisfactory enough.
  - Image classification: The UA and MIA are commonly worse than those of SalUn.
  - Image generation: In Table 6 and Table 7, most results are worse than those of SalUn.
- Some experiment settings need to be clarified.
  - TA: How is the test set constructed? Is it the same as the original dataset or processed like the unlearned training set?
  - Forgetting data: How are the original data labels replaced?
- Some settings do not align with the baselines.
  - Image classification
    - Missing datasets:
      - RL: Lacuna-10 and Lacuna-100
      - l1-sparse: ImageNet
      - BS and BE: Vggface2
      - SalUn: SVHN and TinyImageNet
    - Missing model:
      - SalUn: Swin-T
  - Image generation
    - Missing dataset:
      - FMN: ConceptBench
- In the class unlearning of image generation, the authors only unlearn the "airplane" class from CIFAR-10. Additional experiments on unlearning other classes should be conducted to demonstrate the stability of the proposed method.

Supplementary Material: This reviewer has reviewed all the supplementary materials.

Relation To Broader Scientific Literature:
- The remaining-dataset-free scenario has been explored in [R1] and [R2].
- The proposed method is similar to low-rank adapters, such as LoRA [R3].
- The proposed method does not surpass existing work like SalUn.

Essential References Not Discussed: One characteristic of the proposed method is that it needs no remaining dataset. There is a published article [R1] exploring this topic. Additionally, this method is similar to LoRA [R3], which should be discussed further.

Other Strengths And Weaknesses: None.

Other Comments Or Suggestions:
- There exist some writing issues.
  - Typo in the caption of Figure 2
    - remain unaltered **adn** they are derived from
  - Wrong jump links in section 6, Image Generation
    - requiring only a small fraction of the trainable parameters (see **Fig. G.1 and Fig. G.1**)
    - the images in **Figure 6** show that after SEMU
  - Wrong equation writing in Eq. 17 and Eq.
18
    - A missing equals sign, or redundant $L_c(\theta_u)$ and $L_g(\theta_u)$
  - Unexplained marks in tables
    - up and down arrows in Table 6 and Table 7
- Some notation could be made uniform.
  - In section *Truncated SVD*, $\rm\pmb{\Sigma_r}$ and $\rm\pmb{U_r}$ denote the matrices of the SVD output, while $\rm\pmb{A_r}$ and $\rm\pmb{B_r}$ do in section *Selecting most important subspace of $\Sigma$*.
- Some items should be explained further.
  - In Eq. 18, the generation loss contains a mean squared error loss $\ell_{MSE}(\theta_u; D_r)$. What are the details of this loss?
  - In Algorithm 2 and Algorithm 3, a description says, "When using retrain mode." What is the meaning of retrain mode?

Questions For Authors:
- Why do you consider that you propose a remaining-dataset-free scenario?
- What are the innovations of your method when compared to low-rank adapters, like LoRA [R3]?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
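As a reading aid, the mechanism described in the summary above (frozen weights, SVD of the projected gradient, zero-initialized trainable diagonal) might be sketched like this; all shapes, names, and the omitted projection step are hypothetical, not the paper's code:

```python
import numpy as np

# Hypothetical sketch of the mechanism summarized above: freeze W, take the
# SVD of the (projected) accumulated gradient G, attach U diag(sigma) V^T
# with sigma initialized to zero, and train only sigma during unlearning.
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 4))   # frozen original weights
G = rng.standard_normal((6, 4))   # accumulated gradient on the forget set

U, s, Vt = np.linalg.svd(G, full_matrices=False)
r = 2                              # kept rank (assumed)
U_r, Vt_r = U[:, :r], Vt[:r, :]
sigma = np.zeros(r)                # the only trainable parameters

def effective_weight(sigma):
    """Frozen W plus the low-rank, diagonal-parameterized correction."""
    return W + U_r @ np.diag(sigma) @ Vt_r

# Before any unlearning step, the model is exactly unchanged.
assert np.allclose(effective_weight(sigma), W)
```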
Rebuttal 1: Rebuttal: We appreciate your feedback. Below we address the concerns.

**Previous works R1 and R2** Regarding R1, we would like to highlight that it requires an additional surrogate dataset $\mathcal{D}^{sur}$, which is not required for SEMU. This means our approach does not rely on extra datasets to maintain the neural network's capabilities. As for R2, we were not aware of this work since it has not yet been published in a peer-reviewed venue, with only an arXiv version available. The authors of R2 propose a different method for machine unlearning, focusing on altering the entire model rather than selecting crucial weights. We will include this discussion in the camera-ready version. In terms of comparison, we believe it is not feasible to compare SEMU with R2 due to its unpublished status and the unavailability of the code. However, we will include a comparison with R1 in our revised version.

**Intuition behind SEMU** In practice, some directions are more important than others for all weights. Observe that the weights are roughly proportional to the averaged gradient over the entire dataset. However, if we consider only the subset (class) we want to unlearn, its gradient will share some directions with the gradient of the whole dataset, but will also have directions specific to that subset. Thus, the projection ensures that we remove the common directions from both the weights and the gradient of our subset. Consequently, during the unlearning process, we do not modify the directions crucial to the model but only those specific to the dataset.

**Similarity to SalUn** We believe SalUn is the most similar to SEMU, as both methods aim to alter only crucial model weights based on gradient information. However, our approach significantly reduces the number of altered weights, by up to 50 times. Additionally, SEMU uses the same loss functions when processing the forget dataset and can operate without a remaining dataset, addressing a limitation of SalUn.
When SalUn operates only on the forgetting dataset, its performance diminishes as the model collapses (see the response for Reviewer Qmc3).

**Loss functions for SEMU** The training process and loss function of SEMU differ slightly, as SEMU does not rely on the remaining dataset. Therefore, the objective for the remaining set is not used in model optimization.

**Metric definition** For the metrics, we use the well-established ones from the literature. For MIA, we use the MIA defined in [1], which was also used in SalUn and previous works.

[1] Carlini, Nicholas, et al. "Membership inference attacks from first principles." 2022 SP.

**Accuracy for generation task** To measure the accuracy of the image generation task, we use a classifier trained to recognize images generated by the model for a given class. We then apply this classifier to a newly generated batch of images after unlearning.

**Time consumption** Please see the response for Reviewer Qmc3.

**MIA and UA worse than SalUn** We agree that the results presented in Tab. 6 and Tab. 7 are comparable to or slightly worse than SalUn's, especially in terms of the FID metric. This difference stems mostly from our different objective: we want an unlearning method that works reasonably well even without access to any examples from the remaining dataset, while SalUn utilizes the remaining dataset during unlearning. Finally, we observed that SalUn's concept of unlearning leads to catastrophic behaviour on normal (safe) prompts and to conceptually distant outputs on NSFW prompts as well. We provide the quantitative and qualitative results in Sections 2, 3, and 4 of the additional experiments (https://anonymous.4open.science/r/icml2025_submission_3162/REBUTTAL.md).

**Test set construction** The test set consists of test images from the dataset used for evaluation. In the case of random data forgetting, the test set remains unaltered. However, when performing class-wise forgetting, we remove the forgotten class from the test set.
**Forgetting labels** When forgetting, we perform random relabelling, meaning that we assign a random label from the remaining classes.

**Additional benchmarks** See the responses to Reviewers iRuu and Qmc3. We believe these experiments showcase SEMU's effectiveness, and the results cover a broad range of task scales, capabilities, and complexities, while the suggested benchmarks are similar in nature.

**More samples for DDPM** Unlearning samples of other CIFAR10 classes generated with DDPM will be added to the revised version. For now, we present them in Section 5 at https://anonymous.4open.science/r/icml2025_submission_3162/REBUTTAL.md .

**Why remaining-free?** The dataset-free scenario can be justified for similar reasons as exemplar-free continual learning. Namely, due to privacy regulations (e.g. GDPR, CCPA), the remaining data may be inaccessible during unlearning, or too large to retrain on efficiently. Also, computational efficiency is improved by avoiding training episodes on the remaining data.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' response. However, there remain some unsettled issues.
- Though there are no existing LoRA-based machine unlearning methods with available code, the authors should discuss the difference between LoRA and their method theoretically.
- The authors say that they utilize trained models to recognize images generated by the model. Can the authors present the accuracy of each model to show the reliability of the recognition procedure?

---

Reply to Comment 1.1.1: Comment: We thank the Reviewer for their response to our rebuttal.

**On the classifier's accuracy** We followed the experimental scheme from the SalUn paper and used pretrained classifiers for class generation with diffusion models. In particular, for DDPM and CIFAR10, we used a ResNet34, achieving $94.97$\% accuracy. For Stable Diffusion and Imagenette, we used a ResNet50 (with the weights from torchvision), achieving acc@1=80.858\% and acc@5=95.434\% on ImageNet.
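The random relabelling described at the start of this rebuttal can be sketched as follows; the function name and the uniform-choice detail are illustrative assumptions, not the authors' code:

```python
import random

# Hedged sketch of random relabelling: each forget-set sample receives a
# label drawn uniformly from the remaining classes (never its true label).
def random_relabel(labels, num_classes, seed=0):
    rng = random.Random(seed)
    return [
        rng.choice([c for c in range(num_classes) if c != y])
        for y in labels
    ]

new_labels = random_relabel([3, 3, 7], num_classes=10)
assert all(new != old for new, old in zip(new_labels, [3, 3, 7]))
```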
**On the differences between SEMU and LoRA** While both LoRA and SEMU involve low-rank matrices, they are fundamentally different in formulation and purpose. SEMU's SVD identifies and surgically removes subspaces associated with specific data by editing or eliminating components correlated with that data, whereas LoRA simply learns an unconstrained additive update for adaptation. The low-rank matrix in SEMU is not merely a compression tool (as in, e.g., [1]), but a mechanism to enable interpretable and controlled unlearning. Moreover, as we already presented in the manuscript, SVD gives an optimal rank-r decomposition according to the singular values (see Theorem 1), whereas LoRA is a low-rank decomposition learned via gradient methods. SEMU uses SVD because it yields orthonormal projections, resulting in a geometric separation of the features to unlearn from the remaining knowledge, and is also easily interpretable. These properties do not hold for learnable projections such as LoRA.

In the paper [2], the authors decompose the weight matrix $W \in \mathbb{R}^{m \times n}$ into $AB + W^{res}$, where $A = U_{:r}S^{\frac{1}{2}}_{:r} \in \mathbb{R}^{m \times r}$ and $B = S_{:r}^{\frac{1}{2}} V^{T}_{:r} \in \mathbb{R}^{r \times n}$. $A$ and $B$ correspond to the $r$ principal singular values of $W$ and are further trained. $W^{res} = U_{r:}S_{r:}V^{T}_{r:} \in \mathbb{R}^{m \times n}$ is associated with the residual singular values and remains frozen during fine-tuning. Such an approach surpasses LoRA in several experiments. Analogously, in the context of machine unlearning, SVD precisely selects the most important components related to the forget dataset and leaves the rest of the parameters intact.

We sincerely appreciate your constructive comments and concerns, which help us improve our manuscript. We hope our detailed response effectively addresses your concerns. If so, we would appreciate it if you could increase the rating accordingly.
Please feel free to ask if you have any additional questions.

**References:**

[1] Wang, X., Zheng, Y., Wan, Z., & Zhang, M. (2024). SVD-LLM: Truncation-aware singular value decomposition for large language model compression. arXiv preprint arXiv:2403.07378.

[2] Meng, F., Wang, Z., & Zhang, M. (2024). PiSSA: Principal singular values and singular vectors adaptation of large language models. Advances in Neural Information Processing Systems, 37, 121038-121072.
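The PiSSA-style split $W = AB + W^{res}$ quoted from [2] above can be checked numerically. This is a generic sketch of that decomposition, not code from either paper:

```python
import numpy as np

# Numerical sketch of the PiSSA-style split W = A B + W_res from [2]:
# A and B carry the r principal singular directions (and would be trained),
# while W_res holds the residual spectrum (and stays frozen).
rng = np.random.default_rng(0)
m, n, r = 6, 5, 2
W = rng.standard_normal((m, n))

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * np.sqrt(s[:r])                  # m x r, U_{:r} S^{1/2}_{:r}
B = np.sqrt(s[:r])[:, None] * Vt[:r, :]        # r x n, S^{1/2}_{:r} V^T_{:r}
W_res = U[:, r:] @ np.diag(s[r:]) @ Vt[r:, :]  # residual, frozen

# The decomposition reconstructs W exactly.
assert np.allclose(A @ B + W_res, W)
```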
Summary: The paper proposes SEMU, a machine unlearning method using Singular Value Decomposition (SVD) to efficiently erase specific data influences from trained models. SEMU leverages SVD to project model gradients into a low-dimensional subspace, identifying critical weights linked to unwanted data. By updating a small fraction of parameters and eliminating reliance on original datasets, SEMU achieves competitive unlearning performance in image classification (CIFAR-10/100) and generation (DDPM, Stable Diffusion) tasks while preserving model utility.

Claims And Evidence: The claims focus on two main topics: **competitive unlearning performance** and **improved efficiency**.

### **competitive unlearning performance**

Taking the results in the Appendix into account, the provided results for Random Data Forgetting seem to be good. However, the overall presentation is not clear, and the important metrics were not well explained, e.g., what is the difference between RA and TA, and why were higher TA results not in bold? In addition, this paper lacks ablation studies for the proposed method.

### **improved efficiency**

> 'SEMU eliminates the dependency on the original training dataset (in abstract)'

SEMU achieves good performance on unlearning without $D_r$ (Tables 1–3). However, 'eliminates' may be too confident.

> 'SEMU minimizes the number of model parameters (in abstract)'

This is true when compared with the selected baselines. However, the evaluation with respect to time consumption is missing, which is important as the method involves extra computational cost to select the trainable parameters.

Methods And Evaluation Criteria: The methods and evaluation criteria in the SEMU paper are largely appropriate for machine unlearning (MU) but have notable limitations.

Strengths:
- Theoretically grounded (Theorem 4.1) for low-rank approximation, aligning with MU's goal of minimal parameter updates.
- Gradient projection logically preserves model performance.
Limitations:
- An important baseline is missing. Since SEMU changes the network structure, a similar method, namely LoRA (using LoRA for MU tasks), may have fewer trainable parameters and better performance.
- No analysis of SVD's computational overhead.
- No sufficient ablation study was provided.

Theoretical Claims: There is a proof for Theorem 4.1; it correctly invokes the Eckart-Young-Mirsky theorem, which guarantees that truncated SVD minimizes the Frobenius norm error for rank-r approximations. The proof is so short that it is uncertain whether any important assumptions are missing.

Experimental Designs Or Analyses: Besides the issues mentioned above, additional issues:
- It is noted that TParams of SalUn is set to 50\% in Table 1; why is it set to 100% in Table 2 and Table 3?
- The analyses in Table 5 and Figure 4 are for SalUn only, which contributes little to the evaluation of the proposed method. Why not do the same for the proposed SEMU?
- The numerical evaluation for the image generation task (Table 6) is confusing, as UA seems to be a smaller-is-better metric in Table 1. Moreover, the TA has a huge gap compared to 'Retrain'. How should this result be understood?

Supplementary Material: Since there was no extra file of Supplementary Material, I just reviewed the main text and the Appendix.

Relation To Broader Scientific Literature: Yes. The SEMU paper positions its contributions within the broader machine unlearning (MU) literature by addressing two key limitations of prior work: parameter inefficiency and dependency on the remaining dataset ($D_r$).

Essential References Not Discussed: Since SEMU changes the network structure by adding an extra component to each layer (Eq. 13), a well-known similar method, namely LoRA, should be discussed. Moreover, LoRA can be used for MU tasks and may have fewer trainable parameters and better performance.

Other Strengths And Weaknesses: See above.

Other Comments Or Suggestions: Typos:
- In the caption of Figure 2, "... adn(?) ...".
- In the second paragraph of Sec 3.2, "and often(?) $D_r$".

Questions For Authors: I have listed most of my questions above. Additionally:
- Have you properly set the LaTeX template? It is expected that papers under review have line numbers (using `\usepackage{icml2025}` instead of `\usepackage[accepted]{icml2025}`).

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Time consumption** In the table below, we show a comparison of the time needed to unlearn a DDPM model:

|Method|Preprocessing time|1000 iters time|
|:-|:-:|:-:|
|SEMU|44.18s|308s|
|SEMU_retrain|44.18s|530s|
|SalUn|50.69s|1170s|

We also show the unlearning time for ResNet-18:

|Method|Dataset|Preprocessing time|One unlearning epoch|
|:-|:-:|:-:|:-:|
|SEMU|CIFAR-10|3.27s|11.01s|
|SalUn|CIFAR-10|2.25s|14.70s|
|SEMU|CIFAR-100|3.36s|11.31s|
|SalUn|CIFAR-100|2.20s|14.85s|

**Bolded values in the tables** highlight two key metrics: target accuracy, reflecting the similarity between a model's performance and its retrained version, and the number of parameters modified during unlearning. This emphasizes that SEMU minimally affects model behavior compared to other methods. For generative models, we evaluate Stable Diffusion outputs before and after unlearning using a safe prompt, identical seed, and noise. Notably, we observe that SalUn exhibits drastic changes in model behavior (not ability), which are not evident with SEMU.

**Ablations** Here we provide ablations on the projection usage in SEMU:

|Dataset|Task|Projection|UA|RA|TA|MIA|
|:-|:-:|:-:|:-:|:-:|:-:|:-:|
|CIFAR10|Random 10%|No|3.80(1.44)|96.46(3.54)|89.78(4.48)|11.64(1.24)|
|CIFAR10|Random 10%|Yes|0.60(4.64)|99.40(0.60)|94.22(0.04)|5.40(7.48)|
|CIFAR10|Random 50%|No|2.13(5.78)|97.69(2.31)|91.17(0.55)|8.37(10.92)|
|CIFAR10|Random 50%|Yes|1.77(6.14)|98.12(1.88)|91.80(0.08)|7.20(12.09)|
|CIFAR10|Class Forget.|No|99.72(0.28)|98.55(1.45)|92.65(0.60)|100.00(0.00)|
|CIFAR10|Class Forget.|Yes|99.83(0.17)|98.22(1.78)|92.26(0.21)|100.00(0.00)|

One can observe that the projection positively influences the unlearning process by preserving model capabilities, as seen in the RA and TA metrics, while slightly widening the gap between the retrained model and the unlearned one in FA and MIA.
Another ablation, on the parameter that sets the portion of variance explained by the SVD used for selecting parameters for alteration, is in the rebuttal for Reviewer iRuu.

**Comparison with LoRA** Thank you for your feedback. We are open to comparing SEMU with the LoRA method for machine unlearning. Could you please direct us to a specific LoRA-based machine unlearning method with available code? This would enable us to conduct a fair comparison. If such a method does not exist, we believe that adapting LoRA for machine unlearning is beyond the scope of our current work, as it would require significant effort.

**Theoretical claims** Thank you for noticing the concern with the proof. The Eckart-Young-Mirsky theorem (matrix approximation lemma) holds for any unitarily invariant norm; in particular, it holds for the Frobenius norm. Throughout the paper, while introducing SVD formally, we assume the Frobenius norm and operate in a Hilbert space. Since each element of $S^r$ is of rank at most $r$, we know that among all rank-$r$ matrix approximations of $G$, the optimal one is given by the SVD. Moreover, we know that the optimal solution is unique. In the revised version of this manuscript, we will state the Eckart-Young-Mirsky theorem and all the needed assumptions in the same place, in order to make the proof more approachable.

**100% of TParams for SalUn in Supplement** We are grateful to the Reviewer for catching the typo. It was an oversight due to a copy-paste error. We apologize for the mistake. In the camera-ready version of the paper, we will correct the values in Tables 2 and 3 for SalUn (50% instead of 100%).

**Table 4 and Table 5 are only for SalUn** In our evaluation of SalUn, we aimed to demonstrate that when fewer parameters are altered than originally described, SalUn's effectiveness diminishes, regardless of the data.
To achieve a fair comparison, we conducted an experiment using the same setup but altering only 1% of the weights with SalUn. The results can be found in the rebuttal for Reviewer Qmc3.

**Numerical evaluation for the generation task** We thank the Reviewer for raising this concern. We perform a similar analysis of the generative diffusion models' behavior as for the classification tasks. That is the rationale behind reporting TA as well. Following the SalUn evaluation procedure, we first pretrained the same classifier to analyse the generated samples. We observed that even a small change in the generated features is hard for such a classifier to handle. Following the Reviewer's concern, we admit that the TA metric is biased in the generative-model unlearning scenario. For the subsequent comparisons (e.g., the ones from Section 1 of https://anonymous.4open.science/r/icml2025_submission_3162/REBUTTAL.md), we focused on the FID and UA metrics.

**Template usage** Thank you very much for spotting this issue. We are very sorry for the mistake and any difficulties this could cause.

**Typos** We would like to thank the Reviewers for their thorough work in helping us improve our manuscript. We apologize for the typos.
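The Eckart–Young–Mirsky property invoked in this rebuttal can be illustrated with a quick numerical check. This is a generic sketch in the Frobenius norm, independent of the paper's code:

```python
import numpy as np

# Quick numerical illustration of the Eckart-Young-Mirsky theorem in the
# Frobenius norm: the truncated SVD gives the best rank-r approximation.
rng = np.random.default_rng(0)
G = rng.standard_normal((8, 6))
U, s, Vt = np.linalg.svd(G, full_matrices=False)

r = 2
G_r = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]   # optimal rank-r approximation
err_opt = np.linalg.norm(G - G_r)             # equals ||s[r:]||_2

# A random competing rank-r matrix does at least as badly.
R = rng.standard_normal((8, r)) @ rng.standard_normal((r, 6))
assert err_opt <= np.linalg.norm(G - R)
assert np.isclose(err_opt, np.linalg.norm(s[r:]))
```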
Summary: The authors propose Singular Value Decomposition for Efficient Machine Unlearning (SEMU), a method that addresses two problems: 1) the need for a remaining dataset during the unlearning process, and 2) the alteration of too many parameters during the unlearning process.

Claims And Evidence: This article shows good evidence to support its claim that the SEMU method delivers reasonable performance (superior or matching) relative to existing methods while being simpler in the sense of optimization.

Methods And Evaluation Criteria: The nature of the method, in layman's terms, is similar to grouping the parameters (gradient matrix) into r clusters (where r is the dimension of the truncated SVD). Overall it makes a lot of sense. The mathematical reasoning and results also support it.

Theoretical Claims: The theory part of this study is simple and straightforward, based on basic SVD and on previous papers in the literature.

Experimental Designs Or Analyses: I think the experimental part is the weakest point of this paper. The authors only show image classification and image generation results. Other tasks, such as NLP and math, are needed to show how versatile this unlearning method is.

Supplementary Material: NA

Relation To Broader Scientific Literature: Unlearning can be used in many fields of science and beyond.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: NA

Other Comments Or Suggestions: NA

Questions For Authors: The most important question is: why choose SVD to select parameters?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Why SVD?** Our objective was to minimize the number of altered weights during the unlearning process to maintain the model's behavior. To achieve this, we looked for an effective selection mechanism. SVD, our first choice, proved to be successful, so we did not explore other parameter selection methods. However, investigating alternative selection methods for potential improvements in effectiveness and efficiency is an interesting path for future work.

**Experiments on TinyImageNet with ViT and ResNet** To further show the applicability of SEMU, we provide results on TinyImageNet for ResNet-18 and ViT models.

Performance on ResNet-18, pre-trained on the Tiny ImageNet dataset, for 10% random data forgetting.

|Methods|UA|RA|TA|MIA|
|:-|:-:|:-:|:-:|:-:|
|Retrain|36.40|99.98|63.67|63.77|
|ℓ1-sparse|15.19(21.21)|98.61(1.37)|61.78(1.89)|26.39(37.38)|
|SalUn|27.78(8.62)|97.20(2.78)|59.70(3.97)|72.80(9.03)|
|SEMU|5.44(30.96)|95.02(4.96)|64.03(0.36)|15.18(48.59)|
|SEMU_remain|5.08(31.32)|94.98(5.00)|63.77(0.10)|20.81(42.96)|

Performance on ResNet-18, pre-trained on the Tiny ImageNet dataset, for 1 random class (number 9) data forgetting.

|Methods|UA|RA|TA|MIA|
|:-|:-:|:-:|:-:|:-:|
|Retrain|100.00|99.98|64.21|100.00|
|ℓ1-sparse|44.00(56.00)|62.76(37.22)|49.93(14.28)|50.60(49.40)|
|SEMU|20.80(79.20)|95.52(4.46)|64.95(0.74)|44.60(55.40)|
|SEMU_remain|65.80(34.20)|96.17(3.81)|64.67(0.46)|87.80(12.20)|

Performance on ViT, pre-trained on the Tiny ImageNet dataset, for 10% random data forgetting.

|Methods|UA|RA|TA|MIA|
|:-|:-:|:-:|:-:|:-:|
|Retrain|14.30|99.91|85.59|24.61|
|SEMU|2.14 (12.16)|95.01 (4.90)|85.85 (0.26)|5.87 (18.74)|
|SEMU_remain|2.00 (12.30)|94.96 (4.95)|85.48 (0.11)|8.04 (16.57)|

Performance on ViT, pre-trained on the Tiny ImageNet dataset, for 1 random class (number 9) data forgetting.
|Methods|UA|RA|TA|MIA|
|:-|:-:|:-:|:-:|:-:|
|Retrain|100.0|99.91|85.37|100.0|
|SEMU|20.80 (79.20)|95.47 (4.44)|84.09 (1.28)|44.50 (55.50)|
|SEMU_remain|65.80 (34.20)|96.12 (3.79)|85.10 (0.27)|87.70 (12.30)|
Summary: The paper "SEMU: Singular Value Decomposition for Efficient Machine Unlearning" introduces a new method for machine unlearning (MU). The goal is to remove specific data from AI models without damaging overall performance. Traditional unlearning methods require modifying large portions of the model or retraining with remaining data. This makes them computationally expensive and impractical for privacy-sensitive applications. SEMU solves these issues by using Singular Value Decomposition (SVD). Instead of altering the entire model, SEMU identifies and modifies only the most crucial weights linked to the data that needs to be forgotten. This makes the process faster and more efficient, with minimal impact on the model's generalization ability. The paper demonstrates SEMU's effectiveness through experiments on image classification (CIFAR-10, CIFAR-100) and image generation (Stable Diffusion, DDPMs). The results show that SEMU can achieve strong unlearning performance while modifying less than 1% of the model's parameters. It also works without requiring access to the original training dataset, making it ideal for privacy-focused applications. In conclusion, SEMU provides an efficient, data-independent, and computationally lightweight approach to machine unlearning. It outperforms existing methods in efficiency while maintaining accuracy. The authors suggest that SEMU could be extended to large language models (LLMs) and vision-language models (VLMs) in future research.

Claims And Evidence: The paper makes several key claims about SEMU's effectiveness, efficiency, and practicality in machine unlearning. The main claims are:
1. SEMU achieves efficient unlearning by modifying only a small fraction of model weights (~1%) instead of retraining the entire model.
2. SEMU does not require access to the remaining dataset, making it more privacy-friendly than traditional methods.
3. SEMU maintains model accuracy while effectively removing unwanted knowledge.
4.
SEMU outperforms other unlearning methods in both image classification and image generation tasks. These claims are backed by extensive experimental results on CIFAR-10, CIFAR-100, and Stable Diffusion models. The paper provides detailed comparisons against existing methods like SalUn, ESD, and Forget-Me-Not (FMN). The results show that SEMU achieves similar or better unlearning performance while altering far fewer model parameters. The claim that SEMU eliminates the need for the remaining dataset is well-supported. The experiments show that even without access to retained data, SEMU still performs effective unlearning with minimal accuracy loss. However, the paper does acknowledge that having access to some remaining data can further improve results. There are no major unsupported claims in the paper. The methodology, theoretical background, and experiments provide clear and convincing evidence to validate SEMU’s advantages. The only area where further research may be needed is in applying SEMU to different architectures like large language models (LLMs). Methods And Evaluation Criteria: The paper uses logical and well-structured methods to evaluate SEMU. The authors test their approach on both classification and generative models, ensuring broad applicability. They compare SEMU against state-of-the-art machine unlearning methods, including SalUn, ESD, and FMN, using widely accepted benchmarks. Theoretical Claims: The paper presents a theoretical foundation for SEMU, primarily based on Singular Value Decomposition (SVD) and its ability to reduce model parameters in a structured way. The theoretical claims focus on why SEMU is effective for machine unlearning and how modifying a small subset of model weights can achieve efficient forgetting without damaging overall performance. Experimental Designs Or Analyses: The experimental design in this paper is well-structured and rigorous. 
The authors carefully design tests for both image classification and image generation tasks to evaluate SEMU’s unlearning performance. They also compare SEMU against multiple baseline methods, ensuring a fair and meaningful comparison. Supplementary Material: The supplementary material provides additional experimental results, ablation studies, and implementation details that support the main claims of the paper. Relation To Broader Scientific Literature: The paper builds on prior research in machine unlearning, matrix factorization, and model compression, integrating these concepts into a novel approach. SEMU’s use of Singular Value Decomposition (SVD) for unlearning connects to several existing fields in AI and machine learning. Essential References Not Discussed: The paper presents a strong foundation by referencing key works in machine unlearning, model compression, and SVD-based optimizations. However, some critical prior research is missing, which could strengthen the context of SEMU’s contributions. For example, the paper introduces a selective unlearning method using SVD, arguing that modifying only a small fraction of model parameters (~1%) is sufficient. However, prior works on low-rank decomposition in deep learning have studied similar principles in different contexts but are not cited here. Example of a missing reference: "The key contribution of this paper is an efficient machine unlearning method using low-rank SVD, modifying fewer parameters than prior approaches. However, previous work by Denton et al. (2014) proposed a low-rank decomposition method for CNN compression, which also showed that selective weight modification can preserve model performance. While SEMU applies this concept to unlearning, acknowledging this prior work would provide stronger theoretical grounding." Similarly, SEMU claims that unlearning can be achieved without access to the remaining dataset. However, studies like Wu et al. 
(2022), "PUMA: Provable Machine Unlearning", provide mathematically provable guarantees for unlearning but are not cited. Including this reference would help differentiate SEMU's empirical approach from provable unlearning techniques. By adding references to low-rank model adaptation, provable machine unlearning, and privacy-preserving ML, the paper could provide a more comprehensive context for its contributions. Other Strengths And Weaknesses: Strengths 1. The use of Singular Value Decomposition (SVD) for selective forgetting is a novel contribution to the machine unlearning field. Unlike prior methods that require full model retraining or large-scale fine-tuning, SEMU modifies only a small fraction of model weights (~1%), making it computationally efficient. 2. One of SEMU's most significant advantages is its ability to perform unlearning without access to the remaining dataset. This is a major step forward for privacy-preserving AI, where retraining with retained data is often impractical. 3. The authors conduct extensive experiments on both classification and generative models, covering datasets like CIFAR-10, CIFAR-100, and Stable Diffusion. The results are compared against state-of-the-art MU methods (SalUn, ESD, Forget-Me-Not), demonstrating SEMU's superior efficiency and effectiveness. Weaknesses 1. SEMU is an empirical approach, meaning it lacks formal mathematical guarantees for unlearning effectiveness. Prior work, such as PUMA (Wu et al., 2022), provides provable unlearning methods, whereas SEMU relies on experimental validation rather than formal proofs. 2. While SEMU is tested on image classification and generative models, it is not evaluated on large-scale architectures like transformers or LLMs. Applying SEMU to transformer-based models (e.g., BERT, GPT-4, or ViTs) would strengthen its generalizability. 3. The paper does not fully explore how different SVD truncation levels affect unlearning performance. 
An ablation study on how much of the singular value spectrum needs modification could help optimize SEMU’s implementation further. 4. While SEMU is described as computationally efficient, GPU/memory usage details for different architectures are not fully reported. A detailed breakdown of training costs compared to full retraining methods would provide more clarity on real-world feasibility. Other Comments Or Suggestions: NONE Questions For Authors: 1. How does SEMU perform on large-scale architectures such as transformers and large language models (LLMs)? 2. How sensitive is SEMU’s performance to different levels of SVD truncation? 3. What specific hyperparameters influence SEMU’s efficiency the most? Ethical Review Concerns: NONE Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thorough review. We would like to address some of your concerns below: **Time needed for SEMU when compared to SalUn** In the table below, we show a comparison of the time needed to unlearn a DDPM model: |Method|Preprocessing time|1000 iters time| |:-|:-:|:-:| |SEMU|44.18s|308s| |SEMU_retrain|44.18s|530s| |SalUn|50.69s|1170s| We also show the time of SEMU and SalUn for unlearning CIFAR-10 and CIFAR-100 with ResNet-18: |Method|Dataset|Preprocessing time|One unlearning epoch| |:-|:-:|:-:|:-:| |SEMU|CIFAR-10|3.27s|11.01s| |SalUn|CIFAR-10|2.25s|14.70s| |SEMU|CIFAR-100|3.36s|11.31s| |SalUn|CIFAR-100|2.20s|14.85s| When it comes to memory usage, SEMU does not require additional storage, so it requires the same amount of memory as SalUn, or slightly less, as we do not require a mask of neurons that needs to be altered. **SEMU for large architectures** In this experiment, we demonstrate the effectiveness of SEMU using the TinyImageNet dataset and the ViT model. We tested SEMU's ability to forget 10% of randomly chosen data and one entire class. The results indicate that SEMU maintains strong performance with the ViT model in both scenarios. Regarding the application of SEMU to Large Language Models (LLMs), it's important to note that our focus has been on models designed for computer vision. Therefore, adapting SEMU to LLMs may be beyond the scope of this work. Forgetting 10% of randomly chosen data: |Methods|UA|RA|TA|MIA| |:-|:-:|:-:|:-:|:-:| |Retrain|14.30|99.91|85.59|24.61| |SEMU|2.14 (12.16)|95.01 (4.90)|85.85 (0.26)|5.87 (18.74)| |SEMU_remain|2.00 (12.30)|94.96 (4.95)|85.48 (0.11)|8.04 (16.57)| Forgetting one entire class: |Methods|UA|RA|TA|MIA| |:-|:-:|:-:|:-:|:-:| |Retrain|100.0|99.91|85.37|100.0| |SEMU|20.80 (79.20)|95.47 (4.44)|84.09 (1.28)|44.50 (55.50)| |SEMU_remain|65.80 (34.20)|96.12 (3.79)|85.10 (0.27)|87.70 (12.30)| **Referring to other low-rank adaptation methods**. We will update the discussion on low-rank adaptation methods in the final version of our work, which will include PUMA. 
**Different SVD truncation levels influence on model's performance** Parameter $r$ gives the size of a submatrix of the SVD projection, which we use for unlearning purposes. In particular, in each changed layer $L$, we are setting the value of $r_L$ to be the same percentage alpha of the rank of this matrix. This procedure gives us the square submatrices of sizes $r_L \times r_L$. Please note that the value of $r_L$ is different for various layers, however, in each layer, we have the same percentage (alpha) of important directions in the SVD projection. We consider SEMU_retrain (Tab. 1) and SEMU_subset (Tab. 2) scenarios. Tab. 1 |alpha|UA|FID| |:-|:-:|:-:| |0.01|100.00|16.64| |0.05|100.00|17.83| |0.1|100.00|17.83| |0.2|100.00|17.36| |0.3|100.00|17.39| |0.4|100.00|17.40| |0.5|100.00|17.39| Tab. 2 |alpha|UA|FID| |:-|:-:|:-:| |0.01|100.00|17.17| |0.05|98.00|18.20| |0.1|100.00|17.83| |0.2|100.00|17.74| |0.3|100.00|17.72| |0.4|100.00|17.72| |0.5|100.00|17.57| We observe that the proposed method of selecting the most important directions is more efficient than selecting just a percentage of the low-rank projection matrix (better UA and FID). For naive selection, we observe that the best results are for alpha=0.01. Then, the metric values are higher, but not linearly. **Most important parameters for SEMU** The effectiveness of SEMU is influenced by several factors, due to the fine-tuning of crucial model parameters. From the standpoint of the machine unlearning method, the parameter $\gamma$ is important. It is responsible for the selection of weights that are modified during the unlearning process. Specifically, $\gamma$ selects weights with an SVD variance no less than its value (e.g. 90% of variance), enabling the identification of a critical subset of weights for alteration. For further details, please refer to the SEMU section and Eq. 12.
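As a concrete illustration of the variance-threshold selection described in the rebuttal, here is a minimal sketch; the function name, toy data, and details are our own hypothetical reconstruction, not the SEMU implementation:

```python
# Hypothetical sketch: pick the number r of leading SVD directions of a weight
# matrix whose cumulative squared singular values cover at least `gamma` of the
# variance, as discussed in the rebuttal. Names are illustrative only.
import numpy as np

def select_svd_directions(W: np.ndarray, gamma: float = 0.9) -> int:
    """Return the smallest r such that the top-r singular directions of W
    explain at least `gamma` of the total variance (sum of squared
    singular values)."""
    s = np.linalg.svd(W, compute_uv=False)      # singular values, descending
    var = s ** 2
    cum = np.cumsum(var) / var.sum()            # cumulative variance ratio
    return int(np.searchsorted(cum, gamma) + 1) # smallest r covering gamma

# Toy layer: one dominant direction plus small noise, so r should be small.
rng = np.random.default_rng(0)
W = np.outer(rng.normal(size=8), rng.normal(size=8)) + 0.01 * rng.normal(size=(8, 8))
r = select_svd_directions(W, gamma=0.9)
print(r)  # the dominant direction explains well over 90% of the variance here
```

The per-layer `alpha` scheme from the rebuttal would instead set `r` to a fixed fraction of each layer's rank; the variance-threshold variant above corresponds to the `gamma`-based selection the authors describe around Eq. 12.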
Summary: The paper performs an SVD decomposition for machine unlearning, which enables efficient unlearning. They also propose a dataset-free scenario, addressing data privacy concerns. Experiments show their superiority over other methods. At its core, SEMU aims to change a minimal number of model parameters, with the goal of removing unwanted knowledge. They pose this problem as minimizing $d(G; S^{r}_{A,B})$, where $G$ is the underlying loss function and $S$ is the subspace matrix induced by the matrices $A$ and $B$. The paper finally performs experiments on class unlearning and image generation. Claims And Evidence: The paper claims several points: **The paper claims that their method is theoretically substantiated in the sentence quoted as follows:** "To overcome the challenges of gradient-based unlearning, we propose a novel, theoretically grounded method for selecting the most significant subspace of weights, θs, derived from the forgetting dataset Df." This claim is not substantiated. One possibility would be to show some generalization bound. I was wondering if the authors could connect to stability (Olivier Bousquet et al., Stability and Generalization, 2002) and then use it to show some generalization results. Note that Bousquet et al. showed stability in the context of perturbation of "data" rather than model perturbation. Specifically, they showed that in the presence of a regularizer, the weight values $w$ satisfy: $$ \|w(D)-w(D')\| = O(1/|D|) \quad \text{where } |D \setminus D'| = 1 $$ This result is further used to show a generalization guarantee. This paper is a good setting for studying the "dual" of the above problem: can the performance be stable, and therefore enjoy a better generalization guarantee, if we minimize Eq. (10), which directly ensures that $S$ is close to $G$? Given the short rebuttal timeframe, it is absolutely OK not to go for a complete theoretical proof, but some discussion of the connection would be helpful. 
Otherwise, removal of the above sentence may be better. **The paper claims to perform efficient unlearning.** However, I did not find (or perhaps did not understand) how the method is efficient. If the unlearning method is efficient, it would be good to obtain a trade-off plot between accuracy and efficiency. This is particularly important because, for example, simple randomization can be very efficient but inaccurate, and in this paper, the SVD may consume time. **Extensive experimental validation** Since the paper claims to perform efficient unlearning, it would be important to perform experiments using the ImageNet or TinyImageNet dataset. To the best of my understanding, the paper performs image generation using a subset of ImageNet. However, they do not use ImageNet for the classification task. CIFAR-10 or CIFAR-100 may be less challenging for this task. I understand that it may be difficult to run experiments on ImageNet during the rebuttal period. Could the authors perform experiments using TinyImageNet instead for class prediction? Methods And Evaluation Criteria: The evaluation/experiments can be divided into two clusters: one quantitative and the other qualitative. For the quantitative experiments, which are more metric-based, I would prefer a trade-off between accuracy and time (both training and inference) and between unlearning accuracy and time (both training and inference). The authors could also compare accuracy and memory (say, how much GPU memory is consumed). Is SEMU Pareto-optimal on those curves? There is not much ablation of the different components of their approach. For example, what is the benefit of the projected-gradient improvement? It is not clearly explained. The qualitative experiments are OK. Theoretical Claims: As I mentioned under claims and evidence, the theoretical justification is not adequate. Can the authors leverage some existing stability results to show a generalization bound? 
Experimental Designs Or Analyses: As I mentioned under claims and evidence, experiments with large datasets are important in this context. TinyImageNet can be a good candidate. Moreover, trade-off plots of time versus accuracy and time versus memory would be helpful. Also, it would be great if the authors could compare with, or discuss the connection to, various data subset selection methods, including Pruning, RHOLoss, GradMatch, etc., which perform training on a small subset of data. Specifically, can we select the forget batch using one of these methods? Supplementary Material: I read the supplementary materials (Appendix). I have a few suggestions. Please see below. Relation To Broader Scientific Literature: The authors did a great job on the related work. But more papers on, and connections with, differential privacy and data subset selection would be better. Essential References Not Discussed: I do not see any obvious missing references. Other Strengths And Weaknesses: Apart from my points, I think the paper needs some reorganization. For example, many important details, from algorithms to loss functions, are deferred to the Appendix. Eqs. (17, 18) could be brought back to the main text. Other Comments Or Suggestions: Minor: In the introduction: Our contributions "ca be" summarized as follows ---> Our contributions "can be" summarized as follows: Questions For Authors: 1. Can the authors show some theoretical insights that shed light on generalization bounds? 2. How will the approach perform on large datasets? 3. How well does this approach trade off time and memory? Can we have a trade-off curve? ================ After rebuttal: The authors sufficiently addressed my concerns. But after looking at the other reviews, I decided to keep my score. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: To address the concerns and questions raised by the Reviewer, we would like to point out the following: **More on theoretical aspects of projection**. In practice, some directions are more important than others for all weights. Observe that the weights are roughly proportional to the averaged gradient over the entire dataset. However, if we consider only the subset (class) we want to unlearn, its gradient will share some directions with the gradient of the whole dataset, but will also have directions specific to that subset. Thus, the projection ensures that we remove the common directions from both the weights and the gradient of our subset. Consequently, during the unlearning process, we do not modify the directions crucial to the model but only those specific to the dataset. We thank the Reviewer for a related paper. We will work more on theoretical guarantees. **Performance on larger dataset**. We have run experiments on TinyImageNet and here are the results: Performance on ResNet-18, pre-trained on Tiny ImageNet dataset, for 10% random data forgetting. |Methods|UA|RA|TA|MIA| |:-|:-:|:-:|:-:|:-:| |Retrain|36.40|99.98|63.67|63.77| |ℓ1-sparse|15.19(21.21) |98.61(1.37)|61.78(1.89)|26.39(37.38)| |SalUn|27.78(8.62)|97.20(2.78)|59.70(3.97)|72.80(9.03)| |SEMU|5.44(30.96)|95.02(4.96)|64.03(0.36)|15.18(48.59)| |SEMU_remain|5.08(31.32)|94.98(5.00)|63.77(0.10)|20.81(42.96)| Performance on ResNet-18, pre-trained on Tiny ImageNet dataset, for 1 random class (number 9) data forgetting. |Methods|UA|RA|TA|MIA| |:-|:-:|:-:|:-:|:-:| |Retrain|100.00|99.98|64.21|100.00| |ℓ1-sparse|44.00(56.00)|62.76(37.22)|49.93(14.28)|50.60(49.40)| |SEMU|20.80(79.20)|95.52(4.46)|64.95(0.74)|44.60(55.40)| |SEMU_remain|65.80(34.20)|96.17(3.81)|64.67(0.46)|87.80(12.20)| One can observe that in both experiments, SEMU achieves the lowest gap in target accuracy of the model. **Time needed for SEMU** Comparison of the time needed for SEMU and SalUn for the DDPM model. 
As can be observed, SEMU requires much less time to perform unlearning. |Method|Preprocessing time|1000 iters time| |:-|:-:|:-:| |SEMU|44.18s|308s| |SEMU_retrain|44.18s|530s| |SalUn|50.69s|1170s| Here are also the results for unlearning for the ResNet18 model: |Method|Dataset|Preprocessing time|One unlearning epoch| |:-|:-:|:-:|:-:| |SEMU|CIFAR-10|3.27s|11.01s| |SalUn|CIFAR-10|2.25s|14.70s| |SEMU|CIFAR-100|3.36s|11.31s| |SalUn|CIFAR-100|2.20s|14.85s| When it comes to memory usage, SEMU does not require additional storage, so it requires the same amount of memory as SalUn, or slightly less, as we do not require a mask of neurons which needs to be altered. **Comparison to SalUn within similar experimental conditions** Additionally, we ran experiments with SalUn, showcasing that SalUn collapses when the remaining dataset is not used during the unlearning procedure, no matter the amount of parameters altered (1% and 100%). |Method|Dataset|Task|With Remain Data|RA|TA|UA|MIA|TParams| |:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |SalUn|CIFAR10|Random 10%|Yes|99.52|93.66|0.82|6.38|1%| |SalUn|CIFAR10|Random 10%|No|12.86|12.47|86.91|67.76|1%| |SalUn|CIFAR10|Random 10%|Yes|98.03|92.41|5.51|16.38|100%| |SalUn|CIFAR10|Random 10%|No|18.70|18.21|81.93|3.18|100%| |SEMU|CIFAR10|Random 10%|No|99.40|94.22|0.60|5.40|0.54%| |Method|Dataset|Task|With Remain Data|RA|TA|UA|MIA|TParams| |:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |SalUn|CIFAR10|Class Forget.|Yes|99.65|94.83|93.35|100.00|1%| |SalUn|CIFAR10|Class Forget.|No|32.23|31.44|87.79|89.51|1%| |SalUn|CIFAR10|Class Forget.|Yes|99.48|93.94|99.99|100.00|100%| |SalUn|CIFAR10|Class Forget.|No|13.88|13.65|76.64|41.48|100%| |SEMU|CIFAR10|Class Forget.|No|98.22|92.26|99.83|100.00|0.87%| |Method|Dataset|Task|With Remain Data|RA|TA|UA|MIA|TParams| |:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |SalUn|CIFAR100|Random 10%|Yes|97.46|72.74|4.13|22.44|1%| |SalUn|CIFAR100|Random 10%|No|1.44|1.29|98.56|0.71|1%| |SalUn|CIFAR100|Random 10%|Yes|98.83|67.17|64.69|91.76|100%| 
|SalUn|CIFAR100|Random 10%|No|0.97|1.14|98.84|9.96|100%| |SEMU|CIFAR100|Random 10%|No|97.39|74.14|2.53|8.82|1.18%| **Influence of projection on results** |Dataset|Task|Projection|UA|RA|TA|MIA| |:-|:-:|:-:|:-:|:-:|:-:|:-:| |CIFAR10|Random 10%|No|3.80(1.44)|96.46(3.54)|89.78(4.48)|11.64(1.24)| |CIFAR10|Random 10%|Yes|0.60(4.64)|99.40(0.60)|94.22(0.04)|5.40(7.48)| |CIFAR10|Random 50%|No|2.13(5.78)|97.69(2.31)|91.17(0.55)|8.37(10.92)| |CIFAR10|Random 50%|Yes|1.77(6.14)|98.12(1.88)|91.80(0.08)|7.20(12.09)| |CIFAR10|Class Forget.|No|99.72(0.28)|98.55(1.45)|92.65(0.60)|100.00(0.00)| |CIFAR10|Class Forget.|Yes|99.83(0.17)|98.22(1.78)|92.26(0.21)|100.00(0.00)| One can observe that the projection positively influences the unlearning process by preserving model capabilities, as seen in the RA and TA metrics, while slightly widening the gap between the retrained model and the unlearned one in UA and MIA. We are grateful for the review, and we look forward to a fruitful discussion of the provided answers.
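The projection idea discussed in this rebuttal (removing from the forget-set gradient the directions it shares with the principal directions of the weights, so that unlearning only alters directions specific to the forget subset) can be sketched roughly as follows. This is an illustrative approximation with hypothetical names and toy data, not the authors' code:

```python
# Hypothetical sketch: project the forget-set gradient onto the orthogonal
# complement of the top-r left singular directions of the weight matrix, so
# the directions most important to the model are left untouched.
import numpy as np

def project_out_shared_directions(grad: np.ndarray, W: np.ndarray, r: int) -> np.ndarray:
    """Remove from `grad` its components along the top-r left singular
    directions (columns of U) of the weight matrix W."""
    U, _, _ = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :r]                        # principal directions of the weights
    return grad - U_r @ (U_r.T @ grad)    # orthogonal-complement projection

rng = np.random.default_rng(1)
W = rng.normal(size=(6, 4))       # toy weight matrix
grad = rng.normal(size=(6, 4))    # toy forget-set gradient
g_proj = project_out_shared_directions(grad, W, r=2)

# The projected gradient has no component along the removed directions.
U, _, _ = np.linalg.svd(W, full_matrices=False)
print(np.allclose(U[:, :2].T @ g_proj, 0.0))  # True
```

An unlearning step would then update the weights with `g_proj` rather than the raw gradient, which matches the rebuttal's intuition of modifying only subset-specific directions.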
scSSL-Bench: Benchmarking Self-Supervised Learning for Single-Cell Data
Accept (spotlight poster)
Summary: This paper proposes a benchmarking analysis of SSL methods' application to single-cell data analysis. ## update after rebuttal I raised my score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No, they do not have theoretical claims in the manuscript. Experimental Designs Or Analyses: Yes. I have questions about this section, which are provided in my comments. Supplementary Material: Yes, I have reviewed it. Relation To Broader Scientific Literature: I think readers in single-cell analysis will be interested in reading this paper, but the audience may be limited to that community. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA. Other Comments Or Suggestions: Please see my comments. Questions For Authors: The authors propose a benchmarking analysis for selecting the most suitable approach for performing self-supervised learning (SSL) on single-cell data. I have a couple of questions about the novelty and significance of this work. If the authors can address them, I may consider increasing the score. Please see my comments below. 1. The motivation is not very clear to me. How do the authors justify that self-supervised learning is very powerful for analyzing single-cell data? Some cited methods, such as scCLIP, are not published. Could the authors provide more support to strengthen their position on self-supervised learning? 2. The baseline models should include more current research to support the conclusions. For example, there exist several multi-omic data integration approaches that could serve as potentially better baselines (scGLUE, https://github.com/gao-lab/GLUE; scButterfly, https://www.nature.com/articles/s41467-024-47418-x; Monae, https://www.nature.com/articles/s41467-024-53355-6). 3. The selected tasks are also not interesting, especially cell-type annotation and modality prediction, which are actually two supervised learning tasks. 
We cannot directly predict expression profiles or protein levels without references. If the authors can demonstrate the emergent abilities of SSL-based approaches in handling this task, I think it would be very interesting. Otherwise, the authors may focus on more SSL-related tasks, for example, tasks that only SSL-based methods can address and other baselines cannot, but this will be extremely hard, as simple tasks such as clustering are also well explored. 4. Could the authors distinguish their contributions from those of this paper (https://www.nature.com/articles/s42256-024-00934-3)? Both discuss SSL for single-cell data analysis, and the conclusions are similar. 5. Figure G1 is also not clear to me. From my understanding, the authors are working on single-cell data. Therefore, why do they include other modalities as examples? I suggest the authors revise the figure using single-cell data as an example. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We express sincere gratitude to the reviewer for providing feedback and raising several points about the validity of the work, which we address below, extending our evaluation accordingly. We hope that the reviewer will consider updating their review score if they find our comments and new results satisfactory. **Motivation:** SSL is recognized for effectively learning robust representations from unlabeled data, enhancing downstream tasks. Many recent studies show the substantial advantages of SSL in single-cell analyses. Richter et al. (www.nature.com/articles/s42256-024-00934-3) demonstrate that SSL methods excel in transfer learning, zero-shot, and cross-modality predictions, outperforming supervised approaches. The widely cited methods scVI, totalVI, CLEAR, CLAIRE, and Concerto show SSL's ability to capture biological heterogeneity and manage batch effects better than supervised methods. **Additional Baselines:** Based on the reviewers' suggestions, we added scButterfly and scTEL (www.nature.com/articles/s41540-024-00484-9) for multi-omics. The suggestions significantly improved the benchmark. scButterfly excels in cell typing, but scTEL underperforms in all tasks. However, our findings still indicate that SSL methods outperform the others for multi-omics. New results in https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/table_1.pdf, https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/table_H4.pdf, https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/table_H5.pdf. For uni-modal datasets, we added scGPT, scBERT and SCDC (https://ieeexplore.ieee.org/document/10086540); see our responses to X2so and KvUM. To clarify, our benchmark focuses on RNA-seq and CITE-seq data (RNA+protein) and is not directly comparable to scGLUE and Monae, which use RNA+ATAC. ATAC is a completely different modality that requires different methods. scGLUE's and Monae's documentation covers only RNA+ATAC, without CITE-seq guidance. 
**Selected Tasks:** The selected tasks are integral to ongoing single-cell challenges (https://proceedings.mlr.press/v176/lance22a, https://openproblems.bio/results), supported by existing literature (scVI, totalVI, CLEAR, CLAIRE, Concerto), and relevant to a broader community (https://genomebiology.biomedcentral.com/articles/10.1186/s13059-020-1926-6). - *Batch Correction* is crucial for distinguishing true biological variation from experimental variability; e.g., platforms constantly change chemistry settings. Often, one cannot access a previous technology version and instead performs batch correction to unmask the biological signal; see prior studies www.nature.com/articles/s41592-018-0254-1, https://doi.org/10.1093/bioinformatics/btz625, www.nature.com/articles/s41592-021-01336-8, https://pubmed.ncbi.nlm.nih.gov/34062119. - *Cell Typing* maps new cells onto existing reference atlases and is vital for biological discoveries (www.nature.com/articles/s41587-021-01001-7, Concerto, scButterfly). Although the reviewer identifies cell typing as supervised, in the SSL context it is about unsupervised representation learning followed by minimal supervised inference. - *Missing Modality Prediction* infers unseen modalities, tests an SSL method's ability to generalize across data types, and enhances the utility of existing single-modality datasets. The inferred modalities can be used for improved analysis (https://pubmed.ncbi.nlm.nih.gov/34062119, Concerto, scButterfly). **Contributions:** We cited Richter et al. (December 2024) in our initial paper and described the distinctions in lines 36-55, col. 2. To clarify our contributions and differences: - *Scope:* Richter et al. evaluate masked autoencoders (MAE), BYOL, and Barlow Twins on transfer learning, zero-shot, and fine-tuned SSL scenarios. Our scSSL-Bench assesses a broader range of SSL methods, both specialized and generic. - *Tasks:* Richter et al. focus on transfer learning. 
We evaluate batch correction, cell typing, and modality prediction for single- and multi-omics, addressing practical challenges from the community. - *Hyperparameters/Augmentations:* We address the impact of hyperparameters and augmentations for single-cell data, contributing valuable insights into optimal SSL configurations. - *Findings:* Richter et al. conclude in favor of MAE over contrastive methods at scale. We highlight different SSL methods perform optimally for different tasks and modalities, noting specialized SSL methods (scVI, CLAIRE) and scFMs excel at uni-modal batch correction, while generic SSL methods (VICReg, SimCLR) dominate for multi-modal data. **Figure G1** is an overview of benchmarked models showing structural differences. The visualized models can be applied to any input type. As we do not reference modalities in this figure, we're uncertain about the specific concern. We welcome suggestions to improve clarity. We appreciate the time you've taken to review our manuscript and provide your comments. We welcome any additional questions you might have. --- Rebuttal Comment 1.1: Comment: Thank you for answering my questions, I raised my scores as weak acceptance as I still think the conclusions are not very interesting and it overlaps a lot with the previous SSL-based method for single-cell data analysis. But this work is very solid so I still want to vote for acceptance.
Summary: This paper proposes a self-supervised learning (SSL) benchmark for single-cell data. The authors tried twelve representative SSL methods and conducted comprehensive evaluations on eight datasets across three downstream tasks. The experimental designs are technically sound, and the paper is well-organized and written. Claims And Evidence: This work benchmarks the performance of representative SSL methods on single-cell data. The comparisons and ablation studies are well-designed, and the results could be a valuable reference for researchers interested in this area. Methods And Evaluation Criteria: The authors evaluated twelve representative SSL methods on eight datasets across three downstream tasks, including batch correction, cell type annotation, and missing modality prediction. The evaluations are comprehensive enough to help understand the effectiveness of different SSL methods on single-cell data. Ablation studies are also conducted to help interpret the importance of each component in the SSL methods. Theoretical Claims: This work has no theoretical claims. Experimental Designs Or Analyses: The chosen SSL methods and downstream tasks for benchmarking are representative. The experiments are well-designed to reflect the performance of different SSL methods and the effectiveness of different components in method design. Supplementary Material: The authors provided experimental details and additional results in the supplementary material, which is clear and appropriate. Relation To Broader Scientific Literature: This work could be a helpful reference for researchers interested in single-cell representation learning. Essential References Not Discussed: The current benchmarked methods are mostly discriminative ones. 
Currently, there are some generative single-cell SSL methods such as scBERT (scBERT as a large-scale pretrained deep language model for cell type annotation of single-cell RNA-seq data, Nature Machine Intelligence 2022) and Geneformer (Transfer learning enables predictions in network biology, Nature 2023). The authors need to include these generative SSL methods in the benchmark as well. Besides, the authors are also encouraged to include recent batch correction and data integration methods for single-cell data, such as Single-Cell RNA-Seq Debiased Clustering via Batch Effect Disentanglement (TNNLS 2024), scBridge embraces cell heterogeneity in single-cell RNA-seq and ATAC-seq data integration (Nature Communications 2023), etc. Other Strengths And Weaknesses: The current datasets used for evaluation are all relatively small. It would be interesting to see whether the same conclusions hold on much larger datasets such as the Human Fetal Atlas and Mouse Atlas. When discussing the experimental results, the authors are encouraged to provide more in-depth explanations instead of plain descriptions. For example, in line 300, the authors write, ''these SSL methods prioritize batch correction over bio conservation as indicated by their high batch and low bio score.'' But why? More explanations are expected. Other Comments Or Suggestions: When evaluating different representation dimensions, more choices like 128 and 256 are expected. Questions For Authors: I expect the authors to respond to my previous concerns. In addition, for missing modality prediction, the authors use the average of the nearest neighbors in the observed modality as the prediction result. However, the characteristics and neighbors in the two modalities can often differ. In this case, is such a prediction paradigm reasonable? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. We appreciate your recognition that our paper could be a helpful reference for researchers interested in single-cell representation learning and that we conducted comprehensive evaluations. We address your questions and suggestions below and have integrated them into the updated manuscript to further improve the paper. **Additional Methods:** Following your and reviewer X2so's suggestions, we extended the benchmark to also include single-cell Foundation Models (scFMs). As requested, we included scBERT for cell type annotation of single-cell RNA-seq data in our benchmark. In addition, we added Geneformer and scGPT for batch integration and cell type annotation, see https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/table_1.pdf, https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/table_H2.pdf and https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/table_H3.pdf. We added scBridge to the introduction. Although scSSL-Bench currently leverages only RNA-seq and CITE-seq data, we cite scBridge and think it would be valuable to extend the benchmark with ATAC-seq integration later. Moreover, we included results for SCDC (https://ieeexplore.ieee.org/document/10086540) for uni-modal tasks, and scButterfly and scTEL for multi-omics tasks. We updated the tables accordingly; see details in https://anonymous.4open.science/r/scSSL-Bench-Rebuttal. **Dataset Scale and Diversity**: We agree that incorporating very large datasets like the Human Fetal Atlas (\~4 million cells) and Mouse Atlas (\~300 thousand cells) would further validate our benchmark's scalability. While time constraints prevented their inclusion in this revision, we explicitly note this as a direction for future work in the manuscript. 
To demonstrate scalability, we have evaluated SSL methods and baselines on Tabula Sapiens (\~1.1 million cells, https://cellxgene.cziscience.com/collections/e5f58829-1a66-40b5-a624-9046778e74f5), see new table https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/sapiens_bc.pdf. We'd also like to clarify that many of our existing datasets are substantial in size and widely used in the community, including Immune Cell Atlas (Conde et al., \~330,000 cells), multi-modal PBMC (Hao et al., \~160,000 cells), and BMMC (Luecken et al., \~90,000 cells), see https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/datasets.pdf. **Results Discussion:** In the updated paper, we explain, wherever possible, the trade-offs between batch correction and bio conservation. Specifically, when describing how certain SSL methods prioritize batch correction over bio conservation, we discussed potential reasons, such as the nature of the SSL methods, the impact of specific augmentations, and the methods' underlying loss functions. For instance, in line 300, batch correction is prioritized since the PBMC and Immune Cell Atlas datasets contain “harder” batch effects; SSL models try to correct them, neglecting bio conservation. **Representation Dimensions:** Our original rationale for selecting relatively small embeddings was guided by prior work in the literature, e.g., scVI www.nature.com/articles/s41592-018-0229-2, that demonstrated effective performance with lower-dimensional embeddings for single-cell analyses. We have extended the evaluation up to 1024 and come to the same conclusion. The representation dimensions of 64 or 128 reach a similar performance as 1024 while requiring less training time and memory for training and downstream tasks. We have updated Figure G4 with the following plots https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/projection_HIC.pdf and https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/projection_MCA.pdf.
**Averaging for Missing Modality Prediction**: We agree that the characteristics and neighborhood structures of two modalities may differ, potentially affecting prediction accuracy. However, our choice of averaging nearest neighbors follows a common practice in the literature, e.g., Concerto www.nature.com/articles/s42256-022-00518-z. Moreover, after adding scButterfly (suggested by reviewer m4qB), which can generate missing modalities, we get a slightly better Pearson correlation with nearest-neighbor averaging using scButterfly embeddings (0.856) than by generating proteins directly with scButterfly (0.84), see https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/table_H5.pdf. We acknowledge the concern regarding potential discrepancies between modalities and now add this point to the discussion, highlighting that exploring more sophisticated prediction paradigms is an interesting avenue for future research. We sincerely thank you for your thoughtful review and constructive suggestions, which have significantly improved the quality and clarity of our paper. We welcome any additional questions that the reviewer might have. --- Rebuttal Comment 1.1: Comment: I sincerely appreciate the effort the authors made in the rebuttal. My concerns have been well addressed and I would like to raise my score to accept.
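The nearest-neighbor averaging paradigm discussed in this thread can be sketched in a few lines. This is a minimal illustration on synthetic arrays, not the benchmark's implementation; the function name, shapes, and toy data are hypothetical:

```python
import numpy as np

def predict_missing_modality(query_emb, ref_emb, ref_protein, k=5):
    """Predict the missing modality (e.g., protein counts) for query cells by
    averaging the k nearest reference cells in a shared embedding space."""
    # Pairwise squared Euclidean distances between query and reference cells
    d2 = ((query_emb[:, None, :] - ref_emb[None, :, :]) ** 2).sum(-1)
    # Indices of the k closest reference cells for each query cell
    nn_idx = np.argsort(d2, axis=1)[:, :k]
    # Average the observed modality of those neighbors
    return ref_protein[nn_idx].mean(axis=1)

# Toy example: 3 query cells, 10 reference cells, 4 "proteins"
rng = np.random.default_rng(0)
ref_emb = rng.normal(size=(10, 8))
ref_protein = rng.normal(size=(10, 4))
query_emb = ref_emb[:3] + 0.01 * rng.normal(size=(3, 8))
pred = predict_missing_modality(query_emb, ref_emb, ref_protein, k=3)
print(pred.shape)  # (3, 4)
```

The reviewer's concern maps directly onto the distance computation: if neighborhoods in the embedding do not reflect neighborhoods in the missing modality, the average is taken over the wrong reference cells.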
Summary: The authors present scSSL-Bench, a single-cell data benchmark that integrates 12 different approaches and 8 different datasets. The authors run extensive experiments to answer three critical questions and provide invaluable insights and takeaways - This is no easy feat considering there are many moving parts due to different data augmentation, normalization, and training strategies. Claims And Evidence: Yes, the authors pose 3 research questions which are extensively supported by detailed experiments. Methods And Evaluation Criteria: Yes, it does. Theoretical Claims: No theoretical claims were made in the paper. Experimental Designs Or Analyses: I checked the experimental designs and analyses - Couldn't have been phrased better. Supplementary Material: All of them. Relation To Broader Scientific Literature: I think this is a timely addition that mostly reflects the current single-cell literature. Essential References Not Discussed: While I like how the authors extensively cited existing literature, I find the lack of any references to single-cell foundation models somewhat puzzling. Yes, there is a lot of hype around single-cell FMs, but still, I think a reader who doesn't understand the nuances might be confused by the lack of single-cell FMs. Other Strengths And Weaknesses: I really appreciate the gargantuan effort that the authors have put into curating the benchmark. Having carried out such an effort myself, I know it's no easy feat, trying to draw overall takeaways/insights with so many moving parts. I think this will be a great contribution to the field, where anyone not experienced enough in the single-cell field can use it to jumpstart their research. I haven't tried out the GitHub myself, but hopefully it is very straightforward to use for any newcomers. I hope the authors keep contributing to this benchmark, so that it can stand the test of time. Despite its strengths, I have a few minor comments that are holding me back from giving higher scores.
- I think the authors need to provide more detail on the 8 datasets used, e.g., sequencing technologies and how large the gene panel is for each dataset, since the readers would want to pick up the details right away. - My understanding is that there are way more publicly available single-cell datasets than these 8. Why were these 8 chosen? Are there future plans to add many more datasets to this? - As mentioned above, I find the lack of any discussion on single-cell FMs quite puzzling. Yes, there is a lot of hype, but I think they need to be either mentioned or benchmarked, since this will eventually be asked by end-users. - Since augmentation in training is really crucial and important, ideally the authors should provide one or two other SSL methods with augmentation ablations (only VICReg is shown in the paper so far), to demonstrate the trend holds. Other Comments Or Suggestions: - The dimension ablation seems intriguing; I am more used to much higher latent dimensions (e.g., 512 in scGPT). So what is the rationale for only staying in the low regime (8,16,32,...)? Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and highlighting the relevance of our benchmark and the quality of our experiments. In the following, we address your questions and suggestions, which have improved the quality of the paper and the benchmark. **Single-cell FM:** We acknowledge that Foundation Models (scFMs) have recently gained significant attention in the single-cell genomics community. Although scSSL-Bench evaluates a diverse set of SSL approaches, we added scGPT, Geneformer, and scBERT to the comparison, see https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/table_1.pdf and https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/table_H3.pdf. As suggested, we will concisely discuss and cite the scFMs in the Introduction to provide the necessary context for readers and will update the discussion according to our new results, which showcase that scFMs demonstrate strong performance for bio conservation in batch integration and good performance in cell typing. Our analysis reveals a substantial performance improvement in scGPT after fine-tuning compared to its zero-shot performance, underscoring the importance of fine-tuning scFMs. We will also add a section to the appendix detailing hyperparameters and fine-tuning. We also compare to one more single-modality method SCDC (https://ieeexplore.ieee.org/document/10086540) and two more multi-modal methods, scTEL (www.nature.com/articles/s41540-024-00484-9) and scButterfly (www.nature.com/articles/s41467-024-47418-x), as suggested by Reviewer m4qB, see https://anonymous.4open.science/r/scSSL-Bench-Rebuttal for new results. **Datasets Overview**: We have extended the dataset overview provided in Appendix B to include the sequencing technologies for each dataset, the number of features (genes, proteins), and the number of cells of a specific cell type, see details in https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/datasets.pdf. 
For space considerations, we could not add this additional information to the main text. **Why These Datasets:** The selected datasets represent commonly used and established benchmarks in the single-cell literature, enabling direct comparisons with previous studies. For instance, these datasets have been employed in widely referenced studies and benchmarks (e.g., www.nature.com/articles/s42256-024-00934-3, www.nature.com/articles/s42256-022-00518-z, www.nature.com/articles/s41540-024-00484-9, www.nature.com/articles/s41467-024-47418-x, https://academic.oup.com/bioinformatics/article/39/3/btad099/7055295, https://academic.oup.com/bib/article/23/5/bbac377/6695268), providing comparability across existing work. We now highlight this aspect in the manuscript. Furthermore, scSSL-Bench is engineered for scalability and easy extension. Our implementation with Hydra, a configuration management framework that enables flexible experiment configuration, parameter sweeping, and support for HPC environments, makes adding new datasets straightforward with minimal adjustments. This allows researchers to extend the benchmark with additional datasets. To this point, we also added the Tabula Sapiens dataset with 1.1 million cells to our evaluation, see https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/sapiens_bc.pdf. **Augmentation Analysis**: Initially, we demonstrated augmentation ablations using VICReg due to its consistently strong performance. Following the reviewer's recommendation, we have extended our augmentation ablations to include SimCLR https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/augmentations_simclr.pdf and MoCo https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/augmentations_moco.pdf. This additional analysis confirms our original finding that masking is the most effective augmentation technique across all three SSL methods. Additionally, CrossOver shows competitive performance, especially for SimCLR.
**Representation Dimensions:** Our original rationale for selecting relatively small embeddings was guided by prior work in the literature, e.g., scVI www.nature.com/articles/s41592-018-0229-2, that demonstrated effective performance with lower-dimensional embeddings for single-cell analyses. We have extended the evaluation up to 1024 and come to the same conclusion. The representation dimensions of 64 or 128 reach a similar performance as 1024 while requiring less training time and memory for training and downstream tasks. We have updated Figure G4 with the following plots https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/projection_HIC.pdf and https://anonymous.4open.science/r/scSSL-Bench-Rebuttal/projection_MCA.pdf. We appreciate the reviewer's thoughtful feedback and have made substantial improvements to the manuscript accordingly. These changes have significantly strengthened our work and we hope scSSL-Bench will serve as a valuable resource for the single-cell and machine learning communities. We are happy to answer any additional questions that the reviewers might have. --- Rebuttal Comment 1.1: Comment: All my questions were thoroughly answered (more than sufficient) and I have adjusted my score accordingly - I think it's a really good contribution to the community.
Parametric Scaling Law of Tuning Bias in Conformal Prediction
Accept (poster)
Summary: The manuscript explores the phenomenon of tuning bias in the field of conformal prediction, which is a statistical method used to ensure that prediction intervals or sets cover the true value with a specified probability. The focus is on how the tuning of parameters, when done on the same dataset used for calibration, affects the coverage accuracy of the prediction models. Key points from the paper include: 1. Tuning Bias Definition and Impact: Tuning bias is defined as the coverage gap that arises when the same data set is used for both parameter tuning and calibration. The paper empirically demonstrates that this bias is generally negligible for simple parameter tuning across various conformal prediction methods. 2. Parametric Scaling Law: The study observes that the magnitude of tuning bias increases with the complexity of the parameter space and decreases with the size of the calibration set. This relationship is formalized through the derivation of upper bounds on the tuning bias, which align with empirical observations. 3. Theoretical Framework: A theoretical framework is established to quantify the tuning bias, using empirical process theory within the extended parameter space. This framework provides a rigorous basis for understanding and predicting the behavior of tuning bias under different conditions. 4. Empirical Studies: The paper includes extensive empirical evaluations involving methods like RAPS, SAPS, score aggregation, and confidence calibration methods, using datasets like CIFAR-100 and applying models like ResNet-18. These studies confirm the scaling laws and the minimal impact of tuning bias under typical conditions. 5. Reduction Strategies: Potential strategies to mitigate tuning bias are discussed, focusing on increasing calibration set size or reducing parameter space complexity. 
Practical challenges such as data scarcity are acknowledged, suggesting order-preserving regularization as a promising approach to manage tuning bias effectively. 6. Contributions and Future Directions: The primary contributions are identifying the negligible effect of tuning bias in many scenarios, formalizing the parametric scaling law of tuning bias, and proposing theoretical models to understand and predict tuning bias. The paper suggests further research could explore structured parameter spaces to refine the precision of tuning bias predictions. Overall, this paper provides a significant theoretical and empirical foundation for understanding tuning bias in conformal prediction, offering insights that can help in designing more reliable machine learning models, particularly in settings where rigorous uncertainty quantification is critical. Claims And Evidence: The exact definition of exchangeability appears very late in the paper; it could be formalized and highlighted a bit earlier. Also, a clearer discussion on the relationship between exchangeability and the use of the same dataset for tuning and calibration could be further highlighted. I am not sure if I am correct, but in many previous works, a scaling law typically indicates that larger parametric spaces / data scales lead to improved performance or robustness. However, in this paper, the authors seem to understand the problem of conformal prediction from a generalization perspective, exploring the impacts of parameter space and data scale, which is quite in the style of classical machine learning. So, I am concerned that the term “Scaling Law” in the title may be somewhat misleading. I am quite confused about two seemingly conflicting claims: “the tuning bias is negligible for simple parameter tuning in conformal prediction methods” and “the parametric scaling law of the tuning bias increases with parameter space complexity and decreases with calibration set size”.
From my understanding, the former states that the violation of exchangeability is not actually a big deal, yet the latter states that it still has side effects, especially influenced by the data scale and parameter scale. I hope the authors can clarify this to address my misunderstanding. Methods And Evaluation Criteria: I think the authors could further clarify how CovGap is computed in practice, which seems critical for their empirical analyses. Also, for the potential solutions section, the mentioned methods are quite simple. However, for a theory-style paper, maybe that is not a big deal. Theoretical Claims: I think the authors would like to position this work as a theory-style paper, which is evident from the large amount of discussion of theoretical analysis and the small amount of experimental verification. I did not check every detail of the theoretical analysis, but it seems like an application of the PAC framework to the CP problem, where the dataset size is echoed by n and the parameter space is echoed by the VC dimension. I would raise two questions: 1) What are the challenges, uniqueness, and contributions of this work from a theoretical perspective, as I think this paper should be categorized as a theoretical analysis work? 2) Besides echoing the observations within Section 3, what other observations can we draw from the new theoretical analysis? Experimental Designs Or Analyses: From Figure 1, it seems that the CovGap values are small except for VS and ConfTr ft. Therefore, why not just choose other CP methods, or is there some particular interest in using methods with more hyper-parameters like VS and ConfTr? Supplementary Material: The supplementary material is satisfactory, with details about existing works and their theoretical derivations. I quickly went through the appendix and did not find any obvious mistakes or errors.
Relation To Broader Scientific Literature: This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of this work, none of which I feel must be specifically highlighted. Essential References Not Discussed: The authors fully review the existing literature with many concrete examples, therefore I think the references are good enough. Other Strengths And Weaknesses: It seems that the authors consider the situation where the same dataset is used for tuning and calibration, aiming to understand its impact on tuning bias. However, I question whether it is truly difficult to have another hold-out set, as the validation dataset can easily be separated into two parts, one for tuning and one for calibration. I think the authors could further emphasize why the considered setup is pragmatic, or whether this problem has other interesting aspects from an academic point of view. Other Comments Or Suggestions: NA Questions For Authors: Please see the comments above. Code Of Conduct: Affirmed. Overall Recommendation: 4
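The tuning-bias phenomenon this review discusses can be reproduced in a toy simulation. This is not the paper's experiment: synthetic Gaussian data stand in for nonconformity scores, and a finite set of random score directions plays the role of the parameter grid. Picking the candidate with the smallest calibration quantile and then reusing the same data for calibration undercovers, while a fresh holdout restores the target coverage:

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Split-conformal quantile: the ceil((n+1)(1-alpha))-th smallest score."""
    n = len(scores)
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    return np.sort(scores)[k - 1]

rng = np.random.default_rng(0)
alpha, d, n_cal, n_test, n_rep = 0.1, 20, 30, 2000, 200
n_cand = 200  # size of the finite "parameter space" being tuned over

cov_same, cov_hold = [], []
for _ in range(n_rep):
    W = rng.normal(size=(n_cand, d))      # candidate score directions
    cal = rng.normal(size=(n_cal, d))     # calibration set (reused for tuning)
    hold = rng.normal(size=(n_cal, d))    # fresh holdout of the same size
    test = rng.normal(size=(n_test, d))
    # "Tuning": choose the candidate whose calibration quantile is smallest
    k = min(int(np.ceil((n_cal + 1) * (1 - alpha))), n_cal)
    qs = np.sort(np.abs(cal @ W.T), axis=0)[k - 1]   # quantile per candidate
    j = qs.argmin()
    s_test = np.abs(test @ W[j])
    cov_same.append((s_test <= qs[j]).mean())        # same data: biased down
    q_h = conformal_quantile(np.abs(hold @ W[j]), alpha)
    cov_hold.append((s_test <= q_h).mean())          # fresh holdout: valid

print(f"target {1 - alpha:.2f} | same-data {np.mean(cov_same):.3f} "
      f"| holdout {np.mean(cov_hold):.3f}")
```

Increasing `n_cand` (a richer parameter space) widens the gap, while increasing `n_cal` shrinks it, matching the scaling law the review summarizes.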
Rebuttal 1: Rebuttal: Thanks for your positive review and insightful feedback. **1. Exchangeability definition/discussion** Thank you for the suggestion. We agree that introducing the exact definition earlier can improve the clarity. In the current version, we introduce exchangeability only as a well-known assumption of CP (with references). In the final version, we will add the exact definition of exchangeability in Sec. 2, as the reviewer suggested. We present in Subsec. 4.1 (Lines 261-274) why using the same dataset violates exchangeability. We will improve the clarity of this part in the final version. **2. Clarification of the term "scaling law"** Thank you for raising the concern. Recently, the term "scaling law" frequently appears in the context of large language models - the performance scales up with parameter/data numbers. We want to clarify that the above scaling law is formally termed "neural scaling law" (see Wikipedia), a special case of empirical scaling law. Notably, the term 'scaling law' in deep learning refers to a broader concept that describes the relationships between functional properties of interest (such as tuning bias in our work) and characteristics of the model architecture or dataset (e.g., model size) [1]. To improve the clarity, we will add a concise description of "scaling law" with references in the final version. **3. Seemingly conflicting claims** Thank you for highlighting the potential misunderstanding. We want to clarify that 'simple parameter tuning' in the first claim refers to methods with few parameters, such as RAPS and TS. As stated in Line 135, we excluded methods with a larger number of parameters, such as VS and the fine-tuned version of ConfTr. We use the first claim to demonstrate that tuning bias can be negligible in certain cases, which motivates the subsequent analysis.
This does *not* conflict with the second claim regarding the parametric scaling law, which illustrates when tuning bias can be either small or large. To enhance clarity, we will revise the first claim to state: 'The tuning bias is not always significant for parameter tuning in ...' **4. Concerns on CovGap** CovGap is computed empirically as the absolute difference between the target coverage ($1-\alpha$) and the empirical coverage: $$ |(1-\alpha) - \frac{1}{n'} \sum_{i=1}^{n'} \mathbb{1}(y_i \in \hat{C}(x_i))|, $$ where $n'$ is the size of the test set, and $ \hat{C}(x) $ is the CP set for an input $x$. We will add a concise description of CovGap in the final version. As for the method, we present potential solutions for mitigating tuning bias as an extension (see response #1 to reviewer mAZq). We hope the guideline can inspire more future work to design specific methods for addressing this challenge. **5. Theoretical contribution and uniqueness** Thank you for the recognition. In this work, we formulate the tuning bias within the ERM framework and propose a general theory that bounds the tuning bias via PAC and empirical process theory, explaining the empirical scaling law. Then, we derive the bias bounds in the finite and infinite parameter cases, respectively. In particular, we also provide the specific bounds of tuning bias in various tuning methods. Lastly, we provide the theoretical results to support the two practical guidelines addressing the challenge. We list the theoretical contributions as follows: 1. **Problem formulation**: This work is the first to formulate the "tuning bias" arising from dataset reuse, which provides a new direction for understanding non-exchangeable CP. 2. **CP-specific complexity analysis**: We derive the bounds of tuning bias with the complexity measures (e.g., VC dimension) of the *CP-specific hypothesis class*, which can be developed as a theoretical toolkit for refined bounds of general biases in CP. 3.
**Analytical framework for tuning methods**: We establish a framework to derive the bias bounds for various tuning methods and present several examples. This framework can be utilized as a theoretical justification for specially designed methods in subsequent works. **6. Why not choose other methods?** Thank you for the insightful question. As discussed in Sec. 5, we mitigate the tuning bias by reducing the number of parameters. A specific example is to use alternative methods with fewer parameters (e.g., switching from VS to TS). However, this way may be impractical when complex methods are necessary to obtain tighter CP sets (ConfTr), improved model calibration (VS), and so on. This question highlights our contribution of revealing the scaling law of tuning bias, which provides guidelines for designing the tuning process in various scenarios. **7. Why not split the dataset?** Thank you for raising the concern. We refer to reviewer jecM's response #3 to answer the issue. As suggested by the reviewer, we will strengthen the motivation in Sec. 2 of the final version. [1] Villalobos, Pablo. "Scaling Laws Literature Review." Published online at epochai.org (2023). --- Rebuttal Comment 1.1: Comment: Thanks for the authors' responses, and most of my concerns have been addressed. I agree with Reviewer 4yEA on the contribution of this work and am willing to raise my original score to 4. --- Reply to Comment 1.1.1: Comment: Thank you for reviewing our response and increasing the score. We are delighted that our response addressed your concerns. Your feedback is highly valuable in improving the quality of this work.
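The CovGap formula given in the rebuttal above translates directly into code. A minimal sketch with a hypothetical toy example (the function name and data are illustrative, not from the paper's codebase):

```python
import numpy as np

def cov_gap(pred_sets, labels, alpha):
    """|(1 - alpha) - empirical coverage| over a test set, where coverage is
    the fraction of points whose true label lies in its prediction set."""
    covered = np.array([y in s for s, y in zip(pred_sets, labels)])
    return abs((1 - alpha) - covered.mean())

# Toy example: 4 prediction sets at target coverage 0.9
sets = [{0, 1}, {2}, {1, 3}, {0}]
labels = [0, 2, 3, 1]                 # 3 of 4 covered -> coverage 0.75
print(round(cov_gap(sets, labels, alpha=0.1), 3))  # 0.15
```

A CovGap near zero means the empirical coverage matches the nominal level; tuning bias shows up as a systematic positive gap when the calibration data are reused for tuning.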
Summary: In this paper, the authors focus on the tuning bias produced by parameter tuning in many conformal prediction methods. First, they reveal that the tuning bias is negligible for simple parameter tuning in many conformal prediction methods. Then, the authors establish a parametric scaling law, showing that tuning bias increases with parameter space complexity and decreases with calibration set size, supported by both empirical evidence and a theoretical framework using constrained ERM and VC dimension. The paper also discusses solutions to mitigate tuning bias, such as increasing the calibration set size or reducing parameter space complexity through fewer parameters or regularization techniques. Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence, including a combination of empirical results, theoretical analysis, and practical considerations. Methods And Evaluation Criteria: The evaluation criteria make sense for the problem of tuning bias in conformal prediction. The paper conducts extensive empirical studies across various conformal prediction methods (e.g., RAPS, SAPS, score aggregation, temperature scaling, vector scaling, C-Adapter, ConfTr) on benchmark datasets such as CIFAR-10, CIFAR-100, and ImageNet. These datasets are standard in machine learning and are suitable for evaluating the performance of conformal prediction methods. By varying the calibration set size and the complexity of the parameter space, the paper effectively demonstrates the impact of these factors on tuning bias, which aligns with the theoretical analysis. Theoretical Claims: The theoretical claims are supported by correct and logically sound proofs, grounded in established statistical learning theory. For example, the derivation of the scaling law is supported by Proposition 4.2 and Proposition 4.6, which provide bounds for the tuning bias in finite and infinite parameter spaces, respectively.
These propositions use classical concentration inequalities and VC dimension bounds. While the proofs could benefit from more detail and rigor in certain steps, the overall correctness of the theoretical analysis is robust. Experimental Designs Or Analyses: The experimental designs and analyses in the paper are generally sound and valid. This work would benefit if the authors provided more detailed information on hyperparameter settings, such as the number of repetitions. I believe it would improve the reproducibility and robustness of the findings. Supplementary Material: I roughly checked the experiment results and proofs. Relation To Broader Scientific Literature: In summary, the paper makes several key contributions that advance the understanding of tuning bias in conformal prediction, both theoretically and empirically. In particular, this work provides new insights and practical guidelines for managing tuning bias in real-world applications: 1. Analyzing the Tuning Bias in Conformal Prediction; 2. Presenting the Parametric Scaling Law of Tuning Bias; 3. A Theoretical Framework to quantify the upper bound; 4. Practical Guidelines to reduce tuning bias. Although some works investigated the non-exchangeability of using the same dataset (as stated in the Introduction), this work is novel as it provides an extensive study on quantifying the tuning bias of conformal prediction, which could be pretty interesting to the community. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths 1. The problem studied in this paper is significant. It is practical to use the same hold-out data for conformal calibration and parameter tuning in data-scarce scenarios. Therefore, the findings have practical implications for improving the reliability of conformal prediction in real-world applications. 2. The theoretical analysis is solid.
The paper’s theoretical framework offers a new perspective on tuning bias, which could influence future research in statistical learning and uncertainty quantification. 3. The paper is well-structured, with clear delineation between empirical studies, theoretical analysis, and practical solutions. 4. The proposed guideline is useful. The discussion of potential solutions to mitigate tuning bias, such as increasing the calibration set size and reducing parameter space complexity through regularization, provides actionable insights for practitioners. Weaknesses 1. (Minor issue) The writing of experimental results is a little ambiguous. I encourage the authors to improve the writing of Subsections 3.1 and 3.2, providing more details of the experimental settings and clearer observations. Other Comments Or Suggestions: N/A Questions For Authors: Please explain why C-adapter achieves a much smaller tuning bias but vector scaling cannot? It seems C-adapter tunes more parameters than VS. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. Below, we address your concerns point by point. **1. C-Adapter vs. Vector Scaling** > Please explain why C-adapter achieves a much smaller tuning bias but vector scaling cannot? It seems C-adapter tunes more parameters than VS. Thank you for the insightful question. It prompted us to delve deeper into the structural differences between C-Adapter and Vector Scaling (VS). In particular, we find that the **order-preserving regularization** in C-Adapter can significantly decrease the tuning bias. Formally, we propose a new proposition: **Proposition:** *Let $f$ be a logit function for classification with $K$ classes. The matrix scaling $g(x) = W f(x) + b$ is order-preserving if and only if $W$ has the form $W = a I + \mathbf{1} v^T$ for some scalar $a > 0$ and vector $v \in \mathbb{R}^K$, and $b$ is a constant vector (i.e., $b_j = b_{j'}$ for all $j, j'\in [K]$). Here, $I$ is the $K \times K$ identity matrix and $\mathbf{1}$ is the $K$-dimensional vector of all ones.* Here, we regard C-Adapter as a special case of matrix scaling with order-preserving regularization. The above proposition shows that **the order-preserving regularization reduces the dimension of the parameter space from $K^2 + K$ to $K+2$**, which is much smaller than VS with its dimension being $2K$. Based on the parametric scaling law (Section 3), we explain why C-Adapter can achieve lower tuning bias than VS. Thank you again for inspiring us to reveal the impact of order-preserving regularization; we will add it to the discussion in the final version. **2. Experimental details & reproducibility** > The experimental designs and analyses ... would benefit if the authors provided more detailed information on hyperparameter settings, such as the number of repetitions. I believe it would improve the reproducibility and robustness of the findings. Thank you for the suggestion.
We provide the experimental settings in Appendix A. In particular, we repeat all experiments with 30 runs and present the standard deviations in Fig. 2. In the final version, we will improve the writing of experimental setup to enhance the clarity and reproducibility. **3. Writing clarity (subsections 3.1/3.2)** > (Minor issue) The writing of experimental results is a little ambiguous. I encourage the authors to improve the writing of Subsections 3.1 and 3.2, providing more details of the experimental settings and clearer observations. Thank you for pointing out the writing issue. We will revise Subsections 3.1 and 3.2 to clearly present the experimental settings and the key observations, in the final version. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed responses. My concerns are clarified now. Given the contribution of this excellent work, I support the acceptance now. --- Reply to Comment 1.1.1: Comment: Thank you for reviewing our response and raising your score. We are pleased that our response addressed your concerns, which also improves the quality of this work.
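The proposition in the rebuttal above is easy to check numerically. A small sketch (synthetic logits; not the authors' code) verifying that a matrix of the form $W = aI + \mathbf{1}v^T$ with a constant bias preserves the class ranking:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5
a, v = 2.0, rng.normal(size=K)
# Order-preserving form from the proposition: W = a*I + 1 v^T, constant bias b
W = a * np.eye(K) + np.outer(np.ones(K), v)
b = 0.7 * np.ones(K)

f = rng.normal(size=K)   # logits of one sample
g = W @ f + b            # scaled logits: g_i = a*f_i + (v . f) + 0.7
# Since g_i - g_j = a*(f_i - f_j) with a > 0, the class ranking is unchanged
print(np.array_equal(np.argsort(f), np.argsort(g)))  # True
```

Such a W has only K + 2 free parameters (a, v, and one shared bias), versus K^2 + K for unconstrained matrix scaling, which is the dimension reduction the rebuttal uses to explain C-Adapter's smaller tuning bias.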
Summary: This paper points out the problem that the exchangeability assumption of conformal prediction does not hold if the holdout set (applied for parameter tuning) and the calibration set are identical. A parametric scaling law is proposed such that the tuning bias increases with parameter space complexity and decreases with calibration set size. A theoretical study is conducted to provide an upper bound on the bias. Potential solutions, like regularization during tuning, are provided. Claims And Evidence: Claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: This work does not propose an implementable solution (i.e. an algorithm); only an intuitive solution is provided. The notation for a prediction set is not consistent: it appears as \hat{C} and C in different places. Do they refer to different concepts? The metric CovGap is introduced clearly. In line 80, the prediction set C(x) is related to the test input x; however, CovGap(C) is only a function of C, and x disappears. Do you mean CovGap is the expectation of the coverage gap over all test inputs? The same problem occurs with other metric definitions, including TuningBias(C). The empirical study is conducted on the APS score, but the score is not cited at line 163, only in Appendix A. Theoretical Claims: Based on Eq.(2), a ceiling function should be added at the subscript of Q in Eq.(4). The upper bound \mathbb{E}\mathcal{R}_\Lambda is not clearly introduced when it first appears in line 232, only at line 260. Also, it is not clarified over what space/set the expectation is calculated. Experimental Designs Or Analyses: Experiments are only conducted on two datasets, CIFAR-100 and ImageNet, and there is no regression task. It should be stated if the work only focuses on conformal prediction for classification tasks. There are no standard deviation results in Table 1. The authors do not mention whether the experiments in Table 1 were conducted multiple times.
Supplementary Material: I appreciate that the authors provide additional experimental results in the Appendix. Yet there is no explanation of why ConfTr (ft.), TS, and VS perform so differently on CIFAR-10, CIFAR-100, and ImageNet; this should be related to the characteristics of the three datasets.

Relation To Broader Scientific Literature: The problem of overfitting the holdout set, which should not be applied for calibration, is somewhat novel, and I appreciate the experimental and theoretical work. The goal of reducing the number of parameters and applying regularization is to prevent the overfitting issue. People could easily come up with these ideas without the theoretical analysis; more sophisticated solutions are expected for higher impact.

Essential References Not Discussed: The related works are introduced extensively.

Other Strengths And Weaknesses: The investigated problem is novel and sufficient theoretical work is conducted. However, the presentation should be further improved. For instance, the same notations are reused for different concepts, such as \hat{C} in Eq. (3) and Eq. (5). Legends for 'same' and 'holdout' are not consistent between Figures 2 and 3. Writing should be improved; in line 367, 'Theoretically. we provide a theoretical result...'. The poor presentation makes the paper hard to understand and hinders me from validating the correctness of some theories and proofs. Besides, as mentioned above, the work lacks a more insightful solution to the problem. The experiments on the current intuitive solutions are not sufficient either.

Other Comments Or Suggestions: There is a typo in line 246. The notation system should be redesigned. Also, the labels 'corollary' and 'proposition' are misused: intermediate conclusions or experimental observations should not be stated as corollaries and propositions (such as Propositions 5.1 and 5.2).

Questions For Authors: Over what space is \mathbb{E}\mathcal{R}_\Lambda computed? CovGap is a function of alpha in line 233.
Is CovGap also a function of alpha in line 81?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the nuanced and constructive feedback. Below, we address your concerns point by point.

**1. Lack of a sophisticated solution**

We want to clarify that the primary objective of this work is to provide a comprehensive understanding of tuning bias in conformal prediction rather than to develop a specific, sophisticated solution at this stage. To this end, we present the main contributions of this study as follows: identifying the tuning bias, introducing a scaling law for the tuning bias, and establishing a theoretical framework to quantify its upper bound. Furthermore, we discuss potential solutions for mitigating tuning bias as an extension. Rather than introducing a novel methodology, we propose **two effective guidelines** to address tuning bias in real-world applications: reducing the parameters of model tuning (e.g., adopting a more parameter-efficient strategy) and implementing regularization techniques (e.g., an order-preserving constraint). These guidelines provide **actionable**, **theory-driven** solutions and establish a foundation for future research to develop specialized methods tackling this challenge. Therefore, we believe this work not only **builds a robust theoretical framework** for understanding tuning bias—as recognized by Reviewer jaH5—but also **charts a clear path** for subsequent studies in this field.

**2. Task scope**

Thank you for the suggestion. We clarify that our analysis reveals a general phenomenon of conformal prediction in both classification and regression tasks. Section 3 focuses on classification, as many parameter tuning methods (like TS/VS and ConfTr) are designed for classification tasks. Here, we provide new results for the regression case, following previous work [1], with a target coverage of 90% and 30 repetitions. We present the CovGap and TuningBias (in percentage) in the table below:

|Method|Varying #models||Varying Cal. Size||
|-|-|-|-|-|
||40|160|100|500|
|Same|5.63|6.27|5.41|3.27|
|Hold-out|3.17|2.56|3.91|2.90|
|**Tuning-bias**|2.46|3.71|1.50|0.37|

From the table above, we validate the parametric scaling law of tuning bias on regression tasks. In addition, our theoretical framework is general across tasks. We will clarify the task scope and add the above results and experimental details in the final version.

**3. Table 1 - Standard deviations/repeats**

Thanks for the suggestion. In the current version, we present average results over a few runs in Table 1. Here, we update the results (in percentage) with 30 runs:

| Method | CIFAR-100 | ImageNet |
|-|-|-|
|TS| **0.59 ± 0.38** | **0.43 ± 0.29** |
|VS| 1.63 ± 0.76 | 6.43 ± 0.53 |
| ConfTr (ft.) w/ OP | **0.52 ± 0.37** | **0.40 ± 0.31** |
| ConfTr (ft.) w/o OP | 6.15 ± 0.86 | 21.68 ± 0.58 |

The new results lead to the same conclusion as the previous version. We will update the table in the final version.

**4. Performance gap between various datasets**

Thank you for the suggestion. It is worth noting that the parameter counts of those tuning methods (e.g., VS and C-Adapter) grow with the number of classes in the dataset (see Line 354). Thus, datasets with more classes require more parameters during tuning, leading to a larger tuning bias. This explains why those methods perform differently across datasets. We will add a brief explanation in Appendix B of the final version.

**5. Concerns of presentation**

Thank you for raising the writing concerns. We agree that a clearer and more consistent notation system can benefit the paper a lot:

1. **CovGap**: Both CovGap and TuningBias are dataset(distribution)-level metrics, not instance-level. In particular, CovGap measures the absolute difference between the target coverage and the actual coverage on the dataset/distribution, see Line 81. In addition, CovGap is a function of alpha, but we omit the alpha for simplicity.
We will fix the notation issue in Lines 80-81 and update the notation to ensure consistency in the final version.
2. **Prediction set**: $\hat{C}$ in Eq. (3) and (5) is the empirical form of the CP set, where the *hat* notation emphasizes that it is associated with observations. $C$ denotes general-form CP sets that may be assigned without observations (e.g., oracle CP sets).
3. **Clarify $\mathcal{R}_\Lambda$**: It is the supremum of an empirical process. In the current version, we define it in Line 239 and rewrite it in Line 260. The expectation in $\mathbb{E}\mathcal{R}_\Lambda$ is taken over the distribution of the calibration set.
4. **Propositions**: Prop. 5.1 and 5.2 are theoretical derivations (see proofs in App. I and J), not empirical observations.
5. **Other issues**: We will add proper citations, fix the typos, and improve clarity in the final version.

Thank you again for the nuanced review.

[1] Ruiting Liang, Wanrong Zhu, and Rina Foygel Barber. "Conformal Prediction after Efficiency-Oriented Model Selection." arXiv preprint, 2024. https://doi.org/10.48550/arXiv.2408.07066
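As a minimal illustration of the dataset-level CovGap described in point 1 above (a toy sketch with hypothetical prediction sets and labels, not the paper's implementation):

```python
import numpy as np

def cov_gap(pred_sets, labels, alpha):
    """Dataset-level CovGap: |target coverage - empirical coverage|, in percent.

    pred_sets: one prediction set (a Python set of labels) per test point
    labels:    the true labels
    alpha:     miscoverage level, so the target coverage is 1 - alpha
    """
    covered = np.mean([y in s for s, y in zip(pred_sets, labels)])
    return 100.0 * abs((1.0 - alpha) - covered)

# toy example: 3 of 4 test points are covered at alpha = 0.1
sets = [{0, 1}, {2}, {1, 3}, {0}]
ys = [0, 2, 3, 1]
print(round(cov_gap(sets, ys, alpha=0.1), 1))  # -> 15.0
```

As clarified above, CovGap aggregates over the whole test distribution rather than over a single input, which is why x does not appear as an argument of the metric.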
Summary: This paper finds that the coverage gap from using the same dataset for tuning and calibration is negligible for most conformal prediction methods. The paper also observes a scaling law for how parameter space complexity and calibration set size influence the tuning bias. It then proposes a theoretical framework to quantify tuning bias and gives a theoretical proof of the scaling law. Finally, it discusses two solutions for reducing tuning bias based on the scaling law.

Claims And Evidence: The claims are clear, and the evidence is convincing.

Methods And Evaluation Criteria: For the methods, there is a risk of overfitting to the specific characteristics of the calibration data used in the study, particularly if the separation between tuning and calibration is not well-managed. This could lead to models that perform well on specific dataset characteristics but generalize poorly to new data, undermining the reliability and utility of the predictions in practical applications. These drawbacks underscore the need for careful application and further testing of these methods across various settings and conditions to fully understand their limitations and potential.

Theoretical Claims: The theoretical claims are solid, with proofs.

Experimental Designs Or Analyses: Although the paper discusses how tuning bias scales with the complexity of the parameter space, there is a lack of experimental results showing how reducing the number of parameters could help reduce the bias.

Supplementary Material: I cannot find supplementary material that supports the experimental results (e.g., code).

Relation To Broader Scientific Literature: While the paper contributes interesting findings on tuning bias and its scaling laws, it may not adequately integrate these contributions with existing theories or frameworks within the broader field of machine learning.
Even within the field of conformal prediction, how the findings inform future research directions is not mentioned.

Essential References Not Discussed: All the essential references are discussed.

Other Strengths And Weaknesses: I think this paper is not well-motivated: it may not sufficiently demonstrate the practical necessity of investigating tuning bias where the tuning and calibration datasets are the same. In most real-world applications and existing research, these datasets are intentionally kept separate to avoid overfitting and ensure the model's generalizability. The necessity of studying what happens when they overlap might not be convincingly argued, making the motivation behind this research seem theoretical rather than practical. If the paper leans heavily on theoretical justifications without clear paths to application or examples of real-world scenarios where such tuning bias issues prominently occur, it might reinforce the impression that the motivation is more academic than practical.

Other Comments Or Suggestions: There are not many typos or notation errors, but I recommend typesetting TuningBias and CovGap in the formulas in upright text form.

Questions For Authors: No questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your positive and valuable feedback.

**1. Results of reducing parameter numbers:**

Thank you for the suggestion. In the manuscript, we presented two pieces of empirical evidence to validate the effect of reducing the number of parameters:

1. TS vs. VS (Table 1, Fig. 1 and 3): we present a pilot study showing that TS, with fewer parameters, achieves much smaller tuning biases than VS, with more parameters.
2. Experiments on the scaling law (Fig. 2(a)): we analyzed the correlation between the number of parameters and the tuning bias by freezing different numbers of parameters within VS. The results show that increasing the number of parameters leads to higher tuning bias, supporting the claim.

As the analysis above is sufficient to show that "reducing the number of parameters can result in lower tuning bias", we did not present more results in Section 5. In the final version, we will explicitly refer to these empirical results in the discussion.

**2. Relation to literature:**

Thank you for the positive comment and suggestion. In the final version, we will improve the writing of the related work to clearly present the position of this work in the literature. Here, we contextualize our work as follows:

- **Integration with existing theories**: we formulated the tuning bias as a *constrained ERM problem*, a special case of learnability theory (as discussed in related work). In Subsecs. 4.2 and 4.3, we introduce the *Dvoretzky–Kiefer–Wolfowitz inequality* and the *VC dimension* to analyze the tuning bias in finite and infinite parameter spaces, respectively. Thus, our theoretical framework is tightly integrated with existing machine learning theories.
- **How it benefits future works**: In this work, we provide the first study to quantify the tuning bias and its scaling laws.
This enables researchers to determine when splitting the dataset is necessary or when data reuse is acceptable, which is particularly crucial in data-scarce scenarios (such as rare diseases in medical diagnosis). In addition, we also provide practical guidelines for developers to alleviate the tuning bias (as appreciated by Reviewer 4yEA). Theoretically, this work is the first to employ the ERM framework in conformal prediction, offering a novel tool for analyzing the learnability of general conformal prediction problems.

**3. The practical necessity of exploring tuning bias, and why not split the dataset**

Thank you for highlighting this writing issue. In the current version, we only describe the significance of reducing tuning bias in the discussion (Sec. 5), which makes it challenging for readers to grasp the motivation earlier in the paper. Here, we'd like to clarify that understanding the tuning bias is crucial in conformal prediction practice:

- **Data-scarce scenarios**: Splitting the labeled dataset is impractical in data-scarce scenarios like rare diseases, natural disaster prediction, and privacy-constrained personal data. With limited data, using separate datasets reduces the points available for parameter tuning and conformal calibration, compromising the approach's effectiveness and stability. Thus, it's valuable to assess when splitting is needed or when data reuse is permissible, rather than sticking to traditional practices.
- **Simple implementation**: Even with sufficient data, maintaining separate sets can increase the pipeline complexity. Understanding when this separation is unnecessary—such as when tuning bias is negligible—enables simpler, more streamlined workflows while preserving coverage guarantees, offering practical relevance.
- **Foundational understanding**: exploring the tuning bias can provide an in-depth understanding of the exchangeability assumption in conformal prediction.
In particular, the insight in this work may inspire future works in non-exchangeable conformal prediction. It could also answer Reviewer jaH5's concern, "why not split the dataset?", for the same reasons. Indeed, it is easy to split off a validation set when the provided data is sufficient. However, it can be particularly important to consider data reuse in data-scarce scenarios, like rare diseases, natural disaster prediction, and privacy-constrained personal data. With limited data, separating the dataset will further exacerbate the data scarcity problem, compromising the effectiveness of both conformal calibration and parameter tuning. In addition, exploring the tuning bias can provide an in-depth understanding of the exchangeability assumption, which may inspire future works in non-exchangeable CP. As suggested by the reviewer, we will emphasize the motivation for exploring the tuning bias in the Introduction and Background of the final version.

**4. Supplementary material and notation**

Thanks for the suggestion. In the final version, we will update the notation of TuningBias and CovGap to text form and release the code on GitHub once the paper is accepted.

---

Rebuttal Comment 1.1: Comment: Thanks for the authors' response; I understand the motivation now. I hope this can be highlighted in the final version, and I will adjust my score.

---

Reply to Comment 1.1.1: Comment: Thank you for raising the score. We are pleased that our response addressed your concerns, which also improves the quality of this work. Once again, we appreciate your positive and valuable feedback.
Domain-Adapted Diffusion Model for PROTAC Linker Design Through the Lens of Density Ratio in Chemical Space
Accept (poster)
Summary: In this work the authors explore a domain-adapted diffusion model for unconditional molecular generation, focusing on PROTAC linker design. While this is an interesting application, the novelty is somewhat limited, as it does not explore one of the most compelling aspects of molecular design: conditional molecular generation. The challenge of generating molecules of this size in an unconditional setting is largely addressed in existing work, unless specific metrics to assess chemical space exploration were included (which are not present here). While the background and baseline choices are relevant, they do not introduce particularly novel insights. Overall, this paper is close to the acceptance threshold, and with additional experiments and analysis, it could become more impactful for both the machine learning and molecular engineering communities, but I have ranked it as a weak reject in its current state.

Summary of Feedback and Concerns

1. Reproducibility Concerns:
- Unclear how training, validation, and/or test sets were determined (limited details in the paper, and no code provided).
- Lack of clarity on the structures sampled from ZINC for pre-training (were they quasi-PROTACs?); furthermore, it is unclear why the molecular splitting strategy that the authors used for pre-training makes sense; was it just to generate as many splits as possible for the quasi-PROTAC pre-training data?
- How were SMILES, and eventually conformers, in the pre-training and fine-tuning sets standardized and processed prior to training? There is a lot of ambiguity in the data processing and standardization steps that is not clarified in the text, yet this is really important.
- The statement regarding PROTAC data for fine-tuning is confusing: _"We select 365 different warheads as the test set of 327 PROTAC samples, and the remaining as the training set of 2,943 samples for the fine-tuning phase."_ It is unclear what is meant by this, and frankly the statement does not make much sense to me.
Hope the authors can clarify.
- No code availability, making it difficult (impossible) to validate and reproduce the work.

2. Model Training and Evaluation:
- I am concerned about the computational cost of training and sampling the diffusion model, despite the authors' claim: _"fine-tuning approach in the DAD-PROTAC model enjoys two main advantages. First, it is computationally efficient through the score estimator correction rather than full model retraining. Second, it explicitly estimates the density ratio in the chemical space with theoretical rigor for effective domain adaptation."_ Could the authors provide some more details here about the computational cost for training and inference? For instance, how long does it take to generate a linker for a new structure?
- The authors' benchmarking choices are relevant but could have been stronger; they compare against 3DLinker, DiffLinker, and LinkerNet. However, REINVENT (Link-INVENT) would have been a more suitable benchmark, as it is one of the best molecular generative models publicly available, even though it does not explicitly use 3D data (see my concerns below about this).

3. Justification for the 3D Representation:
- The authors should better justify why 3D structures are necessary for this task. It is unclear whether a 2D representation would have sufficed, and whether the complexity of using 3D models is warranted.

4. Benchmarking and Evaluation Limitations:
- The model appears to be generating PROTAC linkers unconditionally, rather than for specific POIs (proteins of interest) and E3 ligases, limiting its practical utility. Future work should focus on generating PROTACs for specific targets and evaluating performance across different targets.
- The focus on basic molecular generation benchmarks (e.g., validity) is not very insightful, as any standard generative model should meet these.
Rediscovery benchmarks would have been more meaningful to demonstrate the model's practical applicability for PROTAC linker design; however, these are only really feasible in a conditional setting, which the authors did not explore.
- Sample efficiency should be evaluated, as PROTAC-based generative models must be sample-efficient due to the limited amount of publicly available PROTAC data. Was this explored at all?

5. Overall Summary:
- None of the presented experiments convincingly demonstrate that this model would be useful in a prospective setting; as such, I question its utility. The use of, or need for, 3D data in this setting was not really motivated, so I think the authors need to come back to this. Currently, the study feels incomplete, unless the aims of the study are reframed from PROTAC design to simply domain adaptation. IMO, sample efficiency and rediscovery benchmarks should be prioritized to establish the model's utility for PROTAC design.

Claims And Evidence: Partially. See my detailed review above.

Methods And Evaluation Criteria: Partially. See my detailed review above.

Theoretical Claims: Fine.

Experimental Designs Or Analyses: Analysis is incomplete. See my detailed review above.

Supplementary Material: Skimmed it in search of the details missing from the main text, and did not find them. See my detailed review above.

Relation To Broader Scientific Literature: Decent review of the broader scientific literature.

Essential References Not Discussed: Perhaps Link-INVENT https://doi.org/10.1039/D2DD00115B

Other Strengths And Weaknesses: The paper is overly convoluted, with a heavy focus on the math and derivations for the domain adaptation via the density ratio, without a strong enough emphasis on whether the evaluations are the most relevant for the model. Overall, the paper could be restructured for clarity.

Other Comments Or Suggestions: Overall it is not a bad paper; I want to make that clear, and I quite enjoyed reading it.
Nevertheless, it can be improved through the analysis and experiments recommended above. Questions For Authors: See my detailed review above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the constructive comments!

# 1. Experimental Details

Regarding the training/test split, since the proposed model is a pretrain-finetuning model, we use different datasets for the two phases. We use the ZINC dataset (438610 small molecule samples as the training set) to pretrain the model (Sec. 3.1.1, lines 283-293 and Appendix E.1.1). We use the PROTAC-DB dataset during the fine-tuning phase. More specifically, 2943 PROTAC samples of the PROTAC-DB dataset are used for fine-tuning (including both the training and validation sets), while the remaining 327 PROTAC samples of the PROTAC-DB dataset are used as the test set (Sec. 3.1.1, lines 295-300 and Appendix E.1.2).

Regarding the structures sampled from ZINC, they can be viewed as quasi-PROTACs. We follow the existing molecular splitting method on the ZINC dataset for the linker design task in DiffLinker [1]. The molecules are fragmented by enumerating all double cuts of acyclic single bonds that are not within functional groups. The resulting splits are filtered by SA, PAINS, and other criteria. Please refer to Appendix E.1.1.

Regarding the data pre-processing, we do not use SMILES since we focus on 3D structure generation. The input should be conformers. We first generate 3D conformers using RDKit and define a reference structure for each molecule by selecting the lowest-energy conformations. This also follows the practice in [1]. Please refer to Appendix E.1.1.

Regarding the PROTAC data for fine-tuning, we split the PROTAC dataset into training (2943 samples) and test (327 samples) sets. The **test set contains 365 unique warhead pairs not seen in the training set** to evaluate generalization. The mismatch in numbers (327 PROTACs vs. 365 warheads) arises since some warheads are shared across PROTACs. We will revise it for clarity.

Regarding code, please refer to Appendix E.4, end of page 24.
[1] Ilia Igashov et al., Equivariant 3D-conditional diffusion model for molecular linker design.

# 2. Model Training and Evaluation

Regarding the computational cost for training and inference, please refer to Appendix E.4, lines 1305-1308. Our model can converge within 77 hours during the pre-training phase. For the fine-tuning phase, the guidance network can converge within 31 hours. For sampling efficiency, DAD-PROTAC can sample one PROTAC linker within 29 minutes. It is efficient compared to the other baselines in Figure 4(a).

Regarding Link-INVENT, we do not include it as a baseline since it does not use 3D molecular graphs as input. However, we can transform the 3D graphs in the datasets to their corresponding SMILES strings or 2D graphs, and then use the SMILES/2D graphs to pretrain and fine-tune Link-INVENT. We then compare it with DAD-PROTAC below. Link-INVENT performs well on the validity metric but quite badly on metrics like uniqueness and novelty. This is because it does not use 3D information and can only generate the linker in the restricted 2D space, making it difficult to generate unique or novel structures. **We will cite the Link-INVENT paper [2] and compare its performance with ours.**

| Models | Valid% | Unique% | Novel% | Recover% |
|:---:|:---:|:---:|:---:|:---:|
| 3DLinker-Fine-tuning | 58.6 ± 0.2 | 49.2 ± 0.6 | 56.2 ± 0.5 | 24.3 ± 0.4 |
| LinkerNet-Fine-tuning | 82.9 ± 0.3 | 54.6 ± 7.0 | 63.7 ± 3.8 | 32.8 ± 0.8 |
| Link-INVENT-Fine-tuning | 93.6 ± 0.5 | 43.7 ± 0.9 | 52.4 ± 1.4 | 29.1 ± 1.7 |
| DAD-PROTAC | **94.8 ± 0.4** | **69.3 ± 0.3** | **71.5 ± 0.3** | **45.7 ± 0.6** |

[2] Jeff Guo et al., Link-INVENT: generative linker design with reinforcement learning. Digital Discovery, 2023.

# 3. Justification for the 3D Representation

If we only use SMILES or 2D graphs, atoms that are close in the 3D molecular structure can be far apart in the SMILES string or 2D graph.
The 3D representation contains information about the relative distance and orientation between the sub-structures, which is vital to successful PROTAC linker design [3]. While 2D representations capture connectivity, they ignore torsional flexibility and spatial constraints critical for PROTAC efficacy. Most SOTA methods (DeLinker, DiffLinker, LinkerNet) all use 3D representations. We will add more discussion to the paper. Please check the comparison with Link-INVENT above.

[3] Fergus Imrie et al., Deep Generative Models for 3D Linker Design. Journal of Chemical Information and Modeling, 2020.

# 4. Benchmarking

Regarding conditional PROTAC linker generation, we leave it for future work (Sec. 4, lines 435-438). Regarding the rediscovery benchmark, we include the recovery rate (Appendix E.3.4) as an evaluation metric in Tables 1 and 2. Regarding sample efficiency, please refer to **the response to Reviewer 3rte: Performance with different sizes of PROTAC datasets**.

# End of Response

We hope that our responses have effectively addressed all of your concerns. If so, we would appreciate it if you could increase your rating accordingly.

---

Rebuttal Comment 1.1: Comment: Thank you to the authors for the thoughtful response, as well as the additional data preparation details and the comparison to LinkINVENT. I have found it very informative, and believe these results should also be incorporated into the manuscript. I will update my score to a 3 if these components are suitably integrated in the revised manuscript, along with the notes below regarding the code. Apologies for missing the anonymized code repository; it was an oversight on my part. It is very good that it has been provided; however, the documentation is limited: there are no real set-up instructions in the README, nor any example use cases, which limits its utility and makes it hard to follow.
This is especially problematic since many of these details essential for reproducibility were not present in the initial draft of the manuscript. Can the authors please integrate better documentation and improve the usability of the revised code repository, along with the revised manuscript? (To make it unambiguous: I would also like to see an improved repository in order to increase my score to a 3.) Finally, I am not convinced by the aforementioned justification that the 3D representation is needed, as transformers are also able to capture long-range dependencies from 2D representations like SMILES. Indeed, a more thorough discussion of the strengths and limitations of the chosen representation would improve the paper.

---

Reply to Comment 1.1.1: Comment: Thanks so much for your further insightful and valuable comments! Your suggestions really make this work better. Rest assured that we will definitely revise the manuscript in the final version to incorporate all your informative suggestions, especially the additional data preparation details and the comparison to LinkINVENT.

---

Regarding repository documentation, we have improved the README file significantly. Please still refer to Appendix E.4, end of page 24. We now include step-by-step running instructions, including a detailed setup guide for environment configuration, how to do data preprocessing, how to pretrain the model, how to fine-tune the model, how to sample the linkers, and how to evaluate the performance of the sampled results. We also provide some example data files, as well as the pre-trained and fine-tuned model weights. The workflow of this repository is now clear.

---

Regarding justification for the 3D representation, we will expand our discussion on the strengths and limitations of 3D representation in the final revised manuscript.
While it is true that transformers can capture long-range dependencies in SMILES or 2D representations, we emphasize that the **3D representation encodes explicit geometric information** about inter-atomic distances, orientations, and torsional angles. **Such spatial and conformational details are missing in SMILES or 2D representations, but they are vital to successful PROTAC linker design.** Our comparison with LinkINVENT and the results in [1] both empirically show that omitting such spatial and conformational information harms the performance of the linker design task. We will definitely include these comparison results with LinkINVENT to support this point.

[1] Fergus Imrie et al., Deep Generative Models for 3D Linker Design. Journal of Chemical Information and Modeling, 2020.

---

We believe these improvements comprehensively address all the concerns while strengthening the overall paper. If so, we would appreciate it if you could increase your rating accordingly. Thanks again so much for your great suggestions!
Summary: This paper introduces DAD-PROTAC, a domain-adapted diffusion model for designing linkers in proteolysis-targeting chimeras (PROTACs). The main algorithmic idea is the efficient fine-tuning strategy via density ratio estimation, avoiding full retraining of the diffusion model.

Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence:

* **Claim:** Existing diffusion models for linker design, trained on small-molecule datasets, suffer from a distribution mismatch when applied to PROTACs. **Evidence:** Figure 2 clearly shows the difference in molecular weight distributions. Appendix B.1 and Figures 7 and F.1, along with the accompanying text, provide substantial additional detail, discussing differences in data collection and various physicochemical properties (LogP, rotatable bonds, etc.).
* **Claim:** DAD-PROTAC's domain adaptation, via density ratio estimation, improves performance. **Evidence:** Table 1 shows superior performance of DAD-PROTAC compared to baselines (3DLinker, DiffLinker, LinkerNet) and their fine-tuned versions across multiple metrics (validity, uniqueness, novelty, recovery, QED, SA, Emin, RMSD). Table 2 presents a convincing ablation study, demonstrating the importance of the density ratio estimation and the use of noise-perturbed samples.
* **Claim:** DAD-PROTAC is more computationally efficient than full fine-tuning. **Evidence:** Figure 4(a) shows significantly reduced fine-tuning time compared to standard fine-tuning approaches, while achieving higher validity. The text discusses the computational cost savings of learning a classifier for the score correction term rather than retraining the entire model.
* **Claim:** The generated linkers are closer to real PROTACs. **Evidence:** Figure 5 gives the distribution of molecular weight of the generated linkers.
Overall, the claims are well-supported by a combination of quantitative results, qualitative visualizations (Figures 1, 5, and Appendix figures), and theoretical justification (Theorems 2.1 and 2.2). Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem. * **Methods:** The core idea of using a diffusion model with domain adaptation via density ratio estimation is well-motivated and theoretically grounded. The decomposition of the score estimator is a clever way to leverage pre-trained models while adapting to the target domain. The use of EGNNs is standard and appropriate for handling 3D molecular structures. * **Evaluation Criteria:** The paper uses a comprehensive set of metrics relevant to molecular generation: * **Validity, Uniqueness, Novelty:** Standard metrics for assessing the quality and diversity of generated molecules. * **Recovery:** Measures the ability to reproduce known linkers, which is important for validating the model's ability to learn the underlying distribution. * **QED, SA:** Established metrics for assessing drug-likeness and synthetic accessibility. * **Emin, RMSD:** 3D conformation-specific metrics that evaluate the quality of the generated structures. * **Datasets:** Using ZINC for pre-training and PROTAC-DB for fine-tuning and evaluation is a reasonable choice, reflecting the availability of data in these domains. The paper clearly explains the data processing and splitting procedures. * **Baselines:** The comparison to 3DLinker, DiffLinker, and LinkerNet (and their fine-tuned versions) provides a good assessment of the proposed method's performance relative to existing state-of-the-art approaches. Theoretical Claims: I checked the correctness of the proofs for theoretical claims. Theorem 2.1 is correct. The decomposition is valid. Theorem 2.2 is correct. The training objective is derived in the appendix. The proofs are given in Appendices C and B.4, and they are correct. 
Experimental Designs Or Analyses: I checked the soundness/validity of the experimental designs and analyses. The ablation study (Table 2) is particularly well-designed, isolating the contributions of different components of the proposed method. The comparisons to baselines are fair, with pre-trained versions of the baselines included. The use of multiple metrics provides a comprehensive evaluation. The visualization results in Figure 5 support the claims. Supplementary Material: I reviewed the supplementary material. I checked all parts. It provides valuable additional information, including: **Detailed explanations of the method:** Appendix A provides a detailed overview of related work. Appendices B and D elaborate on the preliminaries, method details, and pseudocode. **Extended experimental results and analysis:** Appendix E details the experimental setup. Appendix F includes additional figures and tables, such as distributions of molecular descriptors, further comparisons, and visualizations. Relation To Broader Scientific Literature: The paper is well-situated within the broader scientific literature. It clearly explains the context of PROTAC linker design and the limitations of existing approaches. The related work section (and Appendix A) provides a good overview of relevant work in molecular generation, diffusion models, and PROTAC design. The paper cites relevant papers on diffusion models (Hoogeboom et al., 2022; Nichol & Dhariwal, 2021), EGNNs (Satorras et al., 2021), molecular generation (Guan et al., 2023; Igashov et al., 2024), and PROTACs (Bemis et al., 2021; Troup et al., 2020). Essential References Not Discussed: No essential references appear to be missing. Other Strengths And Weaknesses: **Strengths:** * **Novelty:** The core idea of using density ratio estimation for domain adaptation in PROTAC linker design is novel and well-motivated. 
* **Thoroughness:** The paper is very thorough, with extensive experiments, ablations, visualizations, and theoretical justifications. * **Clarity:** The paper is generally well-written and easy to follow. The figures and tables are informative and well-designed. **Weaknesses:** * Figure 3 of the DAD-PROTAC model is a little complicated and hard to understand. Other Comments Or Suggestions: See the **Other Strengths And Weaknesses** part. Questions For Authors: 1. The current approach assumes the number of atoms in the linker is pre-specified. How challenging would it be to extend the model to generate linkers of variable lengths? How would this impact the density ratio estimation and score correction steps? This would significantly improve its real-world applicability. 2. How does DAD-PROTAC's performance scale with the size of the PROTAC training dataset? The paper mentions the limited size of PROTAC datasets as a challenge. Could you provide some insights into how the method would perform with even smaller or larger datasets? A learning curve would help in assessing this limitation. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the constructive comments! # 1. Figure 3 We will make Figure 3 clearer in the final version with fewer annotations and highlight more on how the score correction term is obtained. We will also explicitly annotate the input and output of the two phases and each model component. # 2. Pre-specified number of atoms We **follow most existing SOTA linker design methods like LinkerNet[1] and also pre-specify the number of atoms**. This is because the prediction of the number of atoms in the linker should be solved via **a deterministic model**, while the generation of the linker (atom coordinates and bonds) should be solved via another **generative model**. **The two steps of the linker design task are usually solved separately**. We focus on the generation part of this task in this work. For the prediction of the number of atoms in the linker, **we can use a separately trained GNN to produce probabilities for the linker size**. The input is still the molecule fragments. Later, when we generate the linker's structure, we first pre-specify the linker size based on the maximum predicted probabilities by this separately trained GNN and then use our proposed method in this paper. Since the linker size prediction step and the linker structure generation step are trained separately, **no components (density ratio estimation and score correction) in the proposed DAD-PROTAC would be affected**. We include the experimental results below. As we can see, **the additional training of another neural network for linker size prediction may introduce more errors for the linker size, and thus the performance may degrade**. We leave the joint training of both linker size prediction and linker structure generation for future work, as it is currently out of the scope of this work. 
| | Valid% | Unique% | Novel% | Recover% | |:---:|:---:|:---:|:---:|:---:| | DAD-PROTAC w/o pre-specified number of atoms | 93.1 ± 0.6 | 66.2 ± 0.7 | 69.3 ± 0.8 | 40.4 ± 0.5 | | DAD-PROTAC (Ours) | 94.8 ± 0.4 | 69.3 ± 0.3 | 71.5 ± 0.3 | 45.7 ± 0.6 | [1] Jiaqi Guan et al., LinkerNet: Fragment Poses and Linker Co-Design with 3D Equivariant Diffusion. In NeurIPS 2023. # 3. Performance with different sizes of PROTAC datasets **For sample efficiency**, we analyze the performance of the proposed DAD-PROTAC with different portions of the PROTAC dataset used during the fine-tuning phase in the table below. We reduce the PROTAC dataset size from 100% (full) to 50%, 25%, and 10%. We also add an extra 20% of samples from the PROTAC-DB 2.0 database as the extended dataset. As shown in the table below, the degradation is non-linear, suggesting our approach is reasonably robust to limited PROTAC data for fine-tuning. Note that, **even at 10% of the PROTAC dataset size, DAD-PROTAC can still outperform the baseline 3DLinker-Fine-tuning approach using 100% of the PROTAC data**. For larger datasets, our preliminary experiments with additional data show diminishing returns with 1.2x our current dataset size, suggesting we are approaching the performance ceiling of the current architecture. 
**We will add these results and include a learning curve figure in the final version that visualizes these results for sample efficiency analysis.** | DAD-PROTAC with different sizes of datasets | Valid% | Unique% | Novel% | Recover% | |:---:|:---:|:---:|:---:|:---:| | 10% PROTAC dataset | 64.9 ± 0.8 | 56.8 ± 0.7 | 63.8 ± 0.5 | 31.4 ± 0.8 | | 25% PROTAC dataset | 76.3 ± 0.6 | 59.5 ± 0.5 | 67.9 ± 0.4 | 38.2 ± 0.7 | | 50% PROTAC dataset | 89.9 ± 0.5 | 65.8 ± 0.4 | 70.7 ± 0.3 | 43.1 ± 0.6 | | 100% PROTAC dataset | 94.8 ± 0.4 | **69.3 ± 0.3** | 71.5 ± 0.3 | **45.7 ± 0.6** | | 100% PROTAC dataset (full) + 20% additional | **95.1 ± 0.3** | 68.1 ± 0.3 | **71.9 ± 0.3** | 45.2 ± 0.5 | # End of Response We hope that our responses have effectively addressed all of your concerns. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal and for providing additional experiments and clarifications. I appreciate you addressing my questions regarding: 1. Figure 3: Your plan to clarify Figure 3 in the final version is welcome. 2. Variable Linker Length (Q1): Thank you for the explanation regarding the standard practice of pre-specifying the linker length and for conducting the experiment with a separate length prediction step. The results provide valuable context and appropriately frame joint training as future work. 3. Dataset Size Sensitivity (Q2): The new experimental results analyzing performance with varying PROTAC dataset sizes are very helpful. They effectively demonstrate the robustness of DAD-PROTAC, even with smaller datasets, and provide useful insights into sample efficiency. Adding the learning curve figure will be a good addition. Your responses and the new empirical evidence have successfully addressed the points raised in my review. My overall positive assessment of the paper remains, and I confirm my recommendation.
Summary: This study focuses on domain adaptation in diffusion models for biology. The authors pretrain a diffusion model on the ZINC dataset and attempt to use the model on the PROTAC domain. For finetuning, they use density ratio estimation techniques to correct the score function on the ZINC dataset. The method looks interesting and the results are great. ## Update after rebuttal I appreciate the authors' efforts to answer the questions; most of my concerns are resolved. Thank you. Claims And Evidence: Yes, the claims are supported by the experiments. Methods And Evaluation Criteria: Yes, the method and evaluation make sense. Theoretical Claims: I checked the claims. Experimental Designs Or Analyses: Yes, I think this study is solid. Supplementary Material: I checked the supplementary material, including the results, pseudo-code, and proofs. Relation To Broader Scientific Literature: It would benefit the linker design area. Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths** 1. The domain adaptation diffusion is novel and interesting. Score correction helps preserve the original power of the pretrained model on ZINC and prevents overfitting on the PROTAC dataset. 2. The results are great and outperform other methods. The ablation study proves the usefulness of the method. **Weaknesses** 1. Training a classifier on x(t) may introduce some errors. For x(t) when t is large (in the early stage of reverse diffusion), will the classifier be hard to train? Since the atom number is so different (see Figure 2), will the classifier learn to cheat by counting the number of atoms? 2. Since you first train a classifier, and then train another network with the classifier, will error propagation hurt the performance? A baseline could use Monte Carlo estimation to directly estimate the value of the score correction in Theorem 2.1. 
In Table 2, you have ablated "direct score correction approximation" and "density ratio estimation via clean samples", can you provide more details? Other Comments Or Suggestions: NA Questions For Authors: **Questions** There are other methods for "finetuning" a pretrained diffusion model on a specific domain, such as LoRA in LLM and diffusion DPO in image generation. Is it possible that these techniques help prevent overfitting? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the constructive comments! # 1. Errors for large $t$ In Eq.(16), we train a time-dependent classifier $W(X_t^L,t)$ with samples from potentially all steps $t$ jointly, instead of training different classifiers $W_t(X_t^L)$ for each step $t$ individually. This means that the **training set has both easy samples ($t$ is small) and hard samples ($t$ is large)**. This will mitigate the issue of differentiating the samples from the two domains when $t$ is large. Besides, we can easily **add more samples (with larger $m$ and $n$) to the training set to further mitigate this issue**. We also report the classification performance of this trained classifier on different groups of $t$ (grouped by range) on an additional test set below. This table shows that **the model's performance drop for large $t$ is acceptable**. Further **incorporation of focal loss into Eq.(16) to put more focus on hard samples ($t$ is large) is not necessary**. We will add it to the ablation study in the final version. | test samples (range of t) | 0% to 25%T | 25%T to 50%T | 50%T to 75%T | 75%T to 100%T | |---|---|---|---|---| | classification accuracy (with Eq.(16)) (Ours) | 97.5 | 96.8 | 97.1 | 96.7 | | classification accuracy (with Eq.(16) + focal loss) | 97.3 | 97.2 | 97.2 | 96.9 | # 2. Model cheats by counting the number of atoms The number of atoms is only one of the differences between PROTACs and small molecules. **Please refer to Figure 8 of the paper to see other differences, like LogP, which are more likely to be dependent on the distributions of atom coordinates. Therefore, we train the classifier on atom coordinates $X$ in Eq.(16) to encode all potential distribution differences in their chemical space**. We specifically design our classifier to prevent this shortcut. We use original features like $X$ as the input rather than global molecular properties like the number of atoms. 
We validate this by testing the classifier on a separate dataset containing molecular structures with the same atom counts but from different domains (PROTACs and small molecules). The results in the table below confirm that **classification accuracy on test samples with the same atom counts remains consistent with the overall results**. If we only use atom counts as input, the classification performance would drop significantly. | | test samples (overall) | test samples (same atom counts) | |---|---|---| | classification accuracy (use all original features as input) (Ours) | 97.0 | 96.7 | | classification accuracy (use only atom counts as input) | 71.3 | 53.2 | # 3. Error propagation and Monte Carlo estimation From the results in the last point above, we can see that **the classification error in this step is quite small**, and this alleviates the error propagation issue in the first place. We further find that using MC simulation to estimate the score correction term directly may get slightly better performance, but it is **computationally prohibitive**. The extended ablation study from Table 2 is summarized below. “direct score correction approximation” refers to the model where we only run MC simulations on a few samples $X_t^L$ to obtain the corresponding score correction terms in Eq.(11) and then train another neural network to learn to approximate this score correction term. “Density ratio estimation via clean samples” refers to the model where we only use clean samples to train the classifier in Eq.(16) (namely, set t =0). More discussion is found in Sec 3.3 in the paper. 
From the results in the table below, **MC simulation suffers from high computational cost, while our current design keeps a good balance between effectiveness and efficiency.** | | Valid% | Unique% | Novel% | Recover% | Relative Running Time | |---|---|---|---|---|---| | DAD-PROTAC w/ direct score correction MC simulation | 95.1 ± 0.6 | 70.1 ± 0.5 | 70.3 ± 0.4 | 46.2 ± 0.4 | 131x | | DAD-PROTAC w/ direct score correction approximation | 81.9 ± 0.8 | 49.5 ± 0.4 | 59.7 ± 0.3 | 28.1 ± 0.7 | 0.8x | | DAD-PROTAC w/ density ratio estimation via clean samples | 90.0 ± 4.7 | 62.8 ± 6.1 | 69.3 ± 0.2 | 41.0 ± 0.6 | 0.95x | | DAD-PROTAC (Ours) | 94.8 ± 0.4 | 69.3 ± 0.3 | 71.5 ± 0.3 | 45.7 ± 0.6 | 1x | # 4. Other methods for fine-tuning diffusion models LoRA and DPO are promising for parameter-efficient fine-tuning. However, they assume the source and target domains share a similar latent structure, which is invalid here due to the significant chemical differences between small molecules and PROTACs (Figure 8 in the paper). Our method explicitly models the domain shift via density ratio estimation, which is specifically designed for the linker design task. That said, integrating LoRA-style methods into our framework (e.g., for the correction term) could further improve efficiency. We leave it as future work since it is out of the scope of this paper. # End of Response We hope that our responses have effectively addressed all of your concerns.
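The classifier-based density ratio estimation discussed in this rebuttal can be illustrated in one dimension, where the correction is known in closed form. The sketch below is illustrative only (not the paper's code, and all names are made up): for two Gaussian domains with equal variance, the Bayes-optimal logit of a balanced domain classifier equals the log density ratio, so the slope recovered by logistic regression approximates the score correction $\nabla_x \log(p_{target}(x)/p_{source}(x)) = (\mu_t - \mu_s)/\sigma^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D stand-ins for the "source" (small-molecule) and "target" (PROTAC)
# domains: two Gaussians with the same variance but shifted means.
mu_s, mu_t, sigma = 0.0, 2.0, 1.0
xs = rng.normal(mu_s, sigma, 4000)  # label 0: source samples
xt = rng.normal(mu_t, sigma, 4000)  # label 1: target samples
x = np.concatenate([xs, xt])
y = np.concatenate([np.zeros(4000), np.ones(4000)])

# Logistic regression logit(x) = w*x + b.  With balanced classes, the
# Bayes-optimal logit equals log p_t(x) - log p_s(x), so its x-gradient w
# approximates the score correction d/dx log(p_t/p_s) = (mu_t - mu_s)/sigma^2.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))  # classifier probability of "target"
    w -= lr * np.mean((p - y) * x)          # full-batch gradient descent
    b -= lr * np.mean(p - y)

score_correction = w  # constant in x for this equal-variance Gaussian toy case
# Analytic value of the correction: (mu_t - mu_s) / sigma**2 = 2.0
```

In the paper's setting the classifier is additionally conditioned on the diffusion step $t$ and the gradient is taken over molecular coordinates by automatic differentiation, but the underlying identity between the classifier logit and the log density ratio is the same.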
Three-Dimensional Trajectory Prediction with 3DMoTraj Dataset
Accept (poster)
Summary: In this paper, the authors address the challenge of predicting 3D trajectories, which is more complex than 2D trajectory prediction. To achieve this goal, the authors first introduce the 3DMoTraj dataset, collected from unmanned underwater vehicles (UUVs) in oceanic environments. Then, they propose a new method with two key components: decoupled trajectory prediction and correlated trajectory refinement. Based on the new dataset and solution, they report extensive experiments to show the solution's superior performance in 3D trajectory prediction. ## Update After Rebuttal The authors responded to my questions in a great way. They addressed my concerns. I do not have additional questions. I may keep my rating unchanged for this paper (weak accept) for the following reasons: 1. The newly collected dataset is the key factor. Collecting more data with annotations is a good contribution to the community. Thus, I prefer to accept this paper. 2. However, the improved accuracy claim in the paper is not that convincing, resulting in a "weak accept" score. Claims And Evidence: Overall, claims made in this paper are supported by evidence. 1. The dataset. This paper provides a detailed description of the dataset collection, including basic, motion, curvature, and intention information. Besides that, the authors also report statistical information about the dataset. 2. The proposed solution has a detailed description and complexity analysis. The additional information reported in the supplementary is also helpful. The improved accuracy claim may need stronger support. In Table 2, the proposed solution performs better than other existing work on the newly collected dataset, e.g., MRGTraj (2023), MS-TIP (2024). However, in Table 6, on other existing 2D trajectory prediction datasets, only one solution (LBEBM, 2021) was compared, which makes the generalization capability of the proposed solution and the accuracy-improvement claim a little less convincing. 
Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the trajectory prediction task. It would be better if the authors could report the accuracy for each prediction step. In this case, readers can get a better sense of the performance of the proposed solution, e.g., the model performs better at the first few steps and worse at the last step, etc. Theoretical Claims: This paper does not have challenging mathematical proofs or theoretical claims. (1) The proposed Decoupled Trajectory Prediction and Correlated Trajectory Refinement modules should work in theory. (2) The prediction complexity analysis looks convincing. Experimental Designs Or Analyses: Overall, the experimental designs and analyses make sense. However, as mentioned above, it would be better to compare the proposed solution with more existing solutions on public datasets to prove the good generalization capability of the proposed solution. Supplementary Material: The supplementary material provides more details of the proposed solution, e.g., complexity analysis, dataset statistics, and visualized results. Relation To Broader Scientific Literature: The proposed solution was adapted from the LBEBM method (Pang et al., 2021). The authors replace LBEBM's decoder with three independent decoders to predict trajectories separately along the x-, y-, and z-axes. In terms of the newly collected dataset, it should be a good addition to the existing datasets, e.g., ETH/UCY, SDD, etc. (especially for the 3D trajectory prediction task). Essential References Not Discussed: It would be better if the authors could discuss more recent work, for example: * Lan Feng, Mohammadhossein Bahari, Kaouther Messaoud Ben Amor, Éloi Zablocki, Matthieu Cord, and Alexandre Alahi. "Unitraj: A unified framework for scalable vehicle trajectory prediction." In European Conference on Computer Vision, pp. 106-123. Cham: Springer Nature Switzerland, 2024. * Yi Xu and Yun Fu. 
"Adapting to length shift: Flexilength network for trajectory prediction." In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 15226-15237. 2024. * Moein Younesi Heravi, Youjin Jang, Inbae Jeong, and Sajib Sarkar. "Deep learning-based activity-aware 3D human motion trajectory prediction in construction." Expert Systems with Applications 239 (2024): 122423. * Zhuoyong Shi, Jiandong Zhang, Guoqing Shi, Longmeng Ji, Dinghan Wang, and Yong Wu. "Design of a UAV Trajectory Prediction System Based on Multi-Flight Modes." Drones 8, no. 6 (2024): 255. Other Strengths And Weaknesses: Other Strengths: 1. The intuitions behind the designs were described in detail, e.g., to address the increased prediction complexity of 3D trajectories, decoupled trajectory prediction and correlated trajectory refinement were proposed. 2. The supplementary material provides more details and analyses that help readers better understand their work. Other Weaknesses: 1. It would be better if the authors could report some failure cases. This may be more important for the 3D prediction task, because 3D motion has more freedom than 2D motion and is more likely to produce failure cases. Reporting failure cases will help readers better understand the performance of the proposed solution. Other Comments Or Suggestions: 1. Maybe we can try GRU instead of LSTM. Some works show that GRUs achieve better results than LSTMs. 2. It seems like the content of Section 4.1 does not exactly match Figure 4. For example, Equation (2) has a concatenation operation, but we cannot find it in Figure 4. Maybe we can refine the content and the figure. 3. Even though the authors provide a complexity analysis, it would be better if the authors could report the profiled results (latency, power, etc.). Questions For Authors: 1. Do we need to change the values of the iteration number and layer number if we apply the solution on different scenes or datasets? Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: Demonstrate the generalization capability to 2D datasets on more baselines.** **A1**: We evaluated our prediction strategy on four additional baselines using two widely adopted 2D datasets: ETH&UCY and SDD. All models were trained and tested on the same machine for fair comparison. |Methods|ETH&UCY|SDD| |:----|:----|:----| |PECNet|0.30/0.48|10.02/15.79| |PECNet+our|0.25/0.42|9.45/15.26| |NPSN|0.28/0.44|8.56/14.95| |NPSN+our|0.25/0.39|8.34/14.36| |MSRL|0.20/0.36|8.36/13.85| |MSRL+our|0.19/0.34|8.29/13.56| |TrajCLIP|0.21/0.35|7.69/13.31| |TrajCLIP+our|0.19/0.32|7.63/13.30| These results demonstrate that our approach consistently enhances the performance of multiple baselines, further validating its generalization capability to 2D trajectory prediction. *** **Q2: Report the accuracy for each prediction step.** **A2**: The prediction accuracy at each step for the baseline LBEBM and our method is presented in the table below. Additionally, a visualized line chart of these results is provided; please refer to Figure-2 at the URL <https://anonymous.4open.science/r/ICML_ID4436/README.md>. |Steps|#1|#2|#3|#4|#5|#6|#7|#8|#9|#10|#11|#12| |:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----|:----| |LBEBM|0.18|0.34|0.47|0.58|0.69|0.79|0.87|0.95|1.06|1.27|1.40|1.47| |Our|0.17|0.29|0.35|0.43|0.49|0.55|0.61|0.67|0.72|0.81|0.89|1.02| The results and their line chart indicate that our method achieves progressively more significant improvements over the baseline as the prediction horizon extends. This highlights our approach's ability to mitigate the error accumulation problem in trajectory prediction, further validating its capability to reduce prediction complexity. *** **Q3: Discuss more recent work, for example, [1]-[4].** **A3**: We will incorporate a discussion of the recent works [1]-[4] in our manuscript. 
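The step-wise numbers in the A2 table above are the mean Euclidean displacement errors at each prediction horizon; ADE averages them over all steps and FDE takes the final step. A minimal numpy sketch of how such metrics are typically computed (shapes and values illustrative, not the released evaluation code):

```python
import numpy as np

def displacement_errors(pred, gt):
    """pred, gt: (num_agents, num_steps, 3) arrays of 3D trajectories.

    Returns (per_step, ade, fde):
      per_step -- mean Euclidean error at each prediction step,
      ade      -- average displacement error over all steps,
      fde      -- final displacement error at the last step.
    """
    dist = np.linalg.norm(pred - gt, axis=-1)  # (num_agents, num_steps)
    per_step = dist.mean(axis=0)
    return per_step, float(per_step.mean()), float(per_step[-1])

# Tiny example: one agent, 12 prediction steps, constant 0.3 m offset along x.
gt = np.zeros((1, 12, 3))
pred = gt.copy()
pred[..., 0] = 0.3
per_step, ade, fde = displacement_errors(pred, gt)
```

With a constant offset every step has the same error, so ADE and FDE coincide; in the tables above the per-step error grows with the horizon, which is exactly the error accumulation the rebuttal discusses.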
*** **Q4: It will be better to report some failure cases.** **A4:** We visualized a representative failure case in Figure-3 at <https://anonymous.4open.science/r/ICML_ID4436/README.md>, showing that our method struggles with trajectories featuring multiple sharp bends in short time frames. A more advanced interaction modeling could improve such cases. However, as our primary focus is reducing the prediction complexity of 3D trajectories, we adopt a simple interaction modeling strategy, leading to suboptimal performance in this case. In the future, we will explore specialized 3D interaction modeling for complex motion patterns. *** **Q5: Maybe we can try GRU instead of LSTM.** **A5:** We evaluated the impact of replacing LSTM with GRU in our method, and the results are presented below: | Architecture|ADE|FDE| |:----|:----|:----| |LSTM|0.58|1.02| |GRU|0.59|1.06| The results indicate that GRU performs slightly worse than LSTM in our method. However, considering GRU's higher computational efficiency, we recommend using the GRU-based version for low-latency or resource-limited scenarios. *** **Q6: The content of Section 4.1 does not precisely match Figure 4.** **A6:** Sorry for the typos in Section 4.1; we will refine them to ensure alignment with Figure 4. *** **Q7: Report the profiled results (latency, power, etc.).** **A7:** We compared our method's profiled results with several top-notch methods. Specifically, we tested all models on an NVIDIA 2080 Ti GPU using an input size of 70×8×3, where 70 is the number of agents predicted simultaneously, exceeding the agent count in most real-world applications. 
|Methods|Parameters (M)|FLOPs (G)|Inference time (s)|ADE|FDE| |:----|:----|:----|:----|:----|:----| |MSRL|0.59|0.12|0.09|1.50|2.13| |LBEBM|1.24|0.09|0.05|0.84|1.47| |NPSN|0.22|0.14|1.29|0.75|1.04| |CausalHTP|0.04|0.16|2.54|0.71|1.30| |MRGTraj|4.36|20.04|0.06|0.69|1.36| |Our|3.41|0.24|0.08|0.58|1.02| The results show that our method achieves the best performance with relatively good model efficiency, making it suitable for deployment on embedded robotic systems. Additionally, with an inference speed exceeding 12 FPS, our method meets the real-time decision-making requirements of robotics. *** **Q8: Change the iteration and layer numbers on different scenes or datasets?** **A8:** It is advisable to adjust these hyperparameters for scenes with significant variations, as different environments impact agent movement to varying degrees. For instance, underwater environments introduce greater resistance and turbulence than aerial environments, necessitating different hyperparameters for optimal performance. *** **Reference** [1] Feng L, et al. Unitraj: A unified framework for scalable vehicle trajectory prediction. ECCV. 2024. [2] Xu Y, et al. Adapting to length shift: Flexilength network for trajectory prediction. CVPR. 2024. [3] Heravi M Y, et al. Deep learning-based activity-aware 3D human motion trajectory prediction in construction. ESWA. 2024. [4] Shi Z, et al. Design of a UAV Trajectory Prediction System Based on Multi-Flight Modes. Drones, 2024. --- Rebuttal Comment 1.1: Comment: Thank you for your responses. The authors responded to my questions in a great way. They addressed my concerns. I do not have additional questions. Thank you so much. --- Reply to Comment 1.1.1: Comment: Thanks for confirming that all the questions have been answered by the rebuttal. As such, we kindly ask you to consider raising the score of the overall recommendation. Thank you.
Summary: The paper addresses the problem of 3D trajectory prediction by introducing a novel dataset and an innovative prediction framework. Building upon the 3DMoTraj dataset, they propose a dual-component prediction method that decomposes the 3D trajectory prediction task into two stages. The first stage, decoupled trajectory prediction, independently forecasts trajectories along each spatial axis to reduce overall prediction complexity. The second stage, correlated trajectory refinement, models inter-axis dependencies to generate corrective offsets that enhance the initial predictions. Extensive experiments demonstrate the superiority of the proposed approach. Claims And Evidence: This paper provides sufficient experiments to support their claims. Methods And Evaluation Criteria: The proposed dataset makes sense for the trajectory prediction task. Theoretical Claims: I have checked their claims. Experimental Designs Or Analyses: Yes. Supplementary Material: N/A. Relation To Broader Scientific Literature: N/A. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths: 1. The 3DMoTraj dataset provides a valuable benchmark for evaluating 3D trajectory prediction algorithms in realistic settings. Its frame-wise annotations for both static and dynamic intentions offer precise descriptions of motion characteristics in 3D environments. The proposed dataset is likely to facilitate further research in this area. 2. The authors show that the prediction complexity of 3D trajectories is nearly double that of 2D trajectories, given that a 3D Gaussian distribution requires optimizing 9 parameters. The authors demonstrate that a 3D Gaussian distribution can be decomposed into independent 1D Gaussian components along with a correction factor. 3. The proposed methodology is well designed. The divide-and-conquer strategy, comprising decoupled trajectory prediction and correlated trajectory refinement can reduce the overall prediction complexity. 4. 
Extensive experiments, including ablation studies and comparisons with state-of-the-art methods, reveal that the proposed approach significantly enhances prediction accuracy and robustness. Weaknesses: 1. Experimental validation using a 3D trajectory dataset collected from UAVs would further strengthen the paper, given the increased complexity of UAV motion trajectories. I understand that collecting and annotating such a dataset requires substantial effort and may be impractical under current time and resource constraints; however, I look forward to seeing future work on UAV trajectory prediction. 2. A thorough analysis of the model's efficiency is necessary. In the context of robotics applications, prediction algorithms must not only ensure accuracy but also achieve high inference speed to enable real-time decision-making. Evaluating the computational cost and runtime performance of the proposed method would strengthen the paper's contributions. Other Comments Or Suggestions: N/A. Questions For Authors: Refer to the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
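The reviewer's parameter-count argument (Strength 2) can be made concrete with a small numpy sketch (illustrative values, not the paper's implementation): a full 3D Gaussian carries 3 mean plus 6 free covariance parameters, while the decoupled view optimizes three independent 1D Gaussians and recovers the inter-axis structure through a separate correlation correction that leaves the per-axis marginals untouched.

```python
import numpy as np

# Full 3D Gaussian: 3 mean parameters + 6 free entries of a symmetric 3x3
# covariance matrix, i.e. 9 parameters optimized jointly.
full_params = 3 + 3 * (3 + 1) // 2

# Decoupled prediction: three independent 1D Gaussians (mean + variance each),
# i.e. a diagonal covariance with 6 parameters, with the off-diagonal
# correlations handled afterwards as a correction.
decoupled_params = 3 * 2

sigma = np.array([1.0, 2.0, 0.5])            # per-axis std from the 1D heads
corr = np.array([[1.0, 0.3, 0.1],            # correction: inter-axis
                 [0.3, 1.0, 0.2],            # correlation matrix
                 [0.1, 0.2, 1.0]])
cov = np.outer(sigma, sigma) * corr          # recomposed full 3D covariance

# The diagonal is untouched by the correction, so the decoupled per-axis
# predictions remain valid marginals of the recomposed 3D Gaussian.
```

This mirrors the two-stage design the review describes: the 6-parameter decoupled stage does the bulk of the prediction, and the refinement stage only has to supply the cross-axis correction.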
Rebuttal 1: Rebuttal: **Q1: I look forward to seeing future validation on the trajectory dataset of unmanned aerial vehicles.** **A1**: As outlined in our conclusion, future work will involve collecting a large-scale 3D trajectory dataset from unmanned aerial vehicles (UAVs) to validate our proposed methodology further. Specifically, we plan to capture trajectories across nine distinct environments, including large-scale indoor mall scenarios, urban road airspace scenarios, urban low-altitude logistics corridors, dense vegetation agricultural fields, large industrial park settings, cross-sea bridge inspection sites, post-disaster debris zones, open water airspace environments, and wilderness forest environments. For each environment, we aim to collect six hours of UAV trajectory data, allocating three hours for training, two for validation, and one for testing. At the current stage, we have collected two environments—large-scale indoor mall scenarios and urban road airspace scenarios—though annotations remain incomplete. In the indoor mall scenario, three or four UAVs simulate coordinated formation movement to mimic goods pickup and delivery tasks. In the urban road airspace scenario, four or six UAVs navigate complex intersections to simulate 3D logistics transmission environments. To evaluate our method on a more general 3D dataset, we randomly selected one hour of annotated data from these two scenarios (30 minutes for training, 20 for validation, and 10 for testing) and benchmarked our approach against other state-of-the-art methods. 
| Methods| Large-scale indoor mall scenarios| Urban road airspace scenarios | |:---- |:---- |:---- | | NPSN | 0.72/0.97 | 0.83/1.31 | | MRGTraj | 0.60/0.78 | 0.73/1.23 | | CausalHTP | 0.54/0.71 | 0.68/1.18 | | S-Implicit | 0.52/0.73 | 0.65/1.25 | | MS-TIP | 0.47/0.77 | 0.62/1.12 | | TrajCLIP | 0.45/0.74 | 0.61/1.13 | | Our | 0.32/0.67 | 0.42/0.95 | The results demonstrate that our method outperforms all investigated approaches, validating its effectiveness in diverse 3D scenarios. In addition, our method performs best in urban road airspace scenarios, demonstrating its ability to predict complex non-formation trajectories. Moreover, our approach performs better in UAV-based 3D environments than in UUV-based underwater scenarios, further confirming the greater complexity of underwater trajectory prediction due to higher resistance and turbulence. We have also visualized several predicted trajectories in Figure-4 and Figure-5 at the anonymous URL <https://anonymous.4open.science/r/ICML_ID4436/README.md>, further illustrating the ability of our method in general 3D trajectory prediction. *** **Q2: Evaluating the computational cost and runtime performance.** **A2**: We compared our method’s computational cost, parameter count, and runtime performance with several state-of-the-art methods. Specifically, we tested all models on an NVIDIA 2080 Ti GPU using an input size of 70×8×3, where 70 represents the number of agents predicted simultaneously—exceeding the agent count in most real-world applications. 
The results are presented below: | Methods | Parameters (M) | FLOPs (G) | Inference time (s) | ADE | FDE | |:---- |:---- |:---- |:---- |:---- |:---- | | MSRL | 0.59 | 0.12 | 0.09 | 1.50 | 2.13 | | LBEBM | 1.24 | 0.09 | 0.05 | 0.84 | 1.47 | | NPSN | 0.22 | 0.14 | 1.29 | 0.75 | 1.04 | | CausalHTP | 0.04 | 0.16 | 2.54 | 0.71 | 1.30 | | MRGTraj | 4.36 | 20.04 | 0.06 | 0.69 | 1.36 | | Our | 3.41 | 0.24 | 0.08 | 0.58 | 1.02 | The results demonstrate that our method achieves the best performance with relatively good model efficiency, making it suitable for deployment on embedded robotic systems. Additionally, with an inference speed exceeding 12 FPS, our method meets the real-time decision-making requirements of robotics applications.
Summary: Firstly, this paper introduces the 3DMoTraj dataset, a novel 3D trajectory dataset collected from unmanned underwater vehicles (UUVs) in oceanic environments. The dataset includes annotations for both static (endpoint octant) and motion (velocity change) intentions. Secondly, to address the increased complexity of 3D trajectory prediction compared to 2D, this paper proposes a method consisting of two components: decoupled trajectory prediction (independently predicting each axis to reduce complexity) and correlated trajectory refinement (modeling inter-axis correlations to refine predictions). The approach leverages LSTM-based modules with state-correlation and aggregation mechanisms. Experiments on the 3DMoTraj dataset demonstrate improvements over state-of-the-art methods, achieving lower Average Displacement Error (ADE) and Final Displacement Error (FDE). This paper also validates the method’s generalization to 2D datasets like ETH/UCY and SDD. Claims And Evidence: The claims in this paper are not fully supported by clear and convincing evidence for the following reasons: 1. Dataset Limitations: The 3DMoTraj dataset is self-collected and may be specific to the 3D trajectory prediction model presented in this paper. The dataset consists of simulated data based on human intervention and has not been validated in real-world scenarios. 2. Method Validation: The model proposed in this paper has only been evaluated on the self-constructed dataset, and has not been experimentally validated against state-of-the-art methods on other 3D datasets, so it is unclear whether the strong experimental results stem from the design of the model or from biases of the specific dataset. 3. Application scenario: Most of the trajectories in the real world are 2D, so many studies are based on 2D trajectories. Many 3D trajectory datasets are military, and the application scenarios of this research are limited. 
Methods And Evaluation Criteria: The proposed method and evaluation criteria partially align with the problem of 3D trajectory prediction but suffer from critical limitations that undermine their suitability for broader validation and practical impact. First, the 3DMoTraj dataset is generated specifically for this model, making it difficult to assess the method's effectiveness in realistic scenarios. Second, the design concept of this model is simple, mainly based on LSTM and attention mechanisms, which is not innovative enough. Theoretical Claims: This paper makes theoretical claims about the complexity of 3D trajectory prediction and the decomposition of 3D Gaussian distributions. After reviewing the mathematical derivations, the following issues are identified: 1. Validity of Parameter Complexity Analysis This paper correctly argues that 3D trajectory prediction requires optimizing 9 parameters per point (3 means, 3 variances, 3 correlations), compared to 5 parameters for 2D (2 means, 2 variances, 1 correlation). This analysis is mathematically sound. 2. Decomposition of 3D Gaussian Distributions The decomposition of the 3D Gaussian distribution into independent 1D Gaussians and a correlation correction term (Equations 22–24) is theoretically valid. The authors correctly show that the exponential term can be split into independent (Part-I) and interdependent (Part-II) components. However, the claim that this reduces the total number of parameters from 9 to 6 is misleading. While Part-I corresponds to 6 parameters (3 means, 3 variances), Part-II still requires modeling 3 correlations (ρ_xy, ρ_xz, ρ_yz), totaling 9 parameters. The decoupling strategy simplifies the optimization process by separating independent and correlated components but does not reduce the total number of parameters. In summary, the proofs and the application of the mathematical theory in this paper are correct. 
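For completeness (my own note, not from the paper's appendix): the counts in points 1–2 follow from the general parameter count of a $d$-dimensional Gaussian, which needs $d$ means, $d$ variances, and $\binom{d}{2}$ pairwise correlations per predicted point:

```latex
N(d) = \underbrace{d}_{\text{means}}
     + \underbrace{d}_{\text{variances}}
     + \underbrace{\tbinom{d}{2}}_{\text{correlations}}
     = \frac{d(d+3)}{2},
\qquad N(2) = 5, \qquad N(3) = 9.
```

The jump from 5 to 9 parameters per point is what motivates the decoupling: Part-I handles the $2d = 6$ axis-wise parameters independently, while Part-II defers the $\binom{3}{2} = 3$ correlations to the refinement stage.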
Experimental Designs Or Analyses: The experimental designs and analyses in this paper contain critical flaws that compromise the validity of the conclusions. 1. Comparison with State-of-the-Art Methods This paper modifies 2D trajectory prediction methods (e.g., SSTGCNN, MSRL) to receive 3D trajectory inputs, and also adjusts the data augmentation strategies of these methods, which may affect the original effectiveness of these methods. The model proposed in this paper has only been evaluated on the self-constructed dataset, and has not been experimentally validated against state-of-the-art methods on other 3D datasets, so it is unclear whether the strong experimental results stem from the design of the model or from biases of the specific dataset. 2. Ablation Studies In the ablation experiments, the authors did not discuss the role of the SC module and SA module in State-Correlation and Aggregation LSTM (SCA-LSTM) separately. 3. Generalization to 2D Datasets Comparison experiments with a single baseline LBEBM on the ETH/UCY and SDD datasets do not prove that the 3D Decoupled-Correlated trajectory prediction strategy proposed in this paper can improve the performance of 2D trajectory prediction. Supplementary Material: Yes, I reviewed the supplementary material, specifically the following sections: A. Prediction Complexity Analysis: This section mathematically decomposes 3D Gaussian distributions into independent and correlated components, supporting the theoretical motivation for the decoupled-refinement approach. B. Details of Vanilla LSTM: A standard explanation of LSTM mechanics, which clarified the baseline architecture used in the method. C. Visualization of the Ablation Study: Visual comparisons of ablation variants (Figure 7) highlighted improvements in z-axis predictions but lacked quantitative metrics for these cases. D. 
Motion Trajectory Visualization: Additional scenario visualizations (Figure 8) provided context for dataset diversity but did not address the narrow scope of UUV-only data. E–H. Distributions of Distance, Velocity, Acceleration, and Curvature: These sections (Figures 9–12) quantified dataset statistics, confirming high variability in UUV dynamics but reinforcing the dataset’s specificity to oceanic environments. Relation To Broader Scientific Literature: The approach of independently predicting each axis (x, y, z) is a very common mathematical idea. This reduces the complexity of optimizing 3D Gaussian distributions, as theoretically justified in the paper. The SCA-LSTM module’s state-correlation and aggregation mechanisms draw inspiration from attention-based methods (e.g., Trajectron++, Salzmann et al., 2020) and graph neural networks (e.g., STGAT, Huang et al., 2019). The paper contributes a specialized dataset and a method for UUV trajectory prediction. While its decoupled-correlated framework and intention annotations build on prior work in 2D and 3D prediction, the narrow application scope and lack of cross-domain validation restrict its broader scientific impact. To better align with the literature, future work should validate the method on diverse 3D datasets and compare against 3D-specific baselines. Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: 1. The 3DMoTraj dataset focuses on 3D trajectories of UUVs in oceanic environments, addressing a critical gap in underwater robotics research. Its annotations for motion and static intentions (velocity changes and endpoint octants) provide a unique foundation for intention-conditioned 3D prediction. Weaknesses: 1. The 3DMoTraj dataset is self-collected and may be specific to the 3D trajectory prediction model presented in this paper. The dataset consists of simulated data based on human intervention and has not been validated in real-world scenarios. 2. 
The model proposed in this paper has only been evaluated on the self-constructed dataset, and has not been experimentally validated against state-of-the-art methods on other 3D datasets, so it does not indicate whether the excellent experimental results stem from the design of the model or from the bias of the specific dataset. Other Comments Or Suggestions: It is recommended that the authors refine the implementation principles of the SA module and SC module in the model framework diagram in Figure 4. Questions For Authors: Method Scalability: How does your method perform in scenarios with more agents, higher noise, or non-formation dynamics (e.g., adversarial UUV interactions)? Have you tested its scalability to such real-world complexities? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Q1: The dataset is simulation data and has not been validated in real-world scenarios.** **A1**: While our dataset is based on predefined formation trajectories, real-world disturbances naturally cause deviations from planned paths and formation shifts. These deviations reflect real challenges in trajectory prediction. Moreover, most real-world robotic trajectories are pre-programmed rather than spontaneously generated, meaning our dataset aligns with actual robotic motion patterns, making it representative of real-world scenarios. *** **Q2: Validating the proposed method against state-of-the-art methods on other 3D datasets.** **A2**: It is hard to find publicly available 3D trajectory datasets. We plan to collect a more general 3D trajectory dataset of unmanned aerial vehicles (UAVs) to validate our method. Specifically, we will capture trajectories across nine civilian environments. Currently, we have collected two environments: large-scale indoor mall scenarios and urban road airspace scenarios. We randomly selected one hour of annotated data from these two scenarios and benchmarked our method against state-of-the-art methods. For more experimental details, please see **Q1** of Reviewer cMBX. |Methods|Large-scale indoor mall scenarios|Urban road airspace scenarios| |:----|:----|:----| |NPSN|0.72/0.97 |0.83/1.31| |MRGTraj|0.60/0.78|0.73/1.23| |CausalHTP|0.54/0.71|0.68/1.18| |S-Implicit|0.52/0.73|0.65/1.25| |MS-TIP|0.47/0.77|0.62/1.12| |TrajCLIP|0.45/0.74|0.61/1.13| |Our|0.32/0.67|0.42/0.95| The results show that our method outperforms all others, validating its effectiveness in diverse 3D scenarios. The visualized results in Figure-4 and Figure-5 at URL <https://anonymous.4open.science/r/ICML_ID4436/README.md> further illustrate its general 3D prediction capability. *** **Q3: Many 3D trajectories are military datasets with limited application scenarios.** **A3**: 3D trajectory prediction has broad civilian applications. 
Beyond UAV navigation and obstacle avoidance, it supports urban air mobility, precision agriculture, disaster response, and industrial inspections, ensuring safe and efficient operations. These applications extend its impact far beyond military use. *** **Q4: The claim of reducing the total number of parameters is misleading.** **A4**: Yes, our decoupling strategy simplifies the optimization process but does not reduce the number of optimized parameters. We will clarify this in our paper. *** **Q5: The effects of modifying the data augmentation strategies of compared methods.** **A5**: To ensure fair augmentation, we conducted a comparative study of 2D augmentations (rotation, flipping, and translation) versus their 3D counterparts across several state-of-the-art methods. |Augmentation|2D strategies|3D strategies| |:----|:----|:----| |CausalHTP|0.71/1.30|0.69/1.27| |MS-TIP|0.70/1.31|0.70/1.29| |MRGTraj|0.69/1.36|0.68/1.31| |S-Implicit|0.68/1.22|0.66/1.20| |Our |0.58/1.02|0.47/0.88| The results show that 3D strategies provide slight improvements for the other methods but significantly enhance our method, suggesting that 3D strategies are particularly beneficial for our model. For fairness, we use standard 2D strategies with 3D inputs for all methods. *** **Q6: Discussion of the SC and SA modules separately.** **A6**: We conducted separate ablation studies on the SC and SA modules. | Settings|ADE|FDE| |:---- |:---- |:---- | |LSTMOnly|0.66|1.05| |LSTM+SC|0.64|1.04| |LSTM+SA|0.61|1.02| |LSTM+SA+SC|0.58|1.02| The results indicate that removing the SC or SA module degrades the model's performance, highlighting their contributions. *** **Q7: Generalization to 2D Datasets with more baselines.** **A7**: We evaluated our prediction strategy on four additional baselines using 2D datasets: ETH&UCY and SDD. To ensure a fair comparison, all models were trained and tested on the same machine. 
|Methods|ETH&UCY|SDD| |:----|:----|:----| |PECNet|0.30/0.48|10.02/15.79| |PECNet+our|0.25/0.42|9.45/15.26| |NPSN|0.28/0.44|8.56/14.95| |NPSN+our|0.25/0.39|8.34/14.36| |MSRL|0.20/0.36|8.36/13.85| |MSRL+our| 0.19/0.34 | 8.29/13.56| |TrajCLIP|0.21/0.35|7.69/13.31| |TrajCLIP+our|0.19/0.32|7.63/13.30| These results show that our approach consistently enhances the performance of multiple baselines, further validating its generalization capability to 2D trajectory prediction. *** **Q8: Refine the implementation principles of the SA and SC modules in Figure 4.** **A8**: We added the implementation details of the SA and SC modules in Figure 4. Please refer to Figure-1 at URL <https://anonymous.4open.science/r/ICML_ID4436/README.md>. *** **Q9: How does your method perform in real-world scenarios?** **A9**: As discussed in **Q2**, our method has been evaluated in urban road airspace scenarios, where groups of UAVs navigate complex intersections to simulate 3D logistics transmission. These scenarios involve complex interactions. Our method performs best in this environment, demonstrating its scalability to real-world complexities.
Summary: This paper proposes a 3D trajectory dataset named 3DMoTraj collected from unmanned underwater vehicles (UUVs) in ocean environments, which fills the research gap in this field. Regarding the setting of 3D trajectory prediction, the paper highlights the challenge of computational complexity and provides theoretical proofs. The paper proposes a decoupled framework to mitigate the computational complexity, and experiments demonstrate the superior accuracy of 3D trajectory prediction. Claims And Evidence: The paper proposes a decoupled framework to mitigate the computational complexity, but it has not provided ablations or method comparisons on inference cost. Methods And Evaluation Criteria: Yes. Theoretical Claims: The theoretical claims that support the key contributions of this paper are correct. Experimental Designs Or Analyses: The paper has not provided experiments to support the claims of mitigating the computational complexity of the 3D trajectory prediction task. Supplementary Material: I have reviewed all the content of the supplementary material. Relation To Broader Scientific Literature: The key contributions of the paper are related to previous 2D trajectory prediction datasets and methods, as well as future 3D trajectory prediction tasks. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper proposes a large-scale and diverse 3D trajectory dataset collected from unmanned underwater vehicles in ocean environments, which is a good contribution to the related field. 2. The computational complexity challenges discussed in this paper are well-justified. 3. The performance of the proposed decoupled framework surpasses that of several methods in the 3D trajectory prediction problem, and ablations demonstrate that the proposed modules can enhance prediction accuracy. Weaknesses: 1. The 3DMoTraj dataset is collected in ocean environments. 
Extending it to general 3D scenarios is difficult due to underwater dynamics, and the paper should elaborate on the limitations of the application scenarios more formally for clarity. 2. Apart from 2D and 3D, the paper has not provided further discussions on the differences between the proposed framework and other trajectory prediction methods. Trajectory refinement is a common practice in trajectory prediction tasks, such as in MTR, QCNet, SmartRefine, etc. Other Comments Or Suggestions: Please see the weaknesses. Questions For Authors: Is there any available 3D trajectory dataset collected from unmanned aerial vehicles? If so, please provide a discussion on it. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1: The paper tries to mitigate the computational complexity but has not provided ablations or method comparisons on inference cost.** **A1**: First, we clarify that the complexity our paper aims to mitigate is prediction complexity, which is crucial for optimizing 3D trajectory prediction. However, this is distinct from computational complexity, and we apologize for the confusion this caused. To avoid any ambiguity, we will explicitly define the term "prediction complexity" in our paper. Theoretical analysis in the Appendix and ablation studies confirm that our method effectively reduces the prediction complexity of 3D trajectories. Computational complexity is indeed an important aspect in evaluating the proposed method; therefore, below we compare inference costs across several state-of-the-art methods using an NVIDIA 2080 Ti GPU. To ensure fair comparisons, all tested methods were modified to accept inputs of shape 70×8×3. | Methods|Parameters (M)|FLOPs (G)|Inference time (s)|ADE|FDE| |:----|:----|:----|:----|:----|:----| |MSRL|0.59|0.12|0.09|1.50|2.13| |LBEBM|1.24|0.09|0.05|0.84|1.47| |NPSN|0.22|0.14|1.29|0.75|1.04| |CausalHTP|0.04|0.16|2.54|0.71|1.30| |MRGTraj|4.36|20.04|0.06|0.69|1.36| |Our|3.41|0.24|0.08|0.58|1.02| These results show that our method performs best while maintaining relatively high model efficiency, making it suitable for deployment on embedded robotic systems. *** **Q2: Extending 3DMoTraj dataset to general 3D scenarios is difficult due to underwater dynamics.** **A2**: From the methodological perspective, our motivation is to reduce the prediction complexity of 3D trajectories in a manner that ensures robustness across diverse scenarios. The 3DMoTraj dataset is only one of the validation scenarios for our method. To further establish its generalizability, we are constructing a large-scale 3D trajectory dataset for unmanned aerial vehicles (UAVs). 
Preliminary results from this dataset confirm that our method extends effectively to general 3D trajectory prediction. Please refer to **Q4** below for more details. *** **Q3: Further discussions on trajectory refinement works, such as MTR, QCNet, SmartRefine, etc.** **A3**: We modified the trajectory refinement components of MTR [1], QCNet [2], and SmartRefine [3] by replacing them with the refinement module in our method. |Settings|ADE|FDE| |:----|:----|:----| |Decoupled prediction+MTR|0.69|1.15| |Decoupled prediction+QCNet|0.64|1.04| |Decoupled prediction+SmartRefine|0.61|1.05| |Decoupled prediction+SCA-LSTM (Our)|0.58|1.02| The results show that our SCA-LSTM performs best in refining initial predictions. This outcome is expected, as SCA-LSTM explicitly models the inter-axis correlations of predicted trajectories, which are intentionally ignored in the decoupled prediction stage to simplify optimization. In contrast, the refinement modules of MTR, QCNet, and SmartRefine focus on modeling interactions between predictions, global intentions, and local maps for refinement. We will further elaborate on the differences between our method and other 2D refinement-based methods in our paper. *** **Q4: Provide a discussion on 3D trajectory dataset collected from unmanned aerial vehicles.** **A4**: As outlined in our conclusion, future work will involve collecting a large-scale 3D trajectory dataset from unmanned aerial vehicles (UAVs) to validate our method further. Specifically, we plan to capture trajectories across nine distinct environments. For each environment, we aim to collect six hours of trajectories, allocating three hours for training, two for validation, and one for testing. At the current stage, we have collected two environments—large-scale indoor mall scenarios and urban road airspace scenarios—though annotations remain incomplete. 
We randomly selected one hour of annotated data from these two scenarios (30 minutes for training, 20 for validation, and 10 for testing) and benchmarked our approach against other state-of-the-art methods. |Methods|Large-scale indoor mall scenarios|Urban road airspace scenarios| |:----|:----|:----| |NPSN|0.72/0.97|0.83/1.31| |MRGTraj|0.60/0.78|0.73/1.23| |CausalHTP|0.54/0.71|0.68/1.18| |S-Implicit|0.52/0.73|0.65/1.25| |MS-TIP|0.47/0.77|0.62/1.12| |TrajCLIP|0.45/0.74|0.61/1.13| |Our|0.32/0.67|0.42/0.95| The results demonstrate that our method outperforms all the competitors, validating its effectiveness in diverse 3D scenarios. We have also visualized several predicted trajectories in Figure-4 and Figure-5 at the anonymous URL <https://anonymous.4open.science/r/ICML_ID4436/README.md>, further illustrating the ability of our method in general 3D trajectory prediction. *** **Reference** [1] Shi S, et al. Motion transformer with global intention localization and local movement refinement. NeurIPS, 2022. [2] Zhou Z, et al. Query-centric trajectory prediction. CVPR. 2023. [3] Zhou Y, et al. Smartrefine: A scenario-adaptive refinement framework for efficient motion prediction. CVPR. 2024. --- Rebuttal Comment 1.1: Comment: After considering the authors' response and other reviewers' comments, I acknowledge that leveraging customized modules to represent intra-axis features and inter-axis correlations respectively could be a potential way to simplify the optimization process of modeling the Gaussian distribution for future trajectories. The authors have provided theoretical analysis and experiments in 2D/3D scenarios to support the method's effectiveness. My remaining concerns are as follows: 1. In the experiments on the ETH&UCY and SDD datasets, the authors compare the baselines with their combinations involving the proposed components. For a fair comparison, it is necessary to also provide an analysis of model parameters and computational complexity. 2. 
Regarding the proposed 3DMoTraj dataset, I appreciate the authors' effort to collect a more diverse 3D trajectory dataset in the future. However, if the additional data will not be included in the current work, it would be beneficial to indicate the UUV scenario more formally (such as in the dataset's name) for clarity. --Post rebuttal: Thank you for your response. Most of my concerns have been addressed, and I believe this work will provide a valuable contribution to the community. Therefore, I will raise my rating. --- Reply to Comment 1.1.1: Comment: # To Reviewer 3PK9 / follow-up comments Dear Reviewer 3PK9, We sincerely appreciate your follow-up comments to help us improve our work. Accordingly, we have responded to each comment as follows. *** **Q1: For a fair comparison, it is necessary to also provide an analysis of model parameters and computational complexity for experiment on 2D datasets.** **A1**: Following your suggestion, we added an analysis of model parameters and computational complexity for our comparative experiments on the 2D datasets ETH&UCY and SDD. All models are trained and tested using the same machine equipped with an NVIDIA 2080 Ti GPU, and modified to accept an input shape of 70×8×3. The results are presented below. 
|Methods|Parameters (M)|FLOPs (G)|Inference time (s)|ETH&UCY (ADE/FDE)|SDD (ADE/FDE)| |:----|:----|:----|:----|:----|:----| |PECNet|1.23|0.12|0.03|0.30/0.48|10.02/15.79| |PECNet + our|2.31|0.18|0.05|0.25/0.42|9.45/15.26| |NPSN|0.22|0.14|1.26|0.28/0.44|8.56/14.95| |NPSN + our|1.19|0.21|1.32|0.25/0.39|8.34/14.36| |LBEBM|1.24|0.09|0.05|0.22/0.40|9.20/16.47| |LBEBM + our|2.32|0.16|0.07|0.21/0.38|8.98/15.93| |MSRL|0.59|0.12|0.09|0.20/0.36|8.36/13.85 | |MSRL + our|1.47|0.17|0.12|0.19/0.34|8.29/13.56| |TrajCLIP|14.94|18.96|0.28|0.21/0.35|7.69/13.31| |TrajCLIP + our|16.05|19.02|0.31|0.19/0.32|7.63/13.30| The results show that integrating our decoupled trajectory prediction and correlated trajectory refinement introduces minimal overhead among all the methods. On average, it results in an increase of approximately 1M parameters, 0.05-0.08 GFLOPs, and 0.02-0.06 seconds in inference time. These results further validate the efficiency and effectiveness of our method's main components, even in general 2D scenarios—making it practical for deployment on various hardware platforms. *** **Q2: It would be beneficial to indicate the UUV scenario more formally (such as in the dataset's name) for clarity.** **A2**: The 3DMoTraj dataset primarily focuses on 3D trajectories of UUV operating in oceanic environments. To enhance clarity, we will rename the dataset to 3DMoTraj-UUV to explicitly indicate the UUV scenario. Since accurate 3D trajectory prediction is critical for UUVs, enabling efficient path planning, real-time coordination, and robust obstacle avoidance, we believe that the 3DMoTraj-UUV dataset will be a valuable resource for advancing research in marine robotics and related fields. Inspired by your suggestion, for the unmanned aerial vehicles (UAV) datasets currently under collection, we will name them according to their collected scenarios. 
These datasets will be utilized to further evaluate the generalization ability of our 3D trajectory prediction method in our future work. We would also like to clarify that the UAV data will not be included in the current paper, as they are still in the early stages of collection and not yet ready for release. *** Thank you once again for your helpful feedback. We hope our response addresses your concern.
Enforcing Idempotency in Neural Networks
Accept (poster)
Summary: Idempotent Generative Networks (IGN) require an operator $f$ to satisfy $f = f \circ f$. This paper addresses such idempotency by analysis from perturbation theory, identifying the polynomial $3K^2 - 2K^3$ as one that projects matrices onto the manifold of idempotent matrices. The authors then adapt this approach to the non-linear case. Specifically, they override the gradient of $||f(f(x))-f(x)||$ with $3f(y) - 2f(f(y)) - y$, thus simplifying backprop. Experiments on MLPs and on MNIST with a U-net style DCGAN show reduced idempotent error. I am listing strengths, weaknesses, and questions throughout these review fields, marked as **(S), (W), (Q)**. ## update after rebuttal As mentioned in my comment below, after some of my concerns have been addressed I would like to maintain my score and to state that this paper is safely in the "4" zone. Thanks again to the authors. Claims And Evidence: **(W1)** The paper claims the polynomial-based gradient step outperforms naive backprop in achieving $f(f(x)) \approx f(x)$, demonstrated with synthetic MLPs and a small U-net on MNIST. While the results are persuasive for those cases, there is no large-scale benchmark comparison. Methods And Evaluation Criteria: **(S1)** The authors propose a new method—“modified backprop”—that replaces $\partial L/\partial y$ by $3f(y)-2f(f(y))-y$, avoiding the usual chain rule on $f\circ f$. This is simple to implement and apparently stable. **(W2)** Evaluation focuses on how quickly $\|f(f(x)) - f(x)\|$ shrinks. There is no direct measure of runtime or memory cost versus naive double-backprop, which would have clarified efficiency gains. Theoretical Claims: **(S2)** The linear case for $K$ is well understood: $3K^2 - 2K^3$ projects $K$ onto the set of idempotent operators. The paper extends this to non-linear networks via the gradient substitution. This is conceptually clean, though no convergence proof is provided for deep architectures. 
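To make (S2) concrete, here is a small numerical sketch (my own, not from the paper) showing that iterating $K \mapsto 3K^2 - 2K^3$ drives a slightly perturbed projector back onto the idempotent manifold; the scalar map $g(k) = 3k^2 - 2k^3$ has $g'(k) = 6k(1-k)$, so the fixed points $k = 0, 1$ are superattracting:

```python
import numpy as np

# Start from an idempotent matrix (a projector) and perturb it slightly.
P = np.diag([1.0, 1.0, 0.0])          # idempotent: P @ P == P
rng = np.random.default_rng(0)
K = P + 0.05 * rng.standard_normal((3, 3))

# Iterate the polynomial map K <- 3K^2 - 2K^3 from the paper's linear analysis.
for _ in range(10):
    K2 = K @ K
    K = 3.0 * K2 - 2.0 * K2 @ K

# Residual idempotency error; should be near zero after a few iterations.
print(np.linalg.norm(K @ K - K))
```

Convergence near the manifold is quadratic ($g(1+\epsilon) = 1 - 3\epsilon^2 - 2\epsilon^3$), which is consistent with the basins of attraction shown in the paper's Fig. 2.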
**(W3)** The derivation relies on near-idempotency perturbation arguments but does not fully address whether one can recover from being far from idempotent. Experimental Designs Or Analyses: **(S3)** They present ablations across different MLP depths and widths, showing a consistent drop in idempotent error. On MNIST, they adapt the IGN method with “tightness” and show visually plausible results. **(W4)** The dataset variety is minimal (mostly random MLP inputs, plus MNIST). Additional experiments on complex images or tasks would strengthen the claims. Supplementary Material: **(S4)** The appendix contains a code snippet, proofs, and analysis. It complements the main paper well. Relation To Broader Scientific Literature: **(S5)** The paper references Shocher et al. (Idempotent Generative Networks) and classical perturbation theory for linear idempotent operators. This is interdisciplinary, connecting different fields to their mutual benefit. This topic is mostly self-contained, so not too much context is needed; I think the authors describe the relation well. Essential References Not Discussed: None that are strictly missing, but further parallels to other “fixed-point” networks or to full second-order approaches could be informative. Other Strengths And Weaknesses: **(S6)** This is a very elegant idea and an impressive solution. At first glance it seems impossible that gradients taken only through one copy of the network yield optimization equivalent to taking them through a recursion of two copies of the network. This is a solid mathematical solution. **(S7)** What the authors propose provides stability and freedom for IGN. It allows architectures that couldn't work with the original IGN and makes training more stable. This is a step towards making IGN practical. **(W5)** I would want to see projection of out-of-distribution data as shown in the original IGN paper; I think it is a key feature of IGN. **(S8)** The convergence diagram in Fig. 2 is eye-opening and very insightful. 
Other Comments Or Suggestions: I find this paper an important contribution, taking a step with a currently not very practical generative model, making it more feasible. I think this has much potential, having in mind the first generative diffusion models paper (Sohl-Dickstein 2015) that took some time to adapt until it became a major thing. I am also impressed with the interdisciplinarity and the idea of bringing in perturbation theory to solve this problem. I find it novel, creative, and outside the mainstream of current work. However, the experimental side of this work is not strong enough. I would expect a step up in terms of scale, data, and quality w.r.t. the previous IGN paper. I see the experimental and evaluation side as problematic. Nevertheless I think this is a definite accept. Questions For Authors: **(Q1)** I was trying to interpret what is being optimized by the modified backprop step, and I speculate it can be seen as the idempotency of the Jacobian of $f$ at location $x$. Specifically, it is the distance between the Jacobian and the polynomial of that Jacobian $||(3J_\theta(x)^2 - 2J_\theta(x)^3 - J_\theta(x))x||^2$. 1. In the linear setting, where $f(x) = Kx$ with constant $K$, directly enforcing $K = 3K^2 - 2K^3$ yields idempotency. 2. For the non-linear case, one could want each local Jacobian $J_\theta(x)$ to act similarly, but that Jacobian depends on $\theta$ and $x$ and is not fixed. So what one can do is train the network weights so that the Jacobian is close to the polynomial, providing the objective $L = E_x[||(3J_\theta(x)^2 - 2J_\theta(x)^3 - J_\theta(x))x||^2] = E_x[||3f(y) - 2f(f(y)) - y||^2]$. 3. Taking the gradient of this loss with respect to $y$ in such a construction is proportional to $3f(y) - 2f(f(y)) - y$, which matches the polynomial update the authors propose in their modified back-propagation. 4. 
This equivalence suggests the network is being trained so that, for all inputs (noise or images) in the training distribution, the first-order approximation is close to its projection onto the manifold of idempotent matrices. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and valuable feedback. We are pleased that the reviewer finds this to be an important contribution with much potential, and with a strongly interdisciplinary character. We appreciate the concerns raised, and we aim to address these below. **(W2) Runtime and memory cost.** Please see our response to reviewer "kYga" for a discussion of the runtime cost of Modified Backpropagation vs. Ordinary Backpropagation. **(W4 + W5) Dataset variety and out-of-distribution experiments.** Please see our response to reviewer "hBam" where we give indicative results on CelebA and replicate the out-of-distribution experiment of the IGN paper. **(W3) Reliance on near-idempotency.** Our illustration Figure 2 shows a relatively large domain of convergence around the fixed points for the linear case. In the generative setting, other elements of the loss function already help the network to learn an approximately idempotent behaviour, moving the model weights to a near-idempotent regime where our method can be most effective. Questions of convergence in the general non-linear case are fascinating to consider and we propose to investigate this in further work. **(Q1).** The reviewer's perspective on how our method extends the idempotence condition from the linear to the non-linear case is very well-stated, and agrees with our current understanding. We are grateful to the reviewer for setting this out, and if accepted, we propose to revise the start of Section 2.3 to include this perspective. When extending Modified Backpropagation to the non-linear case, we wish for the network to act in an idempotent way around inputs taken from the training distribution (and hope that enough such points yield idempotent behaviour for the rest of the distribution). Indeed, by treating the behaviour of the network around inputs as locally linear, Eq. 
15 becomes the 'obvious' choice for optimizing idempotency in the context of the method we discuss in Section 2 of our paper. We therefore agree with the reviewer's suggestion that setting $\frac{\partial{L}}{\partial{\mathbf{y}}} = 3f(\mathbf{y}) - 2f(f(\mathbf{y})) - \mathbf{y}$ in Modified Backpropagation effectively trains the first-order approximation of the network to act as an idempotent mapping. In particular, what we are optimizing is the idempotence property of the Jacobian of $f$ around input points. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. Most of my concerns have been addressed: **(W4+W5)** The authors provided results on another, slightly bigger dataset, CelebA, and showed projection from OOD. While not perfect and, I think, inferior to the original IGN results, it seems to be possible, and the gap is some tuning and surface engineering. **(W3)** I'm glad the authors pointed me to Fig. 2, as it does show the tendency for the idempotency to facilitate long-term convergence. Of course it is hard to determine in general, but it seems to be reasonable. **(W2)** I appreciate the theoretical compute calculation and am convinced that theoretically there should be the same runtime, with faster convergence which makes it more efficient. I would have been more convinced by empirical evaluation as there can be overheads. Before the rebuttal my opinion was that this paper is a definite accept, with meaningful scientific contribution and elegant interdisciplinarity. In their response, the authors showed that their method is indeed more efficient and capable of handling more data and the projection task. It is a bit disappointing that results-wise there is no step forward from the original IGN, but it seems like this kind of work helps get there by allowing stability and more architectural freedom. Overall, my concerns were mostly addressed. I don't think this paper should be rated 5. 
However, I can state that as I see it now, this work is more safely located in the 4 rating than before. I think this paper needs to be accepted. **(Q1)** Yes, you have my permission to add this analysis to the paper if you wish to do so.
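As a side note on the (Q1) discussion above, the linear-case behaviour is easy to verify numerically. The following is a minimal NumPy sketch (not from the paper; the 3×3 projection and perturbation scale are illustrative choices) showing that iterating the corrector $K' = 3K^2 - 2K^3$ drives a slightly perturbed projection matrix back to the idempotent manifold:

```python
import numpy as np

rng = np.random.default_rng(0)

# Start from an exactly idempotent projection and perturb it slightly.
P = np.diag([1.0, 1.0, 0.0])                 # P @ P == P
K = P + 0.05 * rng.standard_normal((3, 3))

def idempotent_error(M):
    return np.linalg.norm(M @ M - M)

errors = [idempotent_error(K)]
for _ in range(8):
    K = 3 * (K @ K) - 2 * (K @ K @ K)        # the order-3 corrector K' = 3K^2 - 2K^3
    errors.append(idempotent_error(K))

# The idempotent error contracts rapidly toward machine precision,
# illustrating the "domain of convergence" around idempotent fixed points.
assert errors[-1] < 1e-10 < errors[0]
```

Eigenvalues near 1 and 0 are superattracting fixed points of the scalar map $\lambda \mapsto 3\lambda^2 - 2\lambda^3$, which is why the contraction is roughly quadratic.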
Summary: This paper introduces a novel approach for training idempotent neural networks. Leveraging techniques from perturbation theory on idempotent matrices, the authors propose a new method for projecting matrices onto the idempotent manifold. They further extend this approach to nonlinear neural networks. Finally, the paper presents experiments, primarily on toy datasets and MNIST generation. Claims And Evidence: I find the claims to be convincing. A key challenge arises when extending linear idempotency to the nonlinear case. To address this in a practical and reasonable manner, the authors approximate the gradient of the loss as a local linear matrix and incorporate its direction into the gradient update to reduce idempotency error. I believe this approach is fairly reasonable. Methods And Evaluation Criteria: Yes. More discussions on "Experimental Designs Or Analyses" section. Theoretical Claims: Theory in idempotency seems to be correct. Experimental Designs Or Analyses: - The visualizations of optimizer trajectories (Figure 3), absolute cosine similarity (Figure 4), and gradient norms (Figure 5) demonstrate that idempotent guidance facilitates the training of IGN. These visualizations (Figures 3–7) provide valuable insights into the properties emerging from this idempotency guidance, effectively illustrating the benefits of the modified backpropagation. - A major limitation of this paper is that the experiments are restricted to toy datasets and MNIST. Could the authors extend the experiments to CelebA, following the experiment protocol in IGN? Supplementary Material: I've checked the algorithm in Appendix C. Relation To Broader Scientific Literature: . Essential References Not Discussed: . Other Strengths And Weaknesses: . Other Comments Or Suggestions: . Questions For Authors: Could the authors provide a similar analysis for the MNIST experiments as presented in Figures 3–7? 
It would be valuable to see whether the observed properties hold in a larger-scale setting. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time and valuable feedback. We are happy to respond to the questions that have been raised. **Extended experiments with CelebA.** We agree with the reviewer that extending experiments to cover other large datasets is interesting. In the linked figure (https://imgur.com/a/w89mL6I) we demonstrate generation of CelebA images from noise, following the experimental protocol of IGN, adjusted to use our Modified Backpropagation method. We train the same U-Net DCGAN architecture in the way outlined in our paper. Hyperparameters $\lambda$ were chosen as $\lambda_r = 20$, $\lambda_i = 0.006$, $\lambda_t = 0.02$ by trial and error. As with MNIST (and IGN), we sample noise with similar frequency statistics as images from the dataset. Although the results are qualitatively inferior to state-of-the-art generative models, we believe that this is partly due to suboptimal hyperparameter selection, which would likely be improved with further fine-tuning. Nevertheless, the provided figure clearly demonstrates similar behaviour to that observed in our paper for the MNIST dataset (Figures 8 and 9), and in the IGN paper for MNIST and CelebA (Figure 4). In particular, the images $f(\mathbf{z})$ and $f(f(\mathbf{z}))$ are highly similar as one expects from an idempotent mapping. Furthermore, we also observe the desired self-correcting property, with some small defects in background, hairstyle, and facial features visible in the image $f(\mathbf{z})$ then being corrected in the image $f(f(\mathbf{z}))$. A key result of the IGN paper is the ability to correct degraded images and map them into the distribution. In the linked figure (https://imgur.com/a/j4d20Ni) we demonstrate how a model trained using our method can correct images from the CelebA dataset after applying added noise, a greyscale filter, and a sketch filter. As in IGN, our model does not perfectly recover the original image, but characterizing features are clearly recovered in many cases. 
The results on CelebA were not initially included in the paper as we wish to focus primarily on the task of finding general idempotent mappings, as opposed to focusing on generative applications of the method. However, as experiments on CelebA were requested by several reviewers, we propose to include a short section on these results in the final paper, if our work is accepted. **Figures 3-7 in larger-scale settings.** The larger-scale generative setting we consider in this paper has a loss function which is composed of several components, some of which are adversarial in nature. It is well known that training generative models requires careful balancing of the hyperparameters in order to achieve good qualitative results, and it is rarely beneficial to let the idempotent loss factor too greatly in the combined loss. On that basis, even if our approach would be able to find models with lower idempotent loss, we believe that reproducing Figures 6-7 in this setting would be misleading, as they cannot be used meaningfully to judge the quality of the trained network. Figures 3-5 could indeed be reproduced meaningfully for our MNIST dataset, and we agree with the reviewer that it would be interesting to see whether the behaviour we observe on toy networks also applies to the larger-scale setting. We plan to do this thoroughly in future work, where we can give a fuller analysis of our method for more complex architectures. --- Rebuttal Comment 1.1: Comment: I thank authors for the detailed response. I'll keep my score.
Summary: The paper presents a new approach to enforcing idempotency in neural networks through a modification of the backpropagation algorithm, termed Modified Backpropagation. The key idea is the derivation of an idempotent corrector function $g(K) = 3K^2 - 2K^3$, which iteratively projects a real-valued matrix onto the manifold of idempotent matrices. The authors extend this idea to general neural network training by modifying the canonical gradient-based optimization approach. Experimental results demonstrate that this method reduces idempotent error and outperforms ordinary backpropagation in multiple MLP and CNN architectures. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method makes sense for the problem and its application. Theoretical Claims: I have checked all details of the theory part. There is no actual proof. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper provides a new way of enforcing idempotency in neural networks through a modification of the backpropagation algorithm. Essential References Not Discussed: I believe that most relevant works have been cited in the paper. Other Strengths And Weaknesses: **Strengths** - The paper is well-written and it is easy to follow. - The introduction of an idempotent correction function and its integration into neural network training is a promising research topic as it has not been explored deeply in the literature. The relation with perturbation theory is interesting. - The paper includes experiments demonstrating improved idempotent properties in trained networks compared to ordinary backpropagation. - The method has been applied to generative models. **Weaknesses** - The theory part of the paper is weak. The relation of the proposed method to perturbation theory, as well as the stability analysis, is quite easy and straightforward. 
- The paper primarily focuses on MLPs and CNNs, but it does not provide results on more complex architectures like transformers or diffusion models. - The computational cost of enforcing idempotency through Modified Backpropagation is not discussed in detail. Does the method introduce significant overhead compared to traditional approaches? - It is unclear how much each component (e.g., the choice of recurrence relation, hyperparameter γ) contributes to the observed improvements. Other Comments Or Suggestions: No. Questions For Authors: - How does the proposed method generalize to architectures beyond MLPs and CNNs, such as transformers or graph neural networks? - In Figure 6-B1, why is the error of the modified backpropagation not stable? Is there any reason why? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their time and valuable feedback. We appreciate the concerns raised, and we aim to address these below. **Theoretical development.** We employ a novel theoretical framework which allows gradient-free training of an idempotent property. For us it is interesting that there exists a unique order-3 polynomial iterator $\mathbf{K}' = 3 \mathbf{K}^2 - 2 \mathbf{K}^3$ for the idempotence property. While the stability analysis may be technically straightforward, we have not seen perturbative techniques used for similar purposes in the machine learning community, and we believe our methods could be extended to other algebraic structures beyond idempotence. We therefore suggest that our work could be of theoretical interest to others in the community. **Computational cost.** A theoretical analysis shows that ordinary backpropagation and modified backpropagation have the same computational complexity. These results are reflected in practical experiments (https://imgur.com/a/S8FSExJ), which show that both algorithms run in approximately the same amount of time. These results show, for networks B1-B4 in Table 1, average wall-clock running time consumed per training epoch across 250 repetitions of each algorithm, with 98% confidence intervals. Across all configurations, both algorithms take approximately the same amount of time, with a slight advantage to Modified Backpropagation with relative improvement ranging from 1–38%. The theoretical argument goes as follows. We here consider the growth in the number of matrix multiplications required as the number of layers increases. Defining $\mathbf{y}=f(\mathbf{x})$, the loss function from Eq. 2 can be written as $L(\mathbf{y}) = \frac{1}{m} \sum (f(\mathbf{y}) - \mathbf{y})^2$. By the chain rule $\frac{\partial{L}}{\partial{\mathbf{W}}} = \frac{\partial{L}}{\partial{\mathbf{y}}} \frac{\partial{\mathbf{y}}}{\partial{\mathbf{W}}}$. 
Computing $\frac{\partial{\mathbf{y}}}{\partial{\mathbf{W}}}$ using backpropagation will generally use $O(k)$ matrix multiplications for a $k$-layer MLP. This quantity is computed in the same way for both Modified Backpropagation and Ordinary Backpropagation. For Ordinary Backpropagation, the quantity $\frac{\partial{L}}{\partial{\mathbf{y}}}$ can also be unfolded via the chain rule and its evaluation requires computing $\mathbf{y} = f(\mathbf{x})$, $f(\mathbf{y})$, as well as $\frac{\partial{f(\mathbf{y})}}{\partial{\mathbf{y}}}$, each of which can be computed in $O(k)$. Thus, assuming memoization is used (as is the case by design in most autodiff frameworks, like PyTorch), Ordinary Backpropagation can compute $\frac{\partial{L}}{\partial{\mathbf{W}}}$ in $O(k)$ time. (In Section 1 around Eq. 2 in the paper we argue that the computational cost of evaluating this gradient in Ordinary Backpropagation grows exponentially in the number of layers. Memoization reduces this to $O(k)$ time, and we propose to clarify this point in the paper.) For Modified Backpropagation we compute $\frac{\partial{L}}{\partial{\mathbf{y}}} = 3f(\mathbf{y}) - 2f(f(\mathbf{y})) - \mathbf{y}$. Again, assuming memoization, each of these terms takes $O(k)$ time to evaluate, so Modified Backpropagation also runs in $O(k)$ time. In the final paper, if accepted, we will include content on the theoretical discussion on relative running time, as well as the figure showing our practical experiments, which demonstrate this point. **Error spikes in Figure 6-B1.** The spikes are caused by some runs of Modified Backpropagation converging (within machine precision) to the zero-matrix during training. Due to floating-point imprecision this is rounded down and causes the effect. We chose not to exclude these runs from analysis as we deem it important to expose this behaviour. Note that this only occurs on the small architecture B1. The last sentence in the caption for Figure 6 addresses exactly this concern. 
In the final paper, if accepted, we will add a further sentence in the main body to make this point more clearly. **Application to other network architectures.** We appreciate the relevance of exploring more complex network architectures. This work proves the principle that a fundamentally new approach to learning idempotent networks is possible. In this early work it has been important to explore behaviour of Modified Backpropagation in controlled environments with few external factors in order to give convincing evidence of the method's efficacy. To this end, we have evaluated our method on a variety of MLP networks and synthetic datasets which illuminates behaviour such as sensitivity to network size, dataset distributions, and running time. Nevertheless, wider application of the method in practice will depend on generalization to the architectures mentioned by the reviewer, and we are eager to explore this in future work.
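The operation-count argument in the rebuttal above can be made concrete with a toy sketch. The depth-$k$ tanh MLP below is a stand-in (not the authors' implementation); it tallies per-layer matrix products while evaluating the surrogate gradient $\frac{\partial L}{\partial \mathbf{y}} = 3f(\mathbf{y}) - 2f(f(\mathbf{y})) - \mathbf{y}$ with $f(\mathbf{y})$ memoized, confirming the linear growth in depth:

```python
import numpy as np

calls = 0  # tallies per-layer matrix products across all forward passes

def f(x, Ws):
    """Stand-in k-layer tanh MLP; every layer evaluation is counted."""
    global calls
    for W in Ws:
        x = np.tanh(W @ x)
        calls += 1
    return x

k = 6
rng = np.random.default_rng(0)
Ws = [0.5 * rng.standard_normal((4, 4)) for _ in range(k)]
x = rng.standard_normal(4)

# Surrogate gradient dL/dy = 3 f(y) - 2 f(f(y)) - y, memoizing f(y)
# so that it is evaluated only once.
y = f(x, Ws)            # pass 1: y = f(x)
fy = f(y, Ws)           # pass 2: f(y)
ffy = f(fy, Ws)         # pass 3: f(f(y)), reusing the memoized fy
g = 3 * fy - 2 * ffy - y

assert calls == 3 * k   # three forward passes of k layers each
```

Exactly three forward passes of $k$ layers each are needed, so the cost of the surrogate gradient grows as $O(k)$, matching the memoization argument in the rebuttal.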
Continuously Updating Digital Twins using Large Language Models
Accept (poster)
Summary: This paper proposes CALM-DT (Context-Adaptive Language Model-based Digital Twin), a novel digital twin framework that leverages large language models (LLMs) for simulation of dynamical systems. Claims And Evidence: Key claim: LLMs can serve as digital twins that continuously update without re-design or retraining. Yes it is supported by the experiments Methods And Evaluation Criteria: Yes the evaluations make sense Theoretical Claims: The paper does not present new formal proofs (e.g., for convergence or bounds). Experimental Designs Or Analyses: Yes, the designs look valid. Supplementary Material: I haven't. Relation To Broader Scientific Literature: Introduces LLM-based, context-adaptive simulation, which relates to ongoing work in time-series forecasting via large language models but extends it by including actions/policies and retrieval-based updates. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: None Other Comments Or Suggestions: I don't have additional comments Questions For Authors: Does the evaluation exceed the model's context window (i.e. 128k)? It seems not. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions. We give answers to the following: - (A) Is the context window exceeded? --- **(A) Is the context window exceeded?** Thank you for raising this point. You are correct in saying that the initial cystic fibrosis (CF) dataset we investigated does not exceed the LLM context limit (its combined textual representation of all samples is approximately 100k tokens) and it can therefore fit entirely within the LLM's context window. However, we have now **tested this scenario of including the entire training dataset in context as a further ablation**, and we find that it **leads to suboptimal results, significantly underperforming compared to our sample selection method** with context set size $c=5$ (See the table below, which is our original Table 3 with the new ablation 'Whole dataset in context' added).

Table 1: Sample selection ablations

| Sample selection | MSE ($\downarrow$) | MAE ($\downarrow$) |
|----------------------------------|--------------------|--------------------|
| $\text{CALM-DT}_\text{Zero}$ | 74.816 $\pm$ 0.411 | 5.241 $\pm$ 0.021 |
| $\text{CALM-DT}_\text{Random}$ | 73.761 $\pm$ 3.221 | 5.224 $\pm$ 0.100 |
| $\text{CALM-DT}_\text{No train}$ | 67.055 $\pm$ 0.894 | 4.953 $\pm$ 0.040 |
| Whole dataset in context | 65.3 $\pm$ 3.90 | 4.81 $\pm$ 0.102 |
| $\text{CALM-DT}$ | 55.336 $\pm$ 0.811 | 4.634 $\pm$ 0.045 |

This finding **aligns with our existing context size ablation study in Section 6.2**, which demonstrates that optimal performance is achieved with $c=5$, and that the addition of more samples leads to performance degradation. These results further suggest that including excessive, potentially irrelevant samples degrades simulation performance. This observation is **supported by existing literature on LLM performance degradation with excessive context lengths** [1, 2]. 
Additionally, we have now **conducted experiments on a larger dataset** to evaluate simulation performance for patients with Non-Small Cell Lung Cancer (NSCLC). We consider a dataset of 500 patients, each with 60 daily values recorded for the state variables 'Tumour volume' and 'Chemotherapy concentration,' as well as the action variables 'Chemotherapy dosage' and 'Radiotherapy dosage'. The size of each instance for this dataset is therefore substantially larger than in our previous CF experiment, making it **infeasible to tokenize the entire dataset and fit it within the LLM's context window** (the full dataset amounts to approximately 350K tokens). As a result, context sample selection is crucial in this setting. Our method demonstrates **strong performance on this dataset as well**, providing further evidence for the efficacy of our approach (See the table below, comparing 30-day simulation accuracy of CALM-DT with baseline models). CALM-DT achieves the highest accuracy in terms of MSE, and the second highest in terms of MAE, again using a context set size of $c=5$, and GPT-4o as the base LLM.

Table 2: 30-day simulation results on NSCLC dataset.

| Model | MSE ($\downarrow$) | MAE ($\downarrow$) |
|-------------|--------------------------|--------------------|
| SINDy | $1.90 \times 10^4 \pm 0$ | $37.6 \pm 0$ |
| Transformer | $116 \pm 52.7$ | $5.86 \pm 2.44$ |
| RNN | $115 \pm 4.35$ | $6.04 \pm 0.645$ |
| DyNODE | $104 \pm 63.6$ | $4.67 \pm 2.06$ |
| HDTwin | $80.6 \pm 11.8$ | $3.42 \pm 0.717$ |
| CALM-DT | $79.4 \pm 8.57$ | $4.28 \pm 0.157$ |

Update: We have **run an additional sample selection ablation** for the CF experiment and included results in Section 6.2, evaluating the use of the **entire training dataset as a context set**. Furthermore, we have conducted an **additional experiment on a larger NSCLC dataset, which exceeds the LLM's context limit**. The results of this experiment are now included in Section 6.1. --- Thank you once again. 
We hope that we have addressed all your comments, and we greatly appreciate your feedback. --- [1] Liu, N. F., Lin, K., Hewitt, J., Paranjape, A., Bevilacqua, M., Petroni, F., and Liang, P. Lost in the middle: How language models use long contexts. Transactions of the Association for Computational Linguistics, 12:157–173, 2024 [2] Li, T., Zhang, G., Do, Q.D., Yue, X. and Chen, W., Long-context llms struggle with long in-context learning, 2024. URL https://arxiv.org/abs/2404.02060.
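The sample-selection step the rebuttal relies on (embed each trajectory's text with its appended LLM-generated summary, then retrieve the $c$ most similar training samples for the context) can be sketched in a few lines. This is a toy illustration, not CALM-DT's implementation: the deterministic `embed` function stands in for the finetuned NLP encoder, and the data strings, `select_context` helper, and $c=5$ default are hypothetical:

```python
import zlib
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy stand-in for the finetuned text encoder: a deterministic
    # pseudo-random unit vector seeded by the string's CRC32.
    rng = np.random.default_rng(zlib.crc32(text.encode()))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def select_context(query_traj: str, query_summary: str,
                   corpus: list[tuple[str, str]], c: int = 5) -> list[int]:
    # Each candidate is embedded as trajectory text + appended summary,
    # mirroring the summary-augmented retrieval described in the rebuttal.
    q = embed(query_traj + " " + query_summary)
    sims = [float(q @ embed(t + " " + s)) for t, s in corpus]
    return sorted(range(len(corpus)), key=lambda i: -sims[i])[:c]

# Hypothetical corpus of (trajectory text, LLM summary) pairs.
corpus = [(f"FEV1 series {i}", f"summary of trend {i}") for i in range(20)]
picked = select_context("FEV1 series 3", "summary of trend 3", corpus, c=5)
assert len(picked) == 5 and picked[0] == 3   # the exact match ranks first
```

Keeping only the top-$c$ matches in context, rather than the whole dataset, is the design choice the ablation above supports: similarity-ranked retrieval filters out the excess samples that degrade simulation accuracy.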
Summary: This paper presents CALM-DT, a framework using large language models to create digital twins that can update continuously without redesign or retraining. Unlike traditional approaches, CALM-DT handles new variables and incorporates new information through in-context learning. Testing on cystic fibrosis patient data shows it outperforms existing methods and adapts seamlessly to changes like new treatments, making it valuable for dynamic real-world applications. Claims And Evidence: The authors only test their methodology in a single setting. For the general claims they are trying to make (proposing this as a general method), it seems important to have at least 3 or so settings. It seems hard to tell how generalizable some of the choices the authors make are across settings, and I think the effectiveness of certain choices would plausibly vary a lot by application setting. Other than that, as far as I could tell, their claims and experiments were sound (with the exception of the things mentioned below). Methods And Evaluation Criteria: As mentioned above, I think the authors should have tested the method in more than one setting to make claims as general as they do (e.g. the title): as of right now, the work functions more so as a case study. > However, since time-series data may differ significantly from typical NLP training distributions, we enhance each trajectory’s textual representation by appending an LLM-generated summary of its trends. As far as I could tell, you do not validate with an ablation that appending an LLM-generated summary helps, or how much; I think this is probably quite context-specific (as I would guess many aspects of the method are). 
This seems especially important given that this is one of the points that the authors discuss as being an area of novelty ("Additionally, novelty arises in our sample-selection method, as we are the first to propose retrieval of time-series data by leveraging LLM generated summaries to enhance NLP encoder capabilities."). Theoretical Claims: I don't believe there are any theoretical claims in the paper. Experimental Designs Or Analyses: Looking at the prompts, it's often not clear what is part of the actual prompt and what would be substituted with different values: e.g. in line 667, would there be an actual weight number provided? Would the `X` values be replaced with actual values later in the prompt? This seems inconsistent with providing the years. I think giving an example of the prompt in which all the variables are given explicitly (but having some way to highlight them as variables) would be helpful to better understand the setup. Supplementary Material: Mostly just looked at the prompts. Relation To Broader Scientific Literature: There is no mention in the paper of the relationship between digital twins and "world models". It's not clear to me what the difference is, and this makes it harder to assess what the novelty of the paper is: that LLMs are relatively good world models is something that has already been studied quite a bit (even using in-context learning to simulate consequences of actions). For instance: [this paper](https://aclanthology.org/2025.coling-main.503.pdf), but there are likely many other works in this area. For example, based on the work linked, claims like the following seem likely incorrect to me: > To the best of our knowledge, CALM-DT is the first proposed context-adaptive simulation method, that dynamically adjusts its knowledge and data base mid-generation. 
As of right now my understanding is that the main novel contribution of this paper may be combining retrieval with using LLMs as world models (a relatively simple scaffolding change whose effectiveness is probably quite context-dependent), plus doing an in-depth case study in one specific application area. Another area of claimed novelty may be the adaptation on the fly to new dynamics (e.g. new treatment options). However, this also likely has precedents in the literature: although most literature on world models is about static dynamics, I would be surprised if there wasn't any work that shows that e.g. LLMs are able to adjust their predictions on the fly when changing the rules of a game. A related line of work is that of LLMs as simulators of human behavior: even there, the premise is that LLMs would be able to predict human behavior in unseen situations, generalizing to new dynamics (e.g. [Generative Agent Simulations of 1,000 People](https://arxiv.org/abs/2411.10109)). I think a much deeper engagement with analogues of this idea across the LLM literature is missing, as it seems essential to situate the contribution of the work. This is the main factor that makes me lean towards not recommending acceptance, paired with the limited scope of the evaluation. Essential References Not Discussed: I don't know references off the top of my head as this is not my main area of expertise, but mentioned something above that I could find with a quick search. My guess is that there are many other papers in that vein. Other Strengths And Weaknesses: Nothing I didn't already mention above! Other Comments Or Suggestions: Line ~380: "acros" -> "across" Line 075: "accomodate" -> "accommodate" Line 205: "flexbily" -> "flexibly" Line ~375: "chane" -> "change" Typo in one of the LLM prompts: "measurments" should be "measurements" (About half of these typos were found by Claude.) 
Questions For Authors: I think it would be interesting to ask the LLMs used in your experiments to hypothesize what "drug X" is referring to. I wouldn't be surprised if – when asked – they would be able to infer that it is Ivacaftor. I don't think this is a major methodological issue, but just wanted to point out that simple renaming may not be *entirely* sufficient to isolate the effect of your method, and this is something one could try to measure explicitly. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions. We address the following: - (A) Additional settings - (B) Time-series summary ablation - (C) Prompts - (D) World models - (E) LLM thoughts on 'drug X' --- **(A) Additional settings** We evaluate CALM-DT on three additional datasets: 1) Non-Small Cell Lung Cancer (NSCLC) patients undergoing chemo- and radiotherapy (See Table 2 in our response to reviewer dKC6) 2) Di-trophic ecological system of hare and lynx population dynamics (See Table 1 in our response to reviewer Y6u8) 3) Tri-trophic ecological system of algae, flagellates, and rotifers (See Table 2 in our response to reviewer Y6u8) CALM-DT consistently outperforms, achieving the lowest MSE across all datasets and the lowest MAE on all but NSCLC. This shows the generalisability of CALM-DT and its capability to accurately model various dynamical systems. **_Update:_** We have **extended Section 6.1 with these 3 datasets**. --- **(B) Time series summary ablation** We conduct ablations for the LLM generated summaries for sample selection. We compare standard CALM-DT, using an encoder finetuned and tested with summaries appended to the time-series, to the use of finetuned encoders that are trained and tested without summaries on CF and NSCLC data. We also compare pretrained encoders, with/without summaries appended at inference time. [Link to table 1: CF results](https://anonymous.4open.science/r/CALM-DT-Rebuttals-65B2/CF_summary_ablation_results.png) [Link to table 2: NSCLC results](https://anonymous.4open.science/r/CALM-DT-Rebuttals-65B2/NSCLC_summary_ablation_results.png) On both datasets, appending LLM generated summaries allows the encoders to select better samples. This is true in the finetuned case, and the pretrained case. **_Update:_** We have **added an additional ablation to Section 6**, showing the importance of LLM generated summaries in sample selection. --- **(C) Prompts** Thank you for raising this point. 
In line 667, there would _not_ be an actual weight provided, as this is the _variable descriptions_ part of the prompt (in lines 664-671, the state/action variables are described). You are correct that the _X_ values in the supplied prompt would be replaced with actual values. We will make this clear by providing a full prompt in the appendix. **_Update:_** We have **added a full prompt to the appendix**. --- **(D) World models** We clarify that we see DTs as a class of world model with particular connotations. It is crucial that DTs (not necessarily world models in general) exhibit **continuous-time dynamics** [1], to allow simulation of arbitrary length, and since a DT is tied to a specific individual system, it should **update its dynamics alongside it, as the system and its environment change** [2]. While world models can focus on more 'static' environments, **DTs must be designed to update**. Our method fits these DT requirements, given the adaptability which we show, and that LLMs can be prompted to simulate with arbitrary horizons and step sizes. We agree that a better contextualisation of this positioning is required, and, given space limitations, we will include an overview of relevant world models, especially those that incorporate LLMs, in our related works. We maintain that we are the first to propose context-adaptive simulations, enabled by sample retrieval, for LLM-based world models, which is essential to allow data-driven insights in unseen state-action pairs, without fine-tuning. While prior studies have indeed explored the use of LLMs as world models, they typically rely on static context/prompts during simulation, or require fine-tuning to be performant. A further novelty is that we show how LLM time-series in-context learning ability scales (Section 6.2). **_Update:_** We have **extended our related works** to build on the above comparison with world models. --- **(E) LLM thoughts on 'drug X'** Thank you for this insightful suggestion. 
We queried the LLM on drug X, and indeed Ivacaftor was one of its suggested options. To avoid this potential 'data leakage' we instead create a 'fake' drug name which is less clearly a placeholder than drug X. We replace 'drug X' with a more realistic, yet fake, CF drug name - 'Pulmurex' - and we see that it makes no appreciable difference to the post Ivacaftor results. Querying the LLM on Pulmurex, it no longer suggests it could be a placeholder for Ivacaftor. **_Update:_** We have **changed 'drug X' to a more plausible drug name** in our Ivacaftor experiments. --- Thank you once again. We hope that we have addressed all your comments, and we greatly appreciate your feedback. --- [1] Chen, H., Yang, J., Chen, J., Wang, S., Wang, S., Wang, D., Tian, X., Yu, Y., Chen, X., Lin, Y. and He, Y., 2024. Continuous-Time Digital Twin with Analogue Memristive Neural Ordinary Differential Equation Solver. [2] National Academies of Sciences. Foundational research gaps and future directions for digital twins. 2023 --- Rebuttal Comment 1.1: Comment: Thank you for responding to my points. I appreciate the additional experiments, and the discussion of world models. While I remain somewhat skeptical that you're the first to propose context-adaptive simulations for LLM-based world models, I agree that you're likely the first to do so with sample retrieval. I've raised my score. --- Reply to Comment 1.1.1: Comment: Thank you Reviewer SWpa, we are very grateful for your numerous helpful comments and continued engagement! Sample retrieval is a critical component of our context-adaptive simulation approach, as it enables CALM-DT to incorporate data-driven insights into simulation in an efficient and automatic way, by dynamically adjusting its context with the most relevant samples mid-simulation. 
Attempting to incorporate data-driven insights into simulation without sample retrieval, such as using a fixed context with a large number of samples included, leads to sub-optimal results (please see our response to Reviewer dKC6, part (A), where we show that including the entire CF dataset in context is less performant than our context-adaptive approach). We are glad that we have addressed your concerns, and we believe our manuscript has significantly improved as a result of these changes.
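To make the retrieval step discussed above concrete, here is a minimal pure-Python sketch of top-k sample selection by cosine similarity over encoder embeddings. All names, the toy vectors, and the selection rule are illustrative assumptions on our part, not CALM-DT's actual implementation:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def select_context(history_emb, sample_embs, k):
    """Return indices of the k dataset samples whose embeddings are
    most similar to the embedding of the current simulation history."""
    ranked = sorted(range(len(sample_embs)),
                    key=lambda i: cosine(history_emb, sample_embs[i]),
                    reverse=True)
    return ranked[:k]

# Toy embeddings: samples 0 and 2 point roughly along the history direction.
history_emb = [1.0, 0.0]
sample_embs = [[0.9, 0.1], [0.0, 1.0], [1.0, 0.05], [-1.0, 0.0]]
print(select_context(history_emb, sample_embs, k=2))  # -> [2, 0]
```

In a context-adaptive simulation this selection would be re-run at each step, so the in-context samples track the evolving state rather than staying fixed for the whole horizon.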
Summary: The paper addresses the challenge of maintaining the relevance of digital twins in dynamic environments where state/action variables and relevant information constantly change. The authors frame digital twinning as an in-context learning problem using LLMs. They propose CALM-DT, which uses fine-tuned encoders for sample retrieval, enabling accurate simulation across diverse state-action spaces during in-context learning. The paper identifies the limitations of existing DTs in dynamic environments (need for re-design or re-training) and its main contributions are: (1) Establishing design requirements for DTs for dynamic environments; (2) Demonstrating that LLMs can meet these requirements and proposing CALM-DT, which adapts to changes in its modeling environment without re-design or re-training; (3) Developing a simulation method that adjusts the information supplied to the LLM mid-generation to handle excessive context window lengths. The paper empirically shows that CALM-DT outperforms existing DTs and can adapt to changes in modeling environments. Claims And Evidence: The paper’s claims are generally well-supported, with a couple of points for improvement. 1. LLM Reliance- The performance of CALM-DT is heavily reliant on the underlying LLM. It would be beneficial to see a more thorough discussion of the limitations and potential risks associated with LLM biases, hallucinations, and failures. 2. More experiments- The experiments are primarily focused on modeling CF progression. While this is a relevant and complex application, it is important to examine the generalization of CALM-DT to other domains. The authors should provide more discussion and evidence on how CALM-DT could be applied to different types of dynamic systems. 3. More evaluation metrics- The paper primarily uses MSE and MAE for evaluation. It would be helpful to include other evaluation measures relevant to the specific application such as predicting critical events. 
Methods And Evaluation Criteria: The proposed method and evaluations are appropriate for addressing the problem. As noted above, additional evaluation metrics would be helpful. Theoretical Claims: No obvious problem. Experimental Designs Or Analyses: The paper is primarily centered on the CF domain. To strengthen the findings, the authors should broaden the empirical analysis to include additional domains and demonstrate CALM-DT's performance in more diverse settings. This would address concerns about the model's ability to generalize. In the ablation study for sample selection, the paper compares different encoder-based methods. A relevant baseline would be to evaluate a selection strategy that uses another LLM (maybe a smaller one) to choose the context set $C_f$ based on the history, instead of relying on a trained encoder. This would provide insight into the effectiveness of the encoder approach compared to a more direct LLM-driven selection. Supplementary Material: Yes. Relation To Broader Scientific Literature: This work is inspired by the broader context of LLMs and their ability to handle in-context learning and time-series data. These capabilities suggest that LLMs can function as adaptive data mechanisms, with applications in areas like medicine and finance. Essential References Not Discussed: None Other Strengths And Weaknesses: The paper lacks clarity regarding how CALM-DT samples actions during the generation process. Key details about the action policy implementation are unclear and require further explanation. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions. We address the following: - (A) LLM reliance - (B) Further experiments - (C) Application-specific metrics - (D) LLM sample selection - (E) Actions --- **(A) LLM reliance** We agree it is important to elaborate on hallucinations, bias, and failures that can arise due to tokenization and formatting. - **Hallucination** can result in implausible patterns, spurious spikes/dips, or inconsistencies with known domain constraints. These can be difficult to predict, so careful analysis of outputs is critical. - **Biases** in training corpora can influence model predictions, potentially leading to disparities in simulations across populations. - Typical **tokenization** (e.g. BPE) can limit precision, as continuous variables are discretised, obliging caution in scenarios where high precision is necessary. - **Correctly structured outputs** cannot be guaranteed, although, in practice, we rarely experienced such issues. At times, textual explanations were included in outputs. **_Update:_** We have **extended Section 7** to expand on the above. --- **(B) Further experiments** We agree that demonstrating generalisation is crucial. Therefore, we evaluate CALM-DT on three additional datasets (as in [1]): 1) Non-Small Cell Lung Cancer (NSCLC) patients undergoing chemo- and radiotherapy (See Table 2 in response to reviewer dKC6) 2) Di-trophic ecological dynamics (Hare-Lynx population) (See Table 1 below) 3) Tri-trophic ecological dynamics (Algae-Flagellates-Rotifers population) (See Table 2 below) The results consistently demonstrate CALM-DT's superiority across these diverse settings. 
Table 1: Hare-Lynx 5 day simulation |Model|MSE($\downarrow$)|MAE($\downarrow$)| |---|---|---| |HDTwin|$1.11\times10^{4}\pm1.93\times10^4$|$29.6\pm9.21$| |Transformer|$2.52\times10^3\pm796$|$31.7\pm5.44$| |SINDy|$1.05\times10^3\pm0.00$|$26.5\pm0.00$| |DyNODE|$895\pm212$|$22.1\pm2.65$| |RNN|$563\pm39.5$|$19.7\pm0.611$| |CALM-DT|$453\pm54.3$|$15.3\pm0.999$| Table 2: Algae-flagellates-rotifers 5 day simulation |Model|MSE($\downarrow$)|MAE($\downarrow$)| |---|---|---| |RNN|$0.156\pm8.01\times10^{-3}$|$0.354\pm7.44\times10^{-3}$| |SINDy|$0.0265\pm0$|$0.0994\pm0$| |HDTwin|$2.89\times10^{-3}\pm1.48\times10^{-3}$|$0.0316\pm8.58\times10^{-3}$| |DyNODE|$2.57\times10^{-3}\pm1.05\times10^{-3}$|$0.0341\pm6.98\times10^{-3}$| |Transformer|$1.66\times10^{-3}\pm7.54\times10^{-4}$|$0.0283\pm5.70\times10^{-3}$| |CALM-DT|$3.87\times10^{-4}\pm4.65\times10^{-5}$|$0.0101\pm5.55\times10^{-4}$| **_Update:_** We have **extended Section 6.1** to include these 3 datasets. --- **(C) Application-specific metrics** We agree on the importance of application-specific metrics, and include an evaluation of critical event prediction. On NSCLC data (without treatments) we simulate time to patient death (set at tumour diameter of 13cm [2]). The table below shows the error (in days) for simulated death compared to actual death. |Model|Error| |---|---| |RNN|$10.25\pm0$| |Transformer|$6.32\pm1.72$| |HDTwin|$5.89\pm0.193$| |CALM-DT|$5.19\pm0.496$| |SINDY|$4.96\pm0$| |DyNODE|$4.91\pm0.146$| CALM-DT achieves third best. This, combined with the strong performance above, indicates the accuracy of CALM-DT's simulations overall and in capturing critical event timings. **_Update:_** We have **added an experiment on time-to-death simulation** to the appendix. --- **(D) LLM sample selection** Thank you for suggesting this insightful comparison with LLM-based sample selection. We test this on the CF dataset, querying an LLM, with the entire training dataset as context, for the top 5 samples for the current history. 
We see that CALM-DT is significantly better than this in both MSE and MAE. |Sample selection|MSE($\downarrow$)|MAE($\downarrow$)| |---|---|---| |LLM-based selection|65.6$\pm$2.19|$4.88\pm0.116$| |CALM-DT|55.3$\pm$0.811|4.63$\pm$0.045| Also, this is infeasible when training data exceeds the LLM context limit, e.g. for the NSCLC dataset. Our method for encoder finetuning with contrastive learning, determining positive/negative samples based on the simulation accuracy they induce, gives a useful signal to the encoder, resulting in better selections than an LLM. **_Update:_** We have **added this ablation** to Section 6.2. --- **(E) Actions** Consistent with [1], we consider a deterministic policy in testing. That is, at each simulation time point, we apply the same action that was applied to the test sample. **_Update:_** We **clarify action sampling in Section 6**. --- Thank you once again. We hope that we have addressed all your comments, and we greatly appreciate your feedback. --- [1] Holt, S., Liu, T. and van der Schaar, M., 2024. Automatically Learning Hybrid Digital Twins of Dynamical Systems [2] Geng, C., Paganetti, H. and Grassberger, C., 2017. Prediction of treatment response for combined chemo-and radiation therapy for non-small cell lung cancer patients using a bio-mathematical model --- Rebuttal Comment 1.1: Comment: The authors have addressed all of my concerns including conducting additional experiments and ablations. Thank you for the effort, I'll raise my score. --- Reply to Comment 1.1.1: Comment: Thank you Reviewer Y6u8, we are very grateful for your numerous helpful comments and continued engagement! We are glad that we have addressed your concerns, and we believe our manuscript has significantly improved as a result of these changes.
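As an illustration of the finetuning signal described in (D) above — contrastive positives/negatives determined by the simulation accuracy each candidate context sample induces — a rough sketch might look like the following. The function name and data are hypothetical; the actual procedure scores candidates with a probabilistic metric via LLM simulations, whereas here plain scalar errors stand in:

```python
def label_by_induced_error(errors, n_pos=1, n_neg=1):
    """Rank candidate context samples by the simulation error they induce
    when placed in context, then return (positive_indices, negative_indices)
    for a contrastive objective: low-error candidates become positives."""
    order = sorted(range(len(errors)), key=lambda i: errors[i])
    return order[:n_pos], order[-n_neg:]

# Candidate 2 induces the most accurate simulation, candidate 0 the least.
errors = [9.1, 4.7, 2.3, 5.5]
pos, neg = label_by_induced_error(errors, n_pos=1, n_neg=1)
print(pos, neg)  # -> [2] [0]
```

The design choice this mirrors is that the retrieval encoder is supervised by downstream simulation quality rather than by surface similarity alone, which is what allows it to outperform direct LLM-based selection.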
Summary: This paper proposes a way to use a frozen LLM (e.g. GPT-4o) to construct an auto-regressive model of the temporal evolution of a few variables, like a medical patient's height, weight, and lung function measurement, in response to certain interventions (administration of medications). The main insight is that using an LLM instead of a hand-designed quantitative model allows the simulator to adapt to changes in the state or action space without requiring new iterations of manual modeling. They devise a scheme for retrieving similar trajectories from a dataset for use in in-context learning, based on fine-tuning encoders using a contrastive loss that is itself based on the LLM. They show that their method forecasts lung function measurements with less error than some other machine learning based approaches on a cystic fibrosis dataset (CF), and they show an example of how their methodology adapts to introduction to a new CF treatment without requiring retraining. Claims And Evidence: Their methodology does seem more capable in principle of adapting to changes in the state and action space without expert human modeling intervention, and this feature makes this research direction promising. But I do not find their demonstration of the method's in-context adaptation to a new Ivacaftor action (Section 6.3) very compelling. In particular, it is concerning that the mean absolute error increased when trajectories that include Ivacaftor were introduced (Table 4, MAE, K update vs K + D update). The amount of variability in Table 4 and Table 5 also makes it difficult to evaluate the (i) the capability of the methodology to adapt to a new action (Table 4) and (ii) the capability of the model to learn from additional data (Table 5). 
Also, they claim (Table 1) that their method "allows uncertainty in simulation" (presumably implemented by running simulations multiple times), but there is no analysis of the variability in model outputs and so there's no reason to believe that this variability is meaningful (especially since the simulation kernel LLM was not trained using a prediction loss that would in theory relate the distribution of outputs to the epistemic uncertainty in the predictions). Methods And Evaluation Criteria: The dataset they use does seem suitable for demonstrating their methodology. However, given the lack of a mechanistic principle for this methodology (i.e. the model does not seem to be trained to minimize prediction error), more evaluation is needed to understand whether the predictions are meaningful. I would have liked to (i) see simpler baselines in Table 2 (constant prediction, K-Nearest-Neighbors), and (ii) randomly selected plots of concrete predictions of this model (with multiple simulations to convey something about the variability of the outputs and any LLM-related artifacts) compared to the ground truth values and baseline models. In order to better convey whether this evaluation is meaningful, the paper could compare the prediction error against state-of-the-art longitudinal models in the evaluation domain (not only against generic machine learning models or models in the digital twins literature). For example, I would like to know how the prediction error compares to that of this recent paper: "Predicting lung function decline in cystic fibrosis: the impact of initiating ivacaftor therapy", Respiratory Research 2024. I understand that a systematic comparison to this model might not be possible due to data limitations, but do the predictions have comparable levels of error? How might the types of errors produced by this method differ from those produced by a hand-designed model? 
Yes, the paper's proposed approach doesn't require human modeling, but the paper should investigate the tradeoffs of the proposed methodology with alternative approaches. Theoretical Claims: No. Experimental Designs Or Analyses: No. Supplementary Material: Yes. Relation To Broader Scientific Literature: As mentioned above, the paper does not adequately evaluate the proposed method relative to the broader scientific literature. For example, it is not clear how this methodology compares to state-of-the-art models from the biostatistics literature, like "Predicting lung function decline in cystic fibrosis: the impact of initiating ivacaftor therapy", Respiratory Research 2024. Clearly, the proposed methodology requires less manual effort, but at what cost? Essential References Not Discussed: See "Relation to Broader Scientific Literature" above. Other Strengths And Weaknesses: I think that the capability of LLMs to adapt to changes in state and action space parameterizations is an important idea, and I think this paper identified a suitable setting to investigate these ideas. The paper is also clearly written. I have concerns about the significance of the paper, due to a lack of connection to state-of-the-art models in the evaluation domain (which is biostatistics / medical informatics). Other Comments Or Suggestions: No. Questions For Authors: 1. How does your model's prediction accuracy compare to that of expert models like the Respiratory Research paper I cited above? If a comparison is not possible, are there any other works outside of the digital twins literature that can help ground these results and help readers determine the potential impact of this approach to this domain? 2. Is the model's uncertainty (probability distribution on forecasts) meaningful or interpretable in any way? For example, are there multiple modes of disease progression that it captures? 3. What is the tokenization that the LLM emits for quantitative predictions? 
Does each digit typically get one token? Do the forecasts have any artifacts related to tokenization or use of language modality for prediction? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments and suggestions. We address the following: - (A) Ivacaftor experiments - (B) Uncertainty - (C) Simple baselines - (D) Domain-specific model - (E) Tokenization --- **(A) Ivacaftor experiments** We agree that more robust evidence is necessary for demonstrating our method's adaptability. Initially, limited sample size and few iterations contributed to high variance in results. To rectify this, we've increased test set size to 50 patients and performed 20 simulation iterations, resulting in clearer outcomes. Table 1: Adapting to new action |Method|MSE($\downarrow$)|MAE($\downarrow$)| |---|---|---| |No update|59.7$\pm$0.472|4.95$\pm$0.0166| |$K$ update|54.4$\pm$ 0.472|4.75$\pm$0.0242| |$K$ + $D$ update| 50.4 $\pm$ 0.477| 4.28 $\pm$ 0.0251| Table 2: Learning from new data |Context future|MSE($\downarrow$)|MAE($\downarrow$)| |---|---|---| |One year|50.4$\pm$0.477|4.28$\pm$0.0251| |Two years|50.0$\pm$1.00|4.29$\pm$0.0468| |Three years|47.8$\pm$1.01|4.19$\pm$0.0523| We now have more robust and interpretable results. Table 1 demonstrates that updating K and incorporating Ivacaftor trajectories into D both significantly improve simulation accuracy, as evidenced by substantial reductions in MSE and MAE, illustrating CALM-DT's capability to adapt effectively to new actions. Similarly, in Table 2, we see that CALM-DT can learn from new data, as there is a clear improvement in simulation from using 1 to 3 years of context data. **_Update:_** We have **improved our experimental set-ups in Sections 6.3 and 6.4**, giving **more clear takeaways**. --- **(B) Uncertainty** We acknowledge the need for deeper analysis of simulation uncertainty. 
To address this, we provide illustrative plots demonstrating variability across simulations of cancer tumours (please see our response to reviewer dKC6 for context) under certain settings (zero-, one-, and five-shot) in our anonymous repository [here](https://anonymous.4open.science/r/CALM-DT-Rebuttals-65B2). The dark line is the true trajectory, and the grey lines are individual simulations. With more context, simulation variance decreases, thus reflecting meaningful epistemic uncertainty. Quantitatively, average simulation variance across the NSCLC dataset decreases from 2.79 (zero-shot) to 1.58 (five-shot). Also, we note that in our encoder finetuning, we use the Continuous Ranked Probability Score (CRPS) to guide the selection of positive and negative samples for contrastive learning. Since CRPS measures how well the simulated distribution aligns with the true outcome, this training procedure encourages selection of related samples that calibrate the uncertainty of the LLM well. **_Update:_** We **include illustrative plots of CALM-DT simulations**, showing how context reduces simulation uncertainty. --- **(C) Simple baselines** We appreciate your suggestion to benchmark our method against simpler baselines. We now include constant prediction and nearest-neighbour baselines for the CF data: |Model|MSE($\downarrow$)|MAE($\downarrow$)| |---|---|---| |Constant prediction|86.8|5.84| |1-NN| 107|6.96| |K-NN (K=12)|65.1|5.47| CALM-DT remains the best-performing simulation method by a significant margin in both MSE and MAE. **_Update:_** We have **expanded our baseline methods** in Section 6.1 to include constant prediction, 1-NN, and K-NN. --- **(D) Domain-specific model** We agree that it is very useful to give context of SOTA domain-specific methods. The provided paper reports an RMSE for _FEV1pp_ (which we also simulate) of 6.78 over 6 months for patients on Ivacaftor. 
From our Ivacaftor experiment, using our most performant setting, CALM-DT's _FEV1pp_ RMSE for 1 year is 8.11, and for 3 years is 8.58 (note we can only give yearly errors, as this is the granularity of the UK CF registry data). Using this domain-specific model as an upper bound on performance, we see that CALM-DT performs relatively well, especially given its low expertise requirements and seamless adaptability to changes in environment. We believe this added comparison strengthens the significance of our method. **_Update:_** We have **added a discussion in Section 6.1** comparing our results to the provided reference. --- **(E) Tokenization** We agree on the importance of considering tokenization impacts. Since we propose a simulation method that is compatible with any LLM, tokenization is not bespoke for CALM-DT. Although tokenization in GPT-4o (the model we use in experiments) is undisclosed, typical tokenization approaches (e.g., BPE) inherently limit numerical precision. Continuous variables are discretised into tokens, and so we suggest caution in scenarios where high precision is necessary. **_Update:_** We **extend our discussion in Section 7** to elaborate on LLM limitations. --- Thank you once again. We hope that we have addressed all your comments, and we greatly appreciate your feedback.
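For reference, the empirical (ensemble) CRPS used to guide contrastive sample selection in part (B) can be estimated from a set of simulated values with the standard estimator, CRPS ≈ mean|xᵢ − y| − ½·mean|xᵢ − xⱼ|. This is a generic sketch of that estimator, not the paper's code:

```python
def ensemble_crps(samples, obs):
    """Empirical CRPS of an ensemble `samples` against observation `obs`:
    mean absolute error of the members minus half the mean pairwise spread.
    Lower is better; 0 for a point ensemble that exactly hits the truth."""
    m = len(samples)
    term1 = sum(abs(x - obs) for x in samples) / m
    term2 = sum(abs(a - b) for a in samples for b in samples) / (2 * m * m)
    return term1 - term2

print(ensemble_crps([3.0, 3.0, 3.0], 3.0))          # -> 0.0
print(round(ensemble_crps([2.0, 4.0], 3.0), 2))     # -> 0.5
```

Because the score rewards both accuracy and calibration, selecting context samples that lower it encourages simulated distributions whose spread matches the true uncertainty.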
HGOT: Self-supervised Heterogeneous Graph Neural Network with Optimal Transport
Accept (poster)
Summary: The paper proposes a self-supervised heterogeneous graph neural network (HGNN) coined ``HGOT''. It first incorporates optimal transport (OT) into heterogeneous graphs to better facilitate the learning of a more semantic accurate similarity measure between graph instances and structure. The method introduces three components: node feature transformation, which projects different types of node features into one latent space; aggregated view generation, which constructs multiple meta-path-based views and an aggregated view on the entire graph; and multi-view OT alignment, which calculates the optimal transport plan and minimizes the difference between views and costs between feature and structural representations. Claims And Evidence: Most parts of the paper are clearly written and easy to read, despite some grammatical errors. The authors have clearly demonstrated the semantic advantage of local-global alignment between the meta-path view and the aggregated view in their proposed method. However, I have some concerns about the clarity and persuasiveness of the methodology section. (1) The authors do not explicitly clarify the significance of using OT for graph self-supervised learning. What are the advantages of calculating the distribution distance between views compared to other graph self-supervised learning methods? For instance, why is the OT-based approach superior to reconstruction-based methods (e.g., HGATE [1], HGMAE [2], RMR [3]) when both of them do not require augmentation methods and negative sampling? (2) The explanation of the OT distance is unclear. Authors should provide an intuitive explanation for the OT distances designed for feature and edge information, and it would be better with theoretical or empirical comparisons to other metrics. Additionally, a more convincing analysis for choosing cosine similarity and absolute adjacency matrix difference as $\mathcal{C}_X$ and $\mathcal{C}_A$ is expected. 
(3) The ablation study results cannot validate the effectiveness of the aggregated view, as it only provides relatively little contribution to the overall model. [1] Wang, W., Suo, X., Wei, X., Wang, B., Wang, H., Dai, H. N., and Zhang, X. 2021. HGATE: heterogeneous graph attention auto-encoders. TKDE. [2] Tian, Y., Dong, K., Zhang, C., Zhang, C., and Chawla, N. V. 2023. Heterogeneous graph masked autoencoders. In AAAI. [3] Duan, H., Xie, C., and Li, L. 2024. Reserving-masking-reconstruction model for self-supervised heterogeneous graph representation. In KDD. Methods And Evaluation Criteria: On the methodology side, I believe that the proposed method is an application of existing self-supervised learning techniques to heterogeneous graphs, rather than presenting innovative insights or designs. (1) Node feature transformation and attention-based meta-path representation aggregation are very common practices in HGNNs, as seen in HAN [1] and HeCo [2]. (2) Self-supervised learning using Wasserstein distance across different contrastive views has already been proposed in COLES [3]. Although the authors claim that they are the first to apply OT to heterogeneous graphs, they do not explain or analyze how OT facilitates the learning of heterogeneous representations. Instead, heterogeneous information is only considered in feature transformation and the construction of contrastive views, while the OT-based loss itself seems unrelated to heterogeneous graph learning. For example, OT could have been used to compute the distance between different node/edge type distributions to facilitate the learning of cross-type heterogeneous representations. However, this paper only focuses on learning the distribution distances of node features and adjacency matrices under the homogeneous transformation, which raises doubts about the necessity of the OT strategy for HGNNs. Therefore, I find the innovation in this paper to be limited. 
On the evaluation side, the authors selected four heterogeneous benchmark datasets and equipped each downstream task with multiple evaluation metrics. The overall evaluation setup is sound to me except for the absence of dataset split information. [1] Wang, X., Ji, H., Shi, C., Wang, B., Cui, P., Yu, P., and Ye, Y. 2019. Heterogeneous graph attention network. In WWW. [2] Wang, X., Liu, N., Han, H., and Shi, C. 2021. Self-supervised heterogeneous graph neural network with co-contrastive learning. In KDD. [3] Zhu, H., Sun, K., and Koniusz, P. 2021. Contrastive Laplacian Eigenmaps. In NeurIPS. Theoretical Claims: The theoretical analysis is insufficient. I suggest that the authors include some theoretical analyses of the OT-based learning objective, including but not limited to the connection with other self-supervised objectives such as the InfoMax principle, generalization bounds, and cross-domain transferability. In particular, the contributions of OT to heterogeneous representation learning should be analyzed to support the claims and establish a solid theoretical foundation for the proposed method. Experimental Designs Or Analyses: I appreciate that the authors conducted experiments on both node classification and node clustering to demonstrate the generalization performance of HGOT across different scales of graph structure. Although HGOT achieves significant performance improvements on all datasets and multiple evaluation metrics, I believe that its performance does not reach the state-of-the-art claimed by the authors, as only one baseline method published after 2020 (HGBER) is included. More recent works should be incorporated. Besides, the paper fails to provide an in-depth empirical analysis of HGOT's advantages. Some additional experiments need to be included: (1) More cutting-edge reconstruction-based and attention-based methods should be added as baselines, such as those mentioned in Claims and Evidence [1], [2], [3]. 
Considering the similarity between HGOT and COLES [4], the authors should also elaborate on the difference and advantages of the proposed OT loss over COLES by comparative experiments. (2) Since OT plays a positive role in domain adaptation, I recommend that the authors design transfer learning-based experiments to evaluate HGOT's transferability across different data domains or to out-of-distribution samples. (3) Considering the complexity of calculating distribution distances for large-scale networks, the efficiency and scalability of HGOT are not guaranteed, especially since it utilizes multiple meta-path views during training. Although the authors provide a rough efficiency analysis of HGOT in Appendix D, I suggest they further present a complexity comparison and efficiency evaluation of HGOT against other self-supervised models. This would provide a clearer validation of HGOT's efficiency without augmentation methods and negative samples. [1] Wang, W., Suo, X., Wei, X., Wang, B., Wang, H., Dai, H. N., and Zhang, X. 2021. HGATE: heterogeneous graph attention auto-encoders. TKDE. [2] Tian, Y., Dong, K., Zhang, C., Zhang, C., and Chawla, N. V. 2023. Heterogeneous graph masked autoencoders. In AAAI. [3] Duan, H., Xie, C., and Li, L. 2024. Reserving-masking-reconstruction model for self-supervised heterogeneous graph representation. In KDD. [4] Zhu, H., Sun, K., and Koniusz, P. 2021. Contrastive Laplacian Eigenmaps. In NeurIPS. 
Relation To Broader Scientific Literature: Self-supervised learning on heterogeneous graphs is one of the key approaches for exploring heterogeneous data structure. This field can provide meaningful insights for building graph foundation models across different graph types in the future. Essential References Not Discussed: See Claims And Evidence & Experimental Designs Or Analyses. Other Strengths And Weaknesses: See other parts of the review. Other Comments Or Suggestions: (1) The references format is inconsistent, making the citations difficult to retrieve. (2) The mixed use of certain symbols, such as $\sigma$ and $\mathcal{E}$, can lead to confusion. Questions For Authors: (1) The authors should provide clear definitions of Wasserstein distance, Gromov-Wasserstein (GW) distance, and fused GW distance in the preliminary section, along with appropriate citations, to help readers unfamiliar with OT. Are eq(8)--(13) the original definitions of these distances? What does the symbol $\otimes$ represent? (2) Caption of Figure 1 "We first transform all nodes of the original heterogeneous graph into node features" is ambiguous. Does HGOT utilize special encoding methods to generate node features $X$? (3) HeCo conducted experiments under different dataset splits. What is the dataset split in this work? (4) Will the code implementation of HGOT be open-sourced? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: R(1): Different from other self-supervised learning methods, optimal transport (OT) can capture the matching information from the original graph space to the representation space, obtaining node representations that exhibit consistency with the optimal transport plans. Second, reconstruction-based methods usually involve mask-reconstruction mechanisms, which inevitably corrupt part of the input information. In contrast, our method directly extracts useful supervisory signals about matching information from the characteristics of the heterogeneous graph itself. R(2): The Wasserstein distance is a metric used to quantify the difference between nodes in the graph, measuring the minimal cost required to transform one distribution into another. The Gromov-Wasserstein distance is a metric used to quantify the difference between edges in the graph. The fused Gromov-Wasserstein distance is the fusion of the above two distances through a trade-off parameter to achieve optimal transport over the entire graph. The reasons for choosing CX and CA as the cost matrices are as follows: Cosine similarity (CX) is widely used in related work. The absolute adjacency matrix difference (CA) is convenient because it reduces the computational complexity of the four-dimensional tensor calculation. R(3): The aggregated view, as a center view containing all semantic information, enables each branch view to capture the transport relationship between semantic information through optimal transport. By transporting each branch view to the center view, the transport relationship from each individual semantic to the comprehensive semantics is obtained, which can better promote heterogeneous representation learning. In summary, although the contribution of this part to the overall model is not very large as shown in the ablation study results, the performance of the algorithm is still improved, and the aggregated view plays an important role in the optimal transport process. 
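To illustrate the transport plans discussed in R(1)–R(2), here is a minimal entropic (Sinkhorn) iteration computing an optimal transport plan between two small node sets under a given cost matrix. This is a generic sketch with uniform marginals; HGOT's actual solver, cost matrices (CX, CA), and the Gromov-Wasserstein term are not reproduced here:

```python
import math

def sinkhorn(cost, reg=0.1, iters=200):
    """Entropic OT between uniform marginals: returns a transport plan
    whose rows and columns approximately sum to 1/n and 1/m."""
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u, v = [1.0] * n, [1.0] * m
    a, b = 1.0 / n, 1.0 / m
    for _ in range(iters):
        u = [a / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]

# Low cost on the diagonal -> mass concentrates on matched node pairs.
plan = sinkhorn([[0.0, 1.0], [1.0, 0.0]])
print([[round(p, 3) for p in row] for row in plan])  # -> [[0.5, 0.0], [0.0, 0.5]]
```

The resulting plan is the "matching information" the rebuttal refers to: aligning each meta-path view's plan toward the aggregated view gives the encoder a supervisory signal without augmentations or negative sampling.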
R(1): By using node feature transformation, the feature dimensions of all nodes are unified, which facilitates subsequent optimal transport calculations. Different from HeCo and HAN, which only use the attention mechanism to aggregate node-level representations, we aggregate nodes and edges separately to obtain a complete aggregated graph structure. Then, the relationship between the aggregated view and each meta-path view is obtained through optimal transport to promote heterogeneous representation learning. R(2): A heterogeneous graph contains diverse semantic information. In our work, meta-paths are used to capture this semantic information. By using OT, we can capture the optimal transport plan from each meta-path view to the aggregated view, and calibrate the encoder by aligning each plan to obtain better node representations. Therefore, OT promotes heterogeneous representation learning and captures rich semantic information in heterogeneous graphs. This will be clarified in the final version. Theoretical Claims: We will discuss the connection between OT and other self-supervised learning objectives, such as the InfoMax principle, in the final version. Experimental Designs Or Analyses: We will make revisions based on your comments. It is worth mentioning that, different from COLES, HGOT uses OT theory to capture rich information in heterogeneous graphs to obtain better node embeddings. Our method does not require complex data augmentation or the selection of positive and negative samples. While COLES is based on the Laplacian eigenmap method, it still requires the selection of positive and negative samples. Supplementary Material & Other Comments Or Suggestions: We will make revisions based on your comments.
Questions For Authors: The responses to the four questions you raised are as follows: (1) We will explain the three OT formulas in more detail in the final version, give the relevant definitions in the introduction, and add several references to this section. Formulas (8)-(13) are formally the original definitions of these distances. We modified them according to some related works (only some symbol definitions in the formulas have been modified, so the complete physical meaning remains unchanged). The symbol ⊗ denotes the Hadamard product (element-wise multiplication of matrices), which will be stated in the final version. (2) This is a mistake in our writing. We meant to say "we first project the node features in the heterogeneous graph into the same feature space". The node feature X is defined as the original feature of the node in the heterogeneous graph. We will make the corresponding changes in the final version. (3) In our experiments, we divide each dataset into 8 (training set) : 1 (validation set) : 1 (test set) and report the results. Same as HeCo, we also conducted experiments with other ratios, but they were not presented in the paper. We will add them to the final version. (4) We will open-source our code in the future. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I have read the authors’ response to all reviewers, and some of my concerns have been addressed. Overall, I am willing to increase my rating to Weak Accept if the authors address the following: (1) the authors fail to provide any requested empirical results in their response; (2) some claims remain unpersuasive to me. For example, I know that OT is used to align meta-path views with the aggregated view in heterogeneous graphs.
But the question is whether one can replace the OT loss with any other self-supervised loss (e.g., InfoNCE, VICReg, or KL divergence) without losing any **heterogeneous** information, since such information is only related to the construction of meta-path views in the proposed method. I wonder "why is OT necessary for heterogeneous graphs". --- Reply to Comment 1.1.1: Comment: **Q: The question is whether one can replace the OT loss with any other self-supervised loss (e.g., InfoNCE, VICReg, or KL divergence) without losing any heterogeneous information, since such information is only related to the construction of meta-path views in the proposed method. I wonder "why is OT necessary for heterogeneous graphs".** R: Many thanks. Optimal transport (OT) is an optimization theory that studies the most efficient way to redistribute mass (or resources) from one probability distribution to another while minimizing a specified cost function. In our study, OT is adopted to calculate the transport plan between the meta-path view and the aggregated view. It has the following advantages: (1) it is no longer necessary to perform data augmentation or provide positive and negative samples for heterogeneous graph self-supervised learning; (2) it utilizes the optimal transport plan between the local semantics (branch views) and the global semantics (central view) of heterogeneous graphs to align the matching relationship between the graph space and the representation space, thereby obtaining higher-quality node representations. In contrast, InfoNCE, VICReg, and KL divergence are all used to capture the similarity between two distributions in the data. InfoNCE relies on data augmentation and the selection of positive and negative sample pairs. VICReg cannot be used to extract the contrastive information from the meta-path views and the aggregated view in the heterogeneous graph.
Although KL divergence measures the differences between two distributions, it ignores the connection between the local semantics (branch views) and the global semantics (central view). Therefore, OT is a better self-supervised learning method for heterogeneous graphs.

**Experimental supplement:**

1. Performance comparison on node classification and node clustering tasks:

Classification:

| Dataset (Metric) | DBLP (Micro-F1) | DBLP (Macro-F1) | ACM (Micro-F1) | ACM (Macro-F1) | IMDB (Micro-F1) | IMDB (Macro-F1) | Yelp (Micro-F1) | Yelp (Macro-F1) |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| HGCML [1] | 91.11 ± 0.5 | 90.47 ± 0.3 | 87.90 ± 0.3 | 88.46 ± 0.4 | 57.19 ± 0.7 | 51.03 ± 0.4 | 73.35 ± 0.8 | 52.90 ± 0.6 |
| HGMAE [2] | 91.89 ± 0.3 | 91.60 ± 0.5 | 89.15 ± 0.4 | 89.29 ± 0.4 | 60.13 ± 0.4 | 60.09 ± 0.6 | 72.57 ± 0.6 | 56.33 ± 0.3 |
| **HGOT** | **95.66 ± 0.4** | **95.14 ± 0.3** | **94.49 ± 1.0** | **94.60 ± 0.8** | **62.71 ± 1.3** | **62.34 ± 0.6** | **77.58 ± 0.8** | **65.12 ± 0.4** |

Clustering:

| Dataset (Metric) | DBLP (ACC) | DBLP (NMI) | DBLP (ARI) | ACM (ACC) | ACM (NMI) | ACM (ARI) | IMDB (ACC) | IMDB (NMI) | IMDB (ARI) | Yelp (ACC) | Yelp (NMI) | Yelp (ARI) |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| HGCML | 90.36 | 73.28 | 79.61 | 89.21 | 65.13 | 71.05 | 50.66 | 9.34 | 9.15 | 65.72 | 37.71 | 42.49 |
| HGMAE | 91.47 | 76.92 | 82.34 | **89.83** | **66.68** | 71.51 | 53.90 | 11.36 | 12.03 | 62.10 | 39.04 | 42.88 |
| **HGOT** | **93.41** | **78.05** | **84.00** | 89.77 | 66.12 | **71.94** | **60.80** | **16.28** | **18.04** | **66.23** | **39.15** | **43.07** |

We add two self-supervised learning baselines: a contrastive method, HGCML, and a generative method, HGMAE. The node classification results demonstrate that HGOT achieves the best performance over all baselines. In addition, HGOT achieves the best performance on most datasets in the clustering task.
[1] Wang Z, Li Q, Yu D, et al. Heterogeneous graph contrastive multi-view learning. Proceedings of the 2023 SIAM International Conference on Data Mining (SDM). Society for Industrial and Applied Mathematics, 2023: 136-144.
[2] Tian Y, Dong K, Zhang C, et al. Heterogeneous graph masked autoencoders. Proceedings of the AAAI Conference on Artificial Intelligence, 2023, 37(8): 9997-10005.

2. Comparison of the average training time per epoch of different self-supervised methods on the DBLP dataset:

| Method | Training time per epoch (s) |
| ---- | ---- |
| HeCo | 0.8193 |
| MEOW [3] | 1.9802 |
| HGCL [4] | 1.6933 |
| HGCML | 2.5048 |
| HGMAE | 2.1471 |
| **HGOT** | **0.3495** |

From the results, we can see that HGOT is more efficient than the contrastive learning models that require positive and negative sample pairs and the generative learning models with a mask mechanism. For a more detailed comparative analysis of the complexity of self-supervised learning methods, please see Sections C and D of the Appendix.

[3] Yu J, Ge Q, Li X, et al. Heterogeneous graph contrastive learning with meta-path contexts and adaptively weighted negative samples. IEEE Transactions on Knowledge and Data Engineering, 2024.
[4] Chen M, Huang C, Xia L, et al. Heterogeneous graph contrastive learning for recommendation. Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, 2023: 544-552.
Summary: This paper presents HGOT, a self-supervised heterogeneous graph neural network that harnesses optimal transport theory to establish an optimal transport plan between the meta-path and aggregated views. By compelling the model to learn node representations that faithfully preserve the intrinsic matching relationships between these two views, HGOT significantly enhances the quality of representation learning. Comprehensive experiments conducted on multiple benchmark datasets validate the superiority of the proposed approach, consistently achieving state-of-the-art performance. Claims And Evidence: The paper claims the integration of optimal transport with heterogeneous graph neural networks (HGNNs) is novel. Specifically, the alignment between graph-space and representation-space transport plans provides a fresh perspective for self-supervised learning, circumventing the limitations of traditional contrastive learning. Experimental results validate the effectiveness of the proposed method, showing state-of-the-art performance on benchmark datasets. Methods And Evaluation Criteria: This paper formulates the alignment problem using Gromov-Wasserstein optimal transport and employs a self-supervised learning framework to refine node representations. The proposed method is technically sound. The evaluation metrics used to benchmark HGOT’s performance include accuracy and clustering-based evaluation on multiple real-world heterogeneous graph datasets. The evaluation criteria are widely used and appropriate. Theoretical Claims: This work provides mathematical definitions of heterogeneous graphs and optimal transport. The proposed method itself has no theoretical claims. Experimental Designs Or Analyses: The proposed method is evaluated on four real-world datasets under different tasks, including node classification, node clustering, and visualizations. The experimental designs and performance comparisons are sound. 
Supplementary Material: The authors provide experimental details, additional parameter and complexity analyses, and discussions of graph contrastive learning in the supplementary material. Relation To Broader Scientific Literature: This work may be helpful for researchers interested in molecule structure learning. Essential References Not Discussed: NA Other Strengths And Weaknesses: One of the key strengths of HGOT is its novel integration of OT with HGNNs, providing an alternative to contrastive learning. However, the complexity of Gromov-Wasserstein optimization raises scalability concerns for large graphs. The paper lacks a discussion on runtime and memory overhead, which are crucial for practical deployment. Furthermore, interpretability could be improved with case studies or attention weight analysis. Other Comments Or Suggestions: In the caption of Figure 3, explanations of the different ablation settings could be provided for better readability. Questions For Authors: In the ablation study (Section 5.5), the parameter sensitivity analysis indicates that node-only alignment outperforms the combined node-edge alignment. Does this suggest that incorporating edge information may introduce noise or redundancy in certain scenarios? Furthermore, would this effect vary depending on the dataset characteristics? More intuitive explanations are needed to understand the effectiveness of Equations 13-16. Could the authors provide further insights or illustrative examples to clarify the reasoning behind their derivations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Other Strengths And Weaknesses: Q: However, the complexity of Gromov-Wasserstein optimization raises scalability concerns for large graphs. The paper lacks a discussion on runtime and memory overhead, which are crucial for practical deployment. Furthermore, interpretability could be improved with case studies or attention weight analysis. R: Thanks for your comments. We discuss the complexity issue in Appendix D, and we will move this section to the main text in the final revision of the paper. In addition, we will provide a discussion of running time and memory overhead, and we may present the results in the form of charts in the experiments. Other Comments Or Suggestions: Q: In the caption of Figure 3, explanations of the different ablation settings could be provided for better readability. R: Many thanks. We will provide the detailed settings of the ablation experiments in the caption of Figure 3 in the final version. Questions For Authors: Q1: In the ablation study (Section 5.5), the parameter sensitivity analysis indicates that node-only alignment outperforms the combined node-edge alignment. Does this suggest that incorporating edge information may introduce noise or redundancy in certain scenarios? Furthermore, would this effect vary depending on the dataset characteristics? R1: Thanks for your comments. It can be inferred from the experimental results that in some cases (when edge information is redundant and complicated), we can appropriately abandon the connections of edges and only consider node features. Too much edge information may introduce noise and redundancy; the experimental results have verified this to a certain extent. In addition, we only show results on two datasets, but the same results are obtained on the other two datasets. This will be clarified in the final version. Q2: More intuitive explanations are expected to intuitively understand the effectiveness of Equations 13-16.
Could the authors provide further insights or illustrative examples to clarify the reasoning behind their derivations? R2: Thank you. Formula 13 is the fused Gromov-Wasserstein distance (considering both nodes and edges) commonly used for OT. Formula 14 is a shorthand for the right side of the equal sign in Formula 13. Formulas 15 and 16 are used to calculate the optimal transport plan Π from the corresponding OT distances. We will provide a more detailed explanation of the formula in the final version.
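To make the roles of Formulas 13-16 more concrete, here is a minimal numpy sketch (the function name and structure are illustrative assumptions, not the paper's implementation) of how the fused Gromov-Wasserstein objective combines a node-level cost with an edge-level cost for a fixed transport plan Π:

```python
import numpy as np

def fgw_cost(C_X, C_A1, C_A2, Pi, sigma=0.5):
    """Fused Gromov-Wasserstein objective for a fixed transport plan Pi.

    C_X  : (n, m) node-level cost (e.g. 1 - cosine similarity of features)
    C_A1 : (n, n) intra-graph structure cost of the source view
    C_A2 : (m, m) intra-graph structure cost of the target view
    sigma: trade-off between the node (Wasserstein) term and the edge
           (Gromov-Wasserstein) term.
    """
    node_term = np.sum(C_X * Pi)  # <C_X, Pi>
    # |C_A1[i,k] - C_A2[j,l]| * Pi[i,j] * Pi[k,l], summed over i, j, k, l
    diff = np.abs(C_A1[:, None, :, None] - C_A2[None, :, None, :])
    edge_term = np.einsum('ijkl,ij,kl->', diff, Pi, Pi)
    return sigma * node_term + (1.0 - sigma) * edge_term

# Tiny example: uniform transport plan between two 3-node views.
n = m = 3
Pi = np.full((n, m), 1.0 / (n * m))
rng = np.random.default_rng(0)
C_X = rng.random((n, m))
C_A1 = rng.random((n, n))
C_A2 = rng.random((m, m))
cost = fgw_cost(C_X, C_A1, C_A2, Pi, sigma=1.0)  # sigma=1 -> pure node cost
```

Solving for the optimal Π (as in Formulas 15 and 16) would then minimize this objective over the set of valid transport plans, typically via entropic regularization and Sinkhorn-style iterations.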
Summary: This paper proposes a novel self-supervised heterogeneous graph neural network (HGOT), which aims to address the limitations of existing contrastive learning methods on heterogeneous graphs by incorporating optimal transport. HGOT avoids data augmentation and the construction of positive and negative sample pairs, and proposes a new matching mechanism that performs optimal transport matching between local (branch view) and global (center view) semantics to improve the quality of node representations. Extensive experiments on four public datasets have validated the effectiveness of HGOT. Claims And Evidence: Yes. S1: Applying optimal transport to self-supervised learning on heterogeneous graphs is a unique contribution. This method helps bypass the challenges associated with graph augmentation and the selection of positive and negative samples in traditional contrastive learning. S2: HGOT optimizes the self-supervised learning strategy of heterogeneous graph neural networks from the perspective of information matching, avoiding the dependence of contrastive learning on data augmentation and positive/negative sample pairs. Methods And Evaluation Criteria: Yes. S3: The paper has complete experiments and visually demonstrates the effectiveness of HGOT. S4: This work integrates OT with heterogeneous graph learning, offering a principled alternative to contrastive SSL. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. S5: Achieves SOTA results on node classification (6%+ accuracy improvement) and clustering across four datasets, demonstrating robustness. W2: The paper does not provide sufficient explanation or analysis for the selection of certain parameters (such as the σ and ρ parameters in optimal transport), particularly lacking a comprehensive discussion on the impact of different parameter values on model performance.
As is well known, the training cost of deep learning is very high, and it is not feasible to manually adjust these hyperparameters through repeated experimentation. The paper needs to explain the specific values of these parameters (such as σ and ρ) and provide guidance for their reasonable selection in practical applications. W3: Given the significant improvement in some metrics in this paper, it is recommended to provide significance analysis results, such as a t-test, in Table 3. W5: Computational complexity: solving fused Gromov-Wasserstein distances involves cubic time complexity in the node count, limiting scalability to large graphs. W6: Meta-path dependency: performance may hinge on pre-defined meta-paths, yet the paper does not explore automated meta-path selection or robustness to suboptimal choices. Supplementary Material: None. Relation To Broader Scientific Literature: S6: Eliminates cumbersome graph augmentation and sample selection, simplifying self-supervised training pipelines. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: Please refer to the above comments. Other Comments Or Suggestions: W1: The abstract is overly verbose. For example, the implementation details of the method can be omitted. The abstract should clearly and concisely convey what the paper has done and what problem it addresses, rather than focusing too much on the specific details of the method. The authors spend almost half of the abstract discussing the method's implementation, which makes the abstract appear redundant and detracts from its clarity and focus. W4: The symbol definitions in the formulas are quite complex.
The authors can add specific definitions for each symbol in the appendix and check whether there are any inconsistent definitions (such as σ in Eq. 13 and Eq. 2). W7: About parameter sensitivity: the experimental results show that the parameter σ (node and edge weights) has a significant impact on the results, and the best effect is achieved when the model relies only on node attributes, which may indicate a weakness in processing edge information. W8: Incomplete experiments: the proposed method is an algorithm for heterogeneous graphs, but only experiments on node classification and node clustering are conducted, without discussion of other graph problems such as link prediction. Questions For Authors: Please refer to the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Experimental Designs Or Analyses: Q1: The paper does not provide sufficient explanation or analysis for the selection of certain parameters (such as the σ and ρ parameters in optimal transport), particularly lacking a comprehensive discussion on the impact of different parameter values on model performance. As is well known, the training cost of deep learning is very high, and it is not feasible to manually adjust these hyperparameters through repeated experimentation. The paper needs to explain the specific values of these parameters (such as σ and ρ) and provide guidance for their reasonable selection in practical applications. R1: Thank you. In the parameter experiment in Section 5.6, we can see that the model performs better when the parameter ρ is larger, indicating the importance of the implicit structural loss. Similarly, when the parameter σ is larger, the model performs better, indicating that the transport information between nodes is more important in the fused Gromov-Wasserstein distance formula. We will explain the selection of these hyperparameters and discuss the impact of different parameter values on model performance in the final version. Q2: Given the significant improvement in some metrics in this paper, it is recommended to provide significance analysis results, such as a t-test, in Table 3. R2: Thanks for your comments. We will add variance analysis to the clustering results in Table 3 to demonstrate the stability of our method in the clustering experiments. Q3: Computational complexity: solving fused Gromov-Wasserstein distances involves cubic time complexity in the node count, limiting scalability to large graphs. R3: Thanks. Although the fused Gromov-Wasserstein distance requires more calculations, it does not take up too much memory or running time. In Appendix D, we give the complexity analysis. We employ the OT method instead of traditional contrastive learning to significantly reduce complexity.
Consequently, compared to other self-supervised learning methods, our approach maintains a slight edge in terms of complexity. Q4: Meta-path dependency: performance may hinge on pre-defined meta-paths, yet the paper does not explore automated meta-path selection or robustness to suboptimal choices. R4: Thank you. Inspired by your suggestion, we will discuss how to automatically select meta-paths or directly decompose the heterogeneous graph into several subgraphs to remove the dependence on meta-paths. Other Comments Or Suggestions: Q1: The abstract is overly verbose. For example, the implementation details of the method can be omitted. The abstract should clearly and concisely convey what the paper has done and what problem it addresses, rather than focusing too much on the specific details of the method. The authors spend almost half of the abstract discussing the method's implementation, which makes the abstract appear redundant and detracts from its clarity and focus. R1: Many thanks. We will revise the abstract based on your valuable suggestions, deleting unnecessary parts to make it more concise. Q2: The symbol definitions in the formulas are quite complex. The authors can add specific definitions for each symbol in the appendix and check whether there are any inconsistent definitions (such as σ in Eq. 13 and Eq. 2). R2: Many thanks. Regarding the σ symbol in Formulas 2 and 13 that you mentioned, σ in Formula 2 represents the activation function, and σ in Formula 13 is an adjustable hyperparameter. To avoid confusion, we will replace the activation function's σ with a capital Greek letter. In addition, we will give the detailed meaning of each symbol in the appendix of the final version.
Q3: About parameter sensitivity: the experimental results show that the parameter σ (node and edge weights) has a significant impact on the results, and the best effect is achieved when the model relies only on node attributes, which may indicate a weakness in processing edge information. R3: Thank you. Our experiments show that in some cases (when edge information is redundant and complicated), we can appropriately abandon the connections of edges and only consider node features, which can reduce the complexity of the model. This will be clarified in the final version. Q4: Incomplete experiments: the proposed method is an algorithm for heterogeneous graphs, but only experiments on node classification and node clustering are conducted, without discussion of other graph problems such as link prediction. R4: Thanks. Inspired by your advice, we will study this issue in more depth and discuss it in the final version.
Summary: This paper proposes a novel self-supervised learning framework for heterogeneous graphs that leverages optimal transport theory to align meta-path views with an aggregated central view, eliminating the need for graph augmentation or explicit positive/negative sampling. The method achieves state-of-the-art performance on node classification, clustering, and visualization tasks across four real-world datasets. Claims And Evidence: The claims made in the submission are well-supported by clear and convincing evidence. The authors propose a novel self-supervised heterogeneous graph neural network with optimal transport (HGOT) and provide detailed algorithm design. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate and well-justified for the problem at hand. Theoretical Claims: The paper primarily focuses on methodological innovation and empirical validation rather than formal theoretical proofs. The authors provide detailed formulations of the optimal transport-based alignment mechanism and its integration into heterogeneous graph learning. A concern is why the fused Gromov-Wasserstein distance is the optimal choice for heterogeneous graphs? A theoretical or empirical comparison with alternative OT metrics (e.g., Wasserstein barycenters) is missing. Experimental Designs Or Analyses: I have checked the soundness of the experimental designs and analyses, and they appear appropriate and valid. The authors evaluate HGOT on four widely used heterogeneous graph datasets (DBLP, ACM, IMDB, Yelp), covering diverse application domains. They compare against a comprehensive set of baselines, including both supervised and self-supervised methods, which ensures fair and relevant comparisons. My concern about this part is that the parameter sensitivity analysis (Section 5.6) lacks depth. For example, the claim that "abandoning edge information improves performance” (p. 
24) conflicts with the fundamental role of edges in graph learning. This requires further exploration (e.g., edge sparsity analysis). Supplementary Material: N/A Relation To Broader Scientific Literature: The paper builds on and extends prior work in self-supervised heterogeneous graph learning and optimal transport. It addresses key limitations of existing contrastive self-supervised methods (e.g., HeCo, HDGI) that rely on graph augmentations and positive/negative sample selection by introducing an optimal transport-based alignment mechanism. Essential References Not Discussed: Most key references are cited. Other Strengths And Weaknesses: The method section (Section 4) lacks intuitive explanations. Visualizations of the OT alignment process would aid understanding. Other Comments Or Suggestions: No Questions For Authors: Please refer to the sections above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Theoretical Claims: Q: A concern is why the fused Gromov-Wasserstein distance is the optimal choice for heterogeneous graphs. A theoretical or empirical comparison with alternative OT metrics (e.g., Wasserstein barycenters) is missing. R: Thanks for your comments. The fused Gromov-Wasserstein distance considers both nodes and edges, so the information in the heterogeneous graph is captured more comprehensively. Therefore, compared with other OT distances, the fused Gromov-Wasserstein distance is the best choice for optimal transport on heterogeneous graphs. We will present an experimental analysis with other OT distance metrics in the final version. Experimental Designs Or Analyses: Q: My concern about this part is that the parameter sensitivity analysis (Section 5.6) lacks depth. For example, the claim that "abandoning edge information improves performance" (p. 24) conflicts with the fundamental role of edges in graph learning. This requires further exploration (e.g., edge sparsity analysis). R: Thank you. From the results of this experiment, it can be inferred that although the edge information on the graph is important, in some cases (redundant and complex edges) we may abandon the connection relationships of some edges and only consider the node feature information, which may lead to better results. We will discuss this issue further in the final version. Other Strengths And Weaknesses: Q: The method section (Section 4) lacks intuitive explanations. Visualizations of the OT alignment process would aid understanding. R: Many thanks. We will try to provide a visualization of the optimal transport process to help readers understand the transport and matching process in the final version.
Text-Image Dual Consistency-Guided OOD Detection with Pretrained Vision-Language Models
Reject
Summary: This paper introduces DualCnst, a novel text-image dual consistency framework for zero-shot Out-of-Distribution (OOD) detection using pretrained Vision-Language Models (VLMs) like CLIP. The core idea is to leverage both semantic similarity (text-based) and visual similarity (image-based) by generating synthetic ID/OOD images via Stable Diffusion. The proposed approach outperforms state-of-the-art (SOTA) methods such as NegLabel and MCM on multiple OOD detection benchmarks, demonstrating superior generalization across diverse datasets. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The paper proposes a new multimodal OOD detection approach, integrating textual and visual consistency. Unlike existing methods (e.g., NegLabel), which focus only on text-based features, DualCnst explores the underutilized visual similarity. 2. The approach achieves state-of-the-art (SOTA) performance on standard benchmarks, with notable improvements in far OOD (+2.35%), near OOD (+3.9%), and robust OOD detection (+9.9%). Weaknesses: See comments. Other Comments Or Suggestions: Comments: 1. The paper assumes that Stable Diffusion can reliably supplement visual information for ID/OOD detection. However, this assumption may not always hold: (1) Stable Diffusion may generate images with noise, style variations, or artifacts, potentially misleading the OOD detection model. (2) The paper does not analyze how synthetic images compare with real ID data. If the generated images significantly deviate from the true ID distribution, they may negatively impact OOD detection performance. (3) The paper does not assess whether the generated images faithfully represent ID/OOD distributions or if they introduce biases that could mislead the detection process. 2. 
The proposed DualCnst score function is a simple weighted combination of textual and visual similarities, but there is no theoretical analysis to justify the chosen weighting strategy. 3. The paper does not provide a mathematical proof of convergence or generalization for the score function, relying only on empirical validation. A more rigorous theoretical foundation would strengthen the contribution. 4. The paper provides a discussion on computational cost in Table 19, reporting that DualCnst requires 10 hours and 22 minutes to generate synthetic images, while inference takes 17 minutes, compared to NegLabel’s 14 minutes and 35 seconds for inference on ImageNet-1k. While the inference overhead is reasonable, the preprocessing time for image generation is significantly high, which may limit the practicality of the approach in real-world applications. The authors should explore ways to reduce computational overhead. Questions For Authors: Questions: 1. Stable Diffusion performs well on natural image datasets, but is it still effective for OOD detection in domains such as medical imaging or remote sensing? Have the authors considered evaluating its generalization across different types of datasets? Code Of Conduct: Affirmed. Overall Recommendation: 2
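As a hedged illustration of the "simple weighted combination" of textual and visual similarities critiqued in this review (all names, the weighting parameter, and the exact form are hypothetical assumptions, not the paper's DualCnst implementation), such a score could be sketched as:

```python
import numpy as np

def dual_score(img_feat, text_feats, synth_feats, lam=0.5):
    """Hedged sketch of a dual-consistency OOD score: a weighted sum of the
    maximum cosine similarity of a test image embedding to ID text embeddings
    (text_feats) and to synthetic ID image embeddings (synth_feats).
    Higher score -> more likely in-distribution."""
    def max_cos(a, B):
        a = a / np.linalg.norm(a)
        B = B / np.linalg.norm(B, axis=1, keepdims=True)
        return float((B @ a).max())
    s_text = max_cos(img_feat, text_feats)   # semantic (text-based) term
    s_img = max_cos(img_feat, synth_feats)   # visual (image-based) term
    return lam * s_text + (1.0 - lam) * s_img

# Toy example with random 8-d "CLIP-style" embeddings.
rng = np.random.default_rng(1)
x = rng.standard_normal(8)       # test image embedding
T = rng.standard_normal((5, 8))  # ID class text embeddings
V = rng.standard_normal((5, 8))  # synthetic ID image embeddings
s = dual_score(x, T, V, lam=0.5)
# OOD decision: compare s against a threshold chosen at, e.g., 95% ID TPR.
```

With lam=1.0 this degenerates to a purely text-based (MCM/NegLabel-style) score, which is the design point the reviewer asks the authors to justify theoretically.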
Rebuttal 1: Rebuttal: Response to Reviewer aTAh

We thank the reviewer aTAh for the valuable feedback. We have addressed all the comments. Please find the point-to-point responses below. Any further comments and discussions are welcome!

**W1:** The paper assumes that Stable Diffusion can reliably supplement visual information for ID/OOD detection. However, this assumption may not always hold: (1) **Generation Quality** (2) **Distribution Alignment** (3) **Representation Fidelity**

**Reply:**

**R1:** We appreciate the suggestion. While synthetic images may contain noise, style variations, or artifacts that could potentially mislead OOD detection models, our score function incorporates multi-level features from pixel to semantic levels. Thus, even when pixel-level features are affected by such deviations, language-level features remain effective. To validate this, we generated **oil painting-style** images to simulate style shifts. As shown in Table A, our method maintains robust performance despite this variation. Due to space constraints, the complete table is available at: https://anonymous.4open.science/r/fjutlfy-31D8 Table K.

Table A: Experimental comparison under generated-image style shifts. ID dataset: ImageNet-1k.

| Image Style (Method) | Average FPR95 / AUROC |
| :---: | :---: |
| Natural Images (DualCnst) | **23.24 / 94.55** |
| Oil Painting Images (DualCnst) | 23.83 / 94.37 |
| NegLabel | 25.40 / 94.21 |

**R2 and R3:** We confirm that the synthetic ID/OOD data maintain consistent language-level features with real data, ensuring the robustness of our method (as validated above). Notably, as demonstrated in the experiments (Table A) in R1, we deliberately generated oil-style images that deviate from the ID distribution, yet they still contribute to improved detection performance. This further demonstrates the robustness of language-level features.
**W2:** The proposed DualCnst score function is a simple weighted combination of textual and visual similarities, but there is no theoretical analysis to justify the chosen weighting strategy. **Reply:** We gratefully acknowledge the reviewer's suggestion. Our supplemented theoretical analysis demonstrates that under a weighted combination strategy of textual and visual similarities, the false positive rate ($\text{FPR}_\lambda$) exhibits a monotonic decrease with an increasing number of multimodal (visual) labels under certain conditions. This result not only validates that incorporating auxiliary visual features enhances OOD detection performance, but also confirms the appropriateness of the weighted combination strategy. Due to space limitations, the detailed theoretical analysis can be found in the response to Reviewer yyQ7-W1. **W3:** The paper does not provide a mathematical proof of convergence or generalization for the score function, relying only on empirical validation. A more rigorous theoretical foundation would strengthen the contribution. **Reply:** As noted in our W2 reply, we have discussed the proposed score function's ability to improve the separability between ID and OOD samples in terms of the false positive rate ($\text{FPR}_\lambda$). Specifically, the proposed score function integrates multi-level synthetic image features into the existing text-based labels. Theoretically, we consider a more general scenario: how expanding multimodal labels improves OOD detection. We prove that, under certain conditions, $\text{FPR}_\lambda$ decreases as the number of multimodal labels increases, demonstrating that incorporating additional auxiliary modalities into labels improves OOD detection performance. **W4:** DualCnst's synthetic image generation incurs significantly higher costs than baselines, potentially limiting real-world applicability. Urge optimization of computational overhead. **Reply:** Thank you for your comment.
Regarding the efficiency of Stable Diffusion generation, we address the issue through two key optimizations. Reviewer ViPC raised the same question; due to space limitations, please refer to our detailed response under W5 in the **Reviewer ViPC** section. **Q:** Stable Diffusion performs well on natural image datasets, but is it still effective for OOD detection in domains such as medical imaging or remote sensing? Have the authors considered evaluating its generalization across different types of datasets? **Reply:** We have supplemented experiments on remote sensing data using UCM [1] as the ID dataset and AID [2] as the OOD dataset. As shown in Table D, our method achieves superior performance compared to the baseline NegLabel. [1] Y. Yang and S. Newsam, "Bag-of-visual-words and spatial extensions for land-use classification." [2] Zhong et al., "AID: A benchmark dataset for performance evaluation of aerial scene classification."

Table D: OOD detection experiments on remote sensing data. Each entry reports FPR95/AUROC.

| Method | AID |
| :---: | :---: |
| NegLabel | 97.98/62.46 |
| DualCnst (ours) | **97.19/66.11** |
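The "weighted combination of textual and visual similarities" discussed in W2/W3 can be illustrated with a minimal sketch. This is our illustrative reconstruction, not the authors' implementation; `cos_sim`, the max over prompts/images, and the weight `alpha` (echoing the fixed α=0.1 used in the rebuttals) are simplifying assumptions.

```python
import numpy as np

def cos_sim(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def dual_consistency_score(test_feat, id_text_feats, id_image_feats, alpha=0.1):
    """Illustrative dual-consistency ID score: textual similarity to ID
    label embeddings plus alpha-weighted visual similarity to synthetic
    ID image embeddings. Higher values suggest the sample is ID."""
    s_text = max(cos_sim(test_feat, t) for t in id_text_feats)
    s_image = max(cos_sim(test_feat, v) for v in id_image_feats)
    return s_text + alpha * s_image

# Toy example: a feature aligned with the ID direction scores higher
# than an orthogonal (OOD-like) feature.
id_dir = np.array([1.0, 0.0])
ood_dir = np.array([0.0, 1.0])
texts, images = [id_dir], [np.array([0.9, 0.1])]
assert dual_consistency_score(id_dir, texts, images) > \
       dual_consistency_score(ood_dir, texts, images)
```

The actual method additionally uses negative (OOD) labels and multi-level image features; the sketch only shows how a weighted text+image combination changes the ranking of ID-like versus OOD-like samples.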
Summary: This paper proposes DualCnst for CLIP-based zero-shot OOD detection. It enhances zero-shot OOD detection by combining text-image dual consistency, leveraging both semantic similarity to textual labels and visual similarity to synthesized images. This unified framework achieves state-of-the-art performance across benchmarks and generalizes well across VLM architectures. Claims And Evidence: The main claims of this paper are that 1. adding visual features to CLIP-based zero-shot OOD detection can improve performance, and 2. the proposed method outperforms the previous SOTA. By properly introducing visual features from CLIP-based models with Stable Diffusion, the method works well and outperforms the previous SOTA through extensive experiments. Methods And Evaluation Criteria: Most descriptions are clear to me. Just a few questions remain: 1. Why are ID images not accessible (Fig. 1)? Please justify. I think most testing benchmarks provide training samples which can be used as "actual ID images". This scenario should be considered as a baseline. 2. Is it necessary to have negative labels/images? If it is necessary, it would be hard to judge the contribution of this paper over NegLabel. The authors are encouraged to include an ablation study to emphasize image features without using negative labels/images. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: 3. Considering negative images, is the upper bound given by using real OOD samples? The authors are encouraged to include this as one of the ablation studies. 4. The training of CLIP and Stable Diffusion both involves large amounts of data, which may cover the OOD datasets. How to ensure fairness in evaluation? Especially considering that the authors have differentiated some methods using massive auxiliary datasets in Tables 1 & 3. Supplementary Material: I have checked the Appendix. Relation To Broader Scientific Literature: 5.
OOD detection is highly valuable for understanding and modeling the boundaries of models. The method proposed in this paper effectively leverages the text and image modalities provided by the CLIP model, combined with generative models, offering significant insights for the future development of OOD detection in the era of large models. Essential References Not Discussed: 6. While L421 mentioned VAE-based OOD detection methods, they are pretty old. It would be better to discuss the relationship with some newer methods based on generative models, such as [a] with GAN and [b-c] with diffusion models. Especially, [b-c] also use Stable Diffusion and perform training-free OOD detection, and should also be discussed in the last paragraph of Sec. 5. [a] Out-of-Distribution Detection with Semantic Mismatch under Masking, Yang et al., ECCV 2022. [b] DiffGuard: Semantic Mismatch-Guided Out-of-Distribution Detection using Pre-trained Diffusion Models, Gao et al., ICCV 2023. [c] Out-of-distribution Detection with Diffusion-based Neighborhood, Liu et al., 2023. Other Strengths And Weaknesses: **Strengths**: 7. This article conducts comprehensive experimental comparisons on multiple benchmarks, demonstrating the effectiveness of the proposed method. 8. The paper is well-written and easy to follow. **Weaknesses**: 9. I noticed that the image generation time is pretty long in Table 19, which should be emphasized in the main paper as one of the limitations. Other Comments Or Suggestions: 10. The authors have talked about different CLIP models but the architectures for diffusion models are similar. The authors are encouraged to consider other models such as DiT-based ones. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Response to Reviewer Ci5u We thank the reviewer Ci5u for the valuable feedback. We addressed all the comments. Please find the point-to-point responses below. Any further comments and discussions are welcomed! **W1:** Why are ID images not accessible (Fig. 1)? Please justify. I think most testing benchmarks provide training samples which can be used as "actual ID images". This scenario should be considered as a baseline. **Reply:** Thank you for your question. However, in the **zero-shot** OOD detection setting considered in this paper, training samples are not ID images. Specifically, the definitions of ID and OOD labels follow those stated in the **Introduction section of ZOC**. **W2:** If negative labels are essential, the method's novelty over NegLabel is unclear. An ablation study (removing negatives) is needed to isolate the impact of synthetic image features. **Reply:** Our method is not limited to applications with NegLabel. As shown in Table A, the results demonstrate that incorporating synthetic image labels can effectively enhance OOD detection performance even without negative labels/images. Due to space constraints, the complete table is available at: https://anonymous.4open.science/r/fjutlfy-31D8 Table H.

Table A: Performance comparison of integrating DualCnst with MCM for OOD detection on ImageNet-1k (ID dataset).

| Method | Average FPR95 |
| :---: | :---: |
| MCM | 65.20 |
| MCM+DualCnst | **63.73** |

**W3:** Considering negative images, is the upper bound given by using real OOD samples? The authors are encouraged to include this as one of the ablation studies. **Reply:** Yes, we have conducted ablation studies using real OOD samples (Table B). The results demonstrate that while the proposed method achieves better performance with real samples compared to synthetic ones, the improvement margin is not substantial, demonstrating the synthetic samples' effectiveness.
Due to space constraints, the complete table is available at: https://anonymous.4open.science/r/fjutlfy-31D8 Table I.

Table B: Experimental comparison between real OOD samples and synthetic OOD samples, with ImageNet-1k as the ID dataset.

| Source of OOD images | Average FPR95 |
| :---: | :---: |
| Synthetic OOD samples | 23.24 |
| Real OOD samples | **12.36** |

**W4:** How to ensure evaluation fairness for models (e.g., CLIP/Stable Diffusion) trained on large datasets that may include OOD data, particularly when comparing methods with/without auxiliary datasets? **Reply:** Thanks for your question. We would like to clarify our definition of ID classes. Following zero-shot OOD detection [1, 2, 3], in our setting, the ID classes are defined based on the classification task of interest rather than the classes used in pre-training. Additionally, we adopt widely recognized zero-shot OOD detection benchmarks, where the label spaces of ID and OOD datasets do not overlap. [1] Ming et al. Delving into Out-of-Distribution Detection with Vision-Language Representations. [2] Jiang et al. Negative Label Guided OOD Detection with Pretrained Vision-Language Models. [3] Esmaeilpour et al. Zero-Shot Out-of-Distribution Detection Based on the Pre-trained Model CLIP. **W5:** Suggest expanding the discussion in Sec. 5 to include newer GAN/diffusion-based OOD methods (e.g., [a] with GAN, [b-c] with training-free Stable Diffusion) rather than older VAE-based approaches. **Reply:** Thank you for your suggestions. These three articles will be incorporated into the Related Work section. **W6:** I noticed that the image generation time is pretty long in Table 19, which should be emphasized in the main paper as one of the limitations. **Reply:** We acknowledge this as a limitation of our current framework and will include it in the limitations. However, we note that the computational efficiency can be significantly optimized without compromising detection performance.
Due to space limitations, and since Reviewer ViPC raised the same question, please refer to our detailed response under W5 in the **Reviewer ViPC** section. **W7:** The authors have talked about different CLIP models but the architectures for diffusion models are similar. The authors are encouraged to consider other models such as DiT-based ones. **Reply:** We appreciate the suggestion. We incorporated synthetic images generated by Hunyuan-DiT [1] in our experiments; compared to SD1.5, Hunyuan-DiT achieved better results on the FPR95 metric. Due to space constraints, the complete table is available at: https://anonymous.4open.science/r/fjutlfy-31D8 Table J. [1] Hunyuan-DiT: A Powerful Multi-Resolution Diffusion Transformer with Fine-Grained Chinese Understanding.

Table E: Performance comparison between the SD model and DiT models in the DualCnst method.

| Model | Average FPR95 |
| :---: | :---: |
| SD1.5 | 23.24 |
| Hunyuan-DiT | **23.19** |
Summary: This paper presents a simple and effective method to enhance the performance of OOD detection. In addition to utilizing the similarity between test images and text features, it also introduces images through a diffusion model, thereby leveraging the similarity between test images and generated images to further improve OOD detection performance. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: NA Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: NA Essential References Not Discussed: The main contribution of this paper is to enhance the zero-shot OOD detection capability of the CLIP model. Recently, there have been many related papers in this direction, such as [1, 2]. It is suggested that the authors add discussion and comparison. [1] LAPt: Label-driven Automated Prompt Tuning for OOD Detection with Vision-Language Models, ECCV2024 [2] CLIPScope: Enhancing Zero-Shot OOD Detection with Bayesian Scoring Other Strengths And Weaknesses: The biggest advantage of this paper is that the method is simple and effective. However, this might also be its biggest disadvantage, because the contribution of the technique is relatively small. Therefore, it is recommended that the authors add some profound explanations, such as theoretical support or vivid visual demonstrations, to enrich the content. Other Comments Or Suggestions: NA Questions For Authors: See questions above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Response to Reviewer yyQ7 We thank the reviewer yyQ7 for the valuable feedback. We addressed all the comments. Please find the point-to-point responses below. Any further comments and discussions are welcomed! **W1:** While the method's simplicity is a strength, it risks underselling technical novelty. Expand theoretical guarantees or visual evidence to bolster significance. **Reply:** Thank you for your comment. Yes, our method is simple and effective, but it initially lacked a theoretical explanation. To address this, we have supplemented the theoretical analysis. The proposed method primarily enhances the diversity of the label space by incorporating multi-level synthetic image features into the existing text-based labels. Theoretically, we consider a more general scenario: how expanding multimodal labels improves OOD detection. We prove that, under certain conditions, the false positive rate ($\text{FPR}_\lambda$) decreases as the number of multimodal labels increases, demonstrating that incorporating additional auxiliary modalities into labels enhances OOD detection performance. The specific theoretical details are as follows: ### Theoretical Analysis of Multimodal Label Enhancement **Core Contribution** We prove that expanding multimodal labels reduces the OOD detection false positive rate ($\text{FPR}_\lambda$). The method enhances label diversity through multi-level synthetic image features, improving separability between ID/OOD samples. --- #### Key Theoretical Steps 1. **Multimodal Label Definition** Define $N$-modal negative labels $\widetilde{Y}_i = \{\widetilde{y}_{i,1}, \dots, \widetilde{y}_{i,N}\}$, where $\widetilde{y}_{i,1}$ is the primary modality (text) and $\widetilde{y}_{i,2}, \dots, \widetilde{y}_{i,N}$ are auxiliary modalities (synthetic image embeddings). 2.
**Weight Allocation** Assign non-uniform weights: $$w_j = \begin{cases} \frac{a}{N}, & j=1 \quad \text{(primary modality weight)} \\ \frac{1 - \frac{a}{N}}{N - 1}, & j=2, \dots, N \quad \text{(auxiliary modality weights)} \end{cases}$$ 3. **Aggregated Similarity Score** Compute the weighted similarity: $$s_i = \frac{a}{N} s_{i,1} + \sum_{j=2}^N \frac{1 - \frac{a}{N}}{N - 1} s_{i,j}$$ 4. **Statistical Properties** For i.i.d. $s_{i,j} \sim (\mu, \sigma^2)$: $$\mathbb{E}[s_i] = \mu, \quad \text{Var}(s_i) = \frac{\sigma^2}{N'(N)}, \quad N'(N) = \frac{N(N-1)}{a^2 + N - 2a}$$ 5. **OOD Score Distribution** The match count $c = \sum_{i=1}^M \mathbb{I}[s_i \geq \psi]$ follows: $$c_{\text{in}} \sim \mathcal{N}(Mp_1, Mp_1(1-p_1)), \quad c_{\text{out}} \sim \mathcal{N}(Mp_2, Mp_2(1-p_2))$$ where $p_1 = 1-\Phi(k_1\sqrt{N'})$, $p_2 = 1-\Phi(k_2\sqrt{N'})$ with $k_1 = \frac{\psi-\mu_1}{\sigma}$, $k_2 = \frac{\psi-\mu_2}{\sigma}$. 6. **FPR Analysis** $$\text{FPR}_\lambda = \Phi\left(\frac{\sqrt{M}(p_1 - p_2) + \sqrt{p_1(1-p_1)}\Phi^{-1}(\lambda)}{\sqrt{p_2(1-p_2)}}\right)$$ 7. **Critical Result** $$\frac{\partial \text{FPR}_\lambda}{\partial N} < 0 \quad \text{when} \quad \mu_1 + \frac{\sigma}{\sqrt{N'}} > \psi > \mu_2$$ **Conclusion**: Increasing $N$ strictly reduces $\text{FPR}_\lambda$. --- #### Remark on Assumptions - The i.i.d. assumption on $s_{i,j}$ can be relaxed to dependent variables. - Variance reduction ($\text{Var}(s_i) \propto 1/N$) remains the driving mechanism. **Theoretical Impact**: Provides a mathematical guarantee for multimodal label enhancement in OOD detection. A detailed proof will be provided in the paper. **W2:** The main contribution of this paper is to enhance the zero-shot OOD detection capability of the CLIP model. Recently, there have been many related papers in this direction, such as [1, 2]. It is suggested that the authors add discussion and comparison.
[1] LAPt: Label-driven Automated Prompt Tuning for OOD Detection with Vision-Language Models, ECCV 2024. [2] CLIPScope: Enhancing Zero-Shot OOD Detection with Bayesian Scoring. **Reply:** We appreciate the reviewer's valuable suggestion. In response, we have incorporated the LAPt method into the main experiments for comparison, as shown in Table A. Although LAPt adapts to ID data by fine-tuning prompt parameters, our method still achieves superior performance on the FPR95 metric.

Table A: Experimental comparisons on the ImageNet-1k task, with iNaturalist, SUN, Places, and Texture used as out-of-distribution (OOD) data for evaluation. Each entry reports FPR95/AUROC.

| Method | iNaturalist | SUN | Places | Texture | Average |
| :---: | :---: | :---: | :---: | :---: | :---: |
| DualCnst (ours) | 1.29/99.65 | 17.60/95.89 | 31.91/92.13 | 42.15/90.51 | **23.24**/94.55 |
| LAPt | 1.16/99.63 | 19.12/96.01 | 33.01/92.01 | 40.32/91.06 | 23.40/**94.68** |
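The weight allocation and variance reduction in the theoretical analysis above can be checked numerically. The sketch below is a minimal illustration (the values `N=8` and `a=1.5` are arbitrary examples), verifying that the weights sum to one and that $\sum_j w_j^2 = (a^2+N-2a)/(N(N-1)) = 1/N'(N)$ as stated in the Statistical Properties step:

```python
import numpy as np

def modality_weights(N, a):
    """Non-uniform weights from the Weight Allocation step: the primary
    (text) modality gets a/N; the N-1 auxiliary (synthetic-image)
    modalities share the remaining mass equally."""
    w = np.full(N, (1 - a / N) / (N - 1))
    w[0] = a / N
    return w

def aggregate_score(s, a):
    """Aggregated similarity s_i = sum_j w_j * s_{i,j} over N modalities."""
    s = np.asarray(s, dtype=float)
    return float(modality_weights(len(s), a) @ s)

# Sanity checks: weights sum to 1, and for i.i.d. scores with variance
# sigma^2 we get Var(s_i) = sigma^2 * sum_j w_j^2 = sigma^2 / N'(N)
# with N'(N) = N(N-1) / (a^2 + N - 2a).
N, a = 8, 1.5
w = modality_weights(N, a)
assert np.isclose(w.sum(), 1.0)
assert np.isclose((w ** 2).sum(), (a**2 + N - 2 * a) / (N * (N - 1)))
```

Since $\sum_j w_j^2$ shrinks as $N$ grows, the variance of the aggregated score decreases, which is the mechanism driving the $\text{FPR}_\lambda$ reduction claimed in the Critical Result.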
Summary: This paper proposed a novel OOD approach named DualCnst, based on text-image dual consistency. In addition to detecting OOD samples by assessing the similarity between test images and ID/OOD label texts, this paper synthesizes OOD images using text-to-image models and incorporates the visual similarity between test images and ID/OOD images to enhance the model's performance. This paper achieves good performance on ImageNet-level OOD benchmarks. Claims And Evidence: Yes. This paper claims that incorporating visual information in VLM-based zero-shot settings is beneficial for OOD detection, as proven by experiments. Methods And Evaluation Criteria: Although the novelty of the method in this paper is limited, it is reasonable within the field of OOD detection. Theoretical Claims: This paper does not provide significant theoretical proof, and no obvious errors were found in the formulas. Experimental Designs Or Analyses: I have checked all experiments and analyses, and the specific issues are as follows: 1. I noticed in Appendix Sec. B.4 that for each OOD test dataset, the best-performing parameter $\alpha$ was selected, which is very unreasonable. An important principle in OOD detection tasks is OOD agnosticism, meaning that in real-world environments, the categories and domain of OOD samples are broad and unknown. The use of OOD-specific hyperparameters in this paper conflicts with this principle. 2. I am confused about the fixed $\omega$ in Sec. 4 (such as in Line 313). According to the description of the method in this paper, the weights $\omega$ of the intermediate and final layers of the image encoder should be different and sum up to 1. Why does a fixed $\omega$ appear? I suspect that it might actually refer to $r$? 3. The design of the Robust OOD Detection experiment in this paper is unreasonable. The experiment incorrectly replaces the ID dataset with a covariate-shifted ID (csID) dataset to explore the model's generalization on ID data.
Instead, it should follow the full-spectrum OOD setup [1, 2], which retains the ID data while adding the csID data to the test samples. Otherwise, generalizing from ImageNet to ImageNet-R is too easy for models like CLIP and SD that have been exposed to large amounts of data, which fails to demonstrate the effectiveness of the proposed method. [1] Full-Spectrum Out-of-Distribution Detection. [2] OpenOOD v1.5: Enhanced Benchmark for Out-of-Distribution Detection. Supplementary Material: Yes. I reviewed the entire Supplementary Material, with a particular focus on Sections B.4 to B.6, which cover the ablation study on hyperparameters and the design of other OOD scores for the Dual Consistency approach. Relation To Broader Scientific Literature: This work introduces ID/OOD images generated by Stable Diffusion based on NegLabel [1], and uses visual similarity alongside text-image similarity from VLMs as an OOD scoring function. Additionally, prior works [2, 3] have implemented OOD detection using generation methods based on Stable Diffusion. [1] Negative Label Guided OOD Detection with Pretrained Vision-Language Models. [2] Unsupervised Out-of-Distribution Detection with Diffusion Inpainting. [3] Denoising Diffusion Models for Out-of-Distribution Detection. Essential References Not Discussed: VLM-based OOD methods do take visual information into account. LoCoOp [1] aligns the semantic-relevant visual region with language features, and LSA [2] synthesizes high-likelihood ID features from class-specific Gaussian distributions to enhance the model's perception of ID semantics. Although these methods do not operate in a zero-shot setting, it is necessary to discuss and compare them, especially considering that this paper introduces an additional powerful yet time-consuming model like Stable Diffusion. [1] LoCoOp: Few-Shot Out-of-Distribution Detection via Prompt Learning. [2] Likelihood-Aware Semantic Alignment for Full-Spectrum Out-of-Distribution Detection.
Other Strengths And Weaknesses: Strengths: This paper is well-written in general, and the method design is reasonable. Weaknesses: Aside from the issues in the experiments, the paper is limited in methodological novelty, as it combines NegLabel with a text-to-image approach. Furthermore, although the authors mention the computational burden advantages of this method compared to LMD, generating images with stable-diffusion remains a time-consuming process, which cannot meet the fast response requirements when deployed in real-world environments. Other Comments Or Suggestions: No more comments. Questions For Authors: No more questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Response to Reviewer ViPC We thank the reviewer ViPC for the valuable feedback. We addressed all the comments. Please find the point-to-point responses below. Any further comments and discussions are welcomed! **W1:** The paper uses an OOD-specific α (tuned per OOD dataset), violating OOD agnosticism. **Reply:** We thank the reviewer for the feedback. **Experiments with a fixed α=0.1 achieve performance comparable to our original method while retaining significant advantages over baselines**. Due to space constraints, the complete table is available at: https://anonymous.4open.science/r/fjutlfy-31D8 Table A.

Table A: Comparison with other baselines using fixed α=0.1.

| Method | Average FPR95 |
| :---: | :---: |
| MCM | 42.74 |
| NegLabel | 25.40 |
| Ours (optimal α) | 23.05 |
| Ours (fixed α=0.1) | 23.24 |

**W2:** Sec. 4's fixed ω contradicts the method's varying encoder ω (sum=1). Typo (e.g., should it be r)? **Reply:** Thank you for catching this; we have corrected ω to r in the manuscript. **W3:** The current OOD detection framework oversimplifies evaluation. Adopting full-spectrum OOD would enable rigorous validation. **Reply:** Following OpenOOD v1.5 protocols, we now use ImageNet-1k/R as ID with near-OOD benchmarks (SSB-Hard, NINCO). Our method maintains superiority over baselines in this enhanced evaluation. Due to space constraints, the complete table is available at: https://anonymous.4open.science/r/fjutlfy-31D8 Table B.

Table B: Robustness experiments on ImageNet-1k and ImageNet-R.

| Method | Average FPR95 |
| :---: | :---: |
| MCM | 81.58 |
| NegLabel | 50.04 |
| DualCnst (ours) | **48.38** |

**W4:** VLM-based OOD methods (e.g., LoCoOp [1]/LSA [2]) already use visual semantics without costly generators (e.g., SD). A comparison is needed to justify the added complexity. **Reply:** We thank the reviewer for the suggestion.
Added comparisons with LoCoOp (Table C) and LSA (Table D) show our method achieves superior performance over few-shot VLM baselines, with added compatibility for text enhancement (e.g., NegLabel). While LSA's official code is not fully released, we implemented it on their original dataset for fair evaluation. Due to space constraints, the complete tables are available at: https://anonymous.4open.science/r/fjutlfy-31D8 Table C & D.

Table C: Experimental comparisons on the ImageNet-1k benchmark.

| Method | Average FPR95 |
| :---: | :---: |
| LoCoOpGL | 28.66 |
| LoCoOpMCM | 33.98 |
| DualCnst (ours) | **23.24** |

Table D: Experimental comparisons on the ImageNet-1k task, with near and far datasets evaluated as OOD data.

| Method | Average FPR95 |
| :---: | :---: |
| NegLabel | 51.60 |
| LSA | 58.72 |
| DualCnst (ours) | **50.03** |

**W5:** 1. The high time cost of Stable Diffusion image generation undermines real-time deployment claims. 2. Limited methodological novelty: the approach primarily combines NegLabel with text-to-image synthesis. **Reply:** **R1:** Regarding the efficiency of Stable Diffusion generation, we address the issue through two key optimizations: **First**, numerous accelerated versions of Stable Diffusion are now available. SDXL-Turbo achieves **10× faster generation** than SD1.5 (reducing time from 55m42s to 4m10s) while maintaining equivalent detection performance (Table E). This acceleration also enhances overall method performance (Table F). **Second, our synthetic images only need to be generated once, eliminating the need for repeated generation.** Due to space constraints, please refer to the full tables at the link https://anonymous.4open.science/r/fjutlfy-31D8 Table E & F.

Table E: Time comparison for accelerated SD models.

| SD model | Time to generate ID images | Time to generate both ID and OOD images |
| :---: | :---: | :---: |
| SD1.5 | 55m42s | 10h22m |
| SDXL-Turbo | **4m10s** | **55m52s** |

Table F: Performance comparison between SD models in the DualCnst method.

| SD model | Average FPR95 |
| :---: | :---: |
| SD1.5 | 23.24 |
| SDXL-Turbo | **22.95** |

**R2:** To better understand our approach, we provide deeper theoretical insights and have supplemented the theoretical analysis. The proposed method primarily enhances the diversity of the label space by incorporating multi-level synthetic image features into the existing text-based labels. Theoretically, **we consider a more general scenario: how expanding multimodal labels improves OOD detection. We prove that, under certain conditions, the false positive rate ($\text{FPR}_\lambda$) decreases as the number of multimodal labels increases, demonstrating that incorporating more auxiliary modalities in labels enhances OOD detection performance.** Due to space limitations, the detailed theoretical analysis can be found in the response to Reviewer yyQ7-W1. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. My concerns on the experimental part have been addressed in the rebuttal. But I still think this paper is limited in methodological novelty. Therefore I will increase my rating to WA. --- Reply to Comment 1.1.1: Comment: **Thanks for raising the score** Dear Reviewer ViPC, We thank the reviewer for raising the score! We sincerely appreciate your acknowledgment of our experimental revisions and your valuable input, which has helped strengthen our work. Regarding the novelty and contribution, we would like to clarify further: - **Simple yet Effective Framework (DualCnst):** Our approach uniquely integrates both semantic-textual similarity and visual similarity metrics between test samples and synthesized ID/OOD labels, significantly improving VLM-based OOD detection accuracy.
- **Theoretical Analysis:** As detailed in our supplemental materials, we theoretically demonstrate that leveraging multimodal label spaces (text + synthetic images) reduces the false positive rate under certain conditions. This proves that incorporating auxiliary modalities enhances OOD detection performance. Best regards, Authors of 3561
Enabling Optimal Decisions in Rehearsal Learning under CARE Condition
Accept (poster)
Summary: The paper introduces a CAnonical REctangle (CARE) condition for the Avoiding Undesired Future (AUF) problem. Under this CARE condition, along with additional assumptions on the problem structure and the noise term, the AUF problem can be reformulated as a convex optimization problem. The authors propose a projection-Newton-based algorithm with a provable sublinear convergence rate and extend the approach to cases where the CARE condition does not hold. Furthermore, for the special case where the target variable $Y$ is single-dimensional, the paper derives a closed-form solution that significantly reduces time complexity. Numerical experiments on both synthetic and real-world datasets are provided to demonstrate the effectiveness and efficiency of the proposed algorithms. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I reviewed the proofs of Theorem 3.6 and Theorem 3.8, and they appear to be correct to the best of my knowledge. Experimental Designs Or Analyses: While the overall experimental design seems sound, the paper does not provide sufficient details on the experimental setup in its appendix. For example, the procedure for generating the synthetic data is not clearly explained. Supplementary Material: I reviewed the supplementary material, including the proofs of Theorem 3.6 and Theorem 3.8 and the experiments part in its appendix. Relation To Broader Scientific Literature: The results presented can be applied to general AUF problems under specific conditions. Essential References Not Discussed: Not sure. Other Strengths And Weaknesses: Strengths: The paper is well written and provides clear remarks and visualizations that effectively illustrate the theoretical results. Weaknesses: The discussion regarding the additive noise is limited; it appears that the results may only hold under Gaussian noise assumptions. 
Additionally, the experimental section would be clearer with more detailed descriptions of the experimental setup. Other Comments Or Suggestions: None. Questions For Authors: a) If the additive noise in equation (1) does not follow a Gaussian distribution, do the proposed methods remain (numerically) effective? A discussion on potential performance variations or issues under non-Gaussian noise, or a potential extension method which can be applied in non-Gaussian noises would help clarify the broader applicability of the results. b) Related to the previous question, if Theorem 3.6 is valid only under the assumption of Gaussian noise, are there other families of distributions (e.g., those within the exponential family) for which the results could still hold? The limitation of Gaussian noises is my major concern. c) Could you provide more detailed information on how the synthetic data was generated, as well as a summary of the characteristics of the real-world dataset used in your experiments? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the valuable feedback! We hope our responses can address your concerns. **Q1.** Extension for non-Gaussian noise. **A1.** Thanks for your insightful question. We would like to clarify that the Gaussian noise assumption is primarily used to establish theoretical guarantees. For cases where this assumption does not hold, AUF can also be addressed effectively by: - **Gaussian approximation.** It is common to approximate irregular distributions using a Gaussian distribution. E.g., Laplace's approximation [1] can approximate unimodal distributions by fitting a Gaussian distribution centered at the mode. To validate the empirical effectiveness of our method under non-Gaussian noise, we provide additional experiments as shown in Fig. 1 of [Anon. Link](https://default-anno-bucket.s3.us-west-1.amazonaws.com/rebuttal.pdf). - **A numerical solution.** An alternative empirical approach can be used to solve AUF in Eq. (2) by training a sampler (e.g., a normalizing flow) for the potentially non-Gaussian noise using residuals. This method involves sampling $n$ noise realizations from the trained sampler and selecting $z^\xi$ to maximize the number of instances where $Y\in S$. This can be formulated as the following mixed-integer linear program: $$\begin{aligned}\max_{e_i\in\{0,1\}, z^\xi}\ &\sum_{i=1}^n e_i\\ \text{s.t.}\ &M(Ax+Bz^\xi+C\epsilon_i)-d\leq(1-e_i)\alpha,\ i\in[n]\end{aligned}$$ where $e_i$ indicates whether the $i$-th sample is successful, $\alpha$ is a sufficiently large constant vector to tolerate failed samples, and $M,d,A,B,C$ are defined in Thm. 3.6. This approach can be empirically applied with non-Gaussian noise. We will incorporate this discussion into the revised paper. Thanks! --- **Q2.** Potential distribution family making Thm. 3.6 valid. **A2.** Thanks for your question. The validity of Thm. 3.6 stems from the fact that the Gaussian belongs to both the elliptical and exponential families. Hence, the decomposition (Eq.
8) and the log-concavity (lines 611–628) can be established. The main challenge in extending these results to general distributions, such as the exponential family, is that simultaneously satisfying both the decomposition and log-concavity is difficult. Additionally, the PDF of a general distribution can be complex, limiting the applicability of existing analytical techniques. Recognizing the importance of extending theoretical results to non-Gaussian cases, we are currently investigating a similar theoretical foundation for the log-concave family (including Gaussian, uniform, and Laplace distributions, etc.) as part of our future work. In this case, we find that the techniques used in this paper are insufficient, and the Prékopa–Leindler theorem [2] may provide a useful tool for proving log-concavity of generalized distribution families. However, an explicit characterization of these distributions remains a significant challenge. Finally, we would like to emphasize that proving theoretical optimality for probabilistic optimization is challenging, even under Gaussian noise. Establishing theoretical guarantees under Gaussian noise provides insights that extend beyond Gaussian scenarios and is a common practice in previous studies on rehearsal learning [3, 4], time series analysis [5], and control problems [6], among others. We will incorporate this discussion in the revised paper. Thanks! --- **Q3.** Detailed data information. **A3.** Thanks for your question. - **For synthetic data**. Let $V=[feature_1,feature_2,C_{our},C_{cpt},P_{our},P_{cpt},NCT,TPF]$ with variables defined in Fig. 5.
The data generation follows $V=PV+\epsilon,\epsilon\sim\mathcal{N}(0,\Sigma)$: $$P=\begin{pmatrix}0&0&0&0&0&0&0&0\\\\0&0&0&0&0&0&0&0\\\\10&0&0&0&0&0&0&0\\\\0&10&0&0&0&0&0&0\\\\0&0&2.0&0.4&0&0&0&0\\\\0&0&0.5&1.3&0&0&0&0\\\\0&0&1.6&0&-0.9&0&0&0\\\\0&0&-1&0&0.9&0&0&0\end{pmatrix},\Sigma=10^{-2}\begin{pmatrix}4&0&0&0&0&0&0&0\\\\0&4&0&0&0&0&0&0\\\\0&0&6&0&0&0&0&0\\\\0&0&0&3&16&0&0&0\\\\0&0&0&16&6&0&0&0\\\\0&0&0&0&0&6&0&0\\\\0&0&0&0&0&0&4&0\\\\0&0&0&0&0&0&0&12\end{pmatrix}.$$ As $feature_{1,2}$ have no parents, their covariance can be marginalized thus omitted. Parameters are identical to those used in the code provided in Supp. Material and experimental results can be reproduced by running the code. - **For real-world data**. The dataset records values of environment variables in Bermuda, and the decision target is to maintain a high NEC (net coral ecosystem calcification). Due to space limits, we welcome further questions on the dataset and will provide additional details in Appx. E.2 of the revised paper. Thanks again! --- **References:** [1] Pattern Recognition and Machine Learning, 2006 [2] On logarithmic concave measures and functions, 1973 [3] Avoiding Undesired Future with Minimal Cost in Non-Stationary Environments, NeurIPS 2024 [4] Rehearsal Learning for Avoiding Undesired Future, NeurIPS 2023 [5] Time series analysis and its applications, 2000 [6] Online Linear Quadratic Control, ICML 2018
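The sampler-based numerical solution for non-Gaussian noise described in **A1** above can be illustrated with a toy example. The sketch below is hypothetical: a brute-force grid search over a one-dimensional action stands in for the mixed-integer program, and the model $Y=Ax+Bz+C\epsilon$, the Laplace noise, and the desired region are all placeholder choices, not the paper's setup.

```python
import numpy as np

# Toy illustration of A1's numerical approach: sample n noise realizations
# from a (non-Gaussian) residual model, then pick the action z maximizing
# the number of samples with Y in S. A grid search over a scalar action
# stands in for the mixed-integer program; all numbers are hypothetical.
rng = np.random.default_rng(0)
A, B, C, x = 1.0, 1.0, 1.0, 1.0
eps = rng.laplace(scale=1.0, size=500)   # non-Gaussian noise samples
lo, hi = 0.0, 2.0                        # desired region S = {lo <= Y <= hi}

z_grid = np.linspace(-3.0, 3.0, 121)     # candidate actions
successes = []
for z in z_grid:
    y = A * x + B * z + C * eps
    successes.append(int(((y >= lo) & (y <= hi)).sum()))
z_best = z_grid[int(np.argmax(successes))]   # action centering Y within S
```

Because the Laplace noise is symmetric around zero and $Ax=1$ is the center of $S=[0,2]$, the empirical maximizer lands near $z=0$.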
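The linear-SEM sampling scheme $V=PV+\epsilon$ from **A3** can be sketched as follows; $P$ and $\Sigma$ here are small placeholders for illustration, not the $8\times 8$ matrices quoted above.

```python
import numpy as np

# Minimal sketch of sampling from the linear SEM V = P V + eps,
# i.e., V = (I - P)^{-1} eps. P and Sigma are placeholders, NOT the
# 8x8 parameters from the rebuttal.
rng = np.random.default_rng(1)
P = np.array([[0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0],   # V2 depends on V1
              [0.3, 0.7, 0.0]])  # V3 depends on V1 and V2
Sigma = np.diag([0.04, 0.06, 0.03])

n = 1000
eps = rng.multivariate_normal(np.zeros(3), Sigma, size=n)  # (n, 3)
V = eps @ np.linalg.inv(np.eye(3) - P).T                   # rows solve V = P V + eps
```

Each generated row satisfies the structural equation exactly (up to floating-point error), which is easy to verify by plugging `V` back into `V @ P.T + eps`.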
Summary: This paper addresses the AUF (Avoiding Undesired Future) problem in machine learning decision-making, where the goal is to identify actions that prevent undesirable outcomes predicted by ML models. It introduces the CARE condition (CAnonical REctangle), a novel assumption under which the AUF probability—i.e., the probability that a post-decision outcome falls within a desired region—can be explicitly expressed and transformed (via a negative log operation) into a convex function. This convexity enables efficient optimization. They present a projection-Newton algorithm that achieves superlinear convergence to the optimal decision alteration and an inner embedding technique for cases where the CARE condition does not hold. Additionally, the paper provides a closed-form solution when the outcome is a singleton, significantly reducing time complexity. The experimental results on both synthetic and real-world datasets demonstrate that the proposed approach not only improves the AUF probability but also enhances computational efficiency compared to existing rehearsal learning and reinforcement learning methods. Claims And Evidence: Most of the claims are well supported by both theoretical derivations and empirical results. However, the assertion that the CARE condition is prevalent in real-world scenarios deserves additional discussion. The authors claim that this condition particularly holds when the dimensions of Y, such as Y₁ and Y₂, are mutually independent. In many practical situations, however, these objectives tend to be dependent—improvements in one may lead to declines in another. For example, in an autonomous drone delivery system, reducing the risk of package loss may require higher altitudes or longer routes, which in turn increase delivery times; similarly, in healthcare treatment planning, enhancing treatment efficacy often comes with more severe side effects. 
In portfolio optimization, striving to maximize returns usually entails accepting higher risk, illustrating that objectives are frequently interdependent rather than mutually independent. This dependency calls for further clarification or evidence to convincingly support the claim regarding the prevalence of the CARE condition, which underpins the paper. **Generalization Beyond CARE via Inner CARE Embedding:** While the paper introduces an inner CARE embedding technique to handle scenarios where the CARE condition does not naturally hold, the evidence here is somewhat less extensive. Although Propositions 3.10 and 3.11 provide a theoretical basis and a demonstration for circular regions, the empirical evaluation of this generalization is more limited. In practical settings where the desired region S is irregular or non-canonical, additional experiments or case studies might be necessary to fully convince readers of its broad applicability. Methods And Evaluation Criteria: The proposed methods are well-aligned with the AUF problem, featuring a projection-Newton algorithm, an inner CARE embedding for irregular cases, and a closed-form solution for unidimensional outcomes. The evaluation criteria—including AUF probability, success frequency over multiple rounds, and decision-making time—directly reflect the challenges of immediate, interaction-free decisions in rehearsal learning. Additionally, the use of both synthetic and real-world datasets, along with comparisons to state-of-the-art methods, provides a comprehensive and practical benchmark for the proposed approach. Theoretical Claims: The theoretical proofs look sound to me. Experimental Designs Or Analyses: The experimental design and analyses look good to me. Supplementary Material: Yes, I went over all the parts.
Relation To Broader Scientific Literature: Within the literature on rehearsal learning and the AUF problem characterized by linear structural equations, this paper introduces the CARE condition, which imposes a concave structure on the log AUF probability, making the problem more tractable and paving the way to structured algorithmic solutions such as the projection-Newton method proposed herein. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: The paper is solid and clear, with good theoretical and experimental results. However, the authors should back up the CARE condition with more real-world examples. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your valuable feedback and appreciation of our work! We hope that our responses can address your concerns. **W1.** Further Discussion on the CARE Condition. **A1.** Thanks for your insightful question. In practice, the dimensions of $\mathbf{Y}$ are often dependent, meaning that the common region defined as {$a_i \leq Y_i \leq b_i$}, $i=1,\dots,|\mathbf{Y}|$ does not necessarily form a canonical rectangle w.r.t. the covariance matrix of $\mathbf{Y}$. However, the CARE condition remains useful because the desired region $\mathcal{S}$ can be manually specified by the decision-maker, as decisions are made after obtaining some system information. The overall decision-making process in rehearsal learning proceeds as follows: - **Step 1.** Initially, historical observational data samples are available, enabling estimation of the underlying system parameters $\theta$. - **Step 2.** Consequently, matrices such as $C$ and $\Sigma$ (for Def. 3.5 of the CARE condition) are available before decision-making. Since the desired properties of the target $\mathbf{Y}$ are typically determined by the decision-maker, the target region can be manually adjusted based on the estimated $C$ and $\Sigma$ to ensure compliance with the CARE condition. - **Example.** In portfolio optimization, defining a desired region such as $\{a_1 \leq \text{returns} \leq b_1\}$ and $\{a_2 \leq \text{risk} \leq b_2\}$ ensures a balanced strategy with both favorable returns and controlled risk. Alternatively, using desired region such as $\{a \leq \alpha\cdot\text{returns}+\beta\cdot\text{risk}\leq b\}$ and $\{c \leq \beta\cdot\text{returns}-\alpha\cdot\text{risk} \leq d\}$ (where $\alpha, \beta\neq 0$ and depend on $C$, $\Sigma$) can also achieve a balanced strategy while satisfying the CARE condition, allowing for a theoretically optimal solution. Hence, the CARE condition can be satisfied by appropriately adjusting the desired region if such adjustments are feasible. 
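The "rotated desired region" idea above can be made concrete with a hedged numerical sketch. This is our illustrative reading of the canonical-rectangle intuition, not the paper's exact Def. 3.5: a rectangle whose faces align with the eigenvectors of $\mathrm{Cov}(\mathbf{Y})$ decorrelates the coordinates, so the region's probability factorizes into one-dimensional Gaussian terms. The covariance and bounds below are hypothetical.

```python
import numpy as np
from math import erf, sqrt

# Hedged sketch: a rectangle aligned with the eigenvectors of Cov(Y)
# decorrelates the (jointly Gaussian) coordinates, so its probability
# factorizes into 1-D Gaussian terms. Sigma and bounds are hypothetical.
rng = np.random.default_rng(2)
Sigma = np.array([[2.0, 1.2],
                  [1.2, 1.0]])
vals, U = np.linalg.eigh(Sigma)          # columns of U: principal directions

Y = rng.multivariate_normal([0.0, 0.0], Sigma, size=200_000)
W = Y @ U                                # rotated coords: Cov(W) = diag(vals)
lo = np.array([-1.0, -1.0])
hi = np.array([1.0, 1.0])
mc = np.mean(np.all((W >= lo) & (W <= hi), axis=1))   # Monte-Carlo estimate

def norm_cdf(t):
    return 0.5 * (1.0 + erf(t / sqrt(2.0)))

analytic = np.prod([norm_cdf(hi[i] / sqrt(vals[i])) - norm_cdf(lo[i] / sqrt(vals[i]))
                    for i in range(2)])
```

The Monte-Carlo estimate of the rectangle's probability agrees with the product of one-dimensional terms, which is what makes such regions analytically convenient.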
Additionally, the inner CARE embedding of {$a_i \leq Y_i \leq b_i$} can be computed using techniques for identifying axis-parallel rectangles within polygons [1]. By defining a basis aligned with the specific matrix space, the CARE embedding of the polygon {$a_i \leq Y_i \leq b_i$} can be derived. Additional experimental results provide support of this argument as detailed in **A2**. We will incorporate this discussion in the revised version. Thanks! --- **W2.** Additional case studies of inner CARE embedding. **A2.** Thanks for your thoughtful question. We conducted additional experiments for cases where the dimensions of $\mathbf{Y}$ remain **dependent** even after alterations, demonstrating that our proposed inner CARE embedding method is effective and flexible. Specifically, the experiments include two types of desired regions: (i) Circular desired region, with the inner CARE embedding computed using Eq. (4); (ii) Axis-aligned rectangular region {$a_i \leq Y_i \leq b_i$}, with the inner CARE embedding computed using [1]. The visualizations are presented in Fig. 2 of [Anon. Link](https://default-anno-bucket.s3.us-west-1.amazonaws.com/rebuttal.pdf), and the detailed performance, measured by AUF probability, is listed below: | Region types| No action | QWZ23 [2] | MICNS [3] | Ours | | --------------------------- | :--------------: | ---------------- | ---------------- | ---------------- | | Circular region| $0.010\pm 0.036$ | $0.961\pm 0.013$ | $0.957\pm 0.024$ | $0.983\pm 0.018$ | | {$a_i \leq Y_i \leq b_i$} | $0.004\pm 0.021$ | $0.885\pm 0.028$ | $0.893\pm 0.044$ | $0.922\pm 0.083$ | These results show that our proposed generalization method, i.e., inner CARE embedding, is effective in several irregular or non-canonical scenarios. Additionally, we would like to emphasize that finding a maximal axis-parallel inner rectangle for an irregular region is a challenging geometric problem [1], especially in high-dimensional spaces. 
Hence in practice, one can instead identify an inner canonical rectangle (not necessarily the maximal one) and optimize using this inner rectangle as a surrogate, which still enables a deterministic transformation in Cor. 3.7. This approach remains valid, as any inner region of the original retains a probability mass less than or equal to that of the original region, ensuring consistency with Prop. 3.10 due to the non-negativity of the probability density function. We will incorporate these results and discussions into the revised paper. Thanks! --- **References:** [1] Finding the largest area axis-parallel rectangle in a polygon, Computational Geometry 1997. [2] Rehearsal Learning for Avoiding Undesired Future, NeurIPS 2023. [3] Avoiding Undesired Future with Minimal Cost in Non-Stationary Environments, NeurIPS 2024. --- We also take this opportunity to sincerely thank you for the careful review. Your suggestions are important for further improving the paper. Thanks again!
Summary: The paper proposes an algorithm for decision making that helps avoid an undesired future (AUF), i.e., increases the AUF probability. The new algorithm is shown to reduce time complexity compared to prior work and has shown performance improvements compared to a few baselines. Claims And Evidence: The theoretical claims in the paper seem to be supported by proper arguments. There is also limited empirical evidence supporting the proposed method. Methods And Evaluation Criteria: The main empirical evaluation criterion is the probability of AUF, as measured by the empirical results. Theoretical Claims: I did not check the correctness of the proof. Experimental Designs Or Analyses: The experimental design seems reasonable though I think it might be good to add a few more baseline methods. Supplementary Material: NA Relation To Broader Scientific Literature: I think the work is related to reinforcement learning and especially its deployment in real applications. Essential References Not Discussed: NA Other Strengths And Weaknesses: The paper proposes a more efficient algorithm compared to prior work, which is very desirable and this has been established theoretically. In practice there are also empirical improvements compared to prior work. I think the main weakness is how the methodology here relates more broadly to the RL community, both in terms of the methodology relevance and application relevance. Other Comments Or Suggestions: NA Questions For Authors: === *relevance to RL literature* ==== From the looks of the definition of AUF, it seems that we can think of it as a RL problem where success gets $r=1$ and failure gets $r=0$; by maximizing the average reward we essentially maximize AUF. I wonder if framing the problem this way can help bridge the work's relevance to the more general RL community. Do the authors agree with this characterization?
=== *RL algorithm baseline* === If the above characterization is proper, then I wonder whether there can be more RL-related baselines to compare against in Fig 4 (c), such as REINFORCE policy gradient. I also find that the methodology proposed in this work has a rather strong "model-based" flavor, in that in order to obtain the solution we need to already know the graphical model behind the transition dynamics. This access to the ground-truth model provides an advantage to the proposed method, and I wonder what happens if the assumed model deviates from the real-world domain; can we characterize the performance regret in that case? And what if we get to update the model based on real-life applications? These ablations or discussions would bring the proposed method closer to the problems that the RL community is well-versed in. === *time complexity* === As a minor point, I wonder what $p$ stands for in the time complexity comparison in Fig 1. I also assume that this complexity is obtained assuming access to the ground-truth graphical model of the problem, and I wonder what the sensitivity of the theoretical result is to the correctness of the graphical model (i.e., what happens if the model is $\epsilon$ away from the ground-truth model). I think characterizing those will be more meaningful for practitioners. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your detailed feedback! We hope our responses address your concerns. **Q1.** Relevance to RL research. **A1.** Thanks for your insightful question. Below, we clarify the connection between RL and the AUF problem and explain the distinctions. - **Connection between RL and AUF.** When interactions are available, AUF can be formulated as an MDP (or reduced to a Bandit if no state transitions) and solved using RL methods. Specifically, at round $t$, state $x_t$ is observed, then an action is taken by altering $z_t$, and the environment provides a reward: $r=1$ if $y_t\in S$ else $r=0$. This aligns with our RL baselines (Tab. 2&3) and additional experiments in **A2**. - **Distinctions between RL and rehearsal learning.** Rehearsal learning focuses on a specialized decision-making setting where interactions are limited or even unavailable. In this case, variables **X,Z,Y** follow a structured generative process parameterized by $\theta$ (in the MDP formulation, the transition dynamics and reward function also depend on $\theta$). Leveraging this fine-grained structure, reliable decisions can be made using only a small set of observational samples (for estimating $\hat\theta$) without necessarily requiring interactions with the environment. In contrast, online RL methods rely on extensive interactions for effective policy learning. While offline/hybrid online-offline RL might seem applicable, direct applications would also be unsuitable, as actions significantly shift the distribution of **Y** as discussed in Sec. 2, rendering the reward functions different between offline and online data. While RL/MDP provides a general framework for decision-making, real-world interactions can be costly or even unethical. E.g., in healthcare, doctors cannot freely experiment with untested treatments. In such cases, leveraging structural knowledge enables an effective policy without interaction-based exploration while also enhancing interpretability.
Finally, integrating a rehearsal-learning policy as an initialization for online RL could serve as a potential bridge between our work and the broader RL community, offering a promising yet challenging direction to improve RL’s sample efficiency in certain cases. We will incorporate this discussion into the revised paper. Thanks! --- **Q2.** RL baseline&Discussion on $\hat{\theta}$. **A2.** Thanks for your question. We conduct additional experiments in Fig. 3 of [Anon. Link](https://default-anno-bucket.s3.us-west-1.amazonaws.com/rebuttal.pdf) to show that policy gradient RL methods can be effective for AUF when sufficient interactions are available. Note that Fig. 4c in paper illustrates the performance of rehearsal methods w.r.t. the number of ***offline observational samples***, making it unsuitable for evaluating RL methods (including offline/hybrid online-offline RL as discussed in **A1**). Furthermore, we address implications of using an $\epsilon$-approximate model $\hat{\theta}$, i.e., $||\hat{\theta}-\theta||\leq \epsilon$: - **Empirical validation.** As shown in [1], $||\hat{\theta}-\theta||$ decreases as the number of observational samples increases. Hence, Fig. 4c shows that our approach's performance improves as $\epsilon$ decreases in practice. - **Theoretical analysis.** We would like to clarify that our theoretical guarantees are established on the AUF probability conditioned on $\hat\theta$ rather than on the true parameter $\theta$, as discussed below Thm. 3.8. Moreover, deriving a regret bound similar to those in RL literature is generally challenging because the rehearsal-learning policy is obtained by solving a probabilistic optimization (Eq. 2). In this case, although the output action $z$ depends on $\theta$, properties such as L-Lipschitz continuity are extremely difficult to establish, making regret analysis nontrivial. 
However, if the function $\ell \circ z(\cdot)$ can be assumed to be L-Lipschitz continuous, then the following bound can be derived: $$\ell(z(\hat{\theta}))-\ell(z^*)\leq L||\hat{\theta}-\theta||_2\lesssim\mathcal{O}(\frac{1}{\sqrt{n}}),$$ where $n$ is the number of observational samples. The last inequality follows from [1]. Finally, online updating of $\hat{\theta}$ can also be incorporated in our approach via an offline parameter-update step after each decision round. We will refine this discussion in the revised version. Thanks! --- **Q3.** The meaning of $p$. **A3.** Thanks for your feedback. In Tab. 1, $p$ represents the dimensionality of the actionable variables $z^\xi$. Rehearsal learning has two stages: (i) estimate $\theta$ from historical samples; (ii) make decisions after observing $x$. Only stage (ii) requires immediate actions, and Tab. 1 reports its time complexity. The sensitivity of the theoretical results is discussed in **A2**. We will incorporate this clarification in the revised version. Thanks again! --- **References:** [1] Avoiding Undesired Future with Minimal Cost in Non-Stationary Environments, NeurIPS 2024.
Global Context-aware Representation Learning for Spatially Resolved Transcriptomics
Accept (poster)
Summary: The paper introduces Spotscape, a novel framework for representation learning in Spatially Resolved Transcriptomics (SRT) data. The key contribution of the paper is the Similarity Telescope module, which captures global relationships between spots, addressing the limitations of existing graph-based methods that rely heavily on local spatial information. Additionally, the paper extends Spotscape to multi-slice tasks by introducing a prototypical contrastive learning (PCL) scheme and a similarity scaling strategy to mitigate batch effects during multi-slice integration. The authors conduct extensive experiments on multiple datasets, demonstrating the superiority in various downstream tasks. Claims And Evidence: The claims made in the paper are generally well-supported by clear and convincing evidence. The authors provide extensive experimental results across multiple datasets, showing that Spotscape outperforms existing baselines in tasks such as spatial domain identification, trajectory inference, and multi-slice integration. The ablation studies further validate the importance of each proposed module (e.g., Similarity Telescope, PCL, and similarity scaling). However, the paper could benefit from a more detailed discussion on the theoretical justification for the global similarity learning scheme, particularly how it addresses the limitations of local spatial information in SRT data. Methods And Evaluation Criteria: The proposed methods, including the Similarity Telescope module, prototypical contrastive learning (PCL) scheme, and similarity scaling strategy, are designed to address key challenges in Spatially Resolved Transcriptomics (SRT) data analysis. While the authors claim that these modules are novel and effective, I find that the theoretical justification and logical coherence of these methods are not sufficiently robust. 
For the Similarity Telescope Module, the authors argue that capturing global relationships between spots is crucial, especially for spots near spatial domain boundaries. However, the paper lacks a clear theoretical foundation for why global similarity learning is superior to local spatial information in all cases. Theoretical Claims: The paper does not present formal theoretical proofs. The authors provide intuitive explanations for the design choices, such as the global similarity learning scheme and the prototypical contrastive learning module. While the lack of theoretical guarantees is not a major drawback, a more formal analysis of the global similarity learning mechanism could strengthen the paper. Experimental Designs Or Analyses: The experimental design is sound and well-executed. The authors evaluate Spotscape on multiple datasets, covering both single-slice and multi-slice scenarios. The results are consistent across different datasets, demonstrating the generalizability of the proposed method. The ablation studies and sensitivity analysis provide valuable insights into the contribution of each module. However, the paper could benefit from a more detailed discussion of the limitations of the proposed method, particularly in scenarios where the global similarity assumption may not hold. Supplementary Material: The supplementary material includes additional experimental results, hyperparameter settings, and implementation details. The authors also provide a pseudo-code for Spotscape, which enhances the reproducibility of the work. The supplementary material is well-organized and complements the main paper effectively. Relation To Broader Scientific Literature: The authors acknowledge the limitations of existing graph-based methods and propose a novel approach to address these limitations. 
The introduction of global similarity learning and prototypical contrastive learning aligns with recent trends in self-supervised learning and graph representation learning. The paper builds on prior work such as STAGATE and SpaceFlow, but introduces significant improvements by incorporating global context and multi-slice integration. Essential References Not Discussed: The paper adequately covers the relevant literature, but it could benefit from a discussion of recent advances in SRT clustering and multi-view representation learning, which are closely related to the proposed method. Other Strengths And Weaknesses: Strengths: 1) The extension to multi-slice tasks using prototypical contrastive learning is well-designed and addresses a key challenge in SRT analysis. 2) The extensive experimental results and ablation studies provide strong evidence for the effectiveness of the proposed method. 3) The code is made publicly available, which enhances the reproducibility of the results. Weaknesses: 1) The paper lacks a formal theoretical analysis of the global similarity learning mechanism. 2) The limitations of the proposed method, particularly in scenarios where the global similarity assumption may not hold, are not thoroughly discussed. For example, considering the relationship between spots from a global perspective may introduce noise, making it difficult to learn spot information with the same semantic relationship. Other Comments Or Suggestions: The paper is well-written and presents the contributions and results clearly. However, the authors should consider adding a discussion on the limitations of their method, particularly in scenarios where the global similarity assumption may not hold. Questions For Authors: 1) Theoretical Justification: Could the authors provide a more formal theoretical justification for the global similarity learning mechanism? How does it address the limitations of local spatial information in SRT data?
2) What are the limitations of the proposed method, particularly in scenarios where the global similarity assumption may not hold? How might these limitations be addressed in future work? For example, considering the relationship between spots from a global perspective may introduce noise, making it difficult to learn spot information with the same semantic relationship. 3) In the comparison methods, please add the latest SRT clustering methods, such as [MAFN, TKDE'24] and [stGCL, ACM MM'24], to further validate the effectiveness of the proposed method. 4) The results indicate that Spotscape achieves fast training times, highlighting its practicality for high-throughput datasets (e.g., 100,000 spots) within a reasonable timeframe. However, the introduction of the prototypical contrastive learning (PCL) scheme might slow down the training process. Could the authors provide an analysis of the time complexity of each module, particularly focusing on the impact of PCL on the overall training time? This would help clarify the trade-offs between performance gains and computational cost. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time to provide constructive feedback on our paper. To address your concerns, we have added tables and figures in this [external link](https://anonymous.4open.science/r/Spotscape-31B6/Rebuttal.pdf) **Q1) Theoretical Justification** The similarity consistency loss in _Equation (1) of our manuscript_ serves two key purposes: **1) explicitly guiding the embedding space to capture quantitative similarity relationships**, and **2) encouraging the model to learn a global relational structure that spans all nodes**. In downstream tasks, we rely on cosine similarity between normalized embeddings. In other words, we treat the normalized embedding space as a Euclidean space, where distance serves as a proxy for semantic closeness. However, embeddings from a GAE trained solely with the reconstruction loss are optimized for compression. As a result, the cosine similarity between embeddings lacks direct interpretability or consistency across different pairs. In a Euclidean space, the consistency loss ideally equals zero. Thus, the consistency loss serves as a regularizer that aligns the embedding space more closely with the desired geometry, making similarity values more meaningful and consistent. Moreover, the limited receptive field of GNNs is problematic because downstream tasks involve comparing representations of distant nodes. The consistency loss helps mitigate this limitation by considering similarity relations between all node pairs. The reconstruction loss satisfies $\frac{\partial^2 \mathcal{L}_{\text{recon}}}{\partial \tilde{Z}_i \partial \tilde{Z'}_j} = 0$ for $i \neq j$, which implies that updates to each embedding do not depend on the others.
In contrast, the second derivative of the consistency loss, $\frac{\partial^2 \mathcal{L}_{\text{SC}}}{\partial \tilde{Z}_i \partial \tilde{Z'}_j}$, can have non-zero values, indicating that the update to each embedding depends on others, as shown in our proof **R1.1 in the external link**. In summary, the consistency loss not only enhances the quantitative interpretability of similarity in the learned space, but also provides a mechanism for global information flow, thus improving the utility of the representations for downstream tasks. **Q2) Scenarios where the global similarity assumption may not hold** We understand your concern that Spotscape could introduce noise, particularly when spots far from the anchor node but still within the same spatial domain are considered. However, we want to clarify that our method does not rely on any form of aggregation from global nodes. Instead, Spotscape is designed to learn the similarities between all spots, irrespective of their global or local relationships. Our approach focuses on learning these similarities directly, without assuming that distant spots within the same spatial domain must always share meaningful information. Rather than aggregating information from any nodes, the model learns the relationships between spots through augmentation-based consistency, which encourages only meaningful relationships to be captured. This avoids the risk of noise from distant spots and emphasizes learning the inherent semantic relationships between spots based on the data itself. In summary, Spotscape prioritizes the direct learning of spot similarities and consistency across augmentations, without aggregating information from distant or global nodes, which could potentially introduce noise. **Q3) More baselines** We have updated our experiments to include the two latest SRT clustering methods, MAFN (TKDE'24) and stGCL (ACM MM'24), across all relevant tasks, including clustering and integration.
The results are now reported in **Tables 1, 2, and 3 in the external link**. Compared to these baselines, Spotscape still demonstrates superior performance, highlighting the effectiveness of our proposed method. **Q4) Trade-offs between performance gains and computational cost of the PCL scheme** To address your concern regarding the trade-offs between performance gains and computational costs associated with prototypical contrastive learning (PCL), we generated a synthetic dataset using scCube [1], with mouse embryo data as the reference, and reported the results in **Figure 6 (external link)**. This analysis demonstrates that while PCL does require more training time, it leads to better performance. We argue that in practical scenarios, the decision to use PCL depends on the user's preference for balancing training time and performance. However, we emphasize that PCL does not lead to impractical training times, as it is not excessively slow, and can still be viable for real-world applications. --- [1] "Simulating multiple variability in spatially resolved transcriptomics with scCube," Nature Communications
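A minimal sketch of the cross-view similarity-consistency idea discussed in **Q1** above. This is our hedged reading of Eq. (1), assuming the loss penalizes the discrepancy between the two augmented views' pairwise cosine-similarity matrices; the embeddings here are random placeholders, not outputs of the actual Spotscape encoder.

```python
import numpy as np

# Hedged sketch of a cross-view similarity-consistency loss: align the
# pairwise cosine-similarity matrices of two augmented views, coupling
# ALL node pairs (unlike a per-node reconstruction loss).
# Embeddings below are random placeholders.
rng = np.random.default_rng(0)
n, d = 32, 16
Z = rng.standard_normal((n, d))              # view-1 embeddings (placeholder)
Zp = Z + 0.1 * rng.standard_normal((n, d))   # view-2 embeddings (augmented)

def cos_sim_matrix(E):
    E = E / np.linalg.norm(E, axis=1, keepdims=True)  # row-normalize
    return E @ E.T                                    # (n, n) cosine similarities

S, Sp = cos_sim_matrix(Z), cos_sim_matrix(Zp)
loss_sc = np.mean((S - Sp) ** 2)             # consistency across ALL node pairs
```

Because `loss_sc` compares full $n \times n$ similarity matrices, its gradient with respect to one embedding depends on every other embedding, which is the "global information flow" property argued for in the rebuttal.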
Summary: This paper proposes a new computational method, known as Spotscape, to integrate different spatial transcriptomics data. The model builds on graph neural networks. ## update after rebuttal I keep my score. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes, I have checked the correctness of proofs and claims. Experimental Designs Or Analyses: Yes, I have checked the analyses and designs. I have some questions about this, which are discussed in the questions section below. Supplementary Material: Yes, I have reviewed both the code and the appendix. Relation To Broader Scientific Literature: The contribution is minor, but scientists working on spatial transcriptomics analysis will be interested in reading it. Essential References Not Discussed: NA. Other Strengths And Weaknesses: Please see my comments. Other Comments Or Suggestions: Please see my questions. Questions For Authors: This paper proposes a method for spatial transcriptomics integration, which seems to have good performance, but I have some questions about the benchmarking analysis and model design. If the authors can address my concerns, I may consider raising my score. 1. The presentation of Figure 1 is not clear to me. It seems that the GAE variants and the proposed method identify similar regions. Why does the proposed method not exceed the GAE variants in both metrics? What is the meaning of the colors presented in the right panels? I think they do not align well with the labels presented in the left panel. The authors should consider improving the presentation. 2. It has been shown that the variation in spatial transcriptomic data can be decomposed into cell-type-specific signals and spatial signals. Therefore, is it still meaningful to use a GNN to encode cell-type-specific signals? The answer should include ablation studies based on a plain MLP. 3. What about the scalability of the proposed model? Can it handle large-scale datasets? 
For example, the spatial atlas data suggested by the spatial AD/HC database: https://ngdc.cncb.ac.cn/databasecommons/database/id/9046 or STImage 1K4M (https://github.com/JiawenChenn/STimage-1K4M)? 4. The authors do not present the tuning results of the baseline methods. Do they keep the default hyper-parameters for the baselines? If so, how can they ensure the comparison is fair, given that the authors tune their proposed method across different datasets? Also, the authors may need to consider PASTE2 (https://github.com/raphael-group/paste2) or CAST (https://www.nature.com/articles/s41592-024-02410-7) as a baseline. 5. The UMAP presented in Figure 6 looks strange. Do they observe trajectory patterns in this figure? If so, it would be helpful if they also computed scores based on the raw expression profiles, as I think the raw data capture the most information on trajectory relationships, and it looks like other methods may not be needed for analyzing this dataset. Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. To address your concerns, we upload additional results in the [external link](https://anonymous.4open.science/r/Spotscape-31B6/Rebuttal.pdf). **Q1-1) Presentation of Figure 1** _Figure 1_ highlights the limitations of previous methods that attempt to address the issues present in the Vanilla GAE (b-1), particularly the problem where boundary spots receive noisy information from spots in different spatial domains. While we describe the observation from _Figure 1 in the second paragraph of our manuscript_, we understand that the presentation of _Figure 1_ may have caused some confusion, as we did not clearly indicate which part of _Figure 1_ readers should focus on. The most important part of _Figure 1_ is the number of red dots, which represent **Wrongly Clustered in Boundary**, and the **boundary CA**. We will revise the manuscript to make this point more prominent. Additionally, the spot colors in _Figure 1_ (a) and (b) are independent. Please refer to the legend at the top of _Figure 1_ (a) of our manuscript. **Q1-2) Spotscape vs. GAE variation** Spotscape did not exceed GAE with **oracle edges** (b-3), in which spots are connected only to others sharing the same ground-truth label, in terms of Total CA; however, this oracle setting is not a realistic scenario. Moreover, Spotscape does outperform the oracle setting in terms of boundary CA, demonstrating the effectiveness of our proposed method in addressing the boundary-related issues. **Q2) GNN for cell-type specific signals** Your concern arises from the fact that all of the datasets we used are annotated by spatial domains rather than cell types, and that our evaluations only focus on spatial domains. You may also have assumed that, as our graph is primarily constructed from spatial coordinates, it would only be effective for capturing spatial signals, while not as useful for encoding cell-type-specific signals. 
To address this, we introduce a new dataset from the **Postnatal Mouse Brain (PMB)** (STOmicsDB ID: STDS0000004), **annotated by cell types**, and perform ablation studies with an MLP encoder. The clustering performance for both PMB and DLPFC is reported in **Tables 5 and 6 in the external link**. We observe that the GNN-based encoder is less beneficial for the PMB (cell-type) dataset than for the DLPFC (spatial domain) clustering. This indeed aligns with your intuition that the spatial graph carries more information about spatial-specific signals than about cell-type-specific signals. However, the GNN still provides a performance gain on the PMB data: spatial regions implicitly carry cell-type-related signals, since the same cell types tend to cluster together within tissues, reflecting their functional and structural organization as well as their interactions within specific tissue regions [1]. In fact, the homophily ratio of the SNN graph in both the DLPFC and PMB data is 0.92, demonstrating that spatial regions also carry cell-type-related signals. **Q3) Scalability** Please refer to _Figure 10 in our manuscript_, which shows reasonable runtime for up to 100,000 spots. To further address your concern, we conducted additional experiments using the Mouse Main Olfactory Bulb dataset (STOmicsDB ID: STDS0000142 - 1,792,797 spots in 39 slices). In **Figure 5 in the external link**, we present the runtime for different numbers of slices, demonstrating that Spotscape can handle large datasets without exponential runtime growth as the number of slices grows. **Q4-1) Hyper-parameters setting** Please refer to our rebuttal regarding **Q4** to reviewer **uFL9** due to the character limits. **Q4-2) More baselines** We have updated the baselines **in the external link**. 
* PASTE2: Slice-to-slice alignment using optimal transport in **Table 4** * CAST: Integration and alignment using CCA-SSG in **Table 3** (Heterogeneous integration), **Figure 1** (UMAP for batch effect), **Figure 3** (Trajectory inference), and **Table 4** (Alignment) **Q5-1) Figure 6 UMAP & Trajectory of raw expression** We did not intend to show trajectory inference results in _Figure 6_. Our goal was to highlight the performance of heterogeneous integration in reducing batch effects. To address your concern, we show that scores based on the raw expression do not provide reliable information on the trajectory. To evaluate this, we calculate the pseudo-Spatiotemporal Map and report its correlation with the ground-truth layer, adding the results in **Figure 2 in the external link**. These results indicate that raw expression alone performs the worst, demonstrating the necessity of additional representation learning methods. Finally, we also conducted trajectory inference on this dataset and reported the results **in Figures 3 and 4 in the external link**. These results show that while Spotscape successfully captures the trajectory, the raw expression-based approach fails to do so. --- [1] "An introduction to spatial transcriptomics for biomedical research," Genome Medicine --- Rebuttal Comment 1.1: Comment: Thank you for submitting rebuttal information, but I think most of my concerns are still not well resolved, and trajectory inference is also important for evaluating batch effect correction, as mentioned in scIB. The improvement of this proposed method is also not very interesting, and I think this paper is more suitable for bioinformatics-specific journals or conferences. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our paper. 
> trajectory inference is also important for evaluating batch effect correction, as mentioned in scIB Please note that, as per the reviewer's request, [in Q5-1 of our first rebuttal](https://openreview.net/forum?id=jeJGH6UDOL&noteId=yvHFdm7jgv), we have indeed conducted trajectory inference after integrating two slices (refer to Figures 3 and 4 in the [external link](https://anonymous.4open.science/r/Spotscape-31B6/Rebuttal.pdf)). While we are not entirely certain why it may have seemed that trajectory inference was not addressed in our initial rebuttal, your comments suggest that you may have been referring to the "trajectory conservation" metric used in scIB. We would like to highlight the difference between our reported correlation and the "trajectory conservation" metric in scIB: we used Pearson correlation, whereas scIB employed Spearman correlation with scaling. In response to this, we have now included the "trajectory conservation" metric in Table 3 of the external link. The updated results are presented below. | | Trajectory conservation | |----|-----:| | Raw | 0.27 (0.00) | | GraphST | 0.16 (0.14) | | STAligner | 0.35 (0.26) | | CAST | 0.26 (0.45) | | **Spotscape** | 0.97 (0.02) | The updated results demonstrate that our method performs well in terms of conserving the biological signal related to the trajectory. > I think this paper is more suitable for bioinformatics-specific journals or conferences We would like to point out that the [ICML 2025 call for papers](https://icml.cc/Conferences/2025/CallForPapers) explicitly highlights "Application-Driven Machine Learning" as one of its topics of interest, with "biosciences" mentioned as an example. In fact, research on 'Spatially Resolved Gene Expression' has also been published at NeurIPS (https://arxiv.org/pdf/2306.01859) and ICLR (https://arxiv.org/pdf/2501.15598), both of which are highly regarded venues with a scope and focus similar to ICML's. 
Furthermore, "AI for Science" has emerged as a highly prominent and actively researched topic within leading computer science conferences. Therefore, we would like to respectfully emphasize that this topic is not only suitable for bioinformatics journals and conferences, but also highly relevant to broader machine learning venues. > The improvement of this proposed method is also not very interesting While we are not certain why our work may not have fully captured your interest, we would like to respectfully highlight that, in contrast to more established areas within computer science such as computer vision, the application of AI to spatial transcriptomics remains in its early stages and offers significant potential for impactful contributions. This area requires simple yet practical solutions. Our proposed method is the first to address a variety of downstream tasks with strong performance, while also offering fast inference times. If the reason for your lack of interest is due to unresolved concerns in the other questions, we would like to elaborate further on Q1–Q4. However, due to character restrictions, we will address these briefly: **Q1) Presentation of Figure 1** : We believe we have provided a more detailed explanation of this figure. We would like to emphasize once again that the performance of GAE with oracle edges (b-3) is based on a synthetic perfect graph, which fully reflects local information. This setting is not realistic, and it serves to highlight our contribution that global relationships are also important. This is why our method performs slightly lower than this setup in terms of Total CA. --- **Q2) Cell-Type Specific Signals**: We believe this concern is well addressed through the additional experiments presented in Tables 5 and 6 in the external link. 
| Spatial Domain | ARI | NMI | CA | |----|:----:|:----:|:----:| | Spotscape (w/ MLP encoder) | 0.20 (0.01) | 0.30 (0.01) | 0.42 (0.02) | | **Spotscape** | 0.48 (0.02) | 0.64 (0.01) | 0.61 (0.02) | | Cell-type | ARI | NMI | CA | |----|:----:|:----:|:----:| | Spotscape (w/ MLP encoder) | 0.58 (0.03) | 0.65 (0.02) | 0.67 (0.03) | | **Spotscape** | 0.61 (0.07) | 0.68 (0.03) | 0.74 (0.06) | --- **Q3) Scalability**: This question mainly concerns running time, which we have clearly explained. | # of slices (# of spots) | 2 (71,192) | 5 (222,482) | 10 (475,812) | 20 (941,625) | 39 (1,792,797) | |--|--:|--:|--:|--:|--:| | STAligner | 2,054 | 6,084 | 21,015 | 273,929 | OOM | | **Spotscape** | 2,113 | 3,755 | 12,221 | 18,467 | 35,586 | --- **Q4-1) Hyperparameter Settings**: These have been thoroughly covered in Appendix E.1 in our first submission. **Q4-2) Baselines**: The proposed baselines are included in our first rebuttal. --- We sincerely hope that all of your concerns have been fully addressed and kindly request that you update the score.
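For reference, the homophily ratio cited in **Q2** above is the standard edge-homophily measure: the fraction of edges whose two endpoints share a label. A minimal sketch, assuming an undirected edge list and per-node labels (the function and argument names are illustrative):

```python
def homophily_ratio(edges, labels):
    """Fraction of edges connecting two nodes with the same label.
    edges: iterable of (u, v) pairs; labels: sequence or dict mapping node -> label."""
    edges = list(edges)
    same = sum(1 for u, v in edges if labels[u] == labels[v])
    return same / len(edges)
```

A value of 0.92, as reported for the SNN graphs above, indicates that the vast majority of spatial neighbors share the same label.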
Summary: The paper introduces Spotscape, a novel framework for representation learning in Spatially Resolved Transcriptomics (SRT) data. Spotscape incorporates a Similarity Telescope module to capture global similarity relationships and integrates Prototypical Contrastive Learning (PCL) and a similarity scaling strategy to tackle challenges in both single-slice and multi-slice tasks. Extensive experiments demonstrate Spotscape’s superiority in spatial domain identification, trajectory inference, imputation, and multi-slice integration compared to other baselines. Claims And Evidence: Most of the paper’s claims are supported by the presented experiments and results; however, there are a few issues: 1. In Section 5.3 on Scalability, the authors discuss only the training time. They should also present evidence of performance improvements, as both efficiency and effectiveness are important when assessing scalability. 2. For imputation tasks, the manuscript lacks comparisons with domain-specific imputation methods. Including results from specialized methods would strengthen this claim. Methods And Evaluation Criteria: The methods and evaluation criteria are well designed. The paper provides rigorous benchmarking across diverse datasets and tasks, which supports the evaluation of Spotscape’s capabilities. Theoretical Claims: The submission does not introduce new theoretical claims or detailed proofs. Instead, the authors justify their methodological contributions through empirical experiments and biological reasoning. It would be beneficial for the authors to discuss any theoretical limitations or provide additional analysis—for instance, by citing references that explain the rationale behind the design of the loss functions in formula (8). Experimental Designs Or Analyses: The experimental framework is generally sound, with extensive benchmarking and detailed evaluations. However, some aspects could be improved: 1. 
The results are reported as mean ± standard deviation over 10 runs. Incorporating statistical significance tests (e.g., t-tests) would help confirm that the improvements are statistically robust. 2. It appears that Spotscape requires considerable effort for hyper-parameter tuning. The authors should provide guidance or strategies for tuning these hyper-parameters to benefit readers who may want to reproduce or build upon this work. Supplementary Material: The supplementary material was reviewed thoroughly and covers: Dataset statistics, Introduction of baseline methods, Pseudo-code of Spotscape, Additional analysis of experimental results, and Future work. This supplementary content adds valuable context and supports the findings of the main manuscript. Relation To Broader Scientific Literature: The paper's contributions address current limitations in spatially resolved transcriptomics (SRT) analysis, graph-based representation learning, and prototypical contrastive learning for biological data. Specifically, the work builds on: (1) Spatially resolved transcriptomics (baselines): STAGATE[1], SpaGCN[2], SpaceFlow[3], etc. (2) Graph-based representation learning (methodology): Graph Convolutional Networks (GCN)[4], etc. (3) Prototypical contrastive learning (methodology): scGPCL[5], scPoli[6], etc. [1] Deciphering spatial domains from spatially resolved transcriptomics with an adaptive graph attention auto-encoder. Nature Communications, 2022. [2] SpaGCN: Integrating gene expression, spatial location and histology to identify spatial domains and spatially variable genes by graph convolutional network. Nature Methods, 2021. [3] Identifying multicellular spatiotemporal organization of cells with SpaceFlow. Nature Communications, 2022. [4] Semi-supervised classification with graph convolutional networks. 2016. [5] Deep single-cell RNA-seq data clustering with graph prototypical contrastive learning. 
Bioinformatics, 2023. [6] Population-level integration of single-cell datasets enables multi-scale analysis across samples. Nature Methods, 2023. Essential References Not Discussed: For imputation tasks, the authors should compare Spotscape with specialized imputation methods. Including domain-specific methods would enhance the credibility of their claims and provide a more comprehensive evaluation. Other Strengths And Weaknesses: Other strengths: 1. The paper is well-organized and easy to follow. 2. Although the methodological contribution is incremental, the practical application in SRT data analysis is compelling. 3. The experiments are well-designed, with rigorous benchmarking across multiple datasets and tasks. Other weaknesses: 1. The scalability analysis is incomplete, as it focuses solely on training time without addressing performance improvements. 2. The manuscript does not compare Spotscape with specialized imputation methods, which is critical for validating the imputation task claims. 3. The paper could benefit from additional statistical analysis and clearer guidance on hyper-parameter tuning. Other Comments Or Suggestions: I recommend that the authors: 1. Include statistical significance tests (e.g., t-tests) to support the experimental findings. 2. Provide comparisons with domain-specific imputation methods to strengthen the claims related to imputation. 3. Offer additional discussion on the theoretical underpinnings of the loss functions (e.g., in formula (8)) and clarify any potential limitations of their approach. 4. Include guidance on the hyper-parameter tuning process to assist readers in replicating the experiments. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive feedback regarding our manuscript. To address your concerns, we have added tables and figures in this [external link](https://anonymous.4open.science/r/Spotscape-31B6/Rebuttal.pdf). **W1) The scalability analysis is incomplete** In our _manuscript_, we initially focused on training time in _Figure 10_ because our performance improvements had already been demonstrated through various experiments. The scalability experiment was conducted using synthetic data without meaningful spot information, making it unsuitable for reporting performance metrics. To provide a more comprehensive evaluation, we generated another synthetic dataset using scCube [1], with mouse embryo data as the reference. In **Figure 6 in the external link**, we demonstrate that Spotscape not only achieves fast inference but also maintains robust performance across varying numbers of spots and slices. **Q1) t-tests results** In the **external link**, we have conducted t-tests for all our experiments. We indicate statistical significance with ** for p-value < 0.01 and * for p-value < 0.05. **Q2) Comparison with domain-specific imputation methods** Most existing imputation methods in spatial transcriptomics, such as Tangram [2] and stDiff [3], rely on single-cell RNA sequencing data as a reference for improvement. However, this approach is not always applicable, as single-cell reference data is not always available, making these methods unsuitable as direct baselines for our study. Given this limitation, we considered STAGATE, GraphST, SpaCAE, and SEDR as state-of-the-art baselines, as they denoise input expression data using decoded outputs and have also reported imputation results in their respective papers. To further address your concern, we additionally include stMCDI [4], which employs masked conditional diffusion strategies. To our knowledge, this is the most recent baseline that operates under the same setting as ours. 
Through this experiment in **Figure 7 (external link)**, we demonstrate that Spotscape continues to achieve the best results. **Q3) Discussion on the theoretical underpinnings of the loss functions** As you pointed out in your review, our paper primarily focused on practical applications rather than theoretical analysis. However, to provide more clarity on the theoretical underpinnings of our approach, we would like to elaborate on the similarity consistency loss presented in _Equation (1) of our manuscript_. This loss function is a key contribution and one of the main components in _Equation (8) of our manuscript_. We discussed this loss in our rebuttal regarding **Q1** for reviewer **aaPZ** due to character limits. Meanwhile, outliers may potentially impact the embeddings of other nodes, as the loss considers global relationships, which could be a limitation. Despite these challenges, the consistency loss remains an effective tool for improving the model's global contextual understanding. **Q4) Guidance on the hyper-parameter tuning process** _In lines 245-253 of our manuscript_, we clarify that > To ensure fairness, we conduct a hyperparameter search for all baseline methods instead of using their default settings, as optimal hyperparameters may vary across datasets. The best-performing hyperparameters are determined based on NMI using the first seed. For Spotscape, only the learning rate is searched using the same criterion. Details of the selected hyperparameters and search spaces are provided in _Appendix E.1 of our manuscript_. We ensure fairness in the comparison by conducting a broader hyperparameter search for the baseline methods compared to our proposed method, making the comparison as fair and consistent as possible. Specifically, _in Appendix E.1 of our manuscript_, we state that for Spotscape, we utilized default parameters across all datasets except for the learning rate. 
However, for the baseline methods, additional parameters, such as loss balancing, are also tuned along with the learning rate, because some baselines are highly sensitive to them. In this section, we also provide the full search spaces for each baseline. To further address your concern, we have now added the final selected hyperparameters for all baseline methods across all datasets to our [Anonymous Github](https://anonymous.4open.science/r/Spotscape-31B6/config/README.md), ensuring full transparency and reproducibility. --- [1] "Simulating multiple variability in spatially resolved transcriptomics with scCube," Nature Communications [2] "Deep learning and alignment of spatially resolved single-cell transcriptomes with Tangram," Nature Methods [3] "stDiff: A diffusion model for imputing spatial transcriptomics through single-cell transcriptomic," Briefings in Bioinformatics [4] "stMCDI: Masked Conditional Diffusion Model with Graph Neural Network for Spatial Transcriptomics Data Imputation," BIBM 2024
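The significance markers reported in **Q1** (** for p-value < 0.01, * for p-value < 0.05) can be reproduced with a standard two-sample test over per-seed scores; this sketch assumes Welch's unpaired t-test, so treat it as illustrative rather than the exact procedure used in the external link.

```python
from scipy import stats

def significance_marker(scores_a, scores_b):
    """Two-sided Welch's t-test on per-seed scores of two methods; returns the
    marker convention used above (** for p < 0.01, * for p < 0.05)."""
    _, p_value = stats.ttest_ind(scores_a, scores_b, equal_var=False)
    return "**" if p_value < 0.01 else "*" if p_value < 0.05 else ""
```

Given the 10-run mean ± standard-deviation format of the tables, per-seed score lists are the natural input for such a test.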