Dataset columns (string length ranges / distinct class counts per field):

| Column | Type | Range / classes |
|--------|------|-----------------|
| title | string | lengths 15–163 |
| paper_decision | string | 4 classes |
| review_1 | string | lengths 853–32.6k |
| rebuttals_1 | string | lengths 0–15.1k |
| review_2 | string | lengths 1.03k–35.6k |
| rebuttals_2 | string | lengths 0–15.1k |
| review_3 | string | lengths 807–27.4k |
| rebuttals_3 | string | lengths 0–15k |
| review_4 | string | lengths 780–22.2k |
| rebuttals_4 | string | lengths 0–15.1k |
| review_5 | string | 171 classes |
| rebuttals_5 | string | 166 classes |
| review_6 | string | 25 classes |
| rebuttals_6 | string | 24 classes |
| review_7 | string | 4 classes |
| rebuttals_7 | string | 4 classes |
SPMC: Self-Purifying Federated Backdoor Defense via Margin Contribution
Accept (poster)
Summary: The authors propose a new federated backdoor defense method named SPMC. SPMC employs the Shapley value to assess the contribution of each client and then reweights their model updates (server side). Additionally, SPMC utilizes knowledge distillation to calibrate the direction of gradients (client side). The authors conducted several experiments to validate SPMC.

Claims And Evidence: Some claims lack formal substantiation. For instance, it is unclear why the federation resulting from a new client joining the existing client pool would produce a malicious contribution (right column, Lines 196-198).

Methods And Evaluation Criteria: The authors introduce a new method. This paper does not involve any new datasets.

Theoretical Claims: The paper does not contain theoretical claims.

Experimental Designs Or Analyses: The experiments are flawed, as the authors have overlooked many recent backdoor attack methods.

Supplementary Material: I reviewed Appendix A and Appendix B.

Relation To Broader Scientific Literature: The proposed method is a new federated backdoor defense method, which could potentially enhance the robustness of federated learning.

Essential References Not Discussed: The authors have missed some federated backdoor attack methods [1].

[1] Bad-PFL: Exploring Backdoor Attacks against Personalized Federated Learning, ICLR

Other Strengths And Weaknesses: The paper is well written and easy to follow. It has good motivation and intuition, but I believe the experiments are insufficient: the authors only validated the effectiveness of their method against DBA.

Other Comments Or Suggestions:
* It would be beneficial to present experimental results demonstrating the proposed method's effectiveness against more advanced attack methods.
* Since FedAvg is hardly used in non-IID scenarios, I recommend the authors consider incorporating some personalized federated learning methods.

Questions For Authors: See above.

Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer 7qEv: Thank you for your thoughtful review and for raising key concerns regarding our work. We hope the following responses will address your concerns and lead you to update the score.

**Question**

**Q1: Lack of substantiation and theoretical claims.** (Claims And Evidence & Theoretical Claims)

A1: We sincerely apologize for the confusion. Inspired by the Shapley value, we define a client's contribution to its marginal coalition as the marginal contribution. Specifically, for client $k$, its contribution to the marginal coalition is denoted as
$$
\phi_k(k;\ \Gamma;\ D;\ N)=\mathbb{E}_{\Gamma\sim D^N}\left[\Gamma(N\setminus\{k\})-\Gamma(\{k\})\right],
$$
where $D^N$ represents the collection of all clients' datasets, and $\Gamma(N\setminus\{k\})-\Gamma(\{k\})$ captures the model parameter difference between the marginal coalition and client $k$. Next, we introduce the following theorem to demonstrate the role of marginal contribution in identifying malicious clients.

Theorem: Let the parameter aggregation function $\Gamma$ be $B$-Lipschitz with respect to $Z$. Suppose $D_k$ and $D_p$ are two data distributions over $Z$. Then, for clients $k,p\in N$,
$$
\phi_k(k;\ \Gamma;\ D_k;\ N)-\phi_p(p;\ \Gamma;\ D_p;\ N)\le 2NB\cdot W(D_k,D_p),
$$
where $W$ denotes the distributional difference between the two data distributions.

This theorem bounds the change in marginal contribution under two different data distributions. Since malicious clients have significantly different distributions, their marginal contributions become easier to distinguish. We will add this theorem's derivation in the final version. Thank you for the valuable comment.

**Q2: Extended experiments on a new dataset.** (Methods And Evaluation Criteria)

A2: We agree with you and have incorporated this suggestion throughout our paper. We extended our experiments to the CIFAR-100 dataset using ResNet-18. As shown in Table 1, even in more complex task scenarios, both DnC and SPMC maintain their defensive effectiveness. However, DnC struggles to match SPMC's backdoor defense due to the limited information captured by its sub-dimensions. Moreover, the increased model complexity and task difficulty hinder convergence of the local gradient learning rate, significantly impacting RLR's main-task performance.

*Table 1: Comparison with the SOTA backdoor-robust solutions on the CIFAR-100 dataset using ResNet-18. The malicious proportion is γ=0.3 and the local data poisoned portion is set as 0.3.*

| Method | A↑ | R↑ | V↑ |
|--------|----|----|----|
| FedAvg | 55.29 | 6.91 | 31.10 |
| DnC | 54.44 | 11.31 | 32.88 |
| RLR | 50.94 | 7.86 | 29.40 |
| Ours | 55.14 | 35.26 | **45.20** |

**Q3: Lack of comparison to recent backdoor attack methods under PFL.** (Essential References Not Discussed & Other Comments Or Suggestions & Other Strengths And Weaknesses)

A3: Thank you for your suggestion. We would like to clarify that the attack method used in our approach is BadNet, not DBA. BadNet poisons the model by injecting whole static triggers into the local dataset, whereas DBA constructs a global trigger by combining local triggers across multiple clients. To verify that SPMC can maintain the robustness of federated systems under a stronger attack, we followed your suggestion and introduced the attack method Bad-PFL [1]. This method leverages natural data features as triggers, achieving both effectiveness and stealth under a PFL method (i.e., FedProx [2]). As shown in Table 2, Bad-PFL achieves better attack performance than BadNet in PFL.

*Table 2: Comparison of BadNet and Bad-PFL under PFL. The experiment is based on CIFAR-10 with a malicious ratio of 0.3 and SimpleCNN.*

| Method | FedAvg A↑ | FedAvg R↓ | FedProx A↑ | FedProx R↓ |
|--------|:---------:|:---------:|:----------:|:----------:|
| BadNet | 64.82 | 36.12 | 61.37 | 35.03 |
| Bad-PFL | **70.08** | **8.65** | **68.14** | **8.17** |

Table 3 presents the resistance of two defense methods and one no-defense baseline (Equal) against the Bad-PFL attack in the context of personalized federated learning.

*Table 3: Comparison with the SOTA backdoor defenses under the **Bad-PFL** attack in PFL. The experiment is based on the CIFAR-10 dataset with a malicious ratio of 0.3 and SimpleCNN.*

| Method | FedAvg A↑ | FedAvg R↑ | FedAvg V↑ | FedProx A↑ | FedProx R↑ | FedProx V↑ |
|--------|:---------:|:---------:|:---------:|:----------:|:----------:|:----------:|
| Equal | 70.08 | 8.65 | 39.37 | 68.14 | 8.17 | 38.16 |
| DnC | 61.95 | 62.86 | 62.51 | 60.13 | 53.77 | 56.95 |
| Ours | 67.72 | 71.87 | **69.80** | 58.65 | 61.58 | **60.12** |

As shown in Table 3, although Bad-PFL achieves strong attack performance in personalized federated learning settings, it can still be effectively detected by defense methods such as DnC and SPMC. Moreover, SPMC demonstrates better defense effectiveness than DnC. Overall, even in the face of advanced attack strategies and PFL methods, SPMC is able to maintain the security of the federated system to a considerable extent.

[1] Bad-PFL: Exploring Backdoor Attacks against Personalized Federated Learning. In Proc. of ICLR, 2025.

[2] Federated Optimization in Heterogeneous Networks. In MLSys, 2020.
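The server-side reweighting sketched in this rebuttal — score each client by its marginal contribution against the leave-one-out coalition, then favor high-contribution clients — can be illustrated roughly as follows. The cosine-similarity scoring and softmax weighting here are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def margin_contribution_weights(updates, temperature=1.0):
    """Illustrative sketch: score each client's update by its cosine
    similarity to the mean update of all *other* clients (its marginal
    coalition), then turn the scores into aggregation weights.
    Assumes at least two clients."""
    n = len(updates)
    total = np.sum(updates, axis=0)
    scores = []
    for k in range(n):
        coalition = (total - updates[k]) / (n - 1)  # leave-one-out mean
        u = updates[k]
        cos = np.dot(u, coalition) / (
            np.linalg.norm(u) * np.linalg.norm(coalition) + 1e-12
        )
        scores.append(cos)
    # softmax over scores: benign (aligned) clients get larger weights,
    # clients pointing away from the coalition get suppressed
    w = np.exp(np.array(scores) / temperature)
    return w / w.sum()
```

With three roughly aligned updates and one opposing (malicious-looking) update, the opposing client receives the smallest aggregation weight.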
Summary: This paper introduces a technique named SPMC (Self-Purifying Federated Backdoor Defense via Margin Contribution), which aims to detect and mitigate backdoor attacks in federated learning systems by leveraging the concept of Shapley values. SPMC not only focuses on the behavior of individual clients but also emphasizes the interactions among clients and their impact on the overall coalition model.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Yes

Theoretical Claims: SPMC provides a comprehensive summary of federated defense methods and clearly describes the motivation and details of the proposed method.

Experimental Designs Or Analyses: The experimental metrics are acceptable, but the claims lack support from additional experiments.

Supplementary Material: The authors provide the code.

Relation To Broader Scientific Literature: SPMC not only defends against backdoor attacks at the server side but also prevents malicious updates locally, thereby extending the robustness of federated learning systems beyond traditional server-centric defenses.

Essential References Not Discussed: No

Other Strengths And Weaknesses: The authors categorize existing defense methods into three types and observe that these methods are primarily based on two assumptions. They provide a very clear motivation and introduce SPMC, a method that leverages margin contributions to defend against backdoor attacks. The paper clearly explains how SPMC is deployed on both the client and server sides, and experiments demonstrate the effectiveness of SPMC.

The major strengths of this work are as follows:
- The authors' summary of federated defense methods is very interesting, and the motivation of the article is clear.
- Experimental results (Figure 7) show that SPMC can extract key features from an image even when attackers add triggers to it.

Weaknesses
- The experimental results on traditional datasets like CIFAR-10 and MNIST demonstrate promising performance of SPMC. However, CIFAR-10 and MNIST are commonly used simple datasets. Have the authors attempted testing on complex datasets to further validate the effectiveness of SPMC?
- The article seems to only discuss scenarios with attack ratios of 0.2 and 0.3. I am curious what would happen when the attack ratio is set to 0.5.
- I am still a bit confused about the concept of margin contribution. Can you briefly explain the concept of "margin contribution" as used in this paper?
- The color contrast in the article is not strong enough, making it difficult for me to focus on the key points in the figures.

Other Comments Or Suggestions: The elements in the framework diagram of the article are somewhat redundant.

Questions For Authors: No

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer VbBm: Thank you very much for your valuable suggestions—they have provided us with clear direction for further exploring the details of our method. We hope the following responses will address your concerns.

**Question**

**Q1: Lack of support from other experiments.** (Experimental Designs Or Analyses)

A1: We will expand to more complex experimental settings and incorporate a wider range of attack types in the final version. Thank you for your suggestion.

**Weakness**

**W1: Evaluation on complex datasets.**

A2: We agree with you and have incorporated this suggestion throughout our paper. We have extended our experiments to the CIFAR-100 dataset using ResNet-18 as the backbone. As shown in Table 1, even with more complex models and task scenarios, both DnC and SPMC are able to maintain their defensive effectiveness. However, DnC struggles to achieve stronger backdoor defense than SPMC because of the limited information in its sub-dimensions. In addition, due to the increased model complexity and the difficulty of the classification task, the gradient learning rate on local clients fails to converge, which significantly undermines RLR's main-task performance.

*Table 1: Comparison with the state-of-the-art backdoor-robust solutions on the CIFAR-100 dataset using ResNet-18 as the backbone. The malicious proportion is γ=0.3 and the local data poisoned portion is set as 0.3.*

| Method | A↑ | R↑ | V↑ |
|--------|----|----|----|
| FedAvg | 55.29 | 6.91 | 31.10 |
| DnC | 54.44 | 11.31 | 32.88 |
| RLR | 50.94 | 7.86 | 29.40 |
| Ours | 55.14 | 35.26 | **45.20** |

**W2: Discussion of the scenario with the attack ratio set to 0.5.**

A3: You have raised an interesting concern. We compare the performance of SPMC under a high attack ratio (i.e., γ=0.5) and a lower attack ratio (i.e., γ=0.3). As shown in Table 2, when the proportion of malicious clients increases to 0.5, it becomes more challenging to identify and mitigate the backdoor attack, as the probability that malicious clients have marginal contributions similar to those of the marginal coalition increases significantly. However, we believe such a high attack ratio is outside the scope of our paper: in real-world deployments of federated learning systems, the proportion of attackers is typically low [1]. Therefore, we focus on attack ratios of 0.2 and 0.3, which are more consistent with commonly observed threat models in practice. In this work, SPMC demonstrates reliable defense performance across multiple task settings, indicating its effectiveness in maintaining the robustness of real-world federated systems.

*Table 2: Comparison with different malicious proportions γ on the CIFAR-10 dataset with ResNet-18.*

| Method | γ=0.3 A↑ | γ=0.3 R↑ | γ=0.3 V↑ | γ=0.5 A↑ | γ=0.5 R↑ | γ=0.5 V↑ |
|--------|:--------:|:--------:|:--------:|:--------:|:--------:|:--------:|
| FedAvg | 89.77 | 13.19 | 51.48 | 89.87 | 12.86 | 51.37 |
| Ours | 90.32 | 31.69 | **61.01** | 87.83 | 18.68 | **53.26** |

**W3: Explanation of the concept of "margin contribution".**

A4: Our method is inspired by the Shapley value, using marginal contribution to measure each client's impact on the overall federation in federated learning. In the SPMC approach, we estimate a client's marginal contribution by simplifying the calculation of its influence on the coalition formed by the remaining clients. The server then reallocates aggregation weights based on these contributions, increasing the influence of benign clients while reducing that of malicious ones. On the client side, marginal contribution is used to guide local gradient adjustments, ensuring the update direction aligns with global knowledge and preventing interference from malicious data. This approach effectively identifies and mitigates the impact of attackers, enhancing the security and robustness of the federated learning system. We will add a detailed explanation of this in our final version. Thank you for the comment.

**W4: The color contrast in the article is not strong enough.**

A5: We will make the necessary adjustments in the corresponding sections. Thank you for your feedback.

[1] Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Federated Learning. arXiv:2108.10241.
Summary: Existing defenses rely on assumptions like individual behavior isolation and passive purification, which malicious clients can bypass. This paper proposes SPMC, inspired by the Shapley value. It measures inter-client margin contributions to identify malicious attackers and self-purifies the parameter distribution of potential malicious actors. This is achieved through margin contribution aggregation on the server side and local gradient alignment on the client side.

Claims And Evidence: Yes. The authors categorize related work and introduce the crucial references to support their claims.

Methods And Evaluation Criteria: The method is well evaluated with clear criteria.

Theoretical Claims: This article utilizes the Shapley value to defend against backdoor attacks. The theoretical description of the Shapley value needs further exploration in future work.

Experimental Designs Or Analyses: The experiments are well executed and comprehensive, and reasonable explanations are provided for the experimental results. But the main results are obtained on small datasets and backbones, and could be evaluated in larger and more complex scenarios.

Supplementary Material: The authors provide the algorithm code. It is helpful for understanding the method pipeline and the related method details.

Relation To Broader Scientific Literature: This article innovatively applies the Shapley value to defend against backdoor attacks in federated learning, contributing to the broader scientific literature on enhancing distributed system security.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Strengths
1. SPMC draws inspiration from the Shapley value by employing inter-client margin contributions to effectively identify malicious attackers. This approach enables a more robust and accurate detection mechanism within federated learning environments.
2. Furthermore, the experiments conducted are comprehensive, offering thorough and reasonable explanations for the observed outcomes, which validate the effectiveness of SPMC. The comparison between various defenses in different directions is interesting, showing that SPMC applies in many scenarios.

Weaknesses
1. The self-purification part in Figure 2 seems to be unclear, which confuses me about the details of local updates. Could you explain the content of this module?
2. What kinds of triggers did the authors use in the experiments?
3. The criteria for selecting coalitions are not provided in the article; does the choice of coalitions affect the communication cost?
4. The extension to larger datasets and models is less explored and discussed in the current version.

Other Comments Or Suggestions: None

Questions For Authors: See above.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer 7GF4: We sincerely thank you for the valuable comments and suggestions. We hope our responses below address your concerns and provide a clearer understanding of our approach and results.

**Question**

**Q1: Theoretical description of the Shapley value.** (Theoretical Claims)

A1: Inspired by the concept of the Shapley value, we define a client's contribution to its marginal coalition as its marginal contribution. For client $k\in N$, the marginal contribution is denoted as
$$
\phi_k(k;\ \Gamma;\ D;\ N)=\mathbb{E}_{\Gamma\sim D^N}\left[\Gamma(N\setminus\{k\})-\Gamma(\{k\})\right],
$$
where $D^N$ represents the collection of all clients' datasets, and $\Gamma(N\setminus\{k\})-\Gamma(\{k\})$ captures the model parameter difference between the marginal coalition and the local client. We introduce the following theorem to demonstrate the role of marginal contribution in identifying malicious clients.

Theorem: Let the parameter aggregation function $\Gamma$ be $B$-Lipschitz with respect to $Z$. Suppose $D_k$ and $D_p$ are two data distributions over $Z$. Then, for clients $k,p\in N$,
$$
\phi_k(k;\ \Gamma;\ D_k;\ N)-\phi_p(p;\ \Gamma;\ D_p;\ N)\le 2NB\cdot W(D_k,D_p),
$$
where $W$ denotes the distributional difference between the two data distributions. This theorem bounds the change in marginal contribution under two different data distributions. Since malicious clients have significantly different distributions, their marginal contributions become easier to distinguish.

**Q2: Evaluation on large-scale datasets and complex backbones.** (Experimental Designs Or Analyses & Other Strengths And Weaknesses)

A2: Based on your suggestion, we have extended our experiments to the CIFAR-100 dataset using ResNet-18 as the backbone. As shown in Table 1, even with more complex models and task scenarios, both DnC and SPMC are able to maintain their defensive effectiveness. However, DnC struggles to achieve stronger backdoor defense than SPMC because of the limited information in its sub-dimensions. In addition, due to the increased model complexity and the difficulty of the classification task, the gradient learning rate on local clients fails to converge, which significantly undermines RLR's main-task performance.

*Table 1: Comparison with the state-of-the-art backdoor-robust solutions on the CIFAR-100 dataset using ResNet-18 as the backbone. The malicious proportion is γ=0.3 and the local data poisoned portion is set as 0.3.*

| Method | A↑ | R↑ | V↑ |
|--------|----|----|----|
| FedAvg | 55.29 | 6.91 | 31.10 |
| DnC | 54.44 | 11.31 | 32.88 |
| RLR | 50.94 | 7.86 | 29.40 |
| Ours | 55.14 | 35.26 | **45.20** |

**Weakness**

**W1: Explanation of the self-purification part of local updates.**

A3: Thank you for the constructive comments. We believe it is important to emphasize the role of self-purification. Malicious clients may inject trigger patterns $\tau$ to alter original labels and distort gradient directions, deviating from those of benign clients. To counter this, SPMC introduces a self-purification update mechanism that adjusts local gradients using Equation (9), aligning them with the marginal federation model's knowledge while preserving useful benign local information. As shown in Table 1 of our paper, the self-purification component of local updates effectively preserves benign information and achieves strong defense performance.

**W2: What kinds of triggers did the authors use in the experiments?**

A4: We use a trigger pattern located at the top-left corner of the image with a size of 2 × 6. The poisoned label is set to class 2. Thank you for your feedback; we will emphasize this point in the final version.

**W3: The criteria for selecting coalitions and the communication cost of the choice of coalitions.**

A5: We thank the reviewer for pointing out this issue. We have added a detailed clarification in our revised manuscript. In fact, each client is associated with a corresponding marginal coalition: for client $k$, its marginal coalition $S_k$ consists of all online clients excluding client $k$. Therefore, we do not need to spend significant computational resources to select the marginal coalition. As shown in Table 3, SPMC does not introduce significant computational cost on either the server or client side. We will add the detailed discussion in the final version.

*Table 3: Computation cost comparison. $n$ refers to the number of online clients, $|w|$ represents the scale of the network, $|w|_{sub}$ represents the scale of the sub-network, and $E$ indicates the number of SVD iterations.*

| Method | Server-side | Client-side |
|--------|:-----------:|:-----------:|
| FedAvg | $\mathcal{O}(n\times\|w\|)$ | $\mathcal{O}(\|w\|)$ |
| DnC | $\mathcal{O}(n\times\|w\|+E\times\|w\|_{sub})$ | $\mathcal{O}(\|w\|)$ |
| RLR | $\mathcal{O}(n\times\|w\|)$ | $\mathcal{O}(\|w\|)$ |
| Ours | $\mathcal{O}(n\times\|w\|+n^2)$ | $\mathcal{O}(\|w\|)$ |

---

Rebuttal Comment 1.1: Comment: Thank you for your response. The extra explanations clarify a lot; e.g., the extension to a large-scale dataset and complex backbone verifies the stability of SPMC. I would like to keep my rating unchanged.
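For intuition on the theorem quoted in A1 above, one plausible proof sketch — assuming the $B$-Lipschitz condition is taken with respect to the Wasserstein-type distance $W$ on each input distribution, which the rebuttal does not state explicitly — is:

$$
\begin{aligned}
\phi_k - \phi_p
&= \mathbb{E}\!\left[\Gamma(N\setminus\{k\}) - \Gamma(\{k\})\right]
 - \mathbb{E}\!\left[\Gamma(N\setminus\{p\}) - \Gamma(\{p\})\right] \\
&\le \bigl|\mathbb{E}[\Gamma(N\setminus\{k\})] - \mathbb{E}[\Gamma(N\setminus\{p\})]\bigr|
 + \bigl|\mathbb{E}[\Gamma(\{k\})] - \mathbb{E}[\Gamma(\{p\})]\bigr|.
\end{aligned}
$$

The two coalition arguments differ only by swapping $D_k$ for $D_p$ among at most $N$ input distributions, so each of the two terms can be coarsely bounded by $NB\,W(D_k,D_p)$ via Lipschitzness, giving the stated factor $2NB$. This is a hedged reconstruction, not the authors' derivation.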
Summary: This paper presents a federated backdoor defense method named SPMC, which applies Shapley values to quantify the contribution differences among clients and implements margin contribution-based aggregation at the server side and gradient alignment at the client side. These measures work together to effectively improve the robustness and flexibility of federated learning systems, even when the number of attackers dynamically changes. Experimental results show that SPMC demonstrates superior defensive effects on multiple public datasets compared to existing methods.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes. The methods are well evaluated.

Theoretical Claims: This paper provides a clear theoretical explanation of proxy-free distillation and uses the framework diagram to explain how Shapley values are applied.

Experimental Designs Or Analyses: The experimental metrics are very comprehensive, with experiments demonstrating the robustness of the method under multiple settings.

Supplementary Material: Yes.

Relation To Broader Scientific Literature: This paper innovatively applies Shapley values to proxy-free distillation, addressing the security issues associated with proxy datasets.

Essential References Not Discussed: None

Other Strengths And Weaknesses:

Strengths:
1. The authors effectively highlight the challenges posed by adaptive malicious clients and the need for more flexible and robust solutions. They then propose SPMC, enabling self-purification without the need for predefined rules.
2. SPMC is novel and introduces an innovative approach to enhancing server-side aggregation and local gradient alignment. It is easy to apply to other scenarios in distributed systems.
3. The article implements non-proxy distillation, providing a new approach to ensuring the security of proxy datasets in knowledge distillation.

Weaknesses:
1. Although the experimental results in Table 1 demonstrate the effectiveness of combining the LGAlign and MCAgg modules, the authors should discuss why the combination of LGAlign and MCAgg leads to better performance.
2. In Equation (6), the authors adopt cosine similarity to measure $\Gamma(N\setminus\{n\}) - \Gamma(\{n\})$. The authors need to explain the reasoning behind using cosine similarity instead of other metrics. Are there better metrics for this?
3. In Figure 2, non-proxy distillation seems to be related to the angle, but the article does not provide an intuitive explanation of the values of the hyperparameter λ.

If the authors can address all of my concerns, I am willing to increase my score. However, if they do not, I may consider lowering the score.

Other Comments Or Suggestions: See weaknesses.

Questions For Authors: See weaknesses.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer hJgY: Thank you for your valuable feedback and for your time reviewing our work. We hope our responses below help clarify the issues and lead you to update the score.

**Weaknesses**

**W1: Missing explanation of why the combination of LGAlign and MCAgg leads to better performance.**

A1: In our method, LGAlign represents the local update process. Specifically, it ensures that the update gradients of local clients remain aligned with the general knowledge of the edge federation, thereby preventing attackers from fitting malicious outcomes. MCAgg represents the server aggregation process: it encourages the positive influence of clients that contribute significantly to the edge federation, while suppressing the malicious impact of those with relatively small marginal contributions. The combination of LGAlign and MCAgg not only prevents local malicious gradients from steering the global update, but also promotes the influence of benign clients through inter-client interactions on the server side. We have rewritten this part in line with your comments and hope the edited section clarifies why LGAlign and MCAgg are combined.

**W2: The reason for choosing cosine similarity to measure $\Gamma(N\setminus\{n\}) - \Gamma(\{n\})$.**

A2: Thank you for your helpful feedback. We have included a new table to further illustrate the effectiveness of cosine similarity. We used cosine similarity, Euclidean distance, and Wasserstein distance as metrics to measure $\Gamma(N\setminus\{n\}) - \Gamma(\{n\})$. As shown in Table 1, cosine similarity proves to be the most suitable metric. In SPMC, cosine similarity is chosen primarily because it effectively measures the directional difference between two vectors, rather than their magnitude (as in Euclidean distance) or the divergence between probability distributions (as in Wasserstein distance). In the model parameter space, this allows for the evaluation of directional consistency among client updates, which is particularly important for identifying malicious attackers: malicious clients often train their local models on poisoned datasets, resulting in update directions that deviate significantly from those of benign clients. We will supplement this experiment along with the corresponding explanation in the final version. Thank you again for your feedback.

*Table 1: Comparison of evaluation metrics on the CIFAR-10 dataset with malicious proportion γ=0.3. We use the trade-off V to evaluate the performance of the different metrics.*

| | Cosine similarity | Euclidean distance | Wasserstein distance |
|------|:---:|:---:|:---:|
| SPMC | **72.98** | 45.30 | 69.09 |

**W3: Explanation of the hyperparameter λ in non-proxy distillation.**

A3: We thank the reviewer for pointing out this issue. Specifically, Equation (9) in the paper is as follows:
$$
G_{locgrad} = \begin{cases} G_d, & \text{if } G_d \cdot G_g \geq 0, \\\\ G_d - \lambda \cdot \frac{G_d \cdot G_g}{\|G_g\|^2} G_g, & \text{otherwise}, \end{cases}
$$
where $G_g$ represents the general knowledge between a client and its marginal coalition, $G_d$ represents the local knowledge, and $G_{locgrad}$ denotes the final local update gradient. Next, we analyze the role of the hyperparameter λ when $G_d \cdot G_g < 0$.

When λ=1, the formula becomes $G_{locgrad} = G_d -\frac{G_d \cdot G_g}{\|G_g\|^2} G_g$. At this point, $G_{locgrad}$ is orthogonal to $G_g$, as shown by the following derivation:
$$
\begin{aligned} G_{locgrad} \cdot G_g &= \left(G_d - \frac{G_d \cdot G_g}{\|G_g\|^2} G_g\right) \cdot G_g \\\\ &= G_d \cdot G_g - \frac{G_d \cdot G_g}{\|G_g\|^2} \|G_g\|^2 \\\\ &= G_d \cdot G_g - G_d \cdot G_g \\\\ &= 0. \end{aligned}
$$

When λ>1, we have $G_{locgrad} \cdot G_g > 0$. This means the angle between $G_{locgrad}$ and $G_g$ is less than 90°, so their directions are more closely aligned. In this case, the update gradient leans more toward general knowledge, potentially at the cost of missing important local knowledge. When λ<1, we have $G_{locgrad} \cdot G_g < 0$. This means the angle between $G_{locgrad}$ and $G_g$ is more than 90°, indicating a less pronounced adjustment toward general knowledge, which causes the update gradient to stay closer to the attacker's local malicious knowledge.

The reviewer might have overlooked **"Figure 3. Comparison of different λ"** in our original manuscript. To further clarify, we present a comparison of different λ values again in **Table 2**. We sincerely appreciate your constructive suggestion, and we will include the corresponding derivation and explanation in the revised version.

*Table 2: Comparison of different λ on the CIFAR-10 dataset with malicious proportion γ={0.2, 0.3}. We use the trade-off V to evaluate the performance of different λ.*

| $\lambda$ | 1.5 | 1.0 | 0.5 |
|:---------:|:---:|:---:|:---:|
| γ=0.2 | 60.99 | **85.32** | 2.50 |
| γ=0.3 | 51.87 | **80.14** | 3.53 |

---

Rebuttal Comment 1.1: Comment: Thank you for the response. The clarifications and added results address my main concerns. Please revise the paper based on your rebuttal. Since the authors addressed my concerns, I have decided to raise my score to 4. Good luck!
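The case analysis of Equation (9) in the rebuttal above amounts to a gradient-surgery-style projection and is easy to check numerically. A minimal NumPy sketch (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def purify_gradient(g_d, g_g, lam=1.0):
    """Sketch of Equation (9): if the local gradient g_d conflicts with
    the general-knowledge gradient g_g (negative dot product), remove
    lam times the projection of g_d onto g_g; otherwise keep g_d."""
    dot = float(np.dot(g_d, g_g))
    if dot >= 0:
        return g_d  # directions agree: no purification needed
    return g_d - lam * dot / float(np.dot(g_g, g_g)) * g_g
```

With `lam=1.0` a conflicting gradient becomes exactly orthogonal to `g_g` (matching the derivation), `lam>1` tilts it toward `g_g`, and `lam<1` leaves a residual conflict.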
Improving LLM Safety Alignment with Dual-Objective Optimization
Accept (poster)
Summary: This paper introduces DOOR, a novel safety alignment framework for large language models that addresses vulnerabilities in existing methods like DPO. DOOR combines robust refusal training, which encourages refusal even when partial unsafe content is generated, with targeted unlearning of harmful knowledge. The authors enhance DOOR with token-level weighting (W-DOOR) and demonstrate improved resistance against various jailbreak attacks in both in-distribution and out-of-distribution settings. Claims And Evidence: The paper provides convincing evidence for its main claims through comprehensive experimental results, demonstrating improved attack resistance across different attack types and better capability retention with W-DOOR compared to DPO. Methods And Evaluation Criteria: While DOOR aims to address partial harmful generations continuing into unsafe content, the token-level weighting approach in W-DOOR seems misaligned with this goal - it's unclear how emphasizing refusal tokens would help when the model is already generating harmful content. Furthermore, the combination of token-level refusal training with sequence-level harmful knowledge unlearning appears disconnected, and the varying effectiveness across attack types suggests the approach may not fully address the core problem of preventing continuation of unsafe generations. Theoretical Claims: While the paper presents some gradient analysis suggesting advantages of DOOR over DPO, it lacks formal theoretical results or proofs to support these claims. Experimental Designs Or Analyses: The experimental methodology includes comprehensive evaluation across different attack types and model behaviors. However, the experimental design would benefit from two key additions: ablation studies to isolate component contributions, and hyperparameter sensitivity analysis to understand the stability of the proposed methods. 
Supplementary Material: Yes, I reviewed the Additional Experimental Results sections and Prefilling Evaluation and Multi-turn Evaluation examples. Relation To Broader Scientific Literature: The paper's key contribution lies in combining token-level weighted refusal training with negative preference optimization. Essential References Not Discussed: Based on the paper's content, I don't identify any essential missing references. Other Strengths And Weaknesses:

Strengths:
- While the paper combines multiple techniques into a working approach for LLM safety, the individual design choices lack strong theoretical motivation
- The experimental evaluation demonstrates clear improvements over DPO in attack resistance
- The method shows good capability retention compared to baseline approaches

Weaknesses:
- The interaction and potential conflicts between the dual objectives (refusal training and harmful knowledge unlearning) are not analyzed
- The effectiveness varies significantly between attack types
- The hyperparameter sensitivity of the method is not thoroughly explored

Other Comments Or Suggestions: See strengths and weaknesses. Questions For Authors: What is the theoretical or empirical justification for combining token-level refusal training with sequence-level harmful knowledge unlearning? How do these different granularities interact? Could the authors elaborate on how token-level weighting helps prevent the continuation of harmful content, given that harmful generations may not contain refusal tokens? A clear explanation would strengthen the methodology. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedback and thoughtful questions. We address each of your concerns in detail below: ### 1. Justification for Combining Token-Level Refusal Training and Sequence-Level Unlearning This is a great question. Our design reflects the intuition that refusal and unlearning operate at different granularities: Refusal behaviors are typically localized in specific tokens or phrases (e.g., “Sorry,” “I cannot help with that.”). These early “transition” tokens are pivotal in shifting the model’s trajectory from harmful continuation to safe refusal. Hence, we apply token-level weighting to reinforce them. Harmful generations, on the other hand, often unfold over longer sequences and require sequence-level treatment to be effectively unlearned, particularly since the model may learn harmful behaviors in aggregate. While we do not yet provide a formal theoretical framework for this combination, empirical results show that W-DOOR consistently outperforms DOOR, demonstrating that token-level granularity enhances safety without disrupting the effect of sequence-level unlearning. We will revise the paper to better clarify this intuition and interaction. ### 2. Lack of Theoretical Guarantees for Gradient Analysis You're right to note that the gradient analysis in Section 3 is illustrative rather than formal. Our aim was to provide a mechanistic understanding rather than formal proof. Formal convergence or divergence speed analyses, such as those in Negative Preference Optimization (Zhang et al., 2024a), require strong assumptions about model distributions which may not hold in real-world LLMs. As such, we chose to emphasize empirical validation across multiple attack types and scenarios, which supports our claims more robustly in practical settings. ### 3. Lack of Ablations and Hyperparameter Sensitivity Analysis We appreciate this suggestion. 
While we did not initially label them as ablations, our experiments inherently contain component-level comparisons, including: - *Data Augmentation*: Comparing DPO vs. DPO w/o Aug isolates the effect of harmful prefix augmentation. - *Robust Refusal Training (SFT component)*: Comparing NPO (only unlearning) vs. DOOR (NPO + refusal SFT) isolates the benefit of adding the refusal training objective. - *Targeted Unlearning (NPO component)*: Comparing SFT (only refusal training) vs. DOOR (SFT + NPO unlearning) isolates the benefit of adding NPO. - *Token Weighting*: Comparing DOOR vs. W-DOOR isolates the effect of the reward-based token weighting. Furthermore, to directly address hyperparameter sensitivity for W-DOOR, we conducted additional experiments varying the calculation of the token weight $\beta_t$. We tested the default setting (exponential weighting, $\tau=5$, as in paper), $\tau=1$ for the exponential weighting, and replacing the exponential function with a sigmoid normalization. The results (Table below) show that the method performs robustly across these variations, consistently outperforming baselines. **W-DOOR Hyperparameter Sensitivity (Llama-3-8B)** | Method | Multi-turn ASR↓ | Prefilling ASR↓ | GCG ASR↓ | AutoDAN ASR↓ | HellaSwag Acc↑ | XSTest Rate↓ | | :-------| :---- | :------ | :------- | :--------- | :---------- | :-------- | | DOOR | 0.489 | 0.055 | 0.093 | 0.095 | 0.565 | 0.407 | | W-DOOR (exp, $\tau$=5, Paper) | 0.447 | 0.034 | 0.093 | 0.088 | 0.573 | 0.440 | | W-DOOR (exp, $\tau$=1) | 0.500 | 0.045 | 0.070 | 0.075 | 0.576 | 0.442 | | W-DOOR (sigmoid) | 0.447 | 0.042 | 0.073 | 0.078 | 0.570 | 0.424 | *(Note: Gemma results show similar robustness, omitted for brevity)* **We also have further results in this figure: https://anonymous.4open.science/r/icml25-8B51/safety_align.pdf** ### 4. How Does Token-Level Weighting Help Prevent Harmful Continuations? Excellent question. 
Token-level weighting helps the model learn to transition away from unsafe continuations, especially in the augmented training setup where harmful prefixes are inserted. Empirically, we observe that refusal tokens (e.g., “Sorry,” “I cannot...”) exhibit the highest divergence between the reference and target policies (Figures 18, 19, 20). W-DOOR amplifies gradients at those tokens, which steers the model to recognize harmful contexts and pivot to refusal. This is especially useful in prefix attacks where the model is “in the middle” of an unsafe sequence and needs to course-correct. ----- We appreciate the reviewer’s thoughtful critique and hope our clarifications strengthen your confidence in our methodology. We will revise the paper to better connect our design decisions, highlight ablation results more clearly, and clarify the interaction between granular objectives.
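To make the reward-based token weighting discussed in this thread concrete, here is a minimal sketch of the two weighting variants mentioned in the sensitivity table (exponential weighting with temperature $\tau$, or a sigmoid alternative). All function names and the per-token divergence inputs are illustrative assumptions, not the authors' implementation; the divergence scores stand in for the reference/target policy divergence described above.

```python
import math

def token_weights(divergences, mode="exp", tau=5.0):
    """Map per-token divergence scores (reference vs. target policy)
    to loss weights. Hypothetical sketch: names and normalization
    are assumptions for illustration."""
    if mode == "exp":
        raw = [math.exp(d / tau) for d in divergences]
    elif mode == "sigmoid":
        raw = [1.0 / (1.0 + math.exp(-d)) for d in divergences]
    else:
        raise ValueError(f"unknown mode: {mode}")
    norm = sum(raw) / len(raw)  # normalize so the average weight is 1
    return [w / norm for w in raw]

def weighted_refusal_loss(token_nll, divergences, mode="exp", tau=5.0):
    """Token-weighted negative log-likelihood: tokens where the two
    policies diverge most (typically refusal-transition tokens such
    as "Sorry," or "I cannot") receive larger gradient weight."""
    w = token_weights(divergences, mode, tau)
    return sum(wi * li for wi, li in zip(w, token_nll)) / len(token_nll)
```

Under this sketch, amplifying the loss on high-divergence tokens concentrates gradient signal on the transition points where the model must pivot from a harmful prefix to a refusal, which matches the intuition given in the answer above.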
Summary: In this paper, the authors propose novel objectives to enhance safety refusal and harmful response unlearning in LLM post-training. The paper first introduces the objective DOOR, which is a linear combination of two objectives that focus on safety refusal enhancement (even when the model starts generating a harmful response) and harmful response unlearning. The paper then introduces the W-DOOR objective, which augments the safety refusal component in the DOOR objective by assigning higher weights to safety refusal-enhancing tokens. Some analysis of the gradient of the DOOR objective shows that the gradient updates to the LLM using the DOOR objective indeed enhance the safety refusal response generation ability of the LLM. The experiment results provided in the paper suggest that the proposed methods can alleviate harmful response generations, even in prefill attack and multi-turn settings, without degrading the utility of the model and degenerating into over-refusal. Claims And Evidence: Yes. Methods And Evaluation Criteria: * The benchmarks used make sense for the proposed method. However, better justification is needed for using part of the benchmark data as training data, and the small number of data used in the fine-tuning. Theoretical Claims: N/A Experimental Designs Or Analyses: * Better justification is needed for using part of the benchmark data as training data, and the small number of data used in the fine-tuning. * Results provided in Figure 3 seem too noisy to infer any meaningful conclusion. Also, Figure 3 seems to be not discussed anywhere in the text. Supplementary Material: Yes, Section B. Relation To Broader Scientific Literature: This paper extends the literature in the area of preference alignment of LLMs, focusing on enhancing safety refusals. 
Essential References Not Discussed: None Other Strengths And Weaknesses:

Strengths
* The paper investigates an important problem in enhancing the safety refusal of LLMs, especially in the setting of prefilling attacks on the LLM.
* The authors provide a gradient analysis to show why the proposed objective can enhance safety refusal in an LLM, and provide some empirical validation for the applicability and efficacy of the proposed methods.

Weaknesses
* Please refer to the Experimental Designs Or Analyses section for major concerns.
* Please refer to the Other Comments Or Suggestions section for minor concerns.

Other Comments Or Suggestions:
* There is a mismatch between the W-DOOR objective in (2) and in Figure 1, compared to the expression at the beginning of Section 3.4.
* Statement "..we show that the robustness demonstrated in the prefilling attack generalizes to other forms of adversarial attacks (Figures 1)" is not correct (second column lines 222-224)
* The results in Section B in Supplementary do not seem to accompany any analysis/discussion of the results.
* It is recommended to refer to a particular Figure/Table when some result is discussed (e.g. second column lines 257-274), so the reader can better understand the context.

Questions For Authors:
* Please address the questions/concerns raised in previous sections.
* How can DOOR/W-DOOR perform well on the XSTest benchmark, even when the objectives only focus on enhancing safety refusal without any consideration for preventing over-refusal?

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful comments and constructive suggestions. Below, we address each concern in detail: > **However, better justification is needed for using part of the benchmark data as training data, and the small number of data used in the fine-tuning.** We appreciate your concern. Our design choice reflects two priorities: - *Minimizing distribution drift*: We intentionally use a small subset of benchmark data to prevent large shifts away from the original model’s capabilities, which often occurs in post-alignment tuning with large datasets. - *Avoiding evaluation leakage*: We ensure strict separation between training and evaluation splits within each benchmark. For example, we use a held-out subset of SORRY-Bench for training and evaluate on different, unseen prompts from the same benchmark. Additionally, we include out-of-distribution (OOD) evaluations, such as HarmBench and XSTest, to demonstrate generalization beyond the training set. > **Results provided in Figure 3 seem too noisy to infer any meaningful conclusion. Also, Figure 3 seems to be not discussed anywhere in the text.** Thank you for pointing this out. Apologies for this oversight and the lack of clarity. We intended to discuss this figure in the paragraph "Robustness Against Stronger Multi-Turn Attacks" (Lines 321-329). The key takeaway, despite some noise due to limited samples at higher turn counts, is that W-DOOR generally maintains lower ASR compared to other methods, especially as the turn number increases, whereas methods like DPO show less robustness in these longer multi-turn interactions. This suggests W-DOOR's alignment is somewhat more resilient in conversational attack scenarios. > **There is a mismatch between the W-DOOR objective in (2) and in Figure 1, compared to the expression at the beginning of Section 3.4.** Great catch — thank you.
The discrepancy between Equation (2), Figure 1, and the start of Section 3.4 arose due to a simplification: we use $x \oplus y^h_{<k}$​ as the new input $x$ during robust refusal training. We will clarify this notation to ensure consistency and prevent confusion in the revised version. > **Statement "..we show that the robustness demonstrated in the prefilling attack generalizes to other forms of adversarial attacks (Figures 1)" is not correct (second column lines 222-224)** You're absolutely right — this should refer to Table 1, not Figure 1. We appreciate you catching this, and will correct it in the revised manuscript. > **The results in Section B in Supplementary do not seem to accompany any analysis/discussion of the results.** We acknowledge the lack of accompanying analysis in the current draft. Due to time constraints before the deadline, we were unable to include our full commentary. We now have an updated version with complete discussions and improved clarity, and we will ensure this is reflected in the final camera-ready version. > **It is recommended to refer to a particular Figure/Table when some result is discussed (e.g. second column lines 257-274), so the reader can better understand the context.** Thank you — this is a valuable suggestion. We have revised the experimental section to explicitly reference specific figures and tables (e.g., lines 257–274), which greatly improves readability and traceability of claims to evidence. > **How can DOOR/W-DOOR perform well on the XSTest benchmark, even when the objectives only focus on enhancing safety refusal without any consideration for preventing over-refusal?** Excellent question. Our alignment objectives (DOOR and W-DOOR) do not directly optimize for minimizing over-refusal. However, our training pipeline includes a retain loss that helps preserve general capabilities and reduces over-conservatism. 
While W-DOOR shows slightly higher refusal rates on XSTest compared to the original model, we believe this is a reasonable trade-off given the substantial gains in robustness. We include a Pareto frontier analysis in Appendix Figure 14, plotting ASR vs. over-refusal rate. W-DOOR consistently dominates DPO and other baselines, achieving lower ASR with minimal over-refusal, placing it closer to the optimal trade-off. We will clarify this trade-off more explicitly in the revised version. ---- We hope these clarifications address your concerns and help strengthen our paper. Thank you again for your constructive feedback and insightful questions.
Summary: This paper proposes Dual-Objective Optimization for Refusal (DOOR), a novel alignment framework that addresses limitations in Direct Preference Optimization (DPO) for LLM safety. The authors identify two key issues with DPO: imbalanced refusal reinforcement and poor out-of-distribution generalization. DOOR combines robust refusal training (encouraging models to refuse unsafe content even with partial harmful generations) with targeted unlearning of harmful knowledge pathways. They further enhance this with Weighted DOOR (W-DOOR), which implements token-level weighting that prioritizes critical refusal tokens in adversarial contexts. Empirical evaluations demonstrate improved robustness against various jailbreak techniques including prefilling, suffix, and multi-turn attacks while maintaining general language capabilities and utility. Claims And Evidence: The paper presents interesting results on improved safety alignment, but some claims require stronger evidence: 1. The claim that the method "removes the harmful knowledge that might be triggered" lacks comprehensive supporting evidence. While lower attack success rates are demonstrated, this alone doesn't guarantee knowledge removal. A more thorough investigation referencing established unlearning literature (particularly the literature showing that approximately all current unlearning approaches are only surface-deep, e.g. Wu et al 2024, Evaluating Deep Unlearning in Large Language Models) would strengthen this claim. 2. The claim of increased general robustness is partially supported by generalization to different attacks like adversarial suffix attacks, but the data also shows all models (including the proposed ones) still have high susceptibility to multi-turn attacks. 
The addition of error bars in the evaluations would strengthen the evidence by helping readers determine which performance differences are statistically significant, especially when the numerical differences appear relatively small in some comparisons. Methods And Evaluation Criteria: The evaluation methodology emphasizes prefilling attacks, which potentially favors the authors' approach since their method is specifically trained to be robust against prefilling. This evaluation choice somewhat stacks the deck in favor of the proposed method. A more diverse set of primary attacks and metrics would provide a more balanced assessment. Greater clarity about whether prefilling attacks serve as the primary threat model or as one example within a broader adversarial framework would help contextualize the contribution. This is particularly relevant given that some API providers disallow prefilling (e.g., OpenAI) while others permit it (e.g., Anthropic). Additional details about the KL divergence calculations would be beneficial, specifically whether forward or reverse KL is used and which training data subset these metrics are computed on. Theoretical Claims: No significant theoretical claims are made. Experimental Designs Or Analyses: The lack of error bars or statistical significance tests makes it difficult to assess the reliability of the reported performance differences. This is particularly important in this evaluation. Supplementary Material: I briefly reviewed the supplementary material. Relation To Broader Scientific Literature: The paper builds upon the prior work in the area of alignment via RLHF, adversarial robustness, etc. Essential References Not Discussed: The paper doesn't really engage with work on unlearning which has shown that many or all of the existing 'unlearning' methods do not actually remove dangerous knowledge, but simply obfuscate it.
The authors make some claims about removing harmful knowledge, but do not engage with work such as Wu et al 2024, Evaluating Deep Unlearning in Large Language Models, which provide strong evidence that many unlearning methods do not remove harmful knowledge. Other Strengths And Weaknesses: The paper introduces interesting extensions to existing alignment techniques. To strengthen the contribution, the authors could more explicitly delineate which aspects are novel contributions versus adaptations of prior work such as negative preference optimization. The authors could also more clearly elaborate on the threat model/setting that the authors are addressing, as it sometimes appears inconsistent. For instance, if we are assuming access to an oracle model \pi^*, why don't we directly use this model to answer queries? Furthermore, it's unclear if the authors are directly targeting the prefilling attack as the primary attack that a malicious user would do (which is strange given that many API vendors simply disallow prefilling access), or if it is surrogate for any attack (which doesn't appear to be the case for e.g. the multi-turn attack). These confusions, and a bit of uncertainty about the exact novelty of the approach, keep me from making a more enthusiastic recommendation. Other Comments Or Suggestions: Typo on line 360: 'like like' Questions For Authors: 1. Could you clarify the specific novel contributions of this work compared to existing approaches? A more explicit distinction between components that build upon previous work (e.g., negative preference optimization) and new innovations would help readers better appreciate the advances. 2. Could you specify the threat model being addressed? Is the prefilling attack specifically targeted, or is it used as a representative example for broader jailbreak vulnerabilities? 3. What evidence supports the claim that your method "removes the harmful knowledge" rather than just making it harder to access? 
Have you considered evaluations based on papers on unlearning? 4. Can you include confidence intervals or error bars in the evaluations? 5. Could you provide additional details about the KL divergence computation - specifically whether it uses forward or reverse KL, and which training data subset it is computed on? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the detailed and insightful questions. Below, we clarify key aspects of our methodology, contributions, and evaluation setup: ### 1. Novel Contributions Relative to Prior Work Thank you for prompting us to make the contributions more explicit. Our work builds on prior safety alignment techniques (e.g., DPO and NPO) and introduces the following innovations: - *Dual-Objective Integration*: We combine robust refusal training via prefix augmentation with harmful content unlearning using NPO into a unified training objective (DOOR). To the best of our knowledge, this is the first work to effectively integrate these two complementary strategies in a cohesive framework. - *Token-Level Adaptive Weighting*: We propose W-DOOR, which enhances refusal learning by emphasizing critical refusal tokens via a KL-based reward weighting mechanism. This token-level gradient refinement is novel and leads to improved safety across adversarial attacks. - *Diagnostic Analysis of DPO Limitations*: We provide a gradient-based critique of DPO’s underperformance in safety contexts and show, both empirically and analytically, how DOOR and W-DOOR address these issues. We will revise the paper to more clearly distinguish these novel elements from prior work. ### 2. Threat Model and Scope of Evaluation Our training objectives are primarily motivated by prefilling attacks, which exploit partial unsafe generations. However, the robustness imparted by DOOR and W-DOOR generalizes well to other jailbreak scenarios, including suffix-based (AutoDAN, GCG) and multi-turn attacks (Crescendo), as shown in Table 1 and Appendix Figures 2–4. While prefilling motivates our data augmentation design, the dual-objective formulation is general-purpose and can be applied to other jailbreak modes, given appropriate adversarial data. ### 3. On "Removing" Harmful Knowledge vs. 
Making It Harder to Access We appreciate the reviewer referencing recent literature (e.g., Wu et al., 2024) and agree with the critique. Like NPO, our approach does not guarantee full removal of harmful knowledge but rather reduces its accessibility under typical prompting. We will revise the language in the paper to make this distinction explicit and avoid overstating unlearning effects. Thank you for catching this. ### 4. Confidence Intervals and Variance Analysis We conducted additional experiments over 5 random seeds and report the mean ± standard deviation below: **Gemma-2B Results:** | Method | Prefill | AutoDAN | GCG | XSTest | |----------|----------|--------------|--------------|--------------| | Original | 0.396±0.022 | 0.568±0.186 | 0.230±0.021 | 0.242±0.007 | | SFT | 0.013±0.010 | 0.104±0.027 | 0.115±0.014 | 0.393±0.007 | | DPO | 0.078±0.013 | 0.065±0.037 | 0.074±0.037 | 0.305±0.011 | | DOOR | 0.011±0.004 | 0.043±0.027 | 0.067±0.021 | 0.403±0.006 | | W-DOOR | 0.009±0.003 | 0.018±0.035 | 0.030±0.037 | 0.437±0.006 | **LLaMA-3-8B Results** | Method | Prefill | AutoDAN | GCG | XSTest | |----------|-----------|--------------|--------------|--------------| | Original | 0.532±0.014 | 0.086±0.008 | 0.304±0.016 | 0.409±0.004 | | SFT | 0.067±0.004 | 0.020±0.003 | 0.119±0.012 | 0.401±0.002 | | DPO | 0.204±0.015 | 0.058±0.003 | 0.130±0.004 | 0.453±0.003 | | DOOR | 0.056±0.007 | 0.011±0.007 | 0.101±0.009 | 0.407±0.003 | | W-DOOR | 0.042±0.012 | 0.018±0.004 | 0.104±0.002 | 0.434±0.003 | These results confirm that our reported improvements are robust and statistically significant. We will include these confidence intervals in the updated manuscript. ### 5. Details on KL Divergence Computation (Figure 5) We compute forward KL divergence, $D_\text{KL}(\pi_\theta \| \pi_\text{base})$, measuring how the aligned model diverges from the base model. 
This is computed on the evaluation subset of harmful prompts, which mirrors the training distribution but excludes any training samples. We will make the KL direction and evaluation set details explicit in the revised paper. ### 6. Why Not Use the Oracle Policy $\pi^*$ for Inference Directly? Thank you for raising this subtle point. In theory, an ideal oracle policy $\pi^*$ would be used directly. However, in practice: $\pi^*$ is an idealized abstraction and not available in most scenarios. We instead approximate $\pi^*$ using a stronger model aligned via DPO, serving as a proxy to compute token-level importance scores for weighting. Our goal is to train a smaller or aligned model that mimics the safety behavior of $\pi^*$ without requiring access to it at inference time. We’ll revise our explanation of this to improve clarity. ----- We hope these clarifications address your concerns and better highlight the novelty, scope, and rigor of our work. Thank you again for the thoughtful and constructive feedback. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. All your points make sense, and I welcome that you've toned down the language regarding the unlearning effect of NPO. Given this, I'll raise my score to 4. --- Reply to Comment 1.1.1: Comment: Thank you so much for your support! We promise to make the modifications in the revised draft!
Summary: The paper aims to improve standard Direct Preference Optimization (DPO) against jailbreaking attacks. In particular, it proposes Dual-Objective Optimization for Refusal (DOOR) and its token-weighted variant, W-DOOR, which combine robust refusal training with targeted unlearning. Experiments show substantially improved resilience to various jailbreak attacks (prefilling, suffix, multi-turn) without excessive refusal on benign tasks. ## update after rebuttal The authors' rebuttal addresses my concerns. I will keep my score. Claims And Evidence: The paper's main claims -- DOOR/W-DOOR's improvements -- are well supported by experiments on SORRY-Bench, HarmBench, and different attacks. Methods And Evaluation Criteria: Yes. Training with harmful prefix augmentation and an unlearning objective is well matched to jailbreaking scenarios. The chosen benchmarks (SORRY-Bench, HarmBench, etc.) measure exactly those adversarial settings. Theoretical Claims: N/A (The paper is not a theory paper and thus does not provide any proofs for theoretical claims.) Experimental Designs Or Analyses: The experiment designs are sound and the baselines chosen (SFT, DPO, NPO) make sense. Supplementary Material: Yes. Appendix B Additional Experimental Results and Appendix D Dataset Examples. Relation To Broader Scientific Literature: The proposed dual-objective approach can improve on existing safety data augmentation techniques. Essential References Not Discussed: No. Other Strengths And Weaknesses:

**Strengths**
See above

**Weaknesses/Suggestions for improvement**
1. Change of loss function in LLM also needs to test the performance of other capabilities (e.g., math, coding). How does the proposed approach work with the other capabilities? How do we use the proposed method and combine it with other training data of different capabilities?
2. The proposed method is sensitive/dependent on the reference model performance using the NPO loss. What if the reference model contains harmful responses already?
3. In Table 3, W-DOOR increase over refusal evaluation? Can you provide more explanation on this?

Other Comments Or Suggestions: N/A Questions For Authors: See **Weaknesses/Suggestions for improvement** in *Other Strengths And Weaknesses*. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback, as well as for recognizing the contributions of our work. Below, we address the specific concerns raised: > **Change of loss function in LLM also needs to test the performance of other capabilities (e.g., math, coding). How does the proposed approach work with the other capabilities? How do we use the proposed method and combine it with other training data of different capabilities?** Thank you for this important point regarding the impact on general capabilities beyond standard instruction following and safety. To address this, we conducted additional experiments evaluating the performance of our aligned models on standard math (MATH [1]) and coding (HumanEval [2]) benchmarks. The results for the Gemma-2-2B model are presented below: | Model | Math (4-shot, accuracy with math-verify) | Humaneval (pass@1) | |----------|------------------------------------------|---------------------| | Original | 0.212 | 0.622 | | SFT | 0.178 | 0.360 | | DPO | 0.181 | 0.366 | | DOOR | 0.174 | 0.348 | | W-DOOR | 0.179 | 0.390 | The results show that alignment methods generally cause a reduction in performance on these specialized tasks compared to the original model. This is likely because our general utility dataset (Alpaca) does not contain significant amounts of math or coding data, and the alignment process (including ours) shifts the model's distribution. However, **W-DOOR demonstrates the best retention** of these capabilities among the evaluated alignment methods (SFT, DPO, DOOR, W-DOOR). This suggests that our weighted approach, by focusing gradients more precisely, helps mitigate excessive capability degradation compared to other techniques. Moreover, incorporating coding and math data into the utility dataset could potentially alleviate the performance drop. > **The proposed method is sensitive/dependent on the reference model performance using the NPO loss. 
What if the reference model contains harmful responses already?** Thank you for raising this point. To clarify: in our implementation, the NPO loss is only applied to harmful responses generated by the reference model, i.e., cases where the reference itself outputs unsafe content. The intent of NPO in this context is not to imitate the reference but to unlearn these harmful generations by penalizing the target model's likelihood of reproducing them. Therefore, even if the reference model contains harmful outputs, NPO is used to actively reduce their influence in the aligned model. We will clarify this more explicitly in the final version. > **In Table 3, W-DOOR increase over refusal evaluation? Can you provide more explanation on this?** Thank you for noticing this. While W-DOOR does show a slightly higher refusal rate on XSTest than DOOR, it still maintains a significantly better trade-off compared to DPO, achieving lower attack success rates (ASR) and lower over-refusal than baseline methods overall. Moreover, we include a Pareto analysis in Appendix Figure 14 (Gemma: Prefill ASR vs. XSTest Refusal Rate), where each model's robustness is plotted against its over-refusal rate. DOOR/W-DOOR consistently lies on a better Pareto frontier—achieving strong safety without excessive conservatism. This indicates that the slight increase in over-refusal is a reasonable and effective trade-off for significantly improved robustness. We will add a clearer explanation of this in the revised version, highlighting the nuanced balance between robustness and utility. ------ We are grateful for your detailed feedback and hope our clarifications address your concerns. We are committed to further improving our paper based on this helpful input.
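For readers unfamiliar with NPO, the unlearning term discussed in this rebuttal can be sketched as follows. This follows the published NPO objective of Zhang et al. (2024a) applied to sequence log-probabilities; the function name and scalar inputs are illustrative assumptions, not the authors' training code.

```python
import math

def npo_loss(logp_theta, logp_ref, beta=0.1):
    """Negative Preference Optimization loss for one harmful response,
    given its total log-probability under the target model (logp_theta)
    and the reference model (logp_ref). The loss penalizes the target
    model for assigning high likelihood to the harmful output relative
    to the reference, and it decays toward zero once the response is
    sufficiently suppressed."""
    log_ratio = logp_theta - logp_ref
    # L = -(2 / beta) * log sigmoid(-beta * log_ratio)
    #   =  (2 / beta) * log(1 + exp(beta * log_ratio))
    return (2.0 / beta) * math.log1p(math.exp(beta * log_ratio))
```

This sketch illustrates the point made above: because the loss is applied only to the reference model's harmful generations and shrinks as the target model's likelihood of reproducing them drops, the objective actively unlearns those outputs rather than imitating the reference.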
Temporal Misalignment in ANN-SNN Conversion and its Mitigation via Probabilistic Spiking Neurons
Accept (poster)
Summary: This paper analyzes the spike temporal dynamics in the ANN-SNN conversion framework and investigates the impact of spike firing timing on the stability of the conversion. It identifies a phenomenon termed temporal misalignment, where random spike rearrangements across SNN layers can lead to performance improvements. Furthermore, the paper introduces a bursting probabilistic spiking neuron to simulate and enhance this phenomenon, supported by theoretical justification, which improves performance on several tasks. Claims And Evidence: I appreciate the idea presented in this work; however, the writing is somewhat opaque. Despite reading the detailed appendices, I remain unsure whether I fully grasp the concepts. Below, I outline my understanding, which I hope the authors can confirm or correct.

1. My understanding is that the effectiveness of the temporal shuffle is due to the following reasons:
- SNNs require potential accumulation to fire, leading to delayed responses, with most spikes occurring in later time steps, meaning that many neurons are unable to complete firing before the time window ends.
- To mitigate this, we desire earlier spike emissions. Through the shuffling effect, neurons may "overdraft" and fire earlier based on their expected firing rate, instead of waiting for potential accumulation.
- The method proposed in this paper uses a Bernoulli distribution to simulate this behavior, ensuring that the spike emission rate **at each time step** meets the conversion requirements, rather than only ensuring accuracy for the final result after $T$ steps, as in standard ANN-SNN rate coding.

Based on this understanding, I raise the following question:

2. The concept of shuffle described in the paper does not seem fundamental. It merely appears to achieve an even distribution of spikes. A more precise term for your method might be "expected firing rate estimation."
This idea bears resemblance to the approaches in [1, 2], where spike attention outputs are predicted in the time dimension to ensure consistent expectations at each time step. Both papers address mechanisms for stabilizing variable multiplications in ANN-SNN conversion for Transformers, but could offer a simpler explanation for the phenomenon explored in this paper. 3. While Theorem 1 demonstrates that the proposed SNN neuron firing rate is an unbiased estimate of the ANN firing rate at each time step, the stability of this firing rate should not be overlooked. The randomness introduced by the Bernoulli distribution could cause significant variance in the results of the same inference task. With the proposed algorithm, this variance may not converge to zero over time (and may even diverge), leading to perturbations during both training and testing. Does this introduce any stability concerns for the algorithm? Could there be considerable room for improvement? Related theoretical analysis may be found in [1,2], and supplementary empirical analysis would also be beneficial. [1] Jiang, Yizhou, et al. "Spatio-Temporal Approximation: A Training-Free SNN Conversion for Transformers." The Twelfth International Conference on Learning Representations. 2024. [2] Huang, Zihan, et al. "Towards High-performance Spiking Transformers from ANN to SNN Conversion." Proceedings of the 32nd ACM International Conference on Multimedia. 2024. Methods And Evaluation Criteria: The evaluation criteria are appropriate, using standard datasets and backbone networks. 4. However, the algorithm presented in this paper appears limited to traditional convolutional networks with ReLU activation and may not extend to Vision Transformers with GELU activation and attention operations. This severely restricts its potential applications. Could the authors discuss possible future directions for overcoming this limitation? 
Theoretical Claims: I reviewed the proof of Theorem 1 in Appendix B and found no apparent issues. Experimental Designs Or Analyses: The experimental design and implementation appear sound. 5. However, the probabilistic neuron introduced in this work includes a stochastic component, which incurs computational and time costs. These costs are hardware-dependent (e.g., they could significantly affect performance on NVIDIA GPUs), potentially rendering the power consumption analysis incomplete and impacting the universality of its application. Supplementary Material: I have reviewed the theoretical analysis in Appendix B and the phenomenon analysis in Appendix G. Relation To Broader Scientific Literature: This paper builds on prior work in the ANN-SNN conversion domain, particularly related to phase lag and unevenness, and introduces a novel analysis and solution regarding neuron dynamics. These aspects are clearly discussed in the paper. Essential References Not Discussed: The authors might consider referencing the latest work on Transformer conversion algorithms, as noted above. Other Strengths And Weaknesses: While the paper's overall structure is clear, the writing clarity and organization could be improved. Despite some efforts to reduce the reading difficulty (e.g., Section 3.2, lines 191–215), the explanation could be more direct in clarifying the essence of the algorithm rather than the process. Other Comments Or Suggestions: I have no additional comments. Questions For Authors: My questions are aligned with those discussed in points 1–5 above. I hope the authors will address point 1 and provide clarification on point 2. Additionally, I particularly recommend a more explicit theoretical and practical analysis of point 3. I am generally enthusiastic about this work but may reconsider my rating if some key issues remain unclear. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time in assessing our paper and their constructive comments. **3.** We consider the setting from Theorem 1: we assume that the accumulated membrane potential in the first phase is $v:=TX$, that the threshold $\theta=1$ and that $0\leq TX\leq T$ (other cases being trivial). Let us also put $S[t]=\sum_{i=1}^ts[i]$. We consider two different situations: Case 1. $TX$ is an integer. In particular, Theorem 1 (b) tells us that at each time step $t$, as long as $v\geq S[t-1]$, the expectation of having a spike at $t$ given $S[t-1]$ is $\frac{v-S[t-1]}{T-t+1}$. We note that the last quantity is never strictly bigger than 1 and never strictly below 0. In particular, the conditional expectation (of the membrane potential before spiking) is $E[v[t+1]\mid S[t-1]]=v-S[t-1]-\frac{v-S[t-1]}{T-t+1}=v[t]-\frac{v[t]}{T-t+1}$, where $v[t]:=v-S[t-1]$. The total expectation then becomes $E[v[t+1]]=E[v[t]]-\frac{E[v[t]]}{T-t+1}=E[v[t]]\frac{T-t}{T-t+1}$. Initially, we have $E[v[1]]=v=TX$. Solving this recurrence gives $E[v[t+1]]=v\prod_{i=1}^t\frac{T-i}{T-i+1}=v\frac{T-t}{T}$, i.e., $E[v[t]]=v\frac{T-t+1}{T}$. Hence, $E[s[t]]=\frac{E[v[t]]}{T-t+1}=\frac{v}{T}$. In particular, since the total number of spikes during $T$ steps is $v$ (Theorem 1. (c)), it follows that when $v$ is an integer in $[0,T]$, $S[t]$ is a random variable that follows the **hypergeometric** distribution $H(T,v,t)$. The variance of the firing rate is then $V[\frac{1}{t}S[t]]=\frac{v}{tT}(1-\frac{v}{T})\cdot\frac{T-t}{T-1}$. Note that at the final step $T$, the firing rate is precisely $X$ with variance 0. Case 2. When $TX$ is not an integer, we no longer have a nice expression for the probability of having a spike at $t$. Namely, the modified expression becomes $E[s[t]]=\frac{\min(\max(v[t],0),1)}{T-t+1}$ as $v[t]$ in this case can be strictly negative or strictly larger than 1. Hence, we cannot solve the resulting recurrence in a closed form. 
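To complement the derivation, the spiking phase can be simulated directly. The sketch below is our own illustration, assuming threshold $\theta=1$ and a spike probability of $v[t]/(T-t+1)$ clipped to $[0,1]$ (the natural reading of the Bernoulli mechanism); it numerically reproduces the deterministic spike count of the integer case and the floor/ceil behaviour of the non-integer case.

```python
import random

def tpp_phase2(v, T, rng):
    """Simulate the spiking phase of a TPP neuron with threshold theta = 1.

    At step t (1-indexed) the neuron spikes with probability v[t]/(T-t+1),
    clipped to [0, 1]; each spike subtracts 1 from the residual potential.
    Returns the total number of emitted spikes.
    """
    count = 0
    for t in range(1, T + 1):
        p = min(max(v / (T - t + 1), 0.0), 1.0)
        if rng.random() < p:
            count += 1
            v -= 1.0
    return count

rng = random.Random(0)
T = 16
# Case 1: integer accumulated potential -- the spike count is deterministic.
counts_int = [tpp_phase2(5.0, T, rng) for _ in range(2000)]
# Case 2: non-integer potential -- the count is floor(TX) or ceil(TX).
counts_frac = [tpp_phase2(5.7, T, rng) for _ in range(2000)]
```

Under this mechanism, every run with $v=5$ emits exactly 5 spikes, while runs with $v=5.7$ emit either 5 or 6 spikes, matching Theorem 1 (c).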
However, we do note that the final number of spikes at step $T$ is either $\lfloor TX\rfloor$ or $\lceil TX\rceil$ (Theorem 1. (c)). Both cases occur with non-zero probabilities $p$ and $1-p$. Then, the firing rate at the end is either $\frac{\lfloor TX\rfloor}{T}$ with probability $p$, or $\frac{\lceil TX\rceil}{T}$ with probability $1-p$. The expected firing rate and its variance are $\frac{\lfloor TX\rfloor}{T}+\frac{1-p}{T}$ and $\frac{p(1-p)}{T^2}$. In particular, we see that the firing rate stabilizes with $T$. It seems that our spiking process offers a continuous extension of the hypergeometric distribution (with continuous $v$), but we were not able to precisely identify this distribution in the existing literature. Further theoretical study of this process seems like a promising continuation of this work. To shed more light on the situation, we provide the following plots. In the first plot, we fix $T=16$ and vary $v$ continuously. We plot the probability (blue) and variance (orange) of the event $S[T]=i$, as $v$ varies through $[i,i+1]$. [Prob var](https://anonymous.4open.science/r/ICML2025_rebuttal-6705/figs_tables/prob-var.png) We also plot, for various steps $t$, the probability of having a spike at that time step as a function of the initial $v$. We notice that when $v$ is an integer, the probability is uniform over all $t$, while in other cases it varies from step to step. [Prob spike](https://anonymous.4open.science/r/ICML2025_rebuttal-6705/figs_tables/prob-spike.png) **2.** We thank the reviewer for the references provided. Comparison with [1] and [2]: We focus on Section 5.2 in [2] and in particular Theorem 3. We note that the theorem assumes that the firing mechanisms of the spiking neurons are stationary processes with a fixed given number of produced spikes. However, in our case, as we discussed above, our proposed firing mechanism is dynamic, with a bias depending on the emitted spikes. 
Furthermore, the number of spikes a TPP neuron produces is ``guaranteed'' ($\lfloor TX\rfloor$ or $\lfloor TX\rfloor+1$), which offers more control in conversion. Based on our ablation studies (please refer to the tables provided to Reviewer BQ8A), TPP neurons consistently outperform probabilistic neurons with a fixed bias, and we expect that implementing TPP with the method of [2] would further improve their respective results. We will deal with this more thoroughly in the future. **1.** You are on point in your description of the motivation behind this work and the shuffling effect (see also Theorem 1 in Appendix G). However, we do emphasize the subtle characteristics of our proposed Bernoulli firing process. **4.** Our work in progress deals with the extension of TPP neurons to approximate non-linear (activation) functions. Please refer to the **Going beyond convolution architectures** answer to Reviewer UeYr. **5.** We acknowledge that the stochastic component may induce extra energy consumption for TPP neurons. However, we argue that the low number of SOPs needed to achieve high accuracy may compensate for this energy consumption. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response. I really appreciate this work and believe that it should be accepted. If given the opportunity, I would like to see its official version with improved writing clarity. --- Reply to Comment 1.1.1: Comment: We sincerely thank Reviewer CXU9 for their thoughtful feedback and encouraging support. We are especially grateful for their positive assessment and constructive suggestions. Authors
Summary: The authors in this paper report a seemingly unusual phenomenon in the ANN-to-SNN conversion framework, wherein randomly rearranging the spikes output by SNN layers increases performance. Following this, the authors introduce a probabilistic neuronal model, namely TPP, which improves the accuracy of the resulting SNN closer to the baseline ANN. The authors evaluate their proposed approach on vision-based datasets such as CIFAR-10/100, ImageNet, etc., achieving SOTA performance. Claims And Evidence: The primary claim is substantiated with empirical results. However, there is no strong theoretical proof behind the underlying temporal misalignment phenomenon. Methods And Evaluation Criteria: The experimental results bolster the core contribution of the work. Theoretical Claims: The authors provided theoretical proofs regarding the design formulation of the TPP neurons. However, I could not find any substantial proof for the underlying temporal misalignment phenomenon. Experimental Designs Or Analyses: The authors used standard datasets such as CIFAR-10/100, ImageNet, etc. to evaluate their approach. They also examined the membrane potential distribution across different models as a means of understanding why the base models underperformed. Supplementary Material: No code was provided as part of the submission. Since this work largely rests on experimental findings, I think exploring the code would help the reviewers understand and appreciate the contributions more. Relation To Broader Scientific Literature: This work pertains to the domain of ANN-SNN conversion, which is relevant in the broader neuromorphic community. Essential References Not Discussed: The authors discussed all major references. Other Strengths And Weaknesses: Strengths: (a) Interesting observation regarding temporal misalignment. (b) The proposed model achieves SOTA performance. Weaknesses: (a) No strong theoretical justification behind the observed event. 
(b) The work mainly explores convolutional architectures. Exploring transformer-based architectures as well could provide a more complete picture. Other Comments Or Suggestions: Can explore transformer-based architectures as well. Questions For Authors: (1) For SNNs operating on longer timesteps, how many runs were done for the permuted-spikes experiment? It is very unusual that model performance would be consistently higher for all permutations. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Other Strengths and Weaknesses** (a) Theoretical justification: We argue that ``temporal misalignment'' happens primarily due to the fact that in ANN-SNN conversion, SNN models need a few time steps to accumulate enough potential to start firing. We start by noting that in ANN-SNN conversion, at a particular activation layer of the ANN, the thresholds for the corresponding SNN layer of spiking neurons are chosen based on the distribution of the corresponding ANN activations, the standard choice being the maximum activation or a percentile thereof. In any case, there will be substantial activations that fall below this threshold. We argue that a TPP neuron will produce more spikes at the first time step than a vanilla spiking neuron, because of its probabilistic nature. To make this formal, we propose the following: **Proposition** Consider an ANN neuron with ReLU activation and let the activation values follow a distribution with PDF $p(x)$. Let further $\theta>0$ be the threshold of the corresponding vanilla SNN neuron. Then: - The expectation of a spike output at step $t=1$ of a vanilla spiking neuron is $\int_{\theta}^\infty p(x)dx$. - The expectation of having a spike at step $t=1$ of a TPP neuron is $\int_{0}^\infty p(x)dx$, which is the same as the expectation of the output of the ANN neuron with ReLU activation. Many of the baselines propose to start with an initial membrane potential, in order to ``encourage'' early spiking; however, there will always be a substantial part of the values that fall below the threshold, and the above proposition still holds. Empirical evidence for this is presented in Figures 4 and 7, to which we kindly refer. This absence of spikes in the first time steps further causes an unevenness error in the spike outputs, as thoroughly discussed, for example, in AAAI'23, "Reducing ANN-SNN Conversion Error through Residual Membrane Potential." Ideally, we expect the spikes received from the preceding layer to be uniformly distributed. 
In particular, this result shows that TPP neurons, with their designed probabilistic spiking, approximate the output of the ANN neurons starting with the first time step, while for vanilla spiking neurons this approximation is coarser. Our Theorem 1 (b) then shows how this approximation evolves throughout the rest of the simulation time. **Other Comments Or Suggestions** **Going beyond convolution architectures** In our current work in progress, we consider a generalization of TPP neurons designed to approximate various non-linear activation functions, as a first step towards the conversion of various ANN architectures. In particular, the design we are currently exploring is the following (please see the following Figure for a visual explanation). [activation function](https://anonymous.4open.science/r/ICML2025_rebuttal-6705/figs_tables/act-func.png) We consider a general, nice enough activation function $f$, and a sequence of thresholds $\theta_1,\dots,\theta_T$, chosen in such a way that the steps with length $\theta_i$ and height $s=1$ approximate the function $f$ in an ``optimal'' way (we do not discuss how to choose $\theta_i$ or what optimality would actually mean). If we denote $\Theta_t:=\sum_{i=1}^t\theta_i$, the modified spiking mechanism of our TPP neuron becomes $s[t] = B(\frac{v[t-1]}{\Theta_{T-t+1}})$, $v[t+1] = v[t]-s[t]\theta_t$. We note that this mechanism generalizes our TPP in that for the ReLU activation we had $\theta_t=\theta$ for all $t$. This situation requires a more sophisticated approach for theoretical insights, and we decided to keep it separate from our submission. **Questions For Authors** For the experiments in the main body of the paper, we performed permutations with 5 different seeds and report the mean of the results. However, we kindly refer to Appendix G, where we provide further experiments concerning permutations as well as theoretical insights into their functioning (Appendix G, Theorem 1). 
We would also like to point out one curiosity here, as it is still beyond our understanding. Namely, in Figure 11, we report results when, for latency $T=4$ and the baseline model, a chosen permutation is applied to all the spiking layers of the model (so the permutation does not vary from layer to layer in a random way). We tested all 24 possible permutations of 4 elements, and for all 24 of them the performance improved. **Supplementary material** We provide anonymized code that was used in some of the experiments. [Code](https://anonymous.4open.science/r/ICML2025_rebuttal-6705/TPP_QCFS/README.md)
Summary: The paper investigates the ANN-SNN (Artificial Neural Network to Spiking Neural Network) conversion process, identifying a phenomenon called "temporal misalignment," where random permutations of spike trains across SNN layers improve performance. The authors propose a novel two-phase probabilistic (TPP) spiking neuron model to address this, featuring an accumulation phase followed by probabilistic spiking based on a Bernoulli process. Main findings include improved accuracy and reduced latency in SNNs compared to baseline methods, validated across datasets like CIFAR-10/100, CIFAR10-DVS, and ImageNet with architectures such as VGG-16, ResNet-20/34, and RegNet. Claims And Evidence: The claims are generally well-supported, but the assertion that TPP neurons are universally superior might overreach, as evidence is limited to specific datasets and architectures. Generalization to other tasks is untested, and the paper lacks discussion on failure cases or limitations. Methods And Evaluation Criteria: The focus on accuracy alone neglects energy efficiency, a key SNN advantage, which could enhance the evaluation's completeness. Theoretical Claims: Yes, theorems seem correct Experimental Designs Or Analyses: Spiking activity analysis (Tables 8-10) shows percentage differences, but the interpretation is unclear without statistical significance tests or error bars, reducing confidence in firing rate claims. CIFAR10-DVS (Table 2): Shows TPP outperforming direct training methods. The event-based dataset choice is apt, but the lack of latency or energy metrics limits depth. ImageNet Results (Table 1): Compares TPP with multiple baselines across timesteps. The use of five runs with averages and deviations (partially shown) supports validity, but incomplete data (e.g., missing T=4 for some methods) hampers full assessment. 
Supplementary Material: In parts Relation To Broader Scientific Literature: The paper builds on ANN-SNN conversion literature (e.g., Rueckauer et al., 2017a; Diehl et al., 2015) and spiking neuron dynamics (e.g., Izhikevich, 2007). The temporal misalignment concept extends prior work on phase lag (Li et al., 2022) and input unevenness (Bu et al., 2022c), offering a novel interpretation. Essential References Not Discussed: Hunsberger & Eliasmith (2015), "Spiking Deep Networks with LIF Neurons" (Neural Computation) Neftci et al. (2017), "Event-Driven Deep Neural Networks" (IEEE) Diehl et al. (2016), "Conversion of Continuous-Valued Deep Networks to Efficient Event-Driven Networks" (arXiv): Lee, Donghyun, et al. "TT-SNN: tensor train decomposition for efficient spiking neural network training." 2024 Design, Automation & Test in Europe Conference & Exhibition (DATE). IEEE, 2024. Other Strengths And Weaknesses: Strengths: Originality in identifying and leveraging temporal misalignment. Practical significance via state-of-the-art results on standard benchmarks. Clear exposition of TPP mechanics (Figures 2-3, Algorithms 1-3). Weaknesses: Limited discussion of computational cost or energy efficiency, critical for SNNs. Overemphasis on accuracy without addressing trade-offs (e.g., latency vs. precision). Clarity suffers from incomplete tables (e.g., Table 1 lacks full data) and missing proof details. Other Comments Or Suggestions: N/A Questions For Authors: 1) Energy Efficiency: Why was energy consumption not evaluated alongside accuracy, given SNNs' energy-efficient premise? Including this could strengthen the paper's practical impact—e.g., if TPP reduces energy use, it bolsters the contribution; if not, it reveals a limitation. 2) Generalization: How does TPP perform on non-classification tasks (e.g., reinforcement learning or time-series prediction)? Evidence of broader applicability could elevate the method's significance; lack thereof might narrow its scope. 
3) Statistical Significance: Can you provide p-values or confidence intervals for the firing count differences (Tables 8-10)? This would clarify if TPP’s spiking changes are meaningful, potentially affecting the perceived robustness of the approach. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their time in assessing our paper and constructive comments. **Essential References Not Discussed:** Thank you for pointing these out; we will include them in the revised manuscript. **Questions For Authors:** **1.** We reported in Tables 8-10 the number of spikes as, in general, this is a fair "measure" to use when comparing various methods for the same architectures and latency on neuromorphic hardware. However, we provide further tables that compute the energy consumption on specialized hardware. In particular, our approach follows Merolla, et al. "A million spiking neuron integrated circuit with a scalable communication network and interface", where synaptic operations were used to calculate the energy, as this approach has been adopted in the recent relevant literature. To estimate the energy consumption per SOP (FLOP), we use the values reported for the neuromorphic processor in Qiao et al. "A reconfigurable on-line learning spiking neuromorphic processor comprising 256 neurons and 128K synapses". To be fair, we also calculated potential MAC operations in our and baseline models coming from the first layer, as the sample images use constant encoding. In the following table (please refer to Table 17), we compare the energy consumption for some of the baseline methods and our approach, together with the approximated ANN energy consumption. We compare energy consumption relative to the accuracy achieved. As our method reaches the ANN accuracy using a much lower number of SOPs, it provides a more energy-efficient alternative. However, we acknowledge that the stochastic component of our method can induce further energy consumption, but we were not able to test for that. We will provide full tables and more comparison results in the updated manuscript. https://drive.google.com/drive/folders/1sbhqT9Nabl8BUIDs9dt8OFpsz1sMZK6w **Generalization** We applied our proposed method to a simple Reinforcement Learning (DQN) example. 
We compared the performance of the baseline QCFS (L=8) and TPP on the CartPole task over 20 evaluation epochs. The following table reports the average episode length (with standard deviation in parentheses) for different time horizons (T). A higher episode length indicates better performance, with a maximum possible score of 500 steps. The baseline struggles at T=4 but improves with higher latency, while TPP achieves significantly better performance at lower T values and reaches optimal performance faster. [Reinforcement Learning](https://anonymous.4open.science/r/ICML2025_rebuttal-6705/figs_tables/t16.jpg) In our current work in progress, we consider a generalization of TPP neurons designed to approximate various non-linear activation functions, as a first step towards the conversion of various ANN architectures. In particular, the design we are currently exploring is the following (please see the following Figure for a visual explanation). [activation function](https://anonymous.4open.science/r/ICML2025_rebuttal-6705/figs_tables/act-func.png) We consider a general, nice enough activation function $f$, and a sequence of thresholds $\theta_1,\dots,\theta_T$, chosen in such a way that the steps with length $\theta_i$ and height $s=1$ approximate the function $f$ in an ``optimal'' way (we do not discuss how to choose $\theta_i$ or what optimality would actually mean). If we denote $\Theta_t:=\sum_{i=1}^t\theta_i$, the modified spiking mechanism of our TPP neuron becomes $s[t] = B(\frac{v[t-1]}{\Theta_{T-t+1}})$, $v[t+1] = v[t]-s[t]\theta_t$. We note that this mechanism generalizes our TPP in that for the ReLU activation we had $\theta_t=\theta$ for all $t$. This situation requires a more sophisticated approach for theoretical insights, and we decided to keep it separate from our submission. **3.** We provide the required tests in the following tables (please refer to Tables 3-15). The results confirm that the reported changes in TPP's spiking behavior are meaningful. 
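As a sketch of the generalized mechanism above (our own illustration; the clipping of the Bernoulli parameter to $[0,1]$ and the concrete threshold values are assumptions not fixed in the text):

```python
import random

def generalized_tpp(v, thetas, rng):
    """Spiking phase with a threshold sequence theta_1..theta_T:
    s[t] = Bernoulli(v[t-1] / Theta_{T-t+1}),  v[t] = v[t-1] - s[t]*theta_t,
    where Theta_k = theta_1 + ... + theta_k.
    """
    T = len(thetas)
    prefix = [0.0]
    for th in thetas:
        prefix.append(prefix[-1] + th)  # prefix[k] == Theta_k
    spikes = []
    for t in range(1, T + 1):
        p = min(max(v / prefix[T - t + 1], 0.0), 1.0)
        s = 1 if rng.random() < p else 0
        spikes.append(s)
        v -= s * thetas[t - 1]
    return spikes

rng = random.Random(1)
# With a constant threshold sequence this reduces to the original TPP neuron:
# an integer accumulated potential of 5 then yields exactly 5 spikes per run.
outs = [generalized_tpp(5.0, [1.0] * 16, rng) for _ in range(200)]
```

With `thetas = [1.0] * T` the mechanism recovers the original TPP neuron with threshold $\theta=1$.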
https://drive.google.com/drive/folders/1sbhqT9Nabl8BUIDs9dt8OFpsz1sMZK6w
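As a footnote to the energy discussion in point **1** above, the SOP-based accounting can be sketched as follows; the per-operation energy values here are hypothetical placeholders, not the figures from the cited hardware papers.

```python
def estimated_energy_joules(n_sops, n_macs, e_sop=1e-12, e_mac=5e-12):
    """SOP-count-based energy estimate: spiking layers are charged per
    synaptic operation (SOP), while the first layer, which receives
    constant-encoded inputs, is charged per MAC. The default per-op
    energies are hypothetical placeholders; substitute measured values
    for the target hardware.
    """
    return n_sops * e_sop + n_macs * e_mac
```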
Summary: This paper presents a new framework for ANN-SNN conversion, which is motivated by an interesting phenomenon called "temporal misalignment". The authors observe that the performance of the converted SNN becomes better if they rearrange the temporal order of the output spike trains of each layer. Based on this observation, the authors propose a new method for ANN-SNN conversion, using a two-phase probabilistic spiking neuron which mimics the effect of "permuting spike trains". The method achieves SOTA results on various image classification datasets. Claims And Evidence: Clear: The bio-plausibility and hardware implementation of the proposed TPP neurons are well presented. The effectiveness and efficiency of the proposed method are verified on various image classification datasets with several network architectures, and the results are sound. Not Clear: The proposed TPP neuron seems to ignore any temporal information of the input spike trains, since the first phase accumulates the input and the IF neuron has no decay. Then why is the Bernoulli random firing mechanism needed? I think the output timing of spikes for the TPP neuron won't affect the next layer, so it won't change the final result as long as the TPP neuron just fires the expected number of spikes? Methods And Evaluation Criteria: The comparison with other ANN-SNN conversion methods is thorough and fair. Apart from accuracy, the spiking activity and membrane potential distribution are provided. Theoretical Claims: No checks Experimental Designs Or Analyses: No issues with the experimental designs in Table 1 and Table 2. In Figure 1 and Figure 4, and even in Appendix G, it's not very clear to me how the authors do the "shuffle" or "permute". The analysis doesn't make sense if it's just a random permutation of spike trains, since the authors claim the permutation can significantly improve the performance. It is not clear how "permutation" inspires the authors to propose the TPP neurons. Section 3.2 is not very informative about this. 
Why is TPP doing a (random) permutation? Supplementary Material: I went through all sections of the supplementary materials. Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weakness: The analysis of "temporal misalignment" is more empirical than theoretical to me. I'm not sure why other methods have "temporal misalignment" and why the proposed TPP neuron can solve it. Based on Figures 4 and 7, the baseline and probabilistic models should have completely different spike counts and firing rates, but in Figures 5 and 6 the baseline and TPP have almost the same spike counts. As mentioned in previous sections, it is not clear how "permutation" is related to the TPP neurons. At least it is not clearly presented in the paper. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their time and constructive comments. **Claims And Evidence: Not Clear** It is important to ensure not only that the expected number of spikes matches the value of the ANN activation, but also that these spikes are distributed in a uniform manner. In particular, even if all spiking neurons in one layer emitted the expected number of spikes, when we apply the subsequent weights the next layer will not necessarily receive the expected input corresponding to the input of the ANN layer. The key issue here is the unevenness error, as thoroughly discussed in AAAI'23, "Reducing ANN-SNN Conversion Error through Residual Membrane Potential", as it can happen that the expected number of spikes arrives too early or too late. So, ideally, we expect the spikes received from the preceding layer to be uniformly distributed. However, as spikes propagate through deeper layers, their timing often becomes increasingly irregular, leading to deviations from the expected spike count and resulting in unevenness error. For your convenience, we compared our TPP method with a naive probabilistic spiking neuron, where after the accumulation of the membrane potential $v$, the neuron emits spikes with the constant bias $v/(\theta\cdot T)$ (so spiking becomes a stationary process). The following tables show that our design outperforms the stationary process. Our Theorem 1. (b) offers an explanation, as the dynamic bias takes into account the already emitted spikes and offers a more meaningful approximation of the ANN outputs. We will provide full tables in the revised manuscript. [Table1 and Table2](https://anonymous.4open.science/r/ICML2025_rebuttal-6705/figs_tables/t1_t2.jpg) **Experimental Designs Or Analyses** The shuffling is performed in the following way. Once a baseline method is fixed, we proceed in order, layer by layer. 
After each spiking layer, we collect the spike trains of that layer and permute them in the temporal dimension (that is, we rearrange the spikes in time). These permuted spike trains are then passed to the next layer, and we continue this process until the output layer. The permutation applied after each layer is random, based on the current seed. Generalization to TPP: Conceptually, a permutation requires one to 1) collect the spike trains and 2) rearrange them in the temporal dimension. These two steps correspond to the two phases of our TPP neuron. In the second phase, TPP produces the spike train, and the total number of spikes is given by Theorem 1. (c). Furthermore, at any given time step, there is a non-zero probability of having a spike (as long as there is non-zero residual voltage). Since, in the end, the produced spike trains have a "predetermined" number of spikes and there is a possibility of a spike at every time step, it is as if TPP were acting as a permutation on a hypothetical spike train with the same number of spikes. We hope this clarification makes the connection between permutation and TPP neurons more explicit, but please also kindly refer to Section 3.2 in our paper. **Weaknesses** **Temporal misalignment in baselines** We consider the situation of an ANN neuron with ReLU activation and how its output is approximated by a vanilla spiking neuron and our proposed TPP neuron. **Proposition** Consider an ANN neuron with ReLU activation and let the activation values follow a distribution with PDF $p(x)$. Let further $\theta>0$ be the threshold of the corresponding vanilla SNN neuron. Then: - The expectation of a spike output at step $t=1$ of a vanilla spiking neuron is $\int_{\theta}^\infty p(x)dx$. - The expectation of having a spike at step $t=1$ of a TPP neuron is $\int_{0}^\infty p(x)dx$, which is the same as the expectation of the output of the ANN neuron with ReLU activation. 
In particular, this result shows that TPP neurons, with their designed probabilistic spiking, approximate the output of the ANN neurons starting with the first time step, while for vanilla spiking neurons this approximation is coarser (see also Figure 8 for empirical evidence of the above Proposition). Our Theorem 1 (b) then shows how this approximation evolves throughout the rest of the simulation time. **Different spike counts** Note that in Figure 4 and Figure 7 the distribution of membrane potential is presented for time steps 1 and 2. This is to empirically show that the baselines do not have enough membrane potential to produce spikes in the first time steps, which eventually causes approximation errors, as we discussed above. However, in Figures 5 and 8, we are comparing spike counts for latencies 8 and higher. In particular, the baselines produce more spikes; however, these spike trains are not "evenly" or "optimally" positioned in time, hence the gap in performance compared to our method. We hope that this discussion contributes to the clarity of the paper, and we will incorporate your points in the revised manuscript.
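For concreteness, the layer-wise temporal shuffle described in this rebuttal can be sketched as follows (a minimal illustration of our own, operating on nested lists of shape [T][N] rather than on the tensors used in the actual experiments):

```python
import random

def shuffle_layer_in_time(spike_train, rng):
    """Apply one random permutation of the T time steps to a layer's output.

    spike_train: list of length T, where spike_train[t] holds the binary
    spike values of the layer's N neurons at time step t. Rearranging
    spikes in time only, the shuffle preserves every neuron's total spike
    count (and hence its firing rate over T steps).
    """
    perm = rng.sample(range(len(spike_train)), len(spike_train))
    return [spike_train[t] for t in perm]

rng = random.Random(0)
layer_out = [[1, 0, 0], [0, 1, 0], [1, 1, 0], [0, 0, 1]]  # T = 4, N = 3
shuffled = shuffle_layer_in_time(layer_out, rng)
```

Because only the temporal order changes, per-neuron firing rates before and after the shuffle are identical; only the timing seen by the next layer differs.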
Learning Time-Aware Causal Representation for Model Generalization in Evolving Domains
Accept (poster)
Summary: To solve the problem of poor evolving domain generalization caused by spurious correlation between data and targets across domains, this paper proposes a time-aware structural causal model with static-dynamic causal representation learning (SYNC). SYNC introduces mutual information to constrain the model to learn static-dynamic causal representations, and produces good causal predictors by preserving intra-class compactness of causal factors both across and within domains. The results show that SYNC has better time generalization. The authors' response addressed most of my concerns about this article, so I update the score to weak accept. Claims And Evidence: Most of the claims in this paper are clear and convincing, but I still have some concerns: Modeling the drift factors $Z^d$ may not be necessary. In the structural causal model, the drift factor $Z^d$ and the dynamic factor $Z^{dy}$ are very similar. They are both affected by the local variable L and they both affect the label Y, which means that their distributions both shift as L changes, and $Z^d$ can be viewed as a dynamic factor. Methods And Evaluation Criteria: 1. In Eq. (1), $z_{<t}^{dy}$ is the condition for the prior distribution and the posterior distribution of the dynamic factor $z_t^{dy}$, but we do not know the prior distribution $p(z_{<t}^{dy})$ when we infer the posterior distribution of $z_t^{dy}$. I think the true posterior distribution of $z_t^{dy}$ is $q_{\theta}(z_{t}^{dy}|x_{<t})$ or $q_{\theta}(z_{t}^{dy}|x_{\leq t})$. 2. Eq. (3) minimizes the mutual information between static and dynamic representations to disentangle $z^{st}_t$ and $z^{dy}_t$, but Eq. (8) maximizes their mutual information to align static and dynamic representations of the same category within the domain. These two goals seem to be in complete conflict. 3. Eq. (11) uses the label $y_t$ to learn the drift factor $z^d$, and then $z^d$ is used to predict the label $\hat{y}_t$.
There can be serious information leakage, because the true label and the predicted label may be directly correlated by the model. In extreme cases, the drift factor learned by the model may be invalid. Theoretical Claims: I have checked the proofs of Proposition 3.3 and Theorem 3.6 provided in this paper, and the proof of Theorem 3.6 may be wrong. Taking the true label as the model input can cause the true label and the predicted label to be directly associated by the model and cause the model to learn invalid representations. Experimental Designs Or Analyses: I have checked the experimental designs and analyses of this paper, including the predicted performance comparison experiment of all algorithms on all datasets, and the ablation experiments of SYNC on the RMNIST dataset. This paper also shows the decision boundaries of the algorithms, the change of decision accuracy in different domains, and the independence of dynamic and static representations. Although the authors present a large number of results, I still have some concerns: 1. This paper only shows the results of the algorithms in the last few domains. The results of the algorithms in all domains need to be compared to demonstrate the generalization ability of SYNC in different domains. 2. Figures 5 and 6 compare the results of only a few algorithms, and these algorithms are not the best baselines for the corresponding datasets. Please show the results of other baselines, or explain the reasons for presenting results from only those baselines. Supplementary Material: I have reviewed the sections "Theoretical Details" and "More Experimental Details" in the supplementary material. Relation To Broader Scientific Literature: This paper attempts to solve the problem of modeling spurious associations between data and targets across domains. Prior to this, LSSAE and MMD-LSAE had attempted to model evolving patterns. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1.
This paper is important for the study of evolving domain generalization. It introduces causal learning to solve the problem of spurious correlation between data and targets modeled by existing models. 2. This paper is innovative to some extent, introducing mutual information to constrain the model to learn dynamic and static representations. Weaknesses: 1. The contribution of this paper to solving spurious associations is limited. Although this paper proposes a structural causal model that divides potential factors into causal and spurious factors, it uses mutual information to constrain the model to disentangle dynamic and static representations, without disentangling causal and spurious factors. Therefore, whether the proposed model solves the spurious association needs further proof. 2. The method in this paper is not reasonable. For example, the posterior distribution of $z^{dy}_t$ in Eq. (1) is conditional on unknown variables, Eq. (3) and (8) conflict, and Eq. (11) has label information leakage. 3. The experimental results in this paper are incomplete. The authors should present the results of algorithms in all domains of the dataset, and should highlight the results compared to the optimal baseline under different datasets. Other Comments Or Suggestions: 1. There are some symbol errors in the paper that need to be corrected. For example, $Z^{gc}$ and $Z^{lc}$ in the formula below Definition 3.1 and “variant E” in the ablation study. Questions For Authors: 1. Can the authors explain in more detail how SYNC disentangles causal factors and spurious factors? I think the authors overemphasize disentangled dynamic and static representations and ignore the more important goal, which is to eliminate the spurious associations between data and targets. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1. Modeling the drift factors may not be necessary. R1. We argue that modeling $Z^d$ is indispensable. Although both $Z^d$ and $Z^{dy}$ are influenced by $L$ and contribute to $Y$, $Z^{dy}$ operates in the feature space while $Z^d$ pertains to the label space, which is designed to capture mechanism drifts. To be clear, $Z^d$ essentially characterizes the evolution of the classifier itself, which is crucial in EDG, as also emphasized in [1-2]. Furthermore, we conduct an ablation study on $Z^d$ and show the result below, which clearly demonstrates the necessity of modeling drift factors. [1] LSSAE, ICML’22. [2] DDA, AAAI’23. ||RMNIST|Portraits|Overall |-|-|-|- |SYNC|50.8|90.8|70.8 |SYNC w/o $Z^d$|47.6|89.9|68.8 > Q2. Regarding the posterior distribution model for $z_t^{dy}$. R2. We clarify that the posterior $q_{\theta}(z_t^{dy}|z_{<t}^{dy},x_t)$ in our model is valid. In VAE frameworks, the variational posterior $q_\theta(z_t^{dy}|z_{<t}^{dy},x_t)$ is naturally learned through the inference network. The conditional dependency on $z_{<t}^{dy}$ reflects a data-driven modeling of temporal dynamics, independent of prior knowledge about $p(z_{<t}^{dy})$. Crucially, while both modeling $z_{<t}^{dy}$ and $x_{<t}$ are reasonable, $z_{<t}^{dy}$ represents a low-dimensional encoding of $x_{<t}$, compressed via LSTM hidden states to bypass high-dimensional data handling. This approach aligns with established sequence modeling practices [3], where latent variables effectively capture temporal dependencies. [3] S3VAE, CVPR’20. > Q3. About conflicting optimization goals. R3. We believe that the reviewer may have some misunderstandings. First, the misunderstanding fundamentally arises from a misinterpretation of the collider structure $Z^{st} \rightarrow Y \leftarrow Z^{dy}$, where $Z^{st} \perp Z^{dy}$ and $Z^{st} \not \perp Z^{dy}|Y$ hold. This is a well-established principle in causal science, commonly referred to as the collider effect.
Therefore, these two objectives are reasonable. In addition, the second objective does not directly take the static and dynamic factors as input. Instead, it operates on the representations processed by the causalizer. > Q4. Misunderstanding about information leakage during inference. R4. We respectfully disagree with the reviewer's view. During inference, static and dynamic causal factors are obtained by the corresponding VAE encoder and causalizer, and the drift factors are inferred solely from the learned prior $p(z_t^d|z_{<t}^d)$, without access to ground-truth labels. Hence, the concern about possible information leakage may be misplaced. Please refer to Appendix D for full details about the training and inference procedures of our approach. > Q5. Regarding showing results across all domains. R5. We stress that EDG is inherently designed for future-domain generalization. Accordingly, following the common practice adopted by most existing EDG methods such as [1, 4], we evaluate on the last third of the domains to assess this capability. However, to address potential concerns, we also report the average test performance across all domains on Circle and RMNIST, demonstrating the superiority of SYNC. ||MMD-LSAE|SDE-EDG|SYNC |-|-|-|- |Circle|92.3|91.1|93.0 |RMNIST|81.2|81.7|82.0 [4] SDE-EDG, ICLR’24. > Q6. Lack of sufficient comparison with other baselines in Figures 5 and 6. R6. For Fig. 6(a-b), LSSAE [1] is chosen as the baseline since other methods don't explicitly model both static and dynamic factors. Although MMD-LSAE, an extension of LSSAE, considers these factors, it models them deterministically, making mutual information (MI) computation infeasible under our MWS-based estimation scheme. For the remaining figures, we have added the optimal baselines and redrawn them, as shown in Fig. 3 and Fig. 4 of https://anonymous.4open.science/r/2150-1CBD. The results demonstrate that SYNC still outperforms other methods, as claimed. [5] MMD-LSAE, TPAMI'23. > Q7.
Clarification of this work's contribution to resolving spurious correlations. R7. Our method disentangles static and dynamic causal factors to learn time-aware causal representations and address spurious correlations. It works as follows: First, the MI $I(z_t^{st},z_t^{dy})$ is minimized to disentangle the factors. Then, Proposition 3.3, Lemma 3.4, and Lemma 3.5 guide the learning process. Specifically, static causal factors can be identified by maximizing the conditional MI between static components of consecutive domains given the class (Eq. (4)). Dynamic causal factors can be obtained by anchoring on static causal factors and maximizing the conditional MI within the same domain given both the class and static causal factors (Eq. (8)). Based on these theoretical results, we use causalizer modules to extract finer-grained static and dynamic causal factors by optimizing the above objectives. > Q8. Regarding some symbol errors. R8. Thanks. We will fix these errors and thoroughly revise the manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for your response, which answered some of my questions. But I still have some concerns: 1. I admit that it is necessary to infer the drift factor $Z^d$, which helps the model understand cross-domain information, but SYNC does not distinguish between $Z^d$ and $Z^{dy}$. Similar to SYNC inferring static and dynamic factors from the overall data and past data, what designs have the authors introduced to ensure that $Z^d$ and $Z^{dy}$ learn different information? 2. I understand that min-max mutual information can ensure that the causality between the static factor $Z^{st}$, the dynamic factor $Z^{dy}$ and the label $y$ conforms to the collider structure. In general, when we train a model with a min-max goal, we optimize some parameters to maximize the goal, and optimize others to minimize the goal, such as adversarial learning (or vice versa). Does SYNC do something similar?
Maximizing mutual information and minimizing mutual information in SYNC seem to be done by updating the same set of parameters. I suggest a more reasonable approach to max-min optimization. 3. I believe SYNC does not introduce the real label into the model, but Figure 4 explicitly inputs $y_{1:T}$ into the model; is this a drawing error? Or does the arrow from $y_{1:T}$ to the model mean something else? 4. I understand the main contribution of this article. However, this paper has repeatedly emphasized that dynamic factors and static factors are composed of causal factors and spurious factors, which brings confusion. In fact, SYNC does not make much design to distinguish causal factors from spurious factors. --- Reply to Comment 1.1.1: Comment: Thanks for your feedback; we will resolve the remaining concerns as follows. > Q1. The design that ensures $Z^d$ and $Z^{dy}$ learn distinct information. R1. Thanks for your comment. Here we explain the modeling of $Z^d$ and $Z^{dy}$ in detail. $Z^d$ characterizes the temporal changes in the causal factors' influence on the target. Therefore, $Z^d$ contains the structural information of the category space and indirectly models the classifier's state changes over time. Specifically, in a $C$-class classification problem, $Z^d$ is modeled as a vector in $\mathbb{R}^C$. To learn the temporal evolution of $Z^d$, we develop a network $q_{\zeta}$ that takes the historical variables $z_{<t}^d$ and the one-hot vector of label $y_t$ as inputs and outputs the current state $z_t^d$. By optimizing $\mathcal{L}_{\text{mp}}$, we can constrain $Z^d$ to contain the category space structure information and learn the evolving pattern. Unlike $Z^d$, the dynamic factor $Z^{dy}\in \mathbb{R}^D$ contains feature space semantic information, where $D$ is the dimension of latent features. We develop a network $q_{\theta}$ to capture the evolving pattern of $Z^{dy}$.
In our objectives, the reconstruction loss of data and labels preserves the semantic information of the features in $Z^{dy}$, while the KL divergence between $q_{\theta}$ and the prior $p(z_t^{dy}|z_{<t}^{dy})$ aids in learning evolving patterns. Overall, we design $q_{\zeta}$ and $q_{\theta}$ with specific losses to ensure that $Z^d$ learns the category space structure information, while $Z^{dy}$ retains the feature space semantic information. The results below show that $Z^d$ and $Z^{dy}$ learn different information. ||RMNIST|Portraits|Overall |-|-|-|- |SYNC|50.8|90.8|70.8 |SYNC w/o $Z^d$|47.6|89.9|68.8 |SYNC w/o $Z^{dy}$|47.1|89.6|68.4 > Q2. About max-min optimization. R2. Thanks. Unlike adversarial training, which maximizes and minimizes the same loss across different parameters, our method optimizes **distinct loss functions**. Specifically, we minimize the mutual information (MI) loss $I(Z_t^{st},Z_t^{dy})$ to disentangle static and dynamic factors. The conditional MI loss $I(\Phi_c^{dy}(X_t);Z_{c,t}^{st}|Y)$ between dynamic causal factors and anchored static causal factors is maximized to extract the dynamic causal factors, which is implemented by minimizing the supervised contrastive loss. Therefore, our method performs max-min optimization on different losses. In addition, we clarify that these two losses update different sets of parameters. The MI loss updates the networks $q_{\psi}$ and $q_{\theta}$, while the conditional MI loss above updates $\Phi_c^{dy}$, which includes the feature extraction component of $q_{\theta}$ and an MLP-based masker. > Q3. About $y_{1:T}$ in Figure 4. R3. Thanks for your response. This paper adopts a well-designed EDG setting [1-2], where data and labels from $T$ temporally ordered domains $\mathcal{D}\_t^{train}=\\{(x_{i,t},y_{i,t})\\}\_{i=1}^{N_t}$ are used during training, and predictions are made for future unlabeled domains $\mathcal{D}\_t^{test}=\\{x_{i,t} \\}\_{i=1}^{N_t}$ starting from $T+1$ during inference.
Figure 4 illustrates the training process of SYNC, where the input $y_{1:T}$ is used to model the posterior network $q_{\zeta}$ to approximate the prior $p(z_t^d|z_{<t}^d)$, which is used to infer the drift factor during the test phase. In future versions, we will improve Figure 4 and its caption to clearly differentiate between the training and inference processes. [1] LSSAE, ICML’22. [2] SDE-EDG, ICLR’24. > Q4. The design to distinguish causal factors from spurious factors. R4. Thanks for your feedback. We clarify that our method is effectively designed to separate causal factors from spurious ones. It models static and dynamic factors in the time domain, and then further divides them into causal and spurious factors at a finer granularity. Namely, we have $Z^{st}=[Z_c^{st},Z_s^{st}]$ and $Z^{dy}=[Z_c^{dy},Z_s^{dy}]$. To differentiate $Z_c^{st}$ from $Z_s^{st}$ and $Z_c^{dy}$ from $Z_s^{dy}$, we design two maskers: $m_c^{st}$ and $m_c^{dy}$. Each masker $m_c^{\cdot}$ ($\cdot$ denotes “st” or “dy”) takes the factors $Z^{\cdot}$ as input and outputs a 0-1 mask. This mask is then element-wise multiplied with the corresponding $Z^{\cdot}$ to model causal factors. Specifically, using $q_{\psi}^{ext}$ and $q_{\theta}^{ext}$ to learn $Z^{st}$ and $Z^{dy}$, the static and dynamic causal factors are denoted as $\Phi_c^{st}(X)$ and $\Phi_c^{dy}(X)$, respectively, where $\Phi_c^{st}=m_c^{st}\circ q_{\psi}^{ext}$ and $\Phi_c^{dy}=m_c^{dy}\circ q_{\theta}^{ext}$. After that, Lemma 3.4 and Lemma 3.5 ensure that optimizing $\Phi_c^{st}$ and $\Phi_c^{dy}$ via Eq. (4) and Eq. (8) allows the model to extract causal factors from mixed factors, separating them from spurious factors. Finally, the results shown in R2 of Reviewer RWjv demonstrate that our method effectively learns both static and dynamic causal factors.
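For illustration only, the mask-and-multiply extraction $\Phi_c = m_c \circ q^{ext}$ described above can be sketched as follows. This is a minimal numpy stand-in under assumed shapes: a single linear layer with a hard 0-1 threshold replaces the MLP-based masker, and all variable names are hypothetical:

```python
import numpy as np

def masker(z: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Stand-in for the masker m_c: maps latent factors to a 0-1 mask.

    A single linear layer with a hard threshold is used for clarity;
    a trainable masker would use a differentiable relaxation instead.
    """
    logits = z @ w
    return (logits > 0).astype(z.dtype)

def extract_causal(z: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Phi_c(x) = m_c(z) * z: the element-wise mask keeps the causal dimensions."""
    return masker(z, w) * z

rng = np.random.default_rng(1)
z = rng.normal(size=(16, 8))   # batch of mixed latent factors, e.g. z^st
w = rng.normal(size=(8, 8))    # hypothetical masker weights
z_causal = extract_causal(z, w)
# Each dimension is either zeroed out (spurious) or passed through unchanged (causal).
assert ((z_causal == 0) | (z_causal == z)).all()
```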
Summary: This paper addresses the challenge of generalizing deep models in evolving domains, where data distributions shift dynamically over time. The authors claim that existing Evolving Domain Generalization (EDG) approaches suffer from spurious correlations, which degrade their generalization ability. To mitigate this, the authors propose a time-aware Structural Causal Model (SCM) that explicitly models dynamic causal factors and causal mechanism drifts. They further introduce Static-DYNamic Causal Representation Learning (SYNC), an approach that integrates a sequential Variational Autoencoder (VAE) and information-theoretic objectives to learn time-aware causal representations. Experiments on synthetic and real-world datasets demonstrate the superiority of the proposed method over existing EDG techniques. ## update after rebuttal I believe most of my concerns were solved by the authors during the rebuttal phase, so I tend to keep my score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes, I would say the datasets and the tasks are set correctly. Theoretical Claims: Yes, no clear mistakes were found in the proofs. Experimental Designs Or Analyses: Kind of; I would say the datasets and the tasks are set correctly, and the results of each experiment also show the effectiveness of the proposed method. However, some visualizations can still be improved. Supplementary Material: Yes, I had a quick look at the code; it seems okay. Relation To Broader Scientific Literature: From what I know, and from the way the authors frame it, I think this method has the potential to be adapted for different/more extensive applications. Essential References Not Discussed: It seems the authors attempted to discuss more details, but there is still a lack of discussion; for more details see the questions listed. Other Strengths And Weaknesses: Pros: 1. The experiments are sufficient, with visible performance improvements. 2.
The method is technically sound, especially in considering the time factor when modelling causal reasoning. I think the idea is reasonable. 3. The functions and theoretical proofs seem good. Cons: 1. The idea of using the time factor for causal reasoning is reasonable, but it is not expressed very clearly. The background of why time matters for building correlations is still not clear enough. Directly saying that the time factor may cause spurious correlations is not so convincing; I think there is still room to go deeper, with more concrete examples. 2. Figure 1 can be improved with some daytime images that are recognized correctly, to show the difference between the results on daytime and nighttime images, and then to show that the trained model learned the spurious correlations. 3. The idea of capturing the time factor during causal reasoning is not new; what is the main difference between the proposed modelling and existing methods that also claim to use time factors to avoid spurious correlations? 4. From the shown visualization results, there is potential overfitting to temporal trends, e.g., every image shows the prediction heat map just in front of the camera. The model might learn domain-specific time factors rather than true causal mechanisms, since we cannot really tell from an unexplainable DNN. This may lead to reduced robustness when faced with unseen distributions that deviate significantly from training trends. Do you have an idea of how to conquer this? Other Comments Or Suggestions: Please check the questions above. Questions For Authors: Please check the questions above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1. Further analysis of spurious correlation caused by time factors and more concrete examples. R1. Thanks. Herein, we provide a formal characterization of the spurious correlation problem. In the proposed time-aware SCM, the time factor constitutes a latent confounder whose components $G$ and $L$ establish backdoor paths between static and dynamic spurious factors and the label, namely $Z_s^{st} \leftarrow G \rightarrow Y$ and $Z_s^{dy} \leftarrow L \rightarrow Z^d \rightarrow Y$, thereby introducing spurious correlations. Under this configuration, naively maximizing mutual information between features and labels (that is, minimizing the cross-entropy loss) will cause the model to learn spurious features. Specifically, the model may erroneously utilize brightness features with strong temporal correlations as discriminative cues for vehicle detection in Figure 1, rather than learning essential shape semantics. To make the example more concrete, we provide more visualization results and analysis. Please refer to R2. > Q2. Improvement on visualization to show that the trained model learned spurious correlations. R2. Thanks. To more clearly demonstrate the effectiveness of our method, we have enhanced the visualization and presented it in Fig. 2 of https://anonymous.4open.science/r/2150-1CBD. It can be found that in nighttime images, LSSAE [1] identifies the image as “No Car” based on lighting, while our method correctly focuses on the semantic information of the car, identifying it as “Car”. In daytime images, although LSSAE successfully classifies the image as a car, it relies on environmental factors, whereas our method correctly identifies it based on the car's semantic information. Therefore, a conventionally trained model may learn spurious correlations. [1] LSSAE, ICML 2022. > Q3. Comparison with methods using time factors to avoid spurious correlations. R3. Thanks.
Time-series causal modeling constitutes the research direction most pertinent to this work. Most existing time-series causal methods [2-4] construct temporal SCMs that incorporate causal factors and aim to learn causal representations based on the properties of these factors. However, due to the complexity of dynamic scenes, the behavior and nature of causal factors also become intricate, often requiring strong assumptions, such as the invertibility of the generation function and an additive noise model. In contrast, our method deliberates carefully over the majority of factors, explicitly decomposing causal variables into static and dynamic components for joint modeling. This modeling approach allows our method to learn complete causal representations by first learning easily obtainable static causal factors and using them as anchors to learn dynamic causal factors, without requiring stringent assumptions, thus endowing it with extensibility to more complex scenarios. Furthermore, the integration of causal mechanism drift enables better adaptation to the underlying data distribution. [2] Causal-HMM, CVPR’21. [3] TDRL, ICLR’22. [4] CtrlNS, NeurIPS’24. > Q4. Visualization improvements and the idea for mitigating significant deviations from the trend. R4. Thanks. In the displayed visualization results, the similarity in the heat map positions may be attributed to the fact that the vehicle's position is directly in front of the camera. We have improved the visualization, as shown in Fig. 2 of https://anonymous.4open.science/r/2150-1CBD. When faced with unseen distributions significantly deviating from the training trends, the performance of EDG methods inevitably deteriorates, as they are designed under the assumption of slow and regular distribution changes. However, our approach incorporates the learning of static causal factors, enabling the model to maintain relatively stable generalization capabilities.
To illustrate this, we randomly reorder the test domains of RMNIST and evaluate different methods. Specifically, we keep the training and validation sets unchanged, while rearranging the domains in the original test set, which previously arrived sequentially from 130°, 140°, ..., 180°, into 170°, 140°, 180°, 160°, 130°, 150°. The results in the table show that our method still outperforms the baselines. In addition, techniques such as out-of-distribution detection can be used to identify moments of significant distribution shifts and adjust the model to mitigate the impact of abrupt and violent shifts. Finally, we also conduct an experiment on the more challenging FMoW dataset, as shown in R2 of Reviewer GuDA. It can be found that our approach remains effective in generalizing in more challenging scenarios, exhibiting the potential for deployment in complex real-world applications. || LSSAE | SDE-EDG | SYNC | | --- | --- | --- | --- | | RMNIST_Reorder (Wst./Avg.) | 35.7/40.8 | 28.1/42.0 | 38.1/43.4 | --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed responses and have reviewed both their replies and the feedback provided by other reviewers. Overall, my initial concerns have been largely addressed, although some issues remain unresolved. Nevertheless, I acknowledge the potential for these concerns to be adequately addressed in future revisions. Given that several points raised by other reviewers also warrant consideration, I am inclined to maintain my original scores at this stage. --- Reply to Comment 1.1.1: Comment: We greatly appreciate your feedback and are pleased that our rebuttal has largely addressed your initial concerns. We will incorporate your valuable suggestions into the manuscript in future revisions and remain open to any further inquiries or discussions that may arise.
Summary: This paper proposes a framework called Static-DYNamic Causal Representation Learning (SYNC) to deal with distributional drift in dynamic environments for generalization. By designing a time-aware Structural Causal Model (SCM), SYNC models dynamic causal factors and causal mechanism drifts, leveraging a sequential Variational Autoencoder (VAE) framework combined with information-theoretic objectives to learn time-aware causal representations. Theoretical analysis demonstrates that this method effectively mitigates spurious correlations and learns the optimal causal predictors for each time domain. Experimental results on synthetic and real-world datasets validate its superiority and broad applicability in dynamic, non-stationary environments. ## update after rebuttal All my concerns have now been addressed. As previously suggested, I hope the authors can thoroughly revise the manuscript by adding a detailed description of the ablation study and carefully correcting the typos. Claims And Evidence: N/A Methods And Evaluation Criteria: N/A Theoretical Claims: N/A Experimental Designs Or Analyses: N/A Supplementary Material: Yes, most parts. Relation To Broader Scientific Literature: No, just within the AI research community. Essential References Not Discussed: Yes Other Strengths And Weaknesses: **Strengths:** 1. The framework is conceptually solid and interesting. The introduced SCM module integrates both static and dynamic causal factors, expanding causal representation learning to dynamic, non-stationary environments. 2. This paper provides rigorous theoretical analysis, proving that SYNC can learn optimal causal predictors for each time domain and mitigate spurious correlations effectively. 3. Extensive experiments have been done on various synthetic and real-world datasets, and the proposed method exhibits excellent generalization performance under various types of domain shifts. **Weaknesses:** 1.
The significance of Proposition 3.3 in learning dynamic causal representations is unclear. While (i) and (ii) seem reasonable here, what is their connection to the SCM? 2. Has an ablation study been conducted on the loss term $\mathcal{L}_{causal}$? What is the detailed implementation of Variants A-D? I cannot find them in the main paper or appendix. 3. The manuscript contains numerous typographical errors, thus I recommend the authors thoroughly revise their manuscript to improve clarity and readability. Other Comments Or Suggestions: - In Line 189, left half of the page. I guess X and Y should be $X := f^x(Z^{st}_c,Z^{st}_s,Z^{dy}_c,Z^{dy}_s, \epsilon_x)$ $Y := f^y(Z^{st}_c,Z^{dy}_c,Z^d, \epsilon_y)$ - Lines 253-257, right half of the page. The masker should be $m^{st}_c$ and $\Phi^{st}_c = m^{st}_c \circ q^{ext}\_{\psi}$ Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1. The significance of Proposition 3.3 and its connection to the SCM. R1. Thanks. Here we provide a detailed explanation of Proposition 3.3. **The significance for learning causal representations:** The two points of Proposition 3.3 respectively guide the learning of static and dynamic causal representations. From Proposition 3.3 (i), it can be found that among the various static factors in two consecutive temporal domains, the static causal factors $Z_c^{st}$ are those that maximize the conditional mutual information (CMI) under a given category. Since we extract static causal factors using the network $\Phi_c^{st}$, static causal representations can be learned by maximizing the CMI $I(\Phi_c^{st}(X_t);\Phi_c^{st}(X_{t-1})|Y)$. Proposition 3.3 (ii) shows that among the various dynamic factors within the same temporal domain, the dynamic causal factors $Z_c^{dy}$ are those that maximize the CMI with the static causal factors under a given category. Therefore, static causal factors $Z_c^{st}$ can serve as anchors to facilitate the learning of dynamic causal factors $Z_c^{dy}$ by maximizing $I(\Phi_c^{dy}(X_t);Z_{c,t}^{st}|Y)$. **Connection towards the SCM:** Although Proposition 3.3 can be derived straightforwardly using information theory, its inspiration actually comes from the SCM and can be intuitively explained within the SCM framework. Since the two points of Proposition 3.3 are quite similar, for the sake of simplicity, we will explain using the second point. As shown in the time-aware SCM in Figure 2, there is a clear collider structure and a fork structure, namely $Z_c^{st}\rightarrow Y \leftarrow Z_c^{dy}$ and $Z_s^{dy} \leftarrow L \rightarrow Z_c^{dy}$. According to d-separation, when conditioned on $Y$, both $Z_c^{dy}$ and $Z_s^{dy}$ are related to $Z_c^{st}$.
Consider a boundary case where $Z_c^{dy}$ and $Z_s^{dy}$ are independent; this implies that the backdoor path between $Z_c^{dy}$ and $Z_s^{dy}$ is blocked, leading to independence between $Z_c^{st}$ and $Z_s^{dy}$, while $Z_c^{st}$ remains related to $Z_c^{dy}$. Proposition 3.3 generalizes this observation: under certain entropy inequalities, static causal factors are more strongly related to dynamic causal factors than to dynamic spurious factors. > Q2. More details about the ablation study. R2. Thanks. Here we explain the ablation study in detail and illustrate the contribution of the modeled causal factors to performance. As shown in the table below, Variant A serves as the base method trained solely with the evolving pattern loss $\mathcal{L}\_{\text{evolve}}$. Variant B builds upon the base method by additionally training with the MI loss $\mathcal{L}\_{\text{MI}}$, serving as an ablation for $\mathcal{L}\_{\text{causal}}$. Variants C and D build upon Variant B by incorporating additional training with the static causal loss $\mathcal{L}\_{\text{stc}}$ and the dynamic causal loss $\mathcal{L}\_{\text{dyc}}$, respectively. We conducted additional ablation experiments on the Portraits dataset, in addition to RMNIST, and recorded the worst-case performance (W) and average performance (A). The results are presented below. First, both Variant C and Variant D show performance improvements over Variant B, validating the effectiveness of modeling static and dynamic causal factors. Then, it is clear that Variant C achieves a greater improvement in worst-case performance compared to Variant D, indicating that static causal factors can ensure stable generalization under continuous distribution shifts. However, focusing solely on them ignores the evolving pattern in EDG, limiting further generalization gains. Learning dynamic causal factors captures features related to the task's evolution over time, enabling the model to generalize better to the current distribution.
Variant D outperforms Variant C in average performance, providing evidence for this claim. Finally, SYNC learns static and dynamic causal representations jointly, achieving the best performance and demonstrating their significant contribution to overall performance.

| | | RMNIST (W/A) | Portraits (W/A) | Overall (W/A) |
|-|-|-|-|-|
| Variant A | base | 40.5/44.1 | 78.1/89.2 | 59.3/66.7 |
| Variant B | base+$\mathcal{L}\_{\text{MI}}$ | 41.9/45.7 | 78.3/89.4 | 60.1/67.5 |
| Variant C | base+$\mathcal{L}\_{\text{MI}}$+$\mathcal{L}\_{\text{stc}}$ | 44.1/48.7 | 79.8/89.9 | 62.0/69.3 |
| Variant D | base+$\mathcal{L}\_{\text{MI}}$+$\mathcal{L}\_{\text{dyc}}$ | 42.9/49.3 | 79.1/90.4 | 61.0/69.8 |
| SYNC | base+$\mathcal{L}\_{\text{MI}}$+$\mathcal{L}\_{\text{stc}}$+$\mathcal{L}\_{\text{dyc}}$ | 45.8/50.8 | 81.0/90.8 | 63.4/70.8 |

> Q3. Typographical errors.

R3. Thanks. Per your suggestion, we will thoroughly revise the manuscript to improve clarity and readability.
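As an aside on the collider argument in R1 above: the effect of conditioning on a common effect can be checked with a short numerical sketch (all variable names here are illustrative stand-ins, not the paper's code).

```python
# Quick numerical check of the collider argument: two independent causes
# become dependent once we condition on their common effect Y.
import numpy as np

rng = np.random.default_rng(0)
z_st = rng.normal(size=200_000)   # stand-in for Z_c^st
z_dy = rng.normal(size=200_000)   # stand-in for Z_c^dy
y = z_st + z_dy                   # collider: Z_c^st -> Y <- Z_c^dy

# Unconditionally, the two causes are (near-)independent.
r_marginal = np.corrcoef(z_st, z_dy)[0, 1]

# Conditioning on Y (selecting a narrow slice of it) induces dependence.
mask = np.abs(y) < 0.1
r_conditional = np.corrcoef(z_st[mask], z_dy[mask])[0, 1]

print(f"corr(z_st, z_dy)         = {r_marginal:+.3f}")    # near 0
print(f"corr(z_st, z_dy | Y ~ 0) = {r_conditional:+.3f}")  # strongly negative
```

Within the slice $Y\approx 0$ we have $Z^{dy}\approx -Z^{st}$, so the conditional correlation is close to $-1$, exactly the opening-of-the-collider effect that d-separation predicts.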
Summary: This paper proposes SYNC, a method for improving temporal generalization in evolving domains by explicitly disentangling static and dynamic causal representations. It introduces a sequential variational autoencoder (VAE) with mutual information minimization constraints to separate static and dynamic causal factors. Experimental evaluations on synthetic and real-world datasets show that SYNC achieves better generalization performance than existing causal and non-causal domain generalization methods. Claims And Evidence: Claim: *SYNC achieves improved temporal generalization by modeling dynamic causal factors and causal mechanism drifts.* This is partially supported. The experiments indeed show improvement over various baseline methods. However, results lack deeper analyses into why and how each causal factor contributes to performance. Claim of optimal causal predictor: Theoretically argued, but practically the verification is incomplete and lacks rigorous validation. Methods And Evaluation Criteria: 1. The current evaluation is primarily accuracy-based. Given the goal is generalization in evolving domains, metrics reflecting robustness (such as performance variance across evolving domains or robustness metrics under shifts) should also be considered. 2. Good coverage of existing EDG and DG methods, but lacks a detailed comparison with e.g., transformer-based methods [1,2]. [1] Domain Transformer: Predicting Samples of Unseen, Future Domains (2021) [2] Vision Transformers in Domain Adaptation and Domain Generalization (2024) Theoretical Claims: In this paper, the author presents two theoretical claims. Optimality of causal predictor for each domain is theoretically stated, but practical relevance and conditions for this optimality (e.g., assumptions like SCM correctness, Markov condition, faithfulness) remain unvalidated empirically. How sensitive is the method to violations of SCM assumptions? Experimental Designs Or Analyses: 1. 
Datasets selected (Circle, Sine, RMNIST, Caltran) are standard but somewhat simplistic or dated. How representative are these of realistic, high-dimensional, complex evolving domain generalization scenarios? For instance, how does the proposed method perform on WILDS benchmarks from Stanford (https://wilds.stanford.edu/datasets/#fmow)? 2. Limited exploration of scalability (the proposed method's overhead, complexity, and memory cost are largely omitted). There is only one relatively limited comparison of computational cost in the appendix currently. Supplementary Material: I briefly checked the proofs in the appendix but not in full depth. I also checked the additional experimental results. Relation To Broader Scientific Literature: The paper relates well to domain generalization and causal representation learning literature, clearly distinguishing itself from static causal DG methods. Essential References Not Discussed: "*Continuous Temporal Domain Generalization.*" Zekun Cai, Guangji Bai, Renhe Jiang, Xuan Song, Liang Zhao. NeurIPS 2024. Other Strengths And Weaknesses: S1. Clear methodological motivation, novel integration of causal modeling into evolving domains. S2. The theoretical analyses round up a rigorous paper. S3. Promising empirical performance improvements. W1. Over-complex method without sufficient empirical or theoretical justification for complexity. W2. Insufficient practical robustness checks (e.g., sensitivity to hyperparameters, data complexity). W3. Limited clarity and depth in experimental analysis, especially regarding scalability and real-world application feasibility. Other Comments Or Suggestions: Please refer to my comments above. My final score will depend on other reviewers' comments and rebuttal discussion, and I will adjust my score accordingly. Questions For Authors: 1. How robust is SYNC to violations of SCM assumptions, specifically regarding causal factor disentanglement and mechanism drifts? 2.
What is the computational complexity of SYNC, particularly the scalability of sequential VAE and MI computations? 3. Could you also compare with the transformer-based DG methods given their relevance? 4. Could you empirically evaluate the proposed method and other methods on more real-world benchmarks such as WILDS? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> Q1. The contribution of causal factors to performance.

R1. Thanks. We perform a further analysis of causal factors. Please refer to R2 of Reviewer RWjv for details.

> Q2. Model assumptions and their impact on performance.

R2. Thanks. Given the challenging nature of the EDG problem, most existing methods make appropriate assumptions to derive their objectives. To name a few, LSSAE [1] assumes the latent variable's Markov property, while DDA [2] assumes consistent evolution across domains. Our approach adopts a causal perspective, relying only on fundamental SCM assumptions such as the Markov condition, to build a reasonable SCM for modeling time-aware causal factors. These widely used assumptions have been validated in real-world scenarios in the literature [3-5]. Besides, our method generally outperforms other methods on real-world datasets in our paper, supporting the assumptions. Since causal sufficiency and faithfulness are relatively important among our assumptions (violating causal sufficiency disrupts static-dynamic independence, while violating faithfulness affects conditional dependencies), we conduct experiments to evaluate their impact. The results below show little performance change. Moreover, incorporating these relaxed assumptions improves performance. Finally, we conducted experiments on the more challenging FMoW. The results below show that SYNC outperforms others, demonstrating its applicability in more complex scenarios.

| | RMNIST | Portraits |
|-|-|-|
| SYNC | 49.6 | 89.9 |
| SYNC w/o sufficiency | 48.9 | 89.2 |
| SYNC w/o faithfulness | 50.8 | 90.8 |

| | LSSAE | SDE-EDG | SYNC |
|-|-|-|-|
| FMoW | 42.8 | 44.2 | 46.6 |

[1] LSSAE, ICML’22. [2] DDA, AAAI’23. [3] MatchDG, ICML’21. [4] CIRL, CVPR’22. [5] iDAG, ICCV’23.

> Q3. Comparison with transformer-based DG methods.

R3. Thanks. We discuss transformer-based DG methods here. [5] generates data from unseen domains using CycleGAN, while [6] investigates the deployment of vision transformers (ViT) in DA and DG.
However, neither directly addresses solving DG with transformers. Here we compare SYNC with two highly cited transformer-based DG methods, i.e., DoPrompt [7] and GMoE [8]. While ViTs help capture robust semantic features, these methods fail to consider the continuous temporal structure in EDG. The results below show SYNC's superior temporal generalization.

| | GMoE | DoPrompt | SYNC |
|-|-|-|-|
| Portraits | 87.9 | 88.2 | 90.8 |
| Caltran | 70.1 | 70.4 | 72.2 |

[5] DoTra, IJCNN’22. [6] Vision...Generalization, Neural Comput Appl’24. [7] DoPrompt, arXiv. [8] GMoE, ICLR’23.

> Q4. Evaluation on more real-world benchmarks.

R4. Thanks. We evaluate our method and the baselines on FMoW; the results are detailed in R2. It can be seen that SYNC remains effective in generalizing in more challenging scenarios, exhibiting the potential for deployment in complex real-world applications.

> Q5. Computational complexity and scalability analysis.

R5. Thanks. Our network largely follows LSSAE [1] and MMD-LSAE [9], with the only addition being two MLP-based maskers. Similar to them, the time complexity and memory complexity of the sequential VAE are $\mathcal{O}(T\cdot B\cdot [\sum\_{l=1}^LH^{l}W^{l}C^{l-1}C^l(K^l)^2+D^2])$ and $\mathcal{O}(T\cdot B\cdot [\sum\_{l=1}^LH^lW^lC^l+D])$ respectively, where $T$ is the number of time domains, $B$ represents the batch size, and $D$ is the dimension of the latent features. For the $l$-th layer of the decoder, $H^l$ and $W^l$ denote the output feature map size, $C^l$ represents the output channels, and $K^l$ denotes the kernel size. For the MI loss function, the time complexity and memory complexity are $\mathcal{O}(T\cdot B^2\cdot (D+1))$ and $\mathcal{O}(T\cdot B\cdot D)$, respectively. In the implementation, $D$ and $K$ are set to relatively small values ($D=32, K=5$) and the complexity is acceptable. Besides, we conduct an experiment on FMoW using DenseNet-121. The results show the effectiveness of our approach in challenging real-world scenarios. Additionally, we record memory usage and runtime per iteration.
The results below show that our method achieves better performance at almost the same cost.

| | LSSAE | MMD-LSAE | GI | SYNC |
|-|-|-|-|-|
| FMoW | 35.2G/0.79s | 35.3G/0.80s | 28.6G/12.6s | 35.3G/0.87s |

[9] MMD-LSAE, TPAMI'23.

> Q6. Comparison with Koodos [10].

R6. Thanks. Koodos considers continuous temporal DG and models the evolving pattern via Koopman theory. Our method is tailored to EDG and enhances temporal generalization by tackling spurious correlations through time-aware causal representations. We evaluate Koodos on two datasets and the results below verify the effectiveness of SYNC.

| | Circle | RMNIST |
|-|-|-|
| Koodos | 81.4 | 44.6 |
| SYNC | 84.7 | 50.8 |

[10] Koodos, NeurIPS’24.

> Q7. Practical robustness checks.

R7. Thanks. We conduct a sensitivity analysis on $\alpha\_1$ and $\alpha\_2$; the results, shown in Fig. 1 of https://anonymous.4open.science/r/2150-1CBD, indicate insensitivity within a certain range. Additionally, our evaluation on FMoW shows the method's adaptability to more complex scenarios.

--- Rebuttal Comment 1.1: Comment: Thanks to the authors for the rebuttal. I’ve read through your responses as well as the other reviewers' comments. Most of my main concerns, especially regarding the empirical validation and efficiency of the proposed method, have been addressed. Given the clarifications and the effort put into the rebuttal, I’m increasing my score from 2 to 3.

--- Reply to Comment 1.1.1: Comment: Thank you very much for your detailed feedback and valuable suggestions. We are pleased to hear that our responses have addressed your main concerns. In future revisions, we will carefully incorporate your suggestions into the manuscript. Once again, we sincerely appreciate your thoughtful and constructive engagement with our work.
Counting in Small Transformers: The Delicate Interplay between Attention and Feed-Forward Layers
Accept (poster)
Summary: This paper shows how a small Transformer can implement robust counting by arranging token embeddings with sufficiently low overlap and then leveraging architectural components (like softmax and BOS tokens) to preserve that separation under mixing. The authors demonstrate that different choices in hyperparameters control the network’s expressiveness to store and retrieve count information. Claims And Evidence: The authors claimed that the histogram task can be learned via a relation-based approach or an inventory-based approach. The claims are supported by: 1. Explicit weight constructions to realise the expressiveness for perfect outputs. 2. Experiments on synthetic sequences. Methods And Evaluation Criteria: The method is based on a specific simple Transformer architecture. It seems to be reasonable for understanding the effects of different hyper-parameters. Theoretical Claims: The propositions in Section 4 seem to be sound. Experimental Designs Or Analyses: The experiment designs are reasonable. Supplementary Material: No. Relation To Broader Scientific Literature: This work analysed the roles of BOS tokens and some other Transformer configurations, which is related to earlier work on RASP(-L) (Weiss et al., 2021). Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The analyses seem very rigorous. Weaknesses: - A toy-like model on a specific task - Little analysis of the training dynamics. The analyses seem to be based on perfectly trained end-models. Other Comments Or Suggestions: It would be interesting to have more analysis on how the geometric features change during the training. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
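For concreteness, the histogram task this review refers to can be specified in a few lines (a minimal sketch based on the task description in the reviews; the function name is ours, not the paper's code):

```python
# Reference implementation of the histogram task discussed in the reviews:
# for each position in a token sequence, output how often that position's
# token occurs in the whole sequence. This specifies the task itself, not
# the transformer models under study.
from collections import Counter

def histogram_targets(tokens):
    counts = Counter(tokens)
    return [counts[t] for t in tokens]

print(histogram_targets(list("abcaab")))  # -> [3, 2, 1, 3, 3, 2]
```

The models studied in the paper are trained to reproduce exactly these per-position counts from the raw token sequence.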
Rebuttal 1: Rebuttal: Dear reviewer Jx43,

We thank you for reading and evaluating our work, and for providing us with feedback. We are glad you generally appreciate the soundness of our work and the approach of changing hyperparameters to understand which parts of the architecture are impactful. You are indeed correct in that our analysis rests on perfectly trained models (or as perfect as it gets, when the model does not afford a perfect solution). While we did observe some peculiarities in the training dynamics of the different architectures, we have not formally investigated the training dynamics themselves and therefore have not shared results in the manuscript. These informal observations showed that e.g. linear mixing seems to exhibit a staircase behaviour in the loss, learning one letter of the alphabet at a time. In contrast, the dot-product mixing did not show such a staircase. Intuition from other toy problems (such as induction heads) also leads us to believe that the single counting subspace that arises in the models with BOS emerges abruptly. While we find these very preliminary intuitions gained during our experiments very interesting, they warrant a more formal treatment which we hope to tackle in future work. We hope that this answer somewhat satisfies your curiosity. However, the above statements are very speculative, so we would prefer not to include them in the manuscript. If you have further questions, feedback or concerns you would like to discuss, don’t hesitate to get back to us. Best, The authors
Summary: This paper investigates the counting mechanism behind transformer blocks using the histogram counting task as a case study. Two types of counting tasks were studied: relation-based counting leveraging attention for pairwise token comparisons, and inventory-based counting using feed-forward layers to memorize token counts via orthogonal embeddings. The study reveals that some minor architectural choices, such as embedding size, token-mixing mechanisms, and softmax application, significantly influence the model's ability to count accurately. The study highlights the delicate balance between attention and feed-forward layers in small transformers and provides theoretical insights into the architectural determinants of counting capabilities. Claims And Evidence: **Claims are supported by evidence.** This study claims that counting performance in transformers depends on architectural design choices. Empirical results first confirm that attention facilitates pairwise comparisons (relation-based counting), while feed-forward layers store token counts (inventory-based counting). Theoretical analyses then further demonstrate that softmax normalization and embedding size are two key variables. Methods And Evaluation Criteria: No specific method is described by the paper. Theoretical Claims: The paper developed several theories to analyze the countability under ideal conditions. The reviewer did not check the theory very carefully, but the assumptions on embedding orthogonality and softmax effects seem to make sense. Experimental Designs Or Analyses: The paper only conducts toy-level experiments. The experimental design for the histogram task using variants of token mixing models, including bos, lin, and dot, and with/without softmax, was analyzed. Supplementary Material: The reviewer read the proof part. The mathematical proofs are well structured and seem correct. Supp. A serves as a graphic description for the mandates. Relation To Broader Scientific Literature: NA.
Essential References Not Discussed: Sufficient. Other Strengths And Weaknesses: # Strengths: - **An enlightening work**: As far as I can tell, this is the first work analyzing the countability of transformer blocks, which advances our understanding of transformer mechanisms. The discussion of d and T points out an optimization direction for subsequent work. - **Theory contribution**: The paper derives theoretical bounds for minimal embedding dimensions and the role of softmax in error reduction. # Weaknesses: - **Unfriendly figure illustrations**: The explanations of some figures are not immediately understandable to the reader. Perhaps they could be expressed in a more friendly way (e.g., Figure 4). - **Practical applicability**: The current experimental setup may be too simple. The observations may not generalize well to other real-world counting tasks, e.g., visual counting. Other Comments Or Suggestions: In real-world scenarios, d may be significantly smaller than T, and computations cannot converge to infinite precision. The counting robustness may be discussed further. Questions For Authors: I cannot see the point of Figure 4 (Right). A more friendly explanation is expected. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer UA3n,

We thank you for taking the time to evaluate our work and give us feedback. We are especially pleased that you find the aspect of how softmax and the dot-product attention influence robustness interesting. We agree that this point deserves further discussion, and for a camera-ready version we would like to use part of the extra page to give this aspect more space. Thank you also for pointing out that Figure 4 (right) is difficult to understand from the caption. Let us elaborate here: Recall that the token mixing mechanism (one of lin, dot, dot_BOS, etc.) computes the weights of the matrix $A(\bar x)$ used to mix the input embeddings $\bar x$, where we denote a single weight as $a_{ij}$. So the token at position $\ell$ after the token mixing is $\bar x^\prime_\ell = \sum_{i = 0}^L a_{i\ell} \bar x_i + \bar x_\ell$. If we know which letters were in the token sequence (recall that $x_\ell$ is the letter of the alphabet at position $\ell$ and $\bar x_\ell$ is the embedding of that letter in the $d$-dimensional space), we can also write the mixed token directly as a sum over the alphabet $\bar x^\prime_\ell = \sum_{t \in \mathcal T} \alpha_t e_t + e_{x_\ell}$, where $\alpha_t = \sum_{i =0}^L \delta(x_i = t) a_{i\ell}$. In this context, Figure 4 examines how the feature extractor depends directly on different compositions of tokens that are mixed with different magnitudes of $\alpha_t$. We select three letters, and combine weighted sums of them to see how the values $\alpha$ relate to the final count that is predicted by the feed-forward layer. We look at the case where the softmax function is part of the token mixing, so we have $\sum_{t \in \mathcal T} \alpha_t = 1$ guaranteed. This allows us to isolate the decision boundaries of the feature extractor along the different counts in terms of $\alpha$ in the plots on the right-hand side.
We find this a useful perspective, because it allows us to qualitatively compare how the feature extractors learned by lin+sftm and dot+sftm differ. We clearly observe that in one case (dot+sftm) the decision boundaries scale non-linearly in $\alpha$, whereas in the case of lin+sftm they are almost linear. This verifies our hypothesis that the architectures lead to different solutions and shows the analogy with our manual constructions. We also discuss at a later point in the manuscript that these different scalings influence the robustness of the feature extractor. We hope that this response clarifies the introspection experiment from Figure 4 further, and we will expand on this aspect in an improved version of the manuscript to make it more friendly to the reader. If you have further feedback, doubts or concerns you would like to discuss, don’t hesitate to get back to us. Best, The authors
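The mixing computation walked through in this rebuttal can be sketched in a few lines of numpy (an illustrative sketch under our own variable names and toy sizes, not the paper's code):

```python
# Softmax token mixing as described in the rebuttal: mix embedded tokens
# with row-stochastic weights, then aggregate the weights per letter into
# alpha_t. With softmax in the mixing, the alpha_t sum to 1.
import numpy as np

rng = np.random.default_rng(0)
T, d, L = 3, 8, 6                  # alphabet size, embedding dim, seq length
E = rng.normal(size=(T, d))        # letter embeddings e_t
x = rng.integers(0, T, size=L)     # token sequence x_1..x_L (letter indices)
X = E[x]                           # embedded sequence, shape (L, d)

def softmax(s):
    s = s - s.max(axis=-1, keepdims=True)
    z = np.exp(s)
    return z / z.sum(axis=-1, keepdims=True)

A = softmax(X @ X.T)               # mixing weights a_{i,l}; each row sums to 1
X_mixed = A @ X + X                # x'_l = sum_i a_{i,l} x_i + x_l

# Per-letter aggregation for position l: alpha_t = sum over positions i
# holding letter t of the mixing weight a_{i,l}.
l = 0
alpha = np.array([A[l][x == t].sum() for t in range(T)])
print(alpha, alpha.sum())          # alpha sums to 1 under softmax mixing
```

This makes the rebuttal's point concrete: the softmax constrains the per-letter weights to the simplex, which is what lets the decision boundaries be plotted directly in terms of $\alpha$.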
Summary: This paper investigates how small transformer architectures implement counting mechanisms in a controlled histogram task. The study identifies two distinct counting strategies: relation-based counting, which leverages local pairwise token comparisons, and inventory-based counting, which relies on memorization through a feed-forward network. The choice of counting strategy is influenced by hyperparameter configurations, such as embedding size, token-mixing mechanism, and the presence of softmax in attention. Claims And Evidence: The claims made in the submission are generally supported by clear and convincing evidence. The paper provides both theoretical constructions and empirical experiments to substantiate its findings. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. The histogram task is a well-defined and controlled setting for studying counting mechanisms in transformers. Theoretical Claims: The arguments are logically structured, leveraging dot-product attention properties and embedding orthogonality. Experimental Designs Or Analyses: The task is simple and well-suited for isolating the effects of model components. The authors systematically vary key hyperparameters (embedding dimension, feed-forward width, and attention mechanisms) and analyze accuracy trends, supporting their claims. However, while their empirical results align with theoretical expectations, further testing on more complex tasks or different datasets could strengthen generalizability. Supplementary Material: The Supplementary Material provides detailed proofs, explicit weight constructions, and additional experimental analyses supporting the main claims. Relation To Broader Scientific Literature: The paper builds on mechanistic interpretability and transformer analysis, aligning with prior work on counting tasks, attention mechanisms, and neural network generalization. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. The architecture is confined to a single-layer transformer, which may limit its practical applications. 2. The experiments were conducted on a simple task and dataset. Other Comments Or Suggestions: N/A Questions For Authors: 1. What is the potential for extending it to more complex structures, such as multi-layer architectures and multiple attention heads? 2. Can the proposed method be generalized to more complex tasks and larger datasets? 3. What insights do the findings presented in the paper provide? How might these insights inspire future research in architecture design? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer fzMo, We thank you for taking the time and effort to evaluate our work. We are glad you found it clear, systematic and rigorous and that you also appreciate the controlled setting that the histogram task provides for our analysis. While we agree with you that these very same properties limit the generalizability of our analysis, we do think that there are some broader findings that we can take away from this -- we discuss them in the context of the questions you posed: 1. > What is the potential for extending it to more complex structures, such as multi-layer architectures and multiple attention heads? As mentioned in the response to reviewer S9FV, an empirical analysis of counting was conducted for 2-layer transformer architectures in appendix E.8, where we observed a similar phenomenology as for the case of a single layer. However, it remains unclear whether the mechanisms implemented by a two layer net correspond to the IC and RC mechanisms described for a single layer. We also believe that several layers help with the robustness of the network to entangled embeddings and allow for more self-corrections when there is noise from overlapping token embeddings, and extending our theoretical analysis to this setting would be interesting future work. In the current version of our work we have not investigated multiple heads but preliminary experiments suggest a similar picture and we will provide specific results in a camera-ready version. Indeed, the question of what the specific functions of the heads are during counting has been analyzed in a similar context very recently in the literature, see https://arxiv.org/abs/2502.06923. 2. > Can the proposed method be generalized to more complex tasks and larger datasets? We interpret this question as asking how far the RC and IC mechanisms, as well as our analysis on entangled embeddings, generalize to more complex tasks and datasets. 
We anticipate that the algorithmic circuits that we discovered can be composed with other functions, which allows them to theoretically be applied to other datasets and tasks as well, when processing the input requires counting as a “subroutine”. In terms of the entangled embeddings and how they can be corrected via softmax and self-attention, our analysis is perhaps more general. It provides a framework to understand for which specific architectures (almost) orthogonality is needed for counting. More generally, we hope that rigorous and systematic empirical studies of how different architectures and parameterizations influence the learned algorithms are more broadly recognized as a tool of choice in the regime where tasks are complex and large, yet broad simulations of tasks are feasible. 3. > What insights do the findings presented in the paper provide? How might these insights inspire future research in architecture design? Our findings reinforce the notion that small details in the architecture matter, which can have fundamental impacts on how given models perform on given tasks, in agreement with other works that touch upon similar themes (e.g. https://arxiv.org/abs/2402.01032). In addition, our observation on the beginning of sequence token, which can drastically reduce the parameter size and input dimension needed for the counting task, adds to the growing evidence that “free tokens” help transformers to execute more complicated functions. Finally, it would be interesting to better understand how robustness and self-correction, as seen via the self-attention in the single layer transformer, play out in larger networks, and whether tweaks of the architecture can further reinforce this behaviour. We hope that our response clarifies how we expect our analysis to transfer to more complex settings and how it affirms specific research directions in architecture design. If you have further questions or concerns you would like to discuss, feel free to get back to us.
Best, The authors

--- Rebuttal Comment 1.1: Comment: Dear Authors, Thank you for the clarification regarding the extension and your insights. I tend to maintain my rating. Best regards,
Summary: This paper explores the delicate interplay between the attention mechanism and the feed-forward layers, and further offers deep insights into how subtle architectural choices can drive algorithmic behavior in Transformer-based models. As an example, the authors investigate how small transformer models tackle the histogram task - counting token occurrences in a sequence. The paper identifies two primary counting strategies: relation-based counting, which leverages attention for local token comparisons, and inventory-based counting, where the feed-forward layer memorizes token identities to aggregate counts. The authors provide explicit theoretical constructions for both strategies and back them up with extensive experiments that analyze how factors like embedding dimension, hidden layer size, and softmax application affect performance and robustness. Claims And Evidence: The paper’s claims about the feasibility of perfect histogram task solutions under different parameter regimes (e.g., d >= T, p = T) are supported with constructive proofs and empirical results. The observed phase transitions in model accuracy (Figure 1) closely match the theoretical predictions, enhancing credibility. Methods And Evaluation Criteria: While the paper does not introduce a new method in the conventional algorithmic sense, its design of the histogram task as a diagnostic probe, and the use of a structured hyperparameter grid search, serve effectively as methodological tools to reveal the architectural properties of small transformers. Theoretical Claims: Probably. Experimental Designs Or Analyses: The main experiments (Fig. 1 and 2) demonstrate the impact of different hyperparameter combinations on model performance, and the analysis of the paper starts here. They clearly validate how the model performs under different settings (Sec. 4.1 and 4.2).
Afterwards, the authors provide the attention matrix and feed-forward prediction to help explain why the model works or struggles in each setting. Supplementary Material: I reviewed the additional experimental results, the generation of data, and the brief introduction of counting with large language models. It is somewhat difficult for the reviewer to fully understand the mathematical proofs in parts B and C. Relation To Broader Scientific Literature: The paper advances the field by providing explicit constructions and detailed phase-space analyses that clarify how subtle design choices lead to different computational strategies in transformers. The paper identifies two distinct counting strategies and deepens the understanding of how architectural choices affect the solution space. Moreover, the discussion on the impact of embedding orthogonality and mutual coherence is also inspiring. Essential References Not Discussed: Not applicable. Other Strengths And Weaknesses: Strengths: 1. The paper not only gives a clear theoretical construction of how small Transformers can achieve counting tasks (including relation-based and inventory-based counting), but also verifies these theoretical predictions through rigorous experiments. In addition, the authors provide sufficient theoretical derivation and clear experimental demonstration. 2. By carefully analyzing the role of attention mechanisms and feed-forward layers under different hyperparameters, the paper provides valuable insights into the inner workings, parameter efficiency, and algorithmic implementation of the Transformer. In addition, the authors' analysis provides inspiration for studying how to better apply Transformer-like models to other tasks. Weaknesses: There are certain limitations in the scope of tasks and discussions on practical applications. This article mainly focuses on the relatively simple task of counting, and most of the experiments are based on a single-layer Transformer.
Although this helps to gain a deeper understanding of the basic mechanism, the applicability of its conclusions to more complex practical tasks or multi-layer, large-scale models still needs further verification or remains questionable. In addition, the article mainly focuses on mechanisms and theory, and lacks discussion of how to apply these findings to actual large-scale models or to solve real-world problems. Other Comments Or Suggestions: No. Questions For Authors: 1. In the paper, the term Token Mixing is used to generalize across the self-attention mechanism. Could the authors clarify whether this abstraction is primarily conceptual, or if it implies a formal equivalence? Are there theoretical reasons to treat attention as a subclass of token mixing, especially in the context of algorithmic tasks? 2. Given that the current study focuses on single-layer architectures and a relatively simple task (histogram counting), do the authors anticipate that the identified mechanisms (RC and IC) would generalize to multi-layer models or more complex algorithmic tasks? Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer S9FV, We thank you for taking the time and effort to carefully evaluate our work, including the larger part of the supplementary material. We are glad you found it clear and rigorous as well as providing an inspiration for studying how transformers solve other algorithmic tasks; and that you appreciated the analysis of embedding orthogonality and mutual coherence. We are also in agreement that investigating these mechanisms in a more practical context with larger models is exciting future work that hopefully allows us to connect our theoretical insights with real-world applications! In the following we would like to answer your questions: > In the paper, the term Token Mixing is used to generalize across the self-attention mechanism. Could the authors clarify whether this abstraction is primarily conceptual, or if it implies a formal equivalence? Are there theoretical reasons to treat attention as a subclass of token mixing, especially in the context of algorithmic tasks? As you have noticed, we use the term token mixing to refer to the mechanism in the network that applies a weighted sum of tokens along the token dimension (sentence length), and we use feature mixing which analogously applies on the feature dimension. On a conceptual level, this clarifies that the two different blocks act along different dimensions on the input - this framing has been used before, e.g. in https://arxiv.org/abs/2105.01601. For the token mixing specifically, the way in which we define it helps us to formally unify the different flavours of self-attention that we examine in our experiments, e.g. using an activation or not, as well as the linear attention. We can therefore say that attention is formally a form of token mixing. 
> Given that the current study focuses on single-layer architectures and a relatively simple task (histogram counting), do the authors anticipate that the identified mechanisms (RC and IC) would generalize to multi-layer models or more complex algorithmic tasks? For the running example of the histogram task, in Appendix E.8., we briefly discuss that the phenomenology of the phase transitions we observed for single-layer models also transfers to the setting with two layers. This is reasonable, since it is formally possible to construct a single layer of token + feature mixing as the identity function, hence the construction for the one-layer case naturally generalizes to more than one. While it could very well be possible that an extra layer affords solving the task with e.g. fewer dimensions, we do not observe strong evidence of this. However, at the same time it is unclear which exact mechanism is at play in the learned models, and how the RC or IC mechanisms would distribute over the layers. As for more complex algorithmic tasks, we do expect these mechanisms to generalize to tasks that involve a counting operation on a finite alphabet. We hope that our response gives you some additional insights on our work as well as the surrounding literature. If there are further questions or concerns you would like to discuss, feel free to get back to us. Best, The authors
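For concreteness, the distinction drawn in the response above can be sketched in a few lines of NumPy. This is an illustrative simplification of the token-mixing/feature-mixing framing, not the exact constructions from the paper:

```python
import numpy as np

def token_mixing(X, W_tok):
    # Mixes along the token (sequence) dimension: each output token
    # is a weighted sum of all input tokens.
    # X: (seq_len, d_model), W_tok: (seq_len, seq_len)
    return W_tok @ X

def feature_mixing(X, W_feat):
    # Mixes along the feature dimension: each token's features are
    # recombined independently of the other tokens.
    # X: (seq_len, d_model), W_feat: (d_model, d_model)
    return X @ W_feat

def linear_attention(X, W_q, W_k, W_v):
    # Linear (activation-free) attention is a form of token mixing
    # whose mixing weights are themselves computed from the input.
    scores = (X @ W_q) @ (X @ W_k).T   # (seq_len, seq_len)
    return scores @ (X @ W_v)          # weighted sum over tokens
```

The abstraction is visible in the shapes: `W_tok` acts on the sequence axis while `W_feat` acts on the feature axis, and attention simply replaces the fixed mixing matrix with an input-dependent score matrix.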
From Crowdsourced Data to High-quality Benchmarks: Arena-Hard and Benchbuilder Pipeline
Accept (poster)
Summary: This paper proposes Bench-O-Matic, a pipeline for curating high-quality benchmarks (Eval-O-Matic) from large volumes of crowdsourced queries. The pipeline combines hierarchical clustering with a set of LLM-based filters keyed to “prompt quality” dimensions (e.g., complexity, specificity, domain knowledge). The resulting prompt set is evaluated with a novel suite of metrics—like confidence agreement—that emphasize how effectively a benchmark can distinguish models and align with human preferences. Experimental comparisons show Eval-O-Matic outperforms or rivals popular benchmarks such as MT-Bench and AlpacaEval on both model separability and alignment with human preference rankings, all at lower cost. Claims And Evidence: Yes. The experiments are extensive. Methods And Evaluation Criteria: Yes. Theoretical Claims: NA. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. I reviewed them all. Relation To Broader Scientific Literature: The solution proposes a pipeline to construct benchmarks from crowdsourced data selected by an LLM with customized prompts, which is of practical value in general. Essential References Not Discussed: No. Other Strengths And Weaknesses: 1. Clear, Straightforward Method: Despite involving multiple steps (clustering, filtering, LLM-based annotation, and evaluation), the overall pipeline is easy to follow. 2. Novelty of Automated Curation: Reliance on LLMs for prompt selection is an interesting approach that addresses a real challenge—continual benchmark development without human-in-the-loop curation. 3. Empirical Gains & Metrics: The benchmark produced (Eval-O-Matic) yields high model separability and near-human ranking alignment. The authors propose new measures (e.g., confidence agreement) to quantify a benchmark’s ability to differentiate and rank models reliably. 4.
Potential Impact: By open-sourcing the pipeline, others can frequently generate fresh benchmarks, mitigating the common pitfalls of data leakage and benchmark saturation. Other Comments Or Suggestions: 1. Reliance on LLMs in Multiple Stages: The pipeline depends heavily on LLMs both for scoring prompt quality and for evaluating final responses. While the authors attempt validation, additional depth—e.g., human judgments or expanded evidence for each “key quality”—could improve trust. 2. Limited Generalization Evidence: Demonstration of generalizability is mainly restricted to one additional dataset (WildChat) with a relatively simple baseline for comparison. 3. Missing Ablations and Details: The paper does not fully specify the mechanics of how many random seeds are applied, how precisely bootstrapping is conducted, or how the confidence intervals are chosen. This omission leaves readers uncertain about the exact procedure for computing separability and the sensitivity of that metric. Key design choices (like cluster count, weighting of quality dimensions, or dropping certain qualities) are not rigorously tested, which might leave readers unsure about the pipeline’s sensitivity to different configurations. 4. Minor Presentation Issues: Figure 1 and the key qualities are unreferenced, which may impede clarity. More elaboration on the seven quality criteria and how they were selected would be helpful. Questions For Authors: Could you intuitively show why the proposed metrics can outperform Spearman correlation? Does it really matter, given your experiment on 20 models? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback. Below, we address your concerns and propose our revisions. **W1**: We acknowledge that our pipeline heavily utilizes LLMs for prompt quality scoring and response evaluation. To address this concern and enhance credibility, we validated annotation quality through majority voting among state-of-the-art models, achieving an 85.6% agreement rate with GPT-4-Turbo on 200 prompts in the validation set. After reading your review, we integrated human judgments by manually labeling a subset (50 out of 200) of the validation set, yielding an 84.3% agreement between GPT-4-Turbo and human labels, and a 94.9% agreement between majority votes from LLMs and human labels. While OpenReview rules currently prevent us from updating during the discussion period, we assure you this important validation step will be included in the final manuscript to enhance reproducibility and credibility (in Section 4.2). **W2**: We agree that demonstrating broader generalization is important. Although our primary experiments focused on Chatbot Arena, we also evaluated the pipeline on the WildChat dataset (Wild-O-Matic), obtaining comparable improvements in benchmark quality. Both datasets are crowdsourced from real-world users, aligning with our goal of creating benchmarks reflective of genuine user interactions. By validating our method on two distinct real-world datasets, we demonstrate sufficient generalization of our approach. We further highlight that benchmark validation is resource-intensive, as each evaluation of 20 models incurs significant costs (~$600 per validation cycle). Hence, we strategically focused our ablations on different LLM judges, quality annotation models, and datasets (Section 4). **W3**: - We used a fixed random seed (42) across all experiments, including bootstrapping, and will explicitly document this detail in the revised manuscript (Section 6.1).
- Confidence intervals were calculated using 100 bootstrap iterations and are 95% unless specified otherwise; this clarification will also appear in Section 6.1. - We will document the minimum cluster size (8) and note that the number of resulting clusters varies based on datasets and min cluster size (Section 4.1). Regarding ablations on dropping qualities, we clarify that our experiments (Section 4.3, Figure 3) effectively cover this aspect by demonstrating improved separation between strong and weak LLMs as the number of included qualities increases. As we drop qualities, we see less distinction between strong and weak LLMs (e.g. GPT-4 vs Llama-2-70B and Claude-Opus vs Claude-Sonnet). We will further emphasize this connection in the revision. **W4**: Thank you for the suggestions. In our revised paper: - Figure 1 will be referenced in related works. - Elaboration on the selection of the quality criteria, and their specific contributions, will be detailed in Appendix C (p. 15). **W5**: Spearman correlation only tells you whether two rankings share a similar order: if Model A is above Model B under one benchmark, does that same ordering hold under human judgments? However, it glosses over two crucial aspects: - Confidence or “Separation” Among Models: Even if two benchmarks induce the same ordering of models, some may do so with very different levels of certainty. In practice, it matters whether “Model A beats Model B” is a large-margin result the benchmark can replicate consistently, or whether it’s a fragile result based on noise. Our “Separability” metric explicitly checks how often two models have non-overlapping confidence intervals in their benchmark performance. If a benchmark repeatedly yields overlapping intervals, you cannot reliably conclude that the top model is truly better. 
- Magnitude of Differences (Beyond Rank): Our “Brier Score,” for example, rewards a benchmark not just for correctly ranking pairs of models, but also for assigning appropriate probabilities to those rankings. A pairwise victory by 90–10 is stronger evidence of a real performance gap than a 55–45, yet Spearman treats them identically so long as the order is preserved. Even with 20 models, fine distinctions can be critical. Modern LLMs often cluster close to one another in quality, so you need to know if a newly finetuned model is truly outperforming a similar rival or if the difference is effectively within the margin of error. If your benchmark and evaluation metrics don't capture this level of granularity and confidence, you may end up training or deploying a model you think is better but is actually equal (or even worse) when considering statistical noise. We will expand on this distinction clearly in the revised manuscript. --- We greatly appreciate the reviewer’s constructive feedback, which significantly enhances the quality and clarity of our work. We respectfully ask the reviewer to reconsider their rating, given our revisions, clarifications, and the substantial potential impact of our contributions to the community.
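To make the contrast with Spearman correlation concrete, the two quantities discussed above can be sketched as follows. This is a simplified illustration with synthetic data, not the exact implementation from the paper (which fixes seed 42 and uses 100 bootstrap iterations at the 95% level, as noted in the response to W3):

```python
import numpy as np

def bootstrap_ci(scores, n_boot=100, alpha=0.05, seed=42):
    """Bootstrap confidence interval for a model's mean benchmark score."""
    rng = np.random.default_rng(seed)
    means = [rng.choice(scores, size=len(scores), replace=True).mean()
             for _ in range(n_boot)]
    return np.quantile(means, alpha / 2), np.quantile(means, 1 - alpha / 2)

def separability(per_model_scores, **kw):
    """Fraction of model pairs whose bootstrap CIs do not overlap."""
    cis = [bootstrap_ci(s, **kw) for s in per_model_scores]
    pairs = [(i, j) for i in range(len(cis)) for j in range(i + 1, len(cis))]
    sep = sum(1 for i, j in pairs
              if cis[i][0] > cis[j][1] or cis[j][0] > cis[i][1])
    return sep / len(pairs)

def pair_rank_brier(p_pred, outcome):
    """Brier score for one pair: p_pred is the benchmark's probability that
    model A beats model B; outcome is 1 if A actually ranks higher."""
    return (p_pred - outcome) ** 2
```

Note how a confident 90–10 prediction of a true win (`pair_rank_brier(0.9, 1)`) is rewarded over a hesitant 55–45 one (`pair_rank_brier(0.55, 1)`), even though both preserve the same ordering and are therefore indistinguishable to Spearman correlation.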
Summary: The paper introduces a new method for automatic generation of robust, high-quality benchmarks for evaluating LLMs. The approach is designed with key features: it controls for the style and length of generated content, shows strong alignment with human preference, and can be done in a cost- and time-efficient manner. Extensive experiments show that the generated benchmarks have significant advantages in terms of confidence agreement and separability compared to existing evaluation systems such as AlpacaEval, MTBench, and Chatbot Arena. However, the authors focus their method on open-ended and single-turn tasks. Claims And Evidence: Authors claim that their method automatically generates robust and high-quality benchmarks which: 1. Control for style and length of generation 2. Show strong agreement with human preference 3. Can be created with reduced cost and time Their extensive examinations show strong advantages in confidence agreement and separability (compared to AlpacaEval, MTBench, Chatbot Arena). Their results show that controlling for style and length does help reduce the bias in evaluation. However, the method and experimental results are limited to open-ended tasks where the reference is human preference, which is a limitation since many competitive LLM benchmarks (e.g., olympiad math, legal or medical exams) rely on ground truth. Methods And Evaluation Criteria: As mentioned before, their evaluation criteria make sense. Yet correlation with human preference is a limiting evaluation criterion. Theoretical Claims: NA Experimental Designs Or Analyses: Yes, I checked the validity of their experimental analysis. The experiments and ablations are well-designed and results are carefully reported. However, results are limited to a small set of benchmark datasets. Expanding the experiments to include more difficult datasets would strengthen the paper.
Supplementary Material: Yes, I have reviewed all sections on Brier Score, Controlling for length and style, and prompt examples. Relation To Broader Scientific Literature: Focusing on evaluations, especially creating an automated evaluation framework which: 1. controls for potential biases in evaluations 2. provides a dynamically generated evaluation dataset which can mitigate test-set contamination 3. evaluates models on a diverse set of samples (expanding the test-set distribution) Essential References Not Discussed: I am not aware of such references. Other Strengths And Weaknesses: 1. The choice of Chatbot Arena as a reference for computing confidence agreement and separability is a bit limiting. Does this method extend to more difficult tasks, such as Olympiad-level math or SWEbench, on which SoTA models generally do very poorly? 2. Currently this setup is limited to single-turn evaluation. This is a significant limitation because many LLMs today are used for agentic or multi-turn tasks. Other Comments Or Suggestions: When using the same family of models as the judge and annotator, the judge model usually shows a bias towards generations of the same family of models. Have you studied this? Would controlling for style reduce this bias? Questions For Authors: 1. Given the increasing focus of evaluations on more technical and difficult tasks (like reasoning and coding), how do you think this method extends to such benchmarks? 2. Have you studied if the increased separability is the result of filtering difficult questions to include in the benchmark? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the valuable and insightful feedback. We address your concerns as follows: > "The choice of chatbot arena as a reference for computing confidence agreement and separability is somewhat limiting. Does this method extend to more challenging tasks, such as Olympiad-level mathematics or SWEbench, where state-of-the-art models generally perform poorly?" We would like to clarify that while confidence agreement relies on human preference rankings as a reference, computing separability does not require human preferences or chatbot arena as a reference. Separability effectively captures a benchmark's ability to distinguish among model performances, a critical aspect as state-of-the-art models increasingly demonstrate similar levels of performance. This metric provides meaningful differentiation crucial for model developers. We believe our method indeed extends to more challenging tasks. Our approach successfully extracts difficult tasks from vast unfiltered datasets, such as "PyTorch Autoencoder Implementation" and "Computability and Automata Theory" (see page 18, Figure 6). Here, we show topic clusters specifically ranked for their complexity and desirable quality and the highest scored topic clusters are selected for benchmark curation, suggesting our method aligns well with the current demands for more challenging tasks. We will make sure to clarify this connection in our revision. > "Currently, this setup is limited to single-turn evaluation, which is restrictive, as many contemporary LLM applications involve agentic or multi-turn interactions." We acknowledge that our current method is indeed limited to single-turn evaluations. Multi-turn benchmarks are inherently challenging to automatically curate from crowdsourced datasets because responses beyond the initial turn can depend significantly on prior model outputs. 
Extending Bench-O-Matic to robustly support multi-turn evaluations represents a valuable future direction we intend to pursue. > "When using the same model family for both judge and annotator, the judge model often exhibits bias towards generations from the same family. Have you investigated this issue, and could style control potentially mitigate this bias?" The reviewer raises an important point. We have studied and quantified biases arising from using judge and annotator models from the same family (see Section 6.6: Mitigating Self-Biases in LLM-Based Evaluation). While we have not yet specifically explored whether controlling for style mitigates this bias, we definitely plan to investigate this. > "Given the growing emphasis on evaluating technical and challenging tasks (such as reasoning and coding), how well do you believe your method extends to these benchmarks?" Our analyses indicate that the clusters selected by our approach indeed prioritize more technically demanding tasks, such as reasoning and coding, over trivial ones (as demonstrated in Figure 6 on page 18). For example, clusters involving sophisticated topics score higher, increasing their likelihood of inclusion in benchmarks compared to simpler ones like "Flirty Texting Strategies." Thus, we believe our method aligns strongly with the trend towards evaluating models on more technical and challenging tasks. > "Have you studied whether the increased separability results from filtering for difficult questions within benchmarks?" We directly investigated the impact of filtering for difficult questions on separability in our experiments (see Section 4.3, Figure 3). The results confirm that selecting tasks with higher difficulty indeed enhances the benchmark’s ability to differentiate between stronger and weaker LLMs, reinforcing the benefit of our filtering strategy. --- We deeply appreciate the reviewer’s feedback and hope our responses fully address your concerns.
Summary: This paper introduces Bench-O-Matic, a pipeline that automatically constructs high-quality, large-scale benchmarks to evaluate LLMs from crowdsourced datasets such as Chatbot Arena. To measure the quality of this benchmark, the authors proposed new metrics to measure properties that are important when curating the data. The authors also present Eval-O-Matic (and Wild-O-Matic), which was curated using this pipeline, and demonstrated higher separation of model performance compared to existing benchmarks, and also at a low cost. Claims And Evidence: The authors claim that this pipeline produces benchmarks that are high quality, which is supported by detailed empirical results that show a higher model separation and correlation with human preferences compared to other well-known benchmark datasets used to evaluate LLMs. The analysis of the cost was also provided and supported the claim that the proposed method is cost-effective. However, one large concern is the fact that the experiments rely on the LLM-as-a-Judge evaluation framework, which, as noted in the paper, is known to exhibit certain biases. While the authors have attempted to mitigate some issues, it is still an inherent limitation which could influence the robustness of the results. Methods And Evaluation Criteria: Yes, the methods are suitable for the problem at hand. The novel metrics proposed provide a more nuanced way of evaluating LLMs beyond traditional statistical measures such as the Spearman or Pearson correlations, which I agree allows for a more robust evaluation of LLM performance and its agreement with human preferences. Re benchmark datasets: the experiments compare Eval-O-Matic with popular benchmarks such as MT-Bench and Chatbot Arena, which are suitable for the task. Theoretical Claims: There are no formal proofs of any of the claims in the paper.
The authors provide descriptive explanations of the proposed metrics; these are based on existing statistical foundations and are largely sound. Experimental Designs Or Analyses: The experiments compare Eval-O-Matic to several well-known benchmark datasets, and also test it using the top 20 LLMs from Chatbot Arena, comparing them using both existing statistical metrics and the newly proposed ones. These are thorough designs and do provide compelling empirical results. The measures to prevent LLM-induced bias were appreciated; however, as mentioned above, there are still concerns regarding the limitations of LLM-as-a-Judge evaluations and how robust they are. Supplementary Material: Yes, I reviewed the material in the appendix. It provides additional technical details, evaluation findings, and specific information regarding the implementation of Bench-O-Matic, which support the main paper. Relation To Broader Scientific Literature: The paper contributes to the domain of LLM evaluation and benchmark curation. It aims to extend benchmarking beyond traditional static benchmarks such as MMLU with ground-truth-based evaluation, or even live benchmarks such as Chatbot Arena, by introducing an automated curation method that allows for evaluation on open-ended tasks. By automating this process, Bench-O-Matic also aims to address the existing issue of test-set leakage by being able to frequently update benchmarks. The new evaluation metrics, which supplement traditional statistical metrics, could prove important in measuring properties that existing metrics could not, contributing to the ability to better measure the performance of LLMs in future work. The authors also use methods such as style control, and introduce Ensemble-as-Judges to improve the LLM-as-a-Judge framework, which could help improve its reliability.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Bench-O-Matic offers a scalable, cost-effective, and automated approach to curating benchmarks, making it extremely impactful in practice - The new evaluation metrics introduced provide better evaluation and separability of LLMs, which would be crucial in helping advance future work in LLM development and benchmarking Weaknesses: - Bench-O-Matic is currently rather limited to single-turn, English interactions; the authors acknowledge this limitation, but it does raise questions about the generalization of this pipeline to various real-world applications - The reliance on LLMs in various parts of the pipeline may introduce bias, which I believe has to be further investigated and regulated Other Comments Or Suggestions: N/A Questions For Authors: Prompts are filtered based on “quality scores” produced by an LLM; how was this threshold determined, and do you know how adjusting it could potentially affect the quality? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. Below, we address each concern raised and propose revisions: > "However, one large concern is the fact that the experiments rely on the LLM-as-a-Judge evaluation framework, which, as noted in the paper, is known to exhibit certain biases. While the authors have attempted to mitigate some issues, it is still an inherent limitation which could influence the robustness of the results." We agree that biases inherent to the LLM-as-a-Judge evaluation framework pose challenges. Nevertheless, it remains the most practical and cost-effective method to evaluate LLMs on diverse, open-ended real-user queries. Besides implementing multiple bias-mitigation strategies described in the paper, we highlight that our benchmark demonstrates greater robustness compared to similar LLM-as-a-Judge evaluations, as evidenced by the significantly higher agreement to human preference and stronger separability. > "Bench-O-Matic is currently rather limited to single-turn, English interactions. The authors acknowledge this limitation; however, it does raise questions about the generalization of this pipeline for various real-world applications." We acknowledge that our current method is indeed limited to single-turn, English evaluations. Multi-turn benchmarks are inherently challenging to automatically curate from crowdsourced datasets because responses beyond the initial turn can depend significantly on prior model outputs. The authors are also primarily proficient in only English. However, extending Bench-O-Matic to robustly support multi-turn and multilingual evaluations represents a valuable future direction we intend to pursue. > "The reliance of LLMs as various parts of the pipeline may introduce bias which I believe has to be further investigated and regulated" We recognize the reviewer’s concern regarding potential biases introduced by using LLMs throughout the evaluation pipeline. 
To address this, we validated the quality and reliability of our annotations using majority voting among several state-of-the-art models (GPT-4o, Claude-3-Opus, Gemini-1.5-Pro), achieving an 85.6% agreement with GPT-4-Turbo on 200 prompts in the validation set (as detailed in Section 4.2). After reading your review, we additionally conducted human validations by manually annotating 50 prompts from our validation set. These showed an 84.3% agreement between GPT-4-Turbo and human annotations, and notably, a 94.9% agreement between human annotations and majority LLM votes. Due to OpenReview's policy restricting updates during the discussion phase, these important validation results will be fully detailed in the final manuscript revision to enhance Bench-O-Matic's reproducibility and credibility (Section 4.2). > "Prompts are filtered based on 'quality scores' produced from an LLM. How was this threshold determined, and do you know how adjusting this could potentially affect the quality?" To clarify, we did not rely on LLM-generated numeric "quality scores." Instead, the LLM was instructed to provide binary judgments indicating whether each prompt met specific qualitative criteria (e.g., “Problem-Solving,” “Domain Knowledge”). The prompts and clusters with most of the criteria satisfied are selected for benchmark curation. We will clarify this distinction in Section 4.1 of our revision. Furthermore, our ablation analysis in Section 4.3 (Figure 3) demonstrates that including more qualitative criteria enhances the differentiation between stronger and weaker models (e.g., GPT-4 vs. Llama-2-70B, Claude-Opus vs. Claude-Sonnet), which explains the enhanced separability of our final benchmarks (e.g. Eval-o-Matic, Wild-o-Matic) compared to other popular benchmarks. We will make sure to clarify this connection in our revisions. --- We greatly appreciate the reviewer’s constructive feedback, which improves the quality and clarity of our work.
We respectfully ask the reviewer to reconsider their rating, given our revisions, clarifications, and the substantial potential impact of our contributions to the community.
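The binary-criteria selection described in the response above can be sketched as follows. This is a toy illustration only: the criterion names, the stand-in judge, and the threshold are hypothetical placeholders for the actual LLM calls and criteria used in the pipeline:

```python
# Seven illustrative quality criteria (placeholder names for the
# qualitative criteria the LLM is asked to judge).
CRITERIA = ["specificity", "domain_knowledge", "complexity",
            "problem_solving", "creativity", "technical_accuracy",
            "real_world_application"]

def select_prompts(prompts, judge, min_satisfied=6):
    """Keep prompts for which the judge affirms >= min_satisfied criteria.

    `judge(prompt, criterion)` stands in for an LLM call that returns a
    binary (True/False) judgment, as described in the rebuttal.
    """
    selected = []
    for p in prompts:
        n_yes = sum(bool(judge(p, c)) for c in CRITERIA)
        if n_yes >= min_satisfied:
            selected.append((p, n_yes))
    return selected
```

Selection here is driven by counts of satisfied binary criteria rather than a threshold on a single numeric score, matching the clarification that no LLM-generated "quality score" is involved.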
Summary: This paper introduces Bench-O-Matic, an automated pipeline for curating high-quality benchmarks from large-scale crowdsourced datasets, and Eval-O-Matic, a benchmark dataset generated using this pipeline. The motivation is that existing benchmarks are either static (leading to saturation and test-set leakage) or require expensive human curation. Bench-O-Matic extracts prompts from datasets like Chatbot Arena and WildChat-1M, applying seven quality criteria (e.g., specificity, domain knowledge, problem-solving complexity) to filter high-quality prompts. The resulting benchmark, Eval-O-Matic, achieves 3× better model separation than MT-Bench and 98.6% correlation with human preference rankings at a fraction of the cost ($20 per evaluation). The paper also introduces new benchmark evaluation metrics—Separability with Confidence, Agreement with Confidence, and Pair Rank Brier Score—to assess benchmark effectiveness. Claims And Evidence: The claim that Bench-O-Matic extracts high-quality prompts aligned with human preferences is supported by Eval-O-Matic’s rankings showing 98.6% correlation with Chatbot Arena, which reflects real user interactions. However, this assumes Chatbot Arena rankings are a gold standard, whereas prior work (Carlini et al., 2021) suggests human preference data may contain inconsistencies and biases. The authors should evaluate how much these factors impact benchmark quality. The authors argue that existing metrics (e.g., Spearman correlation) fail to measure model separation and propose confidence-based alternatives. This assumes that LLM judges are unbiased. While the paper introduces ensemble-based methods to mitigate bias, it does not analyze failure cases where LLM judges systematically misrank models. Methods And Evaluation Criteria: Benchmark Design: The hierarchical clustering method (BERTopic, UMAP, HDBSCAN) for grouping prompts is reasonable, but the paper lacks qualitative validation of cluster quality.
Comparison with Prior Work: The evaluation compares Eval-O-Matic with MT-Bench and AlpacaEval, but does not include LiveBench or R2E, which also focus on dynamic benchmarking. Theoretical Claims: No formal proofs, but the paper’s proposed metrics (Separability with Confidence, etc.) are well-motivated. One potential issue is with the bootstrapping methods used for confidence estimation, since the statistical robustness of these methods in this setting is not fully analyzed. Experimental Designs Or Analyses: - The clustering-based filtering approach ensures high-quality prompts, but the paper lacks qualitative analysis of outliers and failure cases. - Several ablations are performed, like different LLM judges, controlling for stylistic biases, and testing on alternative datasets. - But the paper seems to be missing some failure analysis, like discussion of when and why low-quality prompts are selected. - A minor assumption is that LLM costs remain stable, though API pricing may change. Supplementary Material: I reviewed the additional experiments, e.g. style-controlled comparisons. Relation To Broader Scientific Literature: The paper builds on MT-Bench, AlpacaEval, and Chatbot Arena but improves by introducing an automated pipeline. This work is similar to LiveBench and R2E, though Bench-O-Matic focuses on prompt curation rather than live model evaluation. Prior work has shown LLM-based evaluation correlates with human judgments, but the failure cases of LLM judges (e.g., hallucinations, self-reinforcement biases) are underexplored. Essential References Not Discussed: None that I'm aware of. Other Strengths And Weaknesses: Other weaknesses: - The pipeline is tested only on Chatbot Arena and WildChat-1M, with no evaluation on scientific, legal, or programming benchmarks. - The authors use ensemble LLM judges but do not analyze cases where LLMs systematically misrank models.
- The paper does not discuss when Bench-O-Matic selects poor-quality prompts or how often prompt selection fails. Other Comments Or Suggestions: Including more qualitative examples of prompts selected by Bench-O-Matic would be helpful. Questions For Authors: 1. How does Bench-O-Matic handle adversarial noise? Crowdsourced datasets may contain low-quality or adversarial prompts—how does your system filter these? 2. What is the impact of training data contamination? If LLMs are trained on Chatbot Arena-style queries, could this inflate correlation scores? 3. Why not compare against LiveBench or R2E? These benchmarks also focus on dynamic evaluation—how does Bench-O-Matic differ? 4. How would the method adapt to multimodal benchmarks? Can Bench-O-Matic curate image- or video-based prompts? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the thoughtful and constructive feedback. We address your concerns and propose corresponding revisions: **W1**: We do not treat Chatbot Arena’s user-vote-based rankings as an unquestionable “gold standard.” Instead, we and other works (e.g., Dubois et al., Lin et al.) view it as a practical "silver standard" worth approximating. Although Chatbot Arena likely contains biases and noise, it currently represents the largest publicly available dataset of human evaluations, with over two million interactions. Hence, benchmarks demonstrating high agreement and correlation with Chatbot Arena rankings provide valuable guidance to model developers. We will include this in our revisions. **W2**: Our critique of Spearman correlation does not depend on assuming unbiased LLM judges. Rather, confidence-based metrics provide clarity on two critical questions: 1. When benchmarks predict model performance, do these predictions align with actual user preferences? 2. Can benchmarks effectively distinguish between similar model checkpoints? If LLM judges exhibit systematic biases, this would naturally result in lower confidence, agreement, and correlation metrics. Conversely, if our evaluation demonstrates high confidence, agreement, correlation, and separability, it affirms the benchmark's usefulness. We acknowledge potential biases inherent to LLM-based evaluations and address them through multiple strategies: - **Style Control**: As detailed in Section 6.5, our methods significantly reduce biases toward particular styles or longer responses, improving benchmark alignment with human preference rankings. - **Bias Diagnostics**: We systematically compare judge rankings with human leaderboards across multiple scenarios, using metrics like confidence agreement to quantify biases and misalignments. We agree that detailed exploration of systematic biases is important and plan to expand our analysis. 
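As an illustrative sketch of the separability check discussed above (a plain percentile bootstrap with made-up names, not the actual implementation): two models are treated as "separable with confidence" when the bootstrap confidence intervals of their per-battle win rates do not overlap.

```python
import random

def bootstrap_ci(scores, n_boot=1000, alpha=0.05, seed=0):
    """Percentile-bootstrap confidence interval for the mean of
    per-battle scores (1 = win, 0 = loss)."""
    rng = random.Random(seed)
    n = len(scores)
    means = sorted(
        sum(rng.choice(scores) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi

def separable(scores_a, scores_b):
    """Models A and B are separable with confidence if their
    bootstrap confidence intervals do not overlap."""
    lo_a, hi_a = bootstrap_ci(scores_a, seed=0)
    lo_b, hi_b = bootstrap_ci(scores_b, seed=1)
    return hi_a < lo_b or hi_b < lo_a
```

A benchmark's separability can then be summarized as the fraction of model pairs that pass this check; the Brier-score and agreement metrics build on the same bootstrap samples.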
**W3**: Our primary focus was comparing Eval-O-Matic with benchmarks like MT-Bench and AlpacaEval, which similarly utilize LLM-based judges for open-ended tasks without ground-truth references (instead of LiveBench, for example). While recognizing that LLM-as-a-Judge inherently introduces challenges, this method remains the most scalable and affordable approach for model developers on open-ended prompts. **W4**: We agree with the reviewer regarding the presence of low-quality and unsafe prompts within crowdsourced datasets. In Section 4.1 (line 196), we explicitly describe our process for filtering prompts and retaining only high-quality clusters based on average quality scores. Figures 4 and 6 (Section 4.3) illustrate how clusters are differentiated qualitatively, with higher scores correlating with complexity and relevance, while lower scores correspond to trivial or ambiguous prompts. Further details and examples are provided in Appendix C. Moreover, our ablation study (Section 4.3, Figure 3) demonstrates how qualitative criteria effectively differentiate strong and weak models, explaining the enhanced separability of the benchmarks Bench-O-Matic produces compared to other benchmarks. We will clarify these points further in the revision. **W5**: We agree with the importance of evaluating benchmarks in specialized domains (scientific, legal, programming). However, our core contribution specifically targets diverse, real-world, open-ended interactions. Benchmarks such as Chatbot Arena, MT-Bench, and AlpacaEval are widely used precisely because they reflect authentic user interactions. Hence, Eval-O-Matic remains valuable for assessing practical, downstream performance. Nevertheless, we recognize this limitation and will clearly outline it in our manuscript. **W6**: We have validated Bench-O-Matic’s performance beyond Chatbot Arena by also evaluating on WildChat-1M. We also ensured that no Eval-O-Matic prompts overlap with publicly released Chatbot Arena data. 
If a model trained on similar queries generally improves at addressing real-world interactions, we anticipate performance gains across both Eval-O-Matic and actual deployment settings. **W7**: While our current framework focuses exclusively on text-based evaluations, it could be adapted to multimodal benchmarks by substituting LLM annotators and evaluators with VLMs. Indeed, Chou et al. have successfully applied a similar approach to the Vision Arena dataset, demonstrating the feasibility of extending Bench-O-Matic’s principles to image-based tasks. Chou et al. "VisionArena: 230K Real World User-VLM Conversations with Preference Labels." Lin et al. "Wildbench: Benchmarking llms with challenging tasks from real users in the wild." --- We greatly appreciate the reviewer’s constructive feedback, which significantly enhances the quality and clarity of our work. We respectfully ask the reviewer to reconsider their rating, given our revisions, clarifications, and the substantial potential impact of our contributions to the community.
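To make the cluster-level filtering described in W4 concrete, here is a minimal sketch (illustrative names and threshold, not the actual pipeline code): prompts are scored, grouped by cluster, and only clusters whose mean quality score clears a threshold are retained.

```python
def filter_clusters(clusters, min_avg_score=5.0):
    """Keep only clusters whose mean prompt-quality score clears the bar.

    `clusters` maps cluster_id -> list of (prompt, score) pairs, where
    `score` could be e.g. how many of the seven quality criteria an
    LLM judge assigned to the prompt (0-7).
    """
    kept = {}
    for cluster_id, items in clusters.items():
        avg = sum(score for _, score in items) / len(items)
        if avg >= min_avg_score:
            kept[cluster_id] = [prompt for prompt, _ in items]
    return kept
```

In a real pipeline the scores would come from an LLM judge and the cluster assignments from topic modeling; both are stubbed here to isolate the averaging-and-thresholding step.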
VerbalTS: Generating Time Series from Texts
Accept (poster)
Summary: The paper introduces VERBALTS, a novel diffusion‐based framework that generates time series from unstructured textual descriptions rather than structured metadata. The authors argue that traditional time series synthesis is limited by the reliance on structured, expert‐annotated conditions and that text provides richer, sample‐specific control. VERBALTS leverages a multi‐focal text processor and multi‐view noise estimator to capture the hierarchical and multiresolution aspects of both text and time series. The approach is evaluated on two synthetic and (~four) real‐world datasets. ## Update after Rebuttal I thank the authors again. The authors have mostly addressed my concerns. Unfortunately, the data pipeline is not included in the code, which is one of the key contributions of this work. > While directly adopting vision models is feasible like PatchTST in forecasting, later works [1,2] indicate large improvement space. To our knowledge, our VerbalTS is the first study on text-to-ts generation. We believe further exploration beyond adopting vision models could lead to significant advancements on this novel task, and we hope our work may shed some light on it This response mostly satisfies my question. However, I’m not sure if it is truly the first study on text-to-time-series generation—especially given the substantial body of research focused on generating trajectories for human avatars (the trajectories/time series of the joints). While I did not mention this in the first round, upon further reflection, I believe it’s important to acknowledge this line of work. The authors should consider including this in an updated version of the paper. Tevet, Guy, et al. "Human motion diffusion model." ICLR 2023. Shafir, Yonatan, et al. "Human motion diffusion as a generative prior." ICLR 2024. I will maintain my score. Claims And Evidence: The entire paper is based on the claim that we cannot simply use image diffusion‐based methods such as Song et al. 
(2021) because "time series data exhibit multivariate characteristics and complex temporal dependencies (Torres et al., 2021; Wu et al., 2021), which fundamentally differ from the spatial structures typically encountered in image generation tasks." While I agree to a certain extent, later methods like PatchTST (Nie et al. ICLR 2023) have shown that one can essentially assume variate independence for time series‐based problems and utilize an off‐the‐shelf (vision) transformer; thus, simpler adaptations might be feasible. In addition, the authors use a text‐to‐image technique (Saharia et al. NIPS 2022) as a basis for their architecture, which partially contradicts their argument. If the authors pursue this claim further, they need stronger evidence. Otherwise, I think the remaining claims and research questions are properly backed and linked with evidence. The quantitative performance is measured in FID, J-FTSD, and CTTP, along with ablation studies that underscore the benefits of their framework. Methods And Evaluation Criteria: The proposed CTTP metric is an interesting approach for measuring the semantic alignment between generated time series and the input text. Creating a new metric is not scientifically hard, as long as it is clear what is being measured and domain experts other than the authors validate it. While the definition of the metric is clear, the paper does not provide sufficient external validation or experiments to ensure that CTTP truly captures the intended properties. More insight into its calibration would strengthen the evaluation. The authors use two entirely synthetic, two real-world, and two augmented real-world datasets, which support their findings. 
Theoretical Claims: The authors use a custom-tailored element-wise multiplication and addition operation, which is further discussed in Appendix B.3. Experimental Designs Or Analyses: I think the experiments are in general sound and valid; however, as I mentioned before, they are partially based on CTTP, which in my opinion needs additional evidence (at least in the appendix). Supplementary Material: Yes, I partially checked the dataset generation part, especially how the real-world datasets ETT and Traffic were augmented. Relation To Broader Scientific Literature: They build upon generating time series in order to address data scarcity for certain situations (Narasimhan et al., 2024, Jing et al., 2024a). The architecture itself is based on a text-to-image method (Saharia et al. NIPS 2022). Essential References Not Discussed: The authors missed relevant time series modeling work that utilizes hierarchical/multiresolution techniques in their method—specifically, TS2Vec (Yue et al., AAAI 2021) and TimeMixer++ (Wang et al., ICLR 2025). Furthermore, they overlooked recent multimodal work based on LLMs, such as PromptCAST (Xue et al., TKDE 2023) and AutoTimes (Liu et al., NIPS 2024). Including these references would provide a more complete picture. Other Strengths And Weaknesses: ## Strengths: - S1. The integration of unstructured text for fine-grained control is novel for time series generation, offering an exciting approach to bypass the limitations of structured metadata. - S2. The paper is well written, and the research questions are clearly stated and effectively linked to the experimental findings, which helps guide the reader through the contributions. - S3. The augmented datasets and the detailed data pipeline presented in the work are valuable contributions for the time series community, as they address real-world data scarcity issues and offer new benchmarks for evaluation. ## Weaknesses: - W1. 
Figure 2 is confusing: the different colors and color mixtures, as well as the spatial ordering, are very tough to get through, or at the very least not adequately explained. This makes it challenging to quickly grasp the key insights when skimming through the paper. - W2. Although the augmented datasets and data pipeline are significant contributions, a major drawback is that neither the model code nor the modified datasets are publicly shared. This limits reproducibility and hinders further research based on the work. Other Comments Or Suggestions: This is not really an issue, but rather a suggestion: there is an inconsistency in the color scheme across different plots, which is distracting and hampers the overall readability. A unified and clearly explained color scheme would greatly improve the visual presentation. E.g., why are both the titles and the time series in Figure 1 blue, and why do Figures 3–6 all look different? Questions For Authors: 1. Given the slower reverse diffusion process, what strategies do you have in mind for adapting VERBALTS to real-time or large-scale applications? 2. How sensitive is the performance of VERBALTS to the specific parameter choices, especially under noisy text conditions? Did you observe any situation where the performance greatly decreased? 3. Could you provide more evidence for your proposed CTTP Metric? 4. Would you mind sharing the code? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your recognition and valuable suggestions! > **Claims**: The paper's method contradicts the claim that one can't simply use image diffusion‐based methods. The reviewer also thinks simpler adaptations might be feasible for text-to-ts (time series) generation. We apologize for any ambiguity. Our statement in Sec. 1—"generation framework based on diffusion models (Saharia et al., 2022) for text-to-ts generation"—refers to the general diffusion framework (Sec. 3.2), not a specific vision model. Our method introduces many innovations upon the vanilla diffusion model, including a multi-view noise estimator, a multi-focal text processor and multi-modality semantic alignment (Sec. 4) to address challenges in text-to-ts generation. We'll clarify this in the revised paper. While directly adopting vision models is feasible, like PatchTST in forecasting, later works [1,2] indicate large improvement space. To our knowledge, our VerbalTS is the first study on text-to-ts generation. We believe further exploration beyond adopting vision models could lead to significant advancements on this novel task, and we hope our work may shed some light on it. [1] iTransformer: Inverted Transformers Are Effective for Time Series Forecasting. ICLR 2024. [2] ElasTST: Towards Robust Varied-Horizon Forecasting with Elastic Time-Series Transformer. NeurIPS 2024. > **Evaluation**: More evidence of the CTTP's reliability is needed. Thanks for the suggestion! We recognize the importance of metric reliability. Following contrastive learning practices [1], we evaluate the CTTP model using a retrieval-based protocol. For each time series in a random batch (B=32), we compute the top-1 accuracy of retrieving its paired text from the B candidates. Results (https://files.catbox.moe/zf1a6s.png) show the effective semantic alignment ability of the CTTP model. We’ll include this in the revised paper. [1] Learning Transferable Visual Models From Natural Language Supervision. ICML 2021. 
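The retrieval-based protocol described above can be sketched as follows (a simplified illustration assuming precomputed, row-aligned embedding matrices; the function and variable names are ours, not the paper's code):

```python
import numpy as np

def top1_retrieval_accuracy(ts_emb, text_emb):
    """For each time-series embedding (row i of ts_emb), retrieve the
    most cosine-similar text embedding and check that it is the paired
    one (row i of text_emb). Returns the fraction of correct matches."""
    ts = ts_emb / np.linalg.norm(ts_emb, axis=1, keepdims=True)
    tx = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    sim = ts @ tx.T                       # (B, B) cosine-similarity matrix
    pred = sim.argmax(axis=1)             # retrieved text index per series
    return (pred == np.arange(len(ts))).mean()
```

With B=32 candidates per batch, a random embedder would score around 1/32 ≈ 3%, so accuracies far above that indicate genuine cross-modal alignment.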
> **Reference**: Missing related works: TS2Vec, TimeMixer, PromptCAST and AutoTimes. Thanks for the comments! + TS2Vec and TimeMixer++ only capture multi-scale patterns in time series, while our work tackles them in both the text and time series modalities with cross-modal interactions. + Unlike LLM-based models (e.g., PromptCAST, AutoTimes) that use coarse-grained prompts to assist forecasting and QA, we leverage sample-specific text for fine-grained generation. We’ll include these works in the revised paper. > **W1**: Fig. 2 is confusing. We apologize for any confusion. To distinguish the three views and the information within them, we use different colors for variables, and color intensities for temporal resolutions. We'll follow other works like [1] to simplify the color scheme and clarify the data flow for better readability. [1] Crossformer: Transformer Utilizing Cross-Dimension Dependency for Multivariate Time Series Forecasting. ICLR 2023. > **W2**: The datasets and code are not provided. Please check this link for the code: https://anonymous.4open.science/r/VerbalTS-E5BC. As stated in Sec. 5 of our paper, we plan to release all the datasets and models upon the acceptance of the paper. > **Suggestion**: A unified and clearly explained color scheme should be considered, e.g., in Fig. 1. Thanks! For Fig. 1, since we highlighted two different advantages of text in time series generation, we used two colors in it. We will refine Figs. 1 and 3-6 by unifying the legend style, marker sizes, and other visual elements to improve readability and consistency. > **Q1**: Any strategies for adapting VerbalTS to real-time or large-scale applications? It's a great point! The autoregressive generation of the reverse diffusion process is an efficiency challenge in practice. To address this, we have utilized DDIM [1], whose deterministic non-Markov diffusion process brings significant acceleration. 
Further improvements on diffusion efficiency include compressing the reverse steps via knowledge distillation [2] or performing diffusion in the latent space [3]. We plan to explore these to make our method more practical. [1] Denoising Diffusion Implicit Models. ICLR 2021. [2] Progressive Distillation for Fast Sampling of Diffusion Models. ICLR 2022. [3] High-Resolution Image Synthesis With Latent Diffusion Models. CVPR 2022. > **Q2**: How sensitive is VerbalTS to the specific parameters? Any situation where the performance greatly decreased? Thanks for the comment. We added a sensitivity study on the Synth-M dataset to evaluate the impact of the multi-resolution number $R$ and diffusion stage number $S$, with results shown in https://files.catbox.moe/n6v94k.png. It suggests that, beyond a certain threshold, the benefits of multi-resolution and multi-stage modeling become apparent. We plan to extend this analysis to more datasets and revise the paper accordingly. > **Q3**: More evidence for the proposed CTTP metric. See the response to Evaluation. > **Q4**: Would you mind sharing the code? Not at all! See the response to W2.
Summary: This paper proposes VerbalTS, a novel framework for generating time series from unstructured textual descriptions. VerbalTS employs a multi-focal alignment and generation framework that effectively models their complex relationships. Empirical evaluations demonstrate the benefits of VerbalTS in generation quality compared to existing baselines. Claims And Evidence: Generally fine. My only concern is whether the evaluation size supports the claims in the "Extended Analysis". Methods And Evaluation Criteria: Generally fine. Please see the concerns in the experimental design and question sections. Theoretical Claims: N/A Experimental Designs Or Analyses: Checked all experiments; generally fine. For the "Extended Analysis", the conclusion may not be solid enough, as the total set of evaluation datasets is not big, and these findings are based on a subset of the used datasets (e.g., Syn-M or Weather), which may not be strong enough to support the claim. For further concerns, please check the question section. Supplementary Material: Appendix Relation To Broader Scientific Literature: This work in time-series generation shows effectiveness in multiple scientific domains, as used in the evaluation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Minor suggestions: 1. The ablation study could better use w/o $A^s$, w/o $A^t$, and w/o $A^d$, to understand how each factor contributes to the improvements. 2. It would be better to have some qualitative results or case studies that compare the generation difference between VerbalTS and baselines to understand the strength of the proposed method better. Questions For Authors: 1. The ablation study results show that even w/o $A^s$, $A^t$, and $A^d$, the generation results still outperform the baselines, e.g., on the Weather dataset. Could the authors explain this phenomenon, as it looks like the improvements come not exactly from the proposed model but from the textual information. 2. 
Following up on my first question, I want to understand whether the evaluations are fair to the baselines. Will generation from text give more information than other conditions? I am particularly worried given that the textual information itself is generated from the original time series. Does that mean that if you give specific enough textual descriptions of the data that come from the data itself, it can somehow perfectly reconstruct the data? And should the paper compare with text-conditioned generation methods, if any? 3. In the Table 1 results, ETTm1 and Traffic are known as multivariate time series datasets. Why do the evaluations on them show only univariate settings? It would be better to show both univariate and multivariate results on all four real-world datasets to show that the results are not cherry-picked. 4. Instead of using the generated text of ETT and the other datasets, why not consider using more practical textual time-series pairs, such as doctors' notes and medical time series, or several recent multi-modal time-series datasets (TimeMMD, ChatTime, etc.)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the comments and suggestions! > **Exp**: The Extended Analysis covers a subset of the used datasets. Thanks! The used datasets represent both synthetic and real-world datasets, showcasing diverse data coverage. Per your suggestion, we conducted additional experiments on the other datasets, except those analyses requiring a grounded generation function available only in synthetic datasets. The results (https://files.catbox.moe/dmmpmt.png) align with the findings in Sec. 5.4 of our paper. We'll revise the paper accordingly. > **Suggestion 1**: The ablation study could better use w/o $A^s$, w/o $A^t$ and w/o $A^d$. Per your request, we conducted an additional study by ablating each operation of the multi-focal text processing operators respectively, in https://files.catbox.moe/965rm7.png. The results indicate that all three operations in our method contribute to performance improvement, though the functions of different modules exhibit a certain degree of overlap in some datasets. Combined with the ablation study shown in Table 2 of the paper, each operation plays a significant role in our method. We'll revise the paper accordingly. > **Suggestion 2**: Better to show qualitative results or case studies. Thanks! We show some case studies of different methods in https://files.catbox.moe/1910op.png and will add this in the revision. They demonstrate that our VerbalTS better aligns with the semantic information, capturing nuanced details more accurately. > **Q1**: The ablation study shows that even w/o $A^s$, $A^t$ and $A^d$, the generation still outperforms baselines. Why? Thanks for the point. As discussed in Fig. 1, Secs. 1 & 4.2 and Finding 1 in Sec. 5.4, this improvement comes from more flexibly expressed textual conditions with nuanced time series details, highlighting the overlooked advantage of text-based conditional time series generation, as most studies focus on structured conditions like attributes or class labels, e.g., [1,2]. 
Besides, our method further enhances performance (as in Tabs. 1-2, Finding 2, Fig. 4), showing potential improvements via better leveraging unstructured text in generation, indicating a promising direction for multi-modal time series research. > **Q2.1**: Will generation from text give more information than other conditions? Yes, *unstructured* textual conditions may provide flexibly expressed and nuanced details that *structured* conditions cannot capture (discussed in Sec. 1). Yet, we believe the comparison is fair, as almost all existing works rely on structured conditions. To our knowledge, this is the first study highlighting the impact of unstructured conditions on time series generation (Finding 1) and formally addressing this problem, as also recognized by Reviewer Gv7f. Our approach also handles the noise in text well (Finding 2) with enhanced multi-modal semantic alignment (Findings 3–5). > **Q2.2**: If given textual descriptions of the data, derived from the data itself, that are specific enough, can it perfectly reconstruct the data? Should the paper compare with text-conditioned generation methods, if any? It's a great point! However, as empirically observed, *perfectly* reconstructing a time series from text remains challenging, as it's nearly impossible to describe a time series perfectly. Yet, it's feasible to model the distributional alignment between the text and time series modalities, which is worth further exploration. To our knowledge, our work is the first on generating time series from *unstructured* texts; few works have attempted this direction. Thus, we only compared with conditional generation methods based on *structured* information (e.g., metadata or attributes). > **Q3**: In Tab. 1, ETTm1 and Traffic are multivariate. Why show only the univariate setting? This setting depends on the dataset's textual conditioning property. 
For variate-specific textual descriptions (e.g., the variate-specific text-augmented datasets ETTm1 and Traffic), we treat each variate independently to assess granular generation performance, following [1,2]. In contrast, for texts describing multiple variates collectively (e.g., the real-world datasets Weather and BlindWays), the multivariate time series is treated as a whole sample. > **Q4**: More practical textual time-series pairs, like the recent TimeMMD and ChatTime? Thanks! Besides the four real-world datasets used, more diverse datasets could further strengthen our experiments. Limited by the rebuttal time, we conducted experiments on the Environment dataset in TimeMMD, its largest subset. The results (https://files.catbox.moe/uf3vs2.png) align with our original conclusion. Our focus is on fine-grained time series generation, so datasets containing only dataset-level descriptions, like ChatTime, are omitted. We'll add these in the revised paper and continue incorporating more time series datasets. [1] Time Weaver: A Conditional Time Series Generation Model. ICML 2024. [2] Towards editing time series. NeurIPS 2024.
Summary: This paper studies time series generation, specifically generating time series from text. It proposes a method named VerbalTS, which employs a multi-focal alignment and generation framework to effectively model the complex relationship between the two modalities. Claims And Evidence: NA Methods And Evaluation Criteria: I think the main issue is that the authors omit several straightforward yet crucial baselines: directly using LLMs for time series generation, as well as employing visualizations for iterative revision. From my experience, the revision-based approach already achieves good generation performance. I suggest the authors incorporate multiple LLMs along with a simple iterative refinement mechanism as additional baselines. Theoretical Claims: NA Experimental Designs Or Analyses: NA Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: No code. I appreciate the authors' effort, and I believe time series generation is an interesting yet underexplored problem. Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments. > **Evaluation**: I think the main issue is that the authors omit several straightforward yet crucial baselines: directly using LLMs for time series generation, as well as employing visualizations for iterative revision. From my experience, the revision-based approach already achieves good generation performance. I suggest the authors incorporate multiple LLMs along with a simple iterative refinement mechanism as additional baselines. Thank you for your suggestions and great idea! **On LLM utilization for time series generation**: If our understanding is correct, your suggestion proposes leveraging large language models (LLMs) for time series generation with an iterative refinement mechanism, which is indeed an interesting idea. However, applying LLMs to time series generation presents several non-trivial challenges: + Understanding time series with LLMs remains an open problem [1]. As discussed in [1], fundamental reconsiderations are required for both the technical foundations of time series modeling and the associated evaluation and benchmarking methodologies. + Bridging the gap between text and time series modalities is challenging. LLMs are inherently designed for *discrete* text inputs and outputs, whereas time series data consists of *continuous* real-valued sequences. This discrepancy poses significant challenges in encoding and decoding time series using LLMs [2]. Our work introduces a novel approach to achieving fine-grained *semantic alignment* between text and time series, going beyond token-level reprogramming [2,3]. + Limited paired text–time series data hinders progress in this direction. Few existing works explore general time series generation conditioned on unstructured textual information due to the scarcity of paired datasets. 
However, our work proposes a novel method to augment existing time series data with textual descriptions, a contribution also acknowledged in Strength 3 by Reviewer Gv7f. While leveraging LLMs for time series generation is not straightforward, it remains a promising avenue for exploration. Extending LLMs to model, understand, and generate time series beyond their traditional text-processing capabilities is an exciting research direction. **On the iterative refinement mechanism**: Your insight regarding the iterative refinement mechanism in the generation process is highly valuable! In our method, the multiple denoising steps in the reverse diffusion process naturally align with the concept of iterative refinement. Specifically, as described in Equation (3) of our paper, the model progressively refines the generated time series by iteratively denoising it at each step. The process begins with pure Gaussian noise and gradually converges to the fully generated time series. This autoregressive denoising mechanism closely resembles iterative refinement. A detailed explanation can be found in Section 3.2 of our paper. Overall, your proposed idea is highly intriguing. Exploring how multiple LLMs could iteratively generate time series is a promising research direction, and we plan to conduct more in-depth investigations and experiments in the future. [1] Are Language Models Actually Useful for Time Series Forecasting? NeurIPS 2024. [2] S2IP-LLM: Semantic Space Informed Prompt Learning with LLM for Time Series Forecasting. ICML 2024. [3] Time-LLM: Time Series Forecasting by Reprogramming Large Language Models. ICLR 2024. > **W1**: no code. Thanks for your suggestion. You can find the details of our code at the following link: https://anonymous.4open.science/r/VerbalTS-E5BC As stated in Section 5 of our paper, we plan to release all datasets and models upon the paper's acceptance.
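For reference, the iterative-refinement view of reverse diffusion mentioned above can be sketched as a standard DDPM-style sampling loop (a textbook illustration under generic assumptions, not the paper's exact Equation (3); `eps_model` is a placeholder noise predictor):

```python
import numpy as np

def reverse_diffusion(eps_model, shape, betas, seed=0):
    """Generic DDPM-style sampling: start from pure Gaussian noise and
    iteratively denoise — each step refines the previous estimate."""
    rng = np.random.default_rng(seed)
    alphas = 1.0 - betas
    alpha_bar = np.cumprod(alphas)
    x = rng.standard_normal(shape)                   # x_T ~ N(0, I)
    for t in range(len(betas) - 1, -1, -1):
        eps = eps_model(x, t)                        # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise         # x_{t-1}
    return x
```

DDIM-style acceleration, discussed in the other rebuttal thread, replaces the stochastic per-step update with a deterministic one over a shorter subsequence of steps, trading off sampling diversity for speed.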
Summary: This paper presents a new task of generating time series data from unstructured text and introduces VERBALTS, a method that combines a multi-view time series noise estimator with a multi-focal text processor. Additionally, it establishes a new benchmark featuring multi-faceted time series datasets enriched with textual information. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: This paper does not contain any theoretical discussions or claims. Experimental Designs Or Analyses: The experimental designs are reasonable and comprehensive. Supplementary Material: This paper has not uploaded the code as supplementary material. Relation To Broader Scientific Literature: The ideas presented could inspire further research into time series generation. Essential References Not Discussed: I think this paper has included all essential references in this research area. Other Strengths And Weaknesses: Strengths: 1. This paper introduces an interesting task of generating time series data from unstructured text and provides a corresponding benchmark. 2. Extensive experiments demonstrate that the proposed method achieves good performance. Weaknesses: The key aspect of this paper appears to be the alignment between text and time series data. However, the explanation of this process is unclear. It is not evident how different views and varying scales of time series data are aligned with the textual information. Additionally, the experimental analysis lacks an in-depth discussion of this critical aspect. Other Comments Or Suggestions: please refer to weaknesses Questions For Authors: please refer to weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your valuable comments! > **W1**: The key aspect of this paper appears to be the alignment between text and time series data. However, the explanation of this process is unclear. It is not evident how different views and varying scales of time series data are aligned with the textual information. Additionally, the experimental analysis lacks an in-depth discussion on this critical aspect. In our article, we propose aligning time series and text descriptions from three views: the temporal view, the spatial view, and the diffusion view. Below we briefly explain these aspects with (i) motivation, (ii) technical approach, and (iii) experimental evidence to make it clearer. (i) **Motivation**. This design of multi-view modeling and generation builds upon the findings from existing studies and empirical observations: + From the temporal view, prior research [1] has shown that the influence of structured conditions such as attributes on time series generation spans multiple temporal scales, as discussed in Section 4.1 of our paper. Similarly, in Section 5.4, we observe that the control of unstructured textual information over time series also operates across different time ranges and scales. For example, descriptions of trend direction influence the overall rise or fall of the time series, while descriptions of local peaks determine finer morphological details. + From the spatial view, we analyze the impact of text on different variables within the time series and find that their influence varies across variables, as illustrated in Section 5.4. For instance, in motion generation, a description of "walking" predominantly influences the motion trajectory related to leg joint variables. + From the diffusion view, many studies, such as [2,3], have found that models at different diffusion steps capture information at varying granularities. 
For example, the early stages of denoising primarily shape the overall structure, while the later stages refine the details. We provide a comprehensive discussion of the motivation for our method in Sections 4.1 and 4.2. We hope the above helps clarify the motivation of our study. (ii) **Approach**. Based on the motivation mentioned above, we propose to achieve alignment between time series and textual information across these three views: + We first model the time series considering the three views. Specifically, we adopt a multi-resolution representation for the temporal view, employ spatial attention for the spatial view, and divide diffusion steps into multiple stages for the diffusion view. These are discussed in Section 4.1. + Next, we reprogram text descriptions into representations that correspond to these different views using a multi-focal text processor, as detailed in Section 4.2. + Finally, we align the multi-view text representations with the corresponding time series parts using a semantic adapter, as described in Section 4.3. We will keep refining the description of our paper to reflect the above summary. (iii) **Experimental Evidence**. In Section 5, we first posed three research questions and carried out experimental analyses accordingly. Our method tackles the challenges of text and time series alignment and has significantly improved text-conditional time series generation, as shown in Section 5 and detailed below: + In Section 5.3, the ablation study on the three views of text representation shows that incorporating multi-view alignment between time series and text descriptions enhances generation performance. + In Section 5.4.1, we demonstrated that multi-view alignment effectively alleviates the negative influence of noise introduced by the unstructured text condition. 
+ In Sections 5.4.2 and 5.4.3, we found that the multi-focal text processor mitigates the impact of noise in text, and the text processor assigns varying attention to tokens across different focuses. + In Section 5.4.4, we also showcased the effectiveness of our method's semantic control in time series editing. We hope that the above summary of experimental analyses has addressed your concerns about how our method tackles alignment between text and time series. In conclusion, we believe that our proposed method is well-motivated, conceptually sound, and clearly articulated in the paper, and this has also been recognized by other reviewers (Strengths from Reviewers Gv7f and 7E5h). Furthermore, we have conducted comprehensive experiments to study this novel task of text-based conditional time series generation. We'll continue refining the paper to enhance clarity and readability, and we hope the explanation above provides a better understanding of our work. We look forward to further valuable discussions with you. [1] Towards editing time series. NeurIPS 2024. [2] How Control Information Influences Multilingual Text Image Generation and Editing? NeurIPS 2024. [3] MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided Learning Process. ICLR 2024.
LipsNet++: Unifying Filter and Controller into a Policy Network
Accept (spotlight poster)
Summary: Deep reinforcement learning suffers from the action fluctuation problem. This problem has been studied in many different forms in the past. Initial works constrained the Lipschitz constant of various aspects of the policy and various value functions to achieve desired smoothness. This paper proposes two methods for this problem. First, they propose smoothing observation noise with frequency-domain filters whose filtering matrix is learnt via gradient descent. Second, they propose penalizing the Jacobian norm of the policy network as a proxy for its Lipschitz constant. Experimental results are shown on standard reinforcement learning benchmarks. Claims And Evidence: The paper proposes two methods to smooth the actions of RL policies. They are a little ad-hoc but there is some experimental evidence to suggest that the methods are effective. For the first improvement on constraining the Lipschitz constant of the neural network, the authors claim certain training time improvements from the simplification. Given that the proposed regularization is network-wise and not layer-wise, it is less susceptible to training difficulty. Further, this simplification is claimed to affect performance to a lesser extent. The other method of using a learned frequency-domain filter is new and is claimed to give additional improvements. An additional claim is that they are less sensitive to hyper-parameters than the benchmarks. These improvements are illustrated on an extensive set of experimental benchmarks. Methods And Evaluation Criteria: The methods are justifiable and the evaluation criterion of the action fluctuation ratio is standard for these benchmarks. However, given the control applications of this paper, the impact of filtering and normalizing the Lipschitz constant on the stability and safety of the controllers is worrying. This suggests incorporating evaluation criteria relevant to these aspects to gain a deeper understanding of the problem. 
Theoretical Claims: The theoretical section in the appendix is not very new and only serves as an addendum to the reader. In Theorem A.2, it should be the case that $K(x)\leq \max_{x^{\prime}} \Vert \nabla f(x^{\prime}) \Vert$. The approximation in the theorem statement is concerning given that there is theoretical research on the Lipschitz constant of a neural net. Experimental Designs Or Analyses: The experimental design is shown on the double integrator, the DeepMind Control Suite, and mini-vehicle driving. The mini-vehicle driving is illustrated on physical vehicles. The experimental design is sound. Supplementary Material: I read the supplementary material. The theorems in the supplementary material are not very rigorous. The additional sensitivity analysis and computational experiments are good. I did not go through every figure in the appendix. Relation To Broader Scientific Literature: Action smoothness is a very important problem to be tackled for control and robotics applications. The frequency-domain filtering method is interesting and deserves mention. The other method is similar to previously known literature and is not a standout approach. Essential References Not Discussed: RL theory proposes some filtering methods such as the one below. Some theoretical filters could be discussed. Hazan, Elad, Karan Singh, and Cyril Zhang. "Learning linear dynamical systems via spectral filtering." Advances in Neural Information Processing Systems 30 (2017). Other Strengths And Weaknesses: Strengths The effectiveness of the method despite its simplicity is the strength of the paper. The experimental benchmarks and figures are strong. The results are not compelling on all benchmarks. Further, experiments with adversarial noise could be interesting to evaluate further. The result in Figure 12 is strong. Weaknesses The structure and presentation are very similar to LipsNet. The Lipschitz constant optimization of LipsNet has more rigor. 
Only one of the two proposed methods fits the criteria of being high on both novelty and impact. These weaknesses are reflected in my score for this paper even though the experimental results have strength. Other Comments Or Suggestions: For control problems, observation filters could create a lag which can affect the system stability and safety. It would be interesting to look at these filters from a perspective of safety when the system is operating at the edge of its capability. Will smoothing the actions cause catastrophic failures when the only option is to use a big acceleration and stop to a halt? An adaptive smoothing scheme that remains cognizant of invariant constraints is important for this line of work to expand in scope. Section 3.2 is very interesting to me. Line 160: The mapping function has significant output differences even if the inputs are closely adjacent Line 162: typo with "does" not Line 206: "However, LipsNet is not applicable in high-real-time tasks due to the Jacobian matrix calculation during forward inference," - unclear statement Page 11, Line 557: It is unclear what the "real Lipschitz constant" is. Theorem A.2: There is a mistake in Line 626; the equality should be an upper bound. The appendix has multiple typos for "acceleration" and "obstacle". Questions For Authors: In terms of the claims of robustness, do we know if the proposed method is better or worse on adversarial noise distributions? Is the noise distribution the same in training and test? What aspects of the training were changed during the test? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful feedback! We are encouraged by the positive aspects of your assessment, i.e., ***method effectiveness, sound experimental design, and strong benchmarks and figures***. Your recognition of the frequency-domain filtering, Section 3.2, and Fig. 12 is especially appreciated. We now address your comments in detail: --- #### **> Improved Theoretical Claims** In Line 626, **the equality $K=\max_{x'}\left\Vert\nabla f(x')\right\Vert$ follows from Equation (8) in Lemma A.1**, rather than a direct application of the mean value theorem. So, the inequality $\leq$ does not apply here. To ensure a more rigorous presentation, we have explicitly included the error term in the theorem in the main text and discussed the limiting case $\rho\to0$. Please refer to our response to reviewer QXnu for the revised theorem. #### **> Improved Discussion with Additional References** Thank you for pointing out some theoretical filters, such as [1-3]. These works exhibit strong theoretical foundations. In contrast, our approach is more experimental—**we don't make any assumptions about system linearity, convexity, or noise distribution.** Instead, we introduce a learnable filtering matrix that **adaptively learns from data which frequencies are important and which are noise**. We have added a detailed discussion in the introduction section and don't elaborate here due to space constraints. #### **> Weakness Concerns** In fact, LipsNet++ differs significantly from LipsNet in structure. **First**, LipsNet only addresses an excessive Lipschitz constant, overlooking the dual causes of action fluctuation. In contrast, our approach introduces the Fourier Filter Layer and Lipschitz Controller Layer, which not only control the Lipschitz constant but also directly address observation noise. **Second**, while LipsNet has a rigorous Lipschitz constraint, it relies on a strong assumption—it only holds for MLPs with piece-wise linear activations (ReLU). 
Our Jacobian regularization, however, generalizes to any network architectures. **Third**, LipsNet’s Lipschitz constraint method is too slow for real-time inference, whereas LipsNet++ achieves a 4× faster inference speed, as shown in Appendix H. In summary, the two techniques in our work are both highly novel and impactful, with a structure entirely distinct from LipsNet. #### **> Comments & Suggestions** The lag when operating at the edge of capability is an interesting topic. Notably, LipsNet++ prevents catastrophic failures when the only option is to use a big acceleration and stop to a halt. The DMControl-Cheetah environment exemplifies this scenario, requiring actions to switch rapidly between -1 and 1 for high-speed movement. The [[Link - Figure R1]](https://github.com/ICML-anonymous-2025/LipsNet_v2/blob/main/rebuttal/Figure_R1.png) visualizes the first two action dimensions (rapid switching occurs across all dimensions). As shown in the figures, LipsNet++ preserves the necessary rapid reaction behavior to maintain good control performance (see TAR in Fig. 9) while reducing action fluctuation as much as possible (see AFR in Fig. 9). LipsNet++ **can adapt action smoothness dynamically** by learning local Lipschitz constants that vary with different obs input. #### **> Fixed Typos & Unclear Words** Line 160: Typo fixed (*even* $\to$ *even if*) Line 262: Typo fixed (*dose* $\to$ *does*) Line 206: We have clarified this statement by explicitly noting that LipsNet’s Jacobian matrix calculation during inference introduces significant computational overhead. The inference time and their comparisons are already detailed in Appendix H. Line 557: Fixed by removing *'real'* Appendix typos: Fixed all typos of *'acceleration'* and *'obstacle'* #### **> Questions** Q1: We have not tested with adversarial noise, which is a promising direction for future work. Q2 & Q3: Training is conducted without noise. 
Our method doesn't require the presence or absence of noise during training. During testing, different levels of noise are added to evaluate performance and action fluctuation under varying noise intensities. Apart from this, the training and test setups remain identical. The ability to achieve filtering even with noise-free training is attributed to Equation (4): the term $\lambda_h\left\Vert H\right\Vert_F$ drives the filter coefficients of frequencies unrelated to control performance to 0, as shown in Fig. 11, 22, and 28. This mechanism is one of our core ideas. --- Thank you again for your time and valuable feedback! Please let us know if you have any further questions. #### ***References*** *[1] E. Hazan, K. Singh, and C. Zhang. Learning linear dynamical systems via spectral filtering. NeurIPS 2017.* *[2] E. Hazan, H. Lee, K. Singh, C. Zhang, and Y. Zhang. Spectral filtering for general linear dynamical systems. NeurIPS 2018.* *[3] S. Arora, E. Hazan, H. Lee, K. Singh, C. Zhang, and Y. Zhang. Towards provable control for unknown linear dynamical systems. ICLR Workshop 2018.* --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response to my comments. I read the improved theorem statement and the changes are important for the theorem to be rigorous. It is better to avoid using an approximate operator without further justification. In terms of training conducted with and without noise, I am still curious about this aspect. Adding noise or disturbance can excite additional modes of the underlying dynamics that were previously not excited. This can have a significant impact on training and the learned policy. The frequencies related to the control performance depend on the noise distribution if the system is not fully observable. In this paper, we assume perfect knowledge of the state, which is usually not known. --- Reply to Comment 1.1.1: Comment: Thank you very much for your prompt response! 
Avoiding using an approximate operator has indeed made our theorem more rigorous, and we have revised the paper accordingly. We appreciate your insightful question on training noise—this is a valuable point. This reply box provides us with more characters to further elaborate on this question. **In short, LipsNet++ can be trained under any noise level, including noise-free settings.** We will substantiate this claim through both theoretical analysis and empirical evidence: #### **> Theoretical Analysis** As shown in Equation (3), the filtered frequency amplitude is determined by the magnitude of the complex elements in the Hadamard product $X\odot H$, where $X$ is the frequency feature matrix of the input observations and $H$ is the learnable filter matrix. If the magnitude of an element in $H$ approaches zero, it means that the corresponding frequency will be suppressed. LipsNet++ can **automatically identify valuable frequencies and noise frequencies**, thanks to the actor loss in Equation (4): $$\hspace{6cm}\mathcal{L}'=\mathcal{L}+\lambda_h\Vert H\Vert_F,$$ where $\mathcal{L}$ is the original actor loss, $\Vert H\Vert_F$ is the Frobenius norm, and $\lambda_h$ is a coefficient. (1) The first term $\mathcal{L}$ aims to improve the policy control performance (total average return) as much as possible. (2) The second term $\Vert H\Vert_F$ aims to decrease the magnitude of elements in $H$ as much as possible. Due to the **mutual influence** of the above two terms, **the filter coefficients (element magnitudes) in $H$ for frequencies that do not affect performance will automatically decrease to zero, while the filter coefficients for important frequencies that do affect performance will remain at higher values.** In this way, the automatic extraction of important frequencies and the suppression of non-important frequencies are achieved. This analysis does not depend on the presence or absence of training noise, so LipsNet++ can be trained under any noise level. 
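This mutual influence can be illustrated with a toy gradient-descent sketch (our own construction: the quadratic "performance" term and the two real coefficients standing in for $H$ are illustrative, not the actual actor loss or filter matrix). Here `h[0]` plays the role of a frequency that matters for control, while `h[1]` is an irrelevant (noise) frequency:

```python
import math

def total_loss(h, lam=0.5):
    """Toy version of Equation (4): L' = L + lambda_h * ||H||_F."""
    perf = (h[0] - 1.0) ** 2                 # "performance" term L
    frob = math.sqrt(sum(v * v for v in h))  # Frobenius norm ||H||_F
    return perf + lam * frob

def num_grad(h, lam=0.5, eps=1e-6):
    """Central-difference gradient, to keep the sketch dependency-free."""
    g = []
    for i in range(len(h)):
        hp, hm = list(h), list(h)
        hp[i] += eps
        hm[i] -= eps
        g.append((total_loss(hp, lam) - total_loss(hm, lam)) / (2 * eps))
    return g

h = [0.5, 0.5]  # initial filter coefficients
for _ in range(500):
    h = [max(0.0, v - 0.05 * gi) for v, gi in zip(h, num_grad(h))]
```

After training, the coefficient tied to performance settles at a high value while the irrelevant one is driven toward zero by the Frobenius penalty, mirroring the behavior shown for $H$ in Fig. 11, 22, and 28.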
You mentioned, *"Adding noise or disturbance can excite additional modes,"* and we fully agree. For example, noise is often added in RL to enhance exploration. LipsNet++ can also be trained with noise (i.e., when the system is not fully observable), but training with noise does not affect its control performance or action smoothing ability. In addition to the reason mentioned above, this is because we use **2D FFT** filtering in the Fourier Filter Layer, as shown in Equations (2) and (5), rather than applying 1D FFT filtering directly to each observation dimension. If the noise distribution happens to be unfavorable, 1D FFT might fail to identify valuable/noise frequencies. However, 2D FFT works fine because the **interdependencies between observation dimensions** during filtering allow for **more precise identification of whether an observation change is due to noise or a system state variation**. For example, when speed and position exhibit **mutually consistent changes**, 2D FFT can infer that the vehicle is accelerating rather than experiencing noise from the position and speed sensors; otherwise, the observed changes are likely caused by noise from one of the sensors. #### **> Empirical Evidence** To validate the above conclusion, we added **comparative training** under **noise-free, uniform noise, and normal noise** conditions in the mini-vehicle driving environment, and **visualized** the resulting filter matrices $H$. The results are shown in [[Link - Figure R2]](https://github.com/ICML-anonymous-2025/LipsNet_v2/blob/main/rebuttal/Figure_R2.png). These experimental results show that, regardless of noise presence or distribution, **the mode of the trained filter matrices remains consistent**, as only a fixed set of observation frequencies affect performance. 
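For concreteness, the 2D frequency-domain filtering $X\odot H$ can be sketched as follows (a dependency-free illustration in which a naive DFT stands in for the FFT; the example matrices are ours, not trained ones):

```python
import cmath

def dft2(x):
    """Naive O(N^2 M^2) 2D DFT, a stand-in for the 2D FFT of Equation (2)."""
    n, m = len(x), len(x[0])
    return [[sum(x[a][b] * cmath.exp(-2j * cmath.pi * (u * a / n + v * b / m))
                 for a in range(n) for b in range(m))
             for v in range(m)] for u in range(n)]

def idft2(X):
    """Inverse 2D DFT, returning the real part."""
    n, m = len(X), len(X[0])
    return [[sum(X[u][v] * cmath.exp(2j * cmath.pi * (u * a / n + v * b / m))
                 for u in range(n) for v in range(m)).real / (n * m)
             for b in range(m)] for a in range(n)]

def filter2d(x, H):
    """Transform, multiply elementwise by the filter matrix H, invert."""
    X = dft2(x)
    Y = [[X[u][v] * H[u][v] for v in range(len(X[0]))] for u in range(len(X))]
    return idft2(Y)

x = [[1.0, 2.0], [3.0, 4.0]]
all_pass = [[1.0, 1.0], [1.0, 1.0]]   # keep every frequency: identity map
dc_only  = [[1.0, 0.0], [0.0, 0.0]]   # keep only the DC component

roundtrip = filter2d(x, all_pass)     # recovers x
smoothed  = filter2d(x, dc_only)      # every entry becomes mean(x)
```

Zeroing an entry of the filter matrix suppresses the corresponding frequency entirely, which is exactly the mechanism by which small learned magnitudes in $H$ discard noise frequencies.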
Intuitively, the matrix $H$ can be seen as a form of *attention*: once the model learns which frequencies are worth focusing on, it naturally ignores the noisy ones; therefore, the performance remains unaffected when tested in noisy environments. This illustrates one of the **advantages of frequency-based processing**—training under any noise level (including noise-free settings) allows $H$ to identify the key frequencies. Finally, we acknowledge that under certain adversarially designed noise, the noise frequencies may overlap with the key frequencies, which future work will need to address. **In general conditions, LipsNet++ can be trained under any noise level, including noise-free settings.** --- We hope our reply adequately addresses your concerns. All of the above analyses have been incorporated into the revised paper. We must emphasize that your valuable feedback has significantly improved the paper's quality. Thank you sincerely for your time and valuable feedback!
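For completeness, here is a toy numerical check of the revised Theorem 3.2 mentioned at the start of this reply (the scalar function `f` is an assumed stand-in, not our policy network): the empirical local Lipschitz constant over a ball of radius $\rho$ approaches the Jacobian (gradient) norm as $\rho\to0$.

```python
import math

def f(x, y):
    # assumed toy function standing in for a smooth policy network
    return math.sin(x) + x * y

def grad_norm(x, y):
    # ||grad f|| for the toy f, computed analytically
    return math.hypot(math.cos(x) + y, x)

def local_lipschitz(x, y, rho, n_dirs=720):
    # empirical sup of |f(x + delta) - f(x)| / ||delta|| over ||delta|| = rho
    best = 0.0
    f0 = f(x, y)
    for k in range(n_dirs):
        th = 2.0 * math.pi * k / n_dirs
        ratio = abs(f(x + rho * math.cos(th), y + rho * math.sin(th)) - f0) / rho
        best = max(best, ratio)
    return best

x0, y0 = 1.0, 0.5
err_big = abs(local_lipschitz(x0, y0, 1e-1) - grad_norm(x0, y0))
err_small = abs(local_lipschitz(x0, y0, 1e-3) - grad_norm(x0, y0))
# the approximation error shrinks with the neighborhood radius rho
```

The error at $\rho=10^{-3}$ is orders of magnitude smaller than at $\rho=10^{-1}$, matching the theorem's statement that the Jacobian norm converges to the exact local Lipschitz constant as $\rho\to0$.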
Summary: The paper proposes a new policy network, LipsNet++, to mitigate the action fluctuation problem in real-world robotic applications. The proposed network uses a Fourier filter layer to smooth the observation input which is then fed into a MLP network with its local Lipschitz constant regularized via Jacobian regularization. On a set of simulated and real robotic tasks, the proposed policy network exhibits a lower action fluctuation level and higher robustness to observation noises. In particular, on the double integrator task, the authors compare LipsNet++ against a set of prior approaches designed for mitigating the action fluctuations and show that the proposed network excels at all metrics, including control performance and action smoothness when subject to different amount of observation noises. ## update after rebuttal All my concerns have been addressed. With the new experiments, I am now convinced that LipsNet++ indeed outperforms previous approach on the action smoothness while not sacrificing performance. I have increased my score from 1 to 3. Claims And Evidence: "Simulated and real-world experiments show that LipsNet++ has excellent action smoothness and noise robustness, achieving a new SOTA performance." -- I am not convinced by this claim because the only environment where the authors compare LipsNet++ with the prior baseline approaches (CAPS, L2C2, MLP-SN) is the toy double integrator environment. It is unfair to say that LipsNet++ achieves SOTA when it is only really shown to be the best on a toy environment. Methods And Evaluation Criteria: The proposed method makes sense and the evaluation criteria are also reasonable. Theoretical Claims: The statement for Theorem 3.2 is very vague -- What does "$||\nabla_x f|| \approx K(x)$" mean? I would suggest to use a more precise statement (e.g., define what infinitesimal neighborhood means and state that as $\rho \rightarrow 0, ||\nabla_x f|| \rightarrow K(x)$). 
Experimental Designs Or Analyses: The experimental results and analyses are well written and convincing. My only concern is that the authors did not compare the proposed method against important baselines on environments other than the toy double integrator environment. This raises the important question of whether the proposed method is as effective as claimed to be (SOTA). Supplementary Material: No. Relation To Broader Scientific Literature: The paper is closely related to LipsNet with the same goal: mitigating action fluctuation in robotic applications. The key contributions of this paper are - Observation filtering module with a learnable filter matrix that can help smooth out noise in the observations. - Jacobian regularization to encourage the local Lipschitz constant to be small for the policy network Essential References Not Discussed: - One of the key components of the proposed policy network is the Jacobian regularization of the network. However, this exact technique has been used in many prior works (especially in the adversarial robustness literature, e.g., [1, 2]), but the paper fails to attribute this technique properly. - The other key component of the proposed policy network is the observation filtering with the goal of being robust to observation noise. This is also widely studied in the literature (e.g., [3, 4]). The paper also fails to discuss how the proposed technique relates to prior techniques. While the ideas in the paper are nice, it is hard to contextualize this paper in the literature without a proper discussion of the broader literature. [1] Hoffman, Judy, Daniel A. Roberts, and Sho Yaida. "Robust learning with jacobian regularization." arXiv preprint arXiv:1908.02729 (2019). [2] Jakubovitz, Daniel, and Raja Giryes. "Improving dnn robustness to adversarial attacks using jacobian regularization." Proceedings of the European conference on computer vision (ECCV). 2018. [3] Zhang, Huan, et al. 
"Robust deep reinforcement learning against adversarial perturbations on state observations." Advances in Neural Information Processing Systems 33 (2020): 21024-21037. [4] Liu, Zuxin, et al. "On the robustness of safe reinforcement learning under observational perturbations." arXiv preprint arXiv:2205.14691 (2022). Other Strengths And Weaknesses: Strengths: - The paper is easy to follow and the proposed method makes sense for addressing the action fluctuation problem. Weaknesses: - The empirical comparison of the prior methods is lacking. While the authors evaluate the proposed method on many environments, baseline results are only available on a toy double integrator environment. This alone is not sufficient to show the effectiveness of the proposed method. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful feedback! We appreciate your recognition of our ***well-written and convincing*** results and analyses, ***reasonable*** evaluation criteria, ***nice*** ideas, the proposed methods that ***make sense***, and the ***easy-following*** nature of our paper. The positive aspects of your assessment encourages us. We now address your comments in detail: --- #### **> Improved Theoretical Claim** We appreciate your valuable suggestion! We **have refined** Theorem 3.2 by explicitly **incorporating the approximation error, defining the neighborhood area, and describing its asymptotic behavior** when $\rho\to 0$, as you suggested. The revised theorem is as follows: **Theorem 3.2** (Lipschitz’s Jacobian Approximation): *Let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a continuously differentiable neural network. The Jacobian norm $\left\Vert\nabla_xf\right\Vert$ serves as an approximation of the local Lipschitz constant of $f$ within the neighborhood $\mathcal{B}(x, \rho)$, centered at $x$ with radius $\rho$. The approximation error is given by $\mathop{\max}_{\ \delta\in \mathcal{B}(0,\rho)} \left[ \left(\nabla_x\left\Vert \nabla_x f(x) \right\Vert\right)^\top\delta + o(\delta)\right].$ Moreover, as $\rho \to 0$, the Jacobian norm converges to the exact local Lipschitz constant, i.e. $\left\Vert\nabla_xf\right\Vert \to K(x)$.* #### **> Additional Comparisons with Baselines on DMControl** Beyond the comprehensive baseline comparisons in the double integrator environment, Section 4.2 evaluates LipsNet++ against three baselines (MLP, LipsNet-G, LipsNet-L) across all DMControl environments and Appendix J compares it with one baseline (MLP-SN) on Reacher environment. Extending MLP-SN to all environments is impractical due to excessive hyperparameter tuning (Line 1039). To further address your concerns, we **have added experiments with additional baselines on DMControl** environments. 
Now, LipsNet++ is evaluated against all baselines on DMControl, including MLP, CAPS, L2C2, MLP-SN, LipsNet-G, and LipsNet-L. TAR comparison: [[Link - Table R1]](https://github.com/ICML-anonymous-2025/LipsNet_v2/blob/main/rebuttal/Table_R1.png), AFR comparison: [[Link - Table R2]](https://github.com/ICML-anonymous-2025/LipsNet_v2/blob/main/rebuttal/Table_R2.png) As shown in the above tables, LipsNet++ **maintains SOTA** performance on the **non-toy DMControl** tasks. #### **> Improved Discussion with Additional References** - Jacobian regularization: Prior works [1,2] use Jacobian regularization to enhance adversarial robustness for neural networks, with [2] proposing an efficient Jacobian norm approximation method (already cited in Line 214). While common [1-4], it is rarely explored in RL. Studies [5,6] have identified a high Lipschitz constant of the policy network as a cause of action fluctuation. Our Theorem 3.2 shows the Jacobian norm is an approximation of the Lipschitz constant, introducing Jacobian regularization as a practical way to control the Lipschitz constant in RL. - Observation filtering: Methods [7-10] enhance RL robustness at the algorithm level, e.g., state-adversarial DRL with SA-MDP modeling for inaccurate observations [7] and a robust training framework for safe RL with rigorous theoretical analysis and empirical results [8]. In contrast, our approach improves action smoothness and robustness at the network level by designing the policy network (Fourier Filter Layer), embedding prior knowledge directly into its structure. We have added a detailed discussion in the introduction section and don't elaborate here due to space constraints. --- #### **Summary** We hope our revisions adequately address your concerns. Please let us know if you have any further questions. Thank you for your time and valuable feedback! #### ***References*** *[1] D. Jakubovitz, et al. Improving dnn robustness to adversarial attacks using jacobian regularization. ECCV 2018.* *[2] J. Hoffman, et al. 
Robust learning with jacobian regularization. arXiv 2019.* *[3] W. Yuxiang, et al. Orthogonal jacobian regularization for unsupervised disentanglement in image generation. ICCV 2021.* *[4] K. Co, et al. Jacobian regularization for mitigating universal adversarial perturbations. ICANN 2021.* *[5] R. Takase, et al. Stability-certified Reinforcement Learning via Spectral Normalization, arXiv 2020.* *[6] X. Song, et al. LipsNet: A Smooth and Robust Neural Network with Adaptive Lipschitz Constant for High Accuracy Optimal Control, ICML 2023.* *[7] H. Zhang, et al. Robust deep reinforcement learning against adversarial perturbations on state observations. NeurIPS 2020.* *[8] Z. Liu, et al. On the robustness of safe reinforcement learning under observational perturbations. ICLR 2023.* *[9] M. Xu, et al. Trustworthy reinforcement learning against intrinsic vulnerabilities: Robustness, safety, and generalizability. arXiv 2022.* *[10] H. Zhang, et al. Robust reinforcement learning on state observations with learned optimal adversary. ICLR 2023.* --- Rebuttal Comment 1.1: Comment: Thanks for your response and adding the literature discussion. > Improved Theoretical Claim What does $o(\delta)$ mean here? > Extending MLP-SN to all environments is impractical due to excessive hyperparameter tuning (Line 1039). It is a common practice to tune the baselines with a similar budget as your method for comparison fairness, so I don't think it is necessary to tune MLP-SN more than what you did to tune your method. As MLP-SN seems to be a pretty strong (and simple) baseline, not having experimental results on it does bring down my confidence in the effectiveness of the method. In addition, I noticed that the range of the spectral norm values that you used in your submission was very narrow (e.g., 5.0 - 6.0 in Table 13). Usually it is a good idea to sweep over a range of values with different orders of magnitude to get a better sense of how well the method does (e.g., 1.0, 10.0, 100.0). 
Having hyperparameter values too close to each other could increase the risk of overlooking good hyperparameters. > Added experiments with additional baselines on DMControl environments In Table R1, it seems that LipsNet++ shows very little improvement over LipsNet-G and LipsNet-L (especially considering the confidence intervals). It is also very close to the MLP-SN baselines. Unfortunately, the experiments are not sufficiently strong to convince me that the approach is more effective than the baselines. --- Reply to Comment 1.1.1: Comment: Thank you for your response and the new concerns. --- > What does $o(\delta)$ mean here? The term $o(\delta)$ refers to a **higher-order infinitesimal term** with respect to $\delta$ in the **Taylor expansion** (see Line 631 for the Taylor expansion process). This is a standard notation used in Taylor series. To improve clarity, we have updated Theorem 3.2 by explicitly writing *"where $o(\delta)$ is a higher-order infinitesimal term with respect to $\delta$."* We appreciate your helpful suggestion, which indeed improves the readability of the theorem. > Tune the MLP-SN baselines with a similar budget as your method Please refer to Appendix F, where we have already provided a **thorough sensitivity analysis** for LipsNet++. LipsNet++ is **insensitive** to hyperparameters — it **only requires** selecting the **correct order** of magnitude without fine-tuning, and all of its hyperparameters are **low-sensitivity**. In contrast, tuning MLP-SN requires carefully **adjusting the spectral norm** for **each layer** within a **narrow range**, which involves **highly sensitive** and **numerous hyperparameter combinations**. Therefore, the tuning complexity of MLP-SN is not comparable to that of LipsNet++. We have added a discussion on the tuning complexity of MLP-SN in the revised paper — thank you for your suggestion, which helped improve our work. > Range of the spectral norm values was very narrow (e.g., 5.0 - 6.0). 
It is a good idea to sweep over different orders of magnitude (e.g., 1.0, 10.0, 100.0). Thank you for your careful reading and observation. We agree with your suggestion that sweeping over different orders of magnitude (e.g., 1.0, 10.0, 100.0) is a principled way to search for hyperparameters — in fact, we **adopted this approach where appropriate**, as shown in the hyperparameter tuning in Appendix F. However, we would like to kindly invite you to take a closer look at the results within the 5.0–6.0 range: from MLP-SN's **trends of TAR and AFR**, it is **already evident** that MLP-SN cannot outperform LipsNet++, as shown in the following Figure R3. Figure R3: [[Link - Figure R3]](https://github.com/ICML-anonymous-2025/LipsNet_v2/blob/main/rebuttal/Figure_R3.png) Following your suggestion, we also trained MLP-SN with spectral norm values in different orders of magnitude (e.g., 1.0, 10.0, 100.0), and the results are listed in the following Table R6. Table R6: [[Link - Table R6]](https://github.com/ICML-anonymous-2025/LipsNet_v2/blob/main/rebuttal/Table_R6.png) All the above results indicate that **no matter how** the hyperparameters of MLP-SN are **adjusted**—whether through fine-tuning within the range 5.0-6.0 or by coarser adjustments at the scale of 1, 10, or 100—its performance remains **significantly inferior to that of LipsNet++**. We believe this addition greatly helps readers better understand the comparison, and we sincerely appreciate your detailed and constructive feedback. > Shows very little improvement Please kindly note that the main goal of this paper is to **reduce the Action Fluctuation Rate (AFR)**. As shown in [[Link - Table R2]](https://github.com/ICML-anonymous-2025/LipsNet_v2/blob/main/rebuttal/Table_R2.png), LipsNet++ **achieves significant improvements** over the previous SOTA (LipsNet-L), **reducing AFR by 23.5%, 75.0%, 13.0%, and 35.5%** on Cartpole, Reacher, Cheetah, and Walker respectively, with an **average reduction of 36.74%**.
These are **substantial improvements**. Moreover, the AFR **variances** of both LipsNet++ and LipsNet-L remain **below 0.03**, which is **much smaller** than the mean values, indicating that the **results are statistically reliable**. Compared to MLP-SN, the improvements are **even more pronounced**. These results **clearly demonstrate the SOTA performance of LipsNet++**. Thank you for raising this point — we have now included the AFR reduction percentages directly in the revised paper to more explicitly highlight the performance advantage of LipsNet++, which will help readers better appreciate the significance of the results. --- We hope our reply adequately addresses your concerns. All the above results have been incorporated into the revised paper. We must emphasize that your valuable feedback has significantly improved the paper's quality. Thank you sincerely for your time and valuable feedback!
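As an aside for readers less familiar with the MLP-SN baseline discussed in this thread: spectral normalization constrains a layer by rescaling its weight matrix so that its largest singular value stays below a chosen bound (the values being swept above, e.g., 1.0, 5.0, 10.0, 100.0). A minimal sketch, assuming a NumPy environment; the power-iteration routine and the `target` bound are illustrative choices, not MLP-SN's actual implementation:

```python
import numpy as np

W = np.random.default_rng(0).normal(size=(64, 32))  # a hypothetical layer's weights

def spectral_norm(W, iters=200):
    # Power iteration on W^T W to estimate the largest singular value of W.
    v = np.random.default_rng(1).normal(size=W.shape[1])
    for _ in range(iters):
        v = W.T @ (W @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)

sigma = spectral_norm(W)
target = 5.0                          # the spectral-norm bound being tuned
W_sn = W * min(1.0, target / sigma)   # rescale only if sigma exceeds the bound
```

Sweeping `target` over orders of magnitude then simply reruns training with a different per-layer bound, which is why the tuning discussion above reduces to a question of search granularity.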
Summary: Action fluctuation is a major issue in reinforcement learning. The fluctuation in action can be caused by measurement noise or steep changes in the policy due to large Jacobians. This paper addresses the measurement noise issue using a Fourier filter and the steep Jacobian issue using Jacobian regularization. The approach is well-motivated and I believe the paper makes a strong contribution. Claims And Evidence: Yes, the exposition in the paper supports the claim in the introduction. Methods And Evaluation Criteria: Yes from my perspective. Theoretical Claims: Yes, the mathematical development is correct, but it can be made more rigorous. In the statement of Theorem 3.2, the authors should also include the approximation error for the Lipschitz constant instead of just using $\approx$ notation, to make the theorem more precise. The approximation error appears well articulated in the proof in the appendix, so I suggest just using that formulation. Experimental Designs Or Analyses: The experimental design appears valid to me, but I am more of a theoretically inclined researcher; I'll leave the critique on experiments to more experimentally oriented researchers. Supplementary Material: I read the Appendix. Relation To Broader Scientific Literature: The literature survey is sufficient to put the contribution in context. Essential References Not Discussed: None that I know of. Other Strengths And Weaknesses: This paper is an excellent specimen of leveraging theoretical insights, especially using Lipschitz constant estimates to solve a crucial performance issue in RL. Other Comments Or Suggestions: None. Questions For Authors: Isn't Lemma A.1 just the mean value theorem for vector-valued functions? Any standard math textbook reference would suffice, e.g., see Principles of Mathematical Analysis by Rudin, Theorem 5.19. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful feedback! We appreciate your recognition of our work as a ***well-motivated*** approach that makes a ***strong contribution*** and as an ***excellent specimen of leveraging theoretical insights to solve a crucial issue in RL***. Your positive assessment greatly encourages us. We now address your comments in detail: --- #### **> Theoretical Improvement** We appreciate your valuable suggestion! We **have refined** Theorem 3.2 by **explicitly incorporating the approximation error and its asymptotic behavior**. The revised theorem is as follows: **Theorem 3.2** (Lipschitz’s Jacobian Approximation): *Let $f:\mathbb{R}^n \to \mathbb{R}^m$ be a continuously differentiable neural network. The Jacobian norm $\left\Vert\nabla_xf\right\Vert$ serves as an approximation of the local Lipschitz constant of $f$ within the neighborhood $\mathcal{B}(x, \rho)$, centered at $x$ with radius $\rho$. The approximation error is given by $\mathop{\max}_{\delta\in \mathcal{B}(0,\rho)} \left[ \left(\nabla_x\left\Vert \nabla_x f(x) \right\Vert\right)^\top\delta + o(\delta)\right].$ Moreover, as $\rho \to 0$, the Jacobian norm converges to the exact local Lipschitz constant, i.e. $\left\Vert\nabla_xf\right\Vert \to K(x)$.* This refinement provides a more precise characterization of the approximation error, as you suggested. #### **> Question on Lemma A.1** We appreciate your question regarding Lemma A.1. Theorem 5.19 in Rudin's book is an **existential result** that relates the function’s increment to its derivative at some point, **without involving** the Lipschitz constant or providing an explicit expression for the Lipschitz constant; whereas Lemma A.1 **explicitly gives an equivalent expression** for the Lipschitz constant. In summary, Theorem 5.19 primarily focuses on bounding the variation of a function, while Lemma A.1 focuses on the local Lipschitz property and provides an approximation method for calculating the Lipschitz constant.
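The limit in Theorem 3.2 is easy to check numerically: perturbing the input along the Jacobian's top singular direction with a small radius $\rho$ changes the output at a rate close to the Jacobian's spectral norm. A minimal sketch with a toy one-layer network (the network, evaluation point, and radius are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))

def f(x):
    # toy smooth network f(x) = tanh(W x)
    return np.tanh(W @ x)

def jac(x):
    # closed-form Jacobian of tanh(W x): diag(1 - tanh(W x)^2) @ W
    return (1.0 - np.tanh(W @ x) ** 2)[:, None] * W

x = rng.normal(size=4)
J = jac(x)
jac_norm = np.linalg.norm(J, 2)  # spectral norm of the Jacobian at x

# Perturb along the top right singular vector of J, the direction that
# attains the spectral norm, with a small radius rho.
rho = 1e-5
d = rho * np.linalg.svd(J)[2][0]
ratio = np.linalg.norm(f(x + d) - f(x)) / rho  # empirical local Lipschitz ratio
```

As `rho` shrinks, `ratio` converges to `jac_norm`, matching the theorem's limit $\left\Vert\nabla_x f\right\Vert \to K(x)$ as $\rho \to 0$.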
--- #### **Summary** We hope our revisions adequately address your concerns. Please let us know if you have any further questions. Thank you again for your time and valuable feedback!
Summary: The paper introduces LipsNet++, a novel policy network designed to mitigate action fluctuation in reinforcement learning (RL). The authors identify two primary causes of action fluctuation: observation noise and policy non-smoothness. To address these, LipsNet++ integrates: (1) a Fourier filter layer, which processes historical observations in the frequency domain and applies a trainable filter matrix to suppress noise; and (2) a Lipschitz controller layer, which applies Jacobian regularization to constrain the Lipschitz constant, ensuring smooth control outputs. ## update after rebuttal The authors did address most of my concerns, which is why I increased my score. Claims And Evidence: The paper presents several claims, the majority of which are well-supported by experimental results: - The Fourier filter layer effectively suppresses observation noise. - The Lipschitz controller layer enhances policy smoothness, supported by Theorem 3.2. - The approach is applicable across various continuous control tasks. - LipsNet++ is easily integrable into different network architectures and RL frameworks. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense. However, one aspect that is missing is the influence of control type. Depending on the environment, the control inputs could be torque, velocity, or position-based. It would be particularly interesting to investigate how LipsNet++ performs across these different control modalities and whether its effectiveness varies between them. Theoretical Claims: I did take a look at Theorem 3.2. The derivations are mathematically sound and align with prior work on smooth policy learning. Experimental Designs Or Analyses: The experiments are well-structured, incorporating multiple environments and ablations: - Double Integrator: Assesses basic control performance and robustness to noise. - DMControl Suite: Evaluates performance on complex continuous control tasks.
- Mini-Vehicle Driving: Demonstrates real-world applicability in a robotics setting. It would have been interesting to examine the impact of control type, as position control inherently applies some degree of smoothing compared to torque control. A useful addition would be to construct two alternative control interfaces (for position and velocity; DMC already uses torque) within DMC and perform an ablation study over them. Given the exploratory nature of this analysis, it may not be necessary to compare all baselines for this ablation. Supplementary Material: I did take a look at the proof of theorem 3.2 and "C. Fundamental Reasons of Action Fluctuation". Relation To Broader Scientific Literature: The paper builds on prior work in: - Lipschitz-constrained policy learning (Takase et al., 2020; Song et al., 2023). - Fourier-based filtering in neural networks (Lee-Thorp et al., 2022; Tan et al., 2024). - Smooth control in RL (Mysore et al., 2021; Yu et al., 2021). The connection to classical control theory is a valuable insight, suggesting that RL policy networks can explicitly incorporate filtering and control mechanisms. Essential References Not Discussed: The related work section appears comprehensive, and I did not identify any missing references. Other Strengths And Weaknesses: Strengths: - Novel integration of filtering and control into a policy network. - Clear empirical improvements in smoothness and robustness. - Good ablation studies. - Public PyTorch implementation for reproducibility. Weaknesses: - Limited discussion on scalability to image-based RL. - Some interesting ablations are missing (see above on control type influence) - Certain claimed contributions seem overstated relative to their novelty. Other Comments Or Suggestions: - second formula: squared term needs to be inside the expectation - “3.1. Reasons Identification of Action Fluctuation” not a good section name. I also found that section not really novel and satisfying. 
Calling it one of the four core contributions of the paper looks like an overstatement to me Questions For Authors: - how well does this scale to image observations? - does the trainable H matrix receive gradients from the critic? Or did you stop these gradients? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful feedback! We are encouraged by the positive aspects of your assessment, i.e., ***well-supported*** claims, ***mathematically sound*** derivations, ***valuable*** insight, ***comprehensive*** related works, ***novel*** integration of filtering and control, ***clear*** empirical improvements, and ***good*** ablation studies. We now address your comments in detail: --- #### **> Examine Impact of Control Types** All DMControl environments in our paper (Cartpole, Reacher, Cheetah, Walker) are torque control. To examine the impact of control types as you suggested, we additionally train on **position control (DMControl Fish)** and **velocity control (Gymnasium Bipedal Walker)** environments. The Fish task needs to control 5 actions (tail, tail_twist, fins_flap, finleft_pitch, finright_pitch) under position control type, and the Bipedal Walker needs to control 4 motor speed values under velocity control type. Their TAR and AFR are summarized as follows: TAR comparison: [[Link - Table R3]](https://github.com/ICML-anonymous-2025/LipsNet_v2/blob/main/rebuttal/Table_R3.png), AFR comparison: [[Link - Table R4]](https://github.com/ICML-anonymous-2025/LipsNet_v2/blob/main/rebuttal/Table_R4.png) These results show that LipsNet++ achieves good action smoothness across all three control types. Additionally, some studies on robotics control also support the effectiveness of LipsNet-related structures in torque control [1] and position control [2]. In fact, we believe LipsNet++ does not assume the control type to be torque, position, or velocity; it ensures action smoothness in all cases and serves as a general solution. The choice of control type should depend on the task requirements. #### **> Other Comments Or Suggestions** - Formula in Line 98: Typo fixed. - Thank you for your suggestion.
We **have renamed** Section 3.2 to “Understanding Key Factors Behind Action Fluctuation” and **refined** the contribution description accordingly. Additionally, we believe analyzing the root causes of action fluctuation is essential. Equation (1) mathematically identifies two fundamental causes, which play a crucial role in guiding the design of the subsequent network layers (Fourier Filter Layer & Lipschitz Controller Layer). Previous works did not distinguish the two causes and treated them as one; therefore, our analysis actually contributes to the broader community. #### **> Q1: Scalability to Image-based RL** **Yes**, LipsNet++ can scale to image observations. The **core question** here is whether LipsNet++ can handle high-dimensional observation tasks. To demonstrate this, we added two experiments: DMControl Cartpole (pixel observation) and Gymnasium Humanoid (348 observations), with the former having image-based observations and the latter having high-dimensional observations. Given the high-dimensional input, we adopt the following LipsNet++ architecture: - DMControl Cartpole: *[Convolutional Layer -> Fourier Filter Layer -> Lipschitz Controller Layer]* - Gymnasium Humanoid: *[Linear Layer -> Fourier Filter Layer -> Lipschitz Controller Layer]* The Convolutional Layer and Linear Layer perform feature extraction and dimension reduction. The TAR and AFR are summarized in [[Link - Table R5]](https://github.com/ICML-anonymous-2025/LipsNet_v2/blob/main/rebuttal/Table_R5.png). These results demonstrate that **LipsNet++ can handle high-dimensional observation tasks and can scale to image-based tasks**. #### **> Q2: Gradient Flow for Matrix $H$** $H$ receives gradients from the policy improvement loss $\mathcal{L}'$, which includes the critic value as described in Equation (4) and the last equation in Section 2.1; but $H$ does not directly receive gradients from the critic itself, as the critic does not have a Fourier Filter Layer.
Integrating a Fourier Filter Layer into the critic is an interesting direction. In this case, the actor and critic could potentially share one $H$ matrix, raising an interesting question of whether $H$ should receive gradients only from the actor or from both. Thank you for your insightful perspective. We will explore this in future work. --- #### **Summary** We hope our revisions adequately address your concerns. Please let us know if you have any further questions. Thank you again for your time and valuable feedback! #### ***References*** *[1] Y. Zhang, et al. Robust Locomotion Policy with Adaptive Lipschitz Constraint for Legged Robots. IEEE RAL, 2024.* *[2] G. Christmann, et al. Benchmarking Smoothness and Reducing High-Frequency Oscillations in Continuous Control Policies. IEEE IROS, 2024.* --- Rebuttal Comment 1.1: Comment: I sincerely thank the authors for their detailed response and for taking the time to run the experiment I requested. I truly appreciate their effort. Most of my concerns have been addressed, and as a result, I will be raising my score.
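As general background for the Fourier Filter Layer discussed in this thread: multiplying the spectrum of an observation history by a filter and transforming back can suppress measurement noise while keeping the slowly varying signal. A minimal sketch with a hand-picked low-pass filter (in LipsNet++ the filter matrix $H$ is trainable; the signal, noise level, and cutoff `K` here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 64                                     # length of the observation history
t = np.arange(T)
clean = np.sin(2 * np.pi * t / T)          # slowly varying "true" observation
noisy = clean + 0.3 * rng.normal(size=T)   # measurement noise

# Frequency-domain filtering: keep only the K lowest frequency bins.
K = 4
spectrum = np.fft.rfft(noisy)
H = np.zeros_like(spectrum)
H[:K] = 1.0                                # hand-picked low-pass filter weights
filtered = np.fft.irfft(H * spectrum, n=T)

err_noisy = np.mean((noisy - clean) ** 2)
err_filtered = np.mean((filtered - clean) ** 2)
print(err_noisy, err_filtered)             # filtering reduces the error
```

In the trainable version, the filter weights would be learned end-to-end from the policy loss rather than fixed to a low-pass shape as above.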
Towards Memorization Estimation: Fast, Formal and Free
Accept (poster)
Summary: This paper introduces cumulative sample loss (CSL) as a way to measure memorization in neural networks. The key idea is that by tracking the cumulative loss over training, CSL can identify mislabeled and duplicate examples efficiently. The authors argue that CSL is both cheaper than stability-based methods and more effective at finding problematic examples. They evaluate their method on standard benchmark datasets and show that it outperforms existing heuristics for detecting mislabeled data. Claims And Evidence: The authors claim that CSL is a better measure of memorization than previous methods, particularly stability-based approaches. The experimental results generally support this, showing that CSL can effectively find mislabeled and duplicate samples. But the claims about general memorization seem a bit narrow because the paper mainly focuses on mislabeled examples and does not explore broader notions of memorization, such as how it relates to generalization or robustness. Some comparisons to existing memorization measures could be stronger, especially with alternative ways of grouping examples based on learning dynamics. Methods And Evaluation Criteria: The method itself is quite simple: the authors track the cumulative loss over training and use this as a signal for memorization. The evaluation criteria, identifying mislabeled and duplicate samples, make sense for the problem, but they don’t fully capture memorization in a more general sense. It would be good to see how CSL performs on other memorization-related tasks, such as distinguishing between easy-to-learn and hard-to-learn examples or detecting spurious correlations. The benchmarks used are reasonable, but additional datasets or different noise levels could provide a more complete picture. Theoretical Claims: There aren’t many deep theoretical results in the paper, but the reasoning behind CSL is intuitive and aligns with prior work on learning dynamics.
The authors suggest that CSL correlates with memorization, but this is mostly supported empirically rather than through a formal analysis. Experimental Designs Or Analyses: The experiments are well-structured and support the main claims of the paper. The ablation studies show that CSL is effective at detecting mislabeled data, and the comparison with stability-based methods highlights its efficiency. However, the evaluation could be more comprehensive. For example, it would be useful to test CSL across different architectures or dataset sizes to see if the trends hold more broadly. Additionally, it would be good to analyze whether CSL correlates with other memorization-related signals, such as those used in curriculum learning or adversarial robustness. Supplementary Material: I did not get a chance to review the appendix. Relation To Broader Scientific Literature: The paper is connected to prior work on memorization in deep learning, particularly methods that use learning dynamics to classify examples as memorized or generalized. However, it focuses mainly on mislabeled and duplicate examples, which is only one aspect of memorization. There are several recent works that explore similar ideas but in a broader context, including those that analyze memorization in terms of generalization or in terms of progressive learning dynamics. A more thorough discussion of how CSL fits within these broader frameworks would improve the positioning of the paper. Essential References Not Discussed: There are a few missing references that seem very relevant to this work. For example: https://arxiv.org/abs/2202.09931 studies learning dynamics as a way to classify examples into different categories. At the core this is very similar to CSL. https://arxiv.org/abs/2412.07684 studies the connection between memorization and generalization. https://arxiv.org/pdf/2207.00099 and https://arxiv.org/pdf/2309.16748 also look at memorization from a learning dynamics perspective. 
Other Strengths And Weaknesses: Strengths: - CSL is a very simple and computationally cheap approach, making it practical for real-world use. - The method effectively detects mislabeled and duplicate examples, which is useful for dataset curation, especially these days, when curating datasets seems more important than ever. - The paper is well-written and easy to follow. Weaknesses: - The focus on mislabeled examples is somewhat narrow and does not fully capture the complexity of memorization. - The comparisons to existing memorization measures could be stronger, especially with alternative approaches that analyze the effect of memorization on generalization. Other Comments Or Suggestions: - It would be helpful to clarify whether CSL is robust across different architectures or if it mainly works well for the specific models tested. - Some additional theoretical grounding for why CSL is expected to work better than other methods would strengthen the argument. Questions For Authors: 1. How does CSL compare to other learning-dynamics-based methods beyond stability-based memorization? 2. Have you tried applying CSL to a setting where memorization is not just about mislabeled examples but also about spurious correlations? For example, memorization of examples that all benefit from a spurious correlation. --- # Update on Mar 31st: Dear authors, At AC's request, here I provide more details on my initial review. Apologies for any missing details. I would be happy to update my score once I receive your reply. ## Which other architectures and dataset sizes? This paper reports results using a ResNet-18. But they report the results against the memorization scores precomputed in [1], which used a ResNet-50. So it makes sense to report on a ResNet-50 with the exact setup described in Appendix B of [1]. Running experiments on modern architectures like ViTs, especially since they have different inductive biases compared to ResNets, would also strengthen the paper.
But I don’t expect the authors to rerun all their experiments with ViTs. Let me elaborate why I wished to see further experiments: Another aspect of memorization, beyond mislabeled examples, is the memorization of underrepresented examples. For example, if you train a large model on just a handful of examples, the model gets zero training error but won’t generalize. According to the definition of the memorization score in [1] (the difference between held-in and held-out performance), this is memorization. It would be interesting to test whether CSL can also identify memorized (but not mislabeled) examples in such cases. A natural testbed would be long-tailed datasets like ImageNet-LT or CIFAR-LT, where certain classes are underrepresented. ## Other memorization-related signals used in curriculum learning or adversarial robustness: In curriculum learning, [2] uses forgetting frequency as a signal: how often a sample flips from being correctly to incorrectly classified. [3] uses loss trajectory clustering to group examples, similar to CSL. [4] uses model confidence on the true class and the variability of that confidence across epochs. In adversarial robustness, [5] proposes loss-sensitivity, suggesting that memorized examples are more easily perturbed. [6] uses sharpness of the loss function at convergence as another signal. ## Other learning-dynamics-based methods: Each of the papers above suggests a different signal (loss sensitivity, forgetting frequency, confidence, or confidence variability) that may be compared with CSL. A comparison to at least one of these could strengthen the paper.
I also revisited the theoretical section in the appendix and realized I had initially missed the link the authors make between two parts: one showing that CSL bounds learning time, and the other that learning time bounds memorization. I now value the theoretical contribution more. Thank you! [1] "What neural networks memorize and why: Discovering the long tail via influence estimation" Vitaly Feldman and Chiyuan Zhang. NeurIPS 2020. [2] "An empirical study of example forgetting during deep neural network learning" Toneva et al. ICLR 2019. [3] "Deconstructing Distributions: A Pointwise Framework of Learning" Kaplun et al. ICLR 2023. [4] "Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics" Swayamdipta et al. EMNLP 2020. [5] "A Closer Look at Memorization in Deep Networks" Arpit et al. ICML 2017. [6] "Deep Nets Don't Learn via Memorization" Krueger et al. ICLR 2017. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback; we address their questions below. Tables and Figures are provided at ****[Rebuttal Page clickable link](https://lively-dune-08c51d610.6.azurestaticapps.net/).**** 1. Which other architectures and dataset sizes? **A**: Please see our response to reviewer ZrxQ Q1. Additionally, we also ran the long tail experiments, visualized in Figures 3, 4, 5, and 6 on the rebuttal page. See Q2 for a more in-depth explanation. 2. Other memorization-related signals used in curriculum learning or adversarial robustness: **A**: We would like to clarify that we provide two experiments, first *mislabeled detection*, second *duplicate detection*, to capture memorization properties. Additionally, we ran the experiments below - Adversarial Robustness (Fig 7 rebuttal page): - We provide a visualization of adversarial distance versus memorization and CSL scores. Adversarial distance refers to how easily a sample can be perturbed to change its classification. The results show that samples with low adversarial distance (more vulnerable) tend to have higher memorization and CSL scores. This suggests that CSL and memorization capture similar properties and are strongly related to model robustness. - Easy vs. Hard-to-Learn Samples: - Fig 1 (on the Rebuttal Page (link above)) shows images with low CSL scores from CIFAR100, interpreted as easy or typical examples (class prototypes). Fig. 2 shows high CSL images from CIFAR100 capturing atypical and likely memorized examples. Figs. 3 and 4 present the same for ImageNet. The high CSL visualizations clearly show we identify memorized (but not mislabeled) examples in such cases. This visualization is similar to [3], who show similar figures with their proposed proxy of curvature. - Long Tail Behavior - Clean CIFAR100 and ImageNet datasets are also long tailed [11].
Fig 5 and Fig 6 plot the histogram of CSL on CIFAR100 and ImageNet, showing that CSL captures long tail behavior - Additional Results Under Higher Noise (Table 2 rebuttal page) - We evaluate CSL under higher label noise settings. - CSL maintains its ability to distinguish clean from mislabeled examples, indicating its robustness across different noise levels. 3. On other learning dynamics methods? **A**: Please note that Second Split Forgetting Time (SSFT), forgetting time, learning time, in-confidence (1 - confidence), and curvature are learning-dynamics-based methods, and clearly we see that CSL outperforms them in both mislabeled detection and duplicate detection (Tables 2 and 3 in the main paper). Regarding similarity with memorization scores, we have added forgetting frequency, loss sensitivity, and final epoch loss (Table 1 rebuttal page) as additional baselines. 4. Supplementary Material **A**: Feldman and Zhang [11] ran Inception on CIFAR10/100 and ResNet50 on ImageNet. Additionally, we have added ResNet50 to ImageNet experiments (Table 1 rebuttal page) and Inception on CIFAR100. To save space we kindly ask the reviewer to see our response to reviewer ZrxQ Q1. 5. There are several recent works that explore similar ideas but in a broader context, ... would improve the positioning of the paper. **A**: Thank you for bringing this to our attention. While these works share similar goals, they differ from ours. We will include them in the related work section. [7] measures performance across multiple models on a single input, while CSL analyzes the distribution of memorization scores within a single training run, with no added overhead. In contrast, [7] is significantly more computationally expensive. [8] addresses how memorization and spurious correlations hinder generalization and proposes MAT to mitigate them. Our work introduces CSL, a fast, theoretical metric for estimating memorization, which we apply to detect mislabeled and duplicate data.
[9] explores how memorization fades when examples are removed. CSL, on the other hand, focuses on efficiently estimating how memorized each sample is, offering both theoretical grounding and empirical utility. [10] proposes data splitting for robust OOD training. CSL instead offers a lightweight and accurate method for measuring memorization. Additionally please see our response to reviewer WoPj Q5. [3] Garg et al. "Memorization Through the Lens of Curvature of Loss Function Around Samples." ICML 24.\ [7] Kaplun et al. "Deconstructing Distributions: A Pointwise Framework of Learning." ICLR 23\ [8] Bayat et al. "The Pitfalls of Memorization: When Memorization Hurts Generalization." ICLR 25\ [9] Jagielski et al. "Measuring Forgetting of Memorized Training Examples." ICLR 23\ [10] Pezeshki, et al. "Discovering Environments with XRM." ICML 24\ [11] Feldman & Zhang "What neural networks memorize and why: Discovering the long tail via influence estimation" NeurIPS 20. --- Rebuttal Comment 1.1: Comment: I read your rebuttal and the additional experiments and analysis that you have provided. I must say that I am impressed with the effort you have put into addressing my previous concerns. The new experiments and analysis have strengthened the paper and I would increase my score.
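To make the CSL metric discussed in this thread concrete, here is a toy sketch of cumulative sample loss and its use as a mislabel detector, scored with AUROC. The loss trajectories are synthetic, fabricated so that mislabeled samples are fit more slowly; this is an illustration of the idea, not the authors' experimental code:

```python
import numpy as np

rng = np.random.default_rng(0)
n, epochs = 1000, 30
mislabeled = rng.random(n) < 0.1      # 10% synthetic label noise

# Synthetic per-epoch losses: clean samples' losses decay quickly,
# mislabeled ones stay high for longer (the dynamic CSL exploits).
loss = np.empty((epochs, n))
for e in range(epochs):
    base = np.where(mislabeled, 2.0 * 0.95 ** e, 2.0 * 0.7 ** e)
    loss[e] = base + 0.05 * rng.random(n)

csl = loss.sum(axis=0)                # cumulative sample loss over training

# AUROC of thresholding CSL as a mislabel detector (Mann-Whitney form).
order = np.argsort(csl)
ranks = np.empty(n)
ranks[order] = np.arange(1, n + 1)
pos, neg = mislabeled.sum(), (~mislabeled).sum()
auroc = (ranks[mislabeled].sum() - pos * (pos + 1) / 2) / (pos * neg)
```

On this cleanly separated toy data the AUROC is essentially 1; on real training runs the degree of separation is what the detection comparisons in Tables 2 and 3 of the main paper measure.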
Summary: The paper proposes a computational efficient proxy metric (CSL) to the popular notion of memorization proposed by Zhang and Feldman (2020). The authors support this metric with theoretical analyses and empirical results on standard image classification benchmarks. Claims And Evidence: The empirical claims that CSL are a better proxy metric to the baselines are convincing. There are 3 main portions of the empirical results: 1. CSL is better correlated with memorization than the curvature-based method and is also more computationally efficient. 2. CSL also performs better at detecting mislabeled samples due to symmetric label noise than the baselines. 3. CSL detects duplicate samples with higher accuracy than the baselines. Methods And Evaluation Criteria: Experiment methods and evaluation criteria are reasonable to me. **Question**: There is no description of how CSL is used to detect mislabeled samples (Section 7.1) and duplicate samples (Section 7.2). I also cannot find it in Appendix B.1 or B.5. Is this simply thresholding the CSL for each sample? Theoretical Claims: I skimmed over the proofs but did not check them carefully. All the claims and theorems seem to be reasonable, but I do have some questions below. **The expectation over samples.** Starting from Theorem 5.3, there is an introduction to an expectation over training samples ($z_i$). Looking at the proof quickly I believe that it is an expectation over the entire training data distribution. This already makes me curious about how this theorem would be used because we often care about memorization of a particular sample, not an expectation over a distribution. However, on L246, Theorem 5.3 is being interpreted as applying to “a group of samples” like $U(T)$ which is a subset of all training data $S$. It is a bit unclear to me why this interpretation is valid, given the theorem holds in expectation for the entire training distribution. 
This is fairly important as it is used to motivate the experiments showing the relationship between CSL and “Mem Score” (Figure 6 and 8). I might be missing something here. **Assumption on L257**: “it can be assumed that $\kappa_T$ is constant across different subsets $U(T)$.” This assumption may need more explanation for why it is reasonable. My guess is that $\kappa_g$ depends on the Frobenius norm and the singular values of the sample batch, which are arguably irrelevant to the difficulty of learning by neural networks. Experimental Designs Or Analyses: Question: Is there a particular reason for choosing cosine similarity as a metric instead of Pearson's correlation coefficient or mean square error? Correlation coefficient seems the most natural to me. Supplementary Material: I checked some parts of the appendix that are pointed to from the main text. Relation To Broader Scientific Literature: I believe that related works have been mentioned throughout the paper. I personally would like to see more discussion of the prior literature in more detail. Essential References Not Discussed: Not that I know of. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: The paper is very well-written in my opinion. Even as a non-theorist, I feel like I understand the theorems and learn some useful tricks along the way. I feel like Lemma 5.1 can be generally useful, and it’d be a nice contribution too (if it’s not been shown before). The experiments are also convincing and a nice addition to the theoretical results. Questions For Authors: Question: I could not find how $\ell^{\backslash z_i}$ is defined. Is it the loss when the sample $z_i$ is removed from the training set? Code Of Conduct: Affirmed. Overall Recommendation: 4
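The cosine-vs-Pearson question raised above has a compact answer: Pearson's correlation coefficient is exactly the cosine similarity of mean-centered vectors, so the two agree once scores are centered. A minimal pure-Python sketch (toy numbers, not from the paper):

```python
from math import sqrt

def cosine(x, y):
    """Cosine similarity between two equal-length score vectors."""
    dot = sum(a * b for a, b in zip(x, y))
    return dot / (sqrt(sum(a * a for a in x)) * sqrt(sum(b * b for b in y)))

def pearson(x, y):
    """Pearson correlation = cosine similarity after mean-centering."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return cosine([a - mx for a in x], [b - my for b in y])

# Toy proxy scores vs. memorization scores (illustrative only).
proxy = [0.1, 0.4, 0.35, 0.9]
mem = [0.2, 0.5, 0.3, 0.8]
print(cosine(proxy, mem), pearson(proxy, mem))
```

With raw (uncentered, non-negative) scores the two metrics can diverge, which is why reporting both is informative.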
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback; we address their questions below. Tables and Figures are provided at **[Rebuttal Page clickable link](https://lively-dune-08c51d610.6.azurestaticapps.net/).** 1. There is no description of how CSL is used to detect mislabeled samples (Section 7.1) and duplicate samples (Section 7.2). I also cannot find it in Appendix B.1 or B.5. Is this simply thresholding the CSL for each sample? **A**: Thank you for bringing this to our attention; we will clarify this in the revision. We do indeed use thresholding on each sample’s CSL, giving us a binary detector whose performance is measured and reported using AUROC. 2. The expectation over samples. Starting from Theorem 5.3, there is an introduction to an expectation over training samples ($z_i$). Looking at the proof quickly I believe that it is an expectation over the entire training data distribution. This already makes me curious about how this theorem would be used because we often care about memorization of a particular sample, not an expectation over a distribution. However, on L246, Theorem 5.3 is being interpreted as applying to “a group of samples” like $U(T)$ which is a subset of all training data $S$. It is a bit unclear to me why this interpretation is valid, given the theorem holds in expectation for the entire training distribution. This is fairly important as it is used to motivate the experiments showing the relationship between CSL and “Mem Score” (Figure 6 and 8). I might be missing something here. **A**: We would like to clarify that we meant arbitrary **random** subsets. Consider selecting a subset U at random; then T can be set as $\max\{T_{z_i} : z_i \in U\}$. Since Equation (10) holds for such a distribution, we obtain the stated result. We will revise the text to clarify this point and explicitly indicate that the subsets under consideration are random. 3.
Assumption on L257: “it can be assumed that $\kappa_T$ is constant across different subsets $U(T)$.” This assumption may need more explanation for why it is reasonable. My guess is that $\kappa_g$ depends on the Frobenius norm and the singular values of the sample batch, which are arguably irrelevant to the difficulty of learning by neural networks. **A**: The reviewer's explanation is intuitive and correct. More formally, note that $\kappa_g$ is not dependent on the training mini-batch but is dependent on the choice of input made for computing the gradient norm w.r.t. the input, which is user-controlled. The input gradient bound for a given input is independent of the choice of the training mini-batch. Now choosing $\max_u \kappa_{g_u},~u \subset S$, i.e. the max value over the subsets, we can get a worst-case upper-bound constant. 4. Is there a particular reason for choosing cosine similarity as a metric instead of Pearson's correlation coefficient or mean square error? Correlation coefficient seems the most natural to me. **A**: We used cosine similarity because it was employed in prior methods and setups [3]. We have also added Pearson correlation in Table 1 (see the rebuttal page linked above). Any magnitude greater than 0.25 is considered a statistically significant correlation. Thus, the takeaways are similar between cosine similarity and Pearson correlation. Please also see our response to Q1 for reviewer ZrxQ. 5. I believe that related works have been mentioned throughout the paper. I personally would like to see more discussion of the prior literature in more detail. **A**: Thank you for the suggestion; other reviewers have also expressed similar concerns.
We will expand our Related Works section to discuss more thoroughly (1) stability-based memorization and its computational challenges; (2) learning dynamics; and (3) broader loss-trajectory works for membership inference attacks. This should clarify further how our approach (CSL) integrates with and advances prior research. 6. I feel like Lemma 5.1 can be generally useful, and it’d be a nice contribution too (if it’s not been shown before) **A**: Thank you for recognizing the broader utility of Lemma 5.1. To our knowledge, it has not been explicitly presented before. We will emphasize this contribution and place it more prominently in the revised manuscript. 7. I could not find how $\ell^{\setminus z_i}$ is defined. Is it the loss when the sample $z_i$ is removed from the training set? **A**: Yes, $\ell^{\setminus z_i}(z_i)$ is the per-sample loss for $z_i$ under the model trained on the leave-one-out dataset $S^{\setminus z_i}$. We will clarify that notation explicitly in the revised manuscript. [3] Garg et al. "Memorization Through the Lens of Curvature of Loss Function Around Samples." ICML 24. --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarifications and extra experiments. I did not notice that the authors originally used ResNet-18, which does not match the pre-computed results in Feldman & Zhang, but that seems to be fixed now thanks to the other reviewers. As a result, I maintain my original rating.
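The thresholding detector described in A1 above (score each sample by CSL, flag high scorers, report AUROC) can be sketched in a few lines of pure Python. The Mann-Whitney formulation and the toy scores below are illustrative assumptions, not the authors' code:

```python
def auroc(pos_scores, neg_scores):
    """Threshold-free AUROC: the probability that a positive (e.g. mislabeled)
    sample receives a higher score than a negative (clean) one, with ties
    counted as 0.5 (the Mann-Whitney U statistic divided by n_pos * n_neg)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(pos_scores) * len(neg_scores))

# Toy cumulative-loss scores: mislabeled samples tend to accumulate more loss.
csl_mislabeled = [9.1, 7.4, 8.2, 5.0]
csl_clean = [1.2, 2.8, 0.9, 5.5, 3.1]
print(auroc(csl_mislabeled, csl_clean))  # 0.95: 19 of 20 pairs correctly ranked
```

Sweeping a single threshold over the scores traces the ROC curve whose area this computes; no threshold choice is needed to report AUROC.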
Summary: This paper introduces Cumulative Sample Loss (CSL) as a novel proxy for measuring memorization in deep learning models. The authors formally adopt the memorization definition established by Feldman (2020) and Feldman & Zhang (2020) and develop a theoretical framework connecting CSL to both training time and memorization. Through the theoretical analysis, they prove these connections under specific assumptions about learning dynamics. The authors empirically validate their theoretical findings by demonstrating high correlation between CSL and established memorization scores across multiple datasets and model architectures. They further showcase the practical utility of CSL by applying it to detect mislabeled examples and duplicate samples in datasets, where it achieves state-of-the-art performance compared to existing methods while being computationally more efficient. ## Update after rebuttal Most of my concerns have been adequately addressed by the authors. 1. The authors provided calculations matching the setup from Feldman & Zhang 2. The authors clarified the generality of the proofs 3. The authors included missing citations and clarified the relation to the prior literature I have therefore increased my score 2->3. Some remaining concerns I have: 1. Given the training setup mismatch between some early experiments and Feldman & Zhang, I don't think it's entirely valid to use precomputed scores from the prior work - those need to be re-calculated for each training setup 2. Given existing literature on using loss trajectories to assess vulnerability, I'd expect a more extensive comparison to baselines Claims And Evidence: Discussed below Methods And Evaluation Criteria: The paper utilizes standard benchmarks (CIFAR-10/100, ImageNet) and common architectures following established protocols from prior work. Their evaluation employs appropriate metrics: cosine similarity for correlating CSL with memorization scores and AUROC for mislabeled/duplicate detection tasks.
A notable limitation is the limited set of baselines for correlation with the memorization scores. While CSL outperforms input loss curvature, a more extensive set of baselines is necessary - covering at least simple metrics like final loss or model confidence. Theoretical Claims: The core contribution of this paper lies in its theoretical framework connecting Cumulative Sample Loss (CSL) to memorization and learning time. However, several aspects of these theoretical claims are not entirely clear to me. Lemma 5.1, which states that the input gradient norm is bounded by the weight gradient norm, appears to claim applicability to "any neural network." However, its proof in C1 only covers a linear layer with particular weight matrix configurations and with no bias. The theoretical framework relies on several strong assumptions that may not hold in practical deep learning settings. These include bounded loss functions, smooth loss landscapes, and uniform stability guarantees (which essentially imply bounded memorization by definition). While these assumptions facilitate mathematical analysis, they potentially limit the applicability of the results to real-world deep learning scenarios where loss functions may be unbounded (e.g., cross-entropy without clipping) and loss landscapes are known to be highly non-convex and irregular. Additionally, the paper is unclear about how the theoretical results extend to minibatch training, which is standard practice. The proofs appear to consider individual sample updates, leaving questions about how gradient interactions within minibatches might affect the derived bounds. I also don't understand the authors' claim that certain theorems hold for any arbitrary subset of training data. This interpretation is not clearly justified, as the optimization steps in the proofs seem to rely on specific data distributions. The paper would benefit from clarifying how the expectation-based results generalize to arbitrary data subsets.
Experimental Designs Or Analyses: The paper's experimental methodology exhibits some significant limitations that undermine the strength of its empirical validation. A primary concern is the authors' use of memorization scores precomputed by Feldman & Zhang (2020) while employing different model architectures in their own experiments (ResNet50 vs ResNet18). Since memorization scores are highly specific to particular training configurations (including architecture, optimization settings, and data processing), this mismatch raises questions about the validity of the correlation analysis between CSL and the referenced memorization scores. Furthermore, the paper lacks comprehensive comparison against simpler baseline metrics. Examining Figure 2, it appears that final sample loss might achieve similar correlation with memorization as the proposed CSL metric. This observation suggests that accumulating loss throughout training may not provide substantial additional benefit. Without explicitly comparing against such straightforward alternatives, the paper fails to convincingly demonstrate the unique value of CSL over simpler approaches. Supplementary Material: I have reviewed selected proofs from the appendix (see "Theoretical claims") Relation To Broader Scientific Literature: This paper makes a valuable contribution to the theoretical understanding of memorization in deep learning by establishing connections between CSL, learning time, and memorization. Its most significant practical contribution is providing a computationally efficient proxy for memorization, which traditionally requires expensive leave-one-out training procedures. By demonstrating that CSL can be obtained with zero additional computational overhead during training, the authors offer a practical tool for analyzing memorization at scale. The work effectively builds upon prior research on input loss curvature while offering substantial computational advantages. 
Essential References Not Discussed: The paper overlooks two works with a very similar idea of using loss trajectory to assess memorization/vulnerability of the target point. The papers in question, however, frame it in the context of a Membership Inference Attack, but it's still highly relevant to the proposed research. [1] Liu, Yiyong, et al. "Membership inference attacks by exploiting loss trajectory." _Proceedings of the 2022 ACM SIGSAC Conference on Computer and Communications Security_. 2022. [2] Li, Hao, et al. "Seqmia: Sequential-metric based membership inference attack." _Proceedings of the 2024 on ACM SIGSAC Conference on Computer and Communications Security_. 2024. Loss trajectory (specifically mean loss) has also been previously used as memorization proxy in privacy auditing literature, e.g. [3] Nasr, Milad, et al. "Adversary instantiation: Lower bounds for differentially private machine learning." 2021 IEEE Symposium on security and privacy (SP). IEEE, 2021. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable feedback; we address the questions below. Tables and Figures are provided at **[Rebuttal Page click here](https://lively-dune-08c51d610.6.azurestaticapps.net/).** 1. A primary concern is the authors' use of memorization scores precomputed by Feldman & Zhang (2020) while employing different model architectures .... scores. **A**: We agree with the reviewer. Feldman & Zhang used Inception for CIFAR100 and ResNet50 for ImageNet. We have added ResNet50 (same as Feldman & Zhang) and Inception (for CIFAR100, same as Feldman & Zhang) in Table 1 (rebuttal page above) along with more baselines (we reran the Inception model due to a minor bug fix and present the updated scores). The choice of ResNet18 was dictated by its use in prior proxies such as curvature [3]. Additionally, we have included results using multiple architectures in Appendix B.2. These were already part of the appendix but not clearly referenced in the main text; we will fix this in the revised version. Our findings are in line with [3, 4, 5, 6], which have shown that memorization is often a dataset-level property, *given the model has sufficient capacity*. Also, please see our response to fZEA Q3. 2. Furthermore, ... Without explicitly comparing against such straightforward alternatives, the paper fails to convincingly demonstrate the unique value of CSL over simpler approaches. **A**: We have provided the final sample loss, loss sensitivity, and forgetting frequency as baselines (also suggested by reviewer fZEA) in Table 1 (rebuttal page above). As noted by reviewer woPj, we have also added *Pearson correlation* $\in (-1, 1)$; any magnitude > 0.25 is considered a statistically significant correlation. We clearly see that CSL generally has statistically significant results and outperforms other methods. Also, please see our response to reviewer fZEA Q3. 3. Lemma 5.1, which states .. weight matrix configurations and with no bias.
**A**: We would like to clarify that Lemma 5.1 indeed holds for general feed-forward neural networks. While the proof in Appendix C.1 begins with the case of a linear layer for simplicity, the result extends to arbitrary feed-forward networks (see the steps from Eq. 23 to Eq. 24; these hold for a general feed-forward NN). Regarding bias, this is a notational simplification. The inclusion of bias can be handled without loss of generality by augmenting the input vector with an additional identity row, a common technique to account for bias. As such, Lemma 5.1 holds for feed-forward networks with and without bias. We will revise the appendix to make this generalization more explicit. 4. The theoretical framework relies on .. assumptions that may not hold in practical deep learning settings .. irregular. **A**: First, our theoretical results do not assume convexity. In fact, our analysis applies to the widely used unbounded, non-convex cross-entropy loss, as noted in the paper, and additionally to any bounded non-convex loss. Second, the assumptions (e.g., smoothness, stability, bounded gradients) are well-supported in practice. Lipschitz continuity and smoothness of deep networks have been studied and validated in Virmaux & Scaman [1], and uniform stability of SGD has been established by Hardt et al. [2]. Please see our detailed discussion in lines (303-317). Therefore, we believe the theoretical framework remains broadly applicable to standard deep learning practice, especially for networks trained with SGD. 5. Additionally, the paper is unclear about ... minibatches might affect the derived bounds. **A**: Thank you for raising this important point. We understand the potential confusion and would like to clarify. Our proof relies on Lemma 5.1; since Lemma 5.1 does not depend on the choice of minibatch, our proof also extends to SGD with minibatch sizes larger than 1. 6. I also don't understand ... data subsets.
**A**: We would like to clarify that we meant arbitrary **random** subsets. Consider selecting a subset U at random; then T can be set as $\max\{T_{z_i} : z_i \in U\}$. Since Equation (10) holds for such a distribution, we obtain the stated result. We will revise the text to clarify this point and explicitly indicate that the subsets under consideration are random. 7. Regarding missing references w.r.t. MIA: **A**: Please see our response to reviewer WoPj Q5. [1] Virmaux & Scaman. Lipschitz regularity of deep neural networks: analysis and efficient estimation. NeurIPS 18. [2] Hardt et al. Train faster, generalize better: Stability of stochastic gradient descent. ICML 16. [3] Garg et al. "Memorization Through the Lens of Curvature of Loss Function Around Samples." ICML 24. [4] Garg & Roy. "Samples with low loss curvature improve data efficiency." CVPR 23. [5] Lukasik et al. What do larger image classifiers memorise? arXiv preprint arXiv:2310.05337, 2023. [6] Ravikumar et al. "Unveiling privacy, memorization, and input curvature links." ICML 24. --- Rebuttal Comment 1.1: Comment: I thank the authors for their thoughtful response and for addressing many of my and the other reviewers' comments, specifically for including the suggested baselines and aligning the training procedure with Feldman & Zhang (2020). I'm open to updating my score, but would like to clarify a few points. > (from WoPj rebuttal) This should clarify further how our approach (CSL) integrates with and advances prior research. Can you elaborate on how you are going to update this section? > see steps eq. 23 -> 24 this is holds for a general feed forward NN Can you please elaborate on that? I'm not sure I understand the basis on which you claim eq. 24-25 "hold for any deep neural network", when they are derived from eq. 22-23, specific to a particular two-layer architecture. --- Reply to Comment 1.1.1: Comment: Thank you for your response, we address your questions below. 1.
Can you elaborate on how you are going to update this section? **A:** We were previously limited by the rebuttal character limit; below is the updated related work section. We use \<sc\> to denote the same citations as in the current version to save space. Memorization in deep neural networks has gained attention, with recent works improving our understanding of its mechanisms and implications (\<sc\>). This research is driven by the need to understand generalization (\<sc\>, Kaplun et al., 2022; Bayat et al., 2025), identify mislabeled examples (\<sc\>), and detect out-of-distribution or rare sub-populations (\<sc\>, Pezeshki et al., 2023). Additionally, memorization impacts robustness (\<sc\>), unlearning (\<sc\>) and privacy (\<sc\>). Privacy is often tested using membership inference attacks (MIA), which test whether a particular sample was used in training (Shokri et al., 2017; Carlini et al., 2022; Ravikumar et al., 2024b). Learning dynamics have been used in this context to build stronger privacy tests. Liu et al. (2022) leverage loss trajectories and distilled models to improve MIA, while Li et al. (2024) propose SeqMIA, an attention-based recurrent architecture that exploits intermediate model states. Both approaches demonstrate how learning dynamics can reveal training-set membership, but at the cost of increased computational overhead. Additionally, Nasr et al. (2021) highlight how mean loss trajectories can reveal privacy leakage under differentially private training, establishing a lower bound on leakage identification. Learning Dynamics. Beyond privacy, learning dynamics have been studied as proxies for memorization. Mangalam et al. (2019) showed simpler examples are learned first, while mislabeled or difficult samples may be repeatedly forgotten or relearned (Toneva et al., 2019; Pleiss et al., 2020; Maini et al., 2022). Jagielski et al. (2023) explored how memorization fades when examples are removed. Carlini et al.
(2019a) combine multiple metrics to understand data retention, and Jiang et al. (2021) introduce the C-score to capture hard examples. Other works have proposed loss-sensitivity (Arpit et al., 2017) and sharpness of the loss function (Krueger et al., 2017) as memorization signals. More recently, Garg et al. (2024) used input loss curvature as a proxy for stability-based memorization (Feldman, 2020), supported by theoretical analysis from Ravikumar et al. (2024a). 2. I'm not sure I understand the basis on which you claim eq. 24-25 "hold for any deep neural network", when they are derived from eq. 22-23, specific to a particular two-layer architecture. A: We provide the proof below. Let's start by considering a general deep net. We denote: - The input as $X$, - The first-layer weight matrix as $W_1$, while $\tilde{W}$ denotes the weights of the entire network (including $W_1$), - And the intermediate representation (or pre-activation) of the first layer as $IR^{(1)} = W_1 X$. - Then such a network $f$ can be written as $f(X, \tilde{W}) = g(IR), \quad \text{where} \quad IR = W_1 X \quad \text{(Eq. 1R)}$ > **Condition (1):** The decomposition (Eq. 1R) holds for any network where the first layer has no skip connection. Condition (1) holds for architectures like VGG or ResNet, since they have an initial conv layer without a skip connection. This also holds for ViTs, where input images are projected to patches using an initial linear layer. For example, in a ResNet, the function $g$ corresponds to all layers after the first convolution. Now using the chain rule, the gradient of the loss $\ell$ with respect to the first-layer weights $W_1$ can be decomposed into two parts: 1. The gradient of the loss with respect to the intermediate representation $IR^{(1)}$. 2. The gradient of the intermediate representation $IR^{(1)}$ with respect to $W_1$. Thus, we write: $\nabla_{W_1} \ell = \nabla_{IR^{(1)}} \ell \cdot \nabla_{W_1} IR^{(1)} = \nabla_{IR^{(1)}} \ell \cdot X^T$ (Eq. 2R) Similarly, using the chain rule, we can write the same for the input gradient: $\nabla_{X} \ell = \nabla_{IR^{(1)}} \ell \cdot \nabla_{X} IR^{(1)} = W_1^T \cdot \nabla_{IR^{(1)}} \ell$ (Eq. 3R) Multiplying by $W_1^T$ on the left for (Eq. 2R) and $X^T$ on the right for (Eq. 3R) gives: $W_1^T \nabla_{W_1} \ell = \nabla_X \ell \, X^T.$ Thus we obtain Lemma 5.1 for any general neural net that satisfies *Condition (1)* above. Additionally note: $||W^T_1|| \leq ||\tilde{W}^T||$ and $||\nabla_{W_1} \ell|| \leq ||\nabla_{\tilde{W}} \ell ||$ since $\tilde{W}$ includes $W_1$. We have $||\nabla_{X} \ell|| \leq \cfrac{|| W_1^T || \cdot || (X)^+ ||}{s_P} \cdot || \nabla_{W_1} \ell ||$ Thus: $||\nabla_{X} \ell|| \leq \cfrac{|| \tilde{W}^T || \cdot || (X)^+ ||}{s_P} \cdot || \nabla_{\tilde{W}} \ell ||$ We will clarify the above condition and proof in the revised version. We are happy to clarify any further details.
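The chain-rule identity at the heart of this reply, $W_1^T \nabla_{W_1}\ell = \nabla_X \ell \, X^T$ (combining Eqs. 2R and 3R), is easy to verify numerically. The sketch below uses a tanh layer followed by a linear readout as a stand-in for $g$; the architecture and numbers are our own illustrative choices, picked only to satisfy Condition (1):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
X = rng.normal(size=n)           # input vector
W1 = rng.normal(size=(m, n))     # first-layer weights (no skip connection)
w2 = rng.normal(size=m)          # readout: loss = w2 . tanh(IR), our stand-in for g

IR = W1 @ X                       # first-layer pre-activation (Eq. 1R)
dl_dIR = w2 * (1.0 - np.tanh(IR) ** 2)   # d loss / d IR, computed by hand for this g

grad_W1 = np.outer(dl_dIR, X)     # Eq. 2R: grad_{W1} l = grad_{IR} l · X^T
grad_X = W1.T @ dl_dIR            # Eq. 3R: grad_X l = W1^T · grad_{IR} l

# The identity from the rebuttal: W1^T grad_{W1} l = grad_X l X^T
assert np.allclose(W1.T @ grad_W1, np.outer(grad_X, X))
print("identity holds")
```

The identity is exact by the algebra above; `allclose` only absorbs floating-point roundoff.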
FuseUNet: A Multi-Scale Feature Fusion Method for U-like Networks
Accept (poster)
Summary: The paper proposes FuseUNet, a multi-scale feature fusion method for U-Net-like networks that enhances skip connection mechanisms by reinterpreting feature fusion as solving an initial value problem (IVP). It employs a linear multistep numerical method with neural memory ODEs (nmODEs) and a predictor-corrector framework, treating skip connections as discrete nodes in an IVP to facilitate effective multi-scale information interaction beyond simple concatenation or addition. Experiments on multiple medical segmentation datasets (ACDC, KiTS2023, MSD brain tumor, ISIC2017, ISIC2018) using CNN, Transformer, and Mamba backbones demonstrate its generalizability, achieving significant reductions in parameters and computational costs while maintaining or surpassing state-of-the-art performance. Ablation studies further analyze the impact of discretization order and memory flow channel numbers, validating the approach’s effectiveness. Claims And Evidence: 1. The paper claims that FuseUNet significantly improves multi-scale feature interaction; however, while experiments show performance comparable or marginally superior to baselines, explicit evidence supporting this claim is lacking. The study primarily reports overall Dice metrics without directly demonstrating that enhanced cross-scale feature interaction is the key factor driving these improvements. Detailed quantitative metrics on multi-scale interaction effectiveness would strengthen the argument. 2. Although the experiments are conducted on diverse architectures, the claim of generalizability across "any U-like network" remains insufficiently validated. The study mainly relies on three specific backbones, making the theoretical generalization claim less convincing. Additional empirical support across a broader range of U-Net variants is needed to substantiate this assertion. 3. The paper highlights a reduction in parameter counts but presents unclear and inconsistent GFLOPs improvements. 
Notably, for 2D segmentation tasks, GFLOPs slightly increased rather than decreased, contradicting the claimed computational efficiency gains. This discrepancy necessitates a more nuanced explanation or expanded experimental analyses to clarify the impact on computational costs. Methods And Evaluation Criteria: The paper clearly states and conceptually justifies its core methodological innovation: leveraging linear multistep numerical methods, specifically Adams-Bashforth and Adams-Moulton methods, combined with neural memory ordinary differential equations (nmODEs) for multi-scale feature fusion in skip connections. This approach directly addresses the limitations of traditional skip connections in U-Net-based models, making the methodological choice both relevant and theoretically sound for enhancing multi-scale feature interaction. Theoretical Claims: The authors conceptualize skip connections as discrete nodes of an initial value problem (IVP), where multi-scale features represent discrete solutions at different timesteps. This analogy is theoretically sound and aligns with established frameworks in neural ordinary differential equations (NODEs) and linear multistep methods. The theoretical background (Section 3.1) on Adams-Bashforth and Adams-Moulton methods, as well as predictor-corrector techniques, is accurate and consistent with classical numerical analysis literature. Additionally, the formulation of neural memory ODEs (nmODEs) in equations (3) and (4) (Section 3.3) aligns with prior theoretical works, correctly capturing the concept of treating neural network decoding steps as discrete ODE solutions. The derivations are presented clearly, with no significant mathematical errors or inconsistencies, demonstrating a careful and rigorous application of established mathematical methods. 
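For readers unfamiliar with the numerical background the review credits to Section 3.1: a two-step Adams-Bashforth predictor followed by an Adams-Moulton (trapezoidal) corrector, applied to the scalar IVP $y' = -y$, $y(0) = 1$, looks as follows. This is a generic textbook sketch, not the paper's implementation:

```python
from math import exp

def f(t, y):          # test IVP: y' = -y, exact solution y(t) = exp(-t)
    return -y

h, T = 0.01, 1.0
t, y = 0.0, 1.0
# Multistep methods need a second starting value: bootstrap with one Heun (RK2) step.
k1 = f(t, y)
k2 = f(t + h, y + h * k1)
y_prev_f, y = k1, y + h * (k1 + k2) / 2.0
t += h

while t < T - 1e-12:
    fn = f(t, y)
    # Predictor (2-step Adams-Bashforth): y* = y_n + h(3 f_n - f_{n-1}) / 2
    y_pred = y + h * (3.0 * fn - y_prev_f) / 2.0
    # Corrector (Adams-Moulton / trapezoidal) evaluated at the predicted point
    y = y + h * (fn + f(t + h, y_pred)) / 2.0
    y_prev_f = fn
    t += h

print(abs(y - exp(-1.0)))  # global error is O(h^2), small at h = 0.01
```

The predictor supplies the slope estimate that makes the implicit corrector explicit to evaluate; this PECE structure is what the paper maps onto skip-connection fusion.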
Experimental Designs Or Analyses: The chosen benchmarks and backbone networks—CNN-based nn-UNet, Transformer-based UNETR, and Mamba-based UltraLight VM-UNet—are appropriate and widely recognized, providing a solid basis to verify the generality of the proposed method. The authors utilize well-established, publicly available datasets (ACDC, KiTS2023, MSD brain tumor, ISIC2017, ISIC2018), ensuring reproducibility and comparability of results. Additionally, the evaluation protocol employs standard medical image segmentation metrics, including Dice coefficient, sensitivity, specificity, and accuracy, along with five-fold cross-validation, ensuring a robust and reliable assessment. Supplementary Material: Yes, I carefully reviewed the supplementary material provided. Specifically, I reviewed: • Appendix A: Detailed derivations and formulations of updating memory flow using the Adams–Moulton and Adams–Bashforth methods (Theorems A.1–A.7, Equations 5–19). • Appendix B: Experimental hyperparameters, particularly the learning rate settings used across different datasets (Table 6). • Appendix C: Detailed fold-level Dice performance results for 3D segmentation tasks (ACDC, KiTS23, MSD datasets), provided in Table 7. • Appendix D: Additional visualization examples for segmentation performance on ACDC, KiTS, MSD, ISIC2017, and ISIC2018 datasets (Figures 6–9). Relation To Broader Scientific Literature: The paper clearly and accurately positions its contributions within the broader scientific literature, addressing recognized limitations in U-Net-based segmentation architectures (Ronneberger et al., 2015) by enhancing skip connections for improved cross-scale interaction. 
While existing variants like UNet++ (Zhou et al., 2020) and UNet3+ (Huang et al., 2020) incorporate dense connections and full-scale feature interactions, they primarily rely on simple concatenation or summation, which the authors argue corresponds mathematically to lower-order explicit Euler methods, limiting their information integration capabilities. To overcome this, the authors draw from classical numerical methods—specifically, linear multistep methods such as Adams-Bashforth and Adams-Moulton—and recent advances in neural ordinary differential equations (NODEs, Chen et al., 2018; Yi, 2023). By framing the U-Net decoding process as solving an initial value problem (IVP), the proposed FuseUNet bridges deep learning architecture design with established numerical analysis, enabling higher-order, implicit multi-scale interactions for improved information fusion. This theoretical grounding uniquely situates the work at the intersection of deep learning and classical numerical computation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. Marginal Practical Performance Gains: Although the proposed FuseUNet reduces the number of parameters and slightly improves performance in some cases, the practical segmentation improvements (Dice scores) are relatively modest (typically under 1%), especially for well-established benchmarks like ACDC and ISIC datasets. This raises questions about the real-world significance of the proposed method compared to existing models. 2. Limited Justification for Hyperparameter Selection: The authors justify the learning rate adjustments (e.g., increasing learning rates by factors of 2 or 3 due to parameter reduction) but do not provide empirical evidence or systematic hyperparameter tuning results. The lack of thorough justification weakens confidence in whether reported performance gains are optimal or incidental. 3. 
Insufficient Statistical Validation: The paper does not include statistical significance tests (such as paired t-tests or confidence intervals), leaving uncertainty about whether observed minor differences represent genuine improvements or are within experimental variance. Clarity and Visualization Issues: Certain figures (e.g., Fig. 5 with normalized performance) could be improved for clearer interpretability. Non-normalized, explicit performance metrics would better reflect actual impacts and facilitate easier interpretation by readers. 4. Missing Important Baselines: Although the authors benchmark their method against CNN-based, Transformer-based, and Mamba-based architectures, direct comparisons against closely related skip-connection enhancement approaches, particularly UNet++ and UNet3+, are not sufficiently elaborated upon. These models explicitly address multi-scale fusion, and their detailed comparative evaluation would strengthen the justification of FuseUNet’s claimed advantages. 5. Limited Computational Analysis: The reported GFLOPs increase for the lightweight UltraLight VM-UNet backbone is concerning. Although the authors attribute this to interpolation, no comprehensive computational analysis (e.g., inference time or memory consumption) is provided to clearly demonstrate practical efficiency, limiting a comprehensive assessment of the claimed computational benefits. 6. Weak Demonstration of Generalizability: Despite claims of broad applicability, experiments are limited to standard benchmarks. Results across more diverse medical imaging modalities or different medical scenarios are not presented, limiting the generalizability claims. Other Comments Or Suggestions: N/A Questions For Authors: 1. Statistical Significance of Results: Can the authors provide statistical tests (e.g., paired t-tests or confidence intervals) to verify the statistical significance of the performance improvements reported in Tables 3 and 4? 
Such tests would clarify whether observed improvements over baseline methods are meaningful or could arise from random fluctuations. 2. Hyperparameter Justification: Could authors elaborate on how the learning rate and other hyperparameters were selected, ideally presenting ablation studies or grid searches? Clarifying these choices would significantly enhance confidence in the reported performance gains. 3. Direct Comparison to Alternative Multi-scale Methods: Why were direct comparisons to UNet++ and UNet3+—methods explicitly designed for multi-scale feature fusion—not thoroughly presented or discussed? Can the authors include detailed comparative results to clearly demonstrate FuseUNet’s superiority? 4. Computational Efficiency and Practical Deployment: Given that FuseUNet shows a minor increase in GFLOPs in lightweight settings, could the authors further quantify inference speed, GPU utilization, and memory consumption explicitly? Detailed metrics would clarify whether FuseUNet is genuinely advantageous for practical deployments. 5. Generalizability and Broader Applicability: Have the authors tested or considered their method on modalities beyond CT and MRI segmentation tasks, such as ultrasound or pathology images? Providing additional data or experiments would greatly enhance the strength of claims regarding generalizability. 6. Theoretical Novelty versus Practical Effectiveness: The paper strongly emphasizes theoretical novelty by relating U-Net architectures to numerical ODE methods. Considering the modest practical improvements observed, could the authors clarify whether their primary intent was theoretical innovation (interpretable architecture design) or practical performance improvements? Ethical Review Flag: Flag this paper for an ethics review. Code Of Conduct: Affirmed. Overall Recommendation: 3
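As context for the statistical-validation request in Question 1 above, a paired t-test on per-image Dice scores is simple to compute. The sketch below is illustrative only: the function name and the sample scores are made up, not taken from the paper:

```python
import math
import statistics

def paired_t(scores_a, scores_b):
    """Paired t-statistic for per-image scores from two models.

    Compare |t| against the critical value t_{0.975, n-1} (or report the
    p-value from the t-distribution) to judge significance.
    """
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = statistics.mean(diffs)
    se = statistics.stdev(diffs) / math.sqrt(n)  # standard error of the mean
    return mean, se, mean / se

# Hypothetical per-image Dice scores for two models (not real data).
mean, se, t = paired_t([0.90, 0.92, 0.88, 0.91], [0.89, 0.90, 0.89, 0.90])
```

A 95% confidence interval then follows as mean ± t_{0.975, n-1} · se, matching the columns the authors later report in their rebuttal table.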
Rebuttal 1: Rebuttal: We sincerely appreciate your recognition of our work and your detailed and constructive comments. Below is a summary of our responses. --- ### **Claims and Evidence** **1. Multi-scale Feature Interaction** In the first part of the ablation experiment, we compare the performance of different fused scales, highlighting the effectiveness of multi-scale feature fusion. Specific quantitative metrics have yet to be clearly identified in existing papers. We will explore them in future research. **2. Generalizability** We acknowledge that evaluating only three architectures is insufficient to justify claims of universal generalizability. We will revise the paper to say "a variety of U-like networks" and clarify the current scope of validation. **3. GFLOPs** Here is a theoretical analysis to explain the computational cost trade-offs, taking an L-stage convolutional architecture as an example. The reduction in parameters and computation is tied to the number of channels in each stage. FLOPs increase mainly due to interpolation in skip connections, especially when the original network has fewer channels. |FuseUNet-Ori| decoder|skip connection| |-|-|-| |Params|$\sum_{i=L}^{1}4N^2\cdot1^2-\sum_{i=L}^{1}3C_i^2\cdot k^2$|$\sum_{i=L}^{1}2N\cdot C_i\cdot1^2-0$| |FLOPs|$2L\cdot 4N^2\cdot1^2\cdot H_o\cdot W_o-\sum_{i=L}^{1}6C_i^2\cdot k^2\cdot H_i\cdot W_i$|$2\sum_{i=L}^{1}2N\cdot C_i\cdot1^2\cdot H_o\cdot W_o+7H_o\cdot W_o \cdot 2N-0$| N and the superscript 'o' denote the number of target classes and the original network, respectively. --- ### **Other Strengths and Weaknesses** **1. Real-world Significance** Despite modest performance gains on some benchmarks, FuseUNet offers significant computational savings (up to 50% for standard backbones and 30% for lightweight ones), making it advantageous for compute-constrained applications. Besides, FuseUNet is a general framework, not limited to a fixed model, and can be applied to both existing and future models. 
We also found that some baselines underperformed in our reimplementation, suggesting FuseUNet's benefits may be understated. We view theoretical innovation and practical performance as complementary. Our method introduces a new perspective on feature fusion using numerical ODEs. In future work, we plan to extend ODE theory to the encoder for further improvements. **2. Hyperparameter Justification** We adjusted only the learning rate, keeping other hyperparameters as default. We did not conduct a dedicated ablation study but validated it briefly in the early stage. Hyperparameter tuning is not the focus of this study, and most related works only report hyperparameters without justification; for example, https://arxiv.org/pdf/2404.09556 simply states "decreasing the learning rate until convergence" without specifying a value. **3. Statistical Significance** We added statistical validation as suggested. For Table 4, as metrics were computed from a global confusion matrix over the full test set, per-image scores were unavailable, preventing additional statistical validation. The data shows that FuseUNet performs similarly to the Backbone, which aligns with our claim. |FuseUNet-Backbone|Mean of Differences|Standard Deviation of Differences|Standard Error of the Mean|95% Confidence Interval|t-statistic|Degrees of Freedom|p-value| |-|-|-|-|-|-|-|-| |ACDC|0.03|3.39|0.14|(-0.24,0.31)|0.25|603|0.80| |KiTS|0.15|10.18|0.27|(-0.38,0.67)|0.55|1470|0.58| Fig. 5's performance metrics are shown below. |order|KiTS|ACDC|MSD|ISIC2017|ISIC2018| |-|-|-|-|-|-| |1|84.8|91.75|71.22|89.25|88.63| |2|85.2|91.84|71.49|89.65|89.06| |3|85.7|91.85|71.56|90.15|89.35| |4|86.7|92.05|71.75|90.69|89.78| **4. Missing Baselines** We added the suggested baselines where possible, though full evaluation across all datasets may not be feasible within the rebuttal period. **5. 
Computational Analysis (Inference & Memory)** We provided some theoretical analysis in our previous response, and here we offer a detailed comparison of the computational cost data. |VRAM (G)/epoch (s)|nn-UNet|UNETR|UltraLight VM-UNet| |-|-|-|-| |Backbone|7.33/144|9.4/115|0.87/21| |FuseUNet|5.91/128|7.8/105|1.27/22| **6. Limited Modality Diversity** Our work currently demonstrates the generalizability of the proposed method to some extent through segmentation tasks on three data types. Due to time and compute constraints, we didn’t include more modalities but plan to do so in future work to further validate our model’s performance across diverse scenarios. --- ### **Questions for Authors** **1–5.** These points are addressed in Other Strengths and Weaknesses - 3, 2, 4, 5, and 6, respectively. **6. Theoretical vs. Practical Focus** Our primary contribution is theoretical. By linking U-like networks with numerical ODE methods, we propose a mathematically grounded framework for multi-scale fusion. We hope this new perspective will aid both model design and interpretability, serving as a foundation for future architectural research. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for the detailed response in the rebuttal, which addressed most of my concerns. However, the overall presentation, including formatting and figure aesthetics, falls short of the standards typically expected at ICML. While the strong experimental results support the core claims of the paper, the subpar presentation leaves room for concern. I am currently leaning towards acceptance, but I acknowledge that a rejection could also be justified. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for the thoughtful evaluation and for acknowledging the strength of our experimental results and core contributions. We also appreciate your honest feedback regarding the overall presentation quality, including formatting and figure aesthetics. 
We fully understand the importance of clear and polished presentation for a high-standard venue like ICML. In response to your comments, we will carefully refine the paragraph spacing, line breaks, and layout structure throughout the paper to eliminate excessive blank areas, inconsistent indentation, and misaligned elements. For tables and figures, we have consulted numerous accepted ICML papers in recent years and revised our presentation style to better align with community standards. Specifically: - **Tables**: We optimized the use of borders and adjusted text alignment; added more informative content to sparsely populated tables to reduce excessive surrounding whitespace; and shaded the backbone and FuseUNet rows to better distinguish them from other entries. - **Figures**: We adjusted the spacing in segmentation visualizations to prevent boundary confusion; and fixed a rendering issue where thin white lines appeared during image scaling. These updates are aimed at improving both clarity and aesthetic quality. In addition, during this revision, we noticed that one of the equations in our previous rebuttal contained a minor typographical error. Specifically, the parameter difference between FuseUNet and convolutional backbone in the decoder was incorrectly denoted as **$\sum_{i=L}^{1}4N^2\cdot1^2-\sum_{i=L}^{1}3C_i^2\cdot k^2$**, whereas it should have been **$L \cdot 4N^2\cdot1^2-\sum_{i=L}^{1}3C_i^2\cdot k^2$**. We appreciate your careful review, which motivated us to re-examine both presentation and content more thoroughly. We will incorporate all these enhancements in a later version of the paper. Thank you again for your constructive suggestions, which are invaluable in helping us improve the presentation of our work.
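To make the corrected parameter-difference expression easy to check, it can be evaluated directly. The sketch below is illustrative: the channel configuration and class count are hypothetical, chosen only to exercise the formula:

```python
def param_difference(L, N, channels, k=3):
    """Evaluate L*4*N^2*1^2 - sum_i 3*C_i^2*k^2, the corrected decoder
    parameter difference between FuseUNet (L pointwise convolutions) and a
    convolutional backbone; a negative value means FuseUNet's decoder is
    smaller."""
    fuse_decoder = L * 4 * N ** 2 * 1 ** 2
    backbone_decoder = sum(3 * c ** 2 * k ** 2 for c in channels)
    return fuse_decoder - backbone_decoder

# Hypothetical 5-stage backbone, N = 2 target classes, 3x3 kernels.
diff = param_difference(L=5, N=2, channels=[16, 32, 64, 128, 256])
```

With any realistic channel widths the difference is strongly negative, consistent with the rebuttal's claim of decoder parameter savings.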
Summary: This paper introduces a new multi-scale feature fusion method for skip connections in the U-Net framework, called FuseUNet, which aims to address the lack of multi-scale information interaction. Specifically, it defines the differential relationship between the skip connections and the corresponding stages. Furthermore, FuseUNet introduces nmODEs for the optimization of U-Net networks, which divide neurons into two parts: learning neurons and memory neurons. Most importantly, the approach proposed by the authors is applicable to any U-like network. Comprehensive experiments are conducted on three datasets to demonstrate its effectiveness. Claims And Evidence: The claims in the submission are supported by clear evidence. Methods And Evaluation Criteria: Yes, the proposed methods make sense for the problem. Theoretical Claims: Yes, the theoretical claims in this manuscript are all correct. Experimental Designs Or Analyses: The experimental results demonstrate performance on both 2D and 3D datasets, but the comparison methods are limited. There is no comparison with other similar methods based on nmODE, and the ablation study is somewhat insufficient. Supplementary Material: Yes, the reviewer has checked all parts of the supplementary material. Relation To Broader Scientific Literature: Building on prior related efforts [1, 2], this paper combines them to reduce network parameters while maintaining network performance. [1] Gragg W B, Stetter H J. Generalized multistep predictor-corrector methods[J]. Journal of the ACM (JACM), 1964, 11(2): 188-209. [2] Yi Z. nmODE: neural memory ordinary differential equation[J]. Artificial Intelligence Review, 2023, 56(12): 14403-14438. Essential References Not Discussed: No. 
Other Strengths And Weaknesses: ###Strengths### S1: The related work on Linear Multistep Methods, Predictor-Corrector Methods, and nmODEs is introduced relatively clearly, and this paper applies these methods to optimize U-Net in a reasonable way. S2: The experimental results are clear and provide abundant comparative results. S3: The paper presents a systematic improvement to the U-Net architecture, enabling network training with fewer parameters and achieving faster speed. ###Weaknesses### W1: More detailed ablation experiments are needed for the three proposed modules in this paper: Predictor-Corrector, Calculator, and the nmODEs block. W2: The experiments did not include a comparison with nmODE-Unet[*]. Both approaches improve U-Net using nmODE, and both papers focus on medical image segmentation. W3: The modules (Predictor-Corrector, Calculator, and nmODEs) need to be described in more detail in the paper, including how they are implemented. A comprehensive workflow description is missing and needs further clarification. [*] Wang S, Chen Y, Yi Z. nmODE-Unet: A novel network for semantic segmentation of medical images[J]. Applied Sciences, 2024, 14(1): 411 Other Comments Or Suggestions: The experimental section needs improvement by including more detailed ablation studies on the Predictor-Corrector, Calculator, and nmODEs block, as well as comparisons with similar methods based on nmODE or Predictor-Corrector. Questions For Authors: The reviewer has no additional questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for the constructive and encouraging comments. We appreciate your recognition of our use of Linear Multistep Methods, the clarity of the experiments, and the improvements to U-like networks. Your feedback has helped us refine the paper. Below we provide detailed responses to your comments. ### **Other Comments or Suggestions** **W1. More detailed ablation experiments are needed for the Predictor-Corrector, Calculator, and nmODEs block** The proposed Predictor-Corrector and Calculator modules are implementations of different orders of linear multistep methods. Thus, the ablation study on multistep orders inherently serves as an ablation of these modules. Since they are tightly coupled with established numerical methods, modifying their mathematical formulation would be inappropriate. Apart from varying the step order to explore the trade-off between information depth and complexity, additional experimental variations are difficult to justify. The second part of our ablation focuses on the memory space within the nmODEs block. We acknowledge the value of more fine-grained ablations as you suggested. Due to time and computational constraints, we were only able to supplement further experiments on 2D datasets in this round. We plan to explore more comprehensive ablation studies in future work. | F | ODE | Dice17 | SE17 | SP17 | ACC17 | Dice18 | SE18 | SP18 | ACC18 | |------|---------|--------|-------|-------|--------|--------|-------|-------|--------| | ReLU | nmODEs | 89.55 | 86.53 | 98.49 | 96.30 | 88.36 | 88.65 | 96.70 | 94.96 | | Conv | nmODEs | 89.14 | 87.04 | 97.73 | 95.43 | 88.52 | 86.16 | 97.66 | 95.18 | | PVM | simple | 89.18 | 89.59 | 97.46 | 96.01 | 89.25 | 89.50 | 96.96 | 95.35 | | PVM | nmODEs | 90.69 | 89.59 | 98.20 | 96.62 | 89.77 | 89.10 | 97.41 | 95.62 | Table: In the ablation study of the nmODEs block, "ReLU" refers to the F function used in https://www.ijcai.org/proceedings/2024/0091.pdf. 
We also replaced the PVM module with a convolutional block to show the importance of preserving core components. The "simple" variant removes the implicit mapping in the differential equation based on nmODEs. Results suggest that modifying elements of the nmODEs block does not improve performance. --- **W2. The experiments did not include a comparison with nmODE-Unet** We did consider comparing with nmODE-Unet. However, the relevant paper does not provide open-source code or sufficient details about the internal design of the nmODE block, which made reproduction infeasible. For related works that are open-source, their methods are essentially equivalent to our first- and second-order approaches, with the main difference lying in the choice of the function $f$. Based on your suggestion, we have conducted additional experiments on 2D datasets to provide further comparisons and have reported the results accordingly. --- **W3. The modules (Predictor-Corrector, Calculator, nmODEs) need clearer descriptions, including implementation and workflow** While we have designed the Predictor-Corrector, Calculator, and nmODEs modules in detail, the limited space in the main text prevented us from fully presenting their implementation. To address this, we have included step-by-step derivations in the appendix. However, we acknowledge that the textual explanation may lack clarity in linking the mathematical formulation to the module structure and overall workflow. To improve this, we provide a table to illustrate the workflow and highlight the correspondence between equations and components. Additionally, we have released the [source code](https://anonymous.4open.science/r/FuseUNet-3BA3/README.md) to help readers better understand the implementation. 
| source | workflow | result | |--------|----------|---------| | $X_1,Y_1$ |P: $Y_2 = Y_1 + \delta \cdot F_1$|| ||C: $Y_2= Y_1 + \frac{\delta}{2}\cdot (F_1 + F_2)$| $Y_2$ | |$X_{1:2},Y_{1:2}$|P: ${Y_3} = Y_2 + \frac{\delta}{2}\cdot (3F_2 - F_1)$|| || C: $Y_3 = Y_2 + \frac{\delta}{12}\cdot (5F_3 + 8F_2 - F_1)$|$Y_3$| |$X_{1:3},Y_{1:3}$|P: $Y_4 = Y_3 + \frac{\delta}{12}\cdot (23F_3 - 16F_2 + 5F_1)$|| || C: $Y_4 = Y_3 + \frac{\delta}{24}\cdot (9F_4 + 19F_3 - 5F_2 + F_1)$|$Y_4$| |$X_{1:4},Y_{1:4}$|P: $Y_5= Y_4 + \frac{\delta}{24}\cdot (55F_4 - 59F_3 + 37F_2 - 9F_1)$|| || C: $Y_5 = Y_4 + \frac{\delta}{24}\cdot (9F_5 + 19F_4 - 5F_3 + F_2)$|$Y_5$| |$X_{2:5},Y_{2:5}$|Cal: $Y_6= Y_5 + \frac{\delta}{24}\cdot (55F_5 - 59F_4 + 37F_3 - 9F_2)$|$Y_6$| The process in the table uses a 6-stage U-shaped network as an example. In the table, P, C, Cal, and F stand for Predictor, Corrector, Calculator, and nmODEs block, respectively, with $F_i = -Y_i + f(Y_i+g(X_i))$. --- ### **Other Comments or Suggestions** We have addressed this suggestion in detail across the responses above. --- Rebuttal Comment 1.1: Comment: Thanks to the author for the detailed response in rebuttal, which has addressed most of my concerns. Considering that the substantial experimental results in this paper are sufficient to support their claim, I would like to recommend accepting this paper. --- Reply to Comment 1.1.1: Comment: We sincerely thank you for the recognition and thoughtful reassessment of our work. We are very pleased that our rebuttal helped clarify the key points and address your concerns. Your constructive feedback throughout the review process has been invaluable in improving the quality of our paper. We deeply appreciate your final recommendation and your support of our work.
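For readers who prefer code to tables, the predictor-corrector workflow above can be traced numerically. This is a hedged sketch, not the released implementation: `f` and `g` are placeholders (tanh and identity) standing in for the learned mappings, and plain NumPy arrays stand in for feature maps:

```python
import numpy as np

def F(y, x, f=np.tanh, g=lambda v: v):
    # nmODE-style derivative from the table: F_i = -Y_i + f(Y_i + g(X_i))
    return -y + f(y + g(x))

def fuse(xs, delta=0.1):
    """Run the P/C/Cal steps of the table on skip features xs = [X_1..X_5]."""
    y = [np.zeros_like(xs[0])]     # Y_1: initial memory state (assumed zero)
    Fs = [F(y[0], xs[0])]          # F_1
    # Stage 2: Euler predictor, trapezoidal corrector
    yp = y[0] + delta * Fs[0]
    y.append(y[0] + delta / 2 * (Fs[0] + F(yp, xs[1])))
    Fs.append(F(y[1], xs[1]))
    # Stage 3: AB2 predictor, AM corrector
    yp = y[1] + delta / 2 * (3 * Fs[1] - Fs[0])
    y.append(y[1] + delta / 12 * (5 * F(yp, xs[2]) + 8 * Fs[1] - Fs[0]))
    Fs.append(F(y[2], xs[2]))
    # Stage 4: AB3 predictor, AM corrector
    yp = y[2] + delta / 12 * (23 * Fs[2] - 16 * Fs[1] + 5 * Fs[0])
    y.append(y[2] + delta / 24 * (9 * F(yp, xs[3]) + 19 * Fs[2]
                                  - 5 * Fs[1] + Fs[0]))
    Fs.append(F(y[3], xs[3]))
    # Stage 5: AB4 predictor, AM corrector (coefficients as in the table)
    yp = y[3] + delta / 24 * (55 * Fs[3] - 59 * Fs[2] + 37 * Fs[1] - 9 * Fs[0])
    y.append(y[3] + delta / 24 * (9 * F(yp, xs[4]) + 19 * Fs[3]
                                  - 5 * Fs[2] + Fs[1]))
    Fs.append(F(y[4], xs[4]))
    # Stage 6: explicit AB4 "Calculator" step, no corrector
    return y[4] + delta / 24 * (55 * Fs[4] - 59 * Fs[3]
                                + 37 * Fs[2] - 9 * Fs[1])
```

Each corrector re-evaluates F at the predicted state before committing the next Y, which is exactly the P → C pattern in the table.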
Summary: A new variant of U-Net involving a new way to fuse features across different scales. It achieves half the compute of nnUNet while matching its performance. Claims And Evidence: Not supported. See below. Methods And Evaluation Criteria: There are multiple evaluation issues for which I have provided a weak reject even though the core technical foundations seem to be strong. 1. nnUNet is missing in the evaluation of MSD and ISIC(s) with no real reason as to why. 2. There's no real reason provided as to why these datasets are chosen. Very recent (and well defined) literature shows why this is important: https://arxiv.org/pdf/2404.09556 & https://arxiv.org/pdf/2411.03670 3. Relating to 2, reporting just one metric is not enough. 4. If performance in terms of compute is the main innovation (because the scores across models do not seem statistically significant), there should be much more in-depth reporting, especially in terms of VRAM usage and run times. These should not be too extravagant to compute in terms of add-on experimentation time. 5. Why was STU-L not considered? It achieves comparable performance for almost the same param size. 6. Overall, it seems to me that UltraLight VM-UNet can achieve everything the proposed model can achieve in terms of performance evaluation and compute costs. Theoretical Claims: Did not verify. Experimental Designs Or Analyses: Not sound. Explained above why. Supplementary Material: Did not review apart from fold scores. Relation To Broader Scientific Literature: Important. With the designed steps, it can achieve nnUNet's performance for essentially half the compute. However, at the nnUNet level, it can already run on most commodity hardware, and the decrease in compute is not enough to deploy on the edge. This is an important point to be noted in terms of real-world use cases. Essential References Not Discussed: STU-Net variant S, in terms of adding it to the ablation. 
Other Strengths And Weaknesses: The primary (and seemingly only) weakness is in the evaluation setting. Other Comments Or Suggestions: 1. It is unclear how they achieved the scores for each class. Did they run all the models? If so, it is quite strange that they have matched the exact average values reported here: https://arxiv.org/pdf/2404.09556 where no class-wise scores have been provided. Unless these have been picked from the specific papers (but this is not mentioned anywhere). 2. A personal opinion would be to test only nnUNet and the proposed model on a very strict set of computational ablations to highlight the effectiveness of the method rather than discuss performance scores across various models. I feel those are not even required to highlight the novelty in the work. It also seems that the authors have the necessary compute to do this. Questions For Authors: None. Ethical Review Concerns: None. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: We sincerely thank you for the valuable comments and suggestions, which have significantly helped us improve the clarity and depth of the paper. Below, we provide point-by-point responses to each concern. --- ### **Methods and Evaluation Criteria** **1. Dataset selection and missing benchmarks** We selected datasets based on those used in the original backbone papers to ensure fair comparison and avoid inconsistencies across literature. The MSD dataset was not mentioned in [nnUNet v24](https://arxiv.org/pdf/2404.09556), but we added experiments comparing data from [nnUNet v18](https://arxiv.org/pdf/1809.10486) following your suggestion. For ISIC2017 and ISIC2018, no original data is available, and nnUNet has not been evaluated on them in any relevant paper we reviewed; therefore, we did not include them. |Dataset|Model|Dice1|Dice2|Dice3|Dice avg| |-|-|-|-|-|-| |MSD|nn-UNet|80.71|62.22|79.07|74.00| ||FuseUNet|80.82|61.32|80.15|74.10| **2. Justification for dataset choices** Following the principle mentioned above, we balanced data selection with practical compute constraints. For nnUNet, we used two of the three datasets recommended. BTCV, used in UNETR, was reported as unsuitable for comparison in [1], so we retained MSD. In UltraLight VM-UNet, PH2 was excluded due to its small size (200 images), while ISIC2017 and ISIC2018 were retained. **3. Reporting only Dice** Since our primary reference [nnUNet v24] reports only the Dice coefficient, we followed the same format for consistency and fairness. However, we are pleased to provide more detailed metrics of our experiments in a later version. **4. Compute-related metrics** We provide VRAM usage and training time comparisons on an RTX 4090 between backbone models and FuseUNet. However, we would like to emphasize that compute reduction is not our main contribution; the core innovation lies in the skip connection fusion mechanism, detailed in Response 6. 
|VRAM (G) / epoch (s)|nn-UNet|UNETR|UltraLight VM-UNet| |-|-|-|-| |Backbone|7.33/144|9.4/115|0.87/21| |FuseUNet|5.91/128|7.8/105|1.27/22| **5. On STU-Net comparisons** It is important to note that STU-Net relies on pretrained weights. In our experiments, we deliberately excluded models that require pretraining to minimize external factors beyond the architecture itself. This helps us ensure that the improvements come solely from the proposed structural changes. We acknowledge the value of pretrained models and plan to explore them as part of our future work. **6. Comparison with UltraLight VM-UNet** While UltraLight VM-UNet is a strong model, FuseUNet differs in both focus and design advantages. First, their core innovations are fundamentally different. UltraLight VM-UNet emphasizes lightweighting with PVM modules, similar to group convolutions, while FuseUNet focuses on multi-scale fusion using a novel view of U-Net stages as discrete ODE nodes and applying techniques like linear multistep and predictor-corrector methods. This enables effective fusion across stages, with lightweighting as a byproduct. Second, FuseUNet offers better compatibility with backbones and tasks. UltraLight VM-UNet replaces the core modules of other networks with PVM when applied to them, discarding their core innovations. FuseUNet, on the other hand, enhances skip connections in a way that integrates with existing architectures without altering their structure. Additionally, while UltraLight VM-UNet performs well on 2D single-target tasks, its effectiveness in more complex 3D multi-target segmentation tasks is still unproven. --- ### **Relation to Broader Scientific Literature** The goal of FuseUNet is not solely lightweighting. Instead, we propose a general skip connection fusion strategy applicable to both existing and future U-like networks, with practical significance. 
Besides, major compute reduction would come from lightweighting the encoder and decoder, but we avoided this to isolate the effect of our method. The encoder remains unchanged. Incorporating ODE-inspired designs into the encoder is part of our future work. We believe that with further development, FuseUNet's efficiency and deployability will improve. --- ### **Other Comments or Suggestions** **1. Class-wise score source** For nnUNet, class-wise results were obtained via direct email communication with the author. We will clarify this in the revised manuscript. **2. Focus on nnUNet-only** Thank you for recognizing the novelty of our work. While detailed ablation on nnUNet would highlight our method's theoretical effect, generalizability is also key. Since skip connections are common across U-like networks, we prioritize verifying our method's effectiveness across architectures. Deeper ablation studies on nnUNet are planned for future work, as validating generalizability consumed computational resources, limiting our ability to conduct these studies in the current work. --- Rebuttal Comment 1.1: Comment: All my queries were addressed. I switch to a full accept! --- Reply to Comment 1.1.1: Comment: We sincerely thank you for the updated evaluation and for your full acceptance of our work. We're truly encouraged to hear that all your concerns were addressed. Your feedback has been instrumental in helping us refine and clarify the paper, and we greatly appreciate the time and thought you invested throughout the review process. Thank you again for your support and recognition.
Hybrid Quantum-Classical Multi-Agent Pathfinding
Accept (poster)
Summary: This paper studies the problem of multi-agent pathfinding (MAPF). The authors propose a framework that formulates the problem as a two-level optimization problem, which can then be solved efficiently with a quantum computing method (QUBO). The authors prove that the algorithm eventually finds the optimal solution. They provide a detailed comparison between the anytime performance of the proposed method and several benchmark planners, concluding that the proposed planner can efficiently solve the MAPF problem optimally in many different cases. Claims And Evidence: As discussed below, there are other SOTA algorithms in the field, and it would be better if the authors could take them into consideration and compare performance. Beyond that, the claims are supported well and look good to me. However, the contribution of this work is not explicit to me. It seems that the core component of the framework, the QUBO solving strategy, is not the novelty here; this work applies the method to the MAPF problem. Flow-based formulations and integer-programming-related methods have been studied before as well. It would be nice if the authors could discuss the contribution of this work more explicitly. Methods And Evaluation Criteria: In this paper, the authors compared the performance of the proposed planner with LNS2 and BCP. However, the current SOTA for the MAPF problem is the LaCAM series of algorithms. Specifically, LaCAM* is also an anytime MAPF algorithm that is able to converge to the optimal solution. Given this, it would be good if the authors could include LaCAM in the benchmarks and compare performance with it (them). Besides, some variant algorithms of CBS are bounded suboptimal, such as EECBS (AAAI 2021); although it's not anytime, since BCP is taken into account, it would be nice if such bounded-suboptimal algorithms could also be compared. 
Theoretical Claims: The proof looks good to me. Experimental Designs Or Analyses: The authors used many experiments and illustrated the advantages of the proposed algorithms against BCP and LNS in different cases. The experiments look good in general, but in the performance comparison experiments, it would be interesting to see results with more agents. As the authors mentioned on page 4, the efficiency of the quantum algorithm is closely related to the number of decision variables. Currently the maximum scale of the problem is 100 agents, and it would be nice if the authors could provide results for experiments with more agents (1000, let's say) and compare the performance. Supplementary Material: This paper does not have supplementary materials. Relation To Broader Scientific Literature: In this paper, the authors proposed an interesting and applicable approach to solve MAPF problems using quantum computation, which is quite novel in the community. Moreover, the result in the paper paves the way for using quantum computing in more robotics scenarios, which is interesting and promising. Essential References Not Discussed: As mentioned in the Methods And Evaluation Criteria part above, the LaCAM series of algorithms should be included and discussed. The papers were published at AAAI and IJCAI 2023, with the prior work PIBT published at IJCAI 2019. Other Strengths And Weaknesses: The paper is written and organized clearly, and the illustrations are useful for understanding the algorithm. Other Comments Or Suggestions: The term NISQ is used on page 4 without explanation; the explanation appears on page 6. It would be better to introduce the explanation before the term is first used. Questions For Authors: All discussed in the above parts. Code Of Conduct: Affirmed. Overall Recommendation: 4
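For readers unfamiliar with QUBO formulations, a toy sketch of the idea discussed in this review (not the paper's actual encoding) may help: binary variables pick one candidate path per agent, one-hot and conflict constraints become quadratic penalties, and brute-force enumeration stands in for the quantum annealer. All names here are illustrative:

```python
import itertools
import numpy as np

def build_qubo(n_agents, n_paths, conflicts, penalty=10.0):
    """Toy MAPF-style QUBO: x[a*n_paths + p] = 1 means agent a takes path p."""
    n = n_agents * n_paths
    Q = np.zeros((n, n))
    idx = lambda a, p: a * n_paths + p
    # One-hot constraint per agent: penalty * (sum_p x_{a,p} - 1)^2,
    # expanded with x^2 = x and the constant term dropped.
    for a in range(n_agents):
        for p in range(n_paths):
            Q[idx(a, p), idx(a, p)] -= penalty
            for q in range(p + 1, n_paths):
                Q[idx(a, p), idx(a, q)] += 2 * penalty
    # Pairwise penalties for conflicting (agent, path) choices.
    for (a, p), (b, q) in conflicts:
        i, j = sorted((idx(a, p), idx(b, q)))
        Q[i, j] += penalty
    return Q

def brute_force_min(Q):
    """Stand-in for a quantum annealer: minimize x^T Q x by enumeration."""
    xs = (np.asarray(x) for x in itertools.product([0, 1], repeat=Q.shape[0]))
    return min(xs, key=lambda x: x @ Q @ x)
```

With 2 agents, 2 candidate paths each, and a conflict between agent 0's path 0 and agent 1's path 0, the minimizer assigns each agent exactly one path while avoiding the conflicting pair.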
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and thoughtful observations. Below, we address each of the main concerns and suggestions raised. ### Clarification of Contributions and Novelty: We appreciate the reviewer’s request for a clearer articulation of our contributions. Our key novelty lies in proposing the first quantum-compatible algorithm for MAPF that is theoretically grounded and hardware-aware. While QUBO as a general optimization framework is not new, our contribution lies in: - Structuring the MAPF problem via a two-level optimization framework suitable for QUBO translation. - Proving a formal optimality guarantee in this setting, extending classical results to the integer domain. - Introducing a hardware-aware QUBO design, using conflict graphs to enable decomposition into independent subproblems for practical execution on current quantum devices. Quantum annealing is not yet outperforming classical methods in most practical MAPF scenarios. Our paper does not claim otherwise. Instead, our goal is to present a hardware-aware, theoretically grounded framework that integrates quantum optimization in a meaningful way—even if its benefits will become more tangible as hardware improves. We will clarify this positioning and explicitly reference hardware roadmaps such as IBM’s quantum roadmap (https://www.ibm.com/roadmaps/quantum/) to illustrate the forward-looking relevance of this work. We will revise the introduction and contribution sections to make these points more explicit. ### Comparison with SOTA MAPF Algorithms (LaCAM and EECBS): We thank the reviewer for pointing out important recent advancements. We are currently running experiments with LaCAM* and EECBS to compare against our proposed methods and will include the results or a detailed discussion in the updated version. 
Additionally, we will expand the related work section to cover the LaCAM series (including PIBT) and EECBS to provide better contextualization within the MAPF literature. ### Scaling Beyond 100 Agents (e.g., to 1000 Agents): We agree that scaling is a crucial topic. While our current experiments evaluate up to 100 agents, scaling to thousands of agents on today’s quantum hardware remains infeasible due to hardware limitations in terms of qubit count, connectivity, and precision. Nonetheless, our framework is modular and designed to be compatible with future quantum devices. We identify large-scale testing (e.g., 1000 agents) as an important direction for future work, especially as quantum hardware evolves. We appreciate the reviewer’s positive comments on the novelty and applicability of our approach, as well as the clarity and illustrations in the paper. We believe that with these improvements, the paper will provide even greater value to both the MAPF and quantum optimization communities. --- Rebuttal Comment 1.1: Comment: I would like to sincerely thank the authors for their comprehensive rebuttal. I understand that under the current constraint of quantum hardware, it would be hard for the quantum-based MAPF algorithm to outperform the current methods, and the contribution of this work lies more in creating such a framework that could be useful in practice in the future when the hardware becomes better. I know that the authors are running the experiments, and it would be nice if the authors could provide the experimental data and demonstrate and compare the results to prove that the framework you proposed is useful in practice. 
Alternatively, explicitly point out which part of the method the current hardware is the bottleneck for, show how different hardware can greatly impact performance, and forecast what kind of equipment, reasonable to achieve in the future, could hopefully match and outperform the current best classical methods, based on a back-of-the-envelope model or other techniques. The potential of this method could be supported by this kind of analysis. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for recognizing the forward-looking nature of our framework and for this thoughtful suggestion. In response, we now include a two-part addition to the revised manuscript: ### Empirical Evaluation We ran experiments with EECBS, LaCAM and LaCAM* on the four benchmark MovingAI maps discussed in the paper. As for the already evaluated BCP and LNS2, we used a maximum time limit of 180s and default parameters otherwise. The same setup as in the paper is used, that is, we evaluate each algorithm on 25 different scenarios for 20, 40, 60, 80 and 100 agents. We compare solving the RMP optimally with ILP (QP-ILP) against solving the QUBO formulation with simulated annealing (QP-QUBO). Detailed results with mean and standard deviation over the 25 scenarios are given in the tables below. Considering the mean performance over all 25 scenarios, we observe that our method QP-ILP performs best in 17/20 cases. QP-QUBO performs only slightly worse, also achieving the best performance over all baselines in 15/20 cases. Near-future quantum optimization would allow for a performance lying between optimal solving (QP-ILP) and simulated annealing (QP-QUBO). LaCAM* performs best in 3/20 cases, which is probably due to a bad choice of initial paths for our algorithm. Due to our algorithm's anytime property, it can be combined with other algorithms to find better initializations (e.g. with LaCAM*).
*random-32-32-10*

| #Agents | LNS2 | EECBS | LaCAM | LaCAM* | QP-ILP | QP-QUBO |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| 20 | 447.1 ± 40.9 | 448.0 ± 40.9 | 453.0 ± 41.0 | 447.1 ± 40.9 | **447.0 ± 40.8** | **447.0 ± 40.8** |
| 40 | 887.3 ± 66.0 | 891.0 ± 65.9 | 910.5 ± 68.2 | 887.1 ± 66.0 | **886.3 ± 66.3** | **886.3 ± 66.3** |
| 60 | 1332.9 ± 77.6 | 1344.9 ± 78.4 | 1388.7 ± 88.5 | 1332.4 ± 77.4 | **1329.6 ± 77.1** | 1330.0 ± 77.0 |
| 80 | 1780.9 ± 95.6 | 1807.4 ± 95.0 | 1880.6 ± 96.3 | 1779.9 ± 94.3 | **1773.6 ± 93.2** | 1775.8 ± 93.9 |
| 100 | 2239.0 ± 113.1 | 2288.5 ± 111.1 | 2410.2 ± 126.5 | 2236.8 ± 112.2 | **2225.4 ± 111.5** | 2231.3 ± 112.2 |

*maze-32-32-4*

| #Agents | LNS2 | EECBS | LaCAM | LaCAM* | QP-ILP | QP-QUBO |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| 20 | 862.8 ± 108.5 | 879.6 ± 112.0 | 883.4 ± 102.0 | 851.8 ± 105.8 | **835.4 ± 104.0** | **835.4 ± 104.0** |
| 40 | 1778.0 ± 163.3 | 1824.2 ± 170.5 | 1828.3 ± 533.8 | 1753.8 ± 163.4 | **1724.1 ± 195.7** | 1743.8 ± 235.2 |
| 60 | 2691.1 ± 264.7 | 2791.2 ± 219.4 | 2794.7 ± 1066.7 | **2623.0 ± 191.1** | 2626.3 ± 591.4 | 2696.0 ± 459.2 |
| 80 | 3664.6 ± 226.3 | 3869.7 ± 195.4 | 3889.0 ± 1309.4 | **3575.6 ± 196.9** | 3778.4 ± 495.4 | 4234.9 ± 745.5 |
| 100 | 4725.4 ± 334.6 | 5059.3 ± 240.4 | 5086.5 ± 2551.3 | **4534.3 ± 219.8** | 5715.9 ± 1020.6 | 6332.5 ± 1272.6 |

*room-64-64-8*

| #Agents | LNS2 | EECBS | LaCAM | LaCAM* | QP-ILP | QP-QUBO |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| 20 | 1208.0 ± 125.9 | 1216.3 ± 127.0 | 1226.6 ± 128.2 | 1203.3 ± 125.2 | **1194.7 ± 124.9** | **1194.7 ± 124.9** |
| 40 | 2455.2 ± 187.5 | 2504.8 ± 189.3 | 2525.9 ± 188.1 | 2434.2 ± 180.0 | **2405.2 ± 178.2** | 2408.2 ± 179.3 |
| 60 | 3673.8 ± 239.5 | 3794.4 ± 258.0 | 3847.4 ± 256.8 | 3634.8 ± 222.6 | **3569.2 ± 216.6** | 3580.2 ± 218.7 |
| 80 | 4992.8 ± 259.1 | 5215.0 ± 307.7 | 5292.2 ± 287.5 | 4927.2 ± 252.3 | **4790.0 ± 969.0** | 4835.7 ± 264.0 |
| 100 | 6301.2 ± 307.2 | 6644.0 ± 342.3 | 6794.6 ± 321.1 | 6219.2 ± 285.4 | **6107.1 ± 303.7** | 6323.4 ± 342.4 |

*den312d*

| #Agents | LNS2 | EECBS | LaCAM | LaCAM* | QP-ILP | QP-QUBO |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| 20 | 1059.2 ± 114.0 | 1062.9 ± 115.1 | 1082.0 ± 121.3 | 1059.4 ± 114.1 | **900.5 ± 117.3** | **900.5 ± 117.3** |
| 40 | 2169.0 ± 141.9 | 2183.3 ± 143.3 | 2249.4 ± 160.9 | 2168.8 ± 142.0 | **1815.0 ± 127.5** | **1815.0 ± 127.5** |
| 60 | 3247.2 ± 184.7 | 3287.6 ± 190.8 | 3414.6 ± 208.3 | 3244.6 ± 184.8 | **2727.0 ± 173.7** | **2727.0 ± 173.7** |
| 80 | 4369.3 ± 203.1 | 4444.8 ± 211.3 | 4645.2 ± 252.5 | 4364.2 ± 203.1 | **3627.8 ± 161.2** | **3627.8 ± 161.2** |
| 100 | 5499.4 ± 232.6 | 5641.4 ± 241.1 | 5902.4 ± 273.9 | 5492.3 ± 224.9 | **4558.9 ± 195.2** | 4559.0 ± 195.2 |

### Hardware Bottleneck and Forecasting Analysis

To supplement the above empirical results, we provide a detailed analysis of the quantum hardware constraints that hinder evaluating scenarios with thousands of agents, and discuss future projections:
- Identification of the main algorithmic bottleneck: finding a valid solution to the RMP.
- Identification of the main hardware bottlenecks: QUBO embedding limitations (qubit count, connectivity) and sample quality (limited by noise and precision).
- A scaling analysis of our problem formulation in terms of qubit requirements (showing how many qubits are needed).
- Reference to realistic near-term hardware improvements, e.g. IBM’s roadmap, and estimates of what scale of MAPF problem such devices might support.
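As an aside for readers unfamiliar with how a QUBO is solved heuristically: the QP-QUBO variant discussed in this thread relies on simulated annealing over binary variables. The following toy sketch (a hypothetical standalone example, not the paper's implementation or the D-Wave sampler) shows the basic idea on a two-variable QUBO:

```python
import math
import random

def solve_qubo_sa(Q, n, steps=5000, t0=2.0, t1=0.01, seed=0):
    """Minimal simulated annealing for min x^T Q x over x in {0,1}^n.
    Q is a dict {(i, j): weight} with i <= j. Illustrative only."""
    rng = random.Random(seed)

    def energy(x):
        return sum(w * x[i] * x[j] for (i, j), w in Q.items())

    x = [rng.randint(0, 1) for _ in range(n)]
    e = energy(x)
    best, best_e = x[:], e
    for step in range(steps):
        t = t0 * (t1 / t0) ** (step / steps)  # geometric cooling schedule
        k = rng.randrange(n)
        x[k] ^= 1  # propose a single bit flip
        e_new = energy(x)
        # Accept downhill moves always, uphill moves with Boltzmann probability
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new
            if e < best_e:
                best, best_e = x[:], e
        else:
            x[k] ^= 1  # reject: undo the flip
    return best, best_e

# Tiny set-packing-style QUBO: each selection is rewarded (negative diagonal),
# but selecting both variables is penalized (positive off-diagonal term),
# so the optimum picks exactly one variable with energy -1.
Q = {(0, 0): -1.0, (1, 1): -1.0, (0, 1): 3.0}
x, e = solve_qubo_sa(Q, n=2)
```

Quantum annealing plays the same role as this sampler: it draws low-energy binary configurations of the QUBO, with quality limited by the hardware issues listed above.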
Summary: The paper proposes novel hybrid quantum-classical algorithms leveraging quantum annealers for the problem of multi-agent path finding (MAPF). It proposes two iterative variant algorithms, QUBO-and-Price (QP) and QUBO-and-Cut-and-Price (QCP), based on the idea of branch-and-cut-and-price (BCP), to find conflict-free paths for multiple agents navigating through shared space; these are among the first of their kind in the literature. Alongside a theoretical proof guarantee, the experiments show that these algorithms achieve competitive performance. This work provides an interesting method for solving this problem. Claims And Evidence: below expectations Methods And Evaluation Criteria: n/a Theoretical Claims: inaccurate Experimental Designs Or Analyses: below expectations Supplementary Material: not submitted Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: 1) The first quantum-classical solver for MAPF; 2) Detailed mathematical formulation of the MAPF problem as a QUBO, which can run on annealers; 3) Extensive experimental evaluation of the protocol, which showed great performance compared to the other solvers adopted as benchmarks. Weaknesses: 1) The paper tries to address the problem of coordinating paths for multiple agents. My question is then: how is this related to machine learning, given that ICML is a conference for machine learning? I think it could be better presented at a conference like IROS; 2) The writing is not easy or logical enough to follow and requires special attention; 3) Some statements on the background of quantum computing are inaccurate. For example, in L84 - 88, the adiabatic theorem only guarantees the ground state if there are no thermal perturbations, which is not the case for D-Wave annealers. This statement could be misleading; 4) in L194, the "NISQ" device normally refers to gate-based methods, but not to annealers.
Specifically, annealers do not suffer from the noise that is commonly referred to in gate-based models; 5) Some details require further explanation. For example, from (9) to (10), why does (10) hold if slack variables are not used? I had to think and guess before it made sense; it would benefit from a more thorough explanation; 6) In the formulation of the QUBO problem, the quadratic nature comes from the way the constraint is incorporated, not from the original objective function. This raises a question: how justifiable is it to use the annealer in this case? In other words, if the constraint is not enforced as a soft constraint but in some other form, the objective is just a linear programming problem, for which many more specialized solvers exist. A more detailed comparison with these solvers is lacking in the experiments; 7) Some ablation studies are missing. How does the penalty impact the solution accuracy and the hardware-specific sparsity? Probably also the chain breaks during the embedding. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful and constructive feedback. Below, we address each of the raised concerns and clarify several technical points. We will incorporate any remaining clarifications and adjustments into the final version. ### Relevance to ICML and the Machine Learning Community: We acknowledge the reviewer's concern regarding scope. While MAPF is a combinatorial optimization problem, our submission aligns with ICML’s growing interest in quantum computation, optimization theory, and AI planning. Recent ICML papers have explored quantum optimization, hardware-aware algorithm design, and combinatorial planning—topics closely related to our work. Furthermore, MAPF is widely studied in multi-agent reinforcement learning and planning under constraints, often intersecting with machine learning methodologies. ### Clarification on the Adiabatic Theorem (Lines 84–88): We agree that the current phrasing may overstate the practical applicability of the adiabatic theorem to real-world quantum annealers. We will revise this section to clearly distinguish idealized adiabatic quantum computation from practical implementations such as D-Wave’s quantum annealer, which is subject to thermal noise and device-specific imperfections. Our discussion will be reframed under the broader umbrella of quantum optimization, and not restricted to quantum annealing. ### Use of the Term “NISQ” (Line 194): We appreciate the clarification. While “NISQ” is typically used to describe gate-based devices, it is also used in the broader literature to refer to pre-fault-tolerant quantum systems, including annealers. We will clarify this distinction and highlight that quantum annealers suffer from different noise models (e.g., Integrated Control Errors), as documented in D-Wave’s technical materials (https://docs.dwavequantum.com/en/latest/quantum_research/errors.html). Regardless, we will revise this terminology for clarity and consistency.
### Use of QUBO for ILP and Comparison to Classical Solvers: We acknowledge the need to clarify this point. The original optimization objective is indeed an Integer Linear Program (ILP), which is NP-hard. We explicitly compare our QUBO-based approach with a classical exact ILP solver (via branch-and-bound), and will emphasize this more clearly in the experimental section. The benefit of QUBO is not in replacing linear programming for trivial cases, but in enabling structured quantum-compatible formulations with hardware-aware decomposition, which can become more relevant as hardware capabilities improve. ### Ablation Studies and Penalty Effects in QUBO: We agree that understanding the effect of penalty parameters and hardware constraints is critical. Large penalty weights may increase the dynamic range of the QUBO coefficients, potentially leading to: - Weaker embedding quality and increased chain breaks - Reduced solution quality due to spectral gap constraints - Increased annealing time requirements These trade-offs are well-known in the quantum annealing literature. While our current contribution focuses on algorithmic structure and theoretical guarantees, we recognize the importance of such ablation studies and plan to pursue them in future work, especially as hardware matures. We fully agree that quantum annealing is not yet outperforming classical methods in most practical MAPF scenarios. Our paper does not claim otherwise. Instead, our goal is to present a hardware-aware, theoretically grounded framework that integrates quantum optimization in a meaningful way—even if its benefits will become more tangible as hardware improves. We will clarify this positioning and explicitly reference hardware roadmaps such as IBM’s quantum roadmap (https://www.ibm.com/roadmaps/quantum/) to illustrate the forward-looking relevance of this work. ### Clarification from Eq. (9) to (10): Thank you for pointing this out. 
The equivalence from (9) to (10) stems from the fact that the slack variables in (9) only serve to linearize the inequality constraint Dz ≤ 1. In (10), we exploit the binary nature of D and z, and reformulate the constraint violation as a quadratic penalty without introducing additional variables. We will elaborate on this derivation in the revised version for clarity. We thank the reviewer again for their helpful suggestions, which we believe will substantially improve the clarity and impact of our paper.
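For readers wanting the concrete mechanism behind this kind of slack-free penalty, here is a generic sketch of the standard set-packing reformulation (an illustration of the technique described above, not necessarily the paper's exact Eq. (10); the symbols $c$, $\lambda$ are generic):

```latex
\[
  d^\top z \le 1,\quad z \in \{0,1\}^n,\ d \in \{0,1\}^n
  \quad\Longleftrightarrow\quad
  \sum_{i<j} d_i d_j\, z_i z_j = 0 ,
\]
% i.e., a binary packing row is violated exactly when two or more
% selected indices share the row, so the quadratic term counts violations.
so each row $d$ of $D$ can be replaced by the penalty
\[
  P_d(z) = \lambda \sum_{i<j} d_i d_j\, z_i z_j ,
  \qquad \lambda > 0 \text{ sufficiently large},
\]
giving a slack-free QUBO:
\[
  \min_{z \in \{0,1\}^n} \; c^\top z
  \;+\; \sum_{d \,\in\, \mathrm{rows}(D)} P_d(z) .
\]
```

Because $z_i \in \{0,1\}$ implies $z_i^2 = z_i$, no auxiliary slack variables are needed to express the inequality, which is why (10) can hold without them.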
Summary: The paper presents a quantum-classical hybrid approach to multi-agent path finding based on solving the restricted master problem via a QUBO translation. Claims And Evidence: Claims are supported, but the claims are rather weak anyway, involving only certain baseline solvers in a specific setup. Methods And Evaluation Criteria: The classical baseline approach appears too simple, and competitive classical state of the art is not sufficiently discussed. Theoretical Claims: One theorem is introduced, although it is not really needed for the empirical approach of this study. Experimental Designs Or Analyses: The comparison study is lacking better baselines as well as a fully classical optimization-based approach. Various choices are insufficiently discussed and the quantum computer setup is not sufficiently explained. Supplementary Material: none Relation To Broader Scientific Literature: appears fine Essential References Not Discussed: none come to mind Other Strengths And Weaknesses: Aside from what I described above, it is also unclear where any possible advantage up to the point of "dominat[ing] [...] baseline MAPF solvers" should even come from. Quantum annealing is notoriously inefficient at the moment. Other Comments Or Suggestions: typos: "prize" (l. 024), "j--th" (use "$j$th", l. 090), "however" (meant "but", l. 109), wrong citation style (l. 139, e.g.), "implicitely" (l. 227), "c.f." (l. 234), "the in the ..." (l. 359) Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed and structured feedback. We address your concerns below and will incorporate unresolved items into the final version. Any other points not mentioned in this answer will be fixed in the camera-ready version. ### On the Strength of Our Claims and Baseline Selection: While we appreciate that the reviewer finds our claims supported, we respectfully disagree that they are weak. Our contributions go beyond empirical comparisons: - We present the first quantum-compatible optimal MAPF algorithm, introducing a novel integration of QUBO formulations into a branch-and-cut-and-price framework. - We provide a theoretical optimality guarantee, generalizing known results to the integer domain. - Our hardware-aware QUBO design, incorporating conflict graphs, enables practical decomposition and paves the way for future scalability with emerging quantum hardware. - Through our quite general and extensive experimental setup, we show advantages over popular MAPF baselines. ### On Baseline Selection: While our experiments compare against selected strong baselines (BCP and LNS2), we agree that more extensive comparisons could further strengthen the work. We are currently running additional experiments with EECBS [1] and LaCAM [2,3], and will incorporate their results or a discussion in the updated version. ### On Classical Baselines and Fully Classical Solvers: The reviewer correctly points out the importance of a fair baseline. In addition to heuristic methods (e.g., LNS2), we note that our ILP-based solver is itself a fully classical method, using branch-and-bound for exact optimization. We will clarify this point in the manuscript and expand the discussion of classical methods in the related work section, including suboptimal but high-performance solvers like EECBS and LaCAM. ### On Experimental Design and Quantum Annealing Setup: We agree that more details on the quantum annealing setup can enhance clarity.
We used D-Wave’s default parameters and will include explicit settings such as annealing time (e.g., 50µs) and the number of samples. If the reviewer has specific choices they would like us to elaborate on (e.g., QUBO encoding, annealer scheduling, embedding strategies), we are happy to address those explicitly. ### On the Usefulness of Quantum Annealing Today: We fully agree that quantum annealing is not yet outperforming classical methods in most practical MAPF scenarios. Our paper does not claim otherwise. Instead, our goal is to present a hardware-aware, theoretically grounded framework that integrates quantum optimization in a meaningful way—even if its benefits will become more tangible as hardware improves. We will clarify this positioning and explicitly reference hardware roadmaps such as IBM’s quantum roadmap (https://www.ibm.com/roadmaps/quantum/) to illustrate the forward-looking relevance of this work. [1] Li, Jiaoyang et al. "EECBS: Bounded-Suboptimal Search for Multi-Agent Path Finding." In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), 2021. [2] Okumura, Keisuke. "LaCAM: Search-based Algorithm for Quick Multi-Agent Pathfinding." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 10. 2023. [3] Okumura, Keisuke. "Engineering LaCAM*: Towards Real-time, Large-scale, and Near-optimal Multi-agent Pathfinding." In Proceedings of the 23rd International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2024.
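The conflict-graph decomposition mentioned in this rebuttal can be illustrated with a small sketch (hypothetical code assuming conflicts are given as pairs of variable indices; the paper's actual decomposition details are not reproduced here). Variables in different connected components of the conflict graph share no quadratic terms, so each component can be embedded and solved as an independent, smaller QUBO:

```python
from collections import deque

def conflict_components(n, conflicts):
    """Group n QUBO variables into independent subproblems given
    a list of conflicting index pairs, via connected components."""
    adj = {i: set() for i in range(n)}
    for i, j in conflicts:
        adj[i].add(j)
        adj[j].add(i)
    seen, components = set(), []
    for start in range(n):
        if start in seen:
            continue
        # breadth-first search from each unvisited variable
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        components.append(sorted(comp))
    return components

# Six candidate paths; conflicts occur only within {0, 1, 2} and {3, 4},
# so the QUBO splits into three subproblems of sizes 3, 2, and 1.
parts = conflict_components(6, [(0, 1), (1, 2), (3, 4)])
```

Smaller components are easier to embed on sparse annealer topologies, which is the motivation the authors give for the hardware-aware design.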
Summary: This paper approaches large-scale multi-agent path finding as a hybrid quantum computing problem by combining branch-and-cut-and-price (BCP) with quadratic unconstrained binary optimization (QUBO) formulations for resolving path conflicts. The resulting hybrid algorithms are evaluated on several common large-scale multi-agent path finding tasks, where favorable performance over baseline algorithms is demonstrated. Claims And Evidence: The claims of the paper are well-motivated and the resulting algorithms are tested on several example scenarios with favorable performance compared to selected baselines. Additional ablations provide further insights into performance differences among e.g. different underlying QUBO formulations. Methods And Evaluation Criteria: The proposed methods and evaluations make sense, validating performance on established tasks against capable baselines. Scaling to even larger-scale examples, as considered by one of the baselines, could further improve the paper (or alternatively further discussion of why this is currently not possible). Theoretical Claims: I did not see issues with the theoretical claims. Experimental Designs Or Analyses: The overall evaluation is good, comparing against key baselines on established benchmarking tasks. Extension to larger-scale domains provided in the MAPF benchmark or discussion of why this is currently infeasible would further strengthen the paper. Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: The paper gives a good overview of related work and where the current approach is situated relative to prior works. Essential References Not Discussed: Consider citing a D-Wave technical report. Other Strengths And Weaknesses: - The paper is well-written, leverages well-crafted illustrations, and is therefore mostly easy to follow - The maps considered for benchmarking include small to intermediate size scenarios from the MAPF benchmarks by MovingAI.
What is restricting extension to the larger maps? Are you hardware limited? It would be interesting to discuss further details here. - Similarly, the original BCP paper evaluated on the larger den520d and lak503d domains - how would QP-QUBO fare on these tasks? - The used MAPF benchmark provides different sizes per map type, it would be interesting how performance comparisons behave across map sizes. - In Figure 3, consider adding environment names to each column in addition to the map - It could be interesting to see e.g. performance standard deviation around the average in Figure 3, potentially as an extra figure in the appendix to maintain visual clarity. Other Comments Or Suggestions: - Line 023: also add CBS abbreviation after “Conflict-based Search” - Line 131: wait (at) a certain location - Line 194: NISQ abbreviation used before introduction in Line 306 - The abbreviations QP and QCP might be a bit overloaded for some readers thinking of Quadratic(ally Constrained) Programs - QuP and QuCP could be options? Questions For Authors: - Could you clarify why LNS2 seemingly improves with increasing agent count on the maze and room domains, while QP-QUBO shows the opposite trend? - Could you clarify scalability (issues) when considering larger environments from the MAPF benchmark? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thorough and insightful evaluation of our paper. We are encouraged by the positive assessment regarding our motivation, theoretical contributions, experimental validation, and presentation. We address the reviewer’s specific comments and questions below. Everything not addressed in this answer will be integrated into the camera ready version. ### Scalability to Larger MAPF Instances (e.g., den520d, lak503d): We appreciate the suggestion to evaluate on larger-scale maps. The main limitation in scaling to domains such as den520d and lak503d is twofold: #### QUBO dimensionality The number of binary variables grows rapidly with the number of paths and constraints, which directly impacts the QUBO size and solvability on current NISQ hardware. #### Hardware constraints Even with conflict graph decomposition, QUBOs derived from larger environments often exceed current quantum annealers' qubit capacities and connectivity constraints. Our contributions consist of the presentation of the first quantum-compatible optimal algorithm that leverages QUBO formulations. Our approach is grounded in a theoretically proven optimality guarantee, which generalizes classical column generation theory to the binary domain. We introduce a hardware-aware QUBO design, enabling scalable and parallel sub-problem decomposition—crucial for current and near-term quantum devices. By successfully integrating these concepts and demonstrating favorable performance on standard MAPF benchmarks, our work paves the way for exploiting future generations of quantum hardware in complex combinatorial planning tasks. That said, our method is modular and scales well with improvements in hardware or more aggressive decomposition techniques. We plan to explore den520d and lak503d in future work using circuit-based quantum methods or batched hybrid strategies. We will add a discussion on this to the paper. ### Clarification on QP-QUBO vs. 
LNS2 Scaling Trends (Maze and Room Domains): We agree that the observed trend—LNS2 improving with more agents, while QP-QUBO degrades—is interesting. This can be attributed to the heuristic flexibility of LNS2, which benefits from local repair in denser agent settings. In contrast, QP-QUBO solves a more rigidly structured optimization problem where QUBO hardness increases with more conflicts. Since the QUBO is solved heuristically via SA/QA, solution quality can deteriorate with size unless additional paths or constraints are added, which also increases problem size. Furthermore, we use Prioritized Path Planning for initializing the set of paths, which can lead to suboptimal initial results for a large number of agents. Adapting the initial heuristic for obtaining a valid set of conflict-free paths (e.g. using LNS2 instead) can mitigate this effect. We will clarify these phenomena in the revised version. ### Benchmark Variability Across Map Sizes: Thank you for highlighting this. Due to space constraints, we selected representative maps from each structural class. However, we agree that presenting performance across different sizes within the same map type (e.g., room-32-32 vs. room-64-64) would enrich the analysis. We will consider adding this to the appendix or releasing extended experiments online. --- Rebuttal Comment 1.1: Comment: Thank you very much for the detailed response and clarifications! Looking forward to the extended "map size" experiments. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the helpful suggestion for scaling up the experiments. In response, we now include a two-part addition to the revised manuscript: ### Empirical Evaluation We ran experiments with the further state-of-the-art MAPF algorithms EECBS, LaCAM and LaCAM* on the benchmark MovingAI maps discussed in the paper. As for the already evaluated BCP and LNS2, we used a maximum time limit of 180s and default parameters otherwise.
The same setup as in the paper is used: we evaluate each algorithm on 25 different scenarios for 20, 40, 60, 80 and 100 agents. We compare solving the RMP optimally with ILP (QP-ILP) against solving the QUBO formulation with simulated annealing (QP-QUBO). Detailed results with mean and standard deviation over the 25 scenarios are given in the tables below for three maps, due to character constraints. Considering the mean performance over all 25 scenarios, we observe that our method QP-ILP performs best in 17/20 cases. QP-QUBO performs only slightly worse, also achieving the best performance over all baselines in 15/20 cases. Near-future quantum optimization would allow for a performance lying between optimal solving (QP-ILP) and simulated annealing (QP-QUBO). LaCAM* performs best in 3/20 cases, which is probably due to a bad choice of initial paths for our algorithm. Due to our algorithm's anytime property, it can be combined with other algorithms to find better initializations (e.g. with LaCAM*). Experiments for larger-scale maps such as *den520d* and *lak503d* are still running at the moment. We expect that similar results will be achieved.
*random-32-32-10*

| #Agents | LNS2 | EECBS | LaCAM | LaCAM* | QP-ILP | QP-QUBO |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| 20 | 447.1 ± 40.9 | 448.0 ± 40.9 | 453.0 ± 41.0 | 447.1 ± 40.9 | **447.0 ± 40.8** | **447.0 ± 40.8** |
| 40 | 887.3 ± 66.0 | 891.0 ± 65.9 | 910.5 ± 68.2 | 887.1 ± 66.0 | **886.3 ± 66.3** | **886.3 ± 66.3** |
| 60 | 1332.9 ± 77.6 | 1344.9 ± 78.4 | 1388.7 ± 88.5 | 1332.4 ± 77.4 | **1329.6 ± 77.1** | 1330.0 ± 77.0 |
| 80 | 1780.9 ± 95.6 | 1807.4 ± 95.0 | 1880.6 ± 96.3 | 1779.9 ± 94.3 | **1773.6 ± 93.2** | 1775.8 ± 93.9 |
| 100 | 2239.0 ± 113.1 | 2288.5 ± 111.1 | 2410.2 ± 126.5 | 2236.8 ± 112.2 | **2225.4 ± 111.5** | 2231.3 ± 112.2 |

*room-64-64-8*

| #Agents | LNS2 | EECBS | LaCAM | LaCAM* | QP-ILP | QP-QUBO |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| 20 | 1208.0 ± 125.9 | 1216.3 ± 127.0 | 1226.6 ± 128.2 | 1203.3 ± 125.2 | **1194.7 ± 124.9** | **1194.7 ± 124.9** |
| 40 | 2455.2 ± 187.5 | 2504.8 ± 189.3 | 2525.9 ± 188.1 | 2434.2 ± 180.0 | **2405.2 ± 178.2** | 2408.2 ± 179.3 |
| 60 | 3673.8 ± 239.5 | 3794.4 ± 258.0 | 3847.4 ± 256.8 | 3634.8 ± 222.6 | **3569.2 ± 216.6** | 3580.2 ± 218.7 |
| 80 | 4992.8 ± 259.1 | 5215.0 ± 307.7 | 5292.2 ± 287.5 | 4927.2 ± 252.3 | **4790.0 ± 969.0** | 4835.7 ± 264.0 |
| 100 | 6301.2 ± 307.2 | 6644.0 ± 342.3 | 6794.6 ± 321.1 | 6219.2 ± 285.4 | **6107.1 ± 303.7** | 6323.4 ± 342.4 |

*den312d*

| #Agents | LNS2 | EECBS | LaCAM | LaCAM* | QP-ILP | QP-QUBO |
| :-: | :-: | :-: | :-: | :-: | :-: | :-: |
| 20 | 1059.2 ± 114.0 | 1062.9 ± 115.1 | 1082.0 ± 121.3 | 1059.4 ± 114.1 | **900.5 ± 117.3** | **900.5 ± 117.3** |
| 40 | 2169.0 ± 141.9 | 2183.3 ± 143.3 | 2249.4 ± 160.9 | 2168.8 ± 142.0 | **1815.0 ± 127.5** | **1815.0 ± 127.5** |
| 60 | 3247.2 ± 184.7 | 3287.6 ± 190.8 | 3414.6 ± 208.3 | 3244.6 ± 184.8 | **2727.0 ± 173.7** | **2727.0 ± 173.7** |
| 80 | 4369.3 ± 203.1 | 4444.8 ± 211.3 | 4645.2 ± 252.5 | 4364.2 ± 203.1 | **3627.8 ± 161.2** | **3627.8 ± 161.2** |
| 100 | 5499.4 ± 232.6 | 5641.4 ± 241.1 | 5902.4 ± 273.9 | 5492.3 ± 224.9 | **4558.9 ± 195.2** | 4559.0 ± 195.2 |

### Hardware Bottleneck and Forecasting Analysis

To supplement the above empirical results, we provide a detailed analysis of the quantum hardware constraints hindering evaluation of scenarios with thousands of agents, and discuss future projections:
- Identification of the main algorithmic bottleneck: finding a valid solution to the RMP.
- Identification of the main hardware bottlenecks: QUBO embedding limitations (qubit count, connectivity) and sample quality (limited by noise and precision).
- A scaling analysis of our problem formulation in terms of qubit requirements (showing how many qubits are needed).
- Reference to realistic near-term hardware improvements, e.g. IBM’s roadmap, and estimates of what scale of MAPF problem such devices might support.
ADIOS: Antibody Development via Opponent Shaping
Accept (poster)
Summary: The paper introduces ADIOS (Antibody Development via Opponent Shaping), a meta-learning framework that designs antibodies capable of both neutralizing current viral strains and influencing viral evolution to favor less dangerous variants. By framing antibody-virus interactions as a two-player zero-sum game, the method uses nested optimization loops: an inner loop simulating viral escape and an outer loop optimizing antibodies against long-term viral adaptation. ADIOS is implemented within the Absolut! framework, leveraging GPU acceleration for a 10,000x speedup. The authors demonstrate that ADIOS-optimized antibodies, or "shapers," outperform conventional myopic antibodies in long-term efficacy, shaping viral evolution to produce more targetable variants.

Claims And Evidence:
- The paper claims that ADIOS-optimized antibodies shape viral evolution toward weaker, more targetable variants. This claim is supported through simulation results demonstrating that shapers lead to viruses more susceptible to a broader range of antibodies.
- The claim that ADIOS outperforms myopic antibodies in long-term protection is substantiated by comparative evaluations, showing superior performance over extended evolutionary trajectories.
- The authors suggest that ADIOS has applications beyond antiviral therapy, including antimicrobial resistance and cancer treatment. While plausible, this claim remains speculative and would require further validation in these domains.
- The paper asserts that its GPU-accelerated JAX implementation of Absolut! achieves a 10,000x speedup. The performance comparison is well-documented, demonstrating substantial computational gains.

Methods And Evaluation Criteria:
- The method is evaluated within the Absolut! simulation framework, which is appropriate for modelling antibody-virus interactions.
- The evaluation compares ADIOS to myopic antibodies, demonstrating superior long-term efficacy. However, additional comparisons against existing evolutionary models would further validate ADIOS's impact.
- The paper explores the trade-offs between short-term efficacy and long-term viral shaping, providing valuable insights into practical applications.

Theoretical Claims:
- The theoretical framing of ADIOS as a two-player zero-sum game is well-grounded in reinforcement learning and opponent shaping literature.
- The meta-learning approach to optimising antibodies across viral evolutionary trajectories is effectively justified.

Experimental Designs Or Analyses:
- The experimental setup effectively evaluates the model's ability to shape viral evolution and sustain long-term efficacy.
- The choice of the dengue virus as a test case is reasonable, though additional validation on other viruses would strengthen the study.
- The paper provides clear analyses of the computational trade-offs in shaping horizons, offering practical guidance for deploying ADIOS in resource-constrained settings.

Supplementary Material:
- The paper references additional details in the appendix, but no supplementary material was explicitly reviewed.

Relation To Broader Scientific Literature:
- The work builds on prior research in computational antibody design, reinforcement learning, and viral escape modeling.
- The integration of opponent shaping into antibody therapy design represents a good contribution that extends beyond traditional machine learning-based antibody optimisation.
- The study effectively situates ADIOS within the broader context of viral adaptation and therapy resistance, demonstrating its relevance to long-term immunotherapy strategies.

Essential References Not Discussed:
- The paper sufficiently engages with prior work on opponent shaping, reinforcement learning, and antibody-virus interactions.
Other Strengths And Weaknesses:
- Strengths:
  - ADIOS introduces a novel approach to antibody design by explicitly modeling viral adaptation, moving beyond conventional myopic strategies.
  - The study provides empirical validation, demonstrating that shapers outperform myopic antibodies in long-term efficacy.
  - The GPU-accelerated JAX implementation significantly improves computational efficiency, making large-scale evolutionary simulations feasible.
  - The exploration of shaping trade-offs provides valuable insights for real-world deployment.
  - The method has potential applications beyond virology, including antimicrobial resistance and cancer treatment.
- Weaknesses:
  - While the study presents compelling results, additional validation on a broader range of viruses would strengthen its generalizability.
  - The claim that ADIOS is applicable to antimicrobial resistance and cancer therapy remains speculative without experimental evidence.
  - The evaluation focuses primarily on simulation results, and real-world validation would be necessary to confirm ADIOS's practical impact.
  - The authors assume that the structure of the antigen does not significantly change over the course of viral escape.

Other Comments Or Suggestions: NA

Questions For Authors:
- Have you considered evaluating ADIOS on other viruses beyond dengue to test its generalizability?
- How sensitive is ADIOS to different choices of evolutionary parameters in the viral escape simulation?
- How would you incorporate dynamics (going beyond the assumption of static antigen structure) to improve accuracy?
- Although it might be difficult, how would you go about real-world validation?
- How do you envision ADIOS being applied to cancer therapy, and what adaptations would be required for such use cases?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you very much for your thorough review and feedback! We appreciate it a lot. We are glad that you thought that applying opponent shaping to antibody design is "a good contribution that extends beyond traditional ML-based antibody optimisation"!

# Answering Questions

## 1. Evaluating ADIOS on Other Viruses

We agree that testing ADIOS against a wider range of viruses would help further validate our claims. To address this, we have conducted additional shaping experiments with three more viral antigens: the flu, MERS, and the West Nile virus. For all three, our experiments show that the antibody shapers generated by ADIOS successfully shape the new viruses, limiting viral escape. Interestingly, the computational trade-offs of shaping horizons depend on the virus but follow the same main trends. We have added these new results to the paper.

## 2. Sensitivity of ADIOS to Hyperparameters

Based on our experiments, ADIOS is not very sensitive to the choice of evolutionary parameters. As an example, we experimented with different mutation rates of the virus and found qualitatively similar results. To test the transferability of our results, we also designed the external pressure experiment; the results are in Figure 4. This result demonstrates the robustness of ADIOS to a domain shift. Even with the added pressure of an external antibody, we can still see a shaping effect induced on the virus by the long-horizon shapers. Additionally, the new results of the extra experiments on three other viruses show that ADIOS achieves shaping and limits viral escape for all the selected viruses, not only dengue. Overall, this demonstrates that ADIOS is not sensitive to the choice of evolutionary parameters.

## 3. Incorporating Dynamics

Absolut! already allows us to incorporate an aspect of 'dynamics', as it tests binding over a larger number of potential poses, and the antibody structure is permitted to change between poses.
The minimum energy poses change dramatically throughout the evolution of antibody/antigen, and we study the influence of shapers on binding poses in Appendix B. That said, you are completely correct about the static structure of the antigen which is used to generate these binding poses in Absolut!. In future work, we plan to directly account for changes in the antigen structure by using structure prediction models like AlphaFold3 (AF3), which can now accurately predict the structure of an antibody-antigen complex. We are building a binding simulator that combines the AF3 structure with a protein-protein docking score to achieve a more accurate binding affinity score. We plan to use this new binding simulator, which accounts for antigen structure changes, instead of Absolut!, to further evaluate the ADIOS framework and demonstrate its real-world applicability.

## 4. Real-World Validation

We want to conduct a real-world evaluation of ADIOS in the new simulator mentioned above by using data from past COVID variants. All SARS-CoV-2 pandemic strain sequences are available through GISAID (https://gisaid.org/). Additionally, a dataset of COVID antibodies is also available through CoV-AbDab (https://doi.org/10.1093/bioinformatics/btaa739). We could retrospectively evaluate the 'Myopic' case by using these antibodies. I.e. we can show that by setting up our simulator to a state equivalent to the beginning of the pandemic and simulating viral escape as a response to some of these real-world antibodies, the real viral strains observed later in the pandemic are 'in distribution' of our simulator's outputs. That would show the simulator is 'trustworthy', and successfully running ADIOS in it would indicate real-world applicability. Going beyond simulation, we can validate ADIOS using bacteriophages, i.e. viruses that infect bacteria, and run shaping experiments in a wet lab with mutating bacteria and bacteriophages.
Co-evolving bacteriophages (https://doi.org/10.1073/pnas.2104592118) would be a good real-world test ground for ADIOS that is experimentally tractable and low-risk. ADIOS's inner loop would correspond to the bacteriophage evolving, while the outer loop would be evolving the actual bacteria.

## 5. ADIOS for Cancer Therapy

Monoclonal antibodies (mAbs) are a common therapy used for cancer treatment. To adapt ADIOS to cancer therapy, we would still optimise antibodies in the outer loop; this time these would correspond to mAbs. Instead of the viral escape simulation in the inner loop, we would simulate the evolution of cancer cells' growth factor receptors. The goal would be to design antibodies that shape cancer cells into cells that don't proliferate well. We have added a short phrase to our manuscript suggesting this as a potential domain of application.

Again, thank you for your detailed feedback! Hopefully, we have addressed all your concerns. If we did, could you consider updating your score for our paper?

---

Rebuttal Comment 1.1:

Comment:

### On the dynamics

Although AF3 is a clear improvement over AF2 in terms of structure prediction, it still struggles with accurately modeling antibody-antigen complexes. More importantly, it doesn't capture true molecular dynamics; it simply samples a range of possible conformations, some of which may not reflect biologically relevant states. So while your proposed direction sounds promising, it may still be limited by the inherent shortcomings of AF3.

---

Reply to Comment 1.1.1:

Comment:

## On the Dynamics

Thank you for your comment. We agree that AF3 doesn't capture molecular dynamics, nor did we claim it does. Our previous response referred to "dynamics" resulting from antigen sequence mutation. This seems to be the result of a simple misunderstanding over what is meant by a "static antigen structure". However, this is only relevant for future work.
## Rebuttal Acknowledgement

In your original review, you list 4 weaknesses of the paper:
1. Testing on only one virus.
2. No experimental evidence for ADIOS for antimicrobial resistance and cancers.
3. No real-world validation.
4. Assumptions of static antigen structure.

In our rebuttal, we've addressed all of them:
1. We ran additional experiments on three other viruses. In all cases, ADIOS successfully achieves shaping and limits viral escape; see point 1 in our rebuttal.
2. We only briefly talk about antimicrobial resistance and cancer treatment in our paper, and we have now added further clarification regarding our claims; see point 5 in our rebuttal.
3. You already acknowledge the potential difficulty of real-world validation, and we have discussed at length how real-world validation of ADIOS is possible; see point 4 of our rebuttal. However, it's beyond the scope of our current work, where the goal is to demonstrate that shaping with ADIOS is possible, which we have successfully shown.
4. For this initial study, assuming a static antigen structure was necessary, as modelling the changing antigen structure would hugely inflate the computational budget. Now that we have shown ADIOS works when Absolut! is the binding simulator, we can build more accurate and expensive simulators to evaluate ADIOS as future work, which we discuss in point 3 of our rebuttal.

Given that you don't provide any further concerns with our paper or rebuttal, we assume that our responses have satisfactorily addressed the weaknesses you initially highlighted. If so, we would greatly appreciate you reconsidering your score. Judging from your original review, it's clear that you see a great deal of value in our work. However, we understand that a 'weak accept' was aligned with your initial critique. In light of our clarifications and the extra experiments we conducted - including those with additional viruses - could you raise your overall score?
Summary: The authors consider a very interesting and important problem of developing antibodies against a viral strain which would control and defend against newer strains evolved from it. The antibody development problem is viewed as a sequential decision making problem which is modeled as a two-player zero-sum game between the antibody developer and the viral strain. The authors call their approach ADIOS (Antibody Development vIa Opponent Shaping) and the non-myopic antibodies they design 'shapers'. They label their approach as a meta-learning problem where the outer loop is the antibody (shaper) design process and the inner one is the virus' adaptive (evolutionary) response. To demonstrate their approach, the authors build a simulator using the Absolut! framework.

**Update after rebuttal** Based on the authors' responses to my review and the other reviews, I would like to retain my overall recommendation. I have highlighted my concerns and response to rebuttal in my comments.

Claims And Evidence: The authors present a new approach for antibody design and demonstrate it using a simulated environment (based on work from the literature). They do not make any theoretical claims.

Methods And Evaluation Criteria: The authors do not use any benchmark datasets, but they use a simulator to show the efficacy of their approach. This simulator is based on other published work, but it would be good if the authors could add a note on the accuracy and real-world utility of this simulator. They do mention that their simulator is based on simplified models, but not its practical implication. Can these be used to generate a good proof of concept?

Theoretical Claims: There are no theoretical claims in this paper. The authors propose a new approach, which is actually a novel combination of existing approaches to a new problem.

Experimental Designs Or Analyses: The authors perform several experiments using a simulator and also provide detailed analyses of their results.
Supplementary Material: I read the entire supplementary material in the pdf. I have not gone through the code given in an external link.

Relation To Broader Scientific Literature: This paper provides a novel way of considering antibody design (to the best of my knowledge), thereby providing interesting downstream research opportunities for the community.

Essential References Not Discussed: I am not aware of any essential references that have been missed.

Other Strengths And Weaknesses:
Strengths:
1. The paper is very well written and the authors' approach is clearly explained.
2. The approach presented by the authors provides a way to try several other ideas from multi-objective RL and game theory in antibody design and thus is a useful research contribution.

Weaknesses:
1. Novelty: While the application is definitely novel to the best of my knowledge, the various components in the approach are taken from the literature.

Other Comments Or Suggestions:
1. It would be good to describe the variables used in the algorithms where they are mentioned, for example $R_V$ in Algorithm 1 and $F^H_{\hat v}$ in Algorithm 2.
2. It would be useful to the readers if the authors formally defined the two-player game, including the various state, action and observation spaces, transition probabilities, observation functions, payoff functions, horizon and discounting used (if any).

Questions For Authors:
1. How difficult is it to simulate virus evolution/mutation? Doesn't it depend on factors beyond antibodies, such as possibly another (non-human) host, etc.?
2. Is it computationally involved to simulate viral evolution? Are the potential trajectories tractable?
3. What are the practical considerations involved in antibody design? Does the current approach account for the effect of antibodies on human beings?
4. What are the time scales for the evolution of viruses and the development of new antibodies? How do these vary with respect to the disease progression timescale, severity/mortality rate of the disease, R0, etc.?
5. How accurate is the Absolut! framework based environment in estimating the binding strength of protein-protein interactions? Does it work for novel antibodies?
6. When considering the potential harmful effects of future variants of viruses, are co-morbidities taken into account or just escape potential?
7. Does the action of virus evolution and/or antibody generation consist of generating a fixed-length amino-acid sequence at each time step, or can this be a variable-length sequence with some maximum length? Does protein folding play a role in designing feasible structures? The authors mention this as a future step in Section 7, but it would be interesting to note how this will affect the action space itself.
8. In equation (1), can there be multiple anti-targets for an antibody?
9. Have the authors explicitly defined a policy/strategy function, which is a mapping from the current state of the antibody design to a new antibody design?
10. Is some kind of an equilibrium reached by the authors' algorithm? Is an equilibrium expected? If so, what would be its characteristics?

Ethical Review Concerns: The authors already highlight potential concerns in their Impact Statement on Page 9.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for your detailed review! We address everything possible within the character limit:

## Accuracy + Real-World Utility of Absolut!

To address the real-world utility of our simulator we quote the original Absolut! paper: "Of note, Absolut! is neither suited nor designed to directly predict if and where an antibody sequence binds to an antigen in the real world. Rather, Absolut! has been developed with the premise that a successful ML antibody-binding strategy for experimental (real world) datasets should also perform well on synthetic datasets (and vice versa)". Hence Absolut! is not designed for direct real-world predictions, but it has been shown that ML models which perform better according to Absolut! also perform well on real-world datasets. This makes Absolut! ideal as a testing ground for ADIOS. We reference this in Appendix Section C, where we explain Absolut! in more detail. To make this clearer, should we incorporate an additional explanation in the background section?

## Notation and MDP Definition

We note that the functions $R_v$ and $F_v^H$ were not defined in the algorithms; we have therefore added their definitions. As for the formal definition of the MDP, we agree and have added a section detailing the MDP to the methods section.

## Answering Questions

1. Simulating viral evolution is generally challenging and computationally involved. However, recent works, such as EVEscape (https://doi.org/10.1038/s41586-023-06617-0), have had success forecasting viral mutations, so it is possible to have some predictive power, which is likely to increase as models and data improve. In our viral escape simulation, the virus responds to two main factors, the antibody and the viral target. In more accurate simulators you might want to account for other factors too; for example, in real life the evolution will depend on how chronic the virus is, what percentage of people will be infected, and the mortality.
To keep our approach computationally feasible we omit some of these factors.
2. See above.
3. The key consideration in antibody design is the specificity of the antibody, i.e. does it bind to the correct antigen and nothing else (https://doi.org/10.1186/s12929-019-0592-z). In ADIOS we just consider the region of the antibody sequence which is responsible for the specificity, the CDRH3 region. Designing the rest of the antibody sequence will affect many developability parameters in the human body such as PK, half-life, and effector functions, which are also important for antibody design, but engineering these is independent of the specificity. Typically, the CDRH3s can be grafted onto antibody scaffolds with good profiles of other properties, so our approach is suitable for the specificity question and the selective pressure it creates.
4. The development of antibody therapeutics in the US on average takes about 8 years until the drug receives FDA approval (https://doi.org/10.4161/mabs.2.6.13603). Viral evolution is very different between viruses; for example, RNA viruses (e.g. HIV, SARS-CoV-2) evolve faster than DNA viruses (e.g. Herpesvirus). In general, a higher mortality rate limits viral evolution and needs a faster immune response, while a higher R0 means more evolution opportunities for the virus given the higher number of infections.
5. See the section on Absolut! above.
6. In this work we mainly focus on viral escape potential and how easily targetable the virus is. However, in future work, with higher-fidelity simulators, we can model other factors such as mortality or co-morbidities.
7. In this work we assume a fixed sequence length for the antibody and the virus; we explain this in more detail in Section 4.1.
An extension of this work where we allow insertions and deletions to the sequence is certainly possible; one way to adapt the action space to allow this would be to have a max sequence length for the insertions and an additional 'None' token for the deletions. Protein structure prediction models, such as AlphaFold3 (AF3), would be useful to validate the feasibility of structures resulting from the mutated sequences of antibodies and antigens. However, the action space would remain in the sequence space, and we could just include an extra 'structure check' of the sequence by passing it through AF3.
8. Yes, there can be multiple anti-targets in our model, but in our experiments we stick to 1.
9. We don't define the antibody's policy function explicitly, but we can easily add that if you think it's helpful.
10. Figure 2b and Figure 2c show viral and antibody evolution plateauing. Eventually, an equilibrium must be reached; however, this may take an extremely long time. Do you think it would be interesting to run an ultra-long-horizon experiment, to better understand the behaviour?

## Conclusion

We appreciate the interest you have shown in our work! Is there anything we could change for you to raise your review to a "strong accept"? Also, any future work ideas would be great too!

---

Rebuttal Comment 1.1:

Comment: I thank the authors for the detailed response. They answer several of my questions. Regarding the points about the helpfulness of an antibody policy definition and simulating equilibrium, I am not sure about these from a healthcare problem perspective; I just asked from an RL perspective. Based on the authors' responses to my review and the other reviews, I would like to retain my overall recommendation.

---

Reply to Comment 1.1.1:

Comment: Thanks for your comment! Due to the character limit, we had to make our answers short and could not expand on your equilibrium question. We'll take this opportunity to explain it a little further.
To expand on the equilibrium question: in our current setting we model a one-shot deployment of an antibody therapy, reflecting the real-world limitations of releasing multiple rounds of therapies. Since we model the viral response to this single antibody, in the inner loop the virus will eventually converge to some local optimum against the antibody. Technically, the virus will eventually converge to a distribution concentrated around the global optimum (with the degree of concentration determined by the evolutionary parameters), but this will only happen in the limit. The antibody will converge to a different solution depending on the horizon it's optimized for. In the extreme case, as the horizon tends to infinity and the virus is at its global optimum, the optimal choice of antibody coincides with the minimax regret choice. In the general setting, with new antibodies being deployed, the equilibrium will likely be a complicated dynamic equilibrium.

Technically speaking, there is a weak pure Nash equilibrium where the virus exactly replicates the anti-target, $v = t_a^-$, and the antibody exactly replicates the target, $a = t_v^+$. To prove this, if $v = t_a^-$:

$$
R_a(v,a) = B(v,a) - B(t^-_a, a) - B(v, t^+_v) = B(t_a^-,a) - B(t^-_a, a) - B(t_a^-, t^+_v) = - B(t_a^-, t^+_v)
$$

which is a constant, satisfying the conditions for the antibody. For the virus, if $a = t_v^+$:

$$
R_v(v,a) = - R_a(v,a) = -B(v,a) + B(t^-_a, a) + B(v, t^+_v) = -B(v,t_v^+) + B(t^-_a, t_v^+) + B(v, t^+_v) = B(t^-_a, t_v^+)
$$

which is, again, a constant, completing the proof. This Nash equilibrium disappears once an additional anti-target is used. Since this is a weak Nash, the virus, being a naive evolutionary learner, has no capacity to remain at the equilibrium anyway. Hence, we hypothesise that the equilibrium will be dynamic. Part of the reason we have the extra target $t_v^+$ and anti-target $t_a^-$ is that without them a trivial pure-strategy Nash equilibrium exists.
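As a sanity check, the constancy argument above can be verified numerically with an arbitrary binding table: the toy alphabet, sequence length, and random binding values below are illustrative stand-ins (not taken from the paper), since the proof holds for any function $B$.

```python
import itertools
import random

random.seed(0)
ALPHABET = "ACDE"  # toy residue alphabet (illustrative)
SEQS = ["".join(p) for p in itertools.product(ALPHABET, repeat=2)]

# Arbitrary binding-strength table B(x, y); the argument holds for any B.
B = {(x, y): random.random() for x in SEQS for y in SEQS}

t_a_minus = SEQS[0]  # the antibody's anti-target t_a^-
t_v_plus = SEQS[1]   # the virus's target t_v^+

def R_a(v, a):
    # Antibody reward as defined above: R_a = B(v,a) - B(t_a^-, a) - B(v, t_v^+)
    return B[(v, a)] - B[(t_a_minus, a)] - B[(v, t_v_plus)]

def R_v(v, a):
    # Zero-sum game: the virus's reward is the negation of the antibody's.
    return -R_a(v, a)

# With v = t_a^-, R_a collapses to the constant -B(t_a^-, t_v^+) for every a:
vals_a = {round(R_a(t_a_minus, a), 12) for a in SEQS}
# With a = t_v^+, R_v collapses to the constant B(t_a^-, t_v^+) for every v:
vals_v = {round(R_v(v, t_v_plus), 12) for v in SEQS}
print(len(vals_a), len(vals_v))  # → 1 1
```

Both sets collapse to a single value, matching the closed-form constants derived above; the rounding only absorbs floating-point noise.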
Summary: This paper introduces a long-term strategy using opponent shaping, a concept from game theory and reinforcement learning, to design antibodies that not only bind effectively to the virus but also influence the virus's evolutionary trajectory to make it less dangerous over time. The algorithm involves three main components:
- Virus-Antibody Game: Models the interaction between the virus and the antibody as a two-player zero-sum game.
- Simulated Viral Escape: Simulates how the virus evolves to escape the antibody over time.
- Antibody Optimization: Optimizes the antibody to perform well against both the current virus and its future evolved variants.

Claims And Evidence: Claims are clear.

Methods And Evaluation Criteria: Yes, they make sense.

Theoretical Claims: No theoretical claims in this paper.

Experimental Designs Or Analyses: I am not sure how the Myopic Antibodies are generated. I believe there could be more baselines to compare with, for example PPO, Rainbow DQN, and even LOLA (https://arxiv.org/abs/1709.04326) or AAA (https://arxiv.org/abs/2406.14662). Also, since we already have some trajectories of virus mutation, we could back-test on these trajectories to see whether the simulation of "how the virus evolves to escape the antibody over time" really has good predictive power, and whether the designed antibody would really perform well on the "future" virus in a back test.

Supplementary Material: I read the code gen_alg_basic.py and shaping_funcs.py (the code that simulates viral escape via evolution).

Relation To Broader Scientific Literature: Medical science, drug discovery.

Essential References Not Discussed: The paper should also discuss more related work on antibody design, such as Biological Sequence Design with GFlowNets, Reinforcement Learning for Sequence Design Leveraging Protein Language Models, and more.

Other Strengths And Weaknesses: The paper is overall clear to me.
Other Comments Or Suggestions: I think more discussion of antibody design methods and more baselines would make the paper better. I am not sure how the Myopic Antibodies are generated. I believe there could be more baselines to compare with, for example PPO, Rainbow DQN, and even LOLA (https://arxiv.org/abs/1709.04326) or AAA (https://arxiv.org/abs/2406.14662). Also, since we already have some trajectories of virus mutation, we could back-test on these trajectories to see whether the simulation of "how the virus evolves to escape the antibody over time" really has good predictive power, and whether the designed antibody would really perform well on the "future" virus in a back test.

Questions For Authors: See Other Comments Or Suggestions.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: Thank you for your review! As we understand, you have three primary concerns:
1. Lack of discussion of antibody design methods
2. Unclear how Myopic Antibodies are generated
3. No comparison to other reinforcement learning algorithms

## 1. Discussion of Antibody Design Methods

You are completely correct, and we have updated the paper to include references to more antibody design methods. The reason we didn't include such references originally is that ADIOS is somewhat agnostic to the choice of the antibody optimization algorithm used; the primary difference is that ADIOS emphasises the importance of accounting for the adaptation of the virus using simulated evolution. However, it is important to discuss the breadth of different antibody design methods in our related work section, which we have now rectified with additional references. In particular, we now reference energy-based antibody optimization methods (https://doi.org/10.1371/journal.pone.0105954, https://doi.org/10.1371/journal.pcbi.1006112), including optimization methods based on GFlowNets (https://doi.org/10.48550/arXiv.2411.13390), sequence-based language models (https://doi.org/10.1093/bioinformatics/btz895, https://doi.org/10.1038/s41598-021-85274-7) and structure-based approaches relying on GNNs (https://doi.org/10.48550/arXiv.2207.06616) and diffusion models (https://doi.org/10.48550/arXiv.2308.05027).

## 2. Generation of Myopic Antibodies

The Myopic Antibodies are generated in exactly the same way as the other shaper antibodies, but with a horizon of 0 in the objective. We now recognise that our explanation of this was unclear, as we didn't explicitly state that we also optimize myopic antibodies in the same way we optimize shapers. We have now changed line 233 from "To optimise antibody shapers, we …" to "To optimise both shapers and myopic antibodies, we …" to rectify this. Thank you for bringing this to our attention!

## 3. Comparison to RL Baselines

We don't compare against several baselines because the core question of our current work is whether it is possible to apply opponent shaping to viruses and design effective antibody shapers, which we have now successfully demonstrated. However, your suggestion is a natural follow-up question - given that we can shape viral evolution, what algorithms (PPO, Rainbow DQN, etc.) allow us to create the optimal antibody shapers in the outer loop? This becomes significantly more viable to test due to our enormous speedup of Absolut! (x10,000), making fast, full training rollouts on the GPU possible. Still, there are some challenges in adapting a few of the works you mentioned to our problem setting. In particular, a lot of the opponent shaping literature, such as LOLA and AAA, relies on assumptions suitable for RL-trained/humanlike agents, but less suitable for biological adaptive agents, such as viruses, which do not perform value iteration but evolve instead.

## Back-Testing on Historical Data

Thank you for this suggestion. We fully agree with you! We are already planning on doing the back-testing, but it requires a much higher-fidelity simulation of both binding and viral escape. As a result, we are working on extensions to this work, where instead of using Absolut! as our binding simulator we use more complex models. These include AlphaFold3 (and similar) to predict the changing structure of antibodies and viruses, as well as protein-protein docking models to estimate the binding strength based on the structure. Using other predictive models like EVEscape (https://doi.org/10.1038/s41586-023-06617-0), we also account for additional factors that influence viral escape. We can retrospectively evaluate the 'Myopic' case by using historical antibodies. I.e.
we can show that by setting up our simulator to a state equivalent to the beginning of the pandemic and simulating viral escape as a response to some of these real-world antibodies, the real viral strains observed later in the pandemic are ‘in distribution’ of our simulator’s outputs. That would show the simulator is ‘trustworthy’, and successfully running ADIOS in it would indicate real-world applicability. ## Additional Experiments To better evaluate ADIOS we have now also conducted extra experiments with 3 additional viruses: flu, MERS and the West Nile virus. In all of these cases, described in the updated manuscript, our experiments show that the antibody shapers generated by ADIOS can successfully shape the viruses. The results of these experiments are now added to the paper as well. ## Conclusion Hopefully, we have addressed all your concerns! If we did, would you consider raising your score for our paper? If not, please let us know any further questions or suggestions. We are very focused on making this paper as high quality as possible, so we really appreciate your feedback. --- Rebuttal Comment 1.1: Comment: Hi, Thanks for the rebuttal. I still think reinforcement learning baselines are very important to this work, although you already show "it is possible to apply opponent shaping to viruses and design effective antibody shapers". I understand the time is limited to implement new experiments. I would maintain my score of the paper. Good luck! --- Reply to Comment 1.1.1: Comment: Thank you for your continued engagement with our work! We're glad to see that the RL baselines concern is your only remaining point, which suggests we've successfully addressed all your other feedback. If we understand correctly, you are suggesting using RL baselines in one of three possible places: 1. In the inner loop for the virus 2. In the outer loop, for shapers 3. For myopic antibodies Additionally, you also suggest testing ADIOS against other opponent shaping algorithms. 
Below we consider all of these points. ## 1. Inner loop RL baselines for the virus: We deliberately use an evolutionary algorithm to model viral evolution, reflecting biological processes of mutation and selection rather than an RL-based mechanism. Substituting an RL algorithm here would undermine this biological realism. ## 2. Outer loop RL baselines for shapers: While we understand your suggestion to use algorithms like PPO or Rainbow DQN, implementing RL in the outer loop presents significant challenges. RL algorithms like PPO usually require 10^5+ rollouts to make policy improvements (https://doi.org/10.48550/arXiv.2005.12729). In our most expensive setting with horizon H=100, we use at most 12,000 rollouts. While applying RL to the outer loop is theoretically possible, it would require significantly redefining the MDP and making substantial adaptations to work effectively within these constraints. These modifications would be extensive enough that we believe they merit exploration in a separate paper rather than within this initial work on ADIOS. We also appeal to the opponent shaping literature. Model-Free Opponent Shaping (M-FOS, https://doi.org/10.48550/arXiv.2205.01447) simply chooses one effective outer-loop optimizer (either Genetic Algorithms or PPO) to demonstrate the viability of shaping. Similarly, we believe additional RL algorithms are not essential to validate ADIOS, as a genetic algorithm is sufficient. ## 3. RL baselines for myopic antibodies: In our paper, the myopic antibody represents the baseline, equivalent to the ‘naive learner’ in other opponent shaping literature such as LOLA or M-FOS. Although a more sophisticated myopic optimizer might exist, Figure 2a demonstrates that optimizing for the myopic objective doesn’t improve performance on the true objective. Figure 2d similarly shows the myopic strategy plateauing. 
We therefore expect that even an RL-based approach to the myopic objective would not notably improve its true-objective performance. ## 4. Opponent shaping baselines for ADIOS: Standard opponent shaping evaluation includes comparing against both other shaping algorithms and naive learners. We include the latter through our myopic antibody baseline. Most shaping algorithms are unsuitable comparisons as they assume reinforcement learning-based opponents, not evolutionary dynamics. M-FOS could potentially work, but ADIOS is already an adaptation of M-FOS to this biological context, making such comparison redundant. ## Conclusion Overall, we believe our comparison of shaper vs. myopic antibodies adequately demonstrates ADIOS’s key contribution: how shaping can influence viral evolution. Our additional tests with multiple viruses further reinforce this finding. We trust this clarifies why we have not included RL-based or other opponent shaping baselines and why we see these as future research directions. Thank you for your thoughtful feedback and for considering this explanation in your final assessment.
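To make the horizon-based objective discussed in this exchange concrete, here is a minimal toy sketch (all functions and dynamics below are illustrative stand-ins, not ADIOS's actual simulator or optimizer): the myopic antibody optimizes the shaping objective with horizon 0, while a shaper also accounts for subsequent rounds of viral evolution.

```python
import numpy as np

rng = np.random.default_rng(0)

def binding(antibody, virus):
    """Toy stand-in for a binding simulator: higher = stronger binding."""
    return -np.abs(antibody - virus)

def evolve_virus(antibody, virus, steps=20):
    """Toy evolutionary inner loop: the virus mutates and keeps only
    mutations that reduce binding (i.e., escape the antibody)."""
    for _ in range(steps):
        mutant = virus + rng.normal(scale=0.1)
        if binding(antibody, mutant) < binding(antibody, virus):
            virus = mutant
    return virus

def shaping_objective(antibody, virus, horizon):
    """Mean binding over `horizon` rounds of viral evolution.
    horizon=0 recovers the myopic objective (current virus only)."""
    total = binding(antibody, virus)
    for _ in range(horizon):
        virus = evolve_virus(antibody, virus)
        total += binding(antibody, virus)
    return total / (horizon + 1)
```

With horizon=0 the loop is skipped entirely, which is the sense in which the myopic antibodies are "generated in the same way as the shapers, but with a horizon of 0 in the objective."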
Summary: The manuscript presents ADIOS as a method for optimizing antibodies while considering viral escape. In essence, the proposed approach is a simplified version of adversarial training. Experiments were conducted using a binding prediction method (Absolut!), and the results indicate that the proposed method outperforms the one that does not account for viral escape. Claims And Evidence: One major concern with the method is that the optimization results are evaluated using Absolut!, while the proposed method may generate data that lies outside of Absolut!’s distribution. Therefore, an improvement in the results measured by Absolut! does not necessarily reflect an actual enhancement in the final antibody. It is highly likely that the resulting sequences fall outside the distribution of Absolut!, which is, in fact, a key challenge in this field. As an application-oriented paper, such considerations should be addressed. Thus, the results presented in the manuscript cannot conclusively demonstrate that the proposed method leads to antibodies with improved properties in real-world scenarios. Methods And Evaluation Criteria: The performance of the proposed method is evaluated using Absolut!, which does not demonstrate the method's applicability in real-world scenarios. Additionally, the proposed method is tested with only one type of antigen, making it impossible to conclude that the method would be effective for other antigens. Theoretical Claims: There's no theoretical claims in this manuscript. Experimental Designs Or Analyses: I have reviewed all the experimental designs, and I found issues with both the evaluation metrics (the computed binding scores) and the data (only one type of antigen was used). These limitations prevent the proposed method from demonstrating its performance and generalizability in real-world scenarios. Supplementary Material: I reviewed all the supplementary material. 
Relation To Broader Scientific Literature: Antibody optimization is a critical area of research, with significant implications for drug and vaccine design, while viral escape remains a major challenge in vaccine development. Numerous efforts have been made to build antibody binding prediction models, but none have achieved perfect accuracy. Although the motivation behind this paper is sound, the method and experimental design fail to address the key challenges encountered in real-world applications. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: The acceleration of Absolut! contributes to the broader user community of Absolut!. Weaknesses: 1. The novelty of the algorithm is limited. 2. In terms of paper presentation, the figures in the supplementary material should be renumbered rather than continuing the numbering from the main text. Other Comments Or Suggestions: No other comments Questions For Authors: 1. What is the computational speed of the proposed method? 2. How is the correctness of the "binding" prediction determined when binding poses change? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your review. Below we address your concerns and questions: # Extra Experiments Following your feedback, we have now conducted extra experiments with 3 additional viruses: flu, MERS and the West Nile virus. In all of these cases, our experiments show that the antibody shapers generated by ADIOS successfully shape the viruses, limiting viral escape. We have now added these results to the paper, alongside correcting the numbering of the appendix figures. # Sampling Outside of Absolut!’s Distribution Thank you for bringing up the important problem of a binding model's distribution, or in other words its domain of applicability. We have carefully considered this problem and, in fact, one of the main reasons we decided to use Absolut! as our binding simulator is to avoid sampling outside of the distribution. As we explain in Section 3 of our paper, sequence-based ML models like Mason et al., 2021 (https://doi.org/10.1038/s41551-021-00699-9); Lim et al., 2022 (https://doi.org/10.1080/19420862.2022.2069075); and Yan Huang et al., 2022 (https://doi.org/10.3389/fimmu.2022.1053617), being data-driven predictive models, struggle to generalise beyond their training distribution and are therefore not suitable for our application, which explores novel viral mutations. However, Absolut! is a mechanistic model (not data-driven) that we use as an oracle for querying the binding strength of new sequences. As we write in Section 3, “For any antibody-antigen pair, Absolut! enumerates possible binding poses and computes their energy using the Miyazawa-Jernigan potential”. This process doesn’t have a training distribution or a domain of applicability and is well suited to our application, which requires mutating both the virus (antigen) and the antibody. # Real-World Applicability We appreciate the challenges of binding prediction, and acknowledge that using Absolut! (or any other simulator) comes at a cost in real-world applicability.
However, it has been shown that ML models that perform better on Absolut! also perform well on real-world datasets (https://doi.org/10.1038/s43588-022-00372-4). Moreover, using more realistic models is extremely expensive, and cheaper data-driven models are not suitable due to their limited domain, making Absolut! an ideal choice for the current work. More broadly, breakthroughs in biology have come from building on previous computational works with initially limited real-world applicability. For example, the development of effective mRNA vaccines has been credited at least in part to RNA folding algorithms. Original papers such as Nussinov and Jacobson, 1980 (https://doi.org/10.1073/pnas.77.11.6309) and Zuker and Stiegler, 1981 (https://doi.org/10.1093/nar/9.1.133) laid the foundations for more modern heuristics for RNA structure prediction and therefore for therapeutic applications. We are currently working on extensions to this work in which, instead of using Absolut! as our binding simulator, we use more complex models. These include AlphaFold3 (and similar) to predict the changing structure of antibodies and viruses, as well as protein-protein docking models to estimate the binding strength based on the structure. We also account for additional factors that influence viral escape. However, Absolut! is a necessary first step to prove that ADIOS is feasible and effective. Due to its speed, it is also a great testbed for developing methods which can then be scaled to more computationally expensive models. # Answers to Questions 1. In Figure 2d, we show the number of binding samples required to achieve a given antibody fitness for different shaping horizons. In Table 1, we report how long a single binding sample takes in our GPU-accelerated implementation: $2.1 \times 10^{-4}$ s. As the binding calculation is the main computational cost, a single run takes up to 4000 s (~1 h).
For Figure 2d we chose to show binding samples on the x-axis (instead of runtime) because it provides a reference point for future work, where one will want to use a more expensive (i.e. slower) binding simulator. Would it be helpful to discuss in the paper how the binding calculation is the main computational cost? Or to show a plot with the computational speed of our method (on our hardware) on the x-axis? 2. Our binding simulator, Absolut!, is not a pre-trained predictive model but a mechanistic one. Absolut! applies the same scoring computation to any possible binding pose to calculate the binding energy; a priori, there is no pose for which the binding calculation is systematically more “correct”. Thus, we hypothesise that the mean correctness of the binding score should not systematically change as we mutate the sequences and/or change binding poses. # Conclusion We value and respect your critique, but we believe it is in large part due to a misunderstanding of Absolut!. If you disagree, please let us know why! We are always looking to improve the quality of our work. Otherwise, we would appreciate a re-evaluation of your review score.
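As a rough back-of-the-envelope illustration of the runtime figures quoted in the answer above (per-sample cost from Table 1, run length from the text), assuming the binding calculation dominates the wall-clock time:

```python
# Back-of-the-envelope check of the quoted runtime figures:
# ~2.1e-4 s per binding sample and single runs of up to ~4000 s.
per_sample_s = 2.1e-4          # seconds per binding sample (Table 1)
max_run_s = 4000.0             # upper bound on a run's wall-clock time

implied_samples = max_run_s / per_sample_s   # samples a run can afford
hours = max_run_s / 3600.0                   # run length in hours

print(f"~{implied_samples:.1e} binding samples, {hours:.2f} h")
# prints: ~1.9e+07 binding samples, 1.11 h
```

This is why the binding-sample axis in Figure 2d translates directly to runtime once a simulator's per-sample cost is known.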
Flow Q-Learning
Accept (poster)
Summary: This paper proposes using flow models to tackle offline RL tasks. To leverage the Q-function for guiding the learning of the flow-based policy while avoiding the computational cost of multiple backpropagations, this paper proposes learning a one-step action generation policy through a flow-policy-constrained distillation loss. Claims And Evidence: Most of the claims in this paper are well supported, but some parts remain difficult to understand. For details, please refer to the questions and comments. Methods And Evaluation Criteria: This paper compares several classic offline RL methods. Theoretical Claims: I have read the theoretical parts related to the paper. Experimental Designs Or Analyses: Please refer to the questions and comments for experimental concerns. Supplementary Material: I have read the content of the appendix. Relation To Broader Scientific Literature: In recent years, diffusion-based RL methods have shown great potential in offline RL tasks. Since flow models are a more general class of generative model than diffusion models, applying them to RL is a promising research direction. Essential References Not Discussed: None Other Strengths And Weaknesses: Please refer to the questions and comments for review concerns. Other Comments Or Suggestions: 1. In line 110: To make the description clearer and more intuitive, I suggest the authors directly use notations like $a^1$ and $a^0$ to represent actions at different time points. 2. In line 139, right column: Does $\mu_w$ serve as a deterministic policy? In other words, do you want to distill a deterministic policy that maximizes the Q function and, at the same time, minimizes the output discrepancy with the flow policy $\mu_\theta$? 3. I suggest adding a parameter comparison between $\mu_w$, $\mu_\theta$, and Gaussian policies. 4. Compared to previous baselines, the key difference in the method used in this paper lies in the policy constraint.
Specifically, it measures the distance between a deterministic policy and an expressive policy, whereas previous Gaussian-policy methods impose constraints on the state-to-action mapping learned from the dataset, and diffusion policies measure the distance between two expressive policies. The paper would be easier to understand with a discussion of the differences among these ways of applying policy constraints. Questions For Authors: 1. Is the analysis in the Remark redundant? It does not seem to contribute much to the understanding of the proposed method. 2. Would using a better ODE solver lead to improved performance? Would using a different marginal probability path result in better performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed review and constructive feedback on this work. We especially appreciate the clarification questions about deterministic policies and policy constraints, as well as several helpful suggestions. We also conducted an additional ablation study on ODE solvers. Please find our response below. --- * **Would a better ODE solver/different marginal probability path improve performance?** Thank you for the question. Following the suggestion, we conducted an additional ablation study to compare different ODE solvers for FQL. We consider three ODE solvers in this experiment: (1) the Euler method (**default**), (2) the midpoint method, and (3) the Runge-Kutta (RK4) method. The table below compares the final performance of these ODE solvers on $3$ OGBench tasks ($8$ seeds, $\pm$ denotes standard deviations): | Task | Euler (Default) | Midpoint | RK4 | |-|-|-|-| | $\texttt{antmaze-large}$ | $76 \pm 23$ | $83 \pm 6$ | $82 \pm 4$ | | $\texttt{cube-double}$ | $56 \pm 16$ | $54 \pm 17$ | $7 \pm 20$ | | $\texttt{scene}$ | $87 \pm 8$ | $83 \pm 12$ | $83 \pm 5$ | The results suggest that the performance of FQL is generally robust to the choice of ODE solver, except that RK4 leads to somewhat unstable performance on $\texttt{cube-double}$. While we have not tested different marginal probability paths, we expect similar results, given that the performance of FQL is generally robust to other flow-related hyperparameters (Figure 9). That said, we believe a better ODE solver or marginal path sampler might further improve performance in more complex tasks and datasets. * **About different kinds of policy constraints.** As the reviewer correctly pointed out, Gaussian behavior-constrained actor-critic methods (e.g., ReBRAC) minimize the distance between the RL policy's actions and dataset actions, while FQL minimizes the distance between the RL policy's actions and the BC policy's actions. They both serve the same role of a behavioral regularizer. 
The main difference is that ReBRAC *directly* imposes a *Gaussian*-based constraint, while FQL *indirectly* imposes a *flow*-based constraint. We note that FBRAC in our experiment lies precisely between the two: it *directly* imposes a *flow*-based behavioral constraint. We have empirically compared these $3$ policy extraction schemes (ReBRAC, FBRAC, and FQL) in Table 2 of the paper. The result shows that our indirect flow-based behavioral constraint (FQL) indeed leads to better performance than both the direct Gaussian-based behavioral constraint (ReBRAC) and direct flow-based behavioral constraint (FBRAC). Following the suggestion, we will further clarify the differences between these policy constraints in the final version of the paper. * **Is $\mu_\omega$ in L139 a deterministic policy?** As mentioned in the "Notational Warning" paragraph (L126), $\mu_\omega(s, z)$ is a deterministic function. However, this does **not** mean that we perform distillation into a deterministic policy: even though $\mu_\omega(s, z)$ is a deterministic function, since $z$ is a random variable (sampled from $\mathcal{N}(0, I)$), it serves as a *stochastic* policy $\pi_\omega(a \mid s)$ when marginalizing out $z$. As a result, the one-step policy clones the flow BC policy **for each $\mathbf{z}$** through *deterministic distillation*, while maximizing the Q function with *stochastic* actions (Eq. (8)). While we (partly) explained this subtlety in the "Notational Warning" paragraph, we will further clarify this point in the final draft to prevent any potential confusion. * **About the remark box.** The purpose of the remark box is to provide further theoretical insight into the policy constraint of FQL and to discuss its relation to previous offline RL methods (TD3+BC, AWAC, CQL, etc.). We believe this section can be safely skipped if the reader is mainly interested in empirical results and methodology (which is the main reason we formatted this discussion with a separate box). 
* **Parameter comparisons between policies.** To clarify, we used the same [512, 512, 512, 512]-sized MLPs for *all* networks (including flow policies, one-step policies, and Gaussian policies; Table 5), unless otherwise noted, and they have almost identical numbers of parameters. There are some exceptions (e.g., IDQL), but we used smaller networks *only* when the smaller ones performed better than the default-sized ones. We fully described the way we chose these hyperparameters in Appendix F.2. We will further clarify this point in the final draft. * **Notational suggestions.** Thanks for the helpful suggestions! We will incorporate them in the camera-ready version. --- We would like to thank the reviewer again for raising important questions about FQL. We believe the additional results and clarifications have significantly improved the quality of the paper. Please let us know if you have any additional questions or concerns. If we have addressed your concerns, would you consider raising your rating? --- Rebuttal Comment 1.1: Comment: I thank the authors for their clarification and additional experiments. Most of my concerns are addressed. I tend to keep the current review evaluation.
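The point in the rebuttal above, that $\mu_\omega(s, z)$ is a deterministic function yet acts as a stochastic policy once $z \sim \mathcal{N}(0, I)$ is sampled, can be illustrated with a minimal NumPy sketch (the tiny network below is a hypothetical stand-in, not FQL's actual policy architecture):

```python
import numpy as np

# Toy stand-in for the one-step policy network mu_omega(s, z); the
# weights are random placeholders, not trained FQL parameters.
rng = np.random.default_rng(0)
W_s = rng.normal(size=(2, 4))   # state  -> action logits
W_z = rng.normal(size=(4, 4))   # noise  -> action logits

def mu(s, z):
    """Deterministic function of (s, z): same inputs -> same action."""
    return np.tanh(s @ W_s + z @ W_z)

s = np.ones(2)                              # a fixed state
z1, z2 = rng.normal(size=4), rng.normal(size=4)

# Deterministic in (s, z): repeated evaluation agrees exactly.
assert np.array_equal(mu(s, z1), mu(s, z1))
# Stochastic as a policy pi(a|s): fresh noise z yields different actions.
assert not np.array_equal(mu(s, z1), mu(s, z2))
```

Distillation matches the one-step network to the flow policy per noise sample $z$, while marginalizing over $z$ at evaluation time makes the resulting policy stochastic.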
Summary: The paper introduces Flow Q-learning (FQL), an offline RL method that combines flow-matching policies with Q-learning to address challenges in modeling complex action distributions. FQL uses two components: (1) an expressive flow-matching policy trained via behavioral cloning (BC) to capture multimodal dataset actions, and (2) a separate one-step policy trained with Q-learning to maximize values while distilling knowledge from the flow model. This decoupling avoids unstable recursive backpropagation through iterative flow steps and eliminates costly iterative action generation during evaluation. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No Experimental Designs Or Analyses: D4RL Supplementary Material: No Relation To Broader Scientific Literature: The article provides a detailed discussion on the relevant literature concerning RL with diffusion and flow models. Essential References Not Discussed: The method proposed by the authors bears a resemblance to reward distillation in alignment for image diffusion, where a few-step model is distilled while simultaneously maximizing the reward. [1] Reward Guided Latent Consistency Distillation​ [2] DOLLAR: Few-Step Video Generation via Distillation and Latent Reward Optimization Other Strengths And Weaknesses: No Other Comments Or Suggestions: No Questions For Authors: Could you provide more details about the inference step for the BC Flow Policy? Specifically, I would like to know whether the BC Flow Policy incorporates classifier-free guidance (CFG) during its inference process. Additionally, I’m curious if the one-step policy also utilizes CFG. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed review and constructive feedback on this work. We especially appreciate your pointing out related work in different domains. Please find our response below. --- * **Inference details of policies.** Thanks for asking this clarification question! In FQL, *neither* the BC flow policy nor the one-step policy uses CFG (or CG), as there are no separate class labels in our problem setting. More specifically, the one-step policy $\mu_\omega(s, z)$ is simply a standard feedforward neural network (so no iterative sampling procedure is needed), and the BC flow policy $\mu_\theta(s, z)$ is based on the standard Euler method for ODE solving. At evaluation (inference) time, only the one-step policy is used (L160), and no iterative process is involved. We refer to the algorithm box in the main text (Algorithm 1) as well as L679-L688 in Appendix C for the full description of the sampling procedure. * **Related works on image diffusion models.** Thanks for pointing out these relevant works in image diffusion models! We will cite and discuss them in the camera-ready version. --- We would like to thank you again for raising clarification questions and suggesting several related works. Please feel free to let us know if you have any additional questions or concerns. If we have addressed your concerns, would you consider raising your rating?
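The Euler-method sampling mentioned in the rebuttal above can be sketched as follows (the linear velocity field here is a toy stand-in for the learned BC flow network, chosen so the result can be checked against a closed form):

```python
import numpy as np

def euler_sample(velocity, s, z, n_steps=100):
    """Generate an action by Euler-integrating da/dt = velocity(s, a, t)
    from t=0 to t=1, starting at noise z (as in flow-matching samplers)."""
    a, dt = np.array(z, dtype=float), 1.0 / n_steps
    for k in range(n_steps):
        a = a + dt * velocity(s, a, k * dt)
    return a

# Toy velocity field standing in for a learned flow network:
# it drives samples toward a state-dependent target c(s) = mean(s).
def toy_velocity(s, a, t):
    return s.mean() - a

s = np.array([1.0, 3.0])    # toy state; target c(s) = 2.0
z = np.array([0.0, 4.0])    # initial noise sample
a = euler_sample(toy_velocity, s, z)

# Closed form for this linear field: a(1) = c + (z - c) * exp(-1)
assert np.allclose(a, 2.0 + (z - 2.0) * np.exp(-1), atol=0.01)
```

No guidance term appears anywhere in this loop, consistent with the rebuttal's point that neither policy uses CFG; the one-step policy skips the integration entirely and maps $(s, z)$ to an action in a single forward pass.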
Summary: This paper proposes Flow Q-learning (FQL), an offline reinforcement learning method that integrates expressive flow-matching policies for modeling complex action distributions. ## Update after rebuttal: I have read the rebuttal and the discussions from other reviewers. I am maintaining my score. Claims And Evidence: The paper identifies a clear challenge in offline RL related to using flow or diffusion policies. It presents an elegant solution: training an expressive one-step policy separately from the iterative flow policy, which is both theoretically sound and empirically effective. Methods And Evaluation Criteria: The approach is grounded in existing behavior-regularized actor-critic frameworks but innovatively avoids costly BPTT during RL training. Theoretical Claims: The derivation relating the distillation loss to a Wasserstein behavioral regularizer provides additional insight and theoretical justification for why the proposed method might perform better. Experimental Designs Or Analyses: Extensive experiments are performed, demonstrating consistently strong performance across benchmark tasks. Supplementary Material: Supplementary material is extensive and well-organized. Relation To Broader Scientific Literature: The key contributions of this paper are related to offline reinforcement learning, generative modeling (specifically flow matching), and policy extraction techniques. Essential References Not Discussed: Not applicable. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the positive feedback about this work! We would be happy to address any additional questions or concerns you may have, so please feel free to let us know. If there are no further concerns or questions, would you consider raising your rating?
Summary: This paper proposes the offline-RL method Flow Q-Learning (FQL), which leverages an expressive flow-based generative model for modeling the action distribution while avoiding common issues such as unstable backprop through each time-step or less efficient re-weighting schemes. This is achieved by introducing a so-called “1-step” policy which is used to maximize the Q function while being regularized to match the behavior cloning flow model that learns the offline transitions from data. This 1-step model can then be evaluated efficiently and shows increased performance across many OGBench and D4RL tasks compared to previous offline methods. Claims And Evidence: -The paper claims that FQL outperforms previous offline RL methods on a wide range of tasks, and in particular that this type of behavior-constrained policy extraction method is better than previous flow/diffusion-based extraction schemes like advantage-weighted flow matching. -They claim that this 1-step policy preserves the expressivity of the underlying action distribution learned by the BC flow-matching model. -Training FQL is more computationally efficient and effective than related flow-based policy extraction approaches. I think all claims are pretty well substantiated experimentally, except for the 1-step policy's expressiveness, which is supported only by the remark about how the distillation loss is an upper bound for the W2 distance between the policies. For this reason it makes sense that the parameter $\alpha$ needs to be tuned carefully. Methods And Evaluation Criteria: The proposed method is very practical with the application of offline RL in mind. All design choices are well supported by the OGBench and D4RL benchmarks. Theoretical Claims: This paper does not really make any theoretical claims except for the connection between the W2 distance and the distillation regularizer. I didn't check this claim carefully, but the authors do ablate the parameter $\alpha$.
Experimental Designs Or Analyses: I found the experimental section very thorough in comparing against previous offline RL approaches. I found the QA format of the discussion engaging and informative to read from a practitioner's standpoint. It would be nice to better understand the limitations of FQL, either in scalability due to the simulation-based loss (at least it requires a forward pass of the ODE) or in task expressiveness and when it does not compare well to simple Gaussian policies. Supplementary Material: I reviewed some additional results tables. Relation To Broader Scientific Literature: The paper contextualizes its contribution quite well with existing research integrating generative models (diffusion, flow) with RL. It spends quite a lot of time distinguishing the mechanisms used from works like Diffusion-QL, IDQL, and CAC and explaining why FQL might be more performant. Essential References Not Discussed: I don't feel too strongly about this, but it might be useful to discuss how offline RL and FQL deal with more realistic data sources / real-world data with noise. Perhaps this one is suitable: Zhou 2022, Real World Offline Reinforcement Learning with Realistic Data Source Other Strengths And Weaknesses: Strengths: -The methodology is clear and simple, justified from the successes and failures of previous works. -Strong empirical results and ablations -Very clear and informative discussion of previous approaches which I believe is useful to practitioners. Weaknesses: -Some aspects of FQL aren’t fully addressed, such as scalability to state-action dimension and real-world data sources. -It's not very clear how expressive the 1-step policy is, nor how it trades off the BC policy distribution against the biasing/tilting of the distribution towards the Q function. There is probably a trade-off between staying close to the expressive flow-model state-action representation and maximizing the Q function more quickly.
Other Comments Or Suggestions: This is minor, but I would find the loss function and the algorithm more readable if, instead of the notation “$a^\pi$”, you used “$a^\omega$” or “$a^\theta$” to help distinguish which policy the action is sampled from. In the Remark, it seems like $\pi^\theta$ and $\nu^\theta$ are the same thing? Questions For Authors: 1. Is there a way to clearly demonstrate the expressiveness of the 1-step policy and how it trades off the BC policy distribution against the biasing/tilting of the distribution towards the Q function? There is probably a trade-off between staying close to the expressive flow-model state-action representation and maximizing the Q function more quickly. 2. How does FQL scale with state-action dimension and with real-world data sources, which are potentially noisy? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the highly detailed review and constructive feedback about this work. We especially appreciate your question about the expressivity of the one-step policy, for which we conducted an additional experiment. Please find our response below. --- * **Is the one-step policy expressive enough?** Thanks for asking this valid question. To empirically address it, we performed a controlled ablation study to compare the expressivity of (1) Gaussian BC, (2) full flow BC, and (3) one-step distilled BC policies. Here, we evaluate their *BC* performance (not RL performance) to solely focus on their expressivity without being confounded by different policy extraction strategies (in particular, note that there's no clear, straightforward way to train a full flow policy with RL). Specifically, we trained these three types of policies on $4$ goal-conditioned OGBench tasks, and measured their goal-conditioned BC performance. To ensure a fair comparison, we used the same architecture and common hyperparameters, and ablated only the policy class. The table below shows the mean and standard deviation across 8 seeds at 1M gradient steps (200K steps for $\texttt{cube-single}$ due to overfitting). | Task | Gaussian | Full Flow | One-Step Distillation | |-|-|-|-| | $\texttt{antmaze-medium}$ | $77 \pm 3$ | $82 \pm 6$ | $85 \pm 6$ | | $\texttt{cube-single}$ | $54 \pm 17$ | $81 \pm 15$ | $90 \pm 3$ | | $\texttt{cube-double}$ | $4 \pm 3$ | $15 \pm 9$ | $19 \pm 9$ | | $\texttt{scene}$ | $13 \pm 3$ | $34 \pm 5$ | $34 \pm 8$ | The results show that one-step distilled policies achieve nearly identical performance to full flow policies on these tasks, and both flow variants generally outperform Gaussian policies. 
Overall, we expect one-step policies to be expressive enough to model complex action distributions in many practical scenarios, especially considering that one-step flow models can generate highly realistic samples [1, 2] even for high-dimensional image generation.

[1] Liu et al., Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow (2023).
[2] Frans et al., One Step Diffusion via Shortcut Models (2025).

* **Potential trade-off of using a one-step policy.** As shown in the table above, one-step policies often have very similar expressivity to full flow policies (at least on our benchmark tasks). Hence, we expect the "trade-off cost" of using a one-step policy to be generally marginal. That said, this trade-off might come into play if the task requires *highly* precise control and thus the full expressivity of iterative generative models. In this case, one solution would be to relax FQL to train a *few*-step distillation policy to strike a balance between precision and the number of recursive backpropagations. We leave this extension of FQL for future work.

* **Scalability to higher state-action dimensions and real-world data sources.** Thanks for the question. We would like to first note that we have evaluated FQL on a diverse set of high-dimensional benchmark tasks (e.g., $\texttt{visual-*}$ tasks require *pixel*-based control, $\texttt{humanoidmaze}$ requires $21$-DoF whole-body control, and $\texttt{adroit}$ tasks require $24$-DoF dexterous manipulation). Moreover, the OGBench tasks used in this work are generally more complex than the D4RL tasks used in prior work, and they feature noisy, non-Markovian, multi-modal action distributions that (partly) resemble real-world datasets. That said, we did not evaluate FQL on real robotics data, as this work focused more on the algorithmic side of the method.
We believe applying FQL to real robots with pre-trained VLA BC flow models [3] is a particularly exciting direction for future research. We will mention the lack of real-world experiments as a limitation (and discuss relevant work) in the final version of the paper.

[3] Black et al., π0: A Vision-Language-Action Flow Model for General Robot Control (2024).

* **Notational suggestions (e.g., $a^\theta$ instead of $a^\pi$) and minor clarifications.** Thanks for the helpful suggestions! We will incorporate them into the camera-ready version. In the remark box, as the reviewer correctly pointed out, $\pi^\theta$ and $\nu^\theta$ correspond to the same distribution. Our original intention was to distinguish $\nu^\theta$ as a probability measure, but in hindsight, we feel that this distinction is unnecessary, and we will revise the paper to use the same $\pi^\theta$ notation in the remark box.

---

We would like to thank you again for raising important questions about FQL, and we believe the additional results and clarifications have strengthened the paper. Please let us know if you have any additional concerns or questions.

---

Rebuttal Comment 1.1:

Comment: I want to thank the authors for their follow up and feedback. My concerns have been addressed and I maintain my positive rating.
Learning Along the Arrow of Time: Hyperbolic Geometry for Backward-Compatible Representation Learning
Accept (poster)
Summary: The authors propose Hyperbolic Backward-Compatible Training (HBCT), which is essentially an objective for backwards-compatible representation learning in hyperbolic space. HBCT balances the objective of the embedding loss (e.g. cross-entropy on image classification) with a hyperbolic entailment loss that encourages a partial order between old-model and new-model embeddings, dynamically weighted (via RINCE loss) by the old model’s uncertainty for that input. Across a suite of experiments covering a variety of plausible compatibility scenarios (extending data, extending classes, new architecture, and combinations thereof), HBCT performs well.

## Update after rebuttal

I was satisfied with the authors' responses and am maintaining my weak-accept recommendation.

Claims And Evidence: The authors’ claims about their method’s performance on retrieval tasks are solidly backed by experimental results and theoretical justification. The justification for some of their implementation details (e.g. many hyperparameter choices, the specific formula for uncertainty, etc.) is limited. I call these out more specifically in my questions for the authors.

Methods And Evaluation Criteria: The authors’ approach to testing their method makes sense to me, though I would like to see classification accuracy included as a metric in their results (see Weaknesses section).

Theoretical Claims: The authors provide no proofs with their paper. For most of their theoretical claims, the authors defer to the existing literature, which is fine by me. However, certain claims—the form of equation 10, the use of the hyperboloid model, the choice of $\beta_n = \beta_o + 0.2$ for clipping—are neither justified with proofs/mathematical intuitions nor tested via ablations.

Experimental Designs Or Analyses: On the basis of the description in the paper, all aspects of the experimental design appear sound to me.

Supplementary Material: The authors only include a single figure in the appendix, which I reviewed.
Relation To Broader Scientific Literature: This paper unites the fields of hyperbolic deep learning (particularly the computer vision applications thereof) with backwards-compatible learning. To the best of my knowledge, this is the first paper at the intersection of these two fields.

Essential References Not Discussed: To the best of my knowledge, the authors’ references are adequate.

Other Strengths And Weaknesses:

**Strengths:**
* **Novelty:** this is the first work (to my knowledge) to combine the fields of backward-compatible representation learning and hyperbolic geometry; the motivation for combining these is compelling. The use of entailment loss as a relaxation of distance-based matching for embeddings is especially insightful.
* The authors demonstrate strong results on a wide range of retrieval benchmarks. This is an admittedly idealized setting, but it gets at the core differences between their approach and other approaches in the literature.
* The method is highly general: in particular, the models are not expected to be hyperbolic to start with.

**Weaknesses:**
* **Many choices are unjustified:**
  * Most hyperparameter choices are only justified in terms of existing literature. The authors claim to have tried a hyperparameter sweep converging on $\lambda=0.3, \tau=0.5, \beta=0.01$, but do not include the results in the Appendix.
  * The use of the Lorentz model is somewhat surprising, given that the entailment cone literature generally relies on the Poincare model and the authors make no use of the timelike dimension. I understand that these models are interchangeable, so it may be the case that this is purely a matter of convenience/numerical stability—if so, I would like the authors to clarify this.
  * The derivation of Equation 10 is unclear: what is the relationship between this quantity and the distance from the origin of the hyperboloid?
* **Poor structure** throughout the work.
For instance:
* Sections 2.1 and 2.2 could be combined (there is no need to e.g. define the tangent plane twice);
* It is unclear why "norm control for numerical stability" and "overall training pipeline" are under subsection 4.3;
* **Limited results:** The authors only test retrieval performance, evaluated via CMC@k and mAP. While I agree these are the most important things to test, what other metrics can be evaluated? In particular, since the authors discuss classification as a training objective, what happens to classification performance?

Other Comments Or Suggestions:
* L115: The Lorentzian norm is defined but not used; what the authors are really interested in is the norm of the spacelike dimensions
* L288: The reference to “our proprietary training dataset” should be removed from the paper. The paper should only be evaluated on the merits of the actual experiments presented therein.
* Figure 5: the UMAP embeddings would be more informative if we could see which embeddings are paired with which, e.g. by drawing lines or arrows between old and new embeddings. As this will likely create a lot more visual noise, it may necessitate reducing the total number of points visualized.
* I caught a couple of typos:
  * R253: “making it adaptively” should be “adaptive”
  * R270: empty parentheses
  * R371: “can be achieved only minimal” should be “can be achieved with only minimal”
  * L431: “aligns with out intuition” should be “aligns with our intuition”

Questions For Authors:
* In your paper, you say "their exponential growth of areas and volumes with respect to time makes them particularly suited for applications involving continual model updates with new entities or classes arrive over time." Intuitively, what is the connection between exponential growth in neighborhoods and applications with continual model updates? I could just as easily believe, for instance, that continuous model updates work fine in polynomially-growing vector spaces.
* Can this approach be used to train a new model to align embeddings between several different models? Assume we don’t know anything about how they were trained.
* What happens if you ablate the hyperbolic MLR (i.e. compute loss on the Euclidean model, then exp map, then compute the other losses)—does this do better or worse?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the constructive feedback and suggestions. We answer the remaining questions point by point as follows.

> Q1. Ablation on hyperparameter sweeping

Due to restricted space, we report the ablation for the clipping threshold. We refer to our response to Reviewer fRer for the ablation with the curvature value. Remaining hyperparameter ablations (alignment weight, entailment cone weight, temperature) are included in the revision.

| $\beta_n = \beta_o + \zeta$ | self-CMC@1 | cross-CMC@1 |
|-|-|-|
| $\zeta = 0.0$ | 0.7232 | 0.5525 |
| $\zeta = 0.1$ | 0.7153 | 0.5719 |
| $\zeta = 0.2$ | 0.7221 | 0.5705 |
| $\zeta = 0.3$ | 0.7154 | 0.5588 |

> Q2. The use of the Lorentz model… it may be the case that this is purely a matter of convenience/numerical stability—if so, I would like the authors to clarify this.

Yes, we use the Lorentz model for its numerical stability, following recent work [1, 2]. Its space-time interpretation also fits our context, allowing us to use the time-like dimension to model continual updates and embedding uncertainty.

> Q3. The derivation of Equation 10 is unclear: what is the relationship between this quantity and the distance from the origin of the hyperboloid?

We define Lorentz uncertainty via its isometric relation to the Poincaré model. Both share the same form, based on the Euclidean norm of the pre-exponential embedding:

$$ \mathrm{Uncertainty}(\mathbf{h}^{\mathcal{L}}) = \mathrm{Uncertainty}(\mathbf{h}^{\mathbb{P}}) = 1 - \frac{1}{\sqrt{K}} \tanh(\sqrt{K} \|\mathbf{z}\|) $$

The distance from the origin of the hyperboloid is given by

$$ d(\bar{\mathbf{0}}, \mathbf{h}) = \cosh^{-1}\left(\frac{1}{\sqrt{K}} \cosh(\sqrt{K} \|\mathbf{z}\|)\right) $$

Both the distance to the origin and our uncertainty measure depend explicitly on the Euclidean $\ell_2$-norm before the exponential map.
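As a minimal numeric sketch of the uncertainty measure above (our illustration, not the authors' code; the function name and the curvature default are ours):

```python
import numpy as np

# Sketch of the uncertainty measure: it depends only on the Euclidean norm
# of the pre-exponential embedding z and is bounded. For K = 1 it lies in
# (0, 1], equal to 1 at the origin and approaching 0 as the norm grows.

def lorentz_uncertainty(z, K=1.0):
    norm = np.linalg.norm(z, axis=-1)
    return 1.0 - np.tanh(np.sqrt(K) * norm) / np.sqrt(K)

z_small = np.zeros(8)        # embedding at the origin -> maximal uncertainty
z_large = 10.0 * np.ones(8)  # large-norm embedding -> near-minimal uncertainty
print(lorentz_uncertainty(z_small))  # 1.0
print(lorentz_uncertainty(z_large))  # close to 0 for K = 1
```

Unlike the unbounded origin distance, this quantity can be plugged directly into a [0, 1]-weighted loss term.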
While prior work uses the unbounded origin distance as a proxy for uncertainty, our bounded [0,1] measure is more interpretable and manageable.

> Q4. What other metrics can be evaluated? Classification performance?

We report CMC@1 and mAP to focus on retrieval performance. The table below shows accuracy and retrieval results for various alignment methods, including cross-accuracy using the old classifier on new features. Accuracy trends generally follow retrieval, though with smaller gaps.

| | Self-CMC@1 | Cross-CMC@1 | Self-Acc | Cross-Acc |
|-|-|-|-|-|
| BCT | 0.695 | 0.447 | 76.47 | 40.66 |
| Hot-refresh | 0.715 | 0.498 | 77.65 | 41.2 |
| HOC | 0.713 | 0.490 | 77.67 | 41.36 |
| HBCT | 0.722 | 0.572 | 78.86 | 42.19 |

> Q5. How does exponential volume growth relate to continual model updates, and why might it be preferable to polynomially growing spaces like Euclidean space?

We argue that exponential volume growth in hyperbolic space allows more room to represent new entities with smaller increases in radius, which is advantageous for continual updates. In contrast, Euclidean space requires rapidly growing radii to accommodate new entities. Moreover, Lemma 1 in [3] shows that in Euclidean space, the probability of new class prototypes aligning with a trained model decreases exponentially with the number of dimensions and classes, leading to an impossibility result for backward compatibility. We hypothesize that hyperbolic geometry, with its exponential growth property, may mitigate this issue, though a formal analysis is left for future work.

> Q6. Can this approach be used to train a new model to align embeddings between several different models? Assume we don’t know anything about how they were trained.

Aligning a new model to multiple old models is challenging without knowing their training processes. Some models may have incompatible embeddings, enforcing inherent trade-offs in compatibility.
A common solution is to add a projection layer to map embeddings into a shared space [4], which is orthogonal to and can be integrated with our method.

> Q7. What happens if you ablate the hyperbolic MLR (i.e. compute loss on the Euclidean model, then exp map, then compute the other losses)—does this do better or worse?

We replaced the hyperbolic MLR with its Euclidean counterpart while keeping the entailment cone and RINCE loss in hyperbolic space. This led to a significant drop in both the new model's performance and its compatibility. This is likely due to conflicting signals between the Euclidean MLR and the hyperbolic RINCE loss.

| | Self-mAP | Cross-mAP |
|-|-|-|
| HBCT w/o entail | 0.435 | 0.388 |
| HBCT | 0.655 | 0.398 |
| HBCT-EMLR w/o entail | 0.4791 | 0.047 |
| HBCT-EMLR | 0.4897 | 0.279 |

> Q8. Writing structure and typos

We reorganized the "norm control" and "overall training" into a separate section and fixed the typos.

References:
[1] Bdeir et al. Fully Hyperbolic Convolutional…, ICLR’24.
[2] Desai et al. Hyperbolic Image-Text…, ICML’23.
[3] Biondi et al. Stationary Representations: Optimally…, CVPR’24.
[4] Linia et al. Asymmetric Image Retrieval with Cross Model...

---

Rebuttal Comment 1.1:

Comment: I thank the authors for addressing my questions thoroughly. Overall, I found their additional experiments and clarifications helpful for evaluating the paper. I intend to keep my already positive rating of 3. I believe this work introduces a valuable approach combining hyperbolic geometry with backward compatibility. I encourage the authors to add the classification metrics to the final version of this paper, as these strengthen the case for their model.

---

Reply to Comment 1.1.1:

Comment: We sincerely thank the reviewer for their thoughtful feedback and positive assessment of our work. We are glad that our additional experiments and clarifications were helpful in evaluating the paper.
We will make sure to incorporate additional experiments and discussion into the final version of the paper.

Best,
Authors
Summary: This paper aims to improve backward compatible representation learning by using hyperbolic embeddings instead of Euclidean embeddings. This paper claims that using hyperbolic embeddings achieves greater compatibility with previous models without compromising the performance of new embedding models. Methods-wise, this paper uses a hybrid Euclidean-Hyperbolic model to encode images in hyperbolic space. This allows better integration of existing Euclidean encoders compared to fully hyperbolic models. The paper also defines an uncertainty measure for Lorentz spaces, based on the analogous uncertainty measure for Poincare spaces. The model uses two auxiliary losses: an entailment cone loss and a RINCE-based loss, the latter modified to include the uncertainty measure defined in the paper.

Claims And Evidence: The high-level intuition of the method is interesting and makes a lot of sense. The authors claim that HBCT enhances backward compatibility without compromising the performance of new embedding models. This is supported by the empirical evidence, which suggests that HBCT enhances backward compatibility (best $P_\mathrm{com}$ scores in Table 1) with little tradeoff in performance of the new models ($P_\mathrm{up}$ scores near 0 in Table 1). The use of the entailment loss and the RINCE-based loss both seem effective according to the ablations (Table 2), but the performance of RINCE is a bit more mixed, sometimes resulting in a large drop in compatibility.

Methods And Evaluation Criteria: Overall, the methods and evaluation criteria seem reasonable. It is not clear to me why the Lorentz uncertainty was defined the way it was. Perhaps the connection to Poincare uncertainty could be made more explicit in the text. Although not the main focus of the paper, I think it is still important to also report the performance of all models on CMC@1 and mAP directly so readers can assess the tradeoff between compatibility and performance.
Theoretical Claims: N/A

Experimental Designs Or Analyses: Overall the experimental design seems reasonable. However, this paper could benefit from analyses on more datasets than just CIFAR-100 and TinyImageNet.

Supplementary Material: I have reviewed all of the supplementary material.

Relation To Broader Scientific Literature: The idea of using hyperbolic embeddings for backwards compatibility is novel and is also a very natural idea. The method of this paper is a combination of existing methods. The hybrid Euclidean-Hyperbolic model is based on [1], and the entailment cone loss was first proposed in [2]. The RINCE-based loss is essentially the RINCE loss of [3] with the modification that the $q$ parameter is the Lorentz uncertainty measure. In light of this, the novelty of the paper's methods is quite limited.

[1] Khrulkov, Valentin, et al. "Hyperbolic image embeddings." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020.
[2] Desai, Karan, et al. "Hyperbolic image-text representations." International Conference on Machine Learning. PMLR, 2023.
[3] Chuang, Ching-Yao, et al. "Robust contrastive learning against noisy views." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

Weaknesses:
* In Table 1, the self and cross columns are not explained or referred to in the main text.
* It seems that a possible limitation of the model is that performance may degrade after many model updates. It would be good if the authors could discuss this.

Other Comments Or Suggestions:
* Equation 12: the apostrophe in $h_o'$ is not defined
* Missing citation (Line 269, second column)

Questions For Authors: 1. How was the Lorentz uncertainty derived? 2. What is the CMC@1 and mAP performance of the models? 3. What do the self and cross columns mean in Table 1? 4.
Is there a reason why using the RINCE-based loss sometimes results in a large drop in compatibility? 5. Given how the norms grow with repeated model updates and the possible resulting instability, how does the performance of this method compare to Euclidean methods after many model updates?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the constructive feedback and suggestions. We answer the remaining questions point by point as follows.

> Q1: Why the Lorentz uncertainty was defined the way it was.

We define the Lorentz uncertainty using the isometric relation between the Poincaré and Lorentz models. Both uncertainty measures in the Poincaré and Lorentz models have the same form with respect to the $\ell_2$-norm of the Euclidean embedding before the exponential map:

$$ \mathrm{Uncertainty}(\mathbf{h}^{\mathcal{L}}) = \mathrm{Uncertainty}(\mathbf{h}^{\mathbb{P}}) = 1 - \frac{1}{\sqrt{K}} \tanh(\sqrt{K} \|\mathbf{z}\|) $$

Both the distance to the origin and our uncertainty measure depend explicitly on the Euclidean $\ell_2$-norm before the exponential map. While prior work uses the unbounded origin distance as a proxy for uncertainty, our bounded [0,1] measure is more interpretable and manageable. We will discuss this explicitly in the revision.

> Q2+3: What is the CMC@1 and mAP performance of the models? … What do the self and cross columns mean in Table 1?

The *self* (new-to-new retrieval) and *cross* (new-to-old retrieval) columns in the table refer to the original model performance (CMC or mAP). We thank the reviewer for pointing this out, and we will explain this explicitly in the paper to avoid future confusion.

> Q4: Is there a reason why using the RINCE-based loss sometimes results in a large drop in compatibility?

During the experiments, we observed that training with the RINCE-based loss is sometimes more difficult to converge, especially on the Vision Transformer model. This may be because of the instability of the exponential map when applied to the large norms produced by ViT. However, when training with the entailment cone loss, the training becomes more stable.
> Q5: Given how the norms grow with repeated model updates and the possible resulting instability, how does the performance of this method compare to Euclidean methods after many model updates?

We conducted an experiment with five consecutive model updates to test the performance of different compatibility methods on CIFAR100. The training data was split into five sets, containing 20, 40, 60, 80, and 100 (all) classes, respectively. For the first two models (time steps 0, 1), we use ResNet18 as the base model, and we update to ResNet50 at the last three steps (2, 3, 4). For each method, we report the CMC@1 compatibility matrix for HBCT, HOC, and BCT. Columns are the encoders for the query, and rows are the encoders for the gallery set. It can be seen that HBCT maintains compatibility with the old model after several updates ($\phi_1 / \phi_4 = 0.32$), while HOC and BCT quickly deteriorate after a few model updates ($\phi_1 / \phi_4 = 0.207$ for HOC and $0.14$ for BCT).
HBCT

| $\phi_o / \phi_n$ | $\phi_0$ | $\phi_1$ | $\phi_2$ | $\phi_3$ | $\phi_4$ |
|-|-|-|-|-|-|
| $\phi_0$ | 0.2016 | | | | |
| $\phi_1$ | 0.2295 | 0.3585 | | | |
| $\phi_2$ | 0.2585 | 0.4309 | 0.4872 | | |
| $\phi_3$ | 0.2867 | 0.4857 | 0.5525 | 0.6166 | |
| $\phi_4$ | 0.3261 | 0.5227 | 0.6051 | 0.6995 | 0.7424 |

HOC

| $\phi_o / \phi_n$ | $\phi_0$ | $\phi_1$ | $\phi_2$ | $\phi_3$ | $\phi_4$ |
|-|-|-|-|-|-|
| $\phi_0$ | 0.1825 | | | | |
| $\phi_1$ | 0.17 | 0.3265 | | | |
| $\phi_2$ | 0.1954 | 0.3807 | 0.4831 | | |
| $\phi_3$ | 0.207 | 0.418 | 0.5357 | 0.6045 | |
| $\phi_4$ | 0.1926 | 0.4332 | 0.5831 | 0.6888 | 0.7302 |

BCT

| $\phi_o / \phi_n$ | $\phi_0$ | $\phi_1$ | $\phi_2$ | $\phi_3$ | $\phi_4$ |
|-|-|-|-|-|-|
| $\phi_0$ | 0.1818 | | | | |
| $\phi_1$ | 0.1589 | 0.3124 | | | |
| $\phi_2$ | 0.1524 | 0.3218 | 0.4456 | | |
| $\phi_3$ | 0.135 | 0.3299 | 0.4834 | 0.5647 | |
| $\phi_4$ | 0.1497 | 0.3212 | 0.5249 | 0.5939 | 0.7206 |

> Q6: It seems that a possible limitation of the model is that performance may degrade after many model updates.

Yes, we have discussed this in the limitations section. Increasing the clipping threshold after each update can lead to numerical instability when it becomes large (e.g., >10). One potential solution is rescaling the time dimension of all embeddings after a certain number of updates (e.g., >20). Another approach is online backfilling, where a subset of items is used to update the gallery set—this can help manage the growing time dimension.

We hope that we have addressed all your concerns adequately. We have updated the manuscript according to your comments. Please let us know if we can provide any further details and/or clarifications.
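To make the norm-growth issue discussed above concrete, here is a sketch of the standard exponential map at the origin of the Lorentz (hyperboloid) model (our illustration, not the paper's code; the function name and variables are ours). The time coordinate grows like $\cosh$ of the embedding norm, which is why repeatedly increasing the clipping threshold eventually causes numerical instability.

```python
import numpy as np

# Exponential map at the hyperboloid origin for curvature -K: maps a
# Euclidean tangent vector z to a point on {x : <x, x>_L = -1/K}, where
# <x, y>_L = -x_0 y_0 + sum_i x_i y_i is the Lorentzian inner product.

def lorentz_expmap0(z, K=1.0):
    norm = np.linalg.norm(z)
    x_time = np.cosh(np.sqrt(K) * norm) / np.sqrt(K)  # grows exponentially in ||z||
    x_space = (np.sinh(np.sqrt(K) * norm) * z / (np.sqrt(K) * norm)
               if norm > 0 else np.zeros_like(z))
    return np.concatenate([[x_time], x_space])

z = np.array([0.3, -0.4, 1.2])
x = lorentz_expmap0(z)
# Verify the hyperboloid constraint <x, x>_L = -1/K.
lorentz_sq = -x[0] ** 2 + np.sum(x[1:] ** 2)
print(lorentz_sq)  # -1.0 up to floating-point error
```

For a clipped norm of 10, the time coordinate is already $\cosh(10) \approx 1.1 \times 10^4$, consistent with the instability the authors report for thresholds above 10.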
Summary: The paper proposes to leverage hyperbolic geometry for backward compatible learning: a setting in which a model is updated and its representation should preserve compatibility with representations from the model before the update. The authors propose a loss composed of two terms to (i) constrain the new embeddings to lie in the entailment cone of the old embeddings and (ii) regularize alignment, weighted according to the uncertainty, to preserve the performance of the new model. The method is validated using the CIFAR100 and MiniImagenet datasets, analyzing backward compatibility when introducing novel data samples, novel classes, or a different architecture. Ablation experiments are performed to quantify the contribution of each term and setting. The method outperforms existing methods, which are based on Euclidean geometry in representation space.

Claims And Evidence: To the best of my judgment, the claims are validated in the paper. In particular: (i) They show that hyperbolic geometry is a good modeling choice for the problem of backward compatibility, showing that previous constructs such as the entailment loss and the uncertainty estimation naturally fit into this problem. This is further confirmed by the experiments in Table 2. (ii) The experiments demonstrate that hyperbolic geometry is a better fit than existing methods based on Euclidean geometry.

Methods And Evaluation Criteria: The paper compares to many baselines, to the best of my knowledge. Concerning the dataset, the paper introducing backward compatibility (Shen et al 2022) adopted larger models and datasets (e.g. IMDB), so it would be interesting to see if the result scales up to larger datasets.

Theoretical Claims: No theoretical claims are present.
Experimental Designs Or Analyses:

## Strengths
- Experiments are sound and show, under different settings (sample change, class change, and architectural change), that hyperbolic spaces are effective for the problem of backward compatibility.
- Ablations show the impact of each term in the loss, providing a good understanding of the method.

## Weaknesses
- Despite the good performance demonstrated on the proposed benchmark, previous works introducing the problem of backward compatibility (Shen et al 2020) experimented with larger models (e.g. Resnet 100) and in particular larger datasets (e.g. IMDB). This raises some questions on how the performance of the proposed method would scale to this size.
- Some details in the tables of results could be explained more in depth (e.g. the cross and self columns; see the questions section).

Supplementary Material: I inspected the additional qualitative figure in the Appendix.

Relation To Broader Scientific Literature: A key contribution of the paper is to demonstrate the importance of characterizing the geometry of representational space with a different metric than the Euclidean, i.e. flat, one. Previous work has highlighted the importance of characterizing different metrics in latent spaces in distinct settings, e.g. generative [2], text [3,4], images (Khrulkov et al., 2020). To the best of my knowledge this is the first work that tries to advocate for a different geometry to solve the recent problem of backward compatibility, demonstrating an elegant and effective solution. The work also has important consequences for and relations to the field of representation alignment [e.g. 1,5], which could be interesting to relate to.

_[1] Moschella, Luca, et al. "Relative representations enable zero-shot latent space communication." ICLR 2023_
_[2] Arvanitidis, Georgios, Lars Kai Hansen, and Søren Hauberg. "Latent space oddity: on the curvature of deep generative models._
_[3] Gu, Albert, et al.
"Learning mixed-curvature representations in product spaces." International conference on learning representations. 2018._
_[4] Dhingra, Bhuwan, et al. "Embedding text in hyperbolic spaces." ACL_
_[5] Huh, Minyoung, et al. "Position: The platonic representation hypothesis." Forty-first International Conference on Machine Learning. 2024._

Essential References Not Discussed: To the best of my knowledge, there is no fundamental related work that has not been cited. It could be beneficial to include the references in the previous section, although this is not strictly necessary.

Other Strengths And Weaknesses:

### Strengths
- *Clarity*: the paper is very clear and well written.
- *Originality*: the work is an effective combination of existing frameworks for robust contrastive loss (Chuang et al., 2022) and uncertainty properties of hyperbolic representation spaces (Franco et al., 2023; Atigh et al., 2022) applied to backward compatibility (Shen et al 2022), resulting in an effective and well-fitting framework for the problem.
- *Significance*: the problem of backward compatibility is still very open and recent, and of importance for practical applications (e.g. industrial deployment of models). The paper demonstrates the importance of correctly characterizing the geometry of representational space, with important practical consequences. Moreover, this type of research has possible implications for diverse fields such as representational alignment, model adaptation, test-time training, and out-of-distribution detection.

### Weaknesses
- Limitations in assuming hyperbolic geometry: the hyperbolic geometry assumption could also be limiting, in my understanding, in two ways: training could be more expensive, and the applicability of the method could be limited, as most methods are not assumed to be trained with hyperbolic embedding spaces. Also, some questions arise when the geometry of the space is neither flat nor of constant curvature.
In this regard, see the last two questions in the questions section.
- A discussion and comparison with methods that use products of spheres seems to be missing.

Other Comments Or Suggestions: I spotted the following typos:

Line 270: citation missing: "unlike previous methods (),"
Line 312: "the the" -> "the"

Questions For Authors:
- How does the similarity measure used to compare queries to gallery samples in the Cumulative Matching Characteristics retrieval metric affect results? What happens if one uses a different measure (e.g. Euclidean distance in a Euclidean-based method, or hyperbolic geodesic distance in Euclidean-based methods, despite their not assuming a hyperbolic geometry)?
- In Table 1, do the columns corresponding to self and cross denote the base loss? What is this number referring to, the original model performance?
- The authors mention approaches that use hypersphere geometry (with cosine similarity). What's the comparison with these methods?
- How does the curvature K affect the performance? Why the choice of fixing it to 1 for all experiments?
- How impactful is pretraining from scratch assuming hyperbolic geometry? How does this compare to finetuning from a pretrained model assuming hyperbolic geometry, as opposed to training from scratch? Although this is not the direct focus of the paper, the applicability of the method for backward compatible training to pretrained models is an important point in order to apply the method.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: We thank the reviewer for the constructive feedback and suggestions. We answer the remaining questions point by point as follows.

> Q1: Training could be more expensive with hyperbolic geometry

We agree hyperbolic models may be more expensive, but our method only adds a simple exponential map to a Euclidean model, incurring minimal overhead. We report the running time of ResNet18 with a batch size of 256 in the table below.

| ResNet18 | Forward only | Forward & Backward |
|-|-|-|
| Euclidean | 7.9e-07 | 0.0556 |
| Euclidean-Hyperbolic | 8.8e-07 | 0.0630 |

> Q2: Applicability could be limited as most methods are not assumed to be trained with hyperbolic embedding spaces.

Our method is general and can be applied on top of existing Euclidean embedding models.

> Q2: Some questions arise when the geometry of the space is neither flat nor of constant curvature

We consider Euclidean and hyperbolic geometries because they are most commonly used in existing retrieval systems, mostly because the geodesic distance between two points can be computed efficiently. In some cases, manifolds with non-constant curvature can better represent data, but the geodesic distance will be more computationally expensive, making it difficult to generalize to large-scale retrieval systems.

> Q3: The comparison with approaches that use hyperspherical geometry (with cosine similarity)?

Hot-Refresh and HOC are two alignment methods that employ hyperspherical geometry, as they align models using a contrastive objective based on cosine similarity. Since cosine similarity normalizes embeddings to have unit norm ($\|z\|_2 = 1$), these methods map embeddings onto a shared unit hypersphere. Pairwise embedding comparisons are then computed using the negative inner product as a distance measure. We will clarify this in the baseline discussion in the revision.

> Q4: How does the similarity measure used in CMC affect results?
We observe that retrieval performance remains consistent across different similarity measures. Below are the results on CIFAR100:
- Euclidean-ResNet18:
  - Cosine: mAP 0.6543, CMC@1 0.7157
  - Euclidean: mAP 0.6543, CMC@1 0.7157
- Hyperbolic-ResNet18:
  - Geodesic: mAP 0.6548, CMC@1 0.7177
  - Lorentz inner product: mAP 0.6548, CMC@1 0.7177
- Using hyperbolic distance on Euclidean embeddings (via exponential map):
  - Hot-refresh original:
    - new-to-new retrieval (self): mAP 0.649485, CMC@1 0.7147
    - new-to-old retrieval (cross): mAP 0.347584, CMC@1 0.4977
  - Hot-refresh -> exponential map:
    - new-to-new retrieval (self): mAP 0.649482, CMC@1 0.7147
    - new-to-old retrieval (cross): mAP 0.347583, CMC@1 0.4978

> Q5: In Table 1, do the columns corresponding to self and cross denote the base loss? What is this number referring to, the original model performance?

Yes, *self* (new-to-new retrieval) and *cross* (new-to-old retrieval) refer to the original model performance (CMC or mAP). We will explain this explicitly in the paper to avoid future confusion.

> Q6: Experiments on larger datasets

We mainly followed [1, 2] to test on CIFAR100 and Tiny-ImageNet, where hyperbolic geometry is shown to be effective. Given the short time frame for rebuttal responses, extensive experiments on large datasets are beyond our immediate scope, but we acknowledge the valuable suggestion and plan to explore larger-scale evaluations in future research to further validate the approach.

> Q7: How does curvature K affect the performance? Why the choice of fixing it to 1 for all experiments?

We set a fixed curvature value following the existing literature [1]. We conducted an ablation study to examine how different curvature values affect compatibility performance. Generally, $K \in [0.5, 1.0]$ yields stable outcomes in both compatibility and the quality of the new model. We also tested a learnable curvature; it showed good performance on the new model but negatively impacted compatibility.
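On Q4 above: the identical mAP and CMC values for cosine and Euclidean measures are expected rather than coincidental. For L2-normalized embeddings, squared Euclidean distance is a monotone transform of cosine similarity ($d^2 = 2 - 2\cos$), so the two measures induce the same ranking and hence the same mAP/CMC. A minimal self-contained check with random vectors (illustrative only, not the authors' evaluation code):

```python
import math
import random

def cosine_sim(a, b):
    # For unit-norm vectors, cosine similarity is just the dot product
    return sum(x * y for x, y in zip(a, b))

def euclidean_dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

random.seed(0)
query = normalize([random.gauss(0, 1) for _ in range(8)])
gallery = [normalize([random.gauss(0, 1) for _ in range(8)]) for _ in range(20)]

# Rank the gallery by descending cosine similarity and by ascending Euclidean
# distance; d^2 = 2 - 2*cos on the unit sphere, so the orders must agree.
rank_cos = sorted(range(20), key=lambda i: -cosine_sim(query, gallery[i]))
rank_euc = sorted(range(20), key=lambda i: euclidean_dist(query, gallery[i]))
assert rank_cos == rank_euc  # identical rankings -> identical mAP / CMC
```

The same argument explains the Lorentz-inner-product versus geodesic agreement for the hyperbolic model: one is a monotone function of the other on the manifold.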
| Curvature K | self (mAP) | cross (mAP) | P_up (mAP) | P_comp (mAP) |
|-|-|-|-|-|
| 0.1 | 0.581 | 0.366 | -0.114 | 0.108 |
| 0.5 | 0.651 | 0.397 | -0.008 | 0.206 |
| 0.7 | 0.657 | 0.399 | 0.002 | 0.211 |
| 1 | 0.654 | 0.398 | -0.003 | 0.207 |
| 1.5 | 0.604 | 0.370 | -0.079 | 0.123 |
| Learnable | 0.660 | 0.371 | 0.006 | 0.124 |

> Q8: How impactful is pretraining from scratch assuming hyperbolic geometry? How does this compare to finetuning from a pretrained model assuming hyperbolic geometry as opposed to training from scratch?

In our experiments, ResNet18 is trained from scratch, while ViT is finetuned from ImageNet21K pretrained weights to reflect a typical usage of open-source models. We do not pretrain ViT from scratch due to the poor convergence on CIFAR100 and Tiny-ImageNet.

References:
[1] Bdeir et al. Fully Hyperbolic Convolutional Neural Networks for Computer Vision, ICLR'24.
[2] Biondi et al. Stationary Representations: Optimally Approximating Compatibility and Implications for Improved Model Replacements, CVPR'24.
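Q1 above states that the method "only adds a simple exponential map to a Euclidean model." As an illustration of what such a map can look like, below is a sketch of the Poincaré-ball exponential map at the origin; the choice of the Poincaré ball (rather than, e.g., the Lorentz model) and the function name are assumptions for illustration, not the authors' implementation:

```python
import math

def exp_map_origin(v, K=1.0):
    """Poincare-ball exponential map at the origin, curvature -K (assumed form).

    Maps a Euclidean tangent vector v into the open ball of radius 1/sqrt(K),
    preserving its direction while compressing its magnitude via tanh.
    """
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        return list(v)
    scale = math.tanh(math.sqrt(K) * norm) / (math.sqrt(K) * norm)
    return [scale * x for x in v]

# A Euclidean embedding is mapped strictly inside the unit ball for K = 1
z = exp_map_origin([3.0, 4.0], K=1.0)
assert math.sqrt(sum(x * x for x in z)) < 1.0
```

Because the map is a cheap, deterministic element-wise transform of the embedding vector, it adds only the small forward-pass overhead reported in the Q1 timing table.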
UniMoMo: Unified Generative Modeling of 3D Molecules for De Novo Binder Design
Accept (poster)
Summary: This paper introduces a unified framework, called UniMoMo, for general target-specific binder generation. The target is a protein, and the binders could be peptides, antibodies, or small molecules. UniMoMo aims to train a single generative model to tackle the general binder generation problem, while being able to leverage datasets across different domains. The performance on various binder generation benchmarks has been demonstrated. Claims And Evidence: The main claim is that by using a unified generative model, we can tackle different binder design tasks at once. Also, datasets from different binder domains can help each other. This has been verified by the strong experimental performance across several widely used benchmarks. Methods And Evaluation Criteria: The main contributions of this unified framework are the unified representation of graphs of blocks and the geometric latent diffusion model. The overall UniMoMo framework is sound. In terms of evaluation, the authors conducted experiments on widely used benchmarks for target-conditioned peptide generation, antibody generation, and small molecule generation. The performance is strong across benchmarks compared to existing methods that are specifically designed for each domain. Also, it shows that leveraging datasets across domains with the UniMoMo framework is helpful for boosting performance. Theoretical Claims: N/A. No major theoretical claims. Experimental Designs Or Analyses: All the experimental designs closely follow the community standard, and they are solid to me. Supplementary Material: I briefly checked the additional experimental details. Relation To Broader Scientific Literature: The proposed UniMoMo is unique as it unifies binder design for different binder types. Moreover, it shows training on the combined datasets from all domains can help improve performance for each domain, which is exciting.
Essential References Not Discussed: None Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: How did you balance data across all three domains during training? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your appreciation and the positive comments!

> Q1: How did you balance data across all three domains during training?

Thanks for the question! In the current implementation, we have not extensively explored domain-specific data balancing strategies. For our joint training approach, we utilized representative datasets from each domain: CrossDocked (\~100k samples) for small molecules, PepBench (\~80k samples) for peptides, and SAbDab (\~10k samples) for antibodies. As an initial effort toward unified molecular modeling, we employed simple random sampling across these datasets during training, which has demonstrated promising results. Moving forward, particularly as we scale the framework to incorporate larger and more diverse datasets, we recognize the importance of investigating more sophisticated data balancing approaches to further enhance model performance.
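The uniform random sampling over the three concatenated datasets described above implies that each domain is drawn roughly in proportion to its size (about 100:80:10). A small sketch of such a mixed sampler; the dataset names and sizes are taken from the rebuttal, but the sampler itself is an assumed illustration, not the authors' training code:

```python
import random

# Placeholder datasets with the approximate sizes given in the rebuttal,
# downscaled by 1000x so the pool stays small
datasets = {
    "small_molecules (CrossDocked)": 100_000,
    "peptides (PepBench)": 80_000,
    "antibodies (SAbDab)": 10_000,
}
pool = [name for name, n in datasets.items() for _ in range(n // 1000)]

random.seed(42)
counts = {name: 0 for name in datasets}
for _ in range(10_000):
    counts[random.choice(pool)] += 1  # uniform draw over the concatenation

# Uniform sampling over the concatenated pool yields domain frequencies
# proportional to dataset size (~100 : 80 : 10)
assert (counts["small_molecules (CrossDocked)"]
        > counts["peptides (PepBench)"]
        > counts["antibodies (SAbDab)"])
```

A consequence visible in the sketch is that antibodies are under-represented by about 10x relative to small molecules, which is exactly where a more sophisticated balancing scheme (e.g., per-domain reweighting) could intervene.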
Summary: In this paper, the authors propose a new generative model for 3D molecule design conditioned on a protein target. The proposed model, UniMoMo, unifies generation of different ligand modalities (small molecules, peptides and parts of antibodies) into a single model. This is done by considering each molecule, independent of its modality, as a graph of blocks (amino acids for peptides/antibodies or molecular substructures for molecules). The proposed approach is a latent generative model with three parts: (i) an autoencoder that encodes blocks into a latent space and then decodes them back to block types, (ii) a latent diffusion model operating on the learned latent space, and (iii) an iterative generation approach to recover full-atom geometries from the blocks. The authors show good results on three benchmarks (for three different molecular modalities). More interestingly, they show that training a model on all modalities is usually better than training each modality independently.

## Update after rebuttal

I thank the authors for their rebuttal. I will keep my score with the requirement that the authors update the manuscript to address the points fellow reviewers and I pointed out, especially when it comes to better details and explanations. Congratulations! Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. The authors propose a model that is modality-agnostic and show results on three different molecular modalities. Theoretical Claims: N/A Experimental Designs Or Analyses: - The paper only evaluates on in-silico metrics, which are known to be far from perfect. The proposed method does achieve good results on these metrics (compared to the baselines benchmarked). - I think a lot of experimental details are missing, especially when it comes to the antibody and peptide experiments. For example, which parts of the CDR are modelled? The loops? The frames? Everything? How many AAs are considered in this setting?
- During sampling time, how many "block" nodes are chosen before starting the (reverse) diffusion process? How does this choice affect experimental results (on all modalities)? - During sampling time, given a target pocket, how does the model decide when it generates blocks that belong to small molecules, peptides, or antibodies? Supplementary Material: Yes Relation To Broader Scientific Literature: The paper is well-placed in the context of structure-conditioned 3D molecule generation. They provide a method that models all atoms and can be applied to different molecular modalities. Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: Strengths: - This is one of the first approaches that proposes a unified model for structure-based molecule design that can be applied to different molecular modalities. Moreover, the authors show that training on all modalities is better than training on a single modality. Weakness: - The proposed approach contains many different sub-components, making it hard to reproduce/build upon, and probably reducing its effectiveness and expressivity. Other Comments Or Suggestions: More details on the peptide and antibody experimental sections are needed. Questions For Authors: - See above for more questions. - During sampling time, how many "block" nodes are chosen before starting the (reverse) diffusion process? How does this choice affect experimental results (on all modalities)? - I feel some information is missing on how the bonds are computed. Could the authors elaborate more on how the bonds within blocks and between blocks are computed? It seems that the latter depends on an NN prediction. How accurate is it? - How are the sequences of AAs (on both peptides and antibodies) extracted from the full-atom point cloud? - What parts of the CDR are modelled in the antibody experiments? More experimental details on this section are needed.
- During sampling time, given a target pocket, how does the model decide when it generates blocks that belong to small molecules, peptides, or antibodies? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your appreciation and the constructive comments!

> Q1: The paper only evaluates on in-silico metrics, which are known to be far from perfect. The proposed method does achieve good results on these metrics (compared to the baselines benchmarked).

Thanks for the comments! While we are currently conducting wet-lab validations, the experimental timeline is extensive. Therefore, we plan to include these results in a future extension of this work.

> Q2: During sampling time, how many "block" nodes are chosen before starting the (reverse) diffusion process? How does this choice affect experimental results (on all modalities)?

For peptides and antibodies, we follow the literature [a, b] and set the number of blocks equal to that of the reference sequence, as many metrics, including AAR and RMSD, assume generated sequences have the same lengths as the native binders. For small molecules, the number of blocks is sampled from a statistical distribution based on the spatial size of the pocket [c]. We evaluated different settings on the GPCR case in Section 4.4, averaging results over 100 designs, where $n$ denotes the originally sampled number of blocks for small molecules:

| peptide lengths | Rosetta dG |
|-|-|
| small (4-10) | -8.09 |
| medium (11-17) | -16.93 |
| big (18-24) | -22.39 |

| molecule fragments | Vina score (dock) | Avg. atoms per block |
|-|-|-|
| small (n-5) | -6.72 | 5.10 |
| medium (n) | -7.42 | 4.06 |
| big (n+5) | -7.44 | 3.35 |

Commonly, larger binders lead to lower energies, since they can form more interactions. Interestingly, for small molecules, the model adapts to the number of blocks. With fewer blocks, it generates larger fragments with more atoms to fill the pocket; with more blocks, it generates smaller fragments, avoiding overcrowding the pocket. Thanks for the insightful question again! We will include the discussion in the revision!
[a] Full-Atom Peptide Design based on Multi-modal Flow Matching. ICML 2024.
[b] Conditional Antibody Design as 3D Equivariant Graph Translation. ICLR 2023.
[c] CBGBench: Fill in the Blank of Protein-Molecule Complex Binding Graph. ICLR 2025.

> Q3: I feel some information is missing on how the bonds are computed. Could the authors elaborate more on how the bonds within blocks and between blocks are computed? It seems that the latter depends on an NN prediction. How accurate is it?

Sorry for the confusion. Intra-block bonds are predetermined once the block type is assigned. For inter-block chemical bonds, we employ an MLP for prediction, using the hidden states of atom pairs as input (Eq. 6). Bond prediction is restricted to spatially neighboring atoms, excluding those that are too far apart. The predictions are dynamically updated during the flow-matching process in the decoder (Figure 1B, bottom right). Ultimately, the reconstruction accuracy of the chemical bonds is around 97%, which we consider accurate enough.

> Q4: How are the sequences of AAs (on both peptides and antibodies) extracted from the full-atom point cloud?

Sorry for the confusion. In our block-level decomposition, each natural amino acid forms a block. Therefore, we can directly obtain the amino acid type of each block. We think this is also one merit of our block-based unified representation, as it requires no additional algorithm to extract the AAs.

> Q5: What parts of the CDR are modelled in the antibody experiments? More experimental details on this section are needed.

Sorry for the confusion. We follow the convention [d, e] and evaluate on CDR-H3, as it exhibits much higher irregularity than other CDRs and plays a crucial role in binding and interactions [f]. We appreciate the suggestion and will clarify this in the revision.

[d] End-to-End Full-Atom Antibody Design. ICML 2023.
[e] GeoAB: Towards Realistic Antibody Design and Reliable Affinity Maturation. ICML 2024.
[f] Antigen-Specific Antibody Design and Optimization with Diffusion-Based Generative Models for Protein Structures. NeurIPS 2022.

> Q6: During sampling time, given a target pocket, how does the model decide when it generates blocks that belong to small molecules, peptides, or antibodies?

Thanks for the insightful question! We apologize for not making this clear in the paper. We use a binary prompt for each block, with 1 indicating the generation of an amino acid (AA), and 0 indicating either AAs or molecular fragments. During benchmarking for peptides and antibodies, all blocks are assigned a prompt of 1 to ensure AA generation. For small molecules, we set the prompt to 0, allowing flexible generation of molecular fragments. We will add a detailed explanation in the revision to clarify this point.
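The binary prompt described in Q6 can be pictured as a per-block mask over the block-type vocabulary: prompt 1 restricts decoding to amino acids, while prompt 0 leaves all block types available. The vocabulary layout and function names below are hypothetical, for illustration only, and are not the authors' implementation:

```python
# Hypothetical block-type vocabulary: 20 amino acids plus molecular fragments
AA_TYPES = [f"AA_{i}" for i in range(20)]
FRAGMENT_TYPES = [f"FRAG_{i}" for i in range(30)]
VOCAB = AA_TYPES + FRAGMENT_TYPES

def allowed_types(prompt_bit):
    """prompt 1 -> amino acids only (peptide/antibody design);
    prompt 0 -> unrestricted (molecular fragments also allowed)."""
    return AA_TYPES if prompt_bit == 1 else VOCAB

def mask_logits(logits, prompt_bit):
    # Disallowed block types get -inf so they can never be sampled
    allowed = set(allowed_types(prompt_bit))
    return [x if t in allowed else float("-inf")
            for t, x in zip(VOCAB, logits)]

peptide_logits = mask_logits([0.0] * len(VOCAB), prompt_bit=1)
assert all(x == float("-inf") for x in peptide_logits[20:])  # fragments masked
```

With prompt 0, the mask is a no-op and the model is free to mix amino acids and fragments, matching the "flexible generation" described for small molecules.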
Summary: This paper addresses the task of generating de novo binder molecules to target proteins. Importantly, the paper introduces a single unified framework and model, *UniMoMo*, that can generate peptide binders, antibody binders, and small molecule binders. To this end, the paper proposes a variational autoencoder that encodes different types of molecules building block-wise in a unified latent space. Atomistic details are encoded in latent space and reconstructed with an iterative decoder. A diffusion model is trained afterwards in latent space for generation, while conditioning on protein target information. Crucially, the paper demonstrates meaningful improvements by training a model jointly on all molecule types, indicating a certain type of knowledge transfer. The authors extensively validate their model and show strong performance, outperforming baselines in many experiments. Claims And Evidence: Yes, the paper supports all its claims by extensive experiments. Methods And Evaluation Criteria: Yes, all proposed methods and evaluation criteria make sense to me and are appropriate for the tackled problem. Overall, the method and design of UniMoMo is well motivated, although some of its details seem ad-hoc and could be explained better. See my questions below. Theoretical Claims: The paper does not rely on complex proofs, novel mathematical frameworks, or theoretical claims. Rather, the paper proposes a novel unified molecule generation system. Experimental Designs Or Analyses: All experimental designs and analyses seem appropriate to me, as well as sound and valid. Supplementary Material: I skimmed the supplementary material, but did not read it in detail. The supplementary material describes training and sampling algorithms in detail, which is helpful because UniMoMo's variational autoencoder and diffusion model framework is fairly complex. It also discusses evaluation metrics and presents additional results and ablation studies.
Relation To Broader Scientific Literature: The paper is appropriately positioned with respect to the broader literature. Most importantly, the paper discusses many previous related works tackling structure-based drug design. While there already exist related frameworks, including latent diffusion models, for similar molecular modeling tasks, the particular framework proposed by the authors seems novel. The way the variational autoencoder is designed in this paper and the unified decomposition of different types of molecules into graphs of coarse building blocks seem new to me. Essential References Not Discussed: I am not aware of any essential but missing references. Other Strengths And Weaknesses: **Strengths:** - The paper runs very extensive evaluation experiments for all the different molecule types that UniMoMo can generate. I appreciate the detailed evaluations and the strong results. - The experiment on the GPCR is very interesting, showing that the model leverages aspects it has learnt from different molecule modalities when generating its small molecule binder. - The finding that jointly learning a model over different molecule types improves performance in all applications is very interesting and significant, I believe. To the best of my knowledge, this has not been done or shown before in this fashion in a binder generation setting. - While the overall framework and model are somewhat complex, the paper does a good job of explaining the approach, and the supplementary material includes a lot of details. The paper is mostly clear and easy to read. - The tackled applications, antibody design, small molecule drug design, and peptide design for target proteins, are impactful with direct real-world applications, which further underscores the relevance of the method.
- UniMoMo builds on established and existing concepts (molecule autoencoders, latent diffusion, etc.), but its detailed architecture, joint building block-wise representation, and latent encoding and decoding scheme are novel, to the best of my knowledge. **Weaknesses:** - Some method details are not well motivated or explained; see questions below. - If my understanding is correct, the method only works if the binding site on the target protein is known and given. While I have several questions about the method and believe that some details could be explained and motivated better, I think UniMoMo is overall a strong model and this is an interesting paper. Hence, I am recommending acceptance. Other Comments Or Suggestions: I believe equation (13) is missing a square root: the coefficient should be $\sqrt{1-\bar{\alpha}^t}$, keeping in mind that in equation (12) we have the variance, and when doing reparametrized sampling we need the standard deviation. Questions For Authors: 1. In line 096 in the introduction, the authors point out that the method uses E(3)-*invariant* latent states and E(3)-*equivariant* coordinates. This *invariance* and *equivariance* are not discussed further in the method section. But how exactly is the invariance of the latents guaranteed, as well as the equivariance of the coordinates? I would suggest the authors discuss this in more detail. 2. My understanding is that UniMoMo tackles the situation where the location of the binding site on the target molecule is known and given. Can the authors clarify? What if we do not know the binding site? Can we still use and apply UniMoMo? 3. The authors construct their latent space by encoding both "abstract" latents $z_i$ and coordinate-valued latents $\vec{z}_i$. Why exactly is this separation needed?
Also, why use coordinate-valued latents $\vec{z}_i$ at all, and why not instead directly use the original building block coordinates, $\vec{X}_i$, together with the other latents $z_i$ to encode all the additional information? I think this could be explained and motivated better, and it could also be interesting to run an ablation study for this modeling choice. - Related, an ablation study over the KL weights $\lambda_1$ and $\lambda_2$ in equation (2) would be interesting. 4. The authors manually add extra noise to $\vec{z}_i$ before feeding it to the decoder, to enforce robustness. This seems very ad-hoc, and in principle the sampling of the posterior distribution should already lead to noise injection making the decoder robust. A more principled way would be to increase the KL weight, such that the posterior encoding distributions themselves become wider and smoother, as opposed to manually adding noise. I would suggest the authors better justify their choice here and also show in an ablation study that this is necessary. 5. The authors speak of *"motion vectors"*. If my understanding is correct, this is the vector field that encodes the flow in the flow matching framework. If that is the case, I would suggest the authors update their wording here, as I have never heard the expression motion vectors for this quantity before. 6. An important detail I am missing: The authors build one joint model for all molecule types, and then apply it to the different applications in the experiments. How exactly is the model told to either generate a peptide, a small molecule, or an antibody for the different applications? Is there some conditioning given to the model to control this? Or does the model generate different molecule types entirely randomly? But that would be confusing, because the model is applied to the specific applications. I think I am missing something here, and I would suggest the authors be clearer about that. Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your appreciation and insightful feedback, which is very helpful in improving our paper!

> W & Q2: If my understanding is correct, the method only works if the binding site on the target protein is known. What if we do not know the binding site?

Yes, your understanding is correct. Our method requires prior knowledge of the binding site, in line with the convention established in previous studies [a], since most public benchmarks are built upon this setting. This is also biologically reasonable. For example, to study a protein-protein interaction, a binder is often designed at the interface to inhibit the PPI. If truly no biological knowledge is available, pocket detection tools like MaSIF [b] should be able to identify potential sites for pocket-based design models.

[a] Pocket2mol: Efficient molecular sampling based on 3d protein pockets. ICML 2022
[b] Deciphering interaction fingerprints from protein molecular surfaces using geometric deep learning. Nature Methods

> C: Equation (13) is missing the square root over $1-\bar{\alpha}^t$.

Thanks for catching this! We apologize for the typo and will correct it in the revision.

> Q1: How exactly is the invariance of the latents guaranteed, as well as the equivariance of the coordinates?

For the diffusion part, GeoDiff [c] proves that the diffusion process maintains equivariance via an equivariant denoising kernel (Proposition 1), which in our work is the equivariant transformer [d]. The transformer is scalar-based [e], maintaining equivariance through inner products and unbiased linear transformations on the velocities. For the VAE part, the encoding equivariance is determined by the network used, which is also an equivariant transformer. The decoder employs a short flow matching, with a theoretical foundation similar to GeoDiff's, for equivariant flow matching [f] (Theorem 4.1). Thanks for pointing this out! We will add a detailed discussion in the revision for clarity.
[c] GeoDiff: A Geometric Diffusion Model for Molecular Conformation Generation. ICLR 2022.
[d] An Equivariant Pretrained Transformer for Unified 3D Molecular Representation Learning. preprint.
[e] Scalars are universal: Equivariant machine learning, structured like classical physics. NeurIPS 2021.
[f] Equivariant Flow Matching with Hybrid Probability Transport for 3D Molecule Generation. NeurIPS 2023.

> Q3: Why not use the original building block coordinates $\vec{X}_i$, together with the other latents $z_i$, to encode all the additional information? Related, an ablation study over the KL weights $\lambda_1$ and $\lambda_2$ in equation (2) would be interesting.

Thanks for the valuable question. The block coordinates $\vec{X}_i\in\mathbb{R}^{n_i\times 3}$ vary in length due to the different numbers of atoms per block (e.g., residues), which makes direct diffusion challenging, as it typically operates in a fixed-length space. A similar issue arises for $H_i$. Thus, we use an all-atom VAE to compress these irregular matrices into fixed-length latent vectors $z_i$ and $\vec{z}_i$, making diffusion feasible. Regarding $\lambda_1$ and $\lambda_2$, this is a really insightful question. A higher $\lambda$ smooths the latent space, improving continuity for later generative modeling but also increasing compression, which may limit expressivity. Thus, the weight can be neither too low nor too high, as shown in the [validation loss curves](https://anonymous.4open.science/r/UniMoMo-CEA0/assets/l1l2.png). Ultimately, we select the combination with the lowest validation loss.

> Q4: The authors manually add extra noise to $\vec{z}_i$ before feeding it to the decoder, to enforce robustness. A more principled way would be to increase the KL weight.

Thanks for the suggestion.
The extra noise primarily benefits small molecules, which have more intricate inter-block connections than peptides and antibodies (connected by peptide bonds) and thus require higher robustness to coordinate errors introduced by diffusion. Given that the weight of the KL loss is already high (0.8), further increasing it would overly constrain the latent space (as in Q3 above and the table below).

| model | vina (score only) | vina (minimize) | vina (dock) |
|-|-|-|-|
| w/o extra noise | -4.62 | -5.58 | -7.24 |
| w/o extra noise + KL 1.0 | -4.42 | -5.59 | -7.09 |
| w/ extra noise | -5.72 | -6.08 | -7.25 |

> Q5: Are "motion vectors" the vector field in the flow matching framework?

Yes, we will replace them with "vector fields" in the revision to avoid ambiguity.

> Q6: How exactly is the model told to either generate a peptide, a small molecule, or an antibody for the different applications?

Thanks for the insightful question! We apologize for not making this clear in the paper. We assign a binary prompt to each block, with 1 indicating amino acid (AA) and 0 without restriction. For peptide and antibody benchmarks, all blocks are assigned 1 to ensure AA generation. For small molecules, we use 0 to allow arbitrary fragment generation. We will clarify this in the revision.
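The equivariance argument in Q1, that networks built from rotation-invariant scalars (inner products) plus linear combinations of vectors are automatically rotation-equivariant, can be sanity-checked numerically. The toy two-vector layer below is an assumed stand-in for illustration, not the authors' equivariant transformer:

```python
import math

def rotate2d(v, theta):
    # Apply a 2D rotation matrix to vector v
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def scalar_based_layer(v1, v2):
    # Coefficients depend only on rotation-invariant inner products, and the
    # vectors enter only through a linear combination -> the layer commutes
    # with rotations (equivariance).
    a = math.tanh(dot(v1, v2))
    b = math.tanh(dot(v1, v1))
    return [a * x + b * y for x, y in zip(v1, v2)]

v1, v2, theta = [1.0, 2.0], [-0.5, 0.3], 0.7
out_then_rot = rotate2d(scalar_based_layer(v1, v2), theta)
rot_then_out = scalar_based_layer(rotate2d(v1, theta), rotate2d(v2, theta))
assert all(abs(p - q) < 1e-9 for p, q in zip(out_then_rot, rot_then_out))
```

Rotating the inputs and rotating the output commute because the scalar coefficients are unchanged by rotation; this is the structural property that the GeoDiff-style equivariance proofs cited in the rebuttal rely on.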
An Expressive and Self-Adaptive Dynamical System for Efficient Function Learning
Accept (poster)
Summary: This paper introduces EADS, a framework for learning equations efficiently. EADS is inspired by the efficiency of natural systems in learning and solving complex equations. The authors argue that EADS overcomes the limitations of traditional ML methods, such as high computational cost and complex models. They demonstrate EADS's superior performance in various tasks, including graph learning, PDE solving, and LLMs. Claims And Evidence: The main claim is that EADS achieves higher accuracy than traditional ML methods. This is justified by experimental results on graph learning, PDE solving, and LLMs showing that EADS outperforms baselines in terms of accuracy. The proposed method also achieves much higher efficiency compared with those methods running on dedicated GPUs, for both inference and training. Methods And Evaluation Criteria: The performance of EADS is evaluated based on accuracy, training time, inference latency, and energy consumption. I have concerns regarding the evaluation baselines. 1. Regarding accuracy (e.g., Table 2), it is difficult to judge the effectiveness of the proposed method without mentioning the number of parameters in each method. 2. Regarding energy consumption (e.g., Section 4.4), why is a dedicated general-purpose GPU used for comparison with the proposed hardware, which is customized to the specific model design? A fair comparison should include NP-GL, which also significantly reduces the energy cost on CMOS-compatible computers. 3. The above concern also applies to training time, as the major improvement comes from not using a GPU rather than from the proposed method itself. 4. Including theoretical computational complexity could be a more suitable evaluation criterion for all algorithms, as it is independent of hardware constraints. Theoretical Claims: n/a Experimental Designs Or Analyses: In Figure 6, EADS achieves much faster training time than NP-GL due to its on-device training implementation.
In Section 4.3, it says "we replace a single decoder block with its corresponding EADS model while keeping all other components of GPT-2 unchanged". Is GPT-2 also implemented on the device? Can you specify the details? Details are also lacking regarding the architecture of the proposed method for each experimental setting (e.g., number of parameters, hidden units, input/output dimensions). Supplementary Material: n/a Relation To Broader Scientific Literature: n/a Essential References Not Discussed: n/a Other Strengths And Weaknesses: n/a Other Comments Or Suggestions: n/a Questions For Authors: How does EADS compare to other methods in terms of scalability and flexibility? Do we need to customize the hardware if the model is modified? Why is the method called a "dynamical system" rather than a "dynamical system model" or a more specific model like the Ising model? EADS is essentially a model to capture the system behavior, not the system itself. I am curious under what circumstances on-device training is needed. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful comments. We will address your questions below.

**1. The number of parameters**

We will report the number of parameters in the Appendix. Due to the character limit, we present parameters for one dataset below. For the PEMS04 dataset: Graph WaveNet: 250,689; MTGNN: 555,169; DDGCRN: 567,109; MegaCRN: 316,225; NP-GL: 188,498; EADS: 94,976.

**2. Include NP-GL in the comparison**

While NP-GL supports inference on a dynamical system, its training is conducted offline on GPUs. According to [1], NP-GL operates with similar inference power to EADS. Both methods achieve similar inference latency, resulting in comparable inference energy efficiency. However, for training, compared to NP-GL, EADS achieves an average ~$10^{2}\times$ training speedup and ~$10^5\times$ greater energy efficiency.

**3. Include theoretical computational complexity**

The computing paradigm of EADS is inherently different from that of digital methods, rendering their complexities not directly comparable. Specifically, GPU-based methods execute explicit instructions sequentially, while EADS operates via natural annealing, in which electrons (dis)charge capacitors to seek equilibrium. To provide a clear understanding, we define one operation in our DS as a single substantial (dis)charge event of a capacitor, taking ~0.2 ns. For a digital processor, a single instruction execution requires ~0.25 ns. Given that the operation execution times are comparable, we evaluate computational complexity by comparing their total numbers of operations. Our results show that EADS requires ~$10^3$ operations per inference, while GPU-based solutions require ~$10^5$ operations per inference. We will incorporate a detailed discussion.

**4. Experiment details on LLMs**

We design our LLM experiments to assess EADS's ability to learn the complex transformations embedded within each decoder of GPT-2. The entire GPT-2 model is not implemented on EADS; rather, only one decoder is implemented by EADS.
Although our system has the potential to implement all decoders, such an extension is beyond the current scope of our work. We will provide details in the manuscript. **5.Include the Settings of EADS** Thanks for your valuable suggestion. We will add a comprehensive section in the Appendix specifying EADS configurations. **6.Scalability and Flexibility of EADS** - Scalability: Dynamical systems have demonstrated strong scalability through single-chip and multi-chip solutions, as evidenced in [2-3]. Specifically, [2] proposes a single-chip solution, while [3] explores multi-chip solutions. Overall, the scalability of dynamical system machines is well-founded. - Flexibility of EADS: Since EADS enables on-device training, its parameters can be dynamically adjusted to suit different problems without hardware modifications, making it highly versatile. **7.Why Is the Method Called Dynamical System** We refer to EADS as a “dynamical system” because EADS is not merely a model—it is a software-hardware co-design implemented using programmable electronic components. **8.Under What Circumstances Is On-Device Training Needed?** On-device training is essential for several reasons: 1. Enabling a New Computing Paradigm: Instant on-device training extends the exceptional computational power of dynamical systems beyond inference to include training, thereby establishing a new AI paradigm with remarkable efficiency. 2. Applications Requiring Real-Time Adaptation or Facing Training Costs: - By implementing training directly on-device, EADS can update parameters automatically in response to new data in real time without the delays associated with off-device training, ensuring accurate predictions as underlying patterns evolve. - In scientific computing, the potential of data-driven equation solving is often hindered by prohibitive computational costs.
As ML models scale to achieve higher accuracy, training resources may exceed those required by traditional numerical solvers, limiting their practical benefits. Our on-device training directly addresses this challenge. 3. Advancing AI Paradigm Development: Recent research suggests that collocating inference and training on the same hardware can significantly reduce training costs while more closely mirroring biological intelligence [4]. There is also evidence indicating that the human hippocampus functions as a dynamical system with collocated training and inference [5]. Our proposed on-device training method offers a meaningful exploration in this domain. [1] Extending power of nature from binary to real-valued graph learning in real world [2] DS-GL: Advancing Graph Learning via Harnessing Nature’s Power within Scalable Dynamical Systems [3] Increasing Ising machine capacity with multi-chip architectures [4] The forward-forward algorithm: Some preliminary investigations [5] Attractor dynamics in the hippocampal representation of the local environment --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal work. As the proposed efficiency improvement arises from two sides, the numerical algorithm and hardware efficiency, my major concern still lies in the fact that the paper lacks a clear disentanglement of the two contributions, not to mention that the experiments are ultimately evaluated on a GPU. After checking other reviews, unfortunately it looks like all the reviewers (including me) are familiar with the algorithm side (Hamiltonian, equation discovery), not the hardware side. I will keep my score for now and look forward to discussing this further with the other reviewers and the AC. PS: from your rebuttal, why does EADS only take $10^3$ operations per inference with around 94k parameters? Are they not activated? --- Reply to Comment 1.1.1: Comment: Thank you very much for your insightful comment!
We fully understand the concerns and have provided structured clarifications below. **Why hardware/software in traditional codesign can be disentangled?** Traditional AI development follows a top-down paradigm, where the AI community designs universal algorithms that can be deployed across a wide range of digital processors, such as GPUs/TPUs/FPGAs. In this paradigm, the algorithm remains useful as hardware changes—highlighting a decoupled relationship between algorithm and hardware. This approach, where the AI algorithm lives/exists independently of the hardware that runs it, has been referred to as “immortal computing” [1] by Prof. Geoffrey Hinton. While this separation offers versatility, it also inevitably results in mismatches between AI algorithmic demands and hardware capabilities, further leading to inefficiencies and high computing costs in AI computing. To mitigate this, the computer systems community has focused on accelerating AI by narrowing this gap through algorithm-hardware co-design. However, in most existing approaches, the co-designed algorithm is still based on “immortal computing”, conceived as a sequence of instructions (e.g., MAC operations), hence can be executed by most digital hardware. This nature makes it relatively straightforward to disentangle contributions: algorithmic complexity is measured by the number of instructions, while hardware performance is measured by instruction throughput. **Why is entangled hardware/software important?** Traditional AI algorithm-hardware co-design remains vital to efficient AI development. However, it is important to note that we are entering an exciting yet challenging era of AI. As Moore’s Law approaches its limit—constraining further gains in computational power—while AI workloads continue to grow exponentially, exploring fundamentally more efficient computing paradigms is crucial for the long-term sustainability of AI development. 
One promising direction is the “mortal computing” paradigm, as introduced by Prof. Geoffrey Hinton (Sec. 9 in [1]), where hardware and software are tightly entangled with minimal mismatch—much like biological intelligence systems such as the brain. Evidence from prestigious scientific articles [2] shows that biological intelligence (brain): 1. Computes through stochastic and continuous processes, rather than deterministic, instruction-based sequences (e.g., A × B + C × D), and 2. Is inherently mortal—i.e., the "model" and the organ that realizes it are inseparable. For example, a monkey’s intelligence cannot be ported to a human brain, and individual differences in brain structures result in variations in human intelligence. **Why is our work inherently hard to disentangle?** Our work embraces the mortal computing paradigm. The hardware is a physical dynamical system that naturally evolves toward equilibrium (minimum energy) through a stochastic and continuous process known as natural annealing. The algorithm is grounded in the natural annealing process, rather than relying on a predefined sequence of instructions. This deep coupling between algorithm and hardware is key to the exceptional efficiency of our approach—but it also makes it difficult to isolate the contributions of hardware and software. In contrast to GPUs, which perform inference for universal models (e.g., neural networks) through step-by-step execution of instruction sequences, our system performs inference (EADS) via natural annealing—a process in which electrical current and voltage evolve continuously, driven by capacitor (dis)charge events, until equilibrium is reached. **Computational complexity and 94k parameters?** For a fair comparison between EADS and traditional neural networks, accuracy and total execution time are the most meaningful metrics. 
However, to provide an intuitive sense of computational complexity, we identify the primitive/basic computing unit in our system as a substantial capacitor (dis)charge event, which drives the evolution of the system toward equilibrium. Each such event takes approximately 0.2 nanoseconds—comparable to the execution time of a single instruction on a 4 GHz processor (~0.25 ns). Thus, we propose defining a single substantial (dis)charge event as one operation in our system. However, it's important to emphasize that, unlike in digital processors where each instruction typically involves only one or a few parameters, each operation in our system engages all 94K parameters simultaneously. In every operation, these parameters collectively influence the charging and discharging of capacitors, jointly driving the evolution of electrical voltage and current, and ultimately pushing the system toward equilibrium. [1] Hinton, Geoffrey. The forward-forward algorithm: Some preliminary investigations. arXiv. [2] Wills, Tom J., et al. Attractor dynamics in the hippocampal representation of the local environment. Science 308.5723.
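The natural-annealing picture described in this thread, electrical voltages evolving continuously until the system reaches the minimum of its Hamiltonian, can be mimicked with a toy digital simulation that integrates the gradient flow $dx/dt = -\partial\mathcal{H}/\partial x$ for the quadratic Hamiltonian discussed above. This is only an intuition sketch: the linear input-drive term `b`, the system size, and the Euler step are illustrative assumptions, not the actual circuit dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 16                                          # number of nodes (illustrative size)
J = rng.normal(0.0, 0.05, (N, N))
np.fill_diagonal(J, 0.0)
h = np.ones(N)
b = rng.normal(0.0, 1.0, N)                     # toy input drive (assumed, not from the paper)
S = J + J.T

def H(x):
    # H = -sum_{i != j} J_ij x_i x_j + sum_i h_i x_i^2 - b.x
    return -x @ J @ x + h @ x**2 - b @ x

x = rng.normal(0.0, 1.0, N)                     # arbitrary initial "voltages"
dt = 0.05
for _ in range(400):                            # Euler-integrated gradient flow dx/dt = -dH/dx
    x -= dt * (2.0 * h * x - S @ x - b)

x_star = np.linalg.solve(2.0 * np.diag(h) - S, b)   # closed-form equilibrium for comparison
print("distance to equilibrium:", float(np.linalg.norm(x - x_star)))
```

Because the (digital) simulation steps through the flow serially, it needs many small updates to reach the equilibrium that the analog system would settle into in one continuous evolution, which is the efficiency argument the rebuttal makes.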
Summary: The paper proposes an Expressive and self-Adaptive Dynamical System (EADS) that can learn a wide range of equations with efficiency. The authors propose an efficient on-device learning method that leverages intrinsic electrical signals to update parameters, making EADS self-adaptive at a reduced cost. The authors explore the accuracy of EADS and compare it to existing works, showing that EADS can provide speedups and energy efficiency over other methods. ## update after rebuttal In view of the authors' rebuttal, I updated my score 2 -> 3. Claims And Evidence: The paper shows evidence of good performance of the proposed method. Methods And Evaluation Criteria: The method is evaluated on a set of benchmark datasets. Some baselines are missing from the experimental evaluation. Notably, a genetic programming method should be included in the comparison and the Finite Element Method (FEM) for the PDE solving task. Theoretical Claims: Theoretical claims are not discussed in the paper. Experimental Designs Or Analyses: I did not run experiments to verify the results. Supplementary Material: I have reviewed the supplementary material. Relation To Broader Scientific Literature: The paper does not emphasize enough the relation of the proposed method with the literature on equation discovery (aka symbolic regression). The authors should discuss how their method relates to existing methods in the field of equation discovery. Essential References Not Discussed: The paper does not discuss any reference in the field of equation discovery, also known as symbolic regression. The authors should discuss how their method relates to existing methods in the field of equation discovery. Other Strengths And Weaknesses: - Strengths - The paper addresses the important question of efficiency in inference and training in machine learning. - Weakness - In my opinion, the language in the paper is not precise and too colorful for a scientific paper.
For example, "Nature presents an elegant solution to this computational crisis." or "Can we harness dynamical systems to create a nature-powered ML paradigm that learns and solves equations with revolutionary efficiency?". I would suggest the authors use more formal language. Other Comments Or Suggestions: 1. The paper repeatedly uses the idea of a dynamical system being able to "learn" equations. (e.g., "... dynamical systems effortlessly learn complex equations ...", "Natural systems effortlessly learn and solve complex equations through inherent dynamical processes.", etc). In mathematical physics, the term "dynamical system" refers to a system that evolves over time according to a set of rules, typically described by a system of differential equations. (From Wikipedia: "In mathematics, a dynamical system is a system in which a function describes the time dependence of a point in an ambient space, such as in a parametric curve.") Therefore, the equation that describes the dynamics of a dynamical system is not "learned" by the system, but rather is a fundamental property of the system. It is very important that the authors clarify this use of the term "learn" in the context of dynamical systems. 2. Rewrite line 327: "...we first evaluate the performance of EADS in graph learning tasks that it is originally deployed, showing the performance of EADS ...". Do you mean "in graph learning tasks that it is originally designed for"? 3. A definition of "inference latency" would be pertinent for the reader to understand the results presented in the paper. The authors should provide a definition of this term in the paper. 4. Line 306 "For hard-to-define equation learning in real-world problems, ..." What do the authors mean by "hard-to-define equation learning"? This sentence is vague and should be clarified. Questions For Authors: 1.
(Abstract) "While modern machine learning (ML) methods are powerful equation learners, their escalating complexity and high operational costs hinder sustainable development." What do the authors mean by "equation learners"? This is not a standard term in the ML literature. Is an LLM an "equation learner"? What do the authors mean by "sustainable development" in this context? This sentence is vague and should be clarified. 2. (Section 2.1). "Given a well-trained Hamiltonian that accurately captures the correlation between inputs and outputs, ..." How is the Hamiltonian trained? Can the authors provide an example of how to do this for a toy problem? I think that will help the reader to understand the concept better. 3. (Section 4.2) The authors present examples of PDE Solving. How is the Hamiltonian of the system trained in this case? Can the authors provide an example of how to do this for a simple parabolic PDE? 4. Experimental Results. A critical aspect of learning equations is the ability to inspect the learned equations and assess their interpretability. Can the authors provide some examples of the learned equations and discuss their interpretability? 5. Table 1. How does the method compare to other methods in the field of equation discovery (aka symbolic regression)? In particular, a well-established method in the field is genetic programming. How does the proposed method compare to genetic programming in terms of accuracy and efficiency? 6. (4.2 PDE Solving) The authors should compare with a Finite Element Method (FEM) solver for the PDEs. How does the proposed method compare to a FEM solver in terms of accuracy and efficiency? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thorough review and constructive feedback. We address each concern in detail. **1.Relation to Equation Discovery** We clarify that our work fundamentally differs from equation discovery: rather than discovering equations, our work focuses on efficiently solving equations with high speed and low computational cost while maintaining high accuracy. EADS works by optimizing its parameters through the proposed on-device training method to capture the joint distributions between inputs and outputs. Once trained, it can efficiently generate solutions for new inputs. In PDE solving, equation discovery methods would attempt to identify the underlying equations, while EADS aims to generate solutions to PDEs under varying inputs (different coefficients or initial conditions) as described in [1]. Importantly, our work extends to solving analytically intractable equations. We will add a subsection in the Related Work to state the relation. **2.Clarification of Learn in Dynamical Systems** We apologize for any confusion. As noted in Q1, "learning" is indeed misleading since our focus is on solving equations efficiently rather than learning them. We agree that EADS does not learn its inherent dynamics. However, it's worth noting that natural dynamical systems do adjust the parameters within their dynamics. A typical example is the human hippocampus, which functions as a dynamical system and adjusts synaptic strengths (i.e., parameters) between neurons [2]. We will revise the manuscript to replace “learn equations” with “tune the parameters within dynamics.” **3.Formality of language, Sentence Clarifications and Key Term Definitions** We will revise the manuscript to use more precise language. - Line 327: Revised to “in graph learning tasks that it is originally designed for.” - Inference Latency: Defined as the time delay between submitting an input to a trained model and receiving its output.
- Hard-to-Define Equation Learning: Revised to “For complex real-world problems where underlying equations are unknown or not readily expressible in closed form.” - Abstract: Revised to “While modern machine learning methods approximate complex functions, their escalating complexity and computational demands pose challenges to efficient deployment.” **4.Hamiltonian Training and Example** Training the Hamiltonian involves optimizing its parameters to capture correlations between nodes. Consider $\mathcal{H} = -\sum_{i\neq j}^{N} J_{ij} x_i x_j + \sum_{i=1}^{N} h_i x_i^2$, with trainable parameters $J,h$. Using a conditional likelihood method [3], the estimated state at equilibrium is: $\hat{x_i} = \frac{1}{2h_i}\sum_{j\neq i}^{N} (J_{ij}+J_{ji}) x_j.$ Then, the MSE loss is minimized via standard backpropagation to optimize the parameters. A detailed explanation and example will be included in Section 2.1. **6.EADS Training in PDE Solving** In PDE solving, EADS approximates solutions under varying inputs (coefficients or initial conditions). Training consists of optimizing the system parameters ($P,J,Q$) to capture the correlations between inputs and outputs. In one of the Darcy Flow examples [1], we solve the 2D Darcy Flow equation on the unit square with Dirichlet conditions. Each training sample comprises a 16×16 grid of the coefficient a(x) and its corresponding solution u(x). EADS learns the mapping from a to u, then rapidly generates solutions without iterative solvers. A detailed description will be added in the Appendix. **7.Interpretability and Comparison with Equation Discovery and FEM** As explained in our response to Q1, our method does not aim to derive explicit equations from data. Since the goal and required training data differ significantly between our method and equation discovery methods, it is difficult to incorporate genetic programming-based methods in our evaluation for a fair comparison.
To provide a reference, a method in the genetic programming-based equation discovery domain that was also evaluated on the Burgers' equation achieves an MSE of $4.33×10^{-5}$ [4], while our method achieves an average MSE of $9.37×10^{-6}$ under our evaluation. Regarding the comparison with FEM, our evaluation employs outputs from the FEM as ground truth, so we only compare their efficiency. On average, the inference latency of EADS is on the order of $10^{-7}$ seconds, while the latency of a typical FEM solver generally exceeds $10^{-3}$ seconds. Therefore, the efficiency of EADS is significantly better than that of FEM, especially on complex PDEs with higher resolutions. [1] Li, Z., et al. Fourier Neural Operator for Parametric Partial Differential Equations. ICLR. [2] Wills, T.J., et al. Attractor dynamics in the hippocampal representation of the local environment. Science, 2005. [3] Wu, C., et al. Extending power of nature from binary to real-valued graph learning in real world. ICLR. [4] Chen, Y., et al. Symbolic genetic algorithm for discovering open-form partial differential equations (SGA-PDE). Physical Review Research. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' detailed response to my comments. The acknowledgement of the misleading use of the term "learn" in the context of dynamical systems and subsequent revision of the manuscript to clarify this point will improve the clarity of the paper. This is especially important, considering that the paper is entitled "An expressive and self-adaptive dynamical system for efficient **equation learning**". I also appreciate the authors' willingness to revise the manuscript to use more precise language and clarify the sentences I pointed out. The authors' response to the questions regarding the PDE solving task is helpful. I will update my overall recommendation to reflect the authors' clarifications.
--- Reply to Comment 1.1.1: Comment: Dear Reviewer 17Sp, Thank you for your thoughtful feedback and for taking the time to review our response. We sincerely appreciate your acceptance of our work. Your constructive insights have been invaluable in improving the quality of the manuscript. We will carefully follow your suggestions and improve the manuscript accordingly. Please do not hesitate to let us know if there are any remaining concerns or additional details that we can address to further improve the manuscript. Thank you once again for your thorough and valuable review. Best regards, The Authors
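The conditional-likelihood Hamiltonian training outlined in point 4 of the rebuttal above can be reproduced digitally in a few lines. The sketch below fits the equilibrium estimate $\hat{x}_i = \frac{1}{2h_i}\sum_{j\neq i}(J_{ij}+J_{ji})x_j$ to targets by gradient descent on the MSE; the synthetic data, sizes, and learning rate are illustrative assumptions, and the paper's actual procedure runs on-device rather than via this offline loop.

```python
import numpy as np

rng = np.random.default_rng(0)
N, B = 8, 64                                  # nodes and training samples (illustrative sizes)

def predict(J, h, X):
    # Conditional-likelihood equilibrium estimate from the rebuttal:
    # x_hat_i = sum_{j != i} (J_ij + J_ji) x_j / (2 h_i)
    S = J + J.T
    np.fill_diagonal(S, 0.0)
    A = S / (2.0 * h[:, None])
    return X @ A.T, A

# Synthetic targets generated by a hidden "true" Hamiltonian.
J_true = rng.normal(0.0, 0.3, (N, N))
h_true = rng.uniform(1.0, 2.0, N)
X = rng.normal(0.0, 1.0, (B, N))
Y, _ = predict(J_true, h_true, X)

# Fit trainable (J, h) by full-batch gradient descent on the MSE.
J = rng.normal(0.0, 0.1, (N, N))
h = np.ones(N)
lr = 0.5
for _ in range(3000):
    Y_hat, A = predict(J, h, X)
    G = (2.0 / (B * N)) * (Y_hat - Y).T @ X   # dMSE/dA
    GA = G / (2.0 * h[:, None])               # chain rule: A_ij = (J_ij + J_ji) / (2 h_i)
    gJ = GA + GA.T                            # J_ij appears in both A_ij and A_ji
    np.fill_diagonal(gJ, 0.0)
    gh = -(G * A).sum(axis=1) / h             # dA_ij/dh_i = -A_ij / h_i
    J -= lr * gJ
    h -= lr * gh

mse = float(np.mean((predict(J, h, X)[0] - Y) ** 2))
print("final MSE:", mse)
```

Since the prediction is linear in the input for fixed parameters, the fit reduces to a (mildly overparameterized) least-squares problem, which is why a plain gradient loop suffices here.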
Summary: The authors propose a method to learn equations efficiently from data. Current methods such as neural networks have high complexity and high operational cost, which hinders widespread application. Recently, electronic dynamical systems have shown great promise in solving simple learning problems with great efficiency. However, current electronic dynamical systems lack sufficient expressivity to learn complex equations and also lack effective training support. To mitigate these limitations, the authors propose the Expressive and self-Adaptive Dynamical System (EADS), which integrates a hierarchical architecture and heterogeneous dynamics to increase expressivity, and they also propose an on-device learning method for efficiency. Experiments show that EADS has lower training times and higher/comparable accuracy compared to baselines. Claims And Evidence: The authors claim their method provides low training times and more expressive power. Experiments show that the training times are lower. The accuracy is clearly higher for one experimental setup, while for the others it is comparable. The comparable-accuracy experiments could use some additional explanation. Methods And Evaluation Criteria: The proposed method makes sense, as the authors clearly identify two gaps in the current literature, and their proposed method incorporates possible remedies for the identified gaps. The training times are measured for efficiency comparison, while accuracy is measured for expressivity comparison. Theoretical Claims: None Experimental Designs Or Analyses: I looked at the experimental designs; at a quick glance they look good, however I haven’t checked the details thoroughly.
Supplementary Material: No Relation To Broader Scientific Literature: Energy-efficient training is a major concern at present for complex machine learning models; the authors' work can significantly push the limit of training efficiency using hardware optimizations through dynamical systems. As scientific literature in many domains now heavily relies on machine learning methods and model training, this can have a broad impact on the overall literature. Essential References Not Discussed: None Other Strengths And Weaknesses: The authors present a very good introduction, clearly identifying gaps in the current literature and justifying their proposed solutions. I have a few suggestions that the authors can consider for improving the current manuscript: 1. Equation learning can be confused with symbolic equation learning/symbolic regression; I think a better name could be function learning. To avoid confusion, the authors can add a line or two in the introduction. 2. It would be great if the authors could provide a simplified visual diagram of the EADS workflow; that would be very helpful to interested readers. 3. The captions can be more detailed, highlighting the key message, i.e., that the figure shows EADS is more efficient. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your positive feedback and constructive suggestions. Below, we address each of your points in detail to further improve the manuscript. **1.Additional explanations for comparable accuracy experiments** Thank you for this insightful suggestion. In our evaluations, we compare EADS with NP-GL and deliberately selected state-of-the-art digital baselines that have consistently demonstrated exceptional accuracy through rigorous validation in prior research. Although the accuracy values may appear similar in magnitude, the actual improvements are significant. Specifically, compared to NP-GL, EADS achieves an average MAE reduction of 8.92% on spatial-temporal prediction, a 71.4% MAE reduction in PDE solving, and a 6.32% reduction in PPL on LLM tasks. Furthermore, compared to the best digital baseline, EADS yields a 15.10% MAE reduction on spatial-temporal prediction and a 4.62% MAE reduction on PDE solving, while substantially reducing computational demands. We will expand our analysis with additional explanations to better highlight the advantages of EADS. **2.Terminology clarification** Thank you for highlighting this potential confusion and for your valuable suggestion. We agree that the term “function learning” offers a more precise description of our work. We will change “equation learning” to “function learning” throughout the manuscript and differentiate our focus from symbolic regression in the Introduction. **3.Add visual diagram for EADS workflow** We appreciate your valuable suggestion. We will include a new figure in the Methodology section to illustrate the overall EADS workflow. As we cannot directly upload figures in the rebuttal, we provide a detailed description of the proposed diagram below. The diagram will be divided into two main components: - **Expressivity Enhancement:** We will visualize the hierarchical structure and heterogeneous dynamics with clear illustrations of information flow.
This visual will demonstrate how EADS progressively refines the input through multiple processing stages with heterogeneous dynamics. - **Instant On-Device Training:** This part will detail the parameter adjustment process. It will depict how intrinsic electrical signals are used as feedback to drive learning, illustrating the feedback loop where the output nodes’ electrical currents guide rapid, on-device parameter updates. **4.Enhanced figure and table captions to highlight key messages** Thank you for recommending more informative captions. In our revision, we will update the captions to explicitly highlight the key performance takeaways: - **Table 1:** Spatial-temporal prediction performance in MAE. EADS consistently outperforms all baselines across all datasets (best results in bold). - **Figure 4:** Training time and inference latency for spatial-temporal prediction. EADS demonstrates significantly higher efficiency than GPU-based baselines. - **Table 2:** Test MAE for PDE solving. EADS achieves superior accuracy compared to all baselines, with marginal improvements over FNO. - **Figure 5:** Training time and inference latency for PDE solving. EADS is markedly more efficient than GPU-based baselines. - **Table 3:** Test perplexity (PPL) on LAMBADA. EADS significantly outperforms NP-GL. - **Figure 6:** Training time and inference latency on LAMBADA. EADS exhibits superior efficiency compared to GPU-based methods.
SMART-PC: Skeletal Model Adaptation for Robust Test-Time Training in Point Clouds
Accept (poster)
Summary: The paper introduces a novel skeleton-based framework, SMART-PC, designed to enhance the robustness and efficiency of 3D point cloud classification models during test-time training (TTT). The paper leverages skeletal representations to extract robust geometric features that are less sensitive to corruptions, enabling the model to adapt effectively to test-time distribution shifts. Extensive experiments on several benchmarks demonstrate its effectiveness. Claims And Evidence: Refer to the Strengths and Weaknesses Methods And Evaluation Criteria: Refer to the Strengths and Weaknesses Theoretical Claims: Refer to the Strengths and Weaknesses Experimental Designs Or Analyses: Refer to the Strengths and Weaknesses Supplementary Material: Yes. All. Relation To Broader Scientific Literature: Refer to the Strengths and Weaknesses Essential References Not Discussed: Refer to the Strengths and Weaknesses Other Strengths And Weaknesses: Strengths: 1. The paper leverages skeletal representations to extract robust geometric features that are less sensitive to corruptions. 2. Extensive experiments on several benchmarks demonstrate its effectiveness. Weaknesses: 1. It is unclear why leveraging the skeleton representation can eliminate the need for backpropagation during adaptation; it would be better to add more explanation or analysis. 2. Since the authors claim the skeleton representation is more robust than existing representations such as raw point clouds, it would be better to demonstrate its generalization and extend the skeleton representation to more methods, not limited to MATE. 3. During the training process of predicting the skeletons, is there any additional supervision objective? The current framework predicts the skeleton (position + radius) of the input in an implicit way (Eq. 7, 8). I am not sure whether it truly works, since simply adding an extra reconstruction branch can already help the model extract more intrinsic information from the data [1,2]. 4.
The results of MATE [3] (both standard and online) on ModelNet40-C are not consistent with those reported in the paper. 5. In Tab. 1, a comparison with current methods like BFTT3D [4] and [5] is missing. 6. It would be more convincing to add comparisons with other methods regarding efficiency in Fig. 3. 7. The writing needs to be improved. This point will not affect my rating score. [1] Improving Language Understanding by Generative Pre-Training [2] Generative Pretraining from Pixels [3] MATE: Masked Autoencoders are Online 3D Test-Time Learners [4] Backpropagation-free Network for 3D Test-time Adaptation [5] Test-Time Adaptation in Point Clouds: Leveraging Sampling Variation with Weight Averaging Other Comments Or Suggestions: Refer to the Weaknesses. Questions For Authors: Refer to the Weaknesses. I will raise my rating once the weaknesses are addressed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the novelty of our skeleton-based framework and the effectiveness of skeletal representations, as well as our thorough experimental validation. **Weakness** **1 and 2**. Our method supports two modes of adaptation: one with backpropagation and one that is backpropagation-free. The goal of the backpropagation-free mode is to demonstrate that, during pretraining, the use of a skeletal prediction branch encourages the model to learn **robust and meaningful geometric features**. As a result, during test-time adaptation, simply updating the **statistical parameters** of the BatchNorm layers (i.e., running mean and variance) is sufficient to enhance performance on corrupted datasets. This effect is validated by our results in **Table 1** of the main paper. To further support this claim, we conducted additional experiments using the BFTT3D[1] method, employing three different pretrained models: 1. Org-SO: A baseline model pretrained with only an encoder and classification head. 2. MATE-SO: A model pretrained using the MATE framework, which includes an encoder, a reconstruction decoder, and a classification head. 3. SMART-PC-SO: Our model, pretrained with an encoder, a skeleton-based decoder, and a classification head. Each of these models was evaluated using the same BFTT3D adaptation strategy. The results in Table 1 show that our pretrained model significantly outperforms both the baseline and MATE-pretrained models in the backpropagation-free setting across all three datasets. This provides strong evidence that the skeletal prediction branch enables the model to extract more **structure-aware and corruption-resilient features**, which allow effective test-time adaptation even without updating model weights via backpropagation. 
**Table 1: Mean Accuracy (%) of BFTT3D with Different Pretrained Models** | Dataset | Org-SO | MATE-SO | SMART-PC-SO | |-|:-:|:-:|:-:| | ScanObjNN-C| 33.00| 33.22| **35.90** | | ModelNet40-C| 57.16| 54.71| **65.25**| | ShapeNet-C| 60.73| 53.07| **62.24**| [1] Backpropagation-free network for 3d test-time adaptation *** **3.** We confirm that our method does **not use any supervised ground truth for skeleton prediction**. Instead, the model learns to predict the skeleton structure (position and radius) in an **unsupervised manner** using the loss functions defined in Equations 11, 12, and 13. These losses serve as supervisory signals and include: (1) a point-to-sphere distance loss ensuring coverage of the input shape, (2) a skeleton-to-point loss encouraging compactness, and (3) a radius regularization term. Together, they provide strong geometric guidance, even without explicit supervision. Our goal is not to recover the point cloud itself (as in standard reconstruction branches), but to extract a more **structured and abstract representation** that captures the underlying geometry. The effectiveness of this approach is validated by the results in **Table 1 of the main paper**, which show that the features learned through the skeleton prediction branch lead to significantly better performance on corrupted datasets, especially in the backpropagation-free adaptation mode, compared to prior reconstruction-based methods like MATE. Additionally, we tested our pretrained model (SMART-PC-SO) with the BFTT3D method in the backpropagation-free setting, and compared it to Org-SO and MATE-SO. As shown in **Table 1**, our skeleton-pretrained model achieves higher robustness across corruptions, further supporting the generalizability and strength of the learned skeletal features. *** **4.** Since our implementation is largely based on the official MATE codebase, it is crucial to ensure the reproducibility of their reported results for a fair comparison. 
To this end, we faithfully reproduced their results using the exact code, hyperparameters, and pretrained models provided in their public GitHub repository (we observed a discrepancy in their reported results on ModelNet40-C and ScanObjectNN-C). For transparency and verification, we have also included the corresponding log files in our anonymous repository (https://anonymous.4open.science/r/SMART-PC-ICML-737C/). *** **5.** We thank the reviewer for pointing this out. Please refer to our response to Reviewer **k4Xv** for the updated **Table 1**. We regret that, due to space constraints, we are unable to include the full content here. *** **6.** We conducted this experiment, and the results are shown in **Table 2** in our response to reviewer **k4Xv**. For DDA and CloudFixer, we report the FPS values directly from their respective papers. For all other methods, we measured the FPS ourselves, including the time required for adaptation. Importantly, the reported FPS for SMART-PC corresponds to the **backpropagation-free mode**, highlighting its efficiency under test-time adaptation without gradient updates. *** **7.** We will revise and improve the writing of our paper in the final version. --- Rebuttal Comment 1.1: Comment: The authors addressed my concerns. I have updated my ratings. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for carefully considering our responses and updating the rating. If there are any further questions or suggestions, we would be happy to address them.
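For readers unfamiliar with skeletal supervision, the three unsupervised losses described in point 3 of the rebuttal above (point-to-sphere coverage, skeleton-to-point compactness, and radius regularization) can be sketched as follows. This is a minimal NumPy illustration in the spirit of Point2Skeleton; the exact forms of Equations 11-13 are not reproduced in this thread, so every formula below is an assumption for illustration, not the authors' implementation:

```python
import numpy as np

# P: input points (N, 3); C: predicted skeleton centers (K, 3); r: radii (K,)
# All three loss forms are illustrative assumptions, not the paper's Eqs. 11-13.
def skeletal_losses(P, C, r):
    # Pairwise distances between input points and skeleton centers, shape (N, K).
    d = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=-1)
    surface_gap = np.abs(d - r[None, :])  # distance of each point to each sphere surface
    # (1) point-to-sphere: every input point should lie near the surface of
    # its closest sphere (coverage of the input shape).
    point_to_sphere = surface_gap.min(axis=1).mean()
    # (2) skeleton-to-point: every sphere should stay close to the input
    # points (compactness of the skeleton).
    skeleton_to_point = surface_gap.min(axis=0).mean()
    # (3) radius regularization: discourage degenerate, overly small radii.
    radius_reg = -r.mean()
    return point_to_sphere, skeleton_to_point, radius_reg

rng = np.random.default_rng(0)
P = rng.normal(size=(512, 3))          # toy point cloud
C = rng.normal(size=(32, 3))           # toy skeleton centers
r = np.full(32, 0.5)                   # toy radii
l1, l2, l3 = skeletal_losses(P, C, r)
total = 0.3 * l1 + 1.0 * l2 + 0.4 * l3  # weighted sum of the three terms
```

Minimizing such a weighted sum over `C` and `r` would provide the geometric self-supervision signal without any skeleton ground truth.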
Summary: This paper proposes a test-time training method for point cloud classification that leverages skeletal representations. It aims to enhance the model's robustness to distribution shifts in test-time samples. To this end, it introduces a skeleton feature extraction branch alongside the classification branch to strengthen the model's encoder and decoder. Claims And Evidence: The authors' claims are clear and straightforward, and they effectively address the limitations of previous work. For example, MATE lacks geometric understanding, rendering it less robust to surface-level corruptions. Additionally, the authors tackle the efficiency issues present in MATE. These limitations are directly addressed by the solutions proposed in this paper. Methods And Evaluation Criteria: Most parts of the Method section are clear and self-contained, and Figure 2 effectively illustrates the architecture and training pipeline of SMART-PC. Here are a few minor recommendations: 1. The notation for the mask predictors is inconsistent: MLP_skel (Equation 7) and MLP_s (Figure 2) are used interchangeably (similarly for MLP_radius and MLP_r). Please standardize these terms to avoid confusion. 2. At the top of Figure 2, MLP_r appears to be connected sequentially after MLP_s, which does not match the equations in the main paper. Could the authors clarify this discrepancy? Theoretical Claims: This paper presents an empirical-results-driven method, without offering any theoretical contributions. Experimental Designs Or Analyses: Below are several suggestions to further enhance the experimental design: 1. Could the authors include visualizations that demonstrate how corrupted point clouds are restored or abstracted using skeleton points? 2. Would ablation studies on the skeletal losses help quantify their impact? 3. Is regularization necessary to prevent the radii from becoming excessively large? 4.
Are there any failure cases that could be analyzed to better understand the method’s limitations? 5. Can the authors explore a backpropagation-free strategy for SMART-PC-Standard? Supplementary Material: The reviewer read all parts of the supplementary material. As recommended above, it would be better to add more visualizations showing the effectiveness of the skeleton points. Relation To Broader Scientific Literature: Key contributions acknowledged by both the authors and the reviewer include: 1. This is the first work to improve test-time training for point cloud classification using skeletal representations. 2. The paper improves performance while achieving an efficient adaptation and inference pipeline. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Please see above. Other Comments Or Suggestions: Please see above. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for recognizing the novelty, clarity, and effectiveness of our method in improving robustness and efficiency for point cloud test-time training. **1. METHODS AND EVALUATION CRITERIA - Inconsistent Notation for Mask Predictors** We will correct the notations in the final version for clarity and consistency. *** **2. METHODS AND EVALUATION CRITERIA - Clarification on Decoder Structure in Figure 2** Our intention was to illustrate that each skeleton point has an associated radius, predicted separately using MLP\_s and MLP\_r. Both outputs are used to construct the final figure. For visualization purposes, they were shown connected. However, we acknowledge that this may cause confusion. In the final version, we will revise the figure to clearly separate these components and better reflect the equations in the main paper. *** **Theoretical Claims** By incorporating skeletal representation learning, we believe our model introduces a structural inductive bias that prioritizes shape and topology, which aligns with deep learning theory emphasizing that such biases improve generalization and robustness under distribution shifts [1]. [1] Shortcut learning in deep neural networks. Nature Machine Intelligence *** **EXPERIMENTAL DESIGNS OR ANALYSES** 1. We have included a visualization [Figure 1] (https://anonymous.4open.science/r/SMART-PC-ICML-737C/skeleton.pdf) that illustrates how our method abstracts both clean and corrupted point clouds into compact and meaningful skeleton representations. The top row shows a point cloud under different conditions (clean, uniform noise, background noise), and the middle row shows the corresponding skeletons. This visualization demonstrates that the model can estimate the skeleton under uniform noise while preserving the overall shape of the clean data, showing that the skeleton's abstract representation is less sensitive to noise than the original points (success case).
Under harsh noise (right), however, the skeletons attempt to cover all the background outlier points, which makes the radii excessively large and hinders the effective representation of the main inlier points. The accuracy gain (bottom row) shows this trend; more visualizations will be provided in the final version. *** 2. We provided an ablation study in **Table 1** to quantify the impact of each skeletal loss component. The best performance is achieved with coefficients **(0.3, 1.0, 0.4)**, as suggested in the Point2Skeleton paper, showing a clear improvement in mean corrupted accuracy (**72.95\%**) compared to other settings. This confirms that each loss term contributes to learning more robust features. ### Table 1: Ablation study of skeleton loss coefficients (ModelNet40 / ModelNet40-C, online adaptation) |Pt2Sphere|Sampling|RadiusReg|Source Acc(%)|Corrupted Acc(%)|| |:-:|:-:|:-:|:-:|:-:|-| |1.0|1.0|0.0|91.3|67.82|| |0.0|1.0|1.0|91.6|67.79|| |1.0|0.0|1.0|91.6|67.80|| |1.0|1.0|1.0|91.2|72.84|| |0.3|1.0|0.4|91.3|**72.95**|coefficients from Point2Skeleton paper| *** 3. As described in the main paper, the **Radius Regularization Loss** (Equation 6) is designed to avoid instability caused by overly small radii, especially under noisy conditions. This loss encourages the model to learn **larger and more stable radii**, which improves the robustness of the skeletal abstraction. Although we do not observe excessively large radii, the **Point-to-Sphere** and **Sampling** losses (Equations 11 and 12) implicitly constrain radius size by preserving geometric consistency. As shown in **Table 1**, removing the regularization term leads to a drop in performance, confirming its importance. *** 4. In [Figure 1] (https://anonymous.4open.science/r/SMART-PC-ICML-737C/skeleton.pdf), we have included a failure case (right) to showcase a potential limitation of the skeleton estimation.
From this figure, when the noise extends far beyond the main object's dimensions, the skeleton is distracted by this out-of-distribution change in scale: although it remains focused on the original points, its radii become excessively large, so the skeleton no longer represents the original point cloud's shape. *** 5. In standard mode, the batch size is 1, and the model is reset for each sample, so updating BN statistics (backpropagation-free) cannot help the model. To further investigate, we conducted an ablation study on the ScanObjectNN-C dataset in standard mode using the backpropagation-free setting. We tested batch sizes of 8, 16, and 32 with a single iteration for efficiency, as shown in **Table 2**. It shows that in standard mode, SMART-PC can improve performance on corrupted datasets in the backpropagation-free setting when the batch size is increased. ### Table 2: Ablation study of SMART-PC (standard mode and backpropagation-free) on ScanObjectNN-C. |Batch Size|Iteration|Mean Acc.(%)|| |:-:|:-:|:-:|-| |1|--|38.7|SMART-PC-SO (source only)| |8|1|39.73|| |16|1|39.96|| |32|1|39.98|| --- Rebuttal Comment 1.1: Comment: The authors addressed the reviewer's concerns and questions. I will maintain my initial rating, "weak accept". --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for taking the time to read our responses and for maintaining a positive recommendation. If there are any additional concerns or suggestions, we would be happy to hear them and address them to further improve our work.
Summary: This paper proposes a test-time adaptation framework for point cloud recognition, establishing a novel self-supervised fine-tuning paradigm that utilizes Skeletal Representation as a pretext task. By predicting skeletal points and their corresponding radii, the method extracts noise-insensitive geometric features. The authors claim that their approach eliminates the need for backpropagation during adaptation, significantly reducing computational overhead. Extensive comparative experiments are conducted, and the algorithm is validated under diverse experimental conditions. Claims And Evidence: The paper presents two main claims. The first is the use of Skeletal Representation as a pretext task to adapt the model to target domain data, which is both interesting and proven to be effective. The second claim is the direct updating of the mean and variance in the batch normalization (BN) layers to fine-tune model parameters without backpropagation, which indeed offers a novel perspective on model parameter adaptation. Both of these claims are clear and well-articulated. Methods And Evaluation Criteria: The proposal of using Skeletal Representation as a pretext task to assist in fine-tuning model parameters is quite interesting. Introducing this self-supervised method into the backbone appears to be highly effective for enabling the model to adapt to target domains. I believe this contribution is well-suited for research on test-time adaptation in point cloud recognition, and it could also be extended to related fields. Theoretical Claims: In Section 3.4, when discussing the "Classification Loss", the paper mentions, "we train the network using labeled source data." Does this imply that source domain data is being used during the test-time training phase? If labels from the source domain data are indeed utilized, I believe this is inappropriate. 
Any modules that rely on source domain data for training should not be incorporated into the TTT (Test-time Training) process; they should be part of the source domain training phase rather than the test-time training phase [1][2]. I hope the authors can clarify the details here, as I consider this to be of significant importance. [1]. Liang J, He R, Tan T. A comprehensive survey on test-time adaptation under distribution shifts[J]. International Journal of Computer Vision, 2025, 133(1): 31-64. [2]. Sun Y, Wang X, Liu Z, et al. Test-time training with self-supervision for generalization under distribution shifts[C]//International conference on machine learning. PMLR, 2020: 9229-9248. Experimental Designs Or Analyses: 1. The comparative methods in this paper appear outdated. It is recommended that the authors include more recent algorithms specifically designed for test-time adaptation in point cloud recognition [1][2], or adapt general test-time adaptation algorithms to the point cloud recognition task [3] for a more comprehensive comparison. 2. The experiments in this paper are quite comprehensive, encompassing both online adaptation and standard adaptation tests. The analysis of the batch normalization statistics, in particular, further substantiates the rationale for the lightweight design. [1]. Shim H, Kim C, Yang E. CloudFixer: Test-Time Adaptation for 3D Point Clouds via Diffusion-Guided Geometric Transformation[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024: 454-471. [2]. Wang Y, Cheraghian A, Hayder Z, et al. Backpropagation-free network for 3d test-time adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 23231-23241. [3]. Yuan Y, Xu B, Hou L, et al. Tea: Test-time energy adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 23901-23911. Supplementary Material: Yes. 
The supplementary materials also include a wealth of experiments, analyzing various dimensions such as parameter sensitivity, cross-dataset performance, and the performance with different severity levels. Relation To Broader Scientific Literature: 1. [1] demonstrated that constructing an efficient pretext task in point cloud recognition can effectively enable model adaptation to target domain data. The current study further explores and proposes a novel method for designing pretext tasks specifically for test-time adaptation in point cloud recognition. 2. The analysis of batch normalization layer statistics has frequently been utilized in past studies on test-time adaptation for segmentation tasks [2][3]. However, most approaches involve constructing loss functions based on the mean and variance of BN layers to minimize the discrepancy between the source and target domains. This paper introduces a new approach by directly updating these two parameters without backpropagation. Experimental results show promising outcomes while maintaining recognition accuracy. [1]. Mirza M J, Shin I, Lin W, et al. Mate: Masked autoencoders are online 3d test-time learners[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 16709-16718. [2]. Shiyu Liu, Daoqiang Zhang, Xiaoke Hao. Efficient Deformable Convolutional Prompt for Continual Test-Time Adaptation in Medical Image Segmentation. AAAI 2025. [3]. Chen Z, Pan Y, Ye Y, et al. Each test image deserves a specific prompt: Continual test-time adaptation for 2d medical image segmentation[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2024: 11184-11193. Essential References Not Discussed: [1] and [2], published at ECCV 2024 and CVPR 2024 respectively, are highly relevant to this study, focusing on test-time adaptation for point cloud recognition. 
While [1] is not cited in this work, [2] is mentioned in the related work section with the statement: 'In addition to these approaches, several works on test-time adaptation have explored updating model parameters during inference to handle distribution shifts effectively.' However, [2] presents a classic test-time adaptation algorithm for point cloud recognition that does not require backpropagation, which aligns closely with the motivation of this paper. Despite this, it is neither included in the comparative experiments nor analyzed in the discussion. [1]. Shim H, Kim C, Yang E. CloudFixer: Test-Time Adaptation for 3D Point Clouds via Diffusion-Guided Geometric Transformation[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2024: 454-471. [2]. Wang Y, Cheraghian A, Hayder Z, et al. Backpropagation-free network for 3d test-time adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 23231-23241. Other Strengths And Weaknesses: The two main contributions of this paper are highly insightful, as they adeptly apply previous research to the current task and achieve performance improvements. Other Comments Or Suggestions: I recommend that the authors provide a more detailed description of the batch normalization layer parameter updates to distinguish their approach from previous methods that directly update the BN layer weights and biases by backpropagation. This would help readers more clearly understand the authors' contributions. Questions For Authors: 1. In the Classification Branch, the authors mention "adding the features from the encoder and decoder together", but I did not fully understand this description. Additionally, the main network diagram (Figure 2) does not clearly illustrate which specific features are being added or how this operation is performed. Why not directly feed the features obtained from the encoder into the classification head? 2.
How exactly do the authors use the source domain label data in this paper? Is it during the TTT process? (Details are given under Theoretical Claims.) 3. Compared to [1], which backpropagation-free method is more effective? What are the advantages of the backpropagation-free approach proposed in this paper? [1]. Wang Y, Cheraghian A, Hayder Z, et al. Backpropagation-free network for 3d test-time adaptation[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 23231-23241. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer’s thoughtful feedback and their recognition of our contribution to pretext task design and its connection to prior work in test-time adaptation for point cloud recognition. **1. Theoretical Claims** To clarify, **our method does not use labeled source data during the test-time training phase**. The classification loss discussed in Section 3.4 is only applied **during the pre-training phase** on the source dataset to learn the initial model parameters. During test-time training, only the target domain data without labels is used, and no source labels are required. Only the encoder and decoder are adapted at test time; the classifier is frozen (Figure 2). Furthermore, we emphasize that our method **strictly follows the standard definition of Test-Time Training (TTT)**, as established in prior works such as the MATE paper. We acknowledge that our wording in Section 3.4 may have caused confusion, and we will clarify this distinction in the final version of the paper. *** **2. Comparison with Recent Test-Time Adaptation Methods** Thank you for the suggestion. As shown in **Table 1**, we reproduced **BFTT3D[2]** and **SVWA[3]** using our settings. Since SVWA produced very low results in our standard setting (ModelNet40-C: 5.54% and ScanObjectNN-C: 21.79%), we used BS=4, It=1, and N_v=2 for adaptation with the pretrained model available on their GitHub repository. Diffusion-based methods (**CloudFixer[1]**, **DDA[4]**) are reported separately due to their slow frames-per-second (FPS) rates (see **Table 2**). Due to time constraints, we were unable to reproduce them during the rebuttal period. [1] CloudFixer: Test-Time Adaptation for 3D Point Clouds via Diffusion-Guided Geometric Transformation [2] Backpropagation-free network for 3d test-time adaptation [3] Time Adaptation in Point Clouds: Leveraging Sampling Variation with Weight Averaging. [4] Diffusion-driven adaptation to test-time corruption.
### Table 1: Top-1 Accuracy (%) Comparison With More Recent Methods (🆕) |Method|ModelNet40-C|ScanObjectNN-C|ShapeNet-C| |-|:-:|:-:|:-:| |**Source-Only**| |Org|54.0|37.0|61.3| |MATE|53.7|34.5|56.5| |**SMART-PC**|**61.7**|**38.7**|**64.5**| |**Diffusion**| |🆕CloudFixer-Standard|68.0|-|-| |🆕CloudFixer-Online|77.2|-|-| |🆕DDA-Standard|68.1|-|-| |**Standard**| |🆕SVWA|57.1|37.4|50.5| |MATE|58.9|36.9|63.1| |**SMART-PC**|**63.1**|**39.6**|**64.4**| |**Online**| |🆕BFTT3D|57.2|33.0|60.7| |MATE|69.6|36.9|**69.1**| |SMART-PC†|70.8|46.7|65.9| |**SMART-PC**|**72.9**|**47.4**|67.1| ### Table 2: Frames-per-Second (FPS) Comparison; N_v is the number of sampling variations. |Method|FPS| |-|:-:| |DDA|0.04| |CloudFixer|1.07| |BFTT3D|6.83| |SVWA|10.86 (N_v=2)| |MATE|10.79| |**SMART-PC**|**59.52**| *** **3. Clarification on Batch Normalization Updates** During pretraining, our method learns more abstract and robust features through the skeleton prediction branch. These features are resilient to corruption, such that during test-time adaptation, simply updating the **statistical parameters** of the BatchNorm layers (i.e., running mean and variance) can effectively suppress noise without requiring backpropagation. As shown in [Figure 1] (https://anonymous.4open.science/r/SMART-PC-ICML-737C/skeleton.pdf), the predicted skeleton offers a noise-resistant representation that aligns corrupted inputs with clean data. Further details will be provided in the supplementary material. *** **4. Clarification on Feature Fusion in the Classification Branch** The encoder and decoder outputs share the same shape **(B, N, D)**, where **B** is the batch size, **N** the number of tokens, and **D** the feature dimension. These features are combined through **element-wise addition** and passed to the classification head. This design enriches the encoder’s representation with structural information from the decoder, which captures the **skeletal geometry** of the object.
As shown in **Table 2** in the main paper, this improves classification accuracy. We will clarify this mechanism and revise **Figure 2** in the final version. *** **5. Comparison and Advantages of the Proposed Backpropagation-Free Approach over BFTT3D** Unlike **BFTT3D**, which uses class-based prototypes from the source dataset during adaptation, our method relies solely on the target dataset without accessing any source information. Another key distinction is efficiency: as shown in **Table 2**, our method in the backpropagation-free mode achieves significantly higher **FPS** compared to **MATE**, **SVWA**, **BFTT3D**, **CloudFixer**, and **DDA**. In terms of performance, our method also outperforms **BFTT3D** across three corrupted datasets, as reported in **Table 1**. Together, Tables 1 and 2 demonstrate that our skeleton-based pretraining enables the model to learn robust geometric features, allowing effective adaptation with only statistical parameter updates, achieving both high accuracy and fast inference. --- Rebuttal Comment 1.1: Comment: The author has addressed my concerns clearly. As mentioned by the author, some more complex experiments may not be fully completed during the rebuttal period, but the existing experimental results already demonstrate the effectiveness of the proposed method. Furthermore, the author has provided clear responses regarding the experimental setting, and I hope the authors will provide a clearer explanation in the subsequent version. However, I would still like to discuss one issue with the authors. Is the direction of updating the statistical parameters in BatchNorm layers interpretable? Can the distribution of BN layer statistics obtained during forward propagation on the target domain be meaningfully linked to the characteristics of the current domain shift? The authors need not conduct additional experiments but could address this question based on existing results or their observational analysis. 
--- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their positive feedback and thoughtful comments. Updating the running mean and variance in BatchNorm layers has been shown to improve robustness under covariate shift [1]. Furthermore, the ADABN [2] paper supports the significance of this mechanism by stating that ***" label related knowledge is stored in the weight matrix of each layer, whereas domain related knowledge is represented by the statistics of the Batch Normalization (BN)"*** This highlights that updating BN statistics is a meaningful and effective approach for handling domain shifts. To analyze the effect of BatchNorm statistics update in our setting, we visualized the impact of distribution shift on the BatchNorm input and how updating the statistics can help realign the model to the new distribution. In [Figure 1] (https://anonymous.4open.science/r/SMART-PC-ICML-737C/Batch_Norm_Statistics.pdf), the Gaussian solid curves represent the statistics of the input data. As observed, the source distribution (blue) aligns well with the accumulated statistics from pre-training (black dashed curve), resulting in a centered and scaled distribution with zero mean and unit variance. However, when facing a distribution shift (red) in the center column, the pre-training-time accumulated statistics no longer align with the corrupted target distribution, leading to an inconsistent input distribution to the subsequent layers—compared to what the model has seen during training (i.e., covariate shift). This misalignment is clearly visible in the center column as the distance between the red solid curve and the black dashed curve. By updating the BatchNorm statistics, the running mean and variance shift toward the target distribution. This helps mitigate the covariate shift introduced by the target domain (right column). 
It is evident that updating the batch statistics moves the BatchNorm statistics (green dashed curves) closer to the data distribution (solid curves). This pattern is consistent across different channels of both the first BatchNorm layer in the encoder (top row) and the classification head (bottom row). For each BN layer, channel input values are aggregated across batch samples and tokens, and plotted as histograms of the values. Similar to the input data statistics, the BN statistics curves are plotted as Gaussian curves, computed from the corresponding running mean and running variance. If there are any additional concerns or suggestions, we would be happy to hear them and address them to further improve our work. --- [1] Nado, Zachary, Shreyas Padhy, D. Sculley, Alexander D'Amour, Balaji Lakshminarayanan, and Jasper Snoek. "Evaluating prediction-time batch normalization for robustness under covariate shift." arXiv preprint arXiv:2006.10963 (2020). [2] Li, Yanghao, Naiyan Wang, Jianping Shi, Xiaodi Hou, and Jiaying Liu. "Adaptive batch normalization for practical domain adaptation." Pattern Recognition 80 (2018): 109-117.
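The backpropagation-free adaptation discussed in this thread, refreshing only the BatchNorm running statistics on target-domain batches while all weights stay frozen, can be sketched as follows. This is a minimal NumPy illustration under an assumed momentum-style update rule, not the authors' implementation:

```python
import numpy as np

# Sketch (an assumption, not the paper's code) of backpropagation-free
# BatchNorm adaptation: only the running mean/variance are updated from
# target-domain batches; weights and affine BN parameters stay frozen.
class BatchNorm1dStats:
    def __init__(self, dim, momentum=0.1):
        # Source-domain statistics accumulated during pretraining
        # (here: zero mean, unit variance).
        self.running_mean = np.zeros(dim)
        self.running_var = np.ones(dim)
        self.momentum = momentum

    def adapt(self, x):
        # A forward pass on a target batch updates only the statistics.
        m = self.momentum
        self.running_mean = (1 - m) * self.running_mean + m * x.mean(axis=0)
        self.running_var = (1 - m) * self.running_var + m * x.var(axis=0)

    def normalize(self, x, eps=1e-5):
        return (x - self.running_mean) / np.sqrt(self.running_var + eps)

rng = np.random.default_rng(0)
bn = BatchNorm1dStats(dim=4)
# Corrupted target domain: shifted mean and inflated variance (covariate shift).
target = rng.normal(loc=3.0, scale=2.0, size=(256, 4))
before = abs(bn.normalize(target).mean())  # large: stats misaligned with target
for _ in range(50):                        # repeated adaptation forward passes
    bn.adapt(target)
after = abs(bn.normalize(target).mean())   # small: stats realigned with target
```

After the updates, the normalized target activations are re-centered, which mirrors the realignment between the dashed (BN statistics) and solid (data) curves described above.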
Do We Really Need Message Passing in Brain Network Modeling?
Accept (spotlight poster)
Summary: This paper investigates brain network modeling and identifies previous methods' shortcomings, including Graph Neural Network (GNN)-based methods and Graph Transformer (GT)-based methods. Specifically, they often use the Pearson correlation coefficients between pairs of ROIs (Regions of Interest) to construct brain networks, which function as node attributes and graph topology. Based on it, this paper introduces a novel Brain Quadratic Network (BQN) using the Hadamard product instead of the matrix product in previous methods. Extensive comparative experiments demonstrate the effectiveness and efficiency of the proposed BQN. ## update after rebuttal The authors have adequately addressed my concerns; therefore, I would like to maintain my positive evaluation. Claims And Evidence: The proposed BQN is well-founded and superior, as evidenced by both theoretical analysis and experimental results. Methods And Evaluation Criteria: The proposed BQN makes sense for brain network analysis. It is novel in its utilization of a simple yet effective quadratic network. Theoretical Claims: After conducting a thorough review of the proof, I have essentially confirmed its correctness. Experimental Designs Or Analyses: The experiments are validated for their soundness. They are designed using widely recognized datasets and criteria. Performance is verified across diverse datasets, and hyperparameter analyses are carried out. Supplementary Material: There is no supplementary material for checking. Relation To Broader Scientific Literature: Previous methods have primarily focused on designing specialized GNN and GT models for brain network analysis. This paper challenges this conventional approach, which is innovative and promising. It reveals that existing GNN- and GT-based models essentially rely on the generated Pearson correlation matrices in a redundant manner. To address this issue, this paper proposes an efficient Brain Quadratic Network (BQN). 
Essential References Not Discussed: There is no other relevant literature that needs to be discussed. Other Strengths And Weaknesses: **Strengths** 1) The motivation of this paper is meaningful for brain network analysis. The authors are ambitious and reasonably challenge the rationality of message passing in previous methods. 2) Figure 1 is presented with exceptional clarity and ease of comprehension. Figures 2 and 3 provide clear insights and design motivation. 3) The proposed Brain Quadratic Network (BQN) is simple yet underpinned by robust theoretical foundations. **Weaknesses** 1) Reproducibility is a concern. Although the authors claim significant performance improvements over state-of-the-art models with a relatively simple architecture, they have not provided code access, making independent verification challenging. 2) Some of the related work mentioned appears to be extraneous or not directly relevant to the scope of this study. While Graph Neural Networks (GNNs) are foundational models for brain network analysis, it is unclear which specific models, especially SGC and APPNP in Eq. 3, are being utilized in this context. 3) Although the meaning can be understood, the expressions "with" and "without" used in Figure 4 are not conventional. The authors are advised to adopt more standard notations, such as "BQN" and "w/o residual", for clarity. Other Comments Or Suggestions: 1) The statement of matrix multiplication is inconsistent. For example, the notation $\mathbf{A}\cdot \mathbf{X}$ is used in Eq. 3, while $\mathbf{Z}\cdot \mathbf{W}$ appears in Eq. 4. 2) In the caption of Figure 4, QBN is clearly a misspelling. Questions For Authors: Refer to Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Q1.Reproducibility is a concern. Although the authors claim significant performance improvements over state-of-the-art models with a relatively simple architecture, they have not provided code access, making independent verification challenging. R1. According to your suggestion, the source code has been made available at [https://anonymous.4open.science/r/BQN-demo](https://anonymous.4open.science/r/BQN-demo) for verification purposes. > Q2. Some of the related work mentioned appears to be extraneous or not directly relevant to the scope of this study. While Graph Neural Networks (GNNs) are foundational models for brain network analysis, it is unclear which specific models, especially SGC and APPNP in Eq. 3, are being utilized in this context. R2. We agree that the inclusion of certain related work, such as SGC and APPNP, may not be directly relevant to the scope of this study. While these models are foundational in the context of GNNs and their propagation mechanisms are indeed important for understanding message-passing models, their specific application in brain network analysis is limited. Therefore, we will adjust the manuscript to focus more directly on the models and methods that are most relevant to our study. Specifically, we will move the detailed discussion of SGC and APPNP to the appendix. > Q3.Although the meaning can be understood, the expressions "with" and "without" used in Figure 4 are not conventional. The authors are advised to adopt more standard notations, such as "BQN" and "w/o residual" for clarity. And in the caption of Figure 4, QBN is clearly a misspelling. R3. Thanks for your careful review. We will revise the legend in Figure 4 to "BQN" and "w/o residual" as suggested. Additionally, we will conduct a thorough review of the manuscript to correct any misspellings, including the error in the caption of Figure 4. > Q4. The statement of matrix multiplication is inconsistent. 
For example, the notation $\mathbf{A} \cdot \mathbf{X}$ is used in Eq.3, while $\mathbf{Z} \cdot \mathbf{W}$ appears in Eq. 4. R4. Thank you for pointing this out. We will revise the notation to maintain consistency. Specifically, we will update Eq. 4 to use the same matrix multiplication notation as Eq. 3, removing the symbol $\cdot$. This will ensure uniformity in our mathematical expressions.
Summary: The paper identifies a limitation in the message-passing framework for brain network analysis and proposes an approach, the Brain Quadratic Network (BQN), to address this issue. BQN demonstrates superior performance compared to standard Graph Neural Networks (GNNs) and graph transformers on widely used brain network datasets, highlighting its potential for advancing graph-based neuroimaging analysis. Update of review after rebuttal: the authors have addressed my concerns and I have increased my rating. Claims And Evidence: Appears to be. Methods And Evaluation Criteria: The paper uses widely adopted datasets and metrics in brain network analysis for the empirical assessment. Theoretical Claims: Appears to be sound. Experimental Designs Or Analyses: 1. More recent and relevant related studies should be included and compared for more convincing conclusions on the superiority of the proposed method. Some examples include [1] Cho, H., Sim, J., et al. Neurodegenerative brain network classification via adaptive diffusion with temporal regularization. In International Conference on Machine Learning (ICML), 2024. [2] Zhang, et al. A-GCL: Adversarial graph contrastive learning for fmri analysis to diagnose neurodevelopmental disorders. Medical Image Analysis, 90:102932, 2023. [3] Ma, Hao, Yongkang Xu, and Lixia Tian. "RS-MAE: Region-State Masked Autoencoder for Neuropsychiatric Disorder Classifications Based on Resting-State fMRI." IEEE Transactions on Neural Networks and Learning Systems (2024). [4] Shehzad, Ahsan, et al. "Multiscale Graph Transformer for Brain Disorder Diagnosis." IEEE Transactions on Consumer Electronics (2025). 2. It appears that the performance on the two datasets are very different from those reported in the related literature, which raises the doubt of the validity of the results. For instance, the accuracy on ADNI/ABIDE reported on ALTER is around 67%/70% but that reported in the original paper is 74%/77%. 
Similar issues exist for ContrastPool with even larger gaps. What causes such a large difference in performance? 3. The dataset ADNI used is small with around 100 samples. With such a small sample size, how would the proposed method adequately train a reliable model without severe overfitting? A convergence plot is expected to support this. 4. The authors should provide a detailed description of how they select the readout function for the final classification. Given their claim that BQN operates differently from GNNs and Graphormers, it is unclear whether traditional readout mechanisms remain valid for BQN. A more thorough discussion is needed to clarify whether the chosen readout is theoretically aligned with the proposed framework. 5. For the case study, the authors should clearly explain how they transform the output of the final BQN layer into brain graphs. Specifically, it is unclear whether this conversion is based on a thresholding mechanism, top-k edge selection, or other criterion. Providing such details would enhance the interpretability and reproducibility of the method. Supplementary Material: There is no supplementary material for this paper. Relation To Broader Scientific Literature: This paper addresses an important open question: Is message passing truly necessary for certain graph-related tasks? Prior research has shown that in heterophilic graphs, message passing in GNNs can sometimes degrade node classification performance. The authors contribute to this discussion by providing both theoretical and experimental evidence in the context of static brain network analysis, suggesting that message passing may not be a crucial component for achieving strong performance in this domain. Essential References Not Discussed: Many closely related recent works in brain network analysis are not included and compared (see the previous comment on "Experimental Designs Or Analyses"). Other Strengths And Weaknesses: Strengths: 1. 
BQN demonstrates strong computational efficiency, making it a promising approach for large-scale brain network analysis. 2. The design of BQN is relatively simple yet effective, striking a balance between model complexity and performance. Weaknesses: 1. Limited baselines selected. 2. Doubtful inconsistent empirical results. 3. Small dataset used for ADNI. More large-scale datasets should be used. 4. More clarifications on the readout function and the case study. Other Comments Or Suggestions: Nil Questions For Authors: Please see the comments above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1. Stacking multiple layers of Eq. 10 is theoretically equivalent to a single-layer formulation. The paper lacks theoretical or empirical justification for why stacking multiple layers of Eq. 10 leads to performance gains. R1. There may be a serious misunderstanding. Stacking multiple layers of Eq. 10 is **NOT** equivalent to a single-layer formulation. Take the case of a two-layer BQN as an example. The formulation $AW_A^1 \odot AW_A^2$ at the vector level is equivalent to $(aw_a^1)(aw_a^2)$, where $W_A^l$ denotes the linear transformation of layer $l$. Taking the example of $a$ containing two variables, $(aw_a^1)(aw_a^2) = (a_1w_{a1}^1 + a_2w_{a2}^1)(a_1w_{a1}^2 + a_2w_{a2}^2) \neq a_1w_{1} + a_2w_{2}$, which clearly contains multiplicative interactions between the variables $a_1$ and $a_2$. Thus, at the matrix level $AW_A^1 \odot AW_A^2 \neq AW_A$, which means stacking multiple layers of Eq. 10 is not equal to the single-layer formulation. Besides, the performance gains of multiple layers come from the community structure captured in Eq. 11, which is the objective function of NMF-based community detection. Eq. 14 is the **iterative** updating rule for solving Eq. 11, and is equivalent to Eq. 10. Thus, stacking multiple layers of Eq. 10 amounts to the **iterative** updating rule for solving Eq. 11, which learns a better community structure and guarantees the performance gains. > Q2. More related studies and large datasets should be included. R2. We have incorporated the mentioned methods into our effectiveness and efficiency comparison and expanded our evaluation to include two additional datasets (ADHD-200 and PPMI), both larger than ADNI. All methods use the same data preprocessing, brain network construction and uniform data partition. No comparison was made with the work [3] because its source code is not available. AGT requires datasets with more than two classes, limiting its evaluation to the PPMI dataset.
The experimental results are shown in https://anonymous.4open.science/r/BQN-demo/figure/Table.jpg The table reveals that BQN outperforms the baselines on ABIDE, ADNI, and ADHD-200, demonstrating its superiority. On the PPMI dataset, BQN is not superior to AGT but still shows competitive performance while achieving the best efficiency. This highlights the promising potential of our model. > Q3. The reported results of ALTER and ContrastPool are different from original papers. R3. Existing GNNs and Transformers for brain networks differ greatly in data processing and model selection, which makes it difficult to assess their performance fairly. To alleviate this difficulty, **this paper unifies these settings and compares methods in a fair manner**. - **model selection**. We unify the early stopping criterion as the lowest loss on the validation set. This differs from ALTER, which uses the criterion of the highest AUC, and causes the performance gap of ALTER. We also find that the proposed BQN outperforms ALTER under the highest-AUC criterion. - **data processing**. We unify data preprocessing and brain network construction as in BioBGT[ICLR'25], ALTER, and BrainNETTF. This differs from ContrastPool, which follows [5], and causes the performance gap of ContrastPool. Existing work has demonstrated that different brain network construction methods can lead to different performance. [5]Data-driven network neuroscience: On data collection and benchmark. NeurIPS, 2023 > Q4. A convergence plot is expected to support a reliable trained model using the ADNI dataset. R4. The convergence plot, which includes training loss, validation loss, and test accuracy using BQN, is available at https://anonymous.4open.science/r/BQN-demo/figure/BQN_convergence.jpg. The overall trend of the three experimental metrics is steady, despite some fluctuations. This indicates that the model generalizes well without severe overfitting. > Q5.
The authors should clarify that the chosen readout function theoretically aligns with the BQN framework. R5. We employ OCREAD, which concatenates the embeddings of cluster centers, as the readout function in our proposed method, consistent with BrainNETTF and ALTER. OCREAD exploits the clustering property for readout, which is consistent with the quadratic network in BQN as shown in Theorem 5.1. Besides, traditional readout mechanisms, such as MEAN and MAX, remain valid for BQN, since they meet the requirements of readout mechanisms mapping node embeddings to graph embeddings. However, we believe their performance is not as good as OCREAD's, since the clustering property is not considered. > Q6. Elaborate the transformation from the output of the model to brain graphs and provide thresholding details in the conversion. R6. Given the output of the final BQN layer $A$, it is first symmetrized as $A=(A+A^T)/2$. Next, $A_{Template}^{ASD}$, $A_{Template}^{NC}$ and $A_{contrast}$ are obtained by the equations in Lines 423, 424 and 428, respectively. Finally, the top-20 edges with the largest weights from $A_{contrast}$ are selected for visualization. --- Rebuttal Comment 1.1: Comment: Thank you for the response and apologies for the misunderstanding on stacking multi-layers of Eq. (10). I have some further questions: - Q2: Why does BQN perform poorer than AGT on PPMI on most of the metrics? Why are the results of AGT missing on ADHD-200? - Q3: what are the results of BQN and ALTER with the criteria of highest AUC? It is claimed that this paper unifies data preprocessing and brain network construction as BioBGT[ICLR'25], ALTER, and BrainNETTF. Nonetheless, the dataset preparation in these studies does not appear to be performed in a unified way. For instance, ABIDE is parcellated by the Craddock 200 atlas, while ADNI is by the AAL-90 atlas.
Different pre-processing tools are adopted for different datasets as well: ABIDE is pre-processed by PCP using five different tools; ADNI is pre-processed by DPARSF. Additionally, the number of samples in ADNI used in this work (124 samples) is significantly smaller than that used in BioBGT (407 samples). Any reason for using a smaller ADNI dataset with 2 classes rather than multiple classes? - Q4: please plot the convergence curve on the training accuracy, validation accuracy, and test accuracy for better assessment of overfitting. --- Reply to Comment 1.1.1: Comment: > Q2: Why does BQN perform poorer than AGT on PPMI on most of the metrics? Why are the results of AGT missing on ADHD-200? R2: These two questions share the same reason: **AGT is specifically developed for multi-class classification tasks, while ADHD-200 is a binary classification task**. The superior performance of AGT on the PPMI dataset can be attributed to its ability to learn temporal relations between the multiple diagnostic groups. In the PPMI dataset, there are three diagnostic groups, and AGT can learn the temporal relationship between them. However, this ability is not applicable to the ADHD-200 dataset, which consists of only two diagnostic groups. Specifically, AGT has a special group-level temporal regularization module that learns temporal dynamics between diagnostic labels. This ability is implemented by designing a loss function $R_{\text{temp}} = \frac{1}{C-2} \sum_{c=1}^{C-2} \left( d_{c,c+1} + d_{c+1,c+2} - d_{c,c+2} \right)$, where $d_{c,c+1}$ denotes the feature distance between group $c$ and group $c+1$. By minimizing $R_{\text{temp}}$, AGT learns the temporal dynamic relationship between labels. Since Parkinson's disease has a distinct progression process, AGT outperformed the BQN model on the PPMI dataset owing to this model characteristic.
However, for the ADHD-200 dataset (which consists of 2 diagnostic categories), minimizing $R_{\text{temp}}$ implies that there is no difference between the ADHD group and the NC group, which is not reasonable. Therefore, AGT was unable to complete the comparison experiment on the ADHD-200 dataset. --- > Q3.1: What are the results of BQN and ALTER with the criteria of highest AUC? R3.1: We have conducted experiments to compare the proposed BQN and ALTER based on the criterion of the highest AUC. The results at [experiment_results](https://anonymous.4open.science/r/BQN-demo/figure/Table_2.png) indicate that **BQN consistently outperforms ALTER** on the ABIDE, ADNI, ADHD-200, and PPMI datasets. --- > Q3.2: It is claimed that this paper unifies data preprocessing and brain network construction as BioBGT[ICLR'25], ALTER and BrainNETTF. Nonetheless, the dataset preparation in these studies does not appear to be performed in a unified way. R3.2: We employed a uniform preprocessing approach and brain network construction method **for all models on each dataset, rather than across all datasets**. This alleviates the issue that different models employ different preprocessing and brain network construction methods on the same dataset, making models comparable on each dataset. Specifically: - For the ABIDE dataset, our preprocessing and brain network construction were aligned with those used by BrainNETTF, ALTER and BioBGT. - For the ADNI dataset, we maintained consistency with ALTER's preprocessing and brain network construction methods. - For the ADHD-200 dataset, our preprocessing and brain network construction were consistent with BioBGT's methods. Thanks for your question, which makes the description of the experimental setting clearer; we will add these details to the paper. --- > Q3.3: Additionally, the number of samples in ADNI used in this work (124 samples) is significantly smaller than that used in BioBGT (407 samples).
Any reason for using a smaller ADNI dataset with 2 classes rather than multiple classes? R3.3: The ADNI dataset used in BioBGT is not available on the web and the details of the data selection are not provided. The full ADNI dataset, which is obtained from the authors of BioBGT, contains 538 samples, which are divided into four classes. Since they do not provide the details of the data selection for the classification task on three classes, we employed two classes, i.e., AD and NC, for the experiment in the paper. According to your suggestion, we have conducted an experiment on the four-class ADNI dataset, maintaining consistency with BioBGT in processing and brain network construction. The results are reported at [multi-class_experiment](https://anonymous.4open.science/r/BQN-demo/figure/Table_3.png). Among the models evaluated, the proposed BQN achieved the highest accuracy and AUC, indicating its good generalization ability. --- > Q4: Plot the convergence curve on the training accuracy, validation accuracy, and test accuracy for better assessment of overfitting. R4: Following your suggestion, we have revised the convergence plot at [convergence_experiment](https://anonymous.4open.science/r/BQN-demo/figure/BQN_convergence_acc.jpg). The plot provides a comprehensive view of the training accuracy, validation accuracy, and test accuracy over the model iterations. Observations from the plot indicate that while the training accuracy exceeds the validation and test accuracies, the latter two do not exhibit a decline after reaching a certain level. This aligns with the case of the loss curves and suggests that the proposed BQN does **NOT** have severe overfitting.
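The group-level temporal regularization $R_{\text{temp}}$ quoted in R2 of the reply above can be sketched numerically. This is only an illustrative sketch: the use of Euclidean distance between group-mean features and the toy three-group data are assumptions, not AGT's exact implementation. It also makes concrete why the term is undefined for two diagnostic groups ($C-2=0$):

```python
import numpy as np

def r_temp(group_feats):
    """R_temp = 1/(C-2) * sum_c (d_{c,c+1} + d_{c+1,c+2} - d_{c,c+2})."""
    C = len(group_feats)
    if C < 3:
        raise ValueError("R_temp needs at least 3 diagnostic groups")
    d = lambda i, j: np.linalg.norm(group_feats[i] - group_feats[j])
    return sum(d(c, c + 1) + d(c + 1, c + 2) - d(c, c + 2)
               for c in range(C - 2)) / (C - 2)

# Hypothetical mean features for three diagnostic groups (e.g. PPMI)
g = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([2.0, 0.0])]
print(r_temp(g))  # 0.0 — collinear, equally spaced groups incur no penalty
```

By the triangle inequality each summand is non-negative, so minimizing $R_{\text{temp}}$ pushes consecutive groups onto a progression path, which only makes sense with three or more groups.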
Summary: The paper proposes Brain Quadratic Network (BQN), a novel approach for brain network modeling that replaces traditional message-passing mechanisms with quadratic networks and Hadamard products. It shows that BQN outperforms GNNs and Transformers on fMRI datasets, achieving higher accuracy and efficiency. Theoretical analysis reveals that BQN implicitly performs community detection, capturing brain functional modules. ## update after rebuttal Claims And Evidence: Yes, the claims are well-supported by both theoretical analysis and extensive experiments. Theoretical connections to community detection via non-negative matrix factorization (NMF) validate BQN's ability to capture brain network structures. Empirical results on graph datasets demonstrate superior performance and efficiency compared to GNNs and Transformers. Methods And Evaluation Criteria: Yes, the proposed Brain Quadratic Network (BQN) and the evaluation criteria using fMRI datasets are appropriate for the problem of brain network modeling. Theoretical Claims: Yes, I checked the correctness of the proof for Theorem 5.1, which connects the Brain Quadratic Network (BQN) to nonnegative matrix factorization (NMF) for community detection. Experimental Designs Or Analyses: Yes, I checked the soundness of the experimental designs and analyses. The experiments on the ABIDE and ADNI datasets are well-designed, and the results are valid. The comparisons with GNN and Transformer baselines are appropriate, and the performance metrics used are suitable for evaluating the classification tasks. Supplementary Material: This paper does not provide any supplementary material. Relation To Broader Scientific Literature: The paper's contributions are well-aligned with the broader literature by extending quadratic networks to brain network modeling, connecting the model to community detection via NMF, and providing a simpler, more efficient alternative to GNNs and Transformers. 
Essential References Not Discussed: No, the paper does not omit any essential related works. Other Strengths And Weaknesses: **Strengths** 1) The paper is well-written and easy to follow, effectively conveying its contributions. 2) The paper provides experimental results on benchmark datasets. **Weaknesses** 1) The parameters $b$ and $c$ in Eq. 7 are not represented or utilized in BQN (Eq. 8). Should the layers of the model use MLP? 2) The results are somewhat insufficient. Given the limited number of datasets used and the lack of consistent trends across multiple metrics, it is recommended that the authors provide additional experimental results for other metrics in Figures 2, 3, and 5. Other Comments Or Suggestions: Some punctuations are missing after certain formulas, such as Equations 6 and 8. In Equation 8, it seems that $W_A$is reused. Questions For Authors: See Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Q1. The parameters *b* and *c* in Eq. 7 are not represented or utilized in BQN (Eq. 8). Should the layers of the model use MLP? R1. Yes, the matrix $\mathbf{W}$ in Eq. 8 represents a Multi-Layer Perceptron (MLP), and the parameters $b$ and $c$ in Eq. 7 are realized in the model implementation by setting bias=True for the MLPs. > Q2. The results are somewhat insufficient. Given the limited number of datasets used and the lack of consistent trends across multiple metrics, it is recommended that the authors provide additional experimental results for other metrics in Figures 2, 3, and 5. R2. According to your suggestion, we have conducted additional experiments to supplement the results in Figures 2, 3, and 5. For Figures 2 and 3, we have incorporated precision, recall and micro-F1 metrics on experiments with the ABIDE and ADNI datasets. The results are available at the following links: [Fig2_ABIDE](https://anonymous.4open.science/r/BQN-demo/figure/Fig2_ABIDE.jpg), [Fig2_ADNI](https://anonymous.4open.science/r/BQN-demo/figure/Fig2_ADNI.jpg), [Fig3_F1](https://anonymous.4open.science/r/BQN-demo/figure/Fig3_F1.jpg). These supplementary experiments demonstrate consistent trends across the datasets. For Figure 5, we utilize the micro-F1 score metric to provide a more comprehensive evaluation of model performance as the number of layers increases. The results can be accessed at [Fig5_F1](https://anonymous.4open.science/r/BQN-demo/figure/Fig5_F1.jpg). We believe these additional results enhance the robustness of our findings and address the concerns regarding the sufficiency of the experimental evidence. > Q3. Some punctuations are missing after certain formulas, such as Equations 6 and 8. In Equation 8, it seems that $W_A$ is reused. R3. Thank you for pointing this out. We will carefully review the manuscript to ensure proper punctuation is used throughout, especially after equations.
We will revise Equation 6 to: $$ a\_{xy} = \begin{cases} r\_{xy}, & \text{if } r\_{xy} > \text{threshold}, \\\\ 0, & \text{otherwise}. \end{cases} $$ And for Equation 8, we will revise it to: $$ \mathbf{H}^l = (\mathbf{H}^{l-1} \mathbf{W}_A^l) \odot (\mathbf{H}^{l-1} \mathbf{W}_B^l) + (\mathbf{H}^{l-1} \odot \mathbf{H}^{l-1}) \mathbf{W}_C^l. $$ Since $\mathbf{W}_A$ was reused, we now use the distinct matrices $\mathbf{W}_A^l$, $\mathbf{W}_B^l$ and $\mathbf{W}_C^l$ instead. We will also check variable symbols thoroughly to correct any reuse errors.
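A minimal numpy sketch of Eq. 8 may help verify shapes and the quadratic behavior; the dimensions, random weights, and omission of bias terms here are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def bqn_layer(H, W_a, W_b, W_c):
    """One BQN layer: (H W_a) ⊙ (H W_b) + (H ⊙ H) W_c  (Eq. 8)."""
    return (H @ W_a) * (H @ W_b) + (H * H) @ W_c

rng = np.random.default_rng(0)
n_rois, d_in, d_out = 90, 16, 16          # e.g. an AAL-90 parcellation
H = rng.standard_normal((n_rois, d_in))   # ROI embeddings
W_a, W_b, W_c = (rng.standard_normal((d_in, d_out)) for _ in range(3))

H1 = bqn_layer(H, W_a, W_b, W_c)
print(H1.shape)  # (90, 16)
```

Note the layer is homogeneous of degree 2 in its input (`bqn_layer(2*H, ...) == 4*bqn_layer(H, ...)`), which is exactly the quadratic, non-linear-in-`H` behavior a single linear map `H @ W` cannot reproduce.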
Summary: This paper investigates the GNN and Transformer, which follows the message passing pipeline, in brain network modeling. It observes that these two methods can’t enhance the performance compared to the vanilla classifier. Following by the analysis of the weakness of them from the brain network construction, it presents a novel and simple method based on Quadratic Network, i.e., Hadamard product, which has attractive properties, such as efficiency and latent clustering. Experiments verify the statement and the superiority of the proposed methods. Claims And Evidence: I appreciate that this paper can explore this essential question. It ignores existing methods’ modifications to GNN and Transformer and only shows the abilities of GNN and Transformer. The claims are supported by clear and convincing evidence: Both model analysis and experiments demonstrate that message passing is not necessary in brain modeling. Both theoretical analysis and experiments verify the property of the proposed BQN. Methods And Evaluation Criteria: The model analysis on GNN and transformer provides evidence to question the message passing in brain modeling. The proposed BQN makes sense by theoretical analysis on its clustering property. Its rationality is also guaranteed by the theory progress in the Quadratic Network. I suggest that the authors include them in the manuscript. Theoretical Claims: I checked the theorem and its proof. Experimental Designs Or Analyses: The design of experiments on both motivations and evaluation of proposed methods are convincing since the experimental setups are common in this field, including datasets, baselines, and criteria. Supplementary Material: Supplementary material is not provided. Relation To Broader Scientific Literature: Most SOTA methods on brain network modeling are based on GNN and Transformer, which follow the message passing mechanism. This paper questions its necessity and presents a simple, efficient, and novel method. 
I believe it may significantly impact this field. I also think it will motivate us to consider whether GNNs are necessary in many fields. Essential References Not Discussed: Sufficient. Other Strengths And Weaknesses: None Other Comments Or Suggestions: The source code is not provided. I suggest including it in the supplementary material. Efficiency is a remarkable property of the proposed BQN compared to methods based on GNN and Transformer. However, it is only verified in the experiments. I suggest emphasizing it in the introduction and abstract. The experimental evidence should be provided to justify its clustering property. Questions For Authors: In addition to the points in Other Comments or Suggestions, I have another concern. There are some methods based on prototypes, such as [Kan et al., 2022b], the essence of which is clustering. Why does BQN outperform them? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > Q1. The model analysis on GNN and transformer provides evidence to question the message passing in brain modeling. The proposed BQN makes sense by theoretical analysis on its clustering property. Its rationality is also guaranteed by the theory progress in the Quadratic Network. I suggest that the authors include them in the manuscript and emphasizing model efficiency in the introduction and abstract. R1. Thanks for your suggestion. We will incorporate the mentioned contents of the Quadratic Network into the manuscript, referencing [1], [2], [3], and [4]. Additionally, we will emphasize the model efficiency in both the introduction and abstract to better highlight the contributions of this paper. [1]Universal approximation with quadratic deep networks. Neural Networks, 2020 [2]An attention free transformer. arXiv preprint, 2021 [3]Attention-embedded quadratic network (qttention) for effective and interpretable bearing fault diagnosis. IEEE Transactions on Instrumentation and Measurement, 2023 [4]Towards efficient and interpretative rolling bearing fault diagnosis via quadratic neural network With Bi-LSTM. IEEE Internet of Things Journal, 2024. > Q2. The source code is not provided. I suggest including it in the supplementary material. R2. According to your suggestion, the source code has been made available at [https://anonymous.4open.science/r/BQN-demo](https://anonymous.4open.science/r/BQN-demo) for verification purposes. > Q3. The experimental evidence should be provided to justify its clustering property. R3. In response to your suggestion, we have analyzed the clustering properties of the proposed BQN model. The brain is segmented into functional regions using established criteria from prior research [5][6]. The clustering performance is evaluated using three standard metrics: Silhouette Coefficient (SC), Calinski-Harabasz Index (CH), and Davies-Bouldin Index (DB). 
Results are obtained for both the original data and the outputs of three well-trained BQN models, using 1 layer, 2 layers and 3 layers respectively. | | init | layer_1 | layer_2 | layer_3 | | --- | --- | --- | --- | --- | | SC↑ | 0.004 | 0.179 | 0.250 | 0.327 | | CH↑ | 5.746 | 12.3536 | 18.877 | 22.528 | | DB↓ | 4.974 | 3.434 | 2.674 | 1.795 | The results reveal that the proposed BQN effectively captures the clustering properties of functional brain regions. [5]Distinct brain networks for adaptive and stable task control in humans. Proceedings of the National Academy of Sciences, 2007 [6]Prediction of individual brain maturity using fMRI. Science, 2010 > Q4. There are some methods based on prototypes, such as [Kan et al., 2022b], the essence of which is clustering. Why does BQN outperform them? R4. The performance superiority of the proposed BQN is primarily due to its quadratic network's approximation capabilities and robust model architecture. Firstly, BrainNETTF [Kan et al., 2022b] uses the Pearson matrix as the feature matrix, which already captures holistic brain region interactions. This limits the capacity of the Transformer's fully connected graph messaging mechanism. In contrast, BQN learns clustering properties and employs a quadratic network with more general function approximation capabilities, resulting in better ROI and graph representations. Secondly, BrainNETTF is a Transformer-based model with more parameters, increasing the risk of overfitting on relatively small brain datasets. BQN, with fewer parameters, reduces model complexity and inference uncertainty.
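The three clustering metrics reported in the table above (SC, CH, DB) can be computed with scikit-learn; the synthetic ROI embeddings and three-region labels below are illustrative assumptions, not the paper's data or functional-region assignment from [5][6]:

```python
import numpy as np
from sklearn.metrics import (silhouette_score,
                             calinski_harabasz_score,
                             davies_bouldin_score)

rng = np.random.default_rng(0)
# Synthetic embeddings: 90 ROIs assigned to 3 "functional regions"
labels = np.repeat([0, 1, 2], 30)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = centers[labels] + rng.standard_normal((90, 2))

sc = silhouette_score(X, labels)          # higher is better
ch = calinski_harabasz_score(X, labels)   # higher is better
db = davies_bouldin_score(X, labels)      # lower is better
print(f"SC={sc:.3f}  CH={ch:.1f}  DB={db:.3f}")
```

Applying these three functions to the fixed functional-region labels at each layer's output is one way to reproduce the trend in the table: as embeddings of ROIs within a region tighten, SC and CH rise while DB falls.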
AnalogGenie-Lite: Enhancing Scalability and Precision in Circuit Topology Discovery through Lightweight Graph Modeling
Accept (poster)
Summary: This work introduces a decoder-only transformer for analog topology generation by solving three critical challenges. At the graph level, it simplifies the current approach by removing redundant nodes and edges. At the sub-graph level, it employs subgraph mining to identify commonly reused subgraphs in the database. Lastly, it models a circuit graph as a shortest closed path by solving the Chinese Postman problem. Claims And Evidence: The three techniques in the paper are well supported by the experimental results. Methods And Evaluation Criteria: This paper introduces thorough evaluation metrics during the experiments. It would be better if there were a discussion of the time cost of each method. The two techniques introduced in this work, subgraph mining and solving the Chinese Postman problem, seem to introduce considerable time. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: From the formulation of the task, it is possible to leverage a pre-trained language model, e.g., older models such as T5 or newer models such as Llama and GPT-4o, as a prior and conduct fine-tuning on it. It would be better if there were a discussion about them and a comparison with the current train-from-scratch approach. Supplementary Material: No. I don't have a need to look at the supplementary. The main body contains all information I needed during the review. Relation To Broader Scientific Literature: It proposes a compact way to represent circuit topology. Although it is only trained on the unconditional generation task, it can be used for conditional generation (e.g., text as condition) in the future. Essential References Not Discussed: No. Other Strengths And Weaknesses: No additional comments. Other Comments Or Suggestions: No additional comments. Questions For Authors: 1) the generation task in this work seems to mean unconditional generation, where we cannot control what will be generated. Is that right?
Can it be used for text-based generation, where text describes the design requirements? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate the reviewer's valuable comments. > **Q1:** It would be better if there is a discussion for time cost of each method. The two techniques introduced in this work, subgraph mining and solving Chinese Postman problem, seems to introduce much time. We would like to evaluate the time cost of our subgraph mining algorithm and Algorithm 1 for solving the Chinese postman problem empirically. Our experiment is conducted on a PC with a 16‐core i9-12900KF CPU and 32 GB RAM. We run each algorithm 5 times on the analog circuit dataset. The mean runtime for the subgraph mining algorithm is 1473.74 seconds, and Algorithm 1 is 783.30 seconds. **This one-time preprocessing cost can be amortized by the substantial memory and time savings during model pre-training**. > **Q2:** From the formulation of the task, it is possible to leverage pre-trained language model, e.g., old models such as T5 or new models such as Llama and GPT-4o, as a prior and conduct fine-tuning on it. It would be better if there are discussion about them and a comparison with current train-from-scratch approach. We appreciate the reviewer’s suggestion. In our study, we have already compared our train-from-scratch approach with methods that leverage pre-trained language models. Specifically, our experiments included: - AnalogCoder [1]: a **GPT-4o-based model** using domain-specific prompt engineering. - LaMAGIC [2]: a model fine-tuned on a **FLAN-T5-base** backbone. - Artisan [3]: a model fine-tuned on a **Llama2-7b** backbone. As detailed in Table 1 of our paper, our approach outperforms these baselines across several key metrics, including validity, scalability, versatility, novelty, and FoM. > **Q3:** the generation task in this work seems to mean unconditional generation, where we cannot control what will be generated. Is that right? Can it be used for text-based generation, where text describes the design requirements. 
**Our approach is not limited to unconditional generation**. We incorporate mechanisms to **steer the generation process toward a specific type of analog circuit topologies optimized for key performance metrics** (e.g., Figure of Merit) by leveraging **reinforcement learning with human feedback (RLHF)** to fine-tune our pre-trained model. The process begins with training a reward model that evaluates generated topologies based on validity, circuit type, and performance using a dataset labeled by humans. Once this reward model is trained, we apply proximal policy optimization (PPO) [4] to iteratively refine the pre-trained model. In each training epoch, the model generates a batch of candidate topologies that are scored by the reward model, and the model parameters are adjusted to maximize the expected accumulated reward scores, effectively steering the generation toward high-performance designs. **Although AnalogGenie-Lite currently does not support text-based input, its underlying next-token-prediction mechanism makes it naturally compatible with text generation**, and by augmenting the netlist dataset with text-based descriptions of design requirements, we can extend our approach to **include text-based control in future iterations (as acknowledged by the reviewer in Relation To Broader Scientific Literature section)**. [1] Lai, Yao, et al. "Analogcoder: Analog circuit design via training-free code generation." *arXiv preprint arXiv:2405.14918* (2024). [2] Chang, Chen-Chia, et al. "Lamagic: Language-model-based topology generation for analog integrated circuits." *arXiv preprint arXiv:2407.18269* (2024). [3] Chen, Zihao, et al. "Artisan: Automated operational amplifier design via domain-specific large language model." *Proceedings of the 61st ACM/IEEE Design Automation Conference*. 2024. [4] Ouyang, Long, et al. "Training language models to follow instructions with human feedback." *Advances in neural information processing systems* 35 (2022): 27730-27744.
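The two-step process described in the rebuttal (a reward model scoring generated designs, then policy optimization toward higher scores) can be illustrated in miniature. The sketch below is a hypothetical toy: the candidate set, rewards, and learning rate are invented, and a simple REINFORCE loop with a baseline stands in for the authors' PPO pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy: 5 candidate "circuit designs", each with a fixed
# scalar reward standing in for a trained reward model's score.
rewards = np.array([0.1, 0.2, 1.0, 0.3, 0.1])  # design 2 scores highest
logits = np.zeros(5)                            # policy parameters

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.5
for _ in range(200):
    probs = softmax(logits)
    a = rng.choice(5, p=probs)      # "generate" a candidate design
    r = rewards[a]                  # score it with the reward model
    baseline = probs @ rewards      # variance-reduction baseline (only possible in this toy)
    # REINFORCE gradient ascent on the expected reward
    logits += lr * (r - baseline) * (np.eye(5)[a] - probs)

print(softmax(logits).round(2))  # probability mass concentrates on the best design
```

Designs with below-average reward are pushed down whenever sampled, so the policy drifts toward the highest-reward candidate, mirroring (in caricature) how RLHF steers generation toward high-FoM topologies.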
Summary: AnalogGenie-Lite presents a decoder-only framework designed to discover novel analog circuit topologies by leveraging lightweight graph modeling. Its key contributions lie in three innovations: - A precise and efficient graph modeling approach that prunes redundant nodes and edges. - Frequent subgraph mining that replaces commonly reused subcircuits with compact representations. - Optimal sequence modeling via solving the Chinese Postman Problem to derive near-optimal Eulerian circuits. Experimental results on a dataset of over 3350 real-world topologies across 11 analog circuit types demonstrate marked improvements. Claims And Evidence: **Main Claims**: - AnalogGenie-Lite achieves a dramatic reduction in sequence length (up to 71.11× over traditional adjacency matrix representations) and substantial improvements in validity. - The method is broadly applicable, as evidenced by case studies extending to domains like protein graphs and social networks. **Evidence Provided**: - Quantitative comparisons (e.g., Table 1 and Figure 5) show improved compression ratios, higher validity percentages, and enhanced performance metrics (FoM) over baselines. - Algorithmic pseudocode supports the claims regarding efficiency and optimality in sequence modeling. Methods And Evaluation Criteria: **Methods**: - Lightweight Graph Modeling and Optimal Sequence Modeling **Evaluation Criteria**: - Validity: Percentage of generated circuits that are SPICE-simulatable without errors. - Scalability: Maximum circuit size (in terms of devices) that can be generated. - Versatility & Novelty: Diversity of analog circuit types generated and the percentage of designs that differ from those in the training dataset. - Performance (FoM): A figure-of-merit combining key circuit performance metrics such as gain, bandwidth, and power. Theoretical Claims: N/A Experimental Designs Or Analyses: **Design**: - 3350 unique analog circuit topologies spanning 11 types.
- The experimental design is robust, with ablation studies to isolate the contribution of each innovation (pruning, subgraph mining, and optimal sequencing). Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths**: - Innovation: The paper introduces a highly innovative and technically rigorous approach to analog circuit topology generation. - Efficiency: The significant reduction in sequence length and improved compression ratios are compelling, addressing a critical bottleneck in training large language models on graph representations. **Weaknesses**: - Ablation: Additional discussion on the sensitivity of performance to hyperparameter choices in the pruning and subgraph mining processes could be beneficial. Other Comments Or Suggestions: More detailed analysis regarding the integration of reinforcement learning with human feedback—specifically, how this impacts circuit FoM—could further enhance the paper. Questions For Authors: 1. How sensitive is the overall performance to the threshold used for pruning isolated nodes in subgraph modeling? 2. Could you elaborate on how reinforcement learning with human feedback is integrated into the generation process, in detail? 3. How does AnalogGenie-Lite handle circuits with highly irregular or non-repetitive topologies that may not benefit from subgraph simplification? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the reviewer's valuable and constructive feedback. > **W1 & Q1:** Ablation: Additional discussion on the sensitivity of performance to hyperparameter choices in the pruning and subgraph mining processes could be beneficial. How sensitive is the overall performance to the threshold used for pruning isolated nodes in subgraph modeling? We conducted an ablation study to assess how AnalogGenie-Lite's performance responds to two key hyperparameters in our subgraph pruning process: (1) the frequency threshold of subgraph occurrence in the dataset and (2) the threshold for the number of isolated nodes in a subgraph. Specifically, we experimented with frequency thresholds of 5%, 25%, and 50% along with isolated node thresholds of 4 and 8. A subgraph is pruned if its occurrence frequency exceeds the chosen threshold and if it contains more isolated nodes than the specified limit. After compressing the dataset, we followed our standard pretraining and finetuning procedure for AnalogGenie-Lite. The results are summarized in the table below: | | Validity (%) $\uparrow$ | Scalability $\uparrow$ | MMD $\downarrow$ | Op-Amp FoM $\uparrow$ | Power converter FoM $\uparrow$ | | --------------------- | ----------------------- | ---------------------- | ---------------- | --------------------- | ------------------------------ | | 5% freq & 4 isolated | 96.9 | 341 | 0.0419 | 15015.9 | 3.99 | | 5% freq & 8 isolated | 97.1 | 330 | 0.0409 | 15016.4 | 4.00 | | 25% freq & 4 isolated | 97.3 | 324 | 0.0408 | 15017.7 | 4.02 | | 25% freq & 8 isolated | 96.5 | 322 | 0.0409 | 15012.5 | 4.01 | | 50% freq & 4 isolated | 95.3 | 306 | 0.0420 | 14998.3 | 3.99 | | 50% freq & 8 isolated | 94.2 | 293 | 0.0433 | 14982.8 | 3.99 | Overall, **most performance metrics are relatively insensitive to these hyperparameters**. 
The one exception is scalability, which is notably affected by changes in the thresholds due to their direct impact on the number of subgraph pruning candidates. In particular, in the 5% frequency results, aggressively lowering the thresholds to achieve higher compression introduces a large number of infrequently used special tokens into the tokenizer, which adversely affects training and ultimately hurts performance on metrics other than scalability. > **C1 & Q2:** More detailed analysis regarding the integration of reinforcement learning with human feedback—specifically, how this impacts circuit FoM—could further enhance the paper. Could you elaborate on how reinforcement learning with human feedback is integrated into the generation process, in detail? The pretrained-only AnalogGenie-Lite model can initially produce a wide variety of analog circuit topologies without inherent bias toward specific performance metrics. To optimize it for generating high-performance circuits, we integrate RLHF in a two-step fine-tuning process. First, **a reward model—trained on human-labeled examples—is used to score generated circuit topologies based on type and performance**. Then, we **fine-tune the pretrained model with PPO: the model generates new designs, the reward model assigns scores, and PPO updates the model to maximize the expected accumulated reward**. This process directs the pretrained model toward optimal circuits, boosting the FoM for Op-Amps from 291.3 to 15017.7 and for power converters from 2.6 to 4.02. > **Q3:** How does AnalogGenie-Lite handle circuits with highly irregular or non-repetitive topologies that may not benefit from subgraph simplification? Our approach is robust even for highly irregular circuit topologies that do not benefit from subgraph simplification. Specifically, we **exploit a common feature in all analog circuits: the ground node, which connects to multiple device pins within the circuit**.
By consolidating these multi-pin shared edge connections through graph-level simplification, AnalogGenie-Lite can maintain compression rate without subgraph simplification. As shown in Figure 5, **our graph-level method delivers a mean compression rate of 40.28$\times$ compared to adjacency matrix representation without the subgraph simplification**. Furthermore, because **analog circuit graphs are inherently sparse**, our optimal sequence modeling strategy can **maintain an additional 1.43$\times$ improvement in compression performance**. These two approaches ensure that our method remains effective even when subgraph simplification is less applicable.
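The optimal sequence idea underlying this rebuttal — a shortest closed walk that visits every edge at least once (the Chinese Postman route) — can be sketched with networkx on a toy graph. The graph and node names below are invented for illustration, and `nx.eulerize` (which duplicates a minimum set of edges so an Eulerian circuit exists) stands in for the paper's Algorithm 1.

```python
import networkx as nx

# Hypothetical toy device-pin graph (node names invented for illustration).
G = nx.Graph([("vdd", "a"), ("a", "b"), ("b", "gnd"),
              ("a", "gnd"), ("b", "vdd"), ("a", "c")])

# If some nodes have odd degree, duplicate a minimum set of edges so an
# Eulerian circuit exists, then walk it: a closed tour covering every edge.
H = G if nx.is_eulerian(G) else nx.eulerize(G)
walk = [u for u, v in nx.eulerian_circuit(H)]
walk.append(walk[0])  # close the tour

print(walk)  # one token per traversed edge: length linear in the edge count
```

The resulting node sequence encodes the full connectivity of the graph while its length scales with the number of edges, which is why this encoding is attractive for sparse graphs such as analog circuits.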
Summary: The paper proposes AnalogGenie-Lite, a generative model for discovering analog circuit topologies using lightweight graph modeling. The main contributions of this work are converting device-pin representations from graph to sequence and employing LLMs to design the compressed sequence, which implicitly contains topological information. In experiments, this work validates its effectiveness on circuit discovery and conducts case studies on protein and social network analysis. Claims And Evidence: Yes. The claims are generally convincing. Methods And Evaluation Criteria: Yes. The benchmarks look reasonable. Theoretical Claims: The authors do not prove the claims theoretically. Experimental Designs Or Analyses: Why is the baseline, AnalogGenie, the version without fine-tuning? This work also does fine-tuning by reinforcement learning with human feedback. It seems that there is an unfair comparison. Supplementary Material: This supplement looks fine. Relation To Broader Scientific Literature: This work employs a new way to represent the graph and uses LLMs to model it. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: 1. The discussed problem is interesting and important. 2. The results are generally promising and good. Weaknesses: 1. While the authors compress the graph into a sequence, which reduces the complexity of the problem to linear, it is also important to analyze the computational complexity of the 'Chinese postman problem'. 2. Could the authors also discuss other algorithms converting graphs to sequences? I am just curious why the authors choose this method to compress the graph. 3. Graphs can also be compressed by other algorithms, the simplest being to just record the edges and their corresponding nodes. However, the authors only compare it with the vanilla adjacency matrix, which sounds like an overclaim of their scalability (compression rate). 4. Why do the authors only conduct case studies for protein and ego graphs?
While discovering circuit topologies is an interesting problem, I think the contribution will be greatly improved if the proposed methods can be applied to more fields. Other Comments Or Suggestions: None. Questions For Authors: Please answer my questions on the experimental setting and the weaknesses. I am happy to move to a positive score if the answers are satisfactory. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the constructive comments. We address your concerns below. > **Q1:** Why is the baseline, AnalogGenie, the version without fine-tuning? This work also does fine-tuning by reinforcement learning with human feedback. It seems that there is an unfair comparison. Our evaluation is designed to ensure **fairness between our work and the baseline AnalogGenie**. As discussed in Section 4.1 on the training setup, **our work is not fine-tuned for evaluations of validity, scalability, versatility, and novelty. We apply RLHF only when evaluating its performance on a specific circuit type**, thereby optimizing it for high-performance circuits. This approach allows us to assess our work both as a versatile foundation model and as a specialized agent. For our baseline, AnalogGenie is evaluated under the same conditions—it is **pre-trained solely for assessing validity, scalability, versatility, and novelty. For performance evaluation, AnalogGenie is also fine-tuned with RLHF**. > **Q2:** While the authors compress the graph into a sequence, which reduces the complexity of the problem to linear, it is also important to analyze the computational complexity of the 'Chinese postman problem'. We would like to empirically evaluate the computational cost of Algorithm 1 for solving the Chinese postman problem. Our experiments on a 16‐core i9-12900KF (32 GB RAM) show that Algorithm 1 takes, on average, 937.36 seconds for the Protein dataset, while the Analog circuit and Ego datasets took 783.30 and 296.06 seconds, respectively. **This one-time preprocessing cost is justified by the substantial memory savings during training**. > **Q3:** Could the authors also discuss other algorithms converting graphs to sequences? I am just curious why the authors choose this method to compress the graph.
Prior methods like the **adjacency vector** encode each node by its connections to all preceding nodes, which results in $O\left(n^2\right)$ complexity ($n$ is the number of nodes and $m$ is the number of edges). Alternatives include a **recursive binary tree [3]** that conditions on matrix rows and columns, reducing complexity to $O((n+m) \log n)$, and a **graph token sequence [4]** that first encodes node definitions followed by edges, achieving $O(n+m)$. We choose the Chinese postman sequence for two reasons. First, it has a complexity of $O\left(m\right)$, which is **efficient for representing sparse graphs like analog circuits**. Second, representing the graph as a traversal path not only **preserves critical structure information** but also **aligns well with the behavior of language models** by predicting the next token based on the current context. > **Q4:** Graphs can also be compressed by other algorithms, the simplest being to just record the edges and their corresponding nodes. However, the authors only compare it with the vanilla adjacency matrix, which sounds like an overclaim of their scalability (compression rate). We appreciate the reviewer's suggestion. While an edge list is compact, **prior work LaMAGIC has already explored this approach by representing graphs as a list of hyperedges**. In that work, the **edge-list underperformed compared to the adjacency matrix**. This indicates that, despite its compactness, the oversimplified structure of an edge list makes it harder for language models to learn and generate complex graph patterns. In contrast, our method is designed to **not only compress the graph but also to preserve structural information critical for effective generation**. Moreover, **our experiments are not limited to comparisons with the adjacency matrix**. We also compare our work with the **AnalogGenie sequence in Figure 5** and the **adjacency vector in Figure 6**.
In response to the reviewer's valuable suggestion, **we will extend our case study in Q5 to include additional graph compression algorithms**. > **Q5:** Why do the authors only conduct case studies for protein and ego graphs? While discovering circuit topologies is an interesting problem, I think the contribution will be greatly improved if the proposed methods can be applied to more fields. We have expanded our case study to include molecules [1] and 3D point cloud graphs [2]. As shown below, our method consistently outperforms alternatives in compression rate relative to the adjacency matrix. | Data | Adj. Vec | Rec. Bin [3] | Graph Tok [4] | Ours | | --------- | -------- | ------------ | ------------- | -------- | | Mol [1] | $2.12$ | $2.16$ | $3.57$ | $9.33$ | | 3D PC [2] | $2.00$ | $39.46$ | $158.41$ | $572.56$ | [1] Ramakrishnan, et al. "Quantum chemistry structures and properties of 134 kilo molecules." *Scientific data,* 2014. [2] Neumann, et al. "Graph kernels for object category prediction in task-dependent robot grasping." *MLG*. 2013. [3] Dai, et al. "Scalable deep generative modeling for sparse graphs." *ICML*, 2020. [4] Chen, et al. "Graph Generative Pre-trained Transformer." *arXiv,* 2025. --- Rebuttal Comment 1.1: Comment: Thanks for the comments from the authors. Considering most of my concerns have been addressed, I would like to increase my rate. --- Reply to Comment 1.1.1: Comment: Thanks for acknowledging our rebuttal and raising the score. Feel free to let us know if you have additional questions.
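To make the representation comparison in this thread concrete, the sketch below computes rough token counts for the encodings discussed above on an invented sparse graph. These are back-of-the-envelope counts that approximate, rather than reproduce, the papers' exact encodings, and the graph itself is hypothetical.

```python
import networkx as nx

# Invented sparse graph: a 50-node cycle plus a few chords, loosely
# mimicking the sparsity of an analog circuit graph.
G = nx.cycle_graph(50)
G.add_edges_from([(0, 25), (10, 35), (5, 40)])
n, m = G.number_of_nodes(), G.number_of_edges()

adj_matrix = n * n                # O(n^2): one token per matrix entry
adj_vector = n * (n - 1) // 2     # upper triangle of the matrix only
token_seq = n + 2 * m             # node definitions, then (u, v) edge pairs
euler_walk = nx.eulerize(G).number_of_edges() + 1  # closed walk over all edges, ~O(m)

for name, length in [("adjacency matrix", adj_matrix), ("adjacency vector", adj_vector),
                     ("graph token seq", token_seq), ("euler walk", euler_walk)]:
    print(f"{name:16s} length={length:5d} compression={adj_matrix / length:5.1f}x")
```

Even on this toy graph the edge-linear encodings are orders of magnitude shorter than the quadratic ones, which is the effect the compression-rate tables in this thread quantify on real datasets.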
Summary: This paper addresses the challenge of sustaining integrated circuit (IC) performance in the post-Moore era by proposing AnalogGenie-Lite, a decoder-only generative model for discovering novel analog circuit topologies. Leveraging lightweight graph modeling, the framework incorporates concise device-pin representations, frequent sub-graph mining, and optimal sequence modeling to significantly enhance both scalability and precision. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: NA Experimental Designs Or Analyses: Although I did not personally execute the code, I carefully reviewed the experimental setup and the corresponding analyses, which appear to be reasonable. Supplementary Material: There is just one appendix section which I checked Relation To Broader Scientific Literature: The current paper offers a lightweight decoder version of AnalogGenie, which was recently accepted at ICLR 2025. Essential References Not Discussed: To my knowledge the most important references are discussed Other Strengths And Weaknesses: **Strengths** 1. The paper is engaging, well-structured, and presents a computationally leaner version of the AnalogGenie Decoder. 2. While it leverages established concepts, the paper offers a meaningful contribution by combining subgraph pruning and modeling approaches to enhance scalability. **Weaknesses** 1. Compared to the original AnalogGenie work, the innovation appears incremental. Although the improved scalability is promising, the performance gains also seem relatively modest in scope. Other Comments Or Suggestions: The paper is both interesting and technically solid. While the main contribution lies in enhancing scalability and thus appears incremental, these improvements meaningfully advance existing approaches. Overall, the strengths outweigh the limitations, and I recommend acceptance. Questions For Authors: Please see weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > **Q1:** Compared to the original AnalogGenie work, the innovation appears incremental. Although the improved scalability is promising, the performance gains also seem relatively modest in scope. We appreciate the reviewer’s insightful comments and would like to elaborate on our innovations in this paper. **Our approach is novel on three levels—graph, subgraph, and sequence—and each contributes significantly to improving circuit generation quality in terms of validity, scalability, and performance**. At the graph level, we introduce **a new graph representation that is considerably simpler than the current SOTA device-pin graph representation [1]**. Specifically, for multi-pin shared edge connections, our method achieves a space complexity of $O\left(n\right)$, compared to the $O\left(n^2\right)$ required by existing approaches. This advancement results in a 6.6% improvement in generation validity, a scalability boost of 2.9$\times$, and a performance (FoM) increase of 982.5 compared to the current SOTA [1]. At the subgraph level, **we are the first, to our knowledge, to apply frequent subgraph mining techniques to an analog circuit database to identify frequently reused subcircuits and replace them with compact representations.** This novel contribution leads to an additional 7.2% improvement in generation validity, enhances scalability by 1.24$\times$, and increases performance (FoM) by 40.2 on top of the improvements achieved at the graph level. At the sequence level, we model a circuit graph as an **optimal sequence** by formulating it as the shortest closed path that visits every edge of an undirected graph at least once, effectively **solving the *Chinese Postman Problem* [2].** This approach significantly reduces sequence length and further improves generation validity by 9.7%, scalability by 1.43$\times$, and performance (FoM) by 250.2 relative to the subgraph-level method. 
Finally, we clarify that **our method's performance improvements are significant by comparing it to the historical development of electronic design automation algorithms for analog circuits' topology discovery**. For instance, while op-amp circuits have been extensively studied by existing EDA algorithms [3-6], **early works such as Artisan [3] achieved an 1847.7 FoM gain** using LLM-based op-amp synthesis over traditional reinforcement learning methods [4]. On top of that, the current SOTA [1] further secures an additional **975.3 FoM gain relative to Artisan**. By integrating our three innovative techniques mentioned earlier, AnalogGenie-Lite attains an overall improvement of 23.5% in generation validity, a scalability enhancement of 5.14$\times$, and a **FoM gain of 1272.9 compared to the current SOTA [1]**. These results underscore the substantial advancement our approach offers in tackling the challenging domain of analog circuit design, **pushing the circuit performance's limit in the post–Moore's law era**. [1] Gao, Jian, et al. "AnalogGenie: A Generative Engine for Automatic Discovery of Analog Circuit Topologies." *arXiv preprint arXiv:2503.00205* (2025). [2] Edmonds, Jack, and Ellis L. Johnson. "Matching, Euler tours and the Chinese postman." *Mathematical programming* 5 (1973): 88-124. [3] Chen, Zihao, et al. "Artisan: Automated operational amplifier design via domain-specific large language model." *Proceedings of the 61st ACM/IEEE Design Automation Conference*. 2024. [4] Chen, Zihao, et al. "Total: Topology optimization of operational amplifier via reinforcement learning." *2023 24th International Symposium on Quality Electronic Design (ISQED)*. IEEE, 2023. [5] Lu, Jialin, et al. "Topology optimization of operational amplifier in continuous space via graph embedding." *2022 Design, Automation & Test in Europe Conference & Exhibition (DATE)*. IEEE, 2022. [6] Zhao, Zhenxin, and Lihong Zhang. 
"An automated topology synthesis framework for analog integrated circuits." *IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems* 39.12 (2020): 4325-4337.
DeFoG: Discrete Flow Matching for Graph Generation
Accept (oral)
Summary: This paper introduces **DeFoG**, a novel **graph generative framework** that decouples the **training and sampling** processes to improve efficiency and flexibility. The key innovation is the **discrete flow-matching (DFM) formulation**, which ensures **node permutation equivariance** and allows more expressive sampling methods tailored for graph structures. The paper provides **theoretical guarantees** linking training loss optimization to improved sampling dynamics. Extensive experiments across synthetic, molecular, and digital pathology datasets show that DeFoG achieves **state-of-the-art performance**, significantly reducing the required number of refinement steps compared to graph diffusion models. The main contributions of the paper are: - **A novel training-sampling disentanglement framework** for graph generative models. - **Theoretical justification** demonstrating that DeFoG faithfully replicates ground truth graph distributions. - **Exploration of novel sampling strategies**, improving efficiency while maintaining performance. - **Empirical validation** across various datasets, demonstrating superior performance compared to existing diffusion models. ## update after rebuttal The authors have addressed my concerns, so I raise my score. Claims And Evidence: The paper makes strong claims about: 1. **Training-Sampling Disentanglement**: Theoretical grounding is provided, but additional empirical comparisons with diffusion-based methods under different training configurations would strengthen the claim. 2. **Improved Sampling Efficiency**: The results show significant reductions in sampling steps (5–10% of diffusion models), but additional ablations on time-adaptive methods would clarify the practical efficiency gains. 3. **State-of-the-Art Performance**: DeFoG outperforms most existing diffusion models across various datasets. However, detailed breakdowns on dataset-specific improvements would be helpful. 
Overall, the claims are well-supported, though further empirical comparisons and ablations could enhance the robustness of the results. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the task: - **Synthetic, molecular, and pathology datasets** are well-chosen, covering diverse graph generation scenarios. - **Comparison with strong baselines**, including diffusion-based models, ensures fairness. - **Metrics such as validity, uniqueness, and novelty** are appropriate for generative models. However, additional analysis on the impact of different hyperparameters and computational trade-offs (e.g., memory and runtime comparisons) would be beneficial. Theoretical Claims: The paper provides **mathematical derivations** linking the discrete flow-matching formulation to graph generation performance. The proof structure appears sound, but verifying its correctness in practical implementations (e.g., sensitivity to noise schedules) would further validate its robustness. Experimental Designs Or Analyses: The experimental setup is well-structured, but there are some areas for improvement: - **Scalability Analysis**: Given the paper's emphasis on scalability, it would be useful to include an explicit study on how DeFoG scales with increasing graph size. - **Ablation Studies**: The ablation studies justify the importance of each component, but additional experiments on varying training configurations would be informative. - **Generalization to Out-of-Distribution Data**: Evaluating how DeFoG performs on unseen graph structures would enhance its applicability. Supplementary Material: The supplementary material appears comprehensive, but reviewing specific sections (e.g., additional proofs, hyperparameter details) would clarify implementation reproducibility. Relation To Broader Scientific Literature: The paper is well-grounded in the existing literature on **graph generative models**, particularly diffusion-based methods. 
However, connections to recent advances in **graph normalizing flows and autoregressive models** could be further discussed. Essential References Not Discussed: Some essential references about graph generative models are not discussed, such as the following references: [1] Han X, Chen X, Ruiz F J R, et al. Fitting autoregressive graph generative models through maximum likelihood estimation[J]. Journal of Machine Learning Research, 2023, 24(97): 1-30. [2] Xu M, Liu M, Jin W, et al. Graph and geometry generative modeling for drug discovery[C]//Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 2023: 5833-5834. Other Strengths And Weaknesses: ### Strengths: - **Novel formulation** that decouples training and sampling, offering greater flexibility. - **Theoretical justification** enhances confidence in the approach. - **Empirical validation across multiple domains**, demonstrating SOTA performance. ### Weaknesses: - **Scalability analysis is missing**, despite claims of efficiency. - **Limited analysis on generalization** beyond the tested datasets. - **Additional hyperparameter studies** would further validate robustness. Other Comments Or Suggestions: 1. **Clarify the computational efficiency trade-offs**—while fewer sampling steps are needed, does DeFoG require significantly more compute per step? 2. **Include additional experiments on scalability**—how does DeFoG perform on very large graphs compared to diffusion models? 3. **Compare with other flow-based and autoregressive models**—this would help position DeFoG within the broader space of graph generation. Questions For Authors: 1. **How does DeFoG handle highly imbalanced graph structures?** Some datasets may have extreme degree distributions—does this affect the sampling process? 2. **What are the computational trade-offs of training-sampling decoupling?** While DeFoG reduces sampling steps, does it increase per-step complexity? 3. 
**How sensitive is DeFoG to hyperparameter choices?** Particularly in terms of the interpolation and rate matrices. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for positively assessing our framework’s novelty, theoretical grounding, and empirical results. Below, we address the raised concerns: **I - Methods And Evaluation Criteria**: 1) *Hyperparameters sensitivity*: In the paper, we perform extensive hyperparameter sensitivity analyses covering $R^\omega$, $R^\text{DB}$, target guidance, stochasticity level, initial distributions, and training-sampling distortion (Sec. 5.3, App. B.1–B.4, C.1–C.2, Figs. 9, 10, 13, 14), as acknowledged by Reviewers 8xvx and 2q2N. These analyses sufficiently cover the key factors affecting generation and we provide supporting intuitions (Sec. 3.2, 5.3). 2) *Computational trade-offs*: We report training/sampling runtimes for DeFoG across datasets (App. F.3) and compare them with SOTA methods in Sec. 5.2. Note that step count scales linearly with runtime. Regarding memory usage, we propose to clarify in App. F.3: ``` "DeFoG’s memory usage matches existing diffusion models, with quadratic complexity in node number due to complete-graph modeling. Rate matrix overhead is negligible, and RRWP features are more efficient to compute than previous alternatives [3,4]." ``` **II - Theoretical Claims**: 1) Whether "noise schedules" refers to the choice of initial distribution or distortion functions, our theoretical results explicitly link training loss to sampling accuracy. Strong empirical results validate the practical relevance of these results, and the observed performance deterioration with fewer sampling steps confirms the theoretical predictions (Tables 1,2; Figs. 7,8,11,12). **III - Experimental Designs Or Analyses**: 1) *Scalability*: We believe our efficiency claims are well-supported: DeFoG reduces sampling steps to 5–10% of the original total (Tables 1,2) and uses more efficient additional features (Table 12). Moreover, DeFoG's memory usage matches existing diffusion models (see I.2), enabling similar graph scales as competing methods. 
Scaling these methods to larger graphs is an interesting and challenging problem by itself (e.g., [5] leverages sparsity to scale existing graph diffusion models, an approach also applicable to DeFoG), thus beyond this work's scope. 2) *Ablations*: See I.1. 3) *OOD Generalization*: Unlike conventional OOD tasks, graph generation typically aims to maintain distributional similarity [3,4] rather than handle distributional shifts. Thus, while an interesting direction, it is beyond our scope. For standard generalization, see VII-2. **IV - Supplementary Material**: 1) We ensure reproducibility via the provided code and detailed hyperparameters (App. B.4, F.4). **V - Broader Literature**: 1) *Graph flows/autoregressive models:* We extensively discuss key autoregressive models (GraphRNN, GRAN, BiGG, GraphGen) (Sec.4, App.A.1, Table 1). Regarding normalizing flows, we acknowledge their relevance (Sec. 4) but are unaware of recent graph-specific advances directly relevant to our work; any suggestions are welcome. **VI - Essential References**: 1) We included [1] in our related work, where we already discuss similar (autoregressive) models. 2) [2] is only a tutorial description on generative models for molecular graphs. Arguably, it is not an “essential reference'' on graph generative models. In case this was not a mistake, we kindly invite the reviewer to motivate this request. **VII - Other Strengths And Weaknesses**: 1) *Scalability*: See III.1. 2) *Generalization*: DeFoG generalizes across diverse domains (synthetic, molecular, digital pathology), going beyond typical benchmarks in graph generation studies [4,5]. The additional ZINC250k results requested by Reviewer 2q2N further support this, making our generalization analysis comprehensive. 3) *Hyperparameter studies*: See I.1 **VIII - Other Comments or Suggestions**: 1) *Efficiency*: DeFoG incurs no additional overhead per sampling step (Alg. 
2), and RRWP features are computed more efficiently than in prior work [3,4] (App. G.4). 2) *Scalability*: See III.1. 3) *Flow/autoregressive comparison*: See V. **IX - Questions For Authors**: 1) *Imbalanced graph structures*: DeFoG implicitly captures distributional heterogeneity, confirmed by superior performance on very diverse structural graph statistics (degree, clustering, orbits, spectral, wavelet distributions; e.g., Table 7) and datasets. 2) *Computational decoupling trade-offs*: Per-step sampling efficiency matches/exceeds existing models (See VIII.1). Additionally, training-sampling decoupling enables optional independent tuning, potentially further improving performance at minimal extra cost via an optimized hyperparameter selection pipeline (App.B.4). 3) *Hyperparameter sensitivity*: See I.1. [3] - Digress: Discrete denoising diffusion for graph generation, Vignac et al., ICLR 2023 [4] - Discrete-state Continuous-time Diffusion for Graph Generation, Xu et al., NeurIPS 2024 [5] - Sparse Training of Discrete Diffusion Models for Graph Generation, Qin et al., ArXiv 2023 --- Rebuttal Comment 1.1: Comment: The authors have adequately addressed my concerns, and I am willing to raise my score.
Summary: This paper proposes a novel graph generative model via discrete flow matching. This framework provides flexible and efficient training and sampling methods. The paper also provides a theoretical guarantee for this disentanglement framework. With rich empirical validation, the proposed DeFoG shows powerful modeling ability and robust generation quality.

Claims And Evidence: The paper is easy to follow, and extensive experiments demonstrate its effectiveness on graph modeling.

Methods And Evaluation Criteria: The proposed method follows the widely used evaluation process.

Theoretical Claims: I verified that the sampling and training algorithms are consistent with Equations 4 and 5. For the permutation invariance in graph modeling, I checked Appendix D.2.1 and the derivation is correct.

Experimental Designs Or Analyses: This paper provides rich experiments to validate its powerful graph modeling ability. The high V.U.N. performance on general graphs shows its robustness in generation quality. This paper also conducts extensive ablations on training and sampling efficiency.

Supplementary Material: Yes, the code for the sampling module.

Relation To Broader Scientific Literature: Graph generation has always been a meaningful topic in machine learning / generation tasks, and it has a broader impact on scientific discovery. This paper provides a practical implementation in graph modeling, and a theoretical foundation for it as well.

Essential References Not Discussed: This paper has discussed many related works in diffusion / flow matching based graph generative models. Recently I noticed an interesting graph modeling method, which applies beta diffusion to graph modeling. I think the authors could discuss it a little, because beta diffusion can handle both continuous and discrete elements in graphs, while your method aims to disentangle the graph modeling process with flexible training and sampling methods.
- Advancing Graph Generation through Beta Diffusion, ICLR'25

Other Strengths And Weaknesses:
## Strengths
- Thorough discussion of related literature and connection to previous results.
- Important and useful empirical / theoretical results and their discussion.
- Extensive discussion of related works on continuous-time discrete diffusion and discrete flow matching.
- Sufficient analysis of sampling optimization.

## Weaknesses
- Could you provide some results on ZINC250k? I think this benchmark is important for the molecular graph modeling task.
- Could you provide the mean and standard deviation for synthetic graph generation? I'd like to know how stable the proposed method is.

Other Comments Or Suggestions: How is the sampling distortion specific to graph modeling, as opposed to its general usage in DFM?

Questions For Authors: All of my questions are above.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful feedback. Below, we address the raised concerns in detail.

**Essential Reference**: We thank the reviewer for proposing this interesting reference. We will expand our related work section to include a discussion of methods that support both continuous and discrete data. Specifically, we propose to integrate the following in the revised manuscript:
```
Integrating continuous and categorical data within graph generative models is an important challenge, as many real-world applications involve heterogeneous data types (e.g., molecular graphs containing atomic coordinates alongside categorical atom and bond types). A recent example addressing this challenge is GBD [1], which incorporates beta diffusion to jointly model both continuous and discrete variables. Similarly, DeFoG is amenable to formulations involving mixed data types by leveraging an approach akin to MiDi [2], independently factorizing continuous and discrete variables. However, explicitly exploring this integration is beyond the scope of this work.
```
**Results on ZINC250k dataset**: To further support our empirical findings on real-world datasets, we provide results on the ZINC250k dataset. We evaluate DeFoG’s performance on this dataset after 14 hours of training under the same setting as previous works [3] and compare it, to the best of our knowledge, against the strongest-performing methods currently reported in the literature [1,3,4].
| Model | Validity ($\uparrow$) | Uniqueness ($\uparrow$) | FCD ($\downarrow$) | NSPDK ($\downarrow$) | Scaffold ($\uparrow$) |
|---|---|---|---|---|---|
| GruM | 98.65 | - | 2.257 | 0.0015 | 0.5299 |
| GBD | 97.87 | - | 2.248 | 0.0018 | 0.5042 |
| CatFlow | 99.21 | **100.00** | 13.211 | - | - |
| DeFoG (50 steps) | 96.65 ± 0.16 | 99.99 ± 0.01 | 2.123 ± 0.029 | 0.0022 ± 0.0001 | 0.4245 ± 0.0109 |
| DeFoG | **99.22** ± 0.08 | 99.99 ± 0.01 | **1.425** ± 0.022 | **0.0008** ± 0.0001 | **0.5903** ± 0.0099 |

DeFoG achieves state-of-the-art performance on this dataset. Notably, it also attains superior FCD performance using only 50 sampling steps, outperforming existing methods. We thank the reviewer for suggesting the ZINC250k benchmark. These results further demonstrate DeFoG’s generalization capabilities in molecular graph modeling, strengthening our contribution, and have thus been included in the updated manuscript.

**Standard Deviation for Synthetic Graphs**: In the Experiments section, we report VUN and the average ratio of MMDs for all the synthetic datasets, with both mean and standard deviation (std). We can observe DeFoG’s stable performance, in particular for a large number of steps. Additionally, we provide the full results for all the original graph-specific metrics, including Degree, Orbit, Wavelet, Spectral, and Cluster MMDs, with mean and std, in Table 7.

**Sampling Distortion**: The original DFM paper [6] considers evenly spaced sampling timesteps without any distortion. In contrast, we investigate how breaking this uniformity can yield more refined generative trajectories, especially at timesteps crucial for capturing specific graph properties. Our results (Sec. 5.3 – Sampling Distortion) demonstrate that, for graphs with strict structural constraints (e.g., planarity), it is beneficial to emphasize refinement at later steps, as categorical variables may abruptly transition as $t \rightarrow 1$, potentially violating the required structure.
Refining these late steps thus helps detect and correct such errors. Conversely, for datasets without strict constraints (e.g., SBM), this refinement is not necessary. Moreover, we provide insights into the interplay between training and sampling distortions, observing that aligning these distortions typically yields the best generative performance, although this is not universally true. We also propose a simple heuristic to determine the optimal sampling distortion (see App C.2.). Finally, similar beneficial effects of distorted scheduling have been reported for other data modalities, such as image generation and language modeling [6]. Overall, sampling distortion allows us to exploit DeFoG's training-sampling decoupling to further enhance generative performance by adapting the generative process to different graph characteristics, refining the corresponding crucial timesteps. [1] - Advancing Graph Generation through Beta Diffusion, Liu et al., ICLR 2025 [2] - Midi: Mixed graph and 3d denoising diffusion for molecule generation, Vignac et al., ECML-PKDD 2023 [3] - Graph Generation with Diffusion Mixture, Jo et al., ICML 2024 [4] - Variational Flow Matching for Graph Generation, Eijkelboom et al., NeurIPS 2024 [5] - Digress: Discrete denoising diffusion for graph generation, Vignac et al., ICLR 2023 [6] - Generative Flows on Discrete State-Spaces: Enabling Multimodal Flows with Applications to Protein Co-Design, Campbell et al., ICML 2024 [7] - Discrete Flow Matching, Gat et al., NeurIPS 2024 --- Rebuttal Comment 1.1: Comment: Thank you for your response, which has clarified several aspects of your work. In particular the 'Essential Reference' modeling discrete and continuous data at the same from a joint view or decomposition view is interesting. I will raise my score.
Summary: The authors adapt discrete flow matching for graph generation, replacing the usual SOTA discrete diffusion framework. The authors also utilize the flexibility of flow matching to further tune the sampling process, making it more efficient and generating higher-quality samples in far fewer steps. This is primarily achieved by tuning the influence of target guidance and sample distortion (de-noising schedule).

Claims And Evidence: The authors perform extensive benchmarks and ablation studies. They clearly support their claims of improved efficiency and sample quality.

Methods And Evaluation Criteria: The authors rely on well-established benchmarks for graph generation, with standard datasets and metrics that match best practices in the literature.

Theoretical Claims: I didn't check the proofs/derivations in the appendix in detail, but the theory in the paper heavily relies on previously established works and looks sound.

Experimental Designs Or Analyses: All the experimental design is quite standard and looks good. The experiments and ablations are extensive and cover all introduced modifications.

Supplementary Material: I read the whole appendix, but didn't check the derivations thoroughly. They do look quite standard, so I don't expect any issues.

Relation To Broader Scientific Literature: The paper tackles a very important and widely studied problem of graph generation. It builds on the SOTA approaches (discrete diffusion), replacing the diffusion framework with discrete flow matching. While the model is overall similar to previous discrete diffusion approaches, and while discrete flow matching has been used in other contexts before, the various improvements presented in the paper noticeably improve upon the existing SOTA. The potential choices are extensively experimentally evaluated, which is very valuable for the community, extending our knowledge of how best to build these graph generative models in practice.
If the paper comes with a nicely written, easy-to-understand, and easy-to-work-with codebase, I can see it becoming a go-to graph/molecule generative model that people develop upon, replacing DiGress in this role, which is currently used as a main building block in a variety of graph and molecule generation papers.

Essential References Not Discussed: References look good to me.

Other Strengths And Weaknesses: I already covered the main strengths and weaknesses above. While the novelty is not super high, as it's a combination of existing ideas that does not stray off the common path, I think it's a very valuable work in that it does a great job refining the current standard recipe and noticeably improving the results.

Other Comments Or Suggestions: None

Questions For Authors: None

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback, as well as for recognizing the importance of our algorithmic improvements, the thoroughness of our experimental validation, and the practical value of our contributions to the graph generation community. Aligned with the reviewer's perspective, we are committed to releasing a clean, well-documented, and easy-to-use codebase upon publication to facilitate adoption and further developments by the community. We remain open to any further suggestions or questions that may arise.
Summary: The authors apply discrete flow matching to graph generation. Claims And Evidence: Claimed contribution 1: > We introduce DeFoG, a novel flow-based graph generative model that effectively disentangles training and sampling for improved flexibility and efficiency; I feel this is misleading. DFM already decouples training and sampling (see [1] and [2]). In fact, the authors themselves dismiss their own contribution later in the introduction (lines 145 to 147): "While the DFM paradigm enables training-sampling disentanglement, it lacks a complete formulation and empirical validation on graph data. [...]". I will happily adjust my score if the authors adjust their claims, or can provide a convincing argument that the reasoning above is incorrect. [1] https://arxiv.org/abs/2402.04997 [2] https://arxiv.org/abs/2412.03487 [3] https://arxiv.org/abs/2412.06264 [4] https://arxiv.org/abs/2407.15595 Methods And Evaluation Criteria: Yes. Theoretical Claims: Somewhat. > Corollary 1 and 2 I don’t fully understand the need for these statements, given what is already known about sampling errors in the CTMC literature. Is having $O(\Delta t)$ error any different from the usual $O(h)$ error from using Euler’s method (see [2], [3] and [4])? To be more specific: the DeFoG loss is an evidence lower bound (ELBO) (see [2]). Experimental Designs Or Analyses: The experiments make sense. Supplementary Material: Part C. Relation To Broader Scientific Literature: This relates to the flow matching, as well as graph generation, literature. Essential References Not Discussed: None. Other Strengths And Weaknesses: See other boxes. Other Comments Or Suggestions: See other boxes. Questions For Authors: > 3.1. Learning Discrete Flows over Graphs The notation in this section is a bit odd. It seems like the node and edge sets are completely unrelated? Both nodes and edges are integers. I would expect the edge set to be something like $\mathcal{X}^2$. 
Given the definition of $p_{t|1}$ in this section, it is theoretically possible to have “floating” edges at any time t. In other words, edges that are not connected to any nodes. Is this correct? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their time and comments. We address the raised concerns below:

**Claims And Evidence**. We agree with the reviewer that the training-sampling disentanglement is a contribution of DFM. In this regard, our contribution lies in making this decoupling effective for graph generation, where it had not been previously formulated, implemented, or empirically validated (as acknowledged in lines 145–150). Crucially, notice that applying existing DFM methods to graphs does not lead to improved performance (see Fig. 2 ablations). DeFoG addresses this by providing a graph-tailored formulation that *effectively* leverages the decoupling between training and sampling, resulting in significant improvements in both sampling flexibility and efficiency. We understand that the original phrasing could be misleading, and so we propose the revised formulation:
```
We introduce DeFoG, a novel graph generative model that effectively exploits the training-sampling decoupling inherited from its flow-based formulation, significantly enhancing sampling flexibility and efficiency.
```
We hope this effectively addresses the reviewer’s concerns and remain available for further discussion.

**Theoretical claims**:
- Our Corollary 1 does not relate to the CTMC sampling error. Instead, it establishes that minimizing the CE loss directly corresponds to minimizing an upper bound on the rate matrix estimation error, directly promoting accurate sampling. This justifies the use of the CE loss without requiring ELBO-derived approximations (more on this below).
- Corollary 2 is not solely about the discretization error resulting from the CTMC sampling, which, as correctly noted by the reviewer, is $O(\Delta t)$ (same as $O(h)$). Instead, it combines this term with the estimation error that results from Corollary 1 to bound the deviation of the generated graph distribution.
Notice that this type of result is commonly sought in generative modeling to guarantee generation accuracy. For instance, Theorem 1 in [5] presents an analogous bound in the context of discrete diffusion, though with significant differences (e.g., they assume a bounded rate matrix estimation error, while we explicitly prove it). Regarding the connection between DeFoG’s loss (CE) and the ELBO, [1] derives an ELBO loss suitable to our chosen rate matrices (decomposed into three terms: weighted CE, KL divergence, and a rate matrix regularizer). However, they motivate using only the CE loss based on approximations upon the derived ELBO, dropping the rate matrix-dependent terms (see App. C.2 of [1]). In contrast, our bounds do not require such simplifications and are agnostic to rate matrix design choices (e.g., stochasticity level), reinforcing our training-sampling decoupling claim. Additionally, [2], which is a contemporaneous submission under ICML guidelines (alongside [3]), also establishes an ELBO loss; however, in our understanding, it applies to a different family of conditional rate matrices. We have included the discussion above, as well as the mentioned references, in the revised manuscript. We hope this clarifies the raised concern; if not, we remain open to further feedback.

**Questions For Authors**: We use $x^{(n)}$ for the $n$-th node ($1 \leq n \leq N$) and $e^{(ij)}$ for the edge between nodes $x^{(i)}$ and $x^{(j)}$ ($1 \leq i < j \leq N$). Nodes are gathered as $\\{x^{(n)}\\}_{n=1}^{N}$, edges as $\\{e^{(ij)}\\}_{1 \leq i < j \leq N}$. Node and edge state spaces are denoted by $\mathcal{X}$ and $\mathcal{E}$ (cardinalities $X$, $E$, respectively), meaning nodes take values in $\\{ 1,\dots,X \\}$ and edges in $\\{ 1,\dots,E \\}$. Following standard practice in the field [6,7,8], we keep the number of nodes fixed throughout trajectories and explicitly model a complete graph, connecting all node pairs by edges.
During diffusion trajectories, only node and edge classes change, not their structural existence. Crucially, one of the edge classes represents the absence of an edge ("non-existing” edge), but there is no “non-existing” class for nodes. Hence, each edge is always associated with its two vertices (there can be “floating” nodes, but no “floating edges”). We thank the reviewer for raising this important remark; we have made this more explicit in the updated manuscript. Overall, we appreciate the reviewer’s constructive feedback, and we think their questions improved our work. We remain open to any further suggestions. [5] - A Continuous Time Framework for Discrete Denoising Models, Campbell et al., NeurIPS 2022 [6] - Digress: Discrete denoising diffusion for graph generation, Vignac et al., ICLR 2023 [7] - Discrete-state Continuous-time Diffusion for Graph Generation, Xu et al., NeurIPS 2024 [8] - Cometh: a Continuous-time Discrete-state Graph Diffusion Model, Siraudin et al., ArXiv 2024
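As a side illustration of the representation discussed in this rebuttal, the fixed-size complete-graph state can be sketched in a few lines. This is our own minimal illustration of the data structure, not DeFoG's code; treating class 1 as the "non-existing" edge class is our assumption for concreteness.

```python
import numpy as np

# Minimal sketch of the fixed-size graph state: N nodes with classes in
# {1..X} and a dense, symmetric N x N matrix of edge classes in {1..E}.
# Class 1 standing for the "non-existing" edge is an assumption here;
# the diagonal is left at 0 (self-loops unused).
rng = np.random.default_rng(0)
N, X, E = 5, 4, 3

node_classes = rng.integers(1, X + 1, size=N)            # no "absent" class for nodes
upper = np.triu(rng.integers(1, E + 1, size=(N, N)), k=1)
edge_classes = upper + upper.T                           # symmetric by construction

# A trajectory only relabels classes; slot (i, j) always connects nodes
# i and j, so a "floating" edge cannot even be represented.
present = np.argwhere(np.triu(edge_classes, k=1) > 1)    # class > 1 => edge exists
print(len(present), "existing edges among", N * (N - 1) // 2, "slots")
```

Because the "no edge" state is just another class of a slot that is permanently tied to its two endpoints, the floating-edge situation raised by the reviewer is excluded by construction.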
Towards Understanding Parametric Generalized Category Discovery on Graphs
Accept (poster)
Summary: This paper first presents a theoretical analysis of parametric GCD and then designs a new graph contrastive learning method, SWIRL, using the insights from the theoretical analysis. The authors propose the first GCD loss upper bound and identify some necessary conditions on category relationships for good GCD performance. Besides, they reveal, through the lens of a pairwise Markov random field, that current GCL methods cannot satisfy these conditions. The proposed GCL method alleviates the randomness of category relations in the embedding space.

Claims And Evidence: Yes, all the claims made in the submission are supported by clear and convincing evidence.

Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense for the problem at hand.

Theoretical Claims: Yes. I have checked the proofs of the main results, i.e., Theorem 3.5 and Theorem 4.3. I didn't see any obvious mistakes.

Experimental Designs Or Analyses: I have reviewed all the experimental designs. I think these experiments are essentially complete and can support the authors' claims.

Supplementary Material: I reviewed Appendices C, D, E, F, and G.

Relation To Broader Scientific Literature: This article presents the first systematic theoretical analysis of GCD methods that employ a parametric classifier, addressing a gap in the current theoretical understanding of GCD methods. The authors' analysis of contrastive learning approaches utilizing the InfoNCE and SupCon loss functions can, in fact, be regarded as an independent contribution to the theory of contrastive learning.

Essential References Not Discussed: The references are comprehensive.

Other Strengths And Weaknesses: **Strengths**:
1. Current (parametric) GCD methodologies lack a systematic theoretical analysis, and this paper fills that void.
2. The theoretical analysis in this paper inspired the authors' proposed SWIRL method, which has demonstrated high experimental performance.
This indicates that the authors' theories can, to some extent, guide the design of GCD methods, a trait that is highly commendable in theoretical work.
3. The visualization of representations and decision boundaries (as depicted in Figures 1-3) intuitively elucidates the implications of the principal findings in Sections 3 and 4, thereby facilitating an understanding of the main theoretical results.

**Weaknesses**:
1. If I have not misunderstood, the term $D_{KL}(P_{\mathcal{C}}\|\bar{h})$ included in the upper bound, according to Proposition 3.9, should originate from a prerequisite condition: the data distribution is class-balanced. However, the authors did not declare this condition in Theorem 3.5.
2. This paper employs the Wasserstein distance (Definition 3.1) to quantify the relationship between old and new categories, which incorporates a hyperparameter $\lambda$ that balances the influence of features and labels. However, the subsequent calculations do not elucidate the determination of this value or the methodology for its selection (e.g., in Figure 1).
3. The new evaluation metrics proposed in this paper, such as HRScore, appear to actually come from existing work [1].
4. The meaning of $N$ in line 263, p5 is not given. What’s the difference between $N$ here and $n$ in Section 2.1?
5. In Sections 3 and 4, the authors examine the embedding space and the Wasserstein distance between old and new categories in this space. Calculating this Wasserstein distance requires considering the semantic distances between categories in the label space. However, if the authors consider one-hot labels, the semantic distance between any pair of categories \( u \) and \( l \) would be the same as that between \( u \) and \( l' \). This implies that the distance would effectively reduce to a Wasserstein distance based solely on the representations \( z \).
As a result, the authors' statement in line 290—"the embedding relations (i.e., distances) between categories are random and thus not easy to be aligned with the (either known or unknown) category semantic relations in \(\mathcal{Y}_C\)"—would become less meaningful. [1] W. An et al., “Transfer and alignment network for generalized category discovery,” in Proceedings of the AAAI Conference on Artificial Intelligence, 2024. Other Comments Or Suggestions: It appears that Lemma D.3 in Appendix D has not been utilized. Please verify this point and make the necessary revisions. Questions For Authors: - Q1: In line 285, p6, the authors state that “The above presents a more refined treatment of the embedding space than [1]”. Can you provide more details about this comparison? - Q2: In the propsoed SWIRL, the authors derive prototypes by clustering the average representations across two views. The rationale for not clustering the complete set of representations from both views remains unaddressed. Is there a discernible difference in performance between these two approaches? - Q3: Prototype-based contrastive learning methods have been extensively explored in the literature. The advantage of SWIRL over these existing approaches appears to be primarily rooted in its theoretical motivation. Are there any other distinctive features that significantly differentiate SWIRL from current prototype-based contrastive learning methods? I may change my score based on the authors' rebuttal regarding weaknesses and questions. [1] Z. Tan, Y. Zhang, J. Yang, and Y. Yuan, “Contrastive Learning is Spectral Clustering on Similarity Graph,” presented at the The Twelfth International Conference on Learning Representations, 2023. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the helpful comments and guidance!

### **Weaknesses**
+ **W1**: You are absolutely right! Thank you for your meticulous attention. We will revise this.
+ **W2**: In subsequent calculations involving **W**, we set $\lambda = 1$, assuming that embeddings and labels are equally important.
+ **W3**: The **H-score** in [1] is essentially the harmonic mean of **New ACC** and **Old ACC** (defined in **Appendix G.1**), both computed after remapping all classes via *linear sum assignment*. This may map true new classes to old classes and vice versa, undermining the purpose of GCD. Our proposed **HRScore** improves upon this by harmonizing **New RACC** and **Old RACC**—*without* remapping new classes to old ones—better fitting GCD’s objectives. For formulas and distinctions, see **Appendix G.1**.
+ **W4**: Here, $n$ is the number of *observed* nodes in the graph dataset, while $N$ represents the total *potential* nodes (e.g., in citation networks, $N$ includes unpublished/unobserved papers). During contrastive learning, augmentations on $n$ nodes effectively select positive samples from $N \gg n$ nodes based on human priors.
+ **W5**: You’ve identified a key point that we did not elaborate on due to space constraints. In the computation of Wasserstein distances in **Secs. 3 and 4**, class labels $\mathbf{y}^c$ are *not* one-hot. For example, in **Fig. 1**, adjacent old (yellow, $c=1$) and new (blue, $c=5$) classes are assigned:
$$ \mathbf{y}^1 = [0.95, 0, 0, 0, 0.05, 0, 0, 0, 0], \quad \mathbf{y}^5 = [0.05, 0, 0, 0, 0.95, 0, 0, 0, 0]. $$
Similar setups apply to other adjacent (old, new) class pairs. All **Wasserstein distances** in **Secs. 3–4** are computed under this scheme.
Some experiments conducted by us but not included in this manuscript show that such *non-one-hot* labels generally improve GCD performance, since such semantic-aware labels facilitate transferring the knowledge of distinguishing old classes to distinguishing new ones, aligning with our statement that "*embedding-space category relations (distances) are random and hard to align with semantic relations in $\mathcal{Y}_C$*".

---

### **Questions**
+ **Q1**: Using our notation, the differences between [2] and our work are summarized below, where $\pi$ is our *relation prior* in embedding space and $\mathbf{B}$ is the prior in the input space. Our key point lies in constructing a **PMRF** for embeddings and introducing posterior probabilities for the node relation graph $\mathbf{W}$ in the embedding space:

| | [2] | Ours |
|---|---|---|
| **Input Space** | $P(\mathbf{W}_X; \mathbf{B})$ | $P(\mathbf{W}_X; \mathbf{B})$ |
| **Embedding Space** | $P(\mathbf{W}; \mathbf{Z})$ | $P(\mathbf{W} \mid \mathbf{Z}) \propto P(\mathbf{Z} \mid \mathbf{W}) P(\mathbf{W}; \pi)$ |

+ **Q2**: Our design is motivated by:
1. **Reduced computation**: Running **SS-KM** on $n$ samples is far cheaper than on $2n$.
2. **Enhanced consistency**: Prototypes derived from *averaged node embeddings* act like centroids of the "mini-clusters" formed by paired views, improving positive-pair alignment.
Our empirical tests confirm that this strategy is better.
+ **Q3**: **SWIRL**’s distinction from other prototype contrastive methods is its *multi-level repulsion force*, whose efficacy is highlighted in **Line 2015 (p.38)** of the manuscript.

---

### **References**
[1] *Transfer and Alignment Network for Generalized Category Discovery*, AAAI 2024
[2] *Contrastive Learning is Spectral Clustering on Similarity Graph*, ICLR 2024
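To make the ingredients above concrete, here is a minimal sketch (our own illustration, not the authors' code) of the $\lambda$-weighted Wasserstein distance between an old and a new class, using the non-one-hot labels from the Fig. 1 example in W5 and `scipy.optimize.linear_sum_assignment`; for two equal-size, uniformly weighted point sets, exact optimal transport reduces to an assignment problem.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def class_wasserstein(Z_a, Z_b, y_a, y_b, lam=1.0):
    """lambda-weighted Wasserstein-1 distance between two equal-size,
    uniformly weighted classes. The ground cost mixing embedding and
    label distances is an illustrative choice (lam = 1 as in W2)."""
    n = len(Z_a)
    cost = np.linalg.norm(Z_a[:, None, :] - Z_b[None, :, :], axis=-1)
    cost = cost + lam * np.linalg.norm(y_a - y_b)   # class-level label term
    rows, cols = linear_sum_assignment(cost)        # exact OT for uniform weights
    return cost[rows, cols].sum() / n

rng = np.random.default_rng(0)
Z_old = rng.normal(0.0, 0.1, size=(32, 2))   # old class around the origin
Z_new = rng.normal(1.0, 0.1, size=(32, 2))   # adjacent new class, shifted

# Non-one-hot, semantic-aware labels as in the Fig. 1 example above.
y_old = np.array([0.95, 0, 0, 0, 0.05, 0, 0, 0, 0])
y_new = np.array([0.05, 0, 0, 0, 0.95, 0, 0, 0, 0])

d = class_wasserstein(Z_old, Z_new, y_old, y_new, lam=1.0)
```

With one-hot labels, the label term would equal $\sqrt{2}$ for every class pair, so only the embedding part would vary; the semantic-aware labels above make semantically adjacent classes cheaper to transport between, which is exactly the point of W5.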
Summary: This paper introduces Generalized Category Discovery on Graphs (GGCD), a novel task addressing open-world learning on graph-structured data where unlabeled nodes may belong to both known and novel classes. The authors defined a surrogate GCD loss to reflect the GCD performance and established a theoretical bound. To minimize this bound, they argue that it is necessary to reduce the Wasserstein distance between the distributions of known and novel classes while maintaining their independence. Subsequently, they analyze the limitations of existing baseline methods in satisfying these conditions and propose a heuristic approach, SWIRL, to address these challenges. The experiments demonstrate SWIRL’s effectiveness and support the theoretical claims.

Claims And Evidence: Overall, the claims here are well-supported by clear and convincing evidence.

Methods And Evaluation Criteria: The datasets are widely used, and the criteria make sense.

Theoretical Claims: I have reviewed the proof of Theorem 3.5, and it is substantially correct under the assumptions employed by the authors.

Experimental Designs Or Analyses: Yes, I reviewed the experimental design and analyses, and overall, they are well-structured and appropriate for validating the paper’s claims. However, one issue is the lack of a scalability analysis for SWIRL.

Supplementary Material: All except the proofs of the theorems and corollaries proposed in Section 4.

Relation To Broader Scientific Literature: The paper advances generalized category discovery (GCD) and (open-world) graph machine learning.
- It extends GCD [1] to graph data.
- It introduces a theoretical framework based on the Wasserstein distance to quantify the relationships between old and new classes, diverging from prior GCD theories based on mutual information [2] or pairwise consistency distribution [3]. With this framework, a GCD loss upper bound is achieved.
- This loss bound leads to two category relation conditions that the representation learning should meet. The authors then critique the popular graph contrastive learning (GCL) paradigm through a PMRF perspective, revealing its inability to control global category relations. This flaw is not discoverable with prior contrastive learning theory [4].
- Additionally, it contextualizes known GCN smoothing effects within GCD, demonstrating how oversmoothing disrupts category separation.

[1] Generalized Category Discovery
[2] Parametric Information Maximization for Generalized Category Discovery
[3] SelEx: Self-expertise in fine-grained generalized category discovery
[4] Contrastive Learning is Spectral Clustering on Similarity Graph

Essential References Not Discussed: No.

Other Strengths And Weaknesses:
Strengths:
- The theory assumptions are well discussed and not very strong.
- Many theoretical claims are verified with controlled experiments that exclude irrelevant factors not covered by the theory.

Weaknesses:
- This paper is mathematically dense, so fully grasping it is not easy.
- The three GCD baseline models from the computer vision community are not up to date.

Other Comments Or Suggestions:
Typos:
- Section 1, Paragraph 4: Transferrs -> Transfers
- P20: Postive-pair -> Positive-pair

Suggestions: Subsection 4.1 is hard to read for those not familiar with MRFs, and the underlying probabilistic graph model is not very clear in the context of contrastive learning. It would be helpful to provide some intuitive examples or diagrams to illustrate how MRF concepts integrate with contrastive learning, thereby enhancing the readability and clarity of this section.

Questions For Authors:
1: It is interesting to know whether it is possible to extend the theoretical analysis in this article to non-parametric GCD methods, e.g., GCD [1].
2: What’s the relationship between $\mathbf{W}_X$ in Sec. 4 and the data graph $\mathbf{A}$?
[1] Generalized Category Discovery Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We are grateful for your constructive suggestions! ### **Experimental Designs or Analyses** We will include a **complexity analysis** in the revised version. Let: - $n$: Number of samples - $K$: Number of prototypes - $I$: SS-KM iterations - $d$: Embedding dimension **SS-KM** (executed every $t$ epochs): - **Space**: $O(nd + Kd)$ - **Time**: $O(nKId)$ (amortized: $O(nKId/t)$ per epoch) **SWIRL loss** (per epoch): - Similarity computation: $O(nKd)$ - Denominator summation + division: $O(nK)$ - **Total time**: $O(nKd)$ - **Space**: $O(nK)$ (similarity matrix) **Overall**: - **Time**: $O(TnKd + TnKdI/t)$ → With $I=10$, $t=20$: $O(TnKd)$ - **Space**: $O(nK + nd)$ - Compared to **InfoNCE** ($O(n^2d)$), this overhead is negligible. --- ### **Weaknesses** + **W1**: We will add schematic diagrams to **Sections 3 and 4** for clarity. + **W2**: This paper’s primary goal is *not* to propose the highest-performing method, but to provide a **theoretical analysis** of parametric GCD’s core motivation—*leveraging old-class knowledge to distinguish new classes*—and validate it via controlled experiments. **SWIRL** serves only to demonstrate that our theory can inspire new designs. Thus, comparisons with state-of-the-art GCD baselines are deferred to future work. --- ### **Suggestions** **Typos**: Thank you for your meticulous review! + **S1**: We treat node embeddings $z_i$ ($i \in [N]$) as random variables with pairwise dependencies defined by $\mathbf{W}$. Here, $W_{ij}=1$ indicates a relationship between nodes $i$ and $j$. $\mathbf{W}$ can be seen as the underlying graph of node embeddings, charterizing the relationships between node embedding random variables and the corresponding pairwise Markov random field (PMRF). The joint distribution on this PMRF is: \[ f_k(\mathbf{Z}, \mathbf{W}) \propto \prod_{(i,j) \in [N]^2} k(\mathbf{z}_i - \mathbf{z}_j)^{W_{ij}}. 
\] Each class corresponds to a connected component/cluster of the graph $\mathbf{W}$, with its center representing the global class position in the embedding space. **Theorems 4.1–4.3** show these centers have *infinite variance*, making their positions uncontrollable. We will add diagrams to illustrate this intuitively in the revised manuscript. --- ### **Questions** + **Q1**: The conclusions in **Sec. 4** apply to *all* representation learning methods using **InfoNCE** or **SupCon loss** (thus including GCD [1]). For **Sec. 3**, the assumptions (e.g., Lipschitzness of $h$) do *not* hold for GCD’s SS-KM classifier. + **Q2**: $\mathbf{A}$ is the observed graph of $n$ nodes—a tiny subset of $N$ possible (augmented) nodes. Each augmentation step connects these $n$ nodes to another $n$ positives in $\mathbf{W}_X$ (an $N \times N$ graph with $N-2n$ isolated nodes). For theoretical purposes, a large $N-2n$ does *not* matter, and the methods inspired by the theoretical results actually play with the small $n\times n$ graph. [1] *Generalized Category Discovery*, CVPR2022 --- Rebuttal Comment 1.1: Comment: Thanks for the response. Most of my concerns have been addressed. The remaining concern is that SS-KM employed in SWIRL appears to require application to the entire dataset at the beginning or end of each epoch. However, in many cases, the sample size n is excessively large, making the computational complexity potentially prohibitive. Could we alternatively implement it on mini-batches of b samples to reduce computational complexity? --- Reply to Comment 1.1.1: Comment: We sincerely appreciate your thorough review and insightful questions. In SWIRL, the SS-KM algorithm can indeed be applied at the batch level: within each mini-batch, SS-KM is used to obtain cluster centroids and assignments, which are then linearly combined with the old centroids from before processing the mini-batch via momentum-based updating.
In fact, this approach was adopted in our experiments on **CIFAR100** (please see **W1 in our response to Reviewer Hzoz**). However, in the graph experiments reported in the manuscript, since most datasets did not require training in mini-batches, we did not implement this design. In future work, we will update the code to support mini-batch training on larger datasets.
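The momentum-based, batch-level update described in the reply above can be sketched as follows. This is an illustrative reading, not the authors' code: the function name, the labeling convention (label `-1` for unlabeled samples), and the momentum value are all assumptions.

```python
import numpy as np

def minibatch_sskm_update(centroids, Z_batch, labels_batch, momentum=0.9):
    """One batch-level SS-KM step (illustrative sketch): unlabeled samples
    (label == -1) are assigned to the nearest centroid, labeled samples keep
    their known class; batch means are then blended into the running
    centroids with a momentum update."""
    K = centroids.shape[0]
    # Squared distances of each batch embedding to each centroid: shape (b, K).
    dists = ((Z_batch[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    assign = np.where(labels_batch >= 0, labels_batch, dists.argmin(1))
    new_centroids = centroids.copy()
    for k in range(K):
        members = Z_batch[assign == k]
        if len(members):
            # Momentum blend of the running centroid and the batch mean.
            new_centroids[k] = momentum * centroids[k] + (1 - momentum) * members.mean(0)
    return new_centroids, assign
```

Each mini-batch moves the running centroids only a fraction `(1 - momentum)` toward the batch means, so the centroids evolve smoothly even though each batch sees only part of the data.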
Summary: The paper focuses on Graph Generalized Category Discovery (GGCD), a node-level task aimed at identifying both known and novel categories in unlabeled nodes by leveraging knowledge from labeled old classes. The authors provide the first theoretical analysis for parametric GCD on graphs, quantifying the relationship between old and new classes using the Wasserstein distance in the embedding space. This analysis establishes a provable upper bound for the GCD loss, revealing two critical conditions: (1) small Wasserstein distance between old and new classes to transfer discriminative knowledge, and (2) sufficient separation to avoid overlap between old and new classes. Through a Pairwise Markov Random Field (PMRF) perspective, they show that mainstream graph contrastive learning (GCL) methods employing InfoNCE or SupCon losses inherently violate these conditions due to uncontrolled category relations and the smoothing effect of GCN encoders. To address this, the paper proposes SWIRL, a new GCL method designed to better control global category relations. Experiments on synthetic and real-world graphs validate the theoretical findings and show SWIRL's effectiveness. Claims And Evidence: The efficacy of the SWIRL method has been amply demonstrated. The principal theoretical outcomes of the authors were validated under a specific category distribution layout (as illustrated in Figure 2). Although the results align closely with the theory, it is incumbent upon the authors to further elucidate whether this layout sufficiently attests to the universality of the theoretical findings. Methods And Evaluation Criteria: The authors have employed reasonable evaluation metrics, and the proposed method effectively addresses the target problem at hand. Theoretical Claims: I haven't looked at the proof process, but these theorems do align with my intuition. Experimental Designs Or Analyses: I have checked all claims, experimental designs, and analyses.
Overall, the designs and analyses are valid to demonstrate authors’ claims. However, in the experiments of Section 4, the authors should present and discuss the performance of SWIRL-GPR, which is trained using the SWIRL method with a GPR encoder. Supplementary Material: Yes. Appendix A, B, C, and G. Relation To Broader Scientific Literature: This paper extends GCD from image domain to graph domain, with a novel theoretical analysis on GCD methods (e.g., SimGCD) that relies on parametric classification heads. Though the application cases are about graphs, most analyses in this work seem to fit into the cases of non-graph data. Hence, it can be taken as the first comprehensive analysis of parametric GCD performance from both classification head and representation learning sides. Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths** - The authors represent the first attempt to quantify the relationship between the distributions of old and new classes, which I think is crucial for GCD analysis, particularly for the analysis of generalization error. - The theoretical findings provide valuable insights that can inform the development of novel GCD methodologies. - The proposed method is designed with the inspiration from authors’ theoretical analyses. **Weaknesses** - The final hyperparameters for all methods are not reported. - Despite the extensive theoretical content presented earlier, the method proposed by the authors, SWIRL, appears to be essentially heuristic in nature. - The author's title suggests an exploration of generalized category discovery on graphs, but the content of the article primarily focuses on GCD methods that utilize parametric classifiers. I believe this is not appropriate. - The paper is too mathematical to read in some parts (e.g., Sec. 4.1). 
Other Comments Or Suggestions: - The last sentence of Assumption 3.3 mentions that the author's experiments also support the equivalence between \( L = D_{CE} \) and \( L = MSE \), but the author does not analyze this in the results discussions. - In Appendix G.1, the author has improved the existing evaluation protocol, but the explanation of its advantages is not sufficiently intuitive. I think some specific examples would help readers quickly grasp the key points of the improvements. Questions For Authors: 1. Do the theoretical results in Sections 3 and 4 directly apply to parametric GCD methods for image data? 2. The computational cost of the InfoNCE loss is high when the batch size is large. Some works in other fields [1] have found that the pairwise sigmoid loss without global normalization can enable more efficient contrastive training. Is it possible for your theory in Section 4 to extend to contrastive learning algorithms that utilize this type of loss? 3. The loss \( L_{te}(h) \) (Eq. 5), which reflects the GCD performance of the classifier \( h \), is a weighted sum of three GCD capabilities. However, how are the weights for the three components determined? 4. In the illustrative experiments (Sections 3.4 and 4.3), the authors employed a similar spatial arrangement for the category space: new classes are situated inside, while old classes are placed outside. Why was this design necessary? Have the authors experimented with other types of layouts? [1] Sigmoid Loss for Language Image Pre-Training, ICCV2023 Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for the valuable comments that helped improve our paper! --- ### **Claims and Evidence** **Theorem 3.5** summarizes three key aspects of **GCD**: **Old**, **New**, and **Reject Capability**. This work primarily focuses on the latter two (involving new-class data), so we intentionally made old classes easily distinguishable to eliminate interference and study how the ability to separate old classes transfers to new ones. From this perspective, our experimental setup validates the **New/Reject Capability** aspects of **Theorem 3.5**. For the **Old Capability** part of **Theorem 3.5**, we previously conducted experiments under an alternative setting: placing old classes *inside* and new classes *outside* (current old-class positions). The results confirmed that insufficient Old Capability negatively impacts new-class separation, aligning with theoretical predictions. ### **Experimental Designs or Analyses** The results for **SWIRL-GPR** are shown below. On dataset **D1**, SWIRL outperforms **SimGCD-GPR**, while the opposite holds for **D3**. Upon visualizing the representation space of **SWIRL-GPR on D3**, we observed significantly weaker intra-cluster cohesion compared to **SWIRL-GCN**, leading to poorer GCD performance. We attribute this to both **GPR encoders** and **SWIRL** emphasizing information from distant nodes/spaces *beyond local neighborhoods*, which neglects local structural cues and hinders tight cluster formation. Additionally, since **GPR encoders** are inherently challenging to train in unsupervised graph contrastive learning (e.g., via GRACE-style methods) [1], we recommend pairing **SWIRL** with a **GCN encoder** for better local-space cohesion. 
| **D1** | **HRScore** | **Old RACC** | **New RACC** | **Reject ACC** |
|--------------|------------|--------------|--------------|----------------|
| SimGCD-GPR | 77.18 | 99.79 | 62.92 | 99.48 |
| SWIRL-GPR | **87.72** | **100.0** | **78.13** | 98.54 |

| **D3** | **HRScore** | **Old RACC** | **New RACC** | **Reject ACC** |
|--------------|------------|--------------|--------------|----------------|
| SimGCD-GPR | **88.77** | **98.12** | **81.04** | 90.52 |
| SWIRL-GPR | 82.39 | 96.88 | 71.67 | 89.58 |

### **Weaknesses** + **W1**: We will promptly release the code and document all hyperparameter settings in configuration files. + **W2**: This work focuses on *theoretically* explaining how old-class knowledge transfers in GCD. We achieved this goal, and our theoretical results guide GCD method design. The heuristic **SWIRL** merely validates this guidance; rigorous provable methods are left for future work. + **W3**: Thank you—we will explicitly clarify the paper’s scope in the revision. + **W4**: We will add schematic diagrams for **Sections 3 and 4** to improve readability. ### **Suggestions** + **S1**: Although not re-emphasized, all verification experiments used $L = D_{CE}$ (cross-entropy loss), and results aligned well with theoretical predictions, demonstrating their partial equivalence. + **S2**: Practical examples will be added in the revised version. ### **Questions** + **Q1**: Yes, except for the content about the GCN encoder. + **Q2**: Excluding global normalization violates the PMRF’s fundamental dependency assumption, making our framework incompatible with such losses. + **Q3**: This ultimately depends on which capability is prioritized in a specific application. Regarding the theoretical side, it does not influence the bound formulation. + **Q4**: Please refer to **Claims and Evidence**.
[1] PolyGCL: GRAPH CONTRASTIVE LEARNING via Learnable Spectral Polynomial Filters, ICLR2024 --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and am pleased to see the additional experimental results and discussion. We kindly request the authors to share the schematic diagrams designed for Sections 3 and 4 at their earliest convenience.
Summary: This paper studies the generalized category discovery problem in the graph node classification context. The authors aim to answer the question “When and how do old classes help (parametric) generalized category discovery on graphs?” in a theoretical way. The answer from the authors is based on the relationships between old and new categories in the embedding space, the elements of which are later fed into a parametric classifier to make final predictions. Specifically, the Wasserstein distance between the distributions of old-class and new-class embeddings should be minimized, provided that the embeddings of the old and new categories do not overlap. Later, the authors find that the representation learning methods in current GCD methods tend not to meet these conditions. And to address this issue, they propose a new GCL method, SWIRL. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No. Experimental Designs Or Analyses: Yes. This paper has sound and valid experimental designs to support the authors' claims. Supplementary Material: Yes. I have reviewed A, B, C, and G. Relation To Broader Scientific Literature: This paper is, as far as I know, the first theoretical work to answer “When and how do old classes help (parametric) generalized category discovery?” in the field of GCD. But the theories seem to hold only for those methods using a parametric classification head (e.g., MLP). In spite of this, the quantification of the relationship between old and new categories and the GCD loss upper bound make this work quite novel. The analysis of contrastive learning methods in Sec. 4 is insightful to design new representation learning methods for the GCD problem. Though the authors put a graph context, the main results in this work seem to hold for image data as well. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1.
This paper is well-organized to answer the question “When and how do old classes help (parametric) generalized category discovery on graphs?” 2. The theorems answer the question clearly and the proposed SWIRL is motivated by the theoretical results. 3. The hyperparameter analysis shows that the proposed method is not sensitive to hyperparameters, which is good for practice. Weaknesses: 1. I suggest the authors test SWIRL on image datasets such as CIFAR100 to make this work more comprehensive. 2. There are a lot of theorems, and the paper is very dense in math. More intuitive discussion can help readers understand. 3. In line 1938, p36, 035 should be 0.35. 4. For all datasets, the weights of supervision and self-supervision losses are fixed to 0.35 and 0.65, respectively. Is this an appropriate way? 5. In line 2076, p38, the authors state that “compared to other instance-to-prototype contrastive loss”. This statement is not accurate since $s=0$ can only derive one form of instance-to-prototype contrastive loss, whereas in reality, there are many other forms of instance-to-prototype contrastive losses. 6. In Sec. 4.1, the authors say that $\mathbf{W}_{X}$ is the subgraph sampled from $\mathbf{B}$. But no clear sampling definition and meaning are provided, leading to some confusion. What’s the shape of the sampled subgraph? Furthermore, how can $\Omega_{D}(\mathbf{W})$ align with the single-positive-sample manner in GCL methods such as GRACE? Other Comments Or Suggestions: See the weakness part. Questions For Authors: See the weakness part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your constructive suggestions! We hope the response can address your concerns. + **W1**: When keeping other hyperparameters consistent, adding the **SWIRL** loss alone to **SimGCD** not only achieves a better **HRScore** but also reaches a high performance level as early as **epoch=5**. The implementation is based on the code https://github.com/CVMI-Lab/SimGCD.

| CIFAR100 | HRScore (epoch = 5) | HRScore (epoch = 200) | Old RACC | New RACC | Reject ACC |
|----------|----------|------------|----------|----------|------------|
| **SimGCD** | 33.9 | 77.9 | 81.8 | 74.3 | 91.9 |
| **SWIRL** | **65.7** | **78.4** | 78.4 | 78.1 | 90.8 |

+ **W2**: In the revised version, we will include a schematic diagram to illustrate the theoretical result in **Sec. 3**: semantically similar categories should also be close in the representation space. Additionally, we will add another diagram to visualize the **PMRF perspective of GCL** mentioned in **Sec. 4**. + **W3**: We have carefully checked the entire manuscript to ensure no similar errors remain. + **W4**: For **parametric GCD methods** (the main focus of this work), preventing new-class samples from being misclassified as old classes is crucial. Thus, the weight of the supervised signal typically needs to be lower than that of the self-supervised signal. Here, we follow the hyperparameter selection of **SimGCD**. Moreover, since all methods use the same weight settings, adopting different weights on other datasets does not affect the conclusions of this paper. + **W5**: Thank you for pointing this out. To be precise, when $s=0$, the **SWIRL loss** closely resembles **PCL [1]**. The key distinction of our method lies in how we **push away different samples and clusters with varying repulsion force**. + **W6**: $\mathbf{B}$ represents the relationship among all **N samples** (including all possible augmented views) in our framework. The dataset of **n nodes** is just a subset.
$B_{ij}$ denotes the similarity between nodes **i** and **j**, as well as the probability of sampling them as a positive pair. In each augmentation step, we sample one positive node **j** for each node **i**. Once sampled, the edge **(i,j)** becomes part of the subgraph $\mathbf{W}_X$. Since **n << N**, the **N × N** matrix $\mathbf{W}_X$ is actually a small graph with **2n non-isolated nodes**. The remaining **N-2n** samples are isolated and retained in $\mathbf{W}_X$ only for formulation convenience. + **W7**: $\Omega_D(\mathbf{W})$ equals **1** only when each row of $\mathbf{W}_X$ has exactly one non-zero element (i.e., each node has only one neighbor). Thus, multiplying by it ensures that in the sampled subgraph each node has a single positive node. [1] *Prototypical Contrastive Learning of Unsupervised Representations*, ICLR 2021. --- Rebuttal Comment 1.1: Comment: The authors have addressed my questions. I maintain my score and support its acceptance.
COExpander: Adaptive Solution Expansion for Combinatorial Optimization
Accept (poster)
Summary: This article proposes an Adaptive Expansion (AE) paradigm for COPs, demonstrating its advantages in a series of experiments on COPs. ## update after rebuttal Thanks for your detailed response. All of my concerns are clearly resolved, so I have raised my score to 4. Claims And Evidence: Generally clear and convincing. Methods And Evaluation Criteria: Yes, they make sense. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: I notice that, similar to this article, a recent D&C method, UDC, also uses a GNN for initial tour construction (which does not seem entirely consistent with what is described in Figure 4). Considering that your proposed COExpander requires an optimal solution as the label, can COExpander demonstrate outstanding performance compared to UDC under a similar runtime? Supplementary Material: I have read the supplementary material for detailed performances and the literature review. Relation To Broader Scientific Literature: 1. This article proposes an Adaptive Expansion (AE) paradigm which bridges the global prediction (GP) and local construction (LC) paradigms through the utilization of a partial-state-prompted solution generator. 2. This paper presents a series of benchmarks, which is beneficial for future research. Essential References Not Discussed: Generally okay. Other Strengths And Weaknesses: **Strength:** 1. The experimental volume and workload of this article are large, and the literature review is relatively complete. 2. COExpander demonstrates good results. **Weakness:** 1. I think I have tried my best to understand, but I still cannot fully comprehend the methods proposed in this article. This article lacks a simple and clear description of the method design and intuition (e.g., what effect predicting the mask and using partial construction has for MIS or TSP). I will ask some of my doubts in the Question part. Other Comments Or Suggestions: See Questions. Questions For Authors: 1.
Is COExpander sensitive to the specific strategies for determining operations for COPs, and are there any relevant ablation experiments? 2. Is COExpander sensitive to the parameters in Eq (5), and what is the design concept of Eq (5)? 3. Do you plan to publish the code implementation? For Weakness1: 4. In Line 273, what does it mean that the model input contains the optimal solution $x^*$? 5. In Algorithm 1, you mentioned Model Inference with M. The acquisition of M seems to require the optimal solution. How was this step implemented during testing? 6. Can you provide a simple and clear description of the method design and intuition? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear Reviewer F5eW, We are more than grateful for your time and recognition of our contributions. We sincerely present our point-by-point clarifications to your questions below. > **Q0-Comparing with UDC.** **A0:** First, we have cited the UDC method and explained our exclusion of further comparison in Sec.1 (line 25-26, right). Second, we provide a brief performance comparison below, showing that COExpander generally outperforms UDC (reported in the original paper with the *best* settings) in terms of both optimality gap and per-instance efficiency.

|Task|Gap(UDC,%↓)|Time(UDC,s↓)|Gap(Ours,%↓)|Time(Ours,s↓)|
|-|-|-|-|-|
|TSP500|1.58|1.88|**0.25**|**0.66**|
|TSP1K|1.78|3.75|**0.64**|**2.43**|
|TSP10K|4.03|60.00|**1.45**|**29.50**|

> **Q1-Sensitivity to specific strategies for determining operations.** **A1:** In fact, to mitigate the overreliance on ad-hoc designs, the decisive operators in our work are based on the simplest greedy heuristics. Take MIS for instance: as detailed in Sec. 4.2.2, we sort the predicted heatmap in descending order and sequentially (greedily) attempt to add nodes to the independent set. Once the "independence" constraint is breached, the process halts and the next round of model inference commences. **Thus, the version we propose can essentially be regarded as a baseline strategy, while future efforts on more complex determining operations are encouraged and may necessitate ablations against ours.** We will also explicitly address this point in the paper to prevent misunderstandings. Thanks for your question! > **Q2-Sensitivity to the mask probability in Eq(5).** **A2:** Please refer to our **response A2 to Reviewer UKMH**, where the effects of $\alpha$ are systematically evaluated through supplementary experiments. > **Q3-Disclosure of the code implementation.** **A3:** Sure. Rest assured that the code and datasets of this work will be fully open-sourced as soon as the conference permits.
At the moment, due to the regulation that "links may only be used for figures (including tables) and captions that describe the figure (no additional text)", we are more than willing to provide our code via anonymous links once we have your confirmation that it does not violate any of the rules. > **Q4-Explanation of model input containing optimal solution $x^{*}$.** **A4:** Recall the process of the diffusion mechanism: in the training stage, $x^*$ serves as the starting point to generate the noising trajectory, i.e., $x_{0:T}=x_0,x_1,\cdots,x_T$ where $x_0=x^*$, and the noised representation $x_T$ is fed to the model to learn the denoising mapping. At inference, the noised vector $x_T$ is generated from random Gaussian noise without a ground-truth label. We note the description in Line 273 is slightly inaccurate and have updated it accordingly to mark the difference in model input between the training and testing phases. Thank you for the critical point! > **Q5-Acquisition of $M$ in the testing phase.** **A5:** First of all, we'd clarify that the mask $M$ is a Boolean matrix with the same shape as the solution $x$. Its purpose is straightforward: to mark whether an edge or node has been determined (assigned 0 or 1). Hence, $M$ is initialized as a zero matrix (vector) at the beginning of tests, and there is no requirement for the optimal solution $x^*$. We've updated our scripts in Algorithm 1 to explicitly show the initial setting of the mask. > **Q6-Request for a simple and clear description of the method.** **A6:** In this paper, we put forward a novel paradigm for the neural decisive process, aiming to harness the strengths of both global prediction and local construction methods. Briefly, the process of determining the solution for a problem instance is adaptively divided into multiple rounds. During each round, the model generates a heatmap that forecasts the probability of each node/edge being chosen.
Subsequently, the solution is greedily constructed in descending order of the heatmap values until the constraints are breached. All the selected nodes/edges are then marked as 1 and, together with the partial solution, fed back into the model to initiate a new round of inference. This cycle continues, with more nodes or edges being determined in each successive round, until every decision variable has been taken into account and assigned a value of either 0 or 1 in the final solution. The model is trained with randomly generated intermediate states (e.g. 10 out of 50 nodes have been decided in MIS) to learn better predictions for the solving stage.
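The round-based loop described above can be sketched for MIS as follows. This is an illustrative reconstruction, not the authors' implementation: `predict_heatmap` is a stub standing in for the trained model, and finalizing the first constraint-breaching node as 0 before re-inference is our assumption (such a node can never enter the set, and deciding it guarantees each round makes progress).

```python
import numpy as np

def adaptive_expand_mis(adj, predict_heatmap, n_nodes):
    """Round-based greedy expansion for MIS (illustrative sketch).
    `predict_heatmap(x, mask)` stands in for the model: given the partial
    solution x and the decided-mask, it returns a per-node probability."""
    x = np.zeros(n_nodes, dtype=int)      # partial solution (0/1 per node)
    mask = np.zeros(n_nodes, dtype=bool)  # True once a node is decided
    while not mask.all():
        heat = predict_heatmap(x, mask)
        for i in np.argsort(-heat):       # descending heatmap order
            if mask[i]:
                continue
            if (adj[i] * x).any():        # a neighbor is already selected
                # Independence breached: this node can never join the set,
                # so decide it as 0 and start the next inference round.
                x[i], mask[i] = 0, True
                break
            x[i], mask[i] = 1, True       # greedily add to the set
    return x
```

For example, on a 3-node path graph with a stub heatmap favoring the two endpoints, the loop selects both endpoints and excludes the middle node.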
Summary: The paper proposes a learning-based approach for solving combinatorial optimization problems that combines global and local paradigms to achieve better performance. The authors describe their method and evaluate it empirically. Claims And Evidence: The claim to have reimplemented a range of state-of-the-art solvers is not credible. Methods And Evaluation Criteria: Yes. Theoretical Claims: n/a Experimental Designs Or Analyses: Yes, as far as possible based on the description provided. Supplementary Material: Appendix. Relation To Broader Scientific Literature: Adequate. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Potentially a very novel way to approach CO. Other Comments Or Suggestions: The approach is novel to the best of my knowledge and seems to work well in practice. However, the paper is too long for a conference format in my opinion (the appendices are 18 pages compared to 11 pages for the main paper including references). The authors claim to have reimplemented the solvers Gurobi, KaMIS, LKH, GA-EAX, and Concorde -- really? Are you seriously claiming to have reimplemented all of these solvers, including the closed-source Gurobi, that are the result of decades of implementation efforts? How did you verify that your reimplementations are faithful to the originals? Appendix D seems to suggest that your reimplementations are in fact faster than the originals, which would be a huge advance in itself. Please clarify this important point. This issue unfortunately casts doubt on the empirical results, in particular the improvements of the proposed method over the baselines. Similar to the classical solvers, the neural approaches were retrained in at least some cases, but it is unclear whether the training process reflects what was used to train those approaches originally. Thus, it is unclear to what extent improvements are due to the proposed approach. 
In addition, a custom set of benchmarks is used, making comparison with published figures difficult. Questions For Authors: Please clarify what you implemented and where you used existing implementations. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer CvKY, We appreciate your time reviewing and acknowledging the novelty and empirical efforts of our work. There appear to be several misunderstandings, chiefly concerning the claim about the "reimplementation" of baseline solvers and existing neural methods. We offer point-by-point clarifications below. --- > **Q1-About paper length.** **A1:** Thank you for your concern. Admittedly, our paper has a relatively long appendix. However, we'd like to explain that: First, each part of the appendix provides supplementary support for the core idea. It includes additional related work for a broader discussion of existing NCO efforts (A), details of the 3 paradigms in our proposed taxonomy (B), formal problem definitions (C), introduction of our re-standardized data and re-wrapped tools (D&E), implementation details for reproduction (F), and supplementary results and analysis (G). Second, as far as we know, many works [1,2,3, etc.] have comprehensive content without their contributions being devalued by top conferences, and we've double-checked to ensure no violation of the submission rules. Thus, we believe our paper contains the necessary context to present a complete picture of our proposal, with both narrative and procedural justifications, comparable to precedents of the same tier and area. --- > **Q2-Doubts on the claim of reimplementation.** > **A2:** Thank you for posing this significant question. **Firstly**, we sincerely apologize for any misunderstanding. In fact, we are not "re-implementing" the renowned solvers; rather, our work centers on enhancing the usability of invoking them through more unified and efficient APIs for future research. This process is more aptly described as re-"wrapping"/"encapsulating" at the engineering level, without duplicating the underlying algorithms. Hence, the issue of “faithfulness” to the original implementations does not pertain here.
Below we provide sample code showing how the solvers are accessed within our framework:

```
solver = MISGurobiSolver()
solver.from_txt("path/to/data")
solver.solve()
>>> (obj: xx, gt_obj: xx, gap: xx, std: xx)
```

We believe our efforts enable easy and consistent use of these solvers for future research. **Regarding the improvement of LKH**, as elucidated in Appendix D.2, it involves an in-depth exploration of the complex built-in settings of the LKH package. We found that the LKH3 package utilizes a parameter named `SPECIAL`, which by default disables the use of 2-/3-opt operators, thus compromising the solving quality on TSP. So, we empirically removed it to enhance both performance and efficiency, and leave it as an open question prompting further investigation into its internal mechanisms. **In summary**, we regret any confusion caused and hope our clarifications accurately convey the nature of this part of our work—aiming for improved convenience and potential performance gains from an engineering perspective. We'd like to reiterate our primary technical contribution: the proposal of a novel neural paradigm and framework for COPs, supported by comprehensive experiments.
Our intention is to help establish standardized benchmarks for these problems, similar to the well-established TSP benchmark. It's worth noting that the results obtained from our retrained neural approaches generally match or even surpass those reported in the original papers (see the table below). Rest assured that we shall release the code and data via anonymous links as soon as the conference permits, in any case.

|Data|Method|Ori. Gap(%)|ReImpl. Gap(%)|
|-|-|-|-|
|MIS-SMALL|DIFUSCO|4.29|3.46|
|MVC-SMALL|Meta-EGN|2.80|1.56|
|MCl-SMALL|Meta-EGN|7.00|8.30|
|MCut-SMALL|DIFUSCO|0.48|0.06|
|MCut-LARGE|DIFUSCO|-0.11|-1.77|
|TSP-100|GCN|1.39|0.26|
|TSP-100|SYM-NCO|2.88|1.75|
|ATSP-100|MatNet|3.24|3.26|

---

## References

[1] UDC, NeurIPS 2024 (45 pages)

[2] UniCO, ICLR 2025 (41 pages)

[3] UCOM2, ICML 2024 (34 pages)

---

We hope our clarifications adequately address your concerns. We would be grateful if, upon your careful reconsideration, a stronger consensus could be reached on a more favorable assessment of our work. We remain open to further discussion!

Best regards,
The Authors

---

Rebuttal Comment 1.1: Comment: Thank you for the clarification -- this addresses my main concern and I will raise my score accordingly.
Summary: This paper introduces an adaptive approach for combinatorial optimization, where solution variables are incrementally determined using a dynamically adjusted decision step-size rather than fixed-size decisions. The method integrates global prediction (GP), producing probabilities for all variables at once, and adaptively expands partial solutions in each step. The authors demonstrate the effectiveness of their approach across several benchmarks, showing clear performance gains. However, the core idea appears closely related to the previously introduced 'Learning What to Defer (LwD)' framework [1], which also adaptively adjusts the number of decision steps through deferred actions. To clearly establish the novelty of this paper, a direct experimental comparison against LwD is necessary. Without such comparisons, the originality of the proposed method remains uncertain. For this reason, I currently give a weak rejection and recommend that the authors provide explicit comparisons to related adaptive decision-making methods to better highlight their contribution. [1] Ahn et al., "Learning What to Defer for Maximum Independent Sets", ICML 2020. --- **After Rebuttal**: I increased my score because the authors' rebuttal addressed my concerns. Claims And Evidence: Their claim that the GP method is not expressive enough and the LC method is not very efficient is valid. Adaptive decision-making appears to be a promising alternative that addresses these limitations. Methods And Evaluation Criteria: Their evaluation criteria are reasonable and clearly justified. Theoretical Claims: No theory here. Experimental Designs Or Analyses: The experimental design is reasonable, covering several benchmarks and large-scale cases. Supplementary Material: They provided massive additional experiments in the appendix. Relation To Broader Scientific Literature: This strategy is related to decision-making methods in LLMs.
Adaptive decision-making could potentially be useful in that context as well. Essential References Not Discussed: One critical missing reference is [1], which presents an almost identical claim and proposes an adaptive decision-making scheme that automatically stretches or shrinks decision steps using deferred actions. A direct comparison with this work is necessary, along with a clear discussion of the differences and contributions. **[1]** Ahn et al., *"Learning What to Defer for Maximum Independent Sets,"* ICML 2020. Additionally, for **partial solution generation**, you may refer to prior work related to GLOP: - Kim et al., *"Learning Collaborative Policies to Solve NP-hard Routing Problems,"* NeurIPS 2021 (which introduces a "reviser" to refine partial solutions). For **LC solvers**, you may also include the following work in the related works section, as it extends DeepACO: - Kim et al., *"Ant Colony Sampling with GFlowNets for Combinatorial Optimization,"* AISTATS 2025. Other Strengths And Weaknesses: **Weaknesses:** They only verify their method on simple constrained combinatorial optimization problems. Its effectiveness on more complex constrained problems, such as CVRP, PDP, and JSSP, remains uncertain, especially since hard constraints in these problems are more naturally handled by the LC method. Other Comments Or Suggestions: None Questions For Authors: I have one critical question: Can this type of method effectively handle hard-constrained problems? As far as I know, such methods perform well on locally decomposable problems like MIS and simple routing problems such as TSP. However, I have yet to see a non-LC method successfully solve complex routing problems, even relatively simple ones like CVRP, which involve multiple hard constraints. I would like to hear your discussion on this issue and how your algorithm might address these challenges. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer zhdB,

We appreciate your meticulous review and constructive advice. Below we respond to the 3 major points mentioned in your comments.

> **Q1: Comparison with LwD.**

**A1:** **Firstly**, it was our oversight not to compare LwD initially. Indeed, LwD is a highly relevant counterpart within the AE paradigm employing the RL learning strategy, and it merits a more in-depth discussion in our paper given its gradual "determination" of decision variables. **However**, it’s important to note that while LwD focuses predominantly on problems that are "locally decomposable" through RL-based policy learning, our COExpander represents a significant extension. It is designed to handle a broader spectrum of tasks, spanning node-selection tasks like MIS (isolated constraints), edge-selection tasks such as TSP (simple global constraints), and even more complex CVRP (detailed in Q3). This enhanced capability is underpinned by the utilization of more advanced backbone model architectures, specifically diffusion and consistency models, in combination with SL learning. We list the results comparing LwD and COExpander in [ae_solvers.png](https://anonymous.4open.science/r/COExpander-ICML-Rebuttal-0246/ae_solvers.png). **To summarise, we are more than delighted to have incorporated your recommended work into our discussion and experimental comparisons. Our collective efforts, we believe, will contribute to a more systematic establishment of the AE paradigm in the context of NCO problem-solving.**

> **Q2: Suggestions on additional related works (LCP & GFACS).**

**A2:** Thanks for your suggestion. We have carefully studied the works you provided and agree that LCP introduces a reviser mechanism for refining partial solutions and GFACS provides valuable insights into advanced sampling techniques for LC solvers, which are both significant approaches in the NCO community.
Therefore, we have followed your advice to cite them in our related work to enhance the paper’s connection to the broader NCO research landscape.

> **Q3: Applicability to more complex problems like CVRP.**

**A3:** Firstly, COExpander can be directly applied to CVRP, as presented in [cvrp.png](https://anonymous.4open.science/r/COExpander-ICML-Rebuttal-0246/cvrp.png), which is, to our knowledge, the first attempt to demonstrate the applicability of heatmap-based methods on VRPs. Admittedly, SL methods have limitations in tackling complex constraints. Inspired by LwD, we have conceived a theoretical extension of the AE paradigm to solve complex problems like VRPs via PPO.

**(i) Overview.** The AE paradigm is an iterative process satisfying the Markov property, and can thus be modeled as an MDP. We plan to use PPO for model training, in order to better adapt to the relationship between actions and constraints. **Note that a feasible scheme is theoretically described as follows, while the implementation and empirical results obviously exceed the scope of this paper, and are hence reasonably deferred to future research.**

**(ii) Modeling.** state $s_t:=(x_t,M_t)$; action $a_t:=(m_t,z_t)$ (selection and assignment); transition $P(s_{t+1}|s_t,a_t)$ divided into $x_{t+1}^{i} = \begin{cases}z_t^i&\text{if }m_t^i=1 \\\ x_t^i&\text{else }\end{cases}$ and $M_{t+1}=M_t\lor m_t$; reward is only defined in the final step $R(s_t,a_t) := \begin{cases} -c(G, x_t) & \text{if } t=k-1,\\\ 0 & \text{else} \end{cases}$

**(iii) Training Model with PPO.** Given the model $f(\cdot, \cdot)$, $\pi_\theta^t := \pi_\theta(a_t|s_t, G) = f(G, s_t)$. $z_t$ of $a_t$ is sampled from $\pi_\theta^t$ via **Bernoulli sampling**, and the determined variables are then set to 0. The advantage function: $\hat{A}\_t= -\gamma^{k-1-t} \cdot c(G,x_{k-1})-\mathbb{E}\_\theta [ -c(G,x_{k-1} | s_t)]$.

**(iv) The Selection Part of Action.** The selection part $m$ of $a$ is the core mechanism for controlling constraint satisfaction.
Take CVRP as an example.

- Reshape $m$, $z$, and $M$ to $n\times n$, where $n$ is the number of nodes including the depot. $z$ needs to undergo an OR operation with its transpose.
- Make the pre-assignment: $\hat{x}\_{i,j}=\begin{cases}z_{i,j}&\text{if }M_{i,j}=0, \\\ x_{i,j}&\text{if }M_{i,j}=1\end{cases}$.
- Check constraints: $C_i=\begin{cases}1&\text{if }\sum_{j=1}^{n}\hat{x}\_{i, j}>2\text{ or }\sum_{j=1}^{n}p_{i,j}\cdot d_j>1 \\\ C_i&\text{else}\end{cases}$
- Re-assignment: $\hat{z}\_{i,j}=\begin{cases}1&\text{if }C_i=1,\\\ z_{i,j}& \text{else}\end{cases}$; $\hat{x}\_{i,j} = \begin{cases}\hat{z}\_{i,j}&\text{if }M_{i,j}=0,\\\ x_{i,j}&\text{else}\end{cases}$.
- Selection part: $m_{i,j}=\begin{cases}1&\text{if }\hat{z}\_{i,j}=1\text{ or } \sum_{j=1}^{n}\hat{x}\_{i,j}=2,\\\ 0&\text{else}\end{cases}$

---

We hope our point-by-point clarifications, supplementary experiments, and positive responses to your requests have satisfactorily addressed your concerns. We remain fully committed to further discussions with you toward an elevated evaluation of our work.

Best regards,
The Authors
Summary: This paper introduces COExpander, a method to solve combinatorial optimization problems by diffusion models. Unlike previous neural methods, COExpander is informed by local partial solutions and iteratively improves upon them to obtain a final solution. The authors study 6 graph CO problems and demonstrate good performance in terms of speed and drop compared to SOTA neural approaches. Claims And Evidence: The paper performs extensive experiments in several graph CO datasets and the evidence is convincing in terms of drop. Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claim is present. Experimental Designs Or Analyses: Yes, I checked the main text. They are overall sound, although some comparisons were not made (see below). Supplementary Material: I have read Appendix A and B and skimmed through the rest. Relation To Broader Scientific Literature: The paper is related to the NCO literature on classical graph CO problems, i.e., problems on the more theoretical side with one constraint. The paper is an improvement over the previous Fast T2T method with some additional iterations for partial solution refinement. Essential References Not Discussed: At least one essential reference is not discussed, iSCO [1]. This paper proposed a sampling method for solving CO problems including most of the ones studied in this paper. iSCO outperforms KaMIS in ER [700-800], while the proposed COExpander has a more than 5% gap compared to KaMIS, meaning that COExpander is likely worse than iSCO. [1] Sun, H., Goshvadi, K., Nova, A., Schuurmans, D., & Dai, H. (2023, July). Revisiting sampling for combinatorial optimization. In International Conference on Machine Learning (pp. 32859-32874). PMLR Other Strengths And Weaknesses: I think the paper is original overall and a good contribution to NCO. Also, the authors made several contributions on the dataset side, including fixing solvers, which is appreciated. In terms of weaknesses: 1.
I believe the main limitation of this approach is the applicability to CO problems with mostly a single constraint. Similarly to other previous approaches, such as DIFUSCO, I do not see how the approach could benefit more practical problems such as vehicle routing problems and job shop scheduling. 2. This approach is supervised and requires optimal solutions $x^\*$ unlike RL, unsupervised learning, or sampling-based approaches such as iSCO. This may not be possible for more complex problems which either do not have or whose optimal solutions are hard to obtain. 3. Writing: I found some parts confusing in the flow. For example: in the abstract and intro, (self-) “adaptive/adapted step-sizes” is repeated multiple times, but in the main text (Section 4) this is not clarified. I am still unsure about the meaning (see question). Other Comments Or Suggestions: Minor points: 1. About the claim “To address this issue, we abandoned node features and adopted convolutions only on edges. To our best knowledge, this is the first global predictor with good performance for ATSP.”, GLOP (Ye et al. 2024b) already implemented such a network. 2. The planned release of code and datasets is a good contribution. However, at the moment, it is just a promise. I would invite the authors to consider submitting the code through a zip file or an anonymized link in the future. Questions For Authors: 1. Which part does the “adaptive” in your method stand for exactly? I could not find a single answer to this, as the term is mentioned in the abstract and introduction but not in Section 4. My understanding is that this is the number of determination steps, etc., from the ablation studies, but this is not adaptive and is decided a priori. Related to this, what does the "self" in "self-adaptive" stand for? 2. How did you choose the $0.9$ in equation 5? 3. What is the meaning of encoding $y_t$ with a sinusoidal embedding layer? Are the nodes numbered in any way? 4.
In Figure 3, why can more inference steps worsen performance in some problems? 5. You mentioned solving more general VRPs in future works. How would that work, given that, in my understanding, in such problems the mask may change values multiple times depending on the current tour? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear Reviewer UKMH,

Thanks for your recognition and valuable questions.

>**Q1: Interpretation of "adaptive"**

**A1:** In our work, "adaptive" means exactly the fact that *the number of ascertained decision variables within one round of determination is not fixed*. Specifically, it is achieved via the autonomy of the determination operator (see Sec.4.2), where one determination round usually ends when the greedy operator yields invalid variable assignments, so the number of decisions per round is random. Note that $D_s$, which caused your confusion, is just a maximum limit for the **rounds**. E.g., $D_s=3$ for a task with 10 variables ensures the problem gets solved within 3 rounds, but **no prior is imposed on how the quantity of decisions is distributed across those rounds** (possibly [2,3,5] or [4,3,3]). As for the term "self-", we have unified our wording to "adaptive" to describe such flexibility and to avoid conceptual confusion.

>**Q2: Choice of $\alpha=0.9$ in Eq.5.**

**A2:** First, the model convergence and solving quality are not sensitive to $\alpha$. Second, $\alpha=0.9$ is preferred to ensure the model is sufficiently (Pr=0.9) trained on cases solved from scratch, while balancing the training diversity for solving from arbitrary intermediate states (Pr=0.1). Empirical results and detailed analysis are illustrated in the table and figures in [alpha_ablation.png](https://anonymous.4open.science/r/COExpander-ICML-Rebuttal-0246/alpha_ablation.png).

>**Q3: Sinusoidal embedding layer**

**A3:** **Generally**, we follow previous works (GCN4TSP, DIFUSCO, T2T, etc.) in employing the sinusoidal embedding. It functions in a way similar to how position encoding works for Transformers. (Exactly as you speculated, it assigns numerical identifiers to nodes and edges for the GCN.) **Specifically**, for node-tasks (e.g. MIS), the input graph contains only the edge index ($E\in R^{|\mathcal{E}|}$), yet the (noised) solution vector pertains to nodes, i.e.
$y_t\in [0,1]^{|\mathcal{V}|}$. In parallel, for edge-tasks (e.g. TSP), only node coordinates ($V\in R^{|\mathcal{V}|\times2}$) are provided, while the solutions are on the edges, i.e. $y_t\in [0,1]^{|\mathcal{E}|}$. Hence, the design rationale is to supply the missing part for each task type, i.e. dual initial embeddings for both nodes and edges to feed the GCN in either case. Formally, $h_v=S(V),h_e=S(y_t)$ for edge-tasks; $h_v=S(y_t),h_e=S(E)$ for node-tasks. We hope this explanation resolves your confusion, and we'll update our paper with clearer notations.

>**Q4: $I_s$-performance drop in Fig.3**

**A4**: Unlike multi-sampling or repeated random re-solving tricks, in our case, each inference step marks a distinct time point in the denoising process, and the solution vector being denoised is utilized iteratively throughout the entire process. So, a larger $I_s$ merely implies a finer-grained denoising schedule towards the final solution ($\hat{x_0}$). Intuitively, Fig.3 shows an upward trend in solving quality as $I_s$ grows; however, **there is no hard guarantee of performance gain with greater $I_s$, especially when $I_s$ is small (e.g. 1-5), since the inferences are sequentially linked rather than parallelly independent for "best-picking".**

>**Q5: Applicability to VRP**

**A5:** Please refer to **A3 to Reviewer zhdB**, due to the space limits.

>**Q6: Claim of "the first good global predictor for ATSP".**

**A6:** It appears to be a minor misunderstanding. GLOP is indeed a good work; however, as stated in Sec.6.2 of their paper, "we apply MatNet50 checkpoint as our reviser without retraining", implying that MatNet (NIPS21), a solver of the **local construction** (LC) type, was used as the backbone. Thus, it doesn't contradict our claim to be the first **global predictor** (GP) with good performance on ATSP. In any case, we've revised our paper with more careful wording.
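Returning to A3, a minimal sketch of a Transformer-style sinusoidal embedding $S(\cdot)$ may make the dual-embedding idea concrete (the dimension and the 10000 frequency base here are illustrative assumptions, not the exact configuration of our model):

```python
import math

def sinusoidal_embedding(values, dim=8):
    """Map each scalar (e.g. an entry of the noised solution vector y_t)
    to a dim-dimensional sin/cos feature vector, Transformer-style."""
    assert dim % 2 == 0
    embs = []
    for v in values:
        feats = []
        for i in range(dim // 2):
            # geometric frequency schedule, as in standard position encodings
            freq = 1.0 / (10000.0 ** (2 * i / dim))
            feats += [math.sin(v * freq), math.cos(v * freq)]
        embs.append(feats)
    return embs

# Dual initial embeddings, e.g. h_e = S(y_t) for an edge-task such as TSP.
h_e = sinusoidal_embedding([0.0, 0.5, 1.0], dim=8)
```

The same function supplies whichever side (node or edge) lacks native input features, so the GCN always receives both node and edge embeddings.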
> **Q7: Code release**

**A7:** The regulations seem to emphasize that only figures are allowed in anonymous links during rebuttal. So we are more than willing to provide our code, but are awaiting your confirmation that doing so does not violate any of the rules.

> **Q8: Discussion of iSCO.**

**A8:** We've noticed recent sampling methods like iSCO and RLSA (arXiv:2502.00277) and have added them to our discussion of related work. Actually, they can be orthogonally performed on top of COExpander as a strong post-processor, where neural models serve to quickly yield high-quality initial solutions, mitigating the infeasibility issue that sampling methods often encounter. Supplementary experiments on MIS (shown in [rlsa.png](https://anonymous.4open.science/r/COExpander-ICML-Rebuttal-0246/rlsa.png)) validate the synergy of COExpander+RLSA.

---

We hope our responses satisfactorily address your concerns, and we remain open to any further interactions!

Best regards,
The Authors
GenMol: A Drug Discovery Generalist with Discrete Diffusion
Accept (poster)
Summary: The paper focuses on drug discovery and introduces a Generalist Molecular Generative Model (GenMol), a new discrete diffusion model for molecular generation. This approach has the potential to be applied to various drug-related tasks, including de novo generation, linker generation, and hit/lead generation. The model represents molecules as sequential data using fragment-based SMILES strings, known as SAFE strings. According to Figure 2, the work still utilizes atom-level tokens, which may be somewhat counterintuitive, as one might expect fragments to be used as tokens. The discrete diffusion model operates through a masking (forward) and unmasking (reverse) process. For goal-directed generation, the work incorporates fragment remasking and molecular context guidance to enhance generation quality. Experiments are conducted on (1) De novo generation, (2) fragment-based generation, (3) hit generation using the Practical Molecular Optimization benchmark, and (4) lead generation for five protein targets. The results, along with ablation studies, suggest that the proposed approach is promising. However, further clarification on certain aspects of the methodology and evaluation could strengthen the manuscript. Claims And Evidence: 1. It is unclear how many diffusion timesteps are used, making the claims about efficiency unconvincing. These claims appear around Lines 92–94, 200–208, and 634. For example, the diffusion timestep could be set to 500 or 1000, which is a common choice for graph and image diffusion models. But an autoregressive model would perform decoding in a number of steps equal to the maximum number of nodes, which is typically smaller than 500. Additionally, one advantage of autoregressive models is that they do not necessarily need to consider the entire molecular context during generation, whereas GenMol does. This raises further concerns regarding efficiency. 
The authors should explicitly state the hyperparameters used and provide a comparison of generation times. 2. The claim around Lines 93–96 that discrete diffusion models enable GenMol to explore chemical space with remasking should also apply to autoregressive models. While the claim holds for GenMol, it is somewhat unclear why this point is specifically listed as an advantage (Line 88). Further clarification would be helpful. 3. Regarding Lines 99–102, the authors suggest that using fragments as units aligns better with chemists’ intuition for chemical space exploration. If this is the case, why are fragments not used as tokens in the discrete model? It also introduces an additional concern in the following point. 4. The manuscript lacks a convincing discussion on why SAFE is chosen over SMILES, given that the discrete diffusion model operates at the atom level. In other words, when extending discrete diffusion models to molecular sequential strings, one might first consider SMILES as a more direct option. Is there a specific reason for opting for SAFE instead of SMILES? While the manuscript discusses fragment-based remasking, this concept could also be applied to SMILES, as SAFE is derived from SMILES, and the molecular structures represented by SAFE could also be represented using SMILES. Methods And Evaluation Criteria: 1. The details regarding tokenization are unclear. The vocabulary size is stated as 1880 (Line 726), but it is not explained how this vocabulary is constructed. Additionally, it is not discussed whether this vocabulary can generalize to broader tasks, which is important given the claim about the model being a generalist. Clarification on these aspects would be beneficial. 2. The fragment vocabulary derived from the dataset is not sufficiently discussed. The BRICS algorithm is briefly mentioned in Line 177, and I assume it serves as the underlying algorithm for defining fragments. 
However, BRICS tends to prioritize synthetically relevant fragments. As noted in previous work [1], BRICS has several limitations: it breaks bonds based on a limited set of chemical reaction rules, tends to generate a few large fragments, creates variations of the same underlying structures, and results in a very large vocabulary with many infrequent fragments. 3. It is unclear how the iterations in Figure 3 contribute to the quality and efficiency of generation. Additional analysis on this aspect would be helpful to better understand its impact. 4. Figure 3 shows that the framework heavily relies on the scoring oracle. What happens if the oracle is not available? Additionally, if the function used for the oracle does not provide accurate scores, how would this affect performance? Does the proposed GenMol model still perform well under these conditions? Reference: [1] Motif-based Graph Self-Supervised Learning for Molecular Property Prediction. NeurIPS 2021. Theoretical Claims: The work presents the derivation of Eq. (7) in Appendix C. I have no issues about it. Experimental Designs Or Analyses: 1. The authors should provide data statistics for the dataset used to pretrain GenMol. 2. In molecular and fragment-based generation, only one baseline is included. To ensure a comprehensive evaluation, the authors should consider testing additional baselines, including discrete diffusion models, graph diffusion models, and autoregressive models (e.g., SMILES-based models). This would help demonstrate the advantages of SAFE-based discrete diffusion in a fair comparison. 3. The benefits of the method proposed in Section 4 (or Figure 3a) could also be applicable to SMILES-based generation. Could the approach be adapted to other baselines? Discussing this possibility would enhance the generalizability of the proposed method. 4. There are several missing values in Table 4. Supplementary Material: Appendix A, D2. 
Relation To Broader Scientific Literature: The contribution could aid drug discovery in the fields of chemistry and biology. Essential References Not Discussed: There is a lack of survey and discussion of discrete molecular diffusion models. There are many graph-based discrete diffusion models; the authors should discuss their efficiency and performance and compare them in experiments if necessary. For example, the authors have discussed DiGress, which could be used for fragment-based generation as presented in the original paper, but it is not tested in Table 2. Other Strengths And Weaknesses: The contributions of this work are multifaceted, including the GenMol model, which refers to the checkpoint used in the experiments and is expected to function as a generalist model. Additionally, the work introduces the GenMol framework (e.g., Figure 3a), which appears to have the potential for generalization to other models. However, the authors use the same name for both the model and the framework, which at times makes the manuscript difficult to follow. It is unclear whether certain benefits discussed apply specifically to the model or to the framework. For instance, the authors present discrete diffusion as a major contribution (Lines 80–108), but it does not appear to introduce any new developments to the discrete diffusion model itself (as described in Section 4). Using discrete diffusion models for molecules is not a new idea either. Furthermore, does Section 5.1 focus on unconditional generation? If so, how can the proposed GenMol framework in Figure 3a be adapted to this problem? The goal of this work is to develop a generalist model, and accordingly, it has been tested on multiple tasks with results presented in various tables. However, the selection of baselines for each task may not be comprehensive.
For example, in de novo generation and fragment-based generation, only one baseline is included, and for lead optimization, only two baselines are considered. Expanding the comparison to include more relevant baselines would strengthen the evaluation. Other Comments Or Suggestions: NA Questions For Authors: 1. How is the molecular context defined in Lines 106–107? 2. Could the fragment remasking approach be adapted to other autoregressive models? 3. The molecular optimization performance heavily depends on frequent Oracle calls (e.g., 10,000 times). What would happen if the number of calls were significantly reduced to fewer than 10 or if inaccurate models were used as Oracles? 4. Do Sections 5.1 and 5.2 utilize the framework presented in Figure 3a? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate your comments that our contribution could aid drug discovery and GenMol is a generalist. We address your concerns below.

---

> One baseline in de novo and fragment-based generation.

We included JT-VAE (Jin et al., 2018) in Table 9, and have provided the results of DiGress in our *first response to 5s8Z*.

---

> No new developments in discrete diffusion. Graph discrete diffusion should be discussed.

**GenMol is the first discrete diffusion framework on small molecule sequences**, which is superior to graph discrete diffusion that suffers from low validity and efficiency (GenMol vs. DiGress above). We also showed that **confidence sampling achieves better quality and efficiency in molecular sequence generation**. Moreover, we proposed **MCG tailored for discrete sequence diffusion**.

---

> It is unclear how many diffusion timesteps are used.

GenMol uses confidence sampling with $N$ denoting the number of tokens to unmask at each step (line 323). GenMol with $N=1$ uses the same number of steps as AR models, and higher $N$ improves efficiency. In Table 1, GenMol with $N=3$ shows higher quality than the AR model with 2.5x faster sampling.

---

> Could the fragment remasking be adapted to AR models?

Yes, but in AR models it cannot be applied without relying on a specific generation order, leading to inferior performance (Table 5).

---

> Why are fragments not used as tokens?

Using fragments as tokens is challenging due to **the large vocabulary, limited generalizability, and the difficulty of tokenizing attachment points**. Applying BRICS to ZINC250k yields 39034 fragments, resulting in an imbalance with sequence lengths of only 3~5. It also does not allow the model to generate novel fragments. We have introduced **a simpler yet effective approach that generates tokens at the atom level and performs the remasking at the fragment level**.

---

> A reason for using SAFE instead of SMILES? Could the approach be adapted to SMILES generation?
**SAFE makes fragment remasking simpler and more effective**, and adapting the approach to SMILES is non-trivial. As tokens that belong to the same fragment are not contiguous in SMILES, SMILES models need to classify each token into fragments before every remasking and cannot guarantee the re-predicted tokens belong to the same fragment. On the other hand, SAFE allows GenMol to easily identify fragments in a sequence and generate new fragments. --- > Tokenization details are unclear. We adopt the SAFE tokenizer (Noutahi et al., 2024) (lines 725-726), which is a BPE tokenizer with the SMILES regular expression. It covers 1873 different atoms, bonds, and ions, and thus can be universally used for molecule-related tasks. --- > Do Sections 5.1 and 5.2 utilize the framework in Figure 3a? It is unclear how the iterations in Figure 3 contribute to the quality and efficiency of generation. No, Sections 5.1 and 5.2 use the process of Figure 2(c1-c5) (line 173, Figure 3(a) caption). The generic quality and efficiency stem from the discrete diffusion process, and the iterations in Figure 3(a) are multiple rounds of the process, not the process itself. --- > What if the number of calls is low or the oracle is incorrect? We followed Gao et al., 2022 in Section 5.3, where the 10k oracle calls were set for all baselines. Nevertheless, we observed that **GenMol consistently performed better in both low and high oracle call regimes**, so we hypothesize that GenMol would perform better with fewer calls or an incorrect oracle, and will include this in the revision. --- > Only two baselines in lead optimization. Missing values in Table 4. In Table 4, baseline selection and the use of the ‘-’ symbol to indicate failure (no successful leads generated) followed Wang et al., 2023 (lines 412-413). --- > The same name was used for the model and the framework. **GenMol is the framework that uses discrete diffusion in versatile ways for diverse drug discovery tasks**. 
GenMol adopts the sampling of Figure 2(c1-c5) for *de novo* and fragment-constrained generation and adds the fragment remasking of Figure 3(a) for goal-directed optimization.

---

> How is the molecular context defined?

See our *second response to 5s8Z*.

---

> The fragment vocabulary is not sufficiently discussed.

The training SAFEs are based on BRICS following Noutahi et al., 2024. During inference, we used two other rules (lines 234-236, 756-762): $R_{vocab}$, which cuts one non-ring single bond, to construct the vocabulary, and $R_{remask}$, which cuts all non-ring single bonds, to determine which fragments the remasking will operate on.

---

> Data statistics should be provided.

We used the dataset of Noutahi et al., 2024, and will include the statistics in the revision.

---

We hope our response could address your questions and clarify any confusion. If our response is satisfactory, we would like to kindly ask you to consider raising your score. Otherwise, we will be happy to further discuss and update the paper.
Summary: GenMol is a generalist molecular generative model that uses a masked discrete diffusion framework with a BERT-based architecture to generate SAFE molecular sequences, enabling efficient, non-autoregressive decoding and better sampling efficiency. It introduces fragment remasking to optimize molecules by selectively regenerating fragments, enhancing chemical space exploration, and employs molecular context guidance (MCG) to refine predictions using full molecular context. GenMol outperforms existing models across various drug discovery tasks, demonstrating its performance in molecular design. ## update after rebuttal I believe the authors have adequately addressed all of my concerns. I have also reviewed the points raised by the other reviewers and continue to find this work solid, novel, and impactful. There are no remaining concerns—minor or major—from my side, so I have adjusted my score accordingly. Claims And Evidence: All of the claims are supported by extensive empirical evaluations and experiments. The paper is overall convincing and presents extensive empirical results. Methods And Evaluation Criteria: Yes, it follows previous works' evaluation metrics and datasets to evaluate itself. Theoretical Claims: There is just a simple derivation of MCG which is simple and straight-forward. Thus, I have checked all of the proofs for theoretical claims. Experimental Designs Or Analyses: The experimental design in the paper is generally well-structured, covering a range of drug discovery tasks to validate GenMol’s capabilities. The use of a single checkpoint without task-specific finetuning strengthens the claim of model generalization. However, I would like to mention this aspect: **Fragment Remasking & MCG Ablations**: The ablation studies show that fragment remasking and molecular context guidance (MCG) improve performance, but it is unclear how these compare to alternative molecular generation strategies beyond SAFE-GPT. 
Additional baselines, such as reinforcement learning-based optimization, could provide a more comprehensive validation. Overall, the experimental design is sound, but a more rigorous comparison with additional baselines would further strengthen the findings. Supplementary Material: Yes, broadly all of the parts. It is really nice to have extensive results and tables there. Relation To Broader Scientific Literature: The key contributions of GenMol build upon and extend several existing ideas in molecular generative modeling, discrete diffusion, and drug discovery optimization. Here’s how they relate to the broader scientific literature: 1. **Discrete Diffusion for Molecular Generation**: GenMol leverages a **masked discrete diffusion model**, which aligns with prior work on discrete diffusion in natural language processing (Austin et al., 2021) and molecular modeling (Sahoo et al., 2024; Shi et al., 2024). This approach contrasts with continuous diffusion models (e.g., Hoogeboom et al., 2022), which have been more commonly used in generative chemistry. By adopting a **BERT-like bidirectional attention** mechanism, GenMol improves upon **autoregressive generative models** (e.g., Gomez-Bombarelli et al., 2018) by enabling parallel token prediction and increased efficiency. 2. **Fragment-Based Molecular Design**: The **fragment remasking strategy** extends previous work in fragment-based drug design, which has long been a key principle in medicinal chemistry (Congreve et al., 2003). While prior generative models often treated molecules as atomic graphs (Jin et al., 2018; You et al., 2018), GenMol aligns more closely with chemist intuition by using fragments as the primary unit of generation. This also improves **chemical space exploration**, addressing limitations in previous sequence-based molecular design models like SAFE-GPT (Noutahi et al., 2024). 3. 
**Molecular Context Guidance (MCG)**: The proposed MCG technique enhances discrete diffusion by incorporating context-dependent constraints, similar in spirit to **controlled generation in NLP** (Keskar et al., 2019) and **conditional molecular design** (Zhavoronkov et al., 2019). Unlike previous methods that relied on reinforcement learning (Olivecrona et al., 2017) or constrained optimization (Gao et al., 2022), GenMol integrates context-awareness **directly into the generative process**, enabling more targeted molecular design. 4. **Efficiency and Generalization Across Drug Discovery Tasks**: GenMol’s ability to perform **de novo generation, fragment-constrained generation, and goal-directed optimization** in a unified manner improves upon **task-specific models** such as GraphAF (Shi et al., 2020) and f-RAG (Lee et al., 2024a). Unlike previous approaches that required separate fine-tuning for different tasks, GenMol’s single checkpoint generalizes across multiple drug discovery scenarios, making it more adaptable to real-world applications. In summary, GenMol synthesizes advances in **discrete diffusion, bidirectional sequence modeling, fragment-based design, and guided molecular generation** to offer a more **efficient, generalizable, and chemically intuitive** approach to molecular generation, surpassing limitations of existing autoregressive and fine-tuned models in drug discovery. Essential References Not Discussed: Maybe the work can cite (Nie, S., Zhu, F., You, Z., Zhang, X., Ou, J., Hu, J., ... & Li, C. (2025). Large Language Diffusion Models. arXiv preprint arXiv:2502.09992.) as the most recent work on sequence generation with diffusion models which has amazing results and has caught attention of the community recently. Other Strengths And Weaknesses: One of the key strengths of this paper is its **fragment-based generation approach**, which aligns well with industry needs. 
In practical drug discovery, designing and synthesizing molecules from predefined building blocks is significantly more efficient than synthesizing entirely new structures from scratch. This makes GenMol's fragment remasking strategy particularly valuable, as it reflects real-world workflows and can accelerate the molecular design process. Additionally, the paper demonstrates strong **methodological innovation** by combining **masked discrete diffusion with bidirectional sequence modeling**, enabling non-autoregressive parallel decoding. This enhances both sampling efficiency and the quality of generated molecules, addressing limitations in prior autoregressive and graph-based models. The **unified generative framework** that generalizes across multiple drug discovery tasks without requiring task-specific fine-tuning further adds to its significance, as it simplifies deployment in practical settings. On the other hand, while the paper presents extensive empirical validation, its clarity could be improved in certain sections. Some methodological details, such as how molecular context guidance (MCG) is integrated into the diffusion process, could be more explicitly explained to enhance reproducibility. Furthermore, while GenMol shows superior performance over existing methods, an **ablation study on model scalability and robustness**—such as how it performs on structurally diverse datasets beyond ZINC and UniChem (e.g. Enamine)—would further strengthen the work. Overall, this paper makes an **original and significant contribution** by developing a **computationally efficient, chemically intuitive, and versatile generative model** for molecular design. Its alignment with both **theoretical advancements in generative modeling and practical drug discovery needs** makes it a valuable addition to the field. Other Comments Or Suggestions: Nothing in mind. 
Questions For Authors: - What would be the cost of training this model on a broader space like Enamine 1B dataset, or even bigger datasets to fully explore drug space by leveraging this models' strengths? - Is there any possibility of conditioning this model on a specific gene expression or the protein embedding of a specific target to guide the generation towards a specific biological target or not? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1:

Rebuttal: We sincerely appreciate your comments that all our claims are supported by extensive evaluations, that our proposed fragment-based generation approach aligns well with industry needs, and that our paper demonstrates strong methodological innovation. We address your questions below.

---

> Additional baselines, such as reinforcement learning-based optimization, could provide a more comprehensive validation. (Also related to Reviewer 9oFx’s comment: *In de novo generation and fragment-based generation, only one baseline is included.*)

We used the PMO benchmark (Gao et al., 2022) for the goal-directed hit generation task (Tables 3, 11-13), and **compared our GenMol with a total of 28 baselines**. These baselines span a broad class of molecular optimization methods, including reinforcement learning methods, genetic algorithm methods, and Bayesian optimization methods. The proposed GenMol achieves the best performance against all of these baselines and its ablated baselines (Table 5). Furthermore, here we compare GenMol with DiGress (Vignac et al., 2023), a graph discrete diffusion method, on the *de novo* and fragment-constrained generation tasks, where **GenMol significantly outperforms DiGress in both quality and sampling efficiency**.
**Table: *De novo* generation results.**

| Method | Validity (%) | Uniqueness (%) | Quality (%) | Diversity | Sampling time (s) |
| --- | --- | --- | --- | --- | --- |
| DiGress | 89.6 | 100.0 | 36.8 | 0.885 | 1241.9 |
| GenMol ($N=1,\tau=0.5,r=0.5$) | 100.0 | 99.7 | 84.6 | 0.818 | 21.1 |
| GenMol ($N=1,\tau=1.5,r=10$) | 95.6 | 98.3 | 39.7 | 0.911 | 20.9 |

**Table: Fragment-constrained generation results.**

| Method | Task | Validity (%) | Uniqueness (%) | Quality (%) | Diversity | Distance |
| --- | --- | --- | --- | --- | --- | --- |
| DiGress | Linker design (scaffold morphing) | 31.2 | 84.3 | 6.1 | 0.745 | 0.724 |
| | Motif extension | 21.8 | 94.5 | 4.2 | 0.818 | 0.794 |
| | Scaffold decoration | 29.3 | 91.0 | 9.1 | 0.793 | 0.785 |
| | Superstructure generation | 26.7 | 85.9 | 7.4 | 0.789 | 0.776 |
| GenMol | Linker design (scaffold morphing) | 100.0 | 83.7 | 21.9 | 0.547 | 0.563 |
| | Motif extension | 82.9 | 77.5 | 30.1 | 0.617 | 0.682 |
| | Scaffold decoration | 96.6 | 82.7 | 31.8 | 0.591 | 0.651 |
| | Superstructure generation | 97.5 | 83.6 | 34.8 | 0.599 | 0.762 |

---

> How molecular context guidance (MCG) is integrated into the diffusion process could be more explicitly explained. (Also related to Reviewer 9oFx’s comment: *How is the molecular context defined in Lines 106-107?*)

MCG is Autoguidance (Karras et al., 2024) specifically designed for the discrete diffusion of GenMol to fully utilize the given molecular context information. This is done by comparing two outputs from a single model, with good (i.e., less masked) and poor (i.e., more masked) input, respectively. Specifically, the good input $z_t$ is a given partially masked sequence, which is further masked by $100 \cdot \gamma$ % to yield the poor input $\tilde{z}_t$, and the two resulting logits are compared and calibrated as in Eq. (7). Intuitively, the logits with the poor input exaggerate errors in the logits with the intact input, and $w>1$ removes the errors from the good logits.
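To make the calibration concrete, the following is a minimal, hypothetical sketch (not the authors' implementation) of an autoguidance-style logit combination, where `good_logits` come from the partially masked input $z_t$ and `poor_logits` from the further-masked $\tilde{z}_t$; the extrapolation form and the helper `extra_mask` are assumptions for illustration:

```python
import random

# Hypothetical sketch of MCG-style logit calibration (assumed autoguidance
# form: extrapolate away from a deliberately degraded prediction).
def mcg_logits(good_logits, poor_logits, w=1.5):
    # w > 1 pushes the result away from the poor logits, amplifying the
    # corrections suggested by the good (less masked) input
    return [w * g + (1.0 - w) * p for g, p in zip(good_logits, poor_logits)]

def extra_mask(tokens, gamma, mask_id=0, seed=0):
    # degrade the input by masking an additional 100*gamma % of the
    # currently unmasked tokens, yielding the "poor" input
    rng = random.Random(seed)
    tokens = list(tokens)
    unmasked = [i for i, t in enumerate(tokens) if t != mask_id]
    for i in rng.sample(unmasked, round(gamma * len(unmasked))):
        tokens[i] = mask_id
    return tokens
```

With $w = 1$ the calibration reduces to the plain good logits; $w > 1$ performs the error-removing extrapolation described above.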
Following the use cases of *context* in LLM literature, *molecular context* in MCG denotes the given information in the form of the input sequence, i.e., given fragments in fragment-constrained generation and partially masked sequences during the fragment remasking in goal-directed generation. We apologize for the brief description due to space limitations and will clarify this in a future revision.

---

> What would be the cost of training this model on a broader space like the Enamine 1B dataset to fully explore drug space by leveraging this model's strengths?

The cost of training GenMol on a new SMILES-formatted dataset comes from two sources: (1) preprocessing SMILES -> SAFE, and (2) training with new SAFE strings. SMILES-to-SAFE conversion of the 249,455 ZINC250k molecules takes ~6 minutes. Training GenMol on 1B molecules with the same settings as in the paper would take ~60 hours.

---

> Is there any possibility of conditioning this model to guide the generation towards a specific biological target?

**GenMol can be easily extended to be conditioned on specific biochemical conditions** by adopting any off-the-shelf conditioning methods. For example, protein pocket embeddings obtained from a pretrained protein model can be incorporated into GenMol via cross-attention to generate corresponding ligands [A, B].

---

*References*

[A] Fu et al., Fragment and geometry aware tokenization of molecules for structure-based drug design using language models, ICLR, 2025.
[B] Wang et al., Token-Mol 1.0: tokenized drug design with large language model, arXiv, 2024.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for their efforts on preparing the response. I have adjusted my score accordingly. All the best,
Summary: The paper introduces a versatile molecular generative model based on discrete diffusion applied to Sequential Attachment-based Fragment Embedding (SAFE) representations. The method can address various drug discovery tasks uniformly (de novo generation, fragment-constrained generation, hit generation, and lead optimization) using a single backbone model. Following are the key contributions of the paper. 1. Presents a discrete diffusion model that works for sequence generation for diverse drug discovery scenarios. 2. Introduces fragment remasking. 3. Introduces a new guidance method called Molecular Context Guidance (MCG) for fragment-constrained generation. 4. Provides an ablation study that highlights the quantitative impact of each of the novel contributions stated above. Claims And Evidence: All the claims mentioned in the summary above are supported well by the experiments. Methods And Evaluation Criteria: Yes Theoretical Claims: There are no proofs as such. I checked the derivation in Appendix C. Experimental Designs Or Analyses: The experiments are carefully designed to support the main claims of the paper. Supplementary Material: I reviewed Appendix B, C, and D. Relation To Broader Scientific Literature: The paper introduces the non-autoregressive counterpart to SAFE-GPT (Noutahi et al., 2024). Both GenMol and SAFE-GPT use SAFE representations to solve various tasks in the drug discovery pipeline using a single model. The paper uses MDLM (Sahoo et al., 2024) and introduces a new guidance method, which is inspired by autoguidance (Karras et al., 2024). Essential References Not Discussed: No Other Strengths And Weaknesses: The paper can be improved by comparing the proposed guidance method to existing methods in the literature like Nisonoff et al. (2024). Other Comments Or Suggestions: 1.
$x_{\theta}^l(z_t, t)$ is already a probability distribution over the token vocabulary, and you could introduce the temperature in the softmax used in $x_{\theta}^l(z_t, t)$ itself. Is there a specific reason for applying the softmax(log(x)) transformation again? If there is, please state it. Questions For Authors: 1. Why does the diversity increase with N (for example in table 1)? I would expect the diversity to be low for higher N. Is the diversity increasing solely at the expense of validity? 2. I'm struggling to understand how MCG is implemented for goal-directed generation. All we can see from eq 7 is that the scores of the tokens are constructed with the scores when some more tokens are masked (governed by $\gamma$). Reframing the question, how is the goal value $y$ involved in eq 12 in Appendix C? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We sincerely thank you for your comments. We appreciate your positive comments that our paper introduces a versatile molecular generative model that can address various drug discovery tasks using a single backbone model, that all the claims are supported well by the experiments, and that the experiments are carefully designed. We address your concerns and questions below.

---

> The paper can be improved by comparing the proposed guidance method to existing methods in the literature like Nisonoff et al. (2024). In MCG (Eq. (7)), how is the goal value $y$ involved?

We appreciate the suggestion. MCG and Discrete Guidance (Nisonoff et al., 2024) are not directly comparable: MCG does not rely on target property conditions for guidance, while Discrete Guidance does. To use Discrete Guidance, one has to train a conditional generative model that is conditioned on the target properties, which cannot be easily transferred to other tasks. Instead, **MCG is specifically designed for the discrete diffusion of GenMol, which is property-unconditional and general-purpose**. For example, MCG can be used for fragment-constrained generation as well as goal-directed generation, while Discrete Guidance cannot. Therefore, the two methods have different objectives and can be used orthogonally. The score $y$ (Eq. (5)) is used for fragment vocabulary selection, which allows GenMol to effectively perform goal-directed generation without any target property-based diffusion guidance, and is not related to MCG.

---

> $x^l_\theta(z_t, t)$ is already a probability distribution over the token vocabulary, and you could introduce the temperature in the softmax used in $x^l_\theta(z_t, t)$ itself. Is there a specific reason for applying the softmax(log(x)) transformation again?
To avoid introducing additional notation, we used $\log(x^l_\theta(z_t, t))$ to denote the logits generated by the model, and in the actual implementation, we applied the temperature $\tau$ to the generated logits during inference (Eq. (4)). We will clarify this in the revised version.

---

> Why does the diversity increase with $N$ (in Table 1)? Is the diversity increasing solely at the expense of validity?

A larger $N$ means the model predicts more tokens at once at each time step, so on average the model is given fewer tokens to condition on when completing the sequence. The reduced conditional information results in more freedom in the generation, leading to an increase in diversity. On the other hand, it also puts more burden on the model capacity, which may lead to a drop in molecular validity (Table 1 and Figure 4).
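As a small illustration of temperature-scaled decoding in general (a generic sketch, not the authors' code), dividing the logits by $\tau$ before the softmax sharpens ($\tau < 1$) or flattens ($\tau > 1$) the distribution over the token vocabulary:

```python
import math

def temperature_softmax(logits, tau=1.0):
    # divide logits by the temperature, then normalize;
    # tau < 1 sharpens the distribution, tau > 1 flattens it
    scaled = [x / tau for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

With $\tau = 0.5$ the top token's probability rises relative to $\tau = 1.5$, matching the low-temperature/high-quality versus high-temperature/high-diversity trade-off reported in Table 1.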
Mastering Board Games by External and Internal Planning with Language Models
Accept (spotlight poster)
Summary:
- This paper focuses on game playing for board games with LLMs.
- It compares external search, where the model acts as a proposal function for a symbolic search algorithm, with internal search, where the model is trained on search trajectories to perform search itself.
- The paper claims three contributions:
  - Multi action-value (MAV) model, which can be used to score action-value pairs.
  - External search, where the MAV is used with MCTS to guide search.
  - Internal search, where the search procedure is used to generate trajectories that are then used to train a model to directly do search itself.
- Models are evaluated across different board games (Chess, variants of chess, Connect Four, and Hex).
- MAV is trained on text representations of the game which include the game being played and the input state/legal output space.
- MAV models are trained from scratch on a custom text representation.
- MAV is integrated into an MCTS solver for external search; MAV is used to predict state transitions and actions during the rollout phase.
- MAV is also trained with additional search information in the prompt, on search traces from symbolic solvers, to internalize search.
- MAV models are compared against numerous chess-playing baselines, achieving competitive Elo.
- MAV-MCTS consistently performs better than MAV alone, and larger models perform better.
- External MCTS improves with more simulations.
- The internal search MAV improves with increased search budget, scaling w.r.t. number of tokens.

## update after rebuttal

The rebuttal has addressed my remaining questions, and I will maintain my score of 5.

Claims And Evidence: The claims are generally supported by results:
- MAV models combined with symbolic search perform better than a variety of baselines across multiple games.
- MAV can be trained to internalize search.

Methods And Evaluation Criteria:
- The method is described clearly and Fig 1 is helpful in showing the input/output format.
- The evaluation criteria seem robust and the method is evaluated on multiple games.

Theoretical Claims: No theoretical claims made.

Experimental Designs Or Analyses: The experimental designs are sensible and the token scaling analysis supports the results well.

Supplementary Material: I checked Appendix A for timing results and Appendix E for related work.

Relation To Broader Scientific Literature: The paper is positioned well w.r.t. prior work on learning to search/game playing. The related work section is in the appendix but I think this decision makes sense as it is quite extensive.

Essential References Not Discussed: Paper covers essential references.

Other Strengths And Weaknesses:
- Strengths:
  - By including the state representation and action in the output space, MAV can act not only as a kind of Q value function but also as a world model and policy.
- Weaknesses:
  - It's not clear what the relative gains for Connect Four and Hex are, since there are no baselines there.
  - MAV training requires access to a large amount of training data; it's not clear how it would generalize to real-world domains where there is no simulator/game engine. This is mentioned in the limitations.
  - Values are discretized; it would be nice to see how sensitive the method is to the number of bins here.
  - The MCTS rollout requires a parser that can verify/parse illegal actions, which reduces hallucination. It's also not clear here how one would implement this outside of a game environment with manually-defined rules.

Other Comments Or Suggestions: N/A

Questions For Authors: The current method outputs a state, top-5 actions etc. as a kind of CoT output. One of the big benefits of CoT is the ability to average/marginalize across reasoning paths, e.g. with self-consistency. Have you explored something like self-consistency on the best action prediction -- is this the same as just running more simulations?

Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thank you for taking the time to carefully review our paper and for the positive feedback! We have not tried self-consistency since we currently use greedy decoding, which we found to improve the strength of the MAV models compared to sampling. However, sampling multiple outputs from the internal search model and using self-consistency is a very promising idea for further improving the model strength! One piece of evidence that this approach would work is the performance of MAV with mean scoring, which outperforms MAV with greedy decoding, and can be viewed as a form of weighted voting.
Summary: This paper enhances LLM planning capabilities in board games through two approaches: external search (model-guided MCTS without game engines) and internal search (in-context linearized search trees). Using a pre-trained Multi-Action-Value (MAV) model for Chess, Chess960, Connect Four, and Hex, both approaches significantly improve playing strength, with external search achieving Grandmaster-level chess performance using human-comparable search budgets. Claims And Evidence: The performance claims are well-supported by comprehensive empirical evidence showing increasing Elo ratings with larger search budgets. Tournament results against Stockfish at various strengths provide credible evidence for the Grandmaster-level performance claim, with appropriate calibration between internal and external Elo ratings. Methods And Evaluation Criteria: The methodology is appropriate and comprehensive. Theoretical Claims: The paper doesn't make substantial theoretical claims requiring formal proofs. Experimental Designs Or Analyses: The tournament methodology using random pairings is sound with sufficient sample sizes. Analysis of model capabilities on both in-distribution and out-of-distribution positions demonstrates robustness. The experiments show a clear relationship between computational resources and performance. Supplementary Material: I didn't review the supplementary material. Relation To Broader Scientific Literature: See above. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: The paper presents a unified approach that integrates world modeling, policy, and value prediction in a single model, streamlining what previously required separate components. Its implementation of state tracking without external game engines enables self-contained operation, representing a significant advancement in model independence. 
The methodology demonstrates successful generalization across multiple diverse board games, suggesting broad applicability of the core techniques. The achievement of Grandmaster-level chess performance with human-comparable search budgets marks an important milestone in LLM reasoning capabilities. The innovative linearization technique for search trees enables effective in-context planning, creating a pathway for self-contained reasoning in language models.

Weaknesses: The internal search performance remains inferior to external search, indicating room for further refinement. What's the bottleneck here? Meanwhile, the paper provides insufficient analysis of computational efficiency between the approaches, making it difficult to assess practical trade-offs for real-world applications.

Other Comments Or Suggestions: N/A

Questions For Authors:
1. How does the computational efficiency of internal vs. external search compare? At what point (if any) does internal search become more efficient?
2. Have you explored hybrid approaches combining both search methods?
3. How dependent is the approach on high-quality annotated game data? Do you have some intuitive explanations?
4. What are the primary failure modes of each approach?
5. A side question: what modifications would be needed to extend this to imperfect information games?

Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you for taking the time to carefully review our paper and for the positive feedback!

> The internal search performance remains inferior to external search

We evaluated internal search only up to breadth=4, depth=2, resulting in a search budget of 21 nodes, whereas external search was evaluated using a search budget of 100 to 2000 nodes. Moreover, the underlying search algorithms are different (minimax and MCTS, respectively). Therefore, the performance gains are not directly comparable. The main bottleneck for scaling up internal search to larger search budgets is scaling up the context length and the inference latency increase that this would entail. Nevertheless, even while being more challenging to scale up to large search budgets, we believe that our results provide interesting insights into the internal search capabilities of LLMs.

1. The current advantages of external search are that (i) it is (depending on the exact search method and its implementation) potentially more easily parallelizable, whereas the internal search constructs the tree sequentially, and requires some extra tokens to capture the tree structure and the decisions pertaining to its expansion, (ii) the number of FLOPS required for internal search grows quadratically with the number of positions evaluated due to the quadratic cost of attention in transformer models, whereas it grows linearly for external search. Yet, this strength may also (contextually) be a weakness given that running the external search requires numerous LLM calls, whereas the invocation of internal search requires a single call, which may be advantageous when/if bottlenecked by the query volume, and potential delays in query completion.
2. We haven’t explored hybrid approaches yet, but that is a really interesting suggestion!
A hybrid approach may be able to benefit from a degree of parallelism that is possible to achieve through the external controller, and yet expand sub-trees (rather than single moves) in different branches via internal search when that is potentially useful. It is an open question how best to combine the two, though hybrid approaches (of a slightly different nature) have been successfully utilized in chess engines previously, when it comes to different ways in which neural networks can be incorporated, given that certain positions may require either deep tactical calculations (big trees) or intricate strategic play (shallow trees, but harder/higher quality policies and evaluation).

3. Since the performance of the MAV model is capped by the performance of the game engine used to train it, the current approach depends on the ability to obtain or generate high-quality game-play data for the model to train on. There are many games where this is possible/easy to achieve. For other games, one can use techniques such as self-play as was done in AlphaZero.
4. For chess and chess960, the models are somewhat susceptible to drawing a winning game via the 50-move rule or 3-fold repetition. Enforcing these rules requires storing history (or additional variables), which MAV doesn’t do. This happens in particular with the internal search model, which wasn’t trained to predict the quickest win, while in the external search, it can be enforced. Speaking more broadly, these rules are imposed by the external governing bodies (such as international or national chess federations), and under some regulations, 75-move and 5-fold repetition are used. Given this ambiguity, it might be difficult for models trained in a supervised fashion to recognize the pattern even if history was enabled. Additionally, the MAV model is more likely to misevaluate a move, since it does not perform search.
5.
There is a straightforward (but nontrivial) way to extend these methods to imperfect information games using Information-State Monte Carlo Tree Search ([IS-MCTS, Cowling et al. 2012](https://eprints.whiterose.ac.uk/75048/1/CowlingPowleyWhitehouse2012.pdf)). The value functions would be with respect to ground states (where all the information is revealed), and MAV would be trained in the same way as chess. The model would need to be augmented with an additional generative output that could sample a ground state given the current observation (excluding the hidden information), encoding a distribution that is learnable from data. The search would then start by first sampling possible world states from the generator, and run simulations as is currently done, starting from each one, but then aggregate statistics over information states, similar to the application of IS-MCTS with generative networks in [Li et al. 2023](https://arxiv.org/abs/2302.00797).
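The scaling argument in point 1 can be illustrated with a toy calculation (illustrative only; the token counts are hypothetical, not from the paper): with quadratic-cost attention, evaluating n positions inside a single growing context costs on the order of n² token-operations, while n independent external calls over a fixed-length prompt cost on the order of n:

```python
def internal_cost(n_nodes, tokens_per_node=10):
    # all nodes share one growing context; each new token attends to the
    # entire context so far, so the total grows quadratically in n_nodes
    total, context = 0, 0
    for _ in range(n_nodes * tokens_per_node):
        context += 1
        total += context
    return total

def external_cost(n_nodes, tokens_per_call=10):
    # each node is a separate call over a fresh, fixed-length prompt,
    # so the total grows linearly in n_nodes
    per_call = sum(range(1, tokens_per_call + 1))
    return per_call * n_nodes
```

Under this toy model, doubling the number of evaluated positions roughly quadruples the internal-search cost but only doubles the external-search cost.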
Summary: The paper introduces a specialized Multi-Action Value (MAV) 2B model trained exclusively on game data from Chess, Chess960, Connect Four, and Hex. MAV is designed to predict legal moves, track game states, identify the top-k actions, and determine the resulting board state after executing the optimal action. The authors convincingly demonstrate that the MAV model generalizes effectively to previously unseen board configurations. Additionally, the paper explores two distinct planning methodologies applied to MAV: 1) External Search: The authors implement an external loop utilizing Monte Carlo Tree Search (MCTS), significantly enhancing MAV's performance. This integration achieves results competitive with current state-of-the-art chess engines. 2) Internal Search: The internal search approach leverages distillation from search trees. This technique also shows a consistent and smooth performance improvement as the number of test-time tokens increases. Claims And Evidence: The claims are largely supported by clear quantitative results. Methods And Evaluation Criteria: Methods: The paper proposes three approaches to enhance LLM planning capabilities. Evaluation: For Table 1, it would be beneficial for the authors to clarify the distinction between Internal Elo and External Elo. The current description suggests that External Elo is aligned with human performance, while a brief note that Internal Elo reflects performance among agents would enhance clarity. It would also be interesting to examine the external Elo ratings for games beyond chess to assess how training primarily on one game generalizes to others. Theoretical Claims: The discussion is mostly empirical, and while there is a theoretical underpinning (such as mapping centipawn evaluations to win probabilities), no rigorous proofs were provided or checked.
Experimental Designs Or Analyses: The experimental design is clear, effectively demonstrating the improvements contributed by each component and providing thorough comparisons with the existing engine. Additionally, it includes a detailed error analysis covering precision, recall, legality, and gameplay aspects. Supplementary Material: The supplementary material provides an in-depth explanation of the algorithm. Additionally, Appendix D demonstrates effective generalization to the chess game. Relation To Broader Scientific Literature: The paper builds upon and extends recent work on: 1) MCTS and AlphaZero/MuZero paradigms for game playing. 2) Internal planning methods such as Chain-of-Thought and Tree-of-Thought approaches. Essential References Not Discussed: The authors have clearly listed related works and the corresponding contributions. Other Strengths And Weaknesses: Strengths: - Innovative Framework: The paper offers a novel exploration by integrating external and internal search strategies within a single model framework. - Advanced planning: It convincingly demonstrates how LLMs can be enhanced to plan and reason through complex, sequential decision-making tasks. - Comprehensive Evaluation: The experimental evaluation is thorough, clearly showcasing the impact of each contribution. Minor Weakness: - The MAV model appears to lack essential language capabilities. It would be beneficial to see if the LLM could provide textual reasoning for its solutions or incorporate additional communication functionalities typical of large language models. - Providing detailed information on the training costs for pretraining the MAV model and executing the internal search distillation process would offer valuable insights into the resource requirements of this approach, particularly when compared to other methods that enhance reasoning and planning. Other Comments Or Suggestions: line 363: and terminal states,. 
-> remove the extra comma Questions For Authors: 1) Could you elaborate on the computational trade-offs between external and internal search methods, specifically regarding training cost and inference latency? 2) How do you calibrate Internal Elo ratings relative to External Elo ratings, given that the latter are anchored to the CCLR Blitz ratings? 3) How would the MAV model’s performance be affected if it were limited to predicting only the best move? Would it still function effectively as a chess world model? If not, which aspects of its chess world modeling capabilities would be compromised, and to what extent? 4) Do you believe that fine-tuning a language model (already pre-trained on other tasks) on chess or other board game data could enhance its overall reasoning capabilities? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to carefully review our paper and for the positive feedback! 1. Regarding training cost, we used more than an order of magnitude more tokens to train the model on MAV data compared to the tokens used for fine-tuning the MAV model on internal search traces. We will clarify this in the text. Regarding inference, the main computational trade-off is that external search can be easily parallelized, whereas internal search is sequential. However, internal search could be optimized via techniques such as speculative decoding. 2. To estimate the external Elo of our agents, we use a 1D linear interpolation (specifically scipy.interpolate.interp1d) to map from internal Elo to external Elo given the known [CCLR Blitz Elos](https://github.com/official-stockfish/Stockfish/commit/a08b8d4) of the common instances of Stockfish in both pools. 3. Our preliminary results suggest that limiting MAV to predict only the best move at inference time doesn’t significantly affect its strength. However, training the model to evaluate multiple moves seems to be critical for strength in light of the results from [Ruoss et al. 2024](https://proceedings.neurips.cc/paper_files/paper/2024/file/78f0db30c39c850de728c769f42fc903-Paper-Conference.pdf) and for being able to use the model for external and internal search. 4. Regarding fine-tuning an LLM (already pre-trained on other tasks) on board game data to enhance its overall reasoning capabilities: This is a very interesting question and sounds like a plausible hypothesis. However, the volume of data required to make the MAV models play at a really high level likely exceeds the volumes traditionally used in fine-tuning stages by a fair margin. So, if the goal is to retain strong playing strength in such models, this would impose some additional restrictions. Nevertheless, recent evidence from other studies, e.g., [Muennighoff et al. 
2025, s1: Simple test-time scaling](https://arxiv.org/abs/2501.19393), indicates that it may be possible to unlock the reasoning capabilities from pre-training by a fairly small amount of data, opening up new possibilities. Whether it is possible to do this in this case (by e.g. pre-training on some subset of games and then generalizing to a broader set through a smaller number of examples, or broader capabilities beyond board games) is an interesting question that would hopefully be explored in more depth in future work.
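As a sketch of the interpolation step described in point 2 above: the mapping can be reproduced with 1-D linear interpolation. The anchor Elo pairs below are made-up placeholders, not the actual CCLR Blitz values, and `np.interp` is used here as a drop-in for `scipy.interpolate.interp1d` with its default (linear) settings.

```python
import numpy as np

# Hypothetical anchor pairs: internal Elo vs. known external (CCLR Blitz) Elo
# of Stockfish instances present in both rating pools. Placeholder values only.
internal_anchors = np.array([1200.0, 1800.0, 2400.0, 3000.0])
external_anchors = np.array([1500.0, 2000.0, 2600.0, 3200.0])

def internal_to_external(internal_elo):
    """1-D linear interpolation from the internal to the external Elo scale."""
    return float(np.interp(internal_elo, internal_anchors, external_anchors))

print(internal_to_external(2100.0))  # -> 2300.0, midway between two anchors
```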
Learning dynamics in linear recurrent neural networks
Accept (oral)
Summary: The authors study the learning dynamics of linear RNNs. This is similar to the method in Saxe et al 2014, but extended to the recurrent setting. The extension leads to an energy function that has a sum of recurrent terms, and thus there is an interplay between recurrent and feedforward modes. The authors show the implication of the learning dynamics on what can and can’t be learned, and on the relative importance of late and early components of the task. ## After rebuttal After reading all the rebuttals and discussion with all reviewers, I am keeping my score. Claims And Evidence: The claims are supported by both analytic derivations and numerical experiments. Methods And Evaluation Criteria: There are no benchmarks, but none are needed. The simulations performed are adequate for the problem at hand. Theoretical Claims: I read all proofs, but did not verify every step of the math. The main derivation is similar to that of Saxe 2014. It would be useful to stress why the cross-term does not appear in this case. Is this because of the aligned assumption? Experimental Designs Or Analyses: No issues Supplementary Material: I read all sections. Relation To Broader Scientific Literature: As explained in the paper, this work is related to learning dynamics in feedforward networks and to the evolution of kernels with learning. The analysis of linear RNN with relation to data dynamics is novel. Essential References Not Discussed: The topic of lazy vs rich in recurrent networks is discussed in this paper: Schuessler, Friedrich, Francesca Mastrogiuseppe, Srdjan Ostojic, and Omri Barak. “Aligned and Oblique Dynamics in Recurrent Neural Networks.” Edited by Tatyana O Sharpee and Michael J Frank. eLife 13 (November 27, 2024): RP93060. https://doi.org/10.7554/eLife.93060. Other Strengths And Weaknesses: The extension of Saxe 2014 to RNNs provides new insights and explains phenomena that are unique to this setting. 
Other Comments Or Suggestions: Line 298 “see” twice Line 815 “more” appears twice. Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. In addition to answering the question about our main derivation and related work, we would like to point the reviewer to the list of new developments we have made since the initial submission, available in our rebuttal to Reviewer CWVm ([direct link here](https://openreview.net/forum?id=KGOcrIWYnx&noteId=RuV35U4JcB)). These include further mathematical and simulation results, which we hope will make our theory more rigorous and general. > The main derivation is similar to that of Saxe 2014. It would be useful to stress why the cross-term does not appear in this case. Is this because of the aligned assumption? Yes, the cross-term does not appear both because of the aligned assumption and because of the assumption that left and right singular vectors are constant across time, allowing us to diagonalize the weights such that each singular value dimension becomes decoupled. This can be thought of as rotating the weights so that they are aligned with the principal directions of the data at initialization. We note that other work (Braun et al., 2022) has alleviated the aligned assumption in feedforward networks but the derivation is quite involved and challenging to apply to recurrent networks as it is restricted to two-layer networks. In general, the aligned assumption appears to be reasonable when initializing a network with small random weights and has been shown to occur early in training in such cases (Atanasov et al., 2021). We will make sure to clarify this in-text. > The topic of lazy vs rich in recurrent networks is discussed in this paper: Schuessler, Friedrich, Francesca Mastrogiuseppe, Srdjan Ostojic, and Omri Barak. “Aligned and Oblique Dynamics in Recurrent Neural Networks.” Edited by Tatyana O Sharpee and Michael J Frank. eLife 13 (November 27, 2024): RP93060. https://doi.org/10.7554/eLife.93060. We thank the reviewer for pointing us to this interesting and important reference. 
We are now running additional rich and lazy learning experiments varying the scale of input-output connectivity modes in LRNNs with rotational dynamics to see whether similar phenomena might occur in the linear case. > Line 298 “see” twice Line 815 “more” appears twice. We thank the reviewer for pointing out the typo. We have fixed all typos in the new version of the manuscript.
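For readers less familiar with the decoupled dynamics referenced in this exchange: under the aligned assumption, each singular-value mode of a two-layer linear network evolves independently under Saxe et al.-style gradient flow, $\dot a = b(s - ab)$, $\dot b = a(s - ab)$. A minimal Euler-integration sketch with arbitrary toy values (not the paper's code) shows that modes with larger singular values are learned first:

```python
# Decoupled single-mode gradient flow for a two-layer linear network
# (Saxe et al.-style): da/dt = b (s - a b), db/dt = a (s - a b).
# Toy Euler integration with arbitrary values; not the paper's code.

def learn_time(s, a0=0.01, b0=0.01, lr=0.01, tol=0.01, max_steps=1_000_000):
    """Number of steps until the mode strength a*b is within tol of s."""
    a, b = a0, b0
    for step in range(max_steps):
        err = s - a * b
        if abs(err) < tol:
            return step
        a, b = a + lr * b * err, b + lr * a * err
    return max_steps

# Larger singular values are learned faster (sigmoidal trajectories).
print(learn_time(s=1.0), learn_time(s=3.0))
```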
Summary: The authors study the learning dynamics of linear recurrent neural networks (LRNNs). Using the approach of Saxe et al. (ICLR '14) to study deep linear feed-forward neural networks, the authors develop a similar theory for LRNNs. Under some assumptions on the weight matrices, the LRNN dynamics decouple into a set of uncorrelated modes that evolve independently. The authors find that LRNNs learn modes with larger and later singular values first, akin to the faster learning of larger singular values by linear feed-forward networks (sec. 3.2). They consider a task generated by a matching linear RNN and are able to analyse the stability of training (i.e. the problem of vanishing / exploding gradients) and of extrapolation in this setting. Since their dynamical equations decompose into an equation for the recurrent mode, and for the input-output mode, they are also able to identify the emergence of low-rank structures in the weights (sec. 3.4). They also find that the "richness" of the learning dynamics, as measured by the distance of the learnt network to its NTK at initialisation, increases as recurrent computations become more important. A numerical experiment with a sensory-integration task where the assumptions are not exactly fulfilled shows good agreement with the theory regardless. ## After the discussion I maintain my score. I want to note that while I am aware of some of the recent literature on the theory of RNNs, I am much more familiar with the literature on the learning dynamics in feed-forward neural networks, so there might be some related literature I am missing. Claims And Evidence: The authors claim to introduce a new mathematical description of the learning dynamics of RNNs. While the assumptions are somewhat restrictive (joint diagonalisability), they are stated clearly by the authors (kudos for that!) and they mirror the assumptions made by Saxe et al. and are therefore likely reasonable to make progress on this problem. 
In the feedforward case, the theory based on these assumptions has had quite an impact on subsequent research, so I think this is a good contribution (and long overdue!) to be done for linear RNNs, too. The wealth of phenomena that they analyse further suggests that their solution of linear RNNs is a useful toy model for understanding recurrent neural networks. Methods And Evaluation Criteria: This is a theory paper, and its methods are appropriate. Theoretical Claims: I looked at the derivation of the dynamical equations, and it seemed all good to me. Assumptions for this analysis are clearly spelled out at the top of page 3. Experimental Designs Or Analyses: Not applicable. Supplementary Material: No Relation To Broader Scientific Literature: The authors do seem to acknowledge related literature appropriately, both in the introduction and the discussion at the end. However, while I am aware of some of the recent literature on the theory of RNNs, I am much more familiar with the literature on the learning dynamics in feed-forward neural networks, so there might be some related literature I am missing. Essential References Not Discussed: None that I know of Other Strengths And Weaknesses: This is a nice paper that provides a clean theoretical analysis of linear recurrent neural networks, which remain an important tool to study sequential data, and remain the most important neural network model for theoretical neuroscience. Given the huge impact of similar work on linear feed-forward neural networks, I think this framework will prove a valuable tool for theory, too, and I think it should be accepted at ICML. Other Comments Or Suggestions: See above Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and the encouraging words. In addition to one clarification about this review below, we would like to point the reviewer to the list of new developments we have made since the initial submission, available in our rebuttal to Reviewer CWVm ([direct link here](https://openreview.net/forum?id=KGOcrIWYnx&noteId=RuV35U4JcB)). These include further mathematical and simulation results, which we hope will make our theory more rigorous and general. > The authors claim to introduce a new mathematical description of the learning dynamics of RNNs. While the assumptions are somewhat restrictive (joint diagonalisability), they are stated clearly by the authors (kudos for that!) and they mirror the assumptions made by Saxe et al. and are therefore likely reasonable to make progress on this problem. In the feedforward case, the theory based on these assumptions has had quite an impact on subsequent research, so I think this is a good contribution (and long overdue!) to be done for linear RNNs, too. The wealth of phenomena that they analyse further suggests that their solution of linear RNNs is a useful toy model for understanding recurrent neural networks. We completely agree with the reviewer that this is the beginning of a longer research direction. While the assumption of joint diagonalizability has been commonly used in the learning dynamics literature (Saxe et al., 2014; 2018) and there is evidence that the parameters of the network align in this way at the beginning of training (Atanasov et al., 2021) (see also our response to Reviewer q5bu), we agree that this assumption may be restrictive in certain cases. Luckily, even if our assumptions are somewhat restrictive, our empirical results seem to be a bit more general, such as in the case of our sensory integration tasks.
Summary: While there has been substantial progress in understanding the learning dynamics of feedforward networks, there is relatively less work on studying it in the context of recurrent networks, especially when considering the task dynamics as well. In this paper, the authors analyze the learning dynamics of a linear RNN and derive an analytical solution with certain assumptions on the data and model initialization, which they motivate well. Using this solution, they show that task dynamics impact the stability of the solution and the network's ability to extrapolate in time. Specifically, they divide the task into time-independent and time-varying components and show how these different aspects influence the solution. They also show that there is a trade-off between computation being more recurrent or feedforward, and that there is a phase transition between these two modes in terms of the task dynamics. This specifically emerges in the case where the data is not perfectly learnable, such as when the last time step, for which the loss is computed, does not follow the task dynamics of the rest of the data trajectory. Moreover, the trade-off between these types of computations leads to low-rank solutions that are known to emerge during RNN training. This is due to an effective regularization term that pops out of the energy function. The authors also derive a neural tangent kernel (NTK) for finite-width linear RNNs. Using this NTK, they show that recurrence leads to rich learning. Finally, they use their theory to explain how linear RNNs learn to perform sensory integration tasks, which is a common paradigm in computational neuroscience. Claims And Evidence: Overall, the paper did a good job supporting its claims with a thorough and extensive evaluation. One point of confusion for me is whether some of these results hinge on the fact that we only compute the loss on the final time step. 
This seems especially relevant for the phase transition between feedforward and recurrent computation. This does not make or break the study, but is an important point to address in the text when trying to understand these results. Methods And Evaluation Criteria: The analyses performed on the linear RNN and assumptions on task dynamics make sense given the scope of the study. It is also interesting to see how well the theory matches simulations in the sensory integration task, which is of broader relevance to the computational neuroscience community. It was especially interesting to see how the choice of objective (mean- vs. sum-integration) impacted extrapolation performance. Theoretical Claims: I skimmed the proofs in Appendix A of the gradient flow equations and energy functions and Appendix J which extends their approach to multiple outputs over time. Nothing popped out to me as incorrect. Experimental Designs Or Analyses: I looked over the experimental design of the main results and found them to be sound. Supplementary Material: I skimmed the proofs in Appendix A and Appendix J. Relation To Broader Scientific Literature: The authors do a great job situating their work in the broader related literature. Most relevant to their work is the body of literature on analyzing deep linear networks and neural tangent kernels, for which they cite important papers. Prior work which has looked at linear RNNs has not accounted for the impact of the task dynamics on learning dynamics and the solutions. The results in this paper also have bearing on the emergence of low-rank connectivity in RNNs, which is a common framework in computational neuroscience to account for the observation that neural activity tends to be low-dimensional in many brain regions. This work also bears on extrapolation to longer sequences, which is of primary concern in RNNs in general. 
Essential References Not Discussed: A relevant paper that talks about rich and lazy learning in nonlinear RNNs that may be worth citing includes: - Payeur et al., "Neural Manifolds and Learning Regimes in Neural-Interface Tasks," bioRxiv 2023. Other Strengths And Weaknesses: The paper is well written and does a good job supporting its claims. It is a novel approach to studying learning dynamics in linear RNNs, which is relevant to both deep learning theory and computational neuroscience. Other Comments Or Suggestions: I think the results in Section 3.4 could be better explained, as it took me a while to understand what was going on. Figure 4, in particular, is hard to parse, and I need more hand-holding when going through it. Questions For Authors: Does this trade-off between feed-forward and recurrent computation partially depend on the fact that the last time step is all that's included in your objective function? How would these results change if intermediate time steps are also penalized? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and the clear summary of our work. In addition to addressing the limitations identified in this review, we would like to point the reviewer to the list of new developments we have made since the initial submission, available in our rebuttal to Reviewer CWVm ([direct link here](https://openreview.net/forum?id=KGOcrIWYnx&noteId=RuV35U4JcB)). These include further mathematical and simulation results, which we hope will make our theory more rigorous and general. > Overall, the paper did a good job supporting its claims with a thorough and extensive evaluation. One point of confusion for me is whether some of these results hinge on the fact that we only compute the loss on the final time step. This seems especially relevant for the phase transition between feedforward and recurrent computation. This does not make or break the study, but is an important point to address in the text when trying to understand these results. We thank the reviewer for the positive feedback and for raising this important and nuanced issue. In the main text and results, we consider the case where only the output at the last timestep enters into the loss for simplicity and interpretability. We fully agree with the reviewer that it is important to consider more general cases and how our results generalize. In the supplementary material, we have made an effort to generalize our derivation of the energy function and gradient flow equations to the case where the loss is computed over the output at each timestep. Crucially, we were able to derive the functional form of the energy function for this (multi-output) case, and it only differs from the original (single-output) case by an additional summation over the singular values of the data correlations for different output timesteps. 
An intuitive interpretation of this is that the multi-output energy function contains a summation of the single-output energy function for different outputs. Although this can result in different solutions depending on the task dynamics as a result of optimizing for many different outputs, we would expect several of our main results to hold if task dynamics are consistent across different outputs (e.g., in the case of a teacher-student setup). We are currently working to generalize some of our other results to this setting, including our derivation of the NTK and variation of feedforward and recurrent computation (see response below). However, it is possible that there may be additional phenomena to account for when considering the relationship between output dynamics, which will be an important direction for future work. > A relevant paper that talks about rich and lazy learning in nonlinear RNNs that may be worth citing includes: > Payeur et al., "Neural Manifolds and Learning Regimes in Neural-Interface Tasks," bioRxiv 2023. We thank the reviewer for pointing us to this relevant reference and we will make sure to include it in our discussion of related work. > I think the results in Section 3.4 could be better explained, as it took me a while to understand what was going on. Figure 4, in particular, is hard to parse, and I need more hand-holding when going through it. We thank the reviewer for the valuable feedback and we apologize for the lack of clarity. We will revise Section 3.4 and modify Figure 4 to make the results easier to follow. > Does this trade-off between feed-forward and recurrent computation partially depend on the fact that the last time step is all that's included in your objective function? How would these results change if intermediate time steps are also penalized? We thank the reviewer for the insightful question. 
Following your question, we’ve now simulated this in the Dirac delta task and observed that pruning of connectivity modes still exists based on the magnitudes of $s_1$ and $s_T$, similar to the single-output case, although we have not yet proved mathematically that it’s a proper phase transition. We will make sure to include this and more analyses for the case where the loss includes outputs over multiple timesteps in the manuscript.
Summary: This paper analyzes the learning dynamics encountered when training a linear recurrent neural network (i.e., a linear time invariant system) using gradient descent. The paper derives a reduced form of the learning dynamics, and connects the stability of the model to the task being trained on. Claims And Evidence: The paper is reasonably clear mathematically, although I found it rather difficult to follow what was being said much of the time in the main text. Methods And Evaluation Criteria: Yes. Theoretical Claims: I went through the derivation of equations 5-8 and I did not find any issues. Experimental Designs Or Analyses: Yes, the experiments corresponding to Figs 4 and 5. No issues encountered. Supplementary Material: The appendix containing the proofs for equations 5-8. Relation To Broader Scientific Literature: The paper is of interest to neuroscience researchers, who have long been using recurrent neural networks as models of the brain. The paper is also of interest to people in deep learning working on SSMs, which are essentially linear (either time-invariant or time-varying) dynamical systems. Essential References Not Discussed: This paper seems quite close in approach and spirit: https://arxiv.org/pdf/2407.07279 Also, there does not appear to be any discussion of the recent popular work on State Space Models (SSMs), which are LTI systems: https://arxiv.org/abs/2008.07669 Other Strengths And Weaknesses: I think the paper would benefit from much clearer writing. There are too many distracting inline equations and symbols and the results are written in what is (in my opinion) an overly technical way. Other Comments Or Suggestions: N/A Questions For Authors: What separates your approach from that of Saxe (2014, 2018)? Is it just the depth of the network you study? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the positive feedback on the relevance and potential of the work for the neuroscience and machine learning community. These comments will enable substantial improvement to the manuscript. In addition to addressing the limitations below, we would like to highlight additional results and developments we have made to the paper since the time of initial submission, which we hope will substantially improve our paper by making our theory more rigorous and general. In particular, our new results cover: * **Learnability**: We prove that task dynamics that are exponential-in-time (i.e., constant, exponential, and inverse-exponential) are the only task dynamics with 0-loss solutions in our model. * **Phase transition**: Using Landau theory, we show analytically that for Dirac delta task dynamics, there is a (first-order for $T>3$, second-order for $T=3$) phase transition, solely dependent on the ratio of the first and last singular value. We verify these results in simulation. * **Experimental results**: We extend our sensory-integration tasks to also illustrate how early-importance task dynamics lead to instability. * **Generalisation**: We perform additional experiments to show that our feature-learning result generalizes to RNNs learning tasks with rotational dynamics (non-constant singular vectors through time). * **Learning dynamics**: We extend our solution to the recurrent connectivity modes to an analytical approximation using the Faa di Bruno formula. We also thank the reviewer for identifying the limitations of our paper, which we address below: > I think the paper would benefit from much clearer writing. There are too many distracting inline equations and symbols and the results are written in what is (in my opinion) an overly technical way We thank the reviewer for the valuable feedback and apologize for the lack of clarity. 
This work was written as a theory paper, and as such we have tried to condense the most critical equations into the main text and left the remainder of the derivations to the appendix. However, we agree that theory is more useful when it is clear and can be widely understood. To enhance clarity, we will revise the writing to be clearer by describing concepts in simple terms and reducing the overuse of symbols in-text where possible. > The paper is of interest to neuroscience researchers, who have long been using recurrent neural networks as models of the brain. The paper is also of interest to people in deep learning working on SSMs, which are essentially linear (either time-invariant or time-varying) dynamical systems. This paper seems quite close in approach and spirit: https://arxiv.org/pdf/2407.07279 > > Also, there does not appear to be any discussion of the recent popular work on State Space Models (SSMs), which are LTI systems: https://arxiv.org/abs/2008.07669 We thank the reviewer for pointing us to these important references, which are certainly related to our work (especially Smekal 2024). We did not discuss SSMs aside from citing a few references, but they are certainly very relevant to the study of RNNs and an interesting and important extension for future work. We will make sure to include the citations and more discussion around this topic, including the new results provided by our framework over and above those in these references (such as the results on stability and extrapolation, phase transition, rich and lazy learning, and validation in a sensory task). > What separates your approach from that of Saxe (2014, 2018)? Is it just the depth of the network you study? Because the RNN receives sequential input that is related to the output in a time-dependent way, the energy function includes a summation over time which includes time-dependent singular values. In contrast, in Saxe et al. 
(2014, 2018), there is no time-dependence in the data (or sequence) and thus only a single set of singular values. More explicitly, two aspects where the difference between our work and Saxe et al. (2014, 2018) can be seen clearly are in the data specification and in the energy function. Regarding the data, Saxe et al. (2014, 2018) considers the case where the data correlation has N singular values (one for each dimension), whereas we consider T data correlations, yielding T x N singular values in total. This helps us draw conclusions that are unique to the RNN setup (like the fact that later singular values are learned first), which would have been impossible to obtain by just extending the depth of Saxe et al.’s model. Regarding the energy function, Saxe et al. (2014, 2018) have an energy function given by $E=(s-ca)^2$, whereas our energy function is given by $E=\sum_{i=1}^T (s_i - cb^{T-i}a)^2$. The sum and the power of $b^{T-i}$ also enable some RNN-specific conclusions (such as the phase transition governing recurrent and feedforward computations), which again would not be possible to study with deep feedforward networks.
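As a minimal numerical illustration of the two energy functions quoted above (with placeholder values for $s_i$, $a$, $b$, $c$, not the paper's experiments), note how an exponential-in-time task $s_i = b^{T-i}$ is driven to zero energy once $ca = 1$:

```python
# Feedforward (Saxe et al.) energy for one mode: E = (s - c*a)**2
def energy_ff(s, c, a):
    return (s - c * a) ** 2

# Recurrent energy for one mode, summed over T timesteps:
# E = sum_i (s_i - c * b**(T - i) * a)**2, for i = 1..T
def energy_rnn(s_seq, c, a, b):
    T = len(s_seq)
    return sum((s_i - c * b ** (T - i) * a) ** 2
               for i, s_i in enumerate(s_seq, start=1))

# Placeholder values: an exponential-in-time task (s_i = b**(T-i), here
# b = 0.5, T = 4) reaches zero energy at c*a = 1.
s_seq = [0.5 ** (4 - i) for i in range(1, 5)]
print(energy_rnn(s_seq, c=1.0, a=1.0, b=0.5))  # -> 0.0
print(energy_ff(s=1.0, c=1.0, a=0.5))          # -> 0.25
```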
XAttention: Block Sparse Attention with Antidiagonal Scoring
Accept (poster)
Summary: This paper introduces a plug-and-play attention sparsity method with minimal additional computational overhead. The proposed approach uses the sum of antidiagonal values in the attention matrix as a proxy to determine block importance, enabling block selection to reduce the computational density of attention mechanisms. Extensive experiments demonstrate that the method significantly enhances attention sparsity and accelerates attention computation, all while maintaining the model's performance. ## update after rebuttal While the rebuttal has addressed my main concerns, I agree with reviewer 5Q7x that the technical mechanism appears incremental. So I keep my score (3). Claims And Evidence: Yes Methods And Evaluation Criteria: There are several issues: 1. The proposed method relies on antidiagonal value-based block importance estimation, and the paper claims that the antidiagonal selection method is effective due to its advantages in Information Preservation and Pattern Detection. However, these advantages are not unique to antidiagonal values, as the main diagonal can also preserve information and detect patterns. Further theoretical or empirical evidence is required to clarify the rationale and necessity behind the antidiagonal selection strategy. 2. The method relies on a precomputed attention matrix to predict block importance, but the paper does not clearly explain when and how this step is performed. 3. How the Minimum Threshold Prediction works is not that clear. The paper should explicitly explain when and how the threshold is adjusted, especially during the prefilling process. Theoretical Claims: There is no theoretical claim. Experimental Designs Or Analyses: There are several issues: 1. There is a lack of sufficient justifications for why the selected baselines are the most appropriate for evaluating the proposed method. 2. 
The experiments only test a narrow range of stride values, and the paper does not provide a clear strategy for determining stride. This may limit the generalizability of the method, as stride selection could significantly impact performance. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: Different from existing works that rely on pooling methods, the paper proposes an antidiagonal selection strategy to achieve attention sparsity, which introduces less additional computational overhead. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper addresses an important problem and shows promise, but the motivation and justification for the proposed method are currently unclear. Other Comments Or Suggestions: None Questions For Authors: 1. What are the unique motivations for using antidiagonal values to estimate block importance? 2. If the sparsity level is predefined, how can the proposed method achieve a predefined sparsity? Code Of Conduct: Affirmed. Overall Recommendation: 3
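The antidiagonal block-scoring idea this review summarizes can be illustrated with a toy sketch. This is not the paper's implementation: it materializes a full attention matrix for clarity, whereas the actual method computes only the strided antidiagonal entries, and the block size, stride, and threshold values below are illustrative assumptions.

```python
import numpy as np

def antidiagonal_block_scores(attn, block=4, stride=2):
    """Score each (block x block) tile of a toy attention matrix by summing
    entries on antidiagonals sampled every `stride` positions. Simplified:
    the real method never materializes the full attention matrix."""
    nb = attn.shape[0] // block
    scores = np.zeros((nb, nb))
    for bi in range(nb):
        for bj in range(nb):
            tile = attn[bi * block:(bi + 1) * block, bj * block:(bj + 1) * block]
            for i in range(block):
                for j in range(block):
                    if (i + j) % stride == 0:  # entry lies on a sampled antidiagonal
                        scores[bi, bj] += tile[i, j]
    return scores

def select_blocks(scores, threshold=0.9):
    """Keep the highest-scoring blocks until their normalized score mass
    reaches `threshold` (mimicking threshold-based block selection)."""
    flat = scores.flatten()
    keep = np.zeros_like(flat, dtype=bool)
    acc, total = 0.0, flat.sum()
    for idx in np.argsort(flat)[::-1]:
        keep[idx] = True
        acc += flat[idx] / total
        if acc >= threshold:
            break
    return keep.reshape(scores.shape)

rng = np.random.default_rng(0)
toy_attn = np.abs(rng.standard_normal((16, 16)))
block_scores = antidiagonal_block_scores(toy_attn)
mask = select_blocks(block_scores, threshold=0.9)
print(f"kept {mask.sum()}/{mask.size} blocks")
```

Only the blocks flagged in `mask` would then receive full attention computation; the threshold trades sparsity against accuracy, which is the trade-off the review asks the authors to justify.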
Rebuttal 1: Rebuttal: ### 1. Antidiagonal Selection: Performance and Insights Antidiagonal selection offers significant advantages over other patterns: - Retains **all token information** while simultaneously detecting both **vertical and slash patterns** critical in LLM prefill - Diagonal patterns **miss** slash patterns with probability $\frac{\text{stride} - 1}{\text{stride}}$ - Other patterns (horizontal lines, pooling methods) lose token-level information - Empirically validated superiority: outperforms diagonal patterns (Table 6) and pooling-based methods used by FlexPrefill ### 2. Precomputation Process Precomputation occurs before computing the full attention for $Q\times K^T$. Based on block selection results, we determine which blocks require full attention computation. The process is detailed in Algorithm 1. ### 3. Minimum Threshold Prediction Minimum Threshold Prediction is an **optional algorithm** which finds the optimal threshold for each head, further optimizing efficiency and accuracy. This process: - Is conducted **offline** (not during prefill) - Does not increase runtime computational complexity - Profiles configurations in advance, determining minimum thresholds per head ### 4. Baseline Selection Justification Our selected baselines represent SOTA methods for sparse prefill attention. Previous approaches: - Have high precomputation complexity and overhead (Figure 5) - Limited applicability (FlexPrefill/MInference don't support video generation or chunk prefill) - Require additional fine-tuning (SeerAttention) ### 5. 
Stride Selection Strategy Stride is indeed critical for balancing efficiency and accuracy: **Different stride values (tested on RULER with Llama3.1 8B):** | Stride | 64 | 32 | 16 | 8 | 4 | 2 | 1 | | -------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- | | Overhead | 1.97% | 3.12% | 4.81% | 7.54% | 14.9% | 28.9% | 57.4% | | Accuracy | 81.21 | 84.23 | 88.04 | 88.42 | 88.38 | 88.39 | 88.27 | These results show stride = 8 maintains accuracy comparable to stride = 4 with less overhead, and performance remains effective for stride ≤ 16. According to our complexity analysis: - Precomputation: $\frac{2n^2d}{\text{stride}} + \frac{3n^2}{\text{stride}^2}$ - Full attention: $4n^2d + 4n^2 + 6nd^2$ With stride = 4, precomputation cost is < 1/8 of full compute With stride = 8, precomputation cost is < 1/16 of full compute (As shown in Figure 5, while the actual latency closely aligns with theoretical predictions, it may be slightly higher due to factors such as vector reordering, memory overhead, and the implementation details of the Triton kernel.) ### 6. Predefined Sparsity Implementation Fixed sparsity can be achieved by setting a block limit k and selecting the top-k highest-scoring blocks. This approach is equivalent to the Top-k strategy (Table 8). However, we believe fixed sparsity across all inputs is suboptimal since information density varies between requests. XAttention's **dynamic sparsity determination** enables better generalization and accuracy across diverse scenarios. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. It addresses my main concerns. --- Reply to Comment 1.1.1: Comment: Thank you for confirming that your main concerns have been addressed. Please don’t hesitate to let us know if you have any further questions—we’d be more than happy to clarify or provide additional details. If there are no remaining concerns, we would greatly appreciate it if you could consider updating your evaluation to reflect your current view of the paper. 
Very best, Authors
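The complexity estimates quoted in the rebuttal above can be sanity-checked numerically. A small sketch, where the head dimension d = 128 and the 128k-token sequence length are assumed values rather than figures taken from the paper:

```python
def precompute_flops(n, d, stride):
    # Rebuttal's estimate: approximate antidiagonal attention + approximate softmax
    return 2 * n**2 * d / stride + 3 * n**2 / stride**2

def full_attention_flops(n, d):
    # Rebuttal's estimate for full attention
    return 4 * n**2 * d + 4 * n**2 + 6 * n * d**2

n, d = 128 * 1024, 128  # assumed: 128k-token sequence, head dim 128
for stride in (4, 8, 16):
    ratio = precompute_flops(n, d, stride) / full_attention_flops(n, d)
    print(f"stride={stride:2d}: precompute is {ratio:.4f} of full attention")
```

For large n the ratio is roughly 1/(2 x stride), which matches the rebuttal's "< 1/8 at stride 4" and "< 1/16 at stride 8" claims.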
Summary: This paper introduces XAttention, a plug-and-play framework that accelerates long-context inference in Transformer models through block sparse attention. The key innovation is using the sum of antidiagonal values in the attention matrix as a proxy for block importance, allowing for identification and pruning of non-essential blocks. This approach achieves considerable speedups (up to 13.5x) while maintaining comparable accuracy to full attention across language, video understanding, and video generation tasks. The method doesn't require retraining and shows promising results compared to other block-sparse approaches like MInference, FlexPrefill, and SeerAttention. Claims And Evidence: +The claims made in this submission are generally supported by convincing evidence. The main claim of achieving significant speedups while maintaining accuracy is well-supported by experimental results across multiple benchmarks. -However, the evidence is somewhat limited in model diversity, with primary evaluations on Llama-3.1-8B-Instruct for language tasks, Qwen2-VL-7B-Instruct for video understanding, and HunyuanVideo for video generation. The efficacy of the approach across model architectures beyond these specific models is not fully established, which weakens the generality claim somewhat. It would be valuable to test more models, e.g., Mistral Nemo 12B Instruct, Phi 3.5 Mini 3.8B Instruct, Qwen2.5 7B Instruct, and so on. Methods And Evaluation Criteria: +The paper uses established benchmarks (RULER, LongBench, VideoMME, and VBench) that directly test the challenges of long-context understanding, which align well with the goal of the work. The comparison against strong baselines (FlashAttention, MInference, FlexPrefill, SeerAttention) provides good context for understanding the gains. 
+The ablation studies are valuable in examining the contributions of different components (antidiagonal pattern, threshold block selection, minimum threshold prediction), though more exploration of hyperparameter sensitivity would strengthen the evaluation. Theoretical Claims: The paper does not have theory. Experimental Designs Or Analyses: +The experimental design is generally sound. The authors evaluate on diverse tasks (language, video understanding, video generation) with varying sequence lengths (4k to 256k tokens), providing a comprehensive picture of the method's capabilities. -One issue is the relatively simple setup for video generation experiments. While the comparison against full attention is done using the same random seed and prompts, only PSNR, SSIM, and LPIPS metrics are reported without detailed analysis of generation quality or more nuanced evaluation. More qualitative analysis would be beneficial. Supplementary Material: The paper does not have supplementary material. Relation To Broader Scientific Literature: The authors connect their approach to previous work on sparse attention (Sparse Transformer, LongFormer, BigBird, etc.) and more recent work on attention optimizations like FlashAttention and inference acceleration methods. The paper clearly identifies its novelty compared to related approaches like MInference and FlexPrefill, highlighting that those methods incur significant computational overhead for pattern selection, which XAttention addresses through its antidiagonal scoring technique. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths: - The antidiagonal scoring idea is novel and interesting - it provides a simple yet effective heuristic for identifying important attention blocks without requiring expensive computation. - The speedup achieved (up to 13.5x for 256k context) is impressive. 
Figure 4 clearly shows how XAttention consistently outperforms other sparse attention methods across different sequence lengths. - The method is training-free and can be applied as a drop-in replacement, making it immediately useful for practitioners without requiring costly retraining or fine-tuning. The dynamic threshold prediction approach shows thoughtful consideration of the varying sparsity patterns across different attention heads. Weaknesses: - Limited model diversity - the evaluation focuses primarily on Llama-3.1-8B-Instruct for language tasks, with limited exploration across model families or sizes. This raises questions about how well the approach generalizes across different model architectures and scales. - The video generation experiments, while novel, feel preliminary. More detailed analysis beyond basic metrics would strengthen these results. Other Comments Or Suggestions: NA Questions For Authors: NA Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ### 1. Model Generalizability We tested XAttention across diverse architectures with consistent results: **LLMs Accuracy (RULER)** | Model | Method | Average (4k-128k) | Performance Delta | | ----------------- | ----------- | ----------------- | ----------------- | | Mistral Nemo 12B | Full | 67.97 | – | | | MInference | 64.49 | -3.48 | | | FlexPrefill | 64.61 | -3.36 | | | XAttn S=4 | **67.92** | **-0.05** | | | XAttn S=16 | 67.47 | -0.50 | | Phi 3.5 Mini 3.8B | Full | 84.68 | – | | | MInference | 81.89 | -2.79 | | | FlexPrefill | 82.83 | -1.85 | | | XAttn S=4 | **84.86** | **+0.18** | | | XAttn S=16 | 83.82 | -0.86 | | Qwen2.5 7B | Full | 77.84 | – | | | MInference | 74.02 | -3.82 | | | FlexPrefill | 75.10 | -2.74 | | | XAttn S=4 | **77.75** | **-0.09** | | | XAttn S=16 | 77.21 | -0.63 | **LLMs Speed-up** | Model | Method | 8k | 16k | 32k | 64k | 128k | 256k | | ------------ | ---------- | -------- | -------- | -------- | -------- | --------- | --------- | | Mistral Nemo | XAttn S=16 | **1.7×** | **2.6×** | **4.7×** | **8.3×** | **9.9×** | **10.9×** | | Phi 3.5 | XAttn S=16 | 1.4× | **2.3×** | **4.7×** | **8.1×** | **11.0×** | **11.9×** | | Qwen2.5 7B | XAttn S=16 | **1.8×** | **2.7×** | **4.9×** | **8.5×** | **12.4×** | **13.9×** | **Video Generation (Wan 2.1 14B)** | Threshold | PSNR (↑) | SSIM (↑) | LPIPS (↓) | Density (%) | Speed-up | | --------- | -------- | -------- | --------- | ----------- | -------- | | 0.90 | 21.20 | 0.739 | 0.212 | 35.8 | 2.5× | | 0.95 | 22.67 | 0.794 | 0.167 | 51.2 | 1.8× | --- ### 2. 
Hyperparameter Sensitivity Threshold = 0.9 demonstrates strong **generalizability** across models: **Same threshold (0.9) on different models:** | Model | Llama | Mistral Nemo | Phi3.5-mini | Qwen2.5 | | ------------- | ------ | ------------ | ----------- | ------- | | Sparsity | 23.06% | 24.93% | 29.32% | 21.15% | | Performance Δ | -0.01 | -0.05 | +0.18 | -0.09 | **Trade-off curve with Llama 3.1 8B (stride=4):** | Threshold | 0.1 | 0.7 | 0.8 | 0.9 | 0.95 | 1.0 | | --------- | ----- | ----- | ------ | ------ | ------ | ------- | | Sparsity | 4.31% | 5.62% | 10.35% | 23.06% | 49.88% | 100.00% | | Accuracy | 41.34 | 73.96 | 84.39 | 87.51 | 87.64 | 87.52 | --- ### 3. Video Generation Quality These quantitative metrics are well-established in efficient visual content generation work for comparing efficiently generated content against the original generation, as used in Distrifusion (Li et al., https://arxiv.org/pdf/2402.19481) and Sparse VideoGen (Xi et al., https://arxiv.org/pdf/2502.01776). They are extremely strict compared to content-level scores since they require pixel-level exactness, and the scores XAttention achieves indicate generations very similar to the originals. Beyond quantitative metrics, we've included qualitative samples in Figure 3 comparing XAttention-generated videos with full-attention videos, confirming they are virtually indistinguishable. --- Rebuttal Comment 1.1: Comment: Thank you for your response. The new results seem strong and addressed my concerns. Thus, I will raise my score from 3 to 4. --- Reply to Comment 1.1.1: Comment: Thank you for your thoughtful confirmation and for raising your score. We're glad the new results addressed your concerns, and we truly appreciate your updated evaluation! Very best, Authors
Summary: - This paper proposes an efficient attention model. - It finds that the antidiagonal values in the attention matrix provide a powerful proxy for block importance. - Unlike existing methods that primarily rely on computationally intensive and lossy solutions like token pooling to identify important blocks, XAttention directly uses attention scores, which is more efficient. - The proposed module is tested on several benchmarks including language processing, video understanding, and video generation. Claims And Evidence: - Not exactly. - Although extensive experiments are provided, the algorithm itself is confusing: the implementation details, calculation equations, and theoretical computational complexity are not provided to help understand this method. - The motivation and design choice of threshold prediction and dynamic programming are confusing. - The method achieves both notable performance and efficiency improvements compared to previous methods, but the underlying reason remains unclear. Methods And Evaluation Criteria: Yes, three different tasks (language processing, video understanding, and video generation) on several benchmarks are adopted for evaluation. An efficiency comparison of the proposed attention module is also provided. Theoretical Claims: Yes, Algorithm 1 is checked. - Algorithm 1 involves two levels of loops to select blocks; what is its computational complexity, and why is it more efficient? - Why is the approximate softmax attention calculated inside the loops? Experimental Designs Or Analyses: Yes Supplementary Material: No supplementary material provided. Relation To Broader Scientific Literature: None Essential References Not Discussed: None. Other Strengths And Weaknesses: - Some implementation details of the experiments are not provided. For example, does XAttention need finetuning, or can it directly replace standard attention during inference? In language tasks, does XAttention work in both the prefill and decoding stages? 
- The figures shown in the paper are confusing; they seem not to align with the main algorithm and demand extensive effort to understand. Other Comments Or Suggestions: I think the paper's writing needs extensive improvement, especially the algorithm, implementation details, design-choice analysis, and computational complexity comparison. Questions For Authors: Does this method use any CUDA or Triton implementation? Or does it use any existing efficient attention components or code? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### 1. Algorithm Clarification XAttention precomputes attention within the **antidiagonal pattern** and uses these scores to guide block-sparse attention selection. Our method: - Requires no fine-tuning - Achieves **lowest overhead** among prefill acceleration approaches (Figure 5) - Delivers up to **13.5× speedup** over FlashAttention (Figure 4) --- ### 2. Complexity Analysis **Precomputation complexity:** $$\frac{2n^2d}{\text{stride}} \quad (\text{approximate attention}) \ + \ \frac{3n^2}{\text{stride}^2} \quad (\text{approximate softmax})$$ **Full attention complexity:** $$4n^2d + 3n^2 + 4nd^2$$ With **stride = 4**, precomputation cost is < **1/8** of full compute With **stride = 16**, precomputation cost is < **1/32** of full compute. **Figure 5** shows a breakdown of precompute time that closely aligns with this theoretical result. (While the actual latency closely aligns with theoretical predictions, it may be slightly higher due to factors such as vector reordering, memory overhead, and the implementation details of the Triton kernel.) --- ### 3. Threshold DP Design Choice As described in Section 2.3, this is an **optional component** for optimizing sparsity by assigning different thresholds per head. XAttention also works effectively with global thresholds on LLMs, VLMs, and video generation. --- ### 4. Antidiagonal Pattern Effectiveness The antidiagonal pattern: - Retains all token information - Simultaneously detects vertical and slash patterns with $1/\text{stride}$ probability - Prevents information loss compared to horizontal lines or pooling methods - Empirically outperforms both diagonal patterns (Table 6) and pooling methods (e.g., FlexPrefill) --- ### 5. Two-Level Loop Structure In Algorithm 1: - Outer loop: blocks (`blocknum`) - Inner loop: stride slices (`stride`) Despite appearing as nested loops, the **actual complexity is O(n)**, not O(n²), where n represents sequence length. --- ### 6. 
Softmax Approximation Line 120 explains why we normalize attention scores - to create a probability distribution for threshold selection. The Softmax complexity here is $\left(\frac{\text{stride}}{\text{blocksize}}\right)^2$ of full attention (e.g., $\frac{1}{1024}$ with stride = 4). Top-k could replace Softmax, but Table 8 demonstrates this performs worse than our threshold method. --- ### 7. Implementation Details As noted in line 20 of the abstract, our method is: - **Plug-and-play** - Requires **no finetuning** - **Specifically designed for prefill stage** We used both Block Sparse Attention (https://github.com/mit-han-lab/Block-Sparse-Attention) CUDA kernel and Triton kernel for implementation. --- ### 8. Figure Presentation Thank you for these suggestions. We will improve figure clarity in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the response, after carefully reading the rebuttal, code and paper, some of the algorithm do make sense. I will raise my score accordingly. But I am still wondering: - Do the important blocks calculated with the the antidiagonal pattern statistically align with the importance blocks calculated with original attention scores or pooled attention scores? --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to carefully review our rebuttal, and paper. We greatly appreciate your willingness to raise your score based on the deeper understanding of our algorithm. You raised a great question. We conducted a thorough statistical analysis to address this point. First, we evaluated the Spearman's rank correlation between various scoring strategies and the original full attention map scores across 100 random inputs with sequence length of 4K: | Method | Antidiagonal | Diagonal | Sum Pooling | Max Pooling | | ------------- | ------------ | -------- | ----------- | ----------- | | Correlation ↑ | 0.49 | 0.32 | 0.14 | 0.01 | While revealing, correlation alone doesn't fully capture our algorithm's objective. 
Since XAttention only needs to identify which blocks to compute (not their exact ranking), we further analyzed using Area Under the ROC Curve (AUC) - a more suitable metric for binary classification tasks. For this analysis, we: 1. Labeled blocks from the full attention map exceeding the threshold as positive (1), others as negative (0) 2. Used scores from each method as predictions 3. Calculated AUC to measure each method's ability to correctly classify important blocks | Method | Antidiagonal | Diagonal | Sum Pooling | Max Pooling | | ----------- | ------------ | -------- | ----------- | ----------- | | AUC Score ↑ | 0.84 | 0.69 | 0.55 | 0.52 | The antidiagonal pattern significantly outperforms other methods in both metrics. An AUC of 0.84 indicates strong discriminative ability - the antidiagonal scores successfully identify ~84% of the truly important blocks that would be selected using full attention scores. This superior performance aligns with our theoretical analysis: the antidiagonal pattern efficiently captures both vertical and slash patterns while preserving token-level information, making it an excellent proxy for identifying attention hotspots. We're open to any additional questions you might have about our work.
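The AUC protocol described in this reply (label blocks whose full-attention score exceeds a threshold as positive, then measure how well a proxy score separates them) is easy to reproduce on toy data. A minimal sketch; the numbers below are synthetic illustrations, not the authors' measurements:

```python
def rank_auc(labels, scores):
    """Rank-based AUC: probability that a randomly chosen positive
    block outscores a randomly chosen negative one (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Step 1: "ground truth" importance labels from a toy full attention map
full_scores = [0.9, 0.7, 0.6, 0.3, 0.2, 0.1]
labels = [1 if s > 0.5 else 0 for s in full_scores]   # threshold = 0.5

# Step 2: a proxy scorer's predictions for the same blocks
proxy_scores = [0.8, 0.4, 0.7, 0.5, 0.1, 0.2]

print(rank_auc(labels, proxy_scores))  # 8/9 ≈ 0.889
```

An AUC near 1.0 means the proxy's ranking recovers the blocks full attention would select, which is exactly what the 0.84 figure reported for the antidiagonal pattern is measuring.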
Summary: This paper introduces XAttention, a novel block-sparse attention mechanism leveraging an "antidiagonal scoring" method to efficiently approximate standard transformer attention. XAttention aims to accelerate inference in Long-Context Transformer Models (LCTMs) by using an antidiagonal scoring strategy to identify and prune less important blocks within the attention matrix. The authors argue that summing antidiagonal values within blocks effectively captures critical attention regions, allowing for computational savings without substantial accuracy loss. Empirical evaluations on multiple benchmarks (RULER, LongBench, VideoMME, VBench) demonstrate that XAttention achieves competitive accuracy compared to full attention methods while substantially reducing computational overhead, showing speedups up to 13.5×. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: There are no theoretical claims in this work. Experimental Designs Or Analyses: Yes Supplementary Material: The authors did not provide the supplementary materials. Relation To Broader Scientific Literature: The key contributions of this paper show that the sum of antidiagonal selected elements serves as a proxy for the overall importance of the corresponding attention block, which is related to block-wise sparse attention. Essential References Not Discussed: The following essential reference about block-wise attention is not discussed in this paper: [1] Blockwise Self-Attention for Long Document Understanding, ACL 2020 Other Strengths And Weaknesses: Strengths: 1. The observation that antidiagonal values within the attention matrix can serve as a powerful indicator of block importance makes sense. 2. Clearly motivated method with an efficient antidiagonal scoring strategy. 3. Extensive and solid experiments across diverse benchmarks (text and video domains), providing good empirical support. 4. 
Demonstrates substantial computational efficiency with competitive accuracy relative to dense attention methods. Weaknesses: 1. While the paper presents some refinements in block sparse attention methods compared to existing approaches such as [1], the improvements appear to be incremental. It would be valuable for the authors to elaborate on how antidiagonal selection offers significant performance gains. 2. While the use of the antidiagonal as an importance indicator for the block is an interesting idea, it appears to be the primary novel contribution of the work; the improvements made to the methods appear somewhat limited. 3. Lack of deeper analyses and insights to justify why antidiagonal scoring outperforms other potential scoring methods, such as pooling methods. It would be beneficial if the authors could provide additional theoretical insights to further substantiate and highlight the advantages of this approach compared to existing methods. 4. Lack of a speed comparison between XAttention and FlashAttention. [1] Blockwise Self-Attention for Long Document Understanding, ACL 2020 Other Comments Or Suggestions: The spacing between the captions for tables and figures and the surrounding text appears insufficient, which could affect both readability and the overall presentation. It is recommended that additional spacing be introduced to enhance the visual clarity of the document. For example: 1. The caption for Figure 2 (lines 66–68) 2. The caption for the table (lines 165–169) 3. Table 3 (lines 279–280) 4. The caption in Table 4 (lines 294–295) Questions For Authors: Could you please explain why the appendices have not been included? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### 1. Antidiagonal Selection: Performance and Insights Antidiagonal selection offers significant advantages over other patterns: - Retains **all token information** while simultaneously detecting both **vertical and slash patterns** critical in LLM prefill - Diagonal patterns **miss** slash patterns with probability $\frac{\text{stride} - 1}{\text{stride}}$ - Other patterns (horizontal lines, pooling methods) lose token-level information - Empirically validated superiority: outperforms diagonal patterns (Table 6) and pooling-based methods used by FlexPrefill --- ### 2. Novel Contribution We respectfully disagree with the assessment of limited contributions. While block-sparse attention exists, **effectively implementing it without accuracy/efficiency degradation remains challenging**: - FlexPrefill and MInference: Suffer from **high precomputation costs** and rely on last block for pattern detection, preventing **chunk prefill** and adaptation to **non-text tasks** like video - SeerAttention: Requires **additional parameter training**, limiting generalizability. Also it has poor empirical performance. - Our work: First to identify antidiagonal pattern as an effective block importance proxy, creating a **simple yet powerful method** that significantly improves the efficiency-accuracy trade-off --- ### 3. FlashAttention Comparison As shown in the Figure 4 of our submitted paper, XAttention achieves up to **13.5× speedup** over FlashInfer FlashAttention (one of the fastest implementations): | Prefill Time (ms) | 8k | 16k | 32k | 64k | 128k | 256k | | ------------------------- | ---- | ---- | ---- | ----- | ------ | ------ | | FlashInfer | 5.0 | 19.3 | 77.2 | 314.5 | 1269.5 | 5192.1 | | FlashAttention (Official) | 4.9 | 19.1 | 76.8 | 312.2 | 1265.8 | 5186.5 | | XAttn Stride = 4 | 3.3 | 8.2 | 24.4 | 63.2 | 181.4 | 543.6 | | XAttn Stride = 16 | 2.6 | 5.2 | 13.4 | 38.3 | 134.0 | 383.6 | ### 4. 
References and Formatting Thank you for noting these issues. We will address them in the revision. --- ### 5. Appendices The main paper effectively covers our contributions. We will include an appendix showing additional visual samples, theoretical analysis, and experimental results in the next revision.
GradPS: Resolving Futile Neurons in Parameter Sharing Network for Multi-Agent Reinforcement Learning
Accept (poster)
Summary: This paper investigates parameter sharing techniques in cooperative multi-agent reinforcement learning (MARL). The authors observe gradient conflicts among multi-agent policies and propose a new partial parameter-sharing scheme. This method exhibits superior performance on benchmarks such as SMAC and PredatorPrey. Claims And Evidence: Yes. Methods And Evaluation Criteria: 1. The motivation is clear and well-supported by the toy example in Figures 1-3. 2. The experiments demonstrate the superior performance of the proposed GradPS compared to baselines on benchmarks. Overall, the experiments are thorough enough to support the claims made in this paper. However, I would like to suggest a few additional experiments that could potentially strengthen the paper even further: 1. It might be insightful to visualize the group assignment pattern throughout the training process. This could offer valuable insights for future research on PS in MARL. 2. Considering that cloning and optimizing weights separately may introduce some overhead, it would be helpful to report the training time and memory usage. 3. $\alpha$ also appears to be quite an important hyperparameter. It would be beneficial to include a parameter sensitivity analysis for this variable. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. The supplementary materials have been reviewed thoroughly. Code is provided, which is good for the community. Relation To Broader Scientific Literature: As adequately discussed in the related work section, this work bridges neuron learning efficiency and MARL. Essential References Not Discussed: No. Other Strengths And Weaknesses: This paper proposes a novel PS technique for MARL. The method is well-motivated by the observation that MARL often experiences gradient conflicts among agents, causing futile neurons and leading to degraded performance. The proposed method is clearly explained and well-supported by experiments. 
However, it might introduce extra overhead, and some experimental results may need further explanation (see the Question Section for details). Other Comments Or Suggestions: N/A Questions For Authors: Method-related: 1. How is gradient conflict retained during the interval T? Will this introduce significant overhead in terms of time and space? Experiment-related: 1. From the analysis in Section 4, fewer futile neurons should lead to better expressiveness and performance. While the proposed GradPS provides the best performance and the lowest futile neuron percentage, baselines with higher futile neuron percentages do not seem to correspond to lower performance. Are there any possible reasons for this? 2. Could the authors provide a cost comparison (training time and memory requirements) for the proposed methods and baselines? 3. Is there any pattern or relationship between the clone ratio and the heterogeneity of the agents? 4. The authors state that $\rho$ does not impact performance, though it seems tied to the computational overhead. In this case, is it safe to assume that we should always set a large $\rho$? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your valuable comments and suggestions, your comments are very important to us. We will improve our work based on your suggestions. We address your concerns as follows. ## Methods: >1. ... visualize the group assignment pattern... Thank you for your suggestion. It is very valuable, and we will include it in the revised manuscript. The following table shows an example of the grouping of **a neuron** in the 5m_vs_6m environment. Before 0.5M steps, the neuron is not futile. At 0.6M timesteps, the neuron is futile, and the agents for the neuron were grouped into (1,2,3) and (4,5). The grouping is restored at 1.2M timesteps. At 1.6M timesteps, the agents were grouped into (1,3) and (2,4,5). |Timesteps|0.2M|0.4M|0.6M|0.8M|1M|1.2M|1.4M|1.6M|1.8M|2M| |-|-|-|-|-|-|-|-|-|-|-| |Neuron|/|/|123/45|123/45|123/45|/|/|13/245|13/245|13/245| >2. ...report the training time and memory usage. Appendix D.6 shows the number of parameters (including factorization networks) used by different methods. We provide more results as follows. We present the wall-clock training time (in hours) for each algorithm, averaged over 5 random seeds. The computational overhead of GradPS is small. It takes less training time than SNP, SePS and NoPS, most of the time. |Time(h)|FuPS|GradPS|FuPS+id|SNP|SePS|NoPS|Kaleidoscope| |--|--|--|--|--|--|--|--| |Predator-Prey Small|2.45|2.74|2.55|2.72|5.23|5.45|4.37| |Predator-Prey Medium|5.11|5.65|5.23|5.34|6.86|7.18|6.32| |Predator-Prey Large|5.89|6.32|5.84|6.62|9.92|11.65|9.05| |5m_vs_6m|9.21|9.74|9.33|10.64|12.56|14.21|11.86| |MMM2|12.94|13.83|13.02|14.95|18.42|19.34|15.53| |27m_vs_30m|23.12|24.75|23.45|25.77|36.53|42.32|31.45| The following table depicts the number of parameters for each agent network compared to the full parameter-sharing approach. The additional memory overhead of GradPS is lower than that of SePS and NoPS. 
|Parameters|FuPS|GradPS|FuPS+id|SNP|SePS|NoPS|Kaleidoscope| |-|-|-|-|-|-|-|-| |Predator-Prey Small|100%|145%|102%|123%|198%|624%|218%| |Predator-Prey Medium|100%|146%|103%|124%|197%|1034%|217%| |Predator-Prey Large|100%|147%|104%|124%|196%|1540%|216%| |5m_vs_6m|100%|140%|102%|116%|297%|519%|226%| |MMM2|100%|176%|102%|128%|296%|1036%|218%| |27m_vs_30m|100%|195%|105%|136%|293%|2762%|215%| >3. ...$\alpha$ sensitivity analysis... $\alpha$ controls how futile neurons are identified. We have performed a sensitivity analysis for three environments; their results are depicted as follows. |Env.|$\alpha$=0.1|$\alpha$=0.2|$\alpha$=0.4| |-|-|-|-| |5m_vs_6m(win rate)|72.3|74.5|67.6| |MMM2(win rate)|78.3|82.6|67.6| |Predator-Prey Medium(return)|167.4|173.9|178.8| The results show setting $\alpha=0.2$ is a good choice. We agree that such a threshold may be environment-dependent, and we would like to explore it in the future. ## Questions: #### Method: > ...How gradient conflict retained..., introduce significant overhead... The accumulated gradients are stored in a tensor (T $\times$ agents $\times$ dim), representing the historical gradient values of neurons across each agent over the past T periods. These saved gradients are used to compute neuron efficiency, not for back-propagation. It introduces little computational overhead. Moreover, we have shown in the above parameter table that the memory overhead of GradPS is low. #### Experiment: >1. ...higher futile neuron percentages do not seem to correspond to lower performance. ... We appreciate your insight. MARL performance is influenced by multiple factors, and futile neuron is just one of them. We agree that higher neuron percentages do not necessarily lead to lower performance. >2. ... a cost comparison... We have already shown the execution time and memory overhead in the above table. >3. ...relationship between the clone ratio and the heterogeneity.. 
The cloning ratio $K$ is not directly correlated with agent heterogeneity: $K$ is a neuron-level value, not an agent-level one. We will explore whether the aggregate $K$ over all neurons can represent agent heterogeneity.

>4. ...$\rho$ does not impact performance, though it seems to tie to the computation overhead...always set a large $\rho$

$\rho$ is the probability of restoring the parameters of a neuron from grouped to shared status. We have conducted experiments that vary $\rho$ from 0.05 to 0.5, and we find that increasing $\rho$ does not significantly affect the computational overhead.

|Time(h)|$\rho$=0.05|$\rho$=0.2|$\rho$=0.5|
|-|-|-|-|
|5m_vs_6m|9.74|9.66|9.68|
|MMM2|13.83|13.64|13.94|

However, gradual parameter recovery requires sufficient time to prevent rapid performance degradation; thus, a large $\rho$ may not be the optimal choice, as shown in the following table.

|Env.|$\rho$=0.05|$\rho$=0.2|$\rho$=0.5|
|-|-|-|-|
|5m_vs_6m(win rate)|74.5|73.2|63.6|
|MMM2(win rate)|82.6|80.2|57.6|

---

Rebuttal Comment 1.1:

Comment: I appreciate the authors' rebuttal and the additional details provided. I encourage incorporating these updates into the revised manuscript to strengthen the paper. After careful consideration, I will maintain my original rating.
Summary: This paper studies gradient conflict in parameter-sharing networks and proposes a Gradient-based Parameter Sharing (GradPS) method to resolve futile neurons in the PS network. It dynamically creates multiple clones for each futile neuron. For each clone, a group of agents with low gradient conflict shares the neuron's parameters, which are updated according to the gradients of each agent group. GradPS performs better than other parameter-sharing (PS) methods on the SMAC and predator-prey benchmarks.

Claims And Evidence: This paper is well-motivated. It demonstrates the futile neuron phenomenon and neuron gradient conflict in MARL through experiments on SMAC. I think the proposed method is interesting.

Methods And Evaluation Criteria: The proposed method GradPS addresses the challenge of gradient conflicts in parameter-sharing (PS) networks for MARL. By dynamically cloning futile neurons and grouping agents with low gradient conflicts, GradPS effectively balances the trade-off between sample efficiency and policy diversity. Overall, the method and evaluation are well-designed.

Theoretical Claims: N/A

Experimental Designs Or Analyses: This paper conducts evaluations on SMAC and predator-prey benchmarks. Experiments show that the proposed method outperforms the state-of-the-art PS methods. Ablation studies validate parameter sensitivity. Some issues:

1. How do you determine the futile threshold $\alpha$?
2. I am concerned about the computational overhead of this method. While I noticed the Execution Time Table in the appendix, the paper does not provide any explanation for it.

Supplementary Material: I have reviewed the supplementary material, mainly the additional experiment results.

Relation To Broader Scientific Literature: This paper tackles the issue of policy homogeneity in parameter-sharing networks for multi-agent reinforcement learning from the angle of neuron gradient conflict among agents.
This paper highlights the futile neuron phenomenon and neuron gradient conflict in MARL, which have been explored in multi-task learning but not previously in MARL.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:

## Strengths

+ Originality: This paper studies gradient conflict among agents, a concept not previously explored in depth in MARL. The proposed gradient-based PS method is novel and addresses policy homogeneity in parameter-sharing networks from the angle of neuron gradient conflict among agents.
+ Significance: GradPS can learn diverse behaviors through multiple clones to avoid gradient conflict, and enjoys good sample efficiency by sharing gradients among agents of the same clone neuron.
+ Clarity: The paper is well-structured and well-written.

## Weaknesses

+ Analysis of the futile neuron threshold $\alpha$ is missing.
+ The algorithm lacks sufficient theoretical analysis.

Other Comments Or Suggestions: N/A

Questions For Authors:

+ Under what circumstances does gradient conflict usually occur? If agents share a common goal, such as pursuing the same prey, is there still gradient conflict or a futile neuron?
+ Could you discuss situations where this approach is not effective or does not provide performance gains?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thanks for viewing our work as interesting, and for your valuable comments regarding the novelty of the futile neuron phenomenon and neuron gradient conflicts in MARL. We will improve our work based on your suggestions. We address your concerns as follows.

## Experimental:

>1. How do you determine the futile threshold?

We performed a simple parameter sensitivity experiment to determine the futile threshold $\alpha$. The results for SMAC (5m_vs_6m and MMM2) and Predator-Prey are as follows.

|Env.|$\alpha$=0.1|$\alpha$=0.2|$\alpha$=0.4|
|--|--|--|--|
|5m_vs_6m(win rate)|72.3|74.5|67.6|
|MMM2(win rate)|78.3|82.6|67.6|
|Predator-Prey Medium(return)|167.4|173.9|178.8|

The sensitivity experiment shows that performance with $\alpha=0.2$ is good. Thus, we fix the futile threshold at 0.2 for all experiments. It is possible that such a threshold should be flexible across environments; moreover, better thresholds, such as the deviation from the average futile ratio, could be used. We want to explore these in the future.

>2. ...the computational overhead of this method...

In the appendix, the execution time (wall-clock training time) is shown as a ratio over the FuPS approach. The results show that the additional computational overhead introduced by GradPS is small; it is only slightly higher than FuPS. The other methods (e.g., SePS, NoPS) require significant computational overhead.

In the following table, we present the wall-clock training time (in hours) for each algorithm, averaged over five random seeds. GradPS achieves lower computational overhead than other parameter-sharing methods (SNP, NoPS).
|Execution time(h)|FuPS|GradPS|FuPS+id|SNP|SePS|NoPS|Kaleidoscope|
|--|--|--|--|--|--|--|--|
|Predator-Prey Small(1M)|2.45|2.74|2.55|2.72|5.23|5.45|4.37|
|Predator-Prey Medium(1M)|5.11|5.65|5.23|5.34|6.86|7.18|6.32|
|Predator-Prey Large(1M)|5.89|6.32|5.84|6.62|9.92|11.65|9.05|
|5m_vs_6m(2M)|9.21|9.74|9.33|10.64|12.56|14.21|11.86|
|MMM2(2M)|12.94|13.83|13.02|14.95|18.42|19.34|15.53|
|27m_vs_30m(2M)|23.12|24.75|23.45|25.77|36.53|42.32|31.45|

## Weaknesses:

>1. Analysis of the futile neuron threshold $\alpha$ is missing.

We have discussed $\alpha$ in the response above.

>2. The algorithm lacks sufficient theoretical analysis.

We employ a simple yet effective method to mitigate gradient conflicts while encouraging future research to develop further solutions to gradient conflict issues in MARL. We will perform an in-depth theoretical analysis of GradPS and other parameter-sharing methods.

## Suggestions

>1. Under what circumstances does gradient conflict usually occur? If agents share a common goal...

Thanks for raising this question; we will address it in the discussion section. Gradient conflicts commonly occur in multi-agent systems. Even if agents share a common goal, they may have different partial observations, occupy different positions in the game, and take different actions, all of which can lead to different agent gradients (e.g., through divergent actions). For example, when capturing one prey, the best action for agent 1 may be to move up while the best action for agent 2 is to move down; these divergent optimal actions cause gradient conflict as well.

To verify this analysis, we trained two predators to capture one prey on a 5 $\times$ 5 map with fixed spawn points for the agents and the prey, and only one prey present at a time. Experiments show that the proportion of futile neurons is about 20%, which suggests that gradient conflicts exist even when the two agents pursue the same prey.
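To make the cancellation behind such conflicts concrete, here is a toy numerical sketch (the per-agent gradient values are invented for illustration, and the efficiency ratio is our simplified reading of a neuron-efficiency measure, not the paper's exact formula):

```python
# Toy illustration of gradient conflict at a single shared neuron.
# The per-agent scalar gradients below are invented, not taken from the paper.
g_agent1 = 0.9   # agent 1's update pushes the neuron's output up ("move up")
g_agent2 = -0.8  # agent 2's update pushes it down ("move down")

shared_update = g_agent1 + g_agent2                # what a fully shared neuron receives
aligned_magnitude = abs(g_agent1) + abs(g_agent2)  # magnitude if the updates agreed

# A simple efficiency ratio: close to 1 when agents agree, close to 0
# when their updates cancel (a "futile" neuron).
efficiency = abs(shared_update) / aligned_magnitude
print(round(efficiency, 2))  # 0.06: most of the learning signal is cancelled
```

Under this toy measure, the shared neuron retains only about 6% of the combined gradient magnitude, which is the kind of near-cancellation the two-predator experiment above points to.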
>2. ...situations where this approach is not effective...

In a highly stochastic environment such as SMACv2, where agent types are randomized each episode, GradPS fails to deliver performance improvements because the gradient conflict patterns vary with the randomness of the agents themselves.

---

Rebuttal Comment 1.1:

Comment: Thank you for the rebuttal and the additional details provided. My concerns have been addressed, and I will maintain my original rating.
Summary: This paper addresses the balance between parameter sharing and behavioral diversity in MARL from the perspective of gradient conflict and its resolution. First, the concept of gradient conflict in MARL is introduced, and experiments verify its impact and patterns on multi-agent policy training. Subsequently, a method is proposed to group agents at the neuron level by detecting gradient conflicts. Finally, the effectiveness of the proposed method is validated in SMAC and particle-based environments.

Claims And Evidence: The paper's structure is generally sound and supported by relevant evidence. However, some arguments may lack rigor or contain inaccuracies. These are pointed out below in order of appearance:

1. (minor) Line 40 claims the work is "neuron-based" to distinguish it from others. This may be inaccurate, as other works (e.g., network pruning in PS) can also be considered neuron-based.
2. (major) The paper makes multiple direct connections between an agent's policy/acquired skills and its gradients. This connection may be problematic. Gradients indicate only a temporary direction of improvement for the neural network. First, this improvement might not align with the optimal direction. Furthermore, a momentary gradient direction does not immediately constitute a stable policy or skill.
3. (minor) Lines 181-184 state that neuron efficiency decreases with an increasing number of agents. This is clearly not a universal conclusion. In MARL scenarios where agents perform identical "tasks", increasing the number of agents might actually increase neuron efficiency.

Methods And Evaluation Criteria: This paper constructs a strong PS algorithm from the perspective of gradient conflict and its resolution. The overall idea is suitable for this problem. However, concerns about the proposed method's necessity and practicality are the primary reasons I cannot give this paper a high score.
(Major) Regarding necessity: the proposed method uses gradient conflict to identify operable neurons and divide agents. However, I believe it does not address the core issue. The method dynamically assigns "roles" to agents by identifying gradient conflicts, but the essence should be to identify the functional heterogeneity of agents in MARL tasks.

(Major) Regarding practicality: the proposed method still relies on two hyperparameters, the number of groups and the threshold for detecting gradient conflicts. It even introduces more hyperparameters than existing grouping methods (e.g., SePS), which significantly hinders its practical application.

Theoretical Claims: This paper does not involve complex theoretical proofs. I have carefully reviewed all theoretical definitions and related claims. I believe there are just some minor issues with the notation used in the formulas and definitions. For example, in Equations 1 and 2, the definitions of *gradient conflict* and *Neuron Efficiency* lack a subscript for neuron $n$. In Equation 3, the superscript in the definition of *Futile Neuron* could easily be mistaken for an exponent.

Experimental Designs Or Analyses: The overall design of the experimental section is reasonable, but there are some issues with its specific implementation.

(Minor) Regarding the choice of baselines: I appreciate the authors' selection of SePS, SNP, and Kaleidoscope for comparison. However, considering that the proposed method performs dynamic grouping, it would be beneficial to also compare it with other dynamic grouping methods, such as AdaPS, ADMN [1], or MADPS [2], which are mentioned in the Related Work section.

(Minor) Regarding the ablation study: since the authors mention a specially designed parameter restoration approach in lines 302-308, this approach should be compared with other restoration methods.
[1] ADMN: Agent-Driven Modular Network for Dynamic Parameter Sharing in Cooperative Multi-Agent Reinforcement Learning. IJCAI 2024.
[2] Measuring Policy Distance for Multi-Agent Reinforcement Learning. AAMAS 2024.

Supplementary Material: I carefully reviewed the supplementary material, including the code related to the paper. This resolved some of my previous concerns.

Relation To Broader Scientific Literature: This paper offers a novel perspective on diversity learning in MARL, utilizing the concepts of gradient conflict and futile neurons.

Essential References Not Discussed: Since the abstract and introduction both mention the relationship between parameter sharing and diversity learning, the related work section should also include works on diversity-based learning. Although Section 3.1 already covers some diversity-based MARL works, some are still missing. There is a considerable amount of work in this area, including but not limited to: CDS [1], RODE [2], LIPO [3], MADP [4], FOX [5], and DICO [6].

[1] Celebrating diversity in shared multi-agent reinforcement learning. NeurIPS 2021.
[2] RODE: Learning roles to decompose multi-agent tasks. ICLR 2021.
[3] Generating diverse cooperative agents by learning incompatible policies. ICLR 2023.
[4] Measuring Policy Distance for Multi-Agent Reinforcement Learning. AAMAS 2024.
[5] FOX: Formation-aware exploration in multi-agent reinforcement learning. AAAI 2024.
[6] Controlling behavioral diversity in multi-agent reinforcement learning. ICML 2024.

Other Strengths And Weaknesses:

**Other Strengths**:

1. The paper investigates diversity learning in MARL from a novel perspective: gradient conflict.
2. The paper proposes a neuron-level parameter allocation method.

**Other Weaknesses**:

Practicality: Besides the aforementioned concerns about hyperparameter usage, I have concerns about the computational cost and practicality of neuron-level agent assignment.
The method requires calculating an assignment matrix of at least *Agent\*HiddenDim* size, which can be computationally expensive. My careful review of the code revealed that the agents' policies are restricted to shallow networks. The method's effectiveness and computational load in deeper networks, which are common in real-world scenarios, warrant further investigation and may be limited.

Other Comments Or Suggestions: Considering the concerns regarding necessity and practicality, I recommend that the authors emphasize investigating the fundamental connection between gradient-conflicted neurons and essential MARL elements, instead of concentrating solely on presenting a new algorithm.

Questions For Authors: Beyond the specific concerns already raised, I have the following questions, which may influence my assessment of whether my potential concerns become definitive shortcomings.

1. Regarding the gradient calculation for individual neurons across different agents in Figure 1: shouldn't the backpropagated gradient during neuron training be a tensor, rather than a scalar positive or negative value? Can the sign of a single neuron's gradient represent the "direction" of overall knowledge? The overall knowledge should be a non-linear combination of all neurons, so wouldn't a holistic view of all neurons better represent the knowledge?
2. Regarding lines 138-140 ("These findings suggest that although the knowledge learned by agents may be diverse, their differences may cause different gradients for neurons."), what is the precise meaning of this statement?
3. In lines 274-->236 ("Gradient conflicts arise from the diverse observations and actions of individual agents."): observations and actions are inputs and outputs of the network. However, gradients depend not only on the network's input/output structure but also on the loss function.
I recommend a more thorough derivation, as this information should also encompass elements of the MDP, such as state transitions.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: Thanks for your valuable comments; we address your concerns as follows.

## Claims:

>1. .."neuron-based"..

GradPS is a neuron-based method **from the angle of gradient conflict**. We agree that pruning-based PS can be viewed as neuron-based too.

>2. ..a momentary gradient direction..

We agree that gradients indicate temporary improvements. We will soften the claim regarding the direct connection between gradients and diversity/skill.

>3. ..neuron efficiency decreases

In Dec-POMDPs, agents with identical tasks may take different actions due to partial observations, leading to gradient conflicts from divergent optimal actions. In Figure 3 (Left), the agents share the same goal (removing all enemies) and are of the same type; neuron efficiency decreases with more agents due to higher partial-observation diversity. Please refer to our response to Suggestion 1 of Reviewer MPio for experiments on agents with an identical task.

## Methods:

>1. ..the functional heterogeneity of agents..

The relationship between gradient conflicts and functional differences is worth exploring. Even agents with an identical task can exhibit gradient conflicts due to partial observation issues, as analyzed above. **Thus, analyzing the learning efficiency of MARL under gradient conflict is needed.** We do not link gradient conflicts with role (functional) differences. However, according to our experiments (Fig. 10), our approach is able to discover hidden roles.

>2. .. more hyperparameters..hinders its practical application.

Our work is practical: it requires only 4 hyperparameters, no more than existing grouping methods, and it needs no pre-training (unlike SePS/AdaPS). The table below compares hyperparameters across PS/grouping methods.
|Method|Hyperparameter Count|Description|
|-|-|-|
|SePS|4|Cluster count, VAE training steps, latent dimension, KL weight|
|AdaPS|4|Cluster count, drop threshold, VAE training trajectory, KL weight|
|ADMN|4|Module count, layer number, module output size, combination weight|
|MADPS|5|Obs subset size, sampling times, fusion/division threshold, fusion/division interval|
|SNP|7|Actor/critic pruning ratios $\times$ 3, number of subnetworks|
|Kaleidoscope|6|Reset probability, ensembles, actor/critic diversity, actor/critic reset interval|

## Theoretical Claims:

> ...formulas and definitions...

We will fix the notation.

## Experimental:

>1. ..compare with other grouping methods.. AdaPS, ADMN [1], or MADPS [2]

ADMN and MADPS are **not open-source**; reproduction attempts in such a short time would risk introducing unfair biases. GradPS performs better than AdaPS, as shown in the following table.

|Env|GradPS|AdaPS|
|--|--|--|
|Predator-Prey Medium|173.9|94.7|
|5m_vs_6m|74.5|42.9|
|MMM2|82.6|52.2|

>2. ..compared with other restoration methods..

We compare against two restoration methods, which directly replace the neuron weights with the average value of the group or with the value of a random clone. Our method is better.

|Map|GradPS (ours)|Average Restoration|Random Restoration|
|--|--|--|--|
|Predator-Prey Medium|173.9|154.2|142.1|
|5m_vs_6m|74.5|64.2|54.7|
|MMM2|82.6|53.5|35.7|

## Essential References:

>..some diversity-based MARL works..

Thanks; we will discuss all of them in the related work.

## Weaknesses:

>.. computational cost and practicality.. effectiveness in deeper networks.

We have shown in Appendix Tables 10 and 11 that the added overhead is small; it is much smaller than that of partial PS methods (e.g., SePS and NoPS). Our work is practical for multi-layer agents: in Appendix D.5.2, for a 4-linear-layer agent, there are more futile neurons in the first linear layer than in the other layers.
Although it has more layers, such a method (QMIX-4-layer) performs worse than standard QMIX due to futile neurons. Applying GradPS to the first linear layer yields a higher performance gain than applying it to the other layers, justifying our first-layer focus. Moreover, we show that GradPS works in different agent networks (distributional or risk-sensitive) in Appendix D.4, Figure 7.

## Suggestions:

>..gradient-conflicted neurons and MARL elements..

We agree that it is necessary to study the fundamental connection between futile neurons and essential MARL elements, following ADMN and MADPS.

## Questions:

>1. ..gradient be a tensor..the overall knowledge..

The neuron gradient is not a vector (tensor) gradient; as defined in Definition 1, it is the gradient with respect to a neuron's output, which is just a scalar (positive or negative). A holistic view of all neurons can better represent the whole knowledge.

>2. ..the precise meaning..

We will soften the claim regarding the direct relationship between gradient conflicts and policy diversity.

>3. ..gradients depend not only on network input/output structure..

Gradient conflicts correlate with diverse agent observations/actions and are also influenced by factors such as loss functions and environmental stochasticity.
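Tying the answers above together, here is a minimal sketch of how scalar per-agent neuron gradients could drive futile-neuron detection and agent grouping. The efficiency formula (magnitude of the summed gradients over the sum of magnitudes), the sign-based grouping rule, and the gradient history values are all our simplifying assumptions for illustration, not the paper's exact procedure; the threshold $\alpha=0.2$ follows the sensitivity study discussed in the rebuttals.

```python
ALPHA = 0.2  # assumed futile-neuron threshold, as in the sensitivity study

def neuron_efficiency(history):
    """history[t][i]: scalar gradient of one neuron for agent i at period t."""
    flat = [g for period in history for g in period]
    magnitude = sum(abs(g) for g in flat)
    return abs(sum(flat)) / magnitude if magnitude else 1.0

def split_by_sign(history):
    """Toy grouping rule: split agents by the sign of their average gradient."""
    n_agents = len(history[0])
    mean = [sum(p[i] for p in history) / len(history) for i in range(n_agents)]
    pos = [i for i, g in enumerate(mean) if g >= 0]
    neg = [i for i, g in enumerate(mean) if g < 0]
    return pos, neg

# Invented gradient history: agents 0-2 consistently push the neuron's output
# up while agents 3-4 push it down, so the shared update largely cancels.
history = [
    [0.9, 1.1, 1.0, -1.6, -1.4],
    [1.0, 0.9, 1.1, -1.5, -1.5],
    [1.1, 1.0, 0.9, -1.4, -1.6],
    [0.9, 1.0, 1.1, -1.5, -1.5],
]

print(neuron_efficiency(history) < ALPHA)  # True: the neuron is futile
print(split_by_sign(history))              # ([0, 1, 2], [3, 4])
```

In GradPS proper, each such low-conflict group would then share a clone of the neuron, updated only by its own group's gradients, as the paper describes in Sec 5.1 and Appendix C.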
Summary: This paper identifies "futile neurons," neurons with conflicting gradient updates, in parameter-shared multi-agent reinforcement learning (MARL). It proposes GradPS, which dynamically clones these neurons, grouping agents with low gradient conflict to promote diversity and efficiency. Experiments on SMAC and Predator-Prey benchmarks show improved performance compared to existing parameter-sharing methods.

Claims And Evidence: The main claims of improved diversity and performance are generally supported by experiments. However, convergence plots are missing, making it hard to assess long-term stability. A more explicit reproducibility analysis (e.g., variability across random seeds) would also help validate the claims.

Methods And Evaluation Criteria: The proposed GradPS algorithm is innovative and appropriate for the MARL setting. Experiments involve relevant benchmarks and comparisons. However, it is unclear whether the baseline methods are implemented optimally, such as from existing strong repositories, potentially biasing results in favor of GradPS.

Theoretical Claims: The paper does not make theoretical claims requiring proof validation. Definitions of gradient conflict and futile neurons are clearly presented and intuitive.

Experimental Designs Or Analyses: Experiments cover multiple relevant benchmarks. Still, convergence properties are not shown explicitly, and some analyses (Sections 5.2/5.3) lack clarity. Specifically, the grouping mechanism for neurons and the sensitivity to hyperparameters (e.g., the number of groups K) need clearer explanation.

Supplementary Material: All.

Relation To Broader Scientific Literature: Related to MARL and deep-learning phenomena.

Essential References Not Discussed: No.

Other Strengths And Weaknesses:

Weaknesses:

Clarity:
1. Sections 5.2 and 5.3 are hard to read.

Method:
1. The "futile" neuron seems to be a kind of dormant neuron. The necessity of this new definition is not convincing.
2. The grouping method, which is important, is not described in the main paper.

Experiment:
1. The performance after convergence is missing.
2. Lack of comparison with naive partial parameter-sharing baselines, e.g., agents sharing the whole network except the last layer.

Other Comments Or Suggestions: Put more details in the main paper, especially in the method section.

Questions For Authors:
1. Are all algorithms implemented based on the same optimized repo, such as pymarl2 [1] and pymarl3 [2]?
2. Is the introduction of "futile" necessary?
3. Why does the execution time of GradPS increase, since the number of parameters for each agent is the same as the FuPS baseline?

[1] Hu, Jian, et al. "Rethinking the implementation tricks and monotonicity constraint in cooperative multi-agent reinforcement learning." arXiv preprint arXiv:2102.03479 (2021).
[2] Jianye, H. A. O., et al. "Boosting multiagent reinforcement learning via permutation invariant and permutation equivariant networks." The Eleventh International Conference on Learning Representations. 2022.

Code Of Conduct: Affirmed.

Overall Recommendation: 1
Rebuttal 1:

Rebuttal:

## Claims:

> 1. ...convergence plots are missing, ...More explicit reproducibility analysis...

All experiments were repeated with **5 different random seeds** to ensure reproducibility, with means and variances shown in all experimental result figures of this work. The convergence of GradPS is evaluated by running it in three environments for 3M instead of 2M steps; the results are depicted below. After 2M steps, the performance of GradPS continues to increase on MMM2, while on Predator-Prey and 5m_vs_6m it is stable.

|Env.|0.5M|1M|1.5M|2M|2.5M|3M|
|-|-|-|-|-|-|-|
|Predator-Prey Medium(return)|138.5|174.4|187.5|191.2|189.7|190.4|191.5|
|5m_vs_6m(win rate)|28.2|57.9|68.7|73.4|74.3|72.5|74.6|
|MMM2(win rate)|4.8|25.1|67.5|79.3|85.2|91.4|90.2|

## Methods:

>1. ...whether baseline methods are implemented optimally...

For fairness, all parameter-sharing (PS) methods (e.g., FuPS and SNP) were implemented in the same repositories (e.g., pymarl). The PS implementation in pymarl (located in rnn_agent.py) is the same as in pymarl2 and pymarl3; in this respect, pymarl2 and pymarl3 are not optimized versions of pymarl. Moreover, we also evaluate distributional and risk-sensitive agents, which are not included in pymarl2&3. The following table shows that for pymarl3 QMIX experiments, GradPS outperforms FuPS+ID and FuPS, demonstrating that our method works in pymarl3.

|Env (pymarl3)|GradPS|FuPS+Id|FuPS|
|-|-|-|-|
|Predator-Prey Medium (return)|187.2|165.8|139.6|
|5m_vs_6m (win rate)|73.4|68.9|65.7|
|MMM2 (win rate)|85.8|74.2|72.3|

## Experimental:

>2 ...the grouping mechanism and sensitivity to hyperparameters...

**We have described the grouping method in Section 5.1** and provided more details in Appendix C. We describe the hyperparameters and present more results as follows. $T$ is the period for identifying futile neurons; an excessively large $T$ may delay futile neuron detection. $K$ denotes the number of groups.
Setting $K$ too large leads to low sample efficiency. $\rho$ indicates the recovery probability for a group; in short-term tasks, even $\rho$=0 has a negligible effect on performance. Sensitivity analyses for $T$, $K$, and $\rho$ are shown in the left, middle, and right panels of Figure 10 in Appendix D.5.3, respectively. $\alpha$ determines futile neuron identification. Our sensitivity analysis (see table) shows that $\alpha$=0.2 yields promising results.

|Env.|$\alpha$=0.1|$\alpha$=0.2|$\alpha$=0.4|
|--|--|--|--|
|5m_vs_6m(win rate)|72.3|74.5|67.6|
|MMM2(win rate)|78.3|82.6|67.6|
|Predator-Prey Medium(return)|167.4|173.9|178.8|

## Weaknesses:

>1. Section 5.2 and 5.3 is hard to read.

We will revise them to improve readability.

### Method:

>1. ...seems a kind of dormant neuron...

Dormant neurons have low normalized activation scores, while futile neurons suffer from gradient conflicts unrelated to activation; a non-dormant neuron can still be a futile neuron. We list the differences as follows.

|Feature|Dormant Neuron|Futile Neuron|
|-|-|-|
|Definition|Normalized average activation value falls below $\tau$|Neuron efficiency is below $\alpha$|
|Normalized activation score|Small|Any|
|Gradient conflict|Unclear|Large|
|Gradient|Near zero|Any|
|Parameter update|Near zero|Slowed down by gradient conflict|

>2. ...the grouping method is not mentioned in the main paper...

**We have described the grouping method in Sec 5.1 of the main paper.**

### Experiment:

>2. ...Lack of comparison with naive baselines...

We compared against **SOTA partial PS methods**: SNP, which prunes a shared network for individual policies, and Kaleidoscope, which learns agent-specific masks. Our method outperforms both, as shown in the experiments. Following the reviewer's suggestion, we also compared with Naive LastPS (sharing all but the last layer), where our approach demonstrates significantly better performance, as depicted in the following table.
|Env|GradPS|Naive LastPS|
|-|-|-|
|Predator-Prey Medium(return)|173.9|74.7|
|5m_vs_6m(win rate)|74.5|54.6|
|MMM2(win rate)|82.6|16.2|

## Suggestions:

>Put more details in the method section.

We will add more details to improve its readability.

## Questions:

>2. Is the introduction of "futile" necessary?

In PS networks, gradient conflicts produce "futile neurons": neurons hindered by conflicting updates. As detailed in our comparison table, these are systematically identifiable through their distinct characteristics.

>3. why...the execution time of GradPS increase..., is the same as the fullps baseline...

In GradPS, the number of parameters per agent differs from that in FuPS, as demonstrated in Appendix D.6: GradPS requires additional parameters for cloning neurons and storing gradients. Regarding computational overhead, GradPS requires about 1.1$\times$ the training time of FuPS, and significantly less (40-50%) time than SNP and NoPS.

---

Rebuttal Comment 1.1:

Comment: Thank you for your response. However, the issues I raised have not been fully addressed.

## Performance after Convergence

The authors add some experimental results of GradPS in the rebuttal. But apparently, most of the experiments in both the main paper and the appendix have not converged; comparing the converged performance of the baselines and GradPS would make the improvement convincing.

## Code Base

I do not agree that "pymarl2 and pymarl3 are not optimized pymarl", and the authors do not answer my question about whether all the algorithms are implemented fairly on the same optimized code base.

## The Necessity of "Futile"

Theoretical and experimental results are needed to show the difference and relation between "dormant" and "futile".

## Presentation of Grouping Method

My original review means the grouping method should not be presented in only several purely textual sentences without any equations.

Overall, the experiments are not convincing, and the introduction of "futile" is not fully supported for me.
But this work is still interesting. The authors could consider making their experiments more solid and adding support for the new concepts they introduce.

---

Reply to Comment 1.1.1:

Comment: Thanks for your comments. We would like to point out that the review process is overseen by the PC/SAC/AC. Thanks to the ICML 25 policy, the reviews of this work will be released regardless of whether it is accepted.

## Summary of New Experiments

1. We compare GradPS and other PS methods with much longer training times to obtain convergence plots across nine sets of experiments.
2. We implement GradPS and other PS methods on Pymarl3 and conduct more than six sets of experiments.
3. We empirically evaluate dormant and futile neurons through three sets of experiments.
4. We compare GradPS with more PS baselines.

## Convergence Plot

In the initial rebuttal, we had already conducted multiple sets of experiments by running the SMAC environments for more than 2 million steps and the predator-prey environments for more than 1 million steps. Here, we have conducted **nine sets of experiments** with extended training steps. The results, shown in the Pymarl folder of the [anonymous link](https://anonymous.4open.science/r/review1_1-4871/), demonstrate that GradPS outperforms the other methods after all algorithms converge.

## Code Base

We already stated directly in the previous rebuttal that all algorithms in the paper are implemented fairly on the same repository (Pymarl, DMIX, and RMIX). We **had conducted** multiple GradPS experiments on **Pymarl3** in the previous rebuttal. In this rebuttal, we **implement GradPS and the other methods fairly on Pymarl3** and run multiple (more than 6) sets of experiments. The experimental results are shown in the Pymarl3 folder of the [anonymous link](https://anonymous.4open.science/r/review1_1-4871/). GradPS performs better than the competing methods.
**These results indicate that GradPS works effectively on both Pymarl and Pymarl3.**

## The Necessity of "Futile"

>Theoretical and experimental results are needed to show difference and relation between "dormant" and "futile".

We would like to point out that the ICML 2023 Oral paper [2], which introduces the concept of dormant neurons in RL, is an empirical study: it does not involve any theoretical proofs. We follow their approach, which focuses on empirical discovery. **We show that futile neurons are different from dormant neurons both conceptually and empirically.** We described the conceptual difference between dormant and futile neurons in a detailed table in the previous rebuttal.

In this rebuttal, we report the percentage of dormant neurons and futile neurons in agent networks with VDN during training in three simple payoff matrix games (described in Sec. 4.3 and Appendix D.2.3). Moreover, we evaluate the MSE of reconstructing the payoff matrix when using ReDo [2] and GradPS, the methods developed to reduce the percentage of dormant neurons and futile neurons, respectively. The default parameters of ReDo and GradPS are used. The results are shown in the Matrix Game folder of the [anonymous link](https://anonymous.4open.science/r/review1_1-4871/), with the results for each matrix game located in separate sub-folders.

For the $f_2 = f_1 \times 0$ and $f_2 = f_1 \times 0.5$ games, the Neuron Percentage graphs show that dormant neurons gradually decrease and stabilize at fixed values (around 10%), whereas futile neurons rise to 100% after a few thousand environment steps. For the $f_2 = f_1 \times -1$ game, the percentages of dormant and futile neurons fluctuate around 30%-40% and 80%-90%, respectively. **These figures show that the percentages of the two types of neurons differ throughout training.**

For all three matrix games, the MSE of the reconstructed payoff matrix is shown in the MSE.pdf figures.
These graphs show the MSE for the original VDN, VDN with ReDo, and VDN with GradPS. When using ReDo with VDN, the MSE does not change significantly compared to plain VDN. However, when using GradPS with VDN, the MSE drops significantly. For these matrix games, in terms of reducing MSE, GradPS, a futile-neuron-based method, works better than ReDo [2], a dormant-neuron-based method. **These findings further highlight the significant difference between the two types of neurons.**

## Presentation of Grouping Method

>My original review means the grouping method should not be presented in only several purely textual sentences without any equations.

We had presented the grouping method through text and equations (Sec. 5.1 and Appendix C.3), a figure (Figure 6), and an algorithm (Algorithm 1). We will improve the presentation of our work.

## More Baselines

To address the concerns of Reviewer EAb8, we have compared GradPS against AdaPS and MADPS in Pymarl. The results, shown in the PS-baseline folder of the [anonymous link](https://anonymous.4open.science/r/review1_1-4871/), indicate that GradPS performs better than both.

[1] The StarCraft Multi-Agent Challenge.
[2] The Dormant Neuron Phenomenon in Deep Reinforcement Learning, ICML 2023 (Oral).
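To make the comparison concrete, the two neuron criteria can be phrased as an activation statistic (dormant, following the normalized-activation score of ReDo [2]) versus a gradient statistic (futile). The numpy sketch below is illustrative only: `conflict_fraction` is a simplified sign-conflict stand-in, not the exact criterion used in GradPS.

```python
import numpy as np

def dormant_fraction(activations, tau=0.025):
    """Fraction of tau-dormant neurons, ReDo-style: a neuron is dormant
    when its mean |activation|, normalized by the layer-average mean
    |activation|, is at most tau.  activations: (batch, n_neurons)."""
    mean_abs = np.abs(activations).mean(axis=0)
    scores = mean_abs / (mean_abs.mean() + 1e-9)
    return float((scores <= tau).mean())

def conflict_fraction(per_agent_grads):
    """Fraction of neurons whose per-agent gradients disagree in sign,
    i.e. neurons pulled in opposite directions under parameter sharing.
    per_agent_grads: (n_agents, n_neurons)."""
    pos = (per_agent_grads > 0).any(axis=0)
    neg = (per_agent_grads < 0).any(axis=0)
    return float((pos & neg).mean())
```

Tracking both fractions over training, as in the Neuron Percentage graphs, is what shows that the two populations of neurons evolve differently.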
Graph-Supported Dynamic Algorithm Configuration for Multi-Objective Combinatorial Optimization
Accept (poster)
Summary: The work proposes an extension to the dynamic algorithm configuration (DAC) framework. Particularly, the work proposes to use graph convolutional neural networks to learn embeddings of Pareto fronts such that multi-objective combinatorial optimization algorithms are dynamically optimized while proposing novel solution candidates. The work evaluates the proposed GS-MODAC approach on two different problem domains with various problem sets and shows that the approach is capable of outperforming classical, static algorithm configuration as well as a multi-agent-based dynamic algorithm configuration approach.

## update after rebuttal

I have read all reviews and am keeping my score.

Claims And Evidence: The claimed contributions are mostly convincing and seem well supported by the empirical evidence provided in the main part of the paper, further complemented by analysis and experiments in the appendix. Some wordings seem questionable to me.

* Line 025-031 (right side): the statement reads as if "general DAC" approaches are primarily limited to multi-objective continuous optimization. However, this is not the case, as other works show that RL-based DAC can be applied in a variety of problem domains (see, e.g., the works by Speck et al. [2021](https://arxiv.org/abs/2006.08246) or [2022](https://andrebiedenkapp.github.io/assets/pdf/paper/22-PRL-DAC4AIPlanning.pdf), which highlight that DAC can theoretically outperform the best static configurator and selector while providing additional empirical evidence that it is capable of beating a static oracle).
* Line 101-104 (right side): "Unlike static configurations, DAC aims to balance exploration and exploitation, increasing the likelihood of finding high-quality solutions." This statement is only partially true. A static configuration can contain hyperparameters of a schedule that influences how the underlying algorithm behaves.
A simple example of this is the exponential decay of learning rates in DL, which is often configured via a static configuration but dynamically influences the learning behaviour of the target task.

The claim of the reward function being instance-invariant does not seem well supported to me. While I agree that the reward scale is instance-invariant due to the used normalization method, the reward space as such is still instance-dependent.

Lastly, from the text it is not clear to me what the optimization objectives of SMAC and irace are. I believe it is necessary to have a statement showing that SMAC and irace optimize the same normalized objective, to highlight that the GS-MODAC approach truly outperforms such strong static baselines.

A small-ish nitpick about the claims: the work discusses generalization to unseen instances, though they are always part of the same domain. Thus the claims about generalization might be reframed from this viewpoint. No claims can be made about cross-domain generalization, though the work definitely shows that the learned policies can handle a diverse set of problems from the same domain.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the application at hand. Since, to the best of my knowledge, the standard DACBenchmark does not have any relevant benchmarks, it would be good to see the authors make their benchmark publicly available (potentially as part of DACBench) to facilitate future reproducibility and additional work in this direction.

Another small nitpick: a random configuration policy as a baseline would have helped to highlight how meaningful dynamic configuration actually is in the proposed problem setting, as the work otherwise does not provide insights into the learned configuration policies.

Theoretical Claims: N/A

Experimental Designs Or Analyses: See my remarks in "Claims And Evidence".
Additionally, in RQ4 I do not understand the experimental setup and what is actually being investigated. Does the "GS-MODAC model" refer to the learned policy or to the graph convolutional neural network? More explanation is needed to understand what we can learn from this experiment and what the results in Table 4 actually represent.

Supplementary Material: I read the appendix (Appendix D is wrongly referenced on line 373, right column; the reference should instead be Appendix C) to check the additional experiments, though I did not read everything in detail.

Relation To Broader Scientific Literature: The literature is mostly well covered, though some of the insights about the generalizability of learned DAC policies should be better set into context with related DAC works. (See the next section.)

Essential References Not Discussed: The works by Speck et al. [2021](https://arxiv.org/abs/2006.08246) & [2022](https://andrebiedenkapp.github.io/assets/pdf/paper/22-PRL-DAC4AIPlanning.pdf) that I also referred to earlier similarly showed that domain-dependent policies (i.e., DAC policies learned in a similar fashion to the work under review) can generalize fairly well to larger problem instances, though with some performance drop-off. Relatedly, Shala et al. [2020](https://ml.informatik.uni-freiburg.de/wp-content/uploads/papers/20-PPSN-LTO-CMA.pdf) showed that DAC policies can generalize to longer horizons and larger problem dimensions, where the former seemed to be the more limiting factor. Further, the method used by Shala (i.e., Guided Policy Search) was specifically selected to be reward-scale invariant in order to facilitate better generalizability (see the discussion in the appendix). Lastly, a work by Bordne et al. ([2024](https://arxiv.org/abs/2407.05789)) discussed how to exploit structure in the configuration space to facilitate better DAC policies.
I believe this idea of exploring ways of exploiting structure in DAC problems is related to the idea presented in this work, where structure in the state space is exploited for better DAC policy learning.

Other Strengths And Weaknesses: The work provides a very elegant solution to the DAC problem and should be of high interest for the AutoML community, but also for the RL community, as the proposed setting and how to deal with the state space could inform research directions in this community as well. A weakness is that the work presents itself as being only applicable to multi-objective combinatorial optimization problems. I believe this is not the case: in its current form, the GS-DAC approach should be applicable to a wide variety of multi-objective problems. There would not even be a need for big changes in the algorithm; just the action space for PPO might change. Further, the GS-DAC approach also seems to not be limited to the multi-objective case. While it is highly applicable to this setting, the idea of using graph (convolutional) neural networks to learn better state embeddings could be used in a broader variety of problems. For example, I don't see why it would not be possible to use the GS-DAC idea and apply it directly in the setting described by Bordne et al. (2024).

Other Comments Or Suggestions: N/A

Questions For Authors: Will you make the benchmark available as part of the DACBenchmark?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal: Thank you to the reviewer for their thoughtful review and helpful suggestions. We have provided our responses to the comments below.

**Statement general DAC**
Our intention was to convey that DRL-based DAC approaches are primarily applied to single-objective (continuous) optimization problems. We will clarify this statement in the final version to prevent any misunderstanding. Additionally, we will include references to Speck et al. (2021, 2022) to highlight the capability of DAC methods to outperform the best static configurators in planning domains.

**DAC aims statement**
Thanks for this remark. We will update the statement to reflect that DAC aims to better balance exploration and exploitation by dynamically reconfiguring parameters throughout the optimization process.

**Reward scale is instance-invariant**
By normalizing the search space, we ensure that a similar reward signal can be obtained for the different instances included in training. As such, we prevent training from overfitting on instances where a larger difference in hypervolume could be obtained more easily. We agree with the reviewer that instance-invariant is a better word choice to describe this and will update it in the final version.

**Objectives SMAC and irace**
irace and SMAC are configured to optimize for hypervolume, which we consider the main objective for our multi-objective optimization problems. To ensure clarity, we will explicitly state this in the baseline descriptions.

**Cross-domain generalizability**
We indeed test the ability of trained models to generalize to different problem configurations (larger problem sizes, more constraints, and different objectives) from the same problem domain. We chose this setup to account for the problem-specific operators used in the algorithmic setups that the trained model learns to dynamically configure.
We believe we make no cross-domain generalization claims and will verify this again in the final version.

**Random Configuration baseline**
Thanks for this remark. We will include this baseline in the tables to better illustrate its comparative performance.

**RQ4 Clarification**
In RQ4, we investigate the extent to which a policy trained on one set of objectives (objectives A and B) can generalize to a different set of objectives (C and D). Specifically, objectives A and B correspond to Makespan and Balanced Workload, while objectives C and D refer to the Average and Maximum Workloads of machines in this experiment. The results demonstrate that models can be transferred to the problem configured with different objective configurations, finding solutions of similar or better quality than the configured baselines. To improve clarity, we will update the caption for GS-MODAC in Table 4 to explicitly indicate that it was trained on different objectives. Additionally, we will revise the text to ensure a clearer explanation of what this experiment aims to assess.

**Appendix reference**
Thanks for noticing. We will update the reference in the final version.

**References**
Thank you for providing these relevant references. We appreciate the insights from Speck et al. (2021, 2022), Shala et al. (2020), and Bordne et al. (2024) regarding the generalization capabilities of domain-dependent DAC policies. We will incorporate these works into our literature review.

**Methods applicability**
The reviewer correctly remarks that the proposed method can also be applied to a wide variety of multi-objective problems outside the combinatorial optimization domain. In future work, we would like to explore how graph embeddings (and GNNs) from the proposed method can be used in single-objective problem settings (e.g., in the setup of Bordne et al.).
**DACBenchmark**
We plan to release our source code upon acceptance and will contact the authors of the DACBenchmark to see how best to integrate with it.
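Since hypervolume is the objective irace and SMAC are configured for (see "Objectives SMAC and irace" above), it may help to spell out what is being maximized. Below is a minimal sketch for the bi-objective minimization case; it is illustrative only, not the implementation used in the paper, and `ref` is a reference point assumed to be componentwise no better than any solution.

```python
def hypervolume_2d(points, ref):
    """Area dominated by `points` (both objectives minimized) relative
    to reference point `ref`; each point must be componentwise <= ref."""
    hv, prev_f2 = 0.0, ref[1]
    # Sweep by increasing first objective; every point that improves the
    # second objective contributes one rectangular slab, and dominated
    # points are skipped automatically.
    for f1, f2 in sorted(points):
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv
```

For example, the front {(1, 3), (2, 2), (3, 1)} with reference point (4, 4) has hypervolume 6.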
Summary: This work considers the problem of dynamically configuring the parameters of evolutionary algorithms for solving combinatorial optimization problems. It specifically focuses on multi-objective optimisation problems, in which there are multiple (and often conflicting) objective functions. Unlike previous methods that rely on hand-engineered features, the authors propose encoding the properties of currently found solutions as a graph, and using a graph neural network to learn the relevant features automatically. The method is compared with statically configured EA algorithms, a Bayesian optimisation approach, and a recent RL method, generally showing better performance. Claims And Evidence: The claims in the abstract and introduction are supported by appropriate evidence in the paper text. Methods And Evaluation Criteria: - My main criticism of the method is that the procedure for constructing the graph is not actually given. This should be specified precisely -- based on Figure 1, it involves some form of determining datapoints on the Pareto front, and allowing all pairwise connections between them. - There is only one Pareto front at the end of the optimisation process, but the graph representation seems to rely on several intermediate ones. It should be specified how/when these intermediate fronts are determined. - Following on from the above, there do not seem to be clear semantics associated with the use of a graph here. The edges in this graph may not have a meaningful representation (but this is difficult to say without knowing how the graph is constructed). In my opinion, to validate this, an ablation should be conducted that compares GNNs with DeepSets (another permutation invariant learning method), effectively excluding the edges from the feature learning process. 
Theoretical Claims: N/A Experimental Designs Or Analyses: - Default hyperparameters are used for the MADAC baseline, but it is applied to a different type of problem altogether than in the original paper. The hyperparameters of both the proposed method and MADAC should be tuned for the considered problems to make the comparison fair. An alternative explanation for the observed performance difference is that the hyperparameters of MADAC are wrongly chosen. - Multiple runs of the training loop: the training of the RL agent is itself a random process and should be repeated over multiple seeds. As far as I can tell (and the authors should clarify if this is not so), the models for each problem setting are trained only once; then error bars are given over 10 runs of the base EA algorithm but keeping the same RL model. Again, an alternative explanation is that this one model happened to obtain better performance. Supplementary Material: I have scanned the supplementary material for details that would answer my comments above. I did not read the supplementary material in full. Relation To Broader Scientific Literature: If validated successfully, this represents a contribution to the literature of hybrid methods combining RL and metaheuristics as well as multi-objective optimization. Essential References Not Discussed: To the best of my knowledge, all essential references in this area are discussed. Other Strengths And Weaknesses: The paper in general is fairly well-organised and well-written. Other Comments Or Suggestions: - Section 2: it is worth mentioning why MADAC is multi-agent: what is each of the agents responsible for? - Section 3: it is worth discussing in more detail why transitions and reward functions differ for each problem instance in the MDP. Questions For Authors: Key questions to address include: 1. The rationale for using a graph representation and the detailed procedure for constructing it. 2. 
The potential issues that I raised around the evaluation methodology. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1:

Rebuttal: Thanks for the insightful feedback.

**Graph Configuration and Rationale**
The graph serves as a representation of the solutions in the current population on multiple objective planes. The values used as node features consist solely of the objective values of the different solutions in the population. These values are normalized based on the best objective values obtained during the search and the worst values observed in the first generation of solutions. The nodes are interconnected to form a graph using Non-Dominated Sorting, which ranks them into different Pareto fronts; nodes within the same front are interconnected with each other.

The motivation behind the graph representation is to eliminate the need for manual state-space design, a process known to be cumbersome and suboptimal. We expect our method to leverage the graph-based representations to dynamically learn similar (yet more advanced) features during the optimization process, reflecting the current state in the multiple objective planes. Therefore, we plot and interconnect solutions on the plane, which intuitively allows the GCN to understand the status of the solutions and helps configure the algorithm for better convergence and diversity. We will extend the explanation of the graph construction.

**Pareto Fronts Identification**
The representation captures all solutions present in the population at a given iteration of the search. The graphs are constructed by identifying the Pareto fronts, which serve as structured layers of solutions based on their dominance relationships. To achieve this, we rely on Non-Dominated Sorting, a method that ranks solutions into different fronts. The first front consists of the non-dominated solutions, while subsequent fronts contain solutions that are dominated by those in higher-ranked fronts.
Solutions within the same Pareto front are interconnected by edges, whereas no connections exist between different fronts, maintaining their distinct structure.

**DeepSets**
We have conducted an ablation in which we adapted our architecture to use a DeepSets approach, which excludes the edges and processes the nodes independently using shared MLPs. The results below show a significant decline in performance for DeepSets, highlighting the importance of the graph structure and the interdependencies between nodes for effective learning.

**Bi-FJSP-25j5m**

||mean|max|std|
|:-:|:-:|:-:|:-:|
|MADAC|9.24*10^4|9.27*10^4|3.09*10^3|
|GS-MODAC (No feature)|9.47*10^4|9.92*10^4|4.21*10^3|
|GS-MODAC (One GCN)|9.38*10^4|9.88*10^4|4.87*10^3|
|GS-MODAC (DeepSets)|9.39*10^4|9.91*10^4|4.84*10^3|
|GS-MODAC|9.54*10^4|10.0*10^4|4.40*10^3|

**Hyperparameter Configuration**
Although we applied MADAC to a different domain than in the original paper, we chose to use its default configuration because it has been shown to perform well across a wide range of objective spaces, including smooth and convex, disconnected, multimodal, and non-convex problems. This versatility, as highlighted in the original MADAC paper, supports our decision. Likewise, we use the default hyperparameters of the PPO algorithm when training GS-MODAC. By maintaining these default settings, we ensure a fair and consistent comparison, avoiding potential biases that could result from excessive tuning of either method.

**Seeds setting training**
In our experiments, we trained multiple models (three) for each method and each problem configuration using different seeds. We observed that GS-MODAC consistently exhibited stable training performance across different seeds, reinforcing the robustness of our approach. In contrast, MADAC showed a larger variance in learning performance across different runs. Due to this instability, we selected the best-performing MADAC models. We will add this information to our final publication.
Additionally, we trained separate models for each problem configuration, and these models consistently outperformed the baselines across the different configurations. This also indicates the stability of our method, rather than performance by chance.

**Multi-Agents MADAC**
MADAC involves multiple heterogeneous agents, each responsible for adjusting a specific type of parameter within an algorithm; its action is the value to which that parameter should be adjusted. We will clarify this point in the definition of MADAC.

**Transitions and reward functions**
We formulate the DAC process as a contextual MDP (Biedenkapp et al., 2020). Here, the transition function models how the algorithm's state evolves after an action is taken. For different problem instances, the algorithm may encounter different search landscapes, which means that the same parameter change could lead to different transitions depending on the instance at hand. Similarly, the reward function reflects the quality of the transition. Since different problem instances may require different configurations to optimize performance, the reward function must account for these differences.

---

Rebuttal Comment 1.1:

Comment: Thanks for your detailed response. Based on the clarifications, the problems with the experimental methodology are worse than I thought; it is effectively fundamentally flawed. Since RL algorithms are known to be highly sensitive to hyperparameters, and also exhibit significant variability even with the same hyperparameters, the methods must undergo tuning first. Furthermore, the authors are doing "random seed optimization" and cherry-picking the best-performing RL models. This makes the conclusions of the experimental part completely unreliable. You can check [1,2] for details on how and why these aspects are important. Unfortunately, I have little choice but to downgrade my rating to Reject.
While there are hints that the results would hold when the evaluation is carried out properly, we cannot draw reliable conclusions based on this methodology.

[1] https://arxiv.org/abs/1709.06560
[2] https://proceedings.neurips.cc/paper_files/paper/2021/file/f514cec81cb148559cf475e7426eed5e-Paper.pdf

---

Reply to Comment 1.1.1:

Comment: We appreciate your critical feedback. We respectfully disagree with the claim that our conclusions are unreliable and would like to clarify why we believe our methodological choices were reasonable and justified.

**Hyperparameter Configuration:**
We fully agree that reinforcement learning (RL) algorithms can be sensitive to hyperparameters. However, our choice to use the default hyperparameters for both MADAC and PPO (used in our proposed method, GS-MODAC) was not an oversight but a deliberate decision to ensure fairness and comparability. We rely on the extensive prior evaluation in the original MADAC paper, which demonstrated that its default configuration generalizes well across a wide range of problem types, including smooth, convex, disconnected, multimodal, and non-convex landscapes. Additionally, PPO's default settings have been shown to work well across many tasks, and by maintaining these default settings we ensure a fair and consistent comparison. Besides this, the primary contribution of our method lies in its ability to perform well across a diverse range of multi-objective problems, rather than being optimized for any specific benchmark. By using default configurations and avoiding extensive tuning, we aimed to evaluate the generality and robustness of the approach in a controlled and neutral setting, highlighting its cross-domain applicability without relying on problem-specific adjustments (as also noted by Reviewer 4 (qYda)).

**"Random Seed Optimization":**
We want to emphasize that we did not perform "random seed optimization" or cherry-pick results.
For GS-MODAC, we reported results from training configurations across multiple random seeds (three per configuration), consistently observing low variance and stable performance. This robustness across seeds is an important strength of our method. In contrast, we observed that the MADAC baseline exhibited high variance in performance across seeds. Rather than ignoring this behavior, which is in itself a valuable finding, we chose to report the results obtained with the best MADAC models, as this reflects the algorithm's potential in practice rather than its average-case performance under instability. We agree this could have been explained better in the paper, and we will make this clearer in the final version. However, the difference in stability between GS-MODAC and MADAC is itself a meaningful result.

To further support this remark, we regenerated results for the Bi-FJSP 5j5m problem configuration using all trained models for both GS-MODAC and MADAC. The table below summarizes these results, highlighting both the central tendency and variability of each method. GS-MODAC not only consistently outperforms MADAC but also shows significantly less performance fluctuation across random seeds. We also observe that GS-MODAC, as demonstrated by the results in the paper, maintains strong and competitive performance against all baselines, regardless of the selected model. These results also show that the performance we reported for GS-MODAC is representative of its general behavior and not the result of picking only the best runs. To fully eliminate any doubts, we will release our source code and all trained models upon acceptance.
| | mean performance | max performance |
|---|---|---|
| NSGAii | 1.87*10^4 | 2.02*10^4 |
| irace | 1.92*10^4 | 2.04*10^4 |
| SMAC3 | 1.91*10^4 | 2.04*10^4 |
| MADAC (worst performing model) | 1.70*10^4 | 1.92*10^4 |
| MADAC (average performance) | 1.74*10^4 | 1.93*10^4 |
| MADAC (best model) | 1.82*10^4 | 1.95*10^4 |
| GS-MODAC (worst performing model) | 1.92*10^4 | 2.04*10^4 |
| GS-MODAC (average performance) | 1.92*10^4 | 2.04*10^4 |
| GS-MODAC (best model) | 1.93*10^4 | 2.04*10^4 |

We will strengthen the discussion of these choices in the paper to address any ambiguity.
Summary: This paper proposes a graph neural network (GNN)-based deep reinforcement learning method to dynamically optimize the configurations of multi-objective combinatorial optimization algorithms. The proposed model takes the normalized multi-objective values as input and uses the GNN to learn to iteratively evolve the algorithm configurations. The authors evaluated the model's performance on both scheduling and routing problems.

Claims And Evidence: 1. The key difference between the proposed method and previous works is that the proposed model uses a neural network to represent the objective space as an embedding for optimization instead of relying on manual feature engineering. However, in L206, left column, the details of mapping the objective space to graphs are not clearly presented. What values exactly are processed to convert the objective space into graphs? Not explaining this may hurt the claim of automated embedding.

Methods And Evaluation Criteria: 1. As we know, the transformer is also a powerful tool for representing structured embeddings and complex structures. Even though in Table 8 the authors show that the GCN structure is experimentally superior to the transformer and GAT structures, there is no explanation or inspiring hint behind this choice of neural network structure, which may limit the contribution to the community. I suggest that the authors discuss why a graph structure with the pair-wise embedding technique is necessary to represent the objective space.

Theoretical Claims: There is no theoretical proof in the paper.

Experimental Designs Or Analyses: 1. From Table 9, we can see that the optimization solver takes most of the running time, which means the solver will seriously limit the training speed; as we know, reinforcement learning generally has low sampling efficiency. Considering the training time reported in L283, right column, I would worry about the practical value of this pipeline.
Could the authors also report the MADAC model's training/running time as a reference comparison?

Supplementary Material: I have read through all the supplementary material, including the definitions of the targeted scheduling and routing problems, the multi-objective problem setting, the ablation studies, and the running-time analysis.

Relation To Broader Scientific Literature: This work may inspire the CO community to use neural networks to optimize heuristic solvers' configurations.

Essential References Not Discussed: None from my side.

Other Strengths And Weaknesses:

Strengths:
1. The evaluation is sufficient and solid, including the ablation study for components and neural network structure, and the generalization testing across problem sizes and types.
2. The paper is well-structured, and the logic is clear, making the paper easy to follow.

Weaknesses:
1. L206, left column: what inputs exactly make up the objective space is not clearly shown. Adding some equations to present the states would be beneficial.
2. The community would benefit if the authors provided the source code for validating and following up on this work.

Other Comments Or Suggestions:
1. [Minor] Figure 1: I think there should be a connection from the "Problem Instance" cell to the "Algorithm" cell.
2. [Minor] L724: an extra space for the $0.3$%.

Questions For Authors:
1. L214, right column: what does "demanding" mean here? As I understand it, does this mean "optimization demanding"?

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for these insightful points. Here are our detailed responses. **Motivation behind the graph structure** The motivation behind adopting a graph representation is to eliminate the need for manual state space design, a process known to be cumbersome and suboptimal. In this context, we use a graph structure to dynamically represent the relationships between solutions in the multi-objective optimization process. Specifically, the pair-wise embedding technique captures the relationships and distances between solutions, which allows the model to learn richer and more complex features of the objective space over time. We draw inspiration from various metrics for convergence and diversity in multi-objective optimization, such as the number of elite solutions, spacing between solutions, and hypervolume. These metrics reflect how well the optimization process is progressing across multiple objective planes. By using a graph representation, we can naturally plot and interconnect solutions on the objective space plane, which intuitively enables the GCN to understand the relative position and status of each solution. Results in Table 8 indicate that GCN is most effective, probably due to its ability to effectively capture local structural dependencies of the objective space. However, transformers are an effective alternative, with average performance being only 0.3% worse than GCN and its best-found solutions only 0.5% worse. Please refer to our response to reviewer F2sV, where we performed an additional ablation to highlight the effect of interconnecting the different solutions in our graph representation. **Solver time and sampling Efficiency** We agree that the optimization solver’s contribution to the overall running time is significant, a challenge common to all parameter tuning methods, whether static or dynamic. 
As demonstrated in Tables 2 and 3, GS-MODAC effectively generalizes to larger instances and more complex problem variants, which avoids re-training/fine-tuning and saves solver computation time (and, consequently, training time). Furthermore, we show that GS-MODAC performs well across a variety of distributed problem instances, highlighting its ability to be trained once as a one-time investment. This enables the method to maintain strong performance on new problem configurations, including those not encountered during training, without the need for retraining. **MADAC training and running time comparison** In terms of inference time, we observe that MADAC takes 15.5s for Penta-FJSP (5j5m) and 305s for Penta-FJSP (25j5m), slightly longer than GS-MODAC (see Table 9). We will add this to Table 9 as a reference comparison. We have reported on the training time of MADAC in the “Training” subsection. GS-MODAC takes slightly longer to train than MADAC on CVRP, since the latter (with its original training settings) converged quickly to a sub-optimal policy. In contrast, GS-MODAC can continuously improve the policy with more time. MADAC was trained until full convergence on the FJSP problem variants, taking more training time than the proposed method. **Inputs in the objective space** The values used as node features in the graph consist solely of the objective values from the different solutions in the population. These values are normalized based on the best objective values obtained during the search and the worst observed values from the first generation of solutions. The nodes are interconnected to form a graph using Non-Dominated Sorting, which ranks them into different Pareto fronts. The first front consists of non-dominated solutions, while subsequent fronts contain solutions that are dominated by those in higher-ranked fronts. Nodes within the same front are interconnected with each other. 
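The construction described above can be sketched as follows (an illustrative reconstruction, not the authors' released code; all objectives are assumed to be minimized, and a simple $O(n^2)$ non-dominated sort stands in for whatever implementation the authors use):

```python
import numpy as np

def non_dominated_sort(F):
    """Rank points (rows of F, minimization) into successive Pareto fronts."""
    remaining = set(range(len(F)))
    fronts = []
    while remaining:
        # A point joins the current front if no other remaining point dominates it.
        front = sorted(i for i in remaining
                       if not any(np.all(F[j] <= F[i]) and np.any(F[j] < F[i])
                                  for j in remaining if j != i))
        fronts.append(front)
        remaining -= set(front)
    return fronts

def build_state_graph(F, f_best, f_worst):
    """Node features: objective values min-max normalized by the best values
    found during the search and the worst values of the first generation.
    Edges connect nodes that lie in the same Pareto front."""
    X = (F - f_best) / (f_worst - f_best)
    fronts = non_dominated_sort(F)
    edges = [(i, j) for front in fronts for i in front for j in front if i < j]
    return X, fronts, edges

F = np.array([[1, 4], [2, 3], [3, 2], [4, 1], [3, 3], [4, 4]], dtype=float)
X, fronts, edges = build_state_graph(F, F.min(axis=0), F.max(axis=0))
```

In this toy example the four mutually non-dominated points form the first (fully interconnected) front, while the two dominated points fall into singleton second and third fronts.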
The problem-specific calculations for obtaining objective values are detailed in Appendix A. The Non-Dominated Sorting is performed based on Definition 3. We will add the equation for this to the paper. **Source code release** We provide our code in the supplementary material and will indeed release our source code upon acceptance. **Minor points in Figure 1 and L724** Thanks for noticing; we will update the figure and value accordingly. **Demanding** In this context, "demanding" means that evolving toward the Pareto front becomes increasingly challenging or difficult as the search progresses. Early in the optimization process, improvements are relatively easy to achieve, but as the algorithm gets closer to the optimal Pareto front, making further progress requires (or demands) significantly more effort/computational cost.
Summary: This paper presents a DRL approach for dynamically configuring evolutionary algorithms in multi-objective combinatorial optimization. The process is modeled as a Markov decision process, where solution convergence is represented as a graph, and a GNN enhances state representation. Experimental results show that the proposed method outperforms traditional and DRL-based approaches in efficacy and adaptability across objective numbers. Claims And Evidence: No, the paper provides an insufficient survey of reinforcement learning-assisted evolutionary algorithms for multi-objective combinatorial optimization. Review papers in this field indicate the existence of such approaches, yet they are not discussed in the paper. Consequently, the absence of comparisons with state-of-the-art algorithms undermines the evaluation of the experimental study. Methods And Evaluation Criteria: Yes, the multi-objective Flexible Job Shop Scheduling Problem (FJSP) and the Capacitated Vehicle Routing Problem (CVRP) are used to evaluate the effectiveness of the proposed method. As classic multi-objective combinatorial optimization problems, they serve as meaningful benchmarks for assessing the algorithm’s performance. Theoretical Claims: Yes, this paper is primarily based on empirical study and does not include rigorous mathematical proofs. Experimental Designs Or Analyses: Yes, there is one point I am unclear about. According to the experimental setup in Chapter 4, the proposed method requires longer training times than the baseline. How is the fairness of the algorithm comparison ensured under these circumstances? Supplementary Material: Yes, but I only had a glance at the code provided in the supplementary material. Relation To Broader Scientific Literature: This paper models the dynamic algorithm configuration of a multi-objective evolutionary algorithm as a Markov decision process. 
A graph represents the convergence of solutions in the objective space, and a GNN learns their embeddings to enhance the state representation. Essential References Not Discussed: The literature on reinforcement learning-assisted evolutionary algorithms for multi-objective combinatorial optimization is not discussed. Other Strengths And Weaknesses: 1. The reward function is highly dependent on the hypervolume calculation and relies on the hypervolume of the previously observed population. However, the hypervolume value is strongly influenced by the reference point, and the results can vary significantly depending on the reference point. Moreover, as the population evolves dynamically, the reference point set based on the initial population may no longer be appropriate. This raises concerns about the effectiveness of the proposed method. 2. The metrics used to define the states in the MDP are specified, but their detailed calculations are not clearly explained. Other Comments Or Suggestions: 1. The state configuration in Figure 1 is not intuitive. Additionally, the DRL agent should be depicted more clearly in the diagram compared to the conventional flow of evolutionary algorithms. 2. "NSGA-ii" should be corrected to "NSGA-II." Questions For Authors: 1. Which points are used for the normalization of the objective function? How does this calculation account for changes in the population? 2. How is fairness maintained in the experiments when comparing different types of algorithms? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer's effort and insightful feedback. Here are our detailed responses. **Insufficient survey of DRL methods for assisting MOEAs** To the best of our knowledge, our literature review (Section 2, L137) includes most existing works that propose frameworks for controlling multi-objective evolutionary algorithms in a way that generalizes across different algorithms and problems. In recent surveys (Yang et al., 2024; Ma et al., 2024) that were cited in our paper, most reinforcement learning-based multi-objective approaches are designed for specific evolutionary algorithms (mainly differential evolution), aim to configure target weight adjustments in MOEA/D, or focus solely on operator selection. Given this, we have chosen to compare our approach with MADAC, which we identify as the most relevant state-of-the-art algorithm and the closest work to our proposed method. Following the reviewer's suggestion, we have cited another recent survey paper (Song et al., 2024) to point interested readers to related work on reinforcement learning-assisted evolutionary algorithms for (multi-objective) optimization. We appreciate any additional references recommended by the reviewer and would be happy to incorporate them into our paper. Reference: Song, Y., Wu, Y., Guo, Y., Yan, R., Suganthan, P. N., Zhang, Y., ... & Feng, Q. (2024). Reinforcement learning-assisted evolutionary algorithm: A survey and research opportunities. Swarm and Evolutionary Computation, 86, 101517. **Reference point influence** We acknowledge that the choice of the reference point can influence absolute hypervolume values. However, our approach relies on the relative improvement in hypervolume rather than its absolute magnitude. As long as the reference point allows for meaningful differentiation between better and worse populations during the optimization process, the reward function remains effective. 
Besides this, in order to address variations in absolute hypervolume values across instances, we normalize the reward signal using the reference point and an ideal point. This normalization ensures a stable and consistent reward signal throughout the optimization process and across different training instances. **State space metric calculations** The graph serves as a representation of the solutions in the current population on the multiple objective planes. This approach involves interconnecting normalized objective points in the different Pareto fronts to create a structured visualization of the solution space. We have specified the calculations of the different objectives in Appendix A. For normalization, we use regular min-max normalization, given the bounds of the objectives known at a certain iteration (as described in L215). Pareto fronts are identified using non-dominated sorting, following Definition 3. Following your suggestion, we will extend the calculation details in our paper. We will also release the implementation for better understanding. **State Configuration in Figure 1** Based on the above comments of the reviewer, we will provide more explanation of the state-space configuration in the caption of the figure to make it more intuitive. Additionally, we agree that this work leans more towards the DRL domain, and we will enhance the DRL agent part of the diagram so that it is presented more clearly than the evolutionary algorithm flow. We will centralize the DRL agent, provide more details about it, and simplify the evolutionary algorithm. **Points for normalization** The points used for the normalization of the objective function are the minimum and maximum values of each objective known at a given iteration. These bounds are recalculated at each step, ensuring that they reflect the changes in the state of the population. 
As such, we keep track of the entire objective space by keeping the worst objective values (typically present in the first generation) and updating the minimum values when better objective values are obtained. As noted above, we will clarify this in the paper. **Algorithmic Fairness** To ensure a fair comparison, we set algorithmic parameters, such as population size and computational budget, to be identical across all compared methods. Additionally, to maintain fairness in tuning and training time, we provided the automated algorithm configuration methods with a similar number of runs (e.g., 2000 for FJSP) as the proposed GS-MODAC method. Furthermore, the MADAC baseline method was trained until full convergence to ensure that its full potential was realized, taking more training time than the proposed method on the FJSP problem variants. --- Rebuttal Comment 1.1: Comment: Thank you for your response. It has addressed some of my concerns, and I have accordingly raised my score. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to review our rebuttal and for reconsidering the score. We sincerely appreciate your thoughtful effort and consideration.
Provable Benefit of Random Permutations over Uniform Sampling in Stochastic Coordinate Descent
Accept (poster)
Summary: The paper considers coordinate descent with updates performed on iid coordinates (RCD) versus coordinates chosen by random permutation (RPCD). The permutation updates are empirically known to outperform the iid baseline, but this has lacked theoretical justification. The main result of the paper proves the RPCD method converges faster than RCD for a class of quadratic objectives with constraints on the quadratic form. Claims And Evidence: The claims are well supported by simple experiments measuring convergence and solid proofs. Methods And Evaluation Criteria: The evaluations generally make sense for the setting. One area that is somewhat under-explained is Section 4.1, in particular the algorithm to search for ‘difficult’ PSD matrices, and Section 4.2, where the log-sum-exp term is included for some of the experiments. Both sections would benefit from a bit more explanation; why is that algorithm a better candidate for generating hard quadratic programs than any other matrix distribution? When the authors write that the log-sum-exp terms make the loss less coordinate-friendly, is there a reference or quick explanation they could give for why this should be especially difficult for these algorithms? Intuitively I can look at this function as very non-separable and guess that coordinate descent would have slower convergence, but the plots in Table 3 with or without the LSE terms appear almost identical. I should mention I quite like this section for trying to give empirical ways to stress-test the authors' hypothesis. Theoretical Claims: The results appear correct; I mainly confirmed the analysis in the appendix up to page 19, although I didn’t go through all the details of the very technical follow-up chasing polynomial root identities. Experimental Designs Or Analyses: Yes Supplementary Material: Yes, see above Relation To Broader Scientific Literature: The paper seems to improve on earlier bounds at the expense of stricter assumptions. 
I can appreciate why the problem is difficult as analyzing RPCD requires evaluating n gradient updates at a time. The trick of (essentially) constraining the class of Hessians to the two-dimensional space of matrices invariant to conjugation by permutation makes sense, although it is quite limiting for the problem. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
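To make the two sampling schemes concrete, here is a minimal sketch (ours, not from the paper) of exact coordinate minimization on one permutation-invariant, unit-diagonal quadratic $f(x) = \frac{1}{2}x^{\top}\boldsymbol{A}x$ with $\boldsymbol{A} = \sigma \boldsymbol{I} + (1-\sigma)\mathbf{1}\mathbf{1}^{\top}$, comparing iid coordinate sampling (RCD) against a fresh random permutation per epoch (RPCD):

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, epochs = 10, 0.1, 500

# Unit-diagonal, permutation-invariant PSD Hessian with lambda_min = sigma.
A = sigma * np.eye(n) + (1 - sigma) * np.ones((n, n))

def cd_step(x, i):
    """Exact minimization of 0.5 * x^T A x along coordinate i (A_ii = 1)."""
    x = x.copy()
    x[i] -= A[i] @ x
    return x

def run(schedule):
    x = np.ones(n)
    for _ in range(epochs):
        for i in schedule():
            x = cd_step(x, i)
    return float(np.linalg.norm(x))

rcd_norm = run(lambda: rng.integers(0, n, size=n))  # RCD: iid coordinates
rpcd_norm = run(lambda: rng.permutation(n))         # RPCD: new permutation each epoch
```

Both schemes converge linearly on this instance; per the paper's result, the permutation scheme should reach a given accuracy in fewer epochs on this matrix class.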
Rebuttal 1: Rebuttal: We appreciate the valuable questions and comments, and we express our gratitude for the positive feedback and recognizing the novelty of our work. We provide detailed responses to the questions below. ### Methods And Evaluation Criteria > *The evaluations generally make sense for the setting. One area somewhat under explained is section 4.1 in particular the algorithm to search for ‘difficult’ PSD matrices and section 4.2 where the log-sum-exp term is included for some of the experiments. Both sections would benefit from a bit more explanation; why is that algorithm a better candidate for generating hard quadratic programs than any other matrix distribution? When the authors write that the log-sum-exp terms make the loss less coordinate-friendly, is there a reference or quick explanation they could give for why this should be especially difficult for these algorithms? Intuitively I can look at this function as very non-separable and guess that coordinate descent would have slower convergence, but the plots in Table 3 with or without the LSE terms appear almost identical. I should mention I quite like this section for trying to give empirical ways to stress test the authors' hypothesis.* - To elaborate on the **Algorithmic Search** part in Section 4.1, this method works by using a scipy optimizer (as in Step 4) to maximize the value $\rho(\mathcal{M}\_{\boldsymbol{A}})$, while Steps 1–3 ensure that the matrix $\boldsymbol{A}$ is generated to be inside the desired function class (unit-diagonal PSD with fixed $\lambda_{\min} = \sigma$). The worst case examples are likely to be in a (Lebesgue) measure zero set, which motivated us to use such an optimization approach rather than sampling-based methods. We will add a more detailed explanation in our next revision. 
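One plausible way to implement Steps 1–3 above (our assumption for illustration, not the authors' exact procedure; the Step 4 scipy maximization of $\rho(\mathcal{M}_{\boldsymbol{A}})$ is not shown): sample a random correlation-type matrix and blend it with the identity, which preserves the unit diagonal while moving the smallest eigenvalue exactly onto $\sigma$:

```python
import numpy as np

def random_unit_diag_psd(n, sigma, rng):
    """Sample a unit-diagonal PSD matrix with lambda_min exactly sigma.

    Assumed construction: normalize B B^T to a correlation matrix (unit
    diagonal, PSD), then form (1 - t) A + t I, which keeps the unit diagonal
    and maps every eigenvalue lam to (1 - t) lam + t, so the smallest one
    lands on sigma.
    """
    B = rng.standard_normal((n, n))
    C = B @ B.T
    d = np.sqrt(np.diag(C))
    A = C / np.outer(d, d)                # unit-diagonal, PSD
    lam0 = np.linalg.eigvalsh(A)[0]       # smallest eigenvalue
    t = (sigma - lam0) / (1.0 - lam0)     # solve (1 - t) * lam0 + t = sigma
    return (1.0 - t) * A + t * np.eye(n)

rng = np.random.default_rng(0)
A = random_unit_diag_psd(8, 0.3, rng)
```

Since the blend is a monotone affine map of the spectrum with $t < 1$ for any $\sigma < 1$, the resulting matrix stays symmetric PSD with $\lambda_{\min} = \sigma$.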
- For the quadratic+LSE experiments in Section 4.2, we would like to clarify that our purpose in using the LSE term was simply to construct a non-quadratic example; it is not specially designed as a hard instance for CD algorithms, while we did add the orthogonal matrix $\boldsymbol{Q}$ to see if random permutations are beneficial for the 'non-coordinatewise-separable' cases as well. - We also conducted experiments expanding the range of $\alpha$ up to 100 in the quadratic+LSE setting. Across all experiments, RPCD consistently converged faster than RCD. However, convergence slowed with increasing $\alpha$, and the performance gap between RPCD and RCD narrowed as $\alpha$ increased. (You may also refer to our response to reviewer gqcX.) We provide a link to the experimental results: https://anonymous.4open.science/r/rcd_rpcd-7416/ ### Relation To Broader Scientific Literature: > *The paper seems to improve on earlier bounds at the expense of stricter assumptions.* While we thank the reviewer again for the diligent review and positive response, we would like to carefully point out that our assumptions are not necessarily *stricter* than directly relevant previous works that consider *permutation-based* coordinate descent algorithms. Specifically, [LW19] considers a narrower class of problems than we do (as noted in Section 3.3, Comparison with Previous Work). While [WL17] considers a slightly larger class of quadratics allowing small diagonal perturbations on permutation-invariant Hessians, [GOVW18] also relies on the permutation-invariant Hessian assumption, although they considered negative off-diagonal cases. [LW19] Ching-Pei Lee and Stephen J. Wright. Random permutations fix a worst case for cyclic coordinate descent. IMA Journal of Numerical Analysis, 2019. [WL17] Stephen J. Wright and Ching-Pei Lee. Analyzing random permutations for cyclic coordinate descent. Mathematics of Computation, 2017. 
[GOVW18] Mert Gurbuzbalaban, Asuman Ozdaglar, Nuri Denizcan Vanli, Stephen J. Wright. Randomness and permutations in coordinate descent methods. Mathematical Programming, 2018. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for answering my questions. I will keep my positive score.
Summary: The authors revisit the problem of proving that the coordinate descent method with a random permutation of coordinates (RPCD) is theoretically faster than classical random coordinate descent (RCD). They prove that the asymptotic lower bound on the convergence rate of RCD is worse than the upper bound on the convergence rate of RPCD. Furthermore, they show a strict gap between the convergence rates of RCD and RPCD on the class of quadratic functions with permutation-invariant Hessians. Claims And Evidence: The claims seem well-supported by the provided theory, but I have not independently checked the proofs. Methods And Evaluation Criteria: The proposed class of functions appears adequate and is somewhat accepted in other works (Lee & Wright, Random permutations fix a worst case for cyclic coordinate descent) Theoretical Claims: I have not verified the correctness of the provided claims. Experimental Designs Or Analyses: The experiments seem appropriate for a work with a predominantly theoretical focus. Supplementary Material: No, I did not. Relation To Broader Scientific Literature: Other studies analyze the Lipschitz smoothness of the function as the key parameter driving RCD-related methods (https://arxiv.org/pdf/1805.09185, https://arxiv.org/pdf/2102.07245). However, the current paper is more directly comparable to Lee & Wright, 2019 (https://optimization-online.org/wp-content/uploads/2016/07/5562.pdf). As the authors themselves note, their convergence results for RPCD (Equation 10 in the paper) are weaker than those of Lee & Wright (Equation 3.15 in the linked work). In the appendix, the authors claim to improve the convergence rate and extend its applicability to a broader range of values for $\sigma$. Essential References Not Discussed: This paper (Xu & Yin, 2015, https://epubs.siam.org/doi/epdf/10.1137/140983938) analyzes an RPCD-like algorithm, which performs descent over a subspace rather than exact minimization within it. 
The study establishes a $1/\sqrt{k}$ convergence rate in the convex setting, assuming standard conditions. Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Questions For Authors: The paper conjectures that the benefits of RPCD extend beyond quadratic objectives. Have you considered applying your analysis framework to more general convex functions, such as strongly convex but non-quadratic functions? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We are grateful for the reviewer's valuable questions and rich comments. Below, we have put together our responses to the questions. ### Relation To Broader Scientific Literature > *As the authors themselves note, their convergence results for RPCD (Equation 10 in the paper) are weaker than those of Lee & Wright (Equation 3.15 in the linked work). In the appendix, the authors claim to improve the convergence rate and extend its applicability to a broader range of values for $\sigma$.* We would like to clarify that our RPCD upper bound results in the main paper (Theorem 3.3) are presented in a slightly weaker but more concise form solely for the purpose of readability and easier comparison with the RCD lower bounds, and we have demonstrated in Appendix D that our analysis is not fundamentally weaker than that of previous works. ### Essential References Not Discussed Xu & Yin (2015) analyze an RPCD-like algorithm by fixing the update order without loss of generality. This approach does not fully address the distinction between a randomly permuted coordinate selection and a deterministic, fixed-order update. Nevertheless, thanks for the suggestion, and we will consider adding a discussion in our next revision. ### Questions > *The paper conjectures that the benefits of RPCD extend beyond quadratic objectives. Have you considered applying your analysis framework to more general convex functions, such as strongly convex but non-quadratic functions?* This is a very interesting question. It is true that our framework (or more precisely, the proof techniques) cannot be directly applied to broader function classes, as it all heavily relies on the idea of using matrix operators that act on the 'Hessian-like matrices' of the quadratics. There are two possible directions to generalize to general non-quadratic objectives: * One would be to derive local convergence results for non-quadratics, which we discuss in our response to Reviewer gqcX. 
* Another direction could be to use existing results for other permutation-based algorithms like SGD with random reshuffling (SGD-RR) [MKR20] by basically substituting the finite-sum gradient oracles with the coordinate blocks of the gradients. Unlike SGD-RR, however, CD-type algorithms already enjoy *linear convergence* (even without random permutation) if the function is $\mu$-strongly convex. (This is also related to the analysis of SGD-RR under the interpolation condition by [FTS23], where plugging in $\sigma = 0$ yields linear convergence.) Thus, we cannot benefit from the variance reduction analysis in previous work, and it turns out that directly applying variance reduction analysis from SGD-RR leads to a loose upper bound. It is more likely that the effect of using permutations will be closer to a *preconditioning-like* effect (similar to previous analyses on cyclic CD [SY16]), which is largely different from SGD-RR, and we leave this direction for future investigation. [MKR20] Konstantin Mishchenko, Ahmed Khaled, Peter Richtárik. Random Reshuffling: Simple Analysis with Vast Improvements. NeurIPS 2020. [FTS23] Chen Fan, Christos Thrampoulidis, Mark Schmidt. Fast Convergence of Random Reshuffling under Over-Parameterization and the Polyak-Lojasiewicz Condition. ECML PKDD 2023. [SY16] Ruoyu Sun, Yinyu Ye. Worst-case Complexity of Cyclic Coordinate Descent: $O(n^2)$ Gap with Randomized Version. Mathematical Programming, 2016.
Summary: The paper studies the stochastic coordinate descent method for quadratic optimization and focuses on two schemes: uniform sampling versus random permutation. Under the mild unit-diagonal assumption, the authors show that random-permutation coordinate descent (RPCD) converges faster than random coordinate descent (RCD). Precisely, the authors establish the following results. 1. The authors prove a lower bound for RCD. 2. For a special function class, RPCD is proven to be faster than the lower bound of RCD mentioned above. 3. For the same special function class, RCD admits an even stronger (slower) lower bound. The authors also discuss how to generalize their results and provide several numerical experiments. ## update after rebuttal I maintain my score. Claims And Evidence: All claimed theorems are proved. Methods And Evaluation Criteria: N/A. Theoretical Claims: I mainly viewed proofs of Theorems 3.1 and 3.4. As far as I can check, they are correct. Experimental Designs Or Analyses: The experiments are sufficient and details are also reported. Supplementary Material: 1. In Line 622, $\lim_{k\to\infty}$ should be $\lim_{T\to\infty}$. 2. Line 720, I can't see the reason why $\mu_i<\mu_1,\forall i=2,\dots,n$ since they may be equal to each other. But the final result still holds as $\sum_{i=1}^n d_i\mu_i^{2T}\geq d_2\mu_1^{2T}$. Relation To Broader Scientific Literature: The analysis may be useful in more general optimization problems. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: The paper is well-written in general. I can't find any major weaknesses. Other Comments Or Suggestions: 1. I think it's better to clarify that $\mathcal{M}_{\boldsymbol{A}}^{k}$ denotes the function composition instead of its $k$-th power. 2. For some places, the identity matrix is written as $\boldsymbol{I}$ instead of $\boldsymbol{I}_{n}$ as defined at the beginning of Section 2. 3. 
It's better to say Theorems 3.1 and 3.4 hold with respect to the Lebesgue measure. 4. It could be better to provide the explicit expression of $\mathcal{M}_{\boldsymbol{A}}^{\mathrm{RCD}}(\cdot)$ since it is computable and will be used many times in the proof. Questions For Authors: The paper focuses on asymptotic results. Could the authors say something about non-asymptotic analysis? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We deeply appreciate the constructive review and feedback. We thank the reviewer for reading the paper thoroughly in detail, and we hope our response relevantly addresses all points raised in the review. ### Appendix > *Line 622: $k\to \infty$ should be $T\to \infty$.* Thanks for finding the typo. We will fix this in our next revision. > *Line 720: I can't see the reason why $\mu_{i}<\mu_{1}$ (they possibly equal to each other).* It is true that we must consider cases when $\mu_i = \mu_1$ and yet the limit remains unchanged. We will fix the details in our next revision as well. ### Comments > *I think it's better to clarify that $\mathcal{M}\_{\boldsymbol{A}}^{k}$ denotes the function composition instead of its k-th power. For some places, the identity matrix is written as $I$ instead of $I_{n}$ as defined at the beginning of Section 2.* We will elaborate on these points in Section 2 in our next revision. > *It's better to say Theorems 3.1 and 3.4 hold with respect to the Lebesgue measure.* It is true that the ‘measure zero sets’ are with respect to the Lebesgue measure. We will also add this in our next revision. > *It could be better to provide the explicit expression of $\mathcal{M}_{\boldsymbol{A}}^{\text{RCD}}$ since it is computable and will be used many times in the proof.* We deferred the explicit form of $\mathcal{M}_{\boldsymbol{A}}^{\text{RCD}}$ to Line 660 of the Appendix, but we appreciate the feedback and will consider moving this into the main text in our next revision. ### Questions > *The paper focuses on asymptotic results. Could the authors say something about non-asymptotic analysis?* This is a very good question. The reason why our results are asymptotic is that they involve matrix powers. Specifically, see Line 705 (Theorem 3.1), Line 943 (Theorem 3.3), and Line 2081 (Theorem 3.4). 
However, with a more refined analysis, we can determine the minimum number of iterations to guarantee RPCD's faster convergence compared to RCD. In order to obtain non-asymptotic convergence guarantees for some large enough yet finite $K$ or $T$, we must find the non-asymptotic counterparts of parts like Lines 708-714 and Lines 941-948 in our proofs. However, this is not so straightforward as, for instance, the quantity in Lines 716-719 reaches $1$ at the limit but is smaller than 1 for finite $T$. In fact, our RCD lower bound increases as $T \rightarrow \infty$, while our RPCD upper bound decreases as $K \rightarrow \infty$, which necessitates characterizing the "crossing" point when the RPCD upper bound becomes smaller than RCD lower bound after a certain number of iterations; this makes the non-asymptotic analysis harder. We plan to add these discussions as a remark in our next revision.
Summary: This paper investigates the convergence rates of random coordinate descent (RCD) and random permutation coordinate descent (RPCD) for minimizing a class of quadratic functions. The key contributions are: (a) a novel lower bound for RCD's contraction rate on general positive definite quadratic functions and a stronger version on a specific function class (denoted by $\mathcal{A}\_{\sigma}$); (b) an upper bound for RPCD's contraction rate on $\mathcal{A}\_{\sigma}$; (c) showing that the upper bound of RPCD is strictly smaller than the lower bound of RCD on $\mathcal{A}\_{\sigma}$. ## Update after rebuttal I am satisfied with the further clarifications in the rebuttal, and I decide to keep my score. Claims And Evidence: Yes, the theoretical claims are supported by proofs and numerical experiments. Methods And Evaluation Criteria: Yes. Theoretical Claims: I have not checked the full technical details, but the overall correctness seems to be sound. Experimental Designs Or Analyses: The numerical experiments are mainly used to support the theoretical claims and seem to be valid. Supplementary Material: I have checked the supplementary material. Relation To Broader Scientific Literature: Although the results are restricted to a specific function class, the analysis in this paper provides insights on the benefit of RPCD over RCD on a broader class of problems. Essential References Not Discussed: None. Other Strengths And Weaknesses: I think this paper is a solid contribution to the understanding of coordinate-descent-type algorithms, and the theoretical results and their implications are well presented. The main weakness is that the results only hold on a restricted function class, and it is likely that the technical tool developed in this paper may be difficult to generalize to other problems. Other Comments Or Suggestions: Here are a few additional comments aimed at helping the author(s) further improve the article: 1. 
Since the whole analysis focuses on quadratic functions, it may not be very meaningful to introduce the general optimization problem in equation (1). Clearly setting the scope of this paper helps the readers better comprehend the main contributions. 2. Since different theorems are presented for different function classes, I suggest the author(s) make a table that clearly classifies the problems and summarizes the known results for each class. This may help the readers understand which part of the theory is proved and which is unknown. Questions For Authors: I have one question mainly on the design of the numerical experiments. Is there any specific reason why you consider the quadratic+LSE problem in Section 4.2? Since the setting (iii) is no longer within the scope of the theory, it may be more interesting to investigate problems that are more commonly seen in machine learning tasks. Also, is there any $\alpha$ that breaks the superiority of RPCD? This may provide some insights on how the benefit of RPCD relies on the quadratic form of the objective function. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive response and meaningful questions. Below, we summarize and respond to your questions one by one. ### Weaknesses > *The main weakness is that the results only hold on a restricted function class, and it is likely that the technical tool developed in this paper may be difficult to generalize to other problems.* While the reviewer’s comment is correct overall, we would like to carefully add that we have empirical evidence that the function class $\mathcal{A}_{\sigma}$ represents the worst case among general quadratics and that there still exist potential ways to extend the proof techniques to a slightly larger subclass of quadratics (see **Appendix E**). As a side note, for deterministic algorithms, it is possible that convergence results on quadratics could also yield *local* convergence rates of non-quadratic functions by considering the Hessian at the optimum (see, e.g., [Bertsekas, 1997, Proposition 4.4.1]). While this motivates us to extend our results to non-quadratic objectives and demonstrate local convergence, directly applying this approach is likely challenging in the stochastic setting. Identifying a suitable operator for analyzing the expected iterate norm is a technical hurdle, and we therefore leave this extension to future work. (We also leave another discussion point regarding non-quadratic functions in our response to reviewer h7Qf.) [Bertsekas, 1997] Dimitri P Bertsekas. Nonlinear programming. Journal of the Operational Research Society, 48(3):334–334, 1997. ### Comments > *Since the whole analysis focuses on quadratic functions, it may not be very meaningful to introduce the general optimization problem in equation (1). 
Clearly setting the scope of this paper helps the readers better comprehend the main contributions.* Our purpose in starting with the general minimization problem (1) was for a smoother exposition of the introduction section, as some of the previous works we discuss therein consider general convex functions. Nevertheless, we appreciate the feedback and will try to clarify the scope of our results by adding more details in the **Summary of Contributions** paragraph. > *Since different theorems are presented for different function classes, I suggest the author(s) making a table that clearly classifies the problems and summarizes the known results for each class. This may help the readers understand which part of the theory is proved and which is unknown.* We tried to add a summary table of our results in the main paper, but had to remove it due to space limitations; we will add this table in our next revision. ### Questions > *Is there any specific reason why you consider the quadratic+LSE problem in Section 4.2? Since the setting (iii) is no longer within the scope of the theory, it may be more interesting to investigate problems that are more commonly seen in machine learning tasks. Also, is there any $\alpha$ that breaks the superiority of RPCD? This may provide some insights into how the benefit of RPCD relies on the quadratic form of the objective function.* For the quadratic+LSE experiments in Section 4.2, we would like to clarify that our purpose in using the LSE term was to simply construct a non-quadratic example. Considering the reviewer's question, we also conducted additional experiments on (1) expanding the range of $\alpha$ up to 100 in setting (iii) in our experiments, and (2) a logistic regression task with a ridge penalty, following the setup in [NSLFK15] (with n=m=100). 
The loss function for this experiment is as follows: $$ \min_x \frac{1}{m} \sum_{i=1}^{m} \log(1 + \exp(-b_i a_i^{\top} x)) + \frac{\lambda}{2} ||x||^2, $$ where $a_i$ and $y$ are drawn from the standard normal distribution and $b_i=\text{sign}(a_i^{\top} y)$, with the sign randomly flipped with probability 0.1. Across all experiments, we observed that RPCD consistently converged faster than RCD. Another thing to note is that convergence slowed with increasing $\alpha$ in (1) and decreasing ridge penalty in (2). We provide a link to the experimental results: https://anonymous.4open.science/r/rcd_rpcd-7416/ [NSLFK15] Julie Nutini, Mark Schmidt, Issam H. Laradji, Michael Friedlander, Hoyt Koepke. Coordinate descent converges faster with the Gauss-Southwell rule than random selection. In Proceedings of the 32nd International Conference on Machine Learning, 2015. --- Rebuttal Comment 1.1: Comment: Thanks for the response. I decide to keep my score.
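As an aside for readers of this thread, the exact-coordinate-minimization comparison between RCD and RPCD discussed above can be sketched on a synthetic positive definite quadratic (an illustrative toy, not the authors' code; the matrix construction, dimension, and epoch count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# A random positive definite quadratic f(x) = 0.5 * x^T A x
# (illustrative choice; not the A_sigma class from the paper).
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)

def cd_step(x, i):
    # Exact minimization of f along coordinate i: x_i <- x_i - (A x)_i / A_ii.
    x = x.copy()
    x[i] -= (A @ x)[i] / A[i, i]
    return x

def run(sampler, epochs=100):
    # One "epoch" = n coordinate updates, with indices drawn from the sampler.
    x = rng.standard_normal(n)
    for _ in range(epochs):
        for i in sampler():
            x = cd_step(x, i)
    return 0.5 * x @ A @ x  # objective value; the optimum is 0

rcd_val = run(lambda: rng.integers(0, n, size=n))  # RCD: n i.i.d. coordinate draws
rpcd_val = run(lambda: rng.permutation(n))         # RPCD: a fresh random permutation per epoch
```

On well-conditioned instances like this, both variants converge quickly; the paper's interest is in the contraction rates on the harder class $\mathcal{A}_{\sigma}$, where RPCD's upper bound beats RCD's lower bound.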
Improving Model Alignment Through Collective Intelligence of Open-Source Models
Accept (poster)
Summary: This paper introduces MoAA to enhance the alignment of LLMs by using the collective intelligence of multiple open-source LLMs. The authors propose a two-stage training method: the first stage uses MoA to generate diverse, high-quality synthetic SFT data, and the second stage applies DPO using MoA as a reward model. The results show significant improvements in performance on many benchmarks, including AlpacaEval 2 and Arena-Hard. MoAA is shown to outperform models trained with data from individual LLMs, providing evidence for its effectiveness in improving alignment through synthetic data generation and preference optimization. The method also shows a self-improvement pipeline, where models fine-tuned with MoAA-generated data surpass their initial capabilities. ## update after rebuttal Checked the responses. I keep my rating. Claims And Evidence: The claims made in the paper are supported by clear and convincing evidence, because MoAA improves model alignment through synthetic data generation and preference optimization. The paper provides results showing significant performance improvements on many benchmarks. The ablation studies also show the superiority of MoAA over simpler data generation methods and reward models. Methods And Evaluation Criteria: Yes, the methods and evaluation criteria make sense for the problem, as they are well-aligned with the goal of improving model alignment for LLMs. The two-stage approach, involving MoA for synthetic data generation and MoA as a reward model for preference optimization, is a logical strategy to enhance alignment by leveraging diverse open-source LLMs. The benchmark datasets are appropriate for evaluating model performance, and their use in measuring alignment and safety ensures a comprehensive evaluation of the methods across various domains. Theoretical Claims: The paper doesn’t present formal proofs for theoretical claims, but instead relies on experimental results to support its method. 
The claims about the effectiveness of MoAA are backed by experimental evaluation, including benchmark results and ablation studies. The evidence provided appears robust and supports the claims made in the paper without notable issues. Experimental Designs Or Analyses: Yes, the experiment design and analysis presented in the paper are sound and valid. The authors use many datasets to evaluate model performance, and the two-stage MoAA method is tested through a series of comparisons with other data generation and reward models. The paper also includes detailed experimental setups, hyperparameter choices, and clear performance metrics. Supplementary Material: Section B, Section C, Section F Relation To Broader Scientific Literature: The key contributions of this paper are based on and extend prior work in model alignment for LLMs, particularly in synthetic data generation, multi-agent model collaboration, and preference optimization. The concept of using LLMs for data generation stems from earlier work on model ensembles and MoE, and is further refined in MoAA. The paper's method uses synthetic data for SFT and DPO. However, the novelty lies in the integration of open-source LLMs into a collaborative framework without relying on models like GPT-4, while demonstrating significant improvements across benchmarks, aligning with recent trends toward using open-source LLMs for scalable AI development. 
For example, while the paper references work on Mixture of Experts (MoE) and other multi-agent frameworks, it does not cite more recent studies on dynamic multi-agent coordination and cooperative learning algorithms, such as those found in "Large Language Model Based Multi-Agents: A Survey of Progress and Challenges" (Guo et al., 2024) and "LLM-Blender: Ensembling Large Language Models" (Jiang et al., 2023). These works explore sophisticated methods for coordinating multiple models to achieve better performance, which could provide additional context for the MoAA approach presented in this paper. Other Strengths And Weaknesses: One of the key strengths of this paper is its originality in combining existing ideas from multi-agent systems, synthetic data generation, and preference optimization to propose a scalable and effective approach for improving the alignment of large language models (LLMs). The Mixture of Agents Alignment (MoAA) method is particularly innovative in leveraging the collective intelligence of multiple open-source LLMs to generate diverse, high-quality synthetic data for model training. This approach addresses the challenge of model alignment without relying on costly proprietary models, making it both significant and practical for advancing open-source AI development. The paper is also clear in its methodology and experimental design, presenting strong empirical results that support the proposed approach. However, a potential weakness is the limited discussion of the long-term scalability of the self-improvement pipeline, which could benefit from further exploration. Overall, the paper provides valuable insights and presents a promising direction for improving LLM alignment through open-source models. Strengths: 1. Originality: MoAA introduces a novel method to model alignment by using collective intelligence of open-source LLMs. 2. 
Significance: The paper addresses key challenges in model alignment, particularly the high cost and scalability issues of human-labeled data, making a strong case for the use of synthetic data generated through MoAA. 3. Clear Method: The two-stage training process (MoAA-SFT and MoAA-DPO) is clearly outlined, providing a structured approach to model alignment that is easy to follow and replicate. Weaknesses: 1. No comparison with other approaches: Although the paper compares MoAA with baseline models, it could benefit from a deeper exploration of how MoAA compares to more traditional or recently developed alignment methods beyond synthetic data generation. 2. Multi-turn data: The method for multi-turn instruction generation is mentioned briefly, but it lacks a detailed solution to the issue of discontinuity in multi-turn data. Other Comments Or Suggestions: None Questions For Authors: 1. How do you ensure the diversity of the synthetic data generated by MoAA? Could biases from the participating models affect the alignment performance? 2. What are the potential computational costs associated with using the MoAA framework, especially in terms of runtime and resources required for fine-tuning? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their detailed and thoughtful assessment of our work. We appreciate your recognition of the originality, significance, and methodological clarity of our MoAA framework. > The paper provides a strong context by … (Jiang et al., 2023). > Thank you for suggesting these valuable references. We actually cited both papers in our work. > No comparison with other … data generation. > Thank you for this constructive feedback. We agree that comparing with a broader range of alignment methods provides a more comprehensive evaluation of our approach. We have conducted extensive comparisons with several contemporary alignment pipelines: 1. We benchmarked against MagPie [1], which follows a similar SFT and DPO process with its generated data, and Meta-Rewarding LLM [2], an iterative alignment method utilizing self-judgment for improvement. Our method demonstrates competitive or superior performance against both approaches, as detailed in Appendix E and Table 13 of the manuscript. 2. As shown in Figure 4, we compared the quality of data generated by single models versus our MoA approach for MoAA-SFT. Notably, even when compared to GPT-4o-05-13, one of the most powerful closed-source models available at the time of writing, our synthesized data produced superior performance metrics on downstream tasks. 3. We also conducted comprehensive comparisons of different reward models within the alignment pipeline. As demonstrated in Table 6, our MoA-based judge was evaluated against individual LLM judges and specialized trained reward models. The results show that our method delivers highly competitive performance. We hope this answers your concern and are happy to answer any remaining questions! [1] [MAGPIE](https://arxiv.org/abs/2406.08464v1) [2] [Meta-Rewarding Language Models](https://arxiv.org/pdf/2407.19594) > Multi-turn data: … discontinuity in multi-turn data. > We thank the reviewer for raising this concern. 
We design our pipeline to be able to handle the multi-turn data coherently. As briefly outlined in Section 3.1.2, our approach ensures conversational continuity through a context-aware generation process. Specifically, when generating multi-turn conversational data, we maintain coherence by providing both proposer and aggregator models with the complete conversation history. For each new turn, the previous question-answer pairs from the aggregator are included as context, allowing models to generate responses that maintain thematic consistency and proper reference resolution across turns. The aggregator has access to the full conversation history when refining proposer outputs. We have found this approach significantly reduces discontinuities in multi-turn data generation. We will include a more detailed description of this process in the revised draft. > How do you ensure the diversity … alignment performance? > This is a great question. Ensuring diversity in our synthetic data is essential for effective alignment. MoA inherently generates diverse outputs by combining responses from multiple LLMs with different architectures, training methods, and capabilities. This diversity in the underlying models naturally leads to varied perspectives in the generated data. As demonstrated in the original MoA paper and our Section 4.2, our approach doesn't simply combine model responses - it organically integrates and refines them. Potential biases from individual models are often corrected during aggregation, especially when using our most capable model as the aggregator. Our experimental results in Section 4.3 confirm this approach preserves diversity while enhancing data quality, with MoAA-trained models showing better generalization across diverse tasks compared to those trained on individual model data. We acknowledge that more systematic study of bias mitigation in collaborative alignment frameworks remains a valuable direction for future research. 
> "What are the potential computational costs … for fine-tuning?" > Thank you for this question! While MoA data generation requires more computation than single models, our approach remains cost-effective because: - Data generation is a one-time cost, while the inference efficiency of the resulting fine-tuned model is equivalent to any similarly-sized model. As shown in Table 11, our distilled model achieves 90.6% of MoA performance at only 5.4% of the computational cost during inference. - Our parallel implementation runs proposer models simultaneously, significantly reducing wall-clock time. - Using efficient open-source models generates high-quality data without expensive proprietary APIs, with demonstrated cost savings of 23% compared to GPT-4o. Thank you again for your constructive feedback. We believe these revisions have strengthened our manuscript and addressed your concerns while further highlighting the contributions of our work.
Summary: The paper proposes Mixture of Agents Alignment (MoAA), which uses multiple LLMs to (1) generate high-quality responses for SFT training and (2) provide high-quality rewards for DPO training. Experiment results show that MoAA performs better than using a single teacher LLM, and the benefits are not just due to having multiple agents but also several design decisions made in MoAA. MoAA also shows potential for self-improvement, which can push the frontier of open-source LLMs without reliance on stronger external supervision. ## update after rebuttal I have read the author response, I'll keep my score. Claims And Evidence: Yes, the claims are supported. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem. Theoretical Claims: The paper does not make theoretical claims. Experimental Designs Or Analyses: For MoA as a reward model, I think it's important to compare to some other baselines such as those in [1]. [1] Replacing Judges with Juries: Evaluating LLM Generations with a Panel of Diverse Models Supplementary Material: Yes, I checked the Appendix. Relation To Broader Scientific Literature: Prior work has shown that a mixture of agents could perform better than a single individual model in both instruction following [1] and evaluation [2]. This paper is a straightforward extension that utilizes those findings and applies them to LLM alignment. [1] Mixture-of-Agents Enhances Large Language Model Capabilities [2] Replacing Judges with Juries: Evaluating LLM Generations with a Panel of Diverse Models Essential References Not Discussed: I think it's necessary to include some papers that utilize a mixture of agents for LLM as a judge such as [1]. 
[1] Replacing Judges with Juries: Evaluating LLM Generations with a Panel of Diverse Models Other Strengths And Weaknesses: Previous work [1] has already shown that a mixture of agents could perform well in instruction following, so it's not surprising to see that their outputs could be used for (self) distillation to train a strong student model. I feel that the paper is missing some interesting explorations, including how the capabilities of different models affect the final performance. If one model is significantly stronger than the other models, will the method still work? [1] Mixture-of-Agents Enhances Large Language Model Capabilities Other Comments Or Suggestions: Please see other parts. Questions For Authors: The whole MoA procedure is heavily sequential, which could cost huge inference time if each model takes long to generate its output. Given that current SOTA models are mostly long CoT models, this may be an issue. Do you see any potential ways for the whole MoA process to be more time efficient? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their constructive feedback and thoughtful evaluation of our work on Mixture of Agents Alignment (MoAA). We appreciate your recognition of our paper's contributions and the validity of our claims and experimental design. We will address each concern below. > For MoA as a reward model, I think it's important to compare to some other baselines ... > Thank you for bringing this up! We will make sure to include and discuss the Replacing Judges with Juries paper [1] in the updated manuscript. To compare our MoA as a reward model to the juries [1], we implemented their method and evaluated it on the PPE benchmark [2]. PPE consists of 18k diverse data points spanning human preference and reasoning tasks. It achieves an average of 0.607 while our method achieves 0.661. Note that in order to make the comparison fair, we use the same models in the MoA for the panel of juries. The detailed results of the juries method and other methods are displayed below:

| Model | MMLU Pro | MATH | GPQA | MBPP Plus | IFEVAL | Human Pref. | AVG |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Gemma-2-27b-it | 0.68 | 0.73 | 0.54 | 0.58 | 0.52 | 0.6169 | 0.611 |
| GPT-4o-mini (2024-07-18) | 0.71 | 0.81 | 0.57 | 0.54 | 0.56 | 0.6646 | 0.642 |
| Claude-3.5 Sonnet (2024-06-20) | 0.81 | 0.86 | 0.63 | 0.54 | 0.58 | 0.6733 | 0.682 |
| MoA as reward model | 0.76 | 0.79 | 0.58 | 0.62 | 0.57 | 0.6465 | 0.661 |
| Juries | 0.66 | 0.67 | 0.56 | 0.61 | 0.57 | 0.57 | 0.607 |

[1] [Replacing Judges with Juries](https://arxiv.org/pdf/2404.18796) [2] [How to Evaluate Reward Models](https://arxiv.org/pdf/2410.14872) > Previous work [1] has already shown that a mixture of agents could … If one model is significantly stronger than other models, will the method still work? > This is an excellent question. We have conducted several experiments to address this specific concern in the paper and list them here for clarity. 
First, we include an imbalanced ensemble with one model (Gemma-2-9B-it) significantly outperforming the others. Specifically, we evaluated a small-scale MoA setup (due to limited compute) with Gemma-2-9B-it, Llama-3.1-8B-Instruct, and Mistral-7B-Instruct-v0.3 as proposers, and used a two-layer MoA with Gemma-2-9B-it as the aggregator. Table 15 demonstrates that the fine-tuned Gemma model shows better performance than the strongest individual model in the mix by a large margin. This finding challenges the conventional thinking that alignment requires supervision from models more capable than the target model. Second, we conducted extensive ablations examining MoA performance with varying combinations of proposers and aggregators, as documented in Table 8 and Table 15. Our analysis revealed that MoA performance serves as a reliable predictor of the distilled model's final performance, with several key insights transferring directly to MoA distillation. Notably, we found that while the ordering of proposers has minimal impact on outcomes, the capability of the aggregator model significantly influences results. Another contribution of our work is demonstrating that high-quality alignment can be achieved using exclusively open-source models. Our framework achieves competitive performance compared to strong proprietary models like GPT4-o-2024-05-13 for SFT tasks, both in terms of model performance (shown in Table 3) and cost-efficiency (with 23% cost reduction as detailed in Table 10). This finding challenges the prevailing assumption in the field that effective alignment necessarily requires access to powerful proprietary models as supervisors, potentially democratizing powerful alignment techniques for the broader research community. > The whole MoA procedure is heavily ... potential ways for the whole MoA process to be more time efficient? > Thank you for the feedback. This practical consideration touches on a key motivation behind our distillation framework. 
While data generation with MoA is indeed computationally intensive, our approach offers a valuable efficiency tradeoff: the one-time cost of MoA data generation yields a distilled model that is both efficient during inference and recovers most of the performance benefits. To address the computational challenges during data generation, we implemented a batched pipeline that enables all proposer models to generate responses in parallel, significantly reducing runtime. We've also explored promising efficiency improvements, such as enabling the aggregator to begin generating while proposers are still completing their work, which showed encouraging early results. Although a comprehensive exploration of MoA efficiency optimizations falls beyond the scope of this work, our preliminary investigations suggest several promising directions for making the MoA process more time-efficient. We believe these efficiency considerations represent an important avenue for future research that could further enhance the practical applicability of our approach. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I will keep my score.
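The proposer/aggregator pattern and the parallel proposer execution discussed in this thread can be sketched with a minimal toy (the `generate` stub and model names are hypothetical placeholders, not the authors' implementation; a real pipeline would call an inference endpoint for each model):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for an LLM call.
def generate(model, prompt):
    return f"[{model}] response to: {prompt}"

def moa_round(proposers, aggregator, prompt):
    # Proposers run in parallel, so the wall-clock cost of a layer is
    # roughly one model call rather than len(proposers) sequential calls.
    with ThreadPoolExecutor(max_workers=len(proposers)) as pool:
        proposals = list(pool.map(lambda m: generate(m, prompt), proposers))
    # The aggregator sees the prompt plus all proposals and synthesizes
    # a single refined response.
    agg_prompt = prompt + "\n\nCandidate responses:\n" + "\n".join(proposals)
    return generate(aggregator, agg_prompt)

answer = moa_round(
    ["gemma-2-9b-it", "llama-3.1-8b-instruct", "mistral-7b-instruct"],
    "gemma-2-9b-it",
    "Explain test-time adaptation in one sentence.",
)
```

Distilling the aggregated outputs into a single model, as in MoAA, removes this multi-model cost at inference time, which is the efficiency tradeoff the rebuttal describes.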
Summary: The paper demonstrates the usage of a preexisting technique, Mixture of Agents, as an alignment method. The core contribution is to use a combination of open-source LLMs as a replacement for larger proprietary models, while still obtaining results competitive with the larger proprietary models. The authors first distill from MoA in SFT and then generate preference pairs using MoA as a reward model to finally do DPO. They show the effectiveness of using MoA for alignment across AlpacaEval, ArenaHard and MT-Bench. They also conduct a number of experiments and ablations to study significant variables in their setup to provide a comprehensive analysis of the method. Claims And Evidence: The paper's main claim is that it provides an effective alignment method for small LLMs. The authors make related but separate assertions for the usefulness of MoA for SFT and DPO. In both cases, they show that their results are competitive with significantly larger proprietary models through evaluations and comparative analyses. Methods And Evaluation Criteria: The proposed methods are very simple. If MoA is simply treated as a black box, the approach used is typical for alignment pipelines. Given this, the benchmark datasets, evaluation tasks and evaluation metrics are sensible for measuring the performance of this method. Theoretical Claims: N/A Experimental Designs Or Analyses: The experimental design is sound, and comparisons between models are made in near-identical conditions where possible. I particularly appreciated the analysis of the quality of the MoAA-generated SFT and preference synthetic data. Supplementary Material: I reviewed the hyperparameter selection and various evaluations in the supplementary material Relation To Broader Scientific Literature: The key contributions of this paper heavily rely on a previous paper: “Mixture-of-Agents Enhances Large Language Model Capabilities”. The contributions of this paper are mostly derivative (i.e. 
they showcase the utility of MoA as an alignment technique). They also perform evaluations that have significant value in demonstrating the utility of MoA for alignment. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thorough evaluation of our work. We appreciate your recognition of the contributions of our alignment framework. We are pleased that you found our experimental design sound and our analysis of the quality of the MoAA-generated SFT and preference synthetic data useful. Regarding the concerns raised in your review, we respond below. > The key contributions of this paper heavily rely on a previous paper: 'Mixture-of-Agents… > While our work builds upon the Mixture-of-Agents (MoA) framework, we believe our contributions extend significantly beyond simply showcasing MoA's utility for alignment. Our work introduces several novel aspects: 1. **Novel use of multi-LLM as a Judge**: we introduced MoA as a competitive reward model that requires zero additional training. As far as we know, only recently have researchers in the community started to look into using combined LLMs for model alignment. For example, [1] routes preference pairs from specific domains to domain-specific expert judges. In contrast, our MoA framework combines and refines judgments from multiple judges, utilizing their collective expertise to improve alignment. This distinction makes our approach unique, since we don't use a router, which can introduce additional bias during training. In addition, our method leverages the strengths of multiple models collaboratively rather than relying solely on isolated, domain-specific experts. [1] Tengyu Xu, et al. (2024). The Perfect Blend: Redefining RLHF with Mixture of Judges. (online on 09/30/2024) 2. **Cost-Efficient Performance**: Our alignment method, relying purely on open-source models, is competitive against strong proprietary LLMs such as GPT4-o-2024-05-13 for SFT, in terms of both model performance (as shown in Table 3 in our manuscript) and cost (saving 23% as shown in Table 10). 
This represents a potential departure from the standard paradigm where alignment has typically required access to proprietary models as supervisors. 3. **Strengthening the Strongest Model in MoA**: Furthermore, we tried fine-tuning the strongest model in the model mixture of MoA, and still observed a clear performance boost with MoAA. We think this is a non-trivial finding because improving the strongest model in the mix provides evidence that our method can potentially push frontier open-source models further without the supervision of stronger LLMs. Specifically, we evaluated a small-scale MoA setup (due to limited compute) with Gemma-2-9B-it, Llama-3.1-8B-Instruct, and Mistral-7B-Instruct-v0.3 as proposers, and used a two-layer MoA with Gemma-2-9B-it as the aggregator to generate the data mix. Table 15 demonstrates that the fine-tuned Gemma model shows better performance than the strongest individual model in the mix by a large margin. This finding challenges the conventional wisdom that alignment requires supervision from models more capable than the target model.
Summary: This work proposes a novel alignment framework that uses multiple open-source LLMs within an MoA (Mixture of Agents) architecture to enhance model alignment via synthetic data generation (MoAA-SFT) and preference optimization (MoAA-DPO). Key experimental results are presented for aligning Llama-3.1-8B-Instruct and Gemma-2-9B-it and evaluating these on AlpacaEval 2, MT-Bench, and Arena-Hard benchmarks. Strengths: - Novel use of MoA for SFT and DPO, reduction in data generation costs for alignment compared to GPT-4o, and reduction in inference costs for the aligned model in comparison to using MoA, with almost similar performance. - Extensive experimental evaluations and ablation studies. Claims And Evidence: Yes, key claims are well supported by experimental results Methods And Evaluation Criteria: Yes Theoretical Claims: None Experimental Designs Or Analyses: Yes Supplementary Material: Checked Appendix K, which shows prompt templates for proposers, aggregators, etc. in MoA. Relation To Broader Scientific Literature: The idea of using the collective knowledge of multiple open-source LLMs via an MoA architecture for SFT and DPO is novel and is a valuable contribution. Essential References Not Discussed: None Other Strengths And Weaknesses: Although not a major weakness, the novelty may be a bit low as MoA has been previously proposed, although its use in alignment is novel. Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the positive feedback and the supportive overall recommendation. We are pleased that you recognize our novel application of the MoA architecture for alignment through both synthetic data generation (MoAA-SFT) and preference optimization (MoAA-DPO). We appreciate your acknowledgment of the key contributions of our work, particularly the novel use of MoA for alignment tasks (both SFT and DPO), the significant reduction in data generation costs compared to GPT-4o, and the decrease in inference costs for the aligned model while maintaining comparable performance. Furthermore, we agree that while the MoA architecture itself builds upon existing work, our paper's primary contribution lies in its novel application to the alignment problem, which you have acknowledged as valuable. Thank you again for your review. We believe our work opens up exciting new directions for cost-effective alignment techniques using ensembles of open-source models.
Test-Time Adaptation for Online Vision-Language Navigation with Feedback-based Reinforcement Learning
Accept (poster)
Summary: This paper investigates the problem of Vision-Language Navigation (VLN) adaptation during deployment and introduces FEEDTTA, which uses feedback-based reinforcement learning. The main idea is to give the agent simple binary feedback after each navigation attempt (+1 for success or -1 for failure). To enhance learning from this binary feedback, they present Stochastic Gradient Reversion (SGR), a gradient regularization technique that helps maintain a balance between plasticity and stability. Claims And Evidence: The claims in the paper are backed up by evidence from experiments. The authors show that FEEDTTA works better than Test-time Adaptation baselines, while being comparable to Offline-training based methods. The paper provides sufficient evidence. Methods And Evaluation Criteria: The evaluation focuses on flexibility and interactivity, with standard evaluation protocol and a new metric called Adaptive Success Rate (ASR) to measure how well the agent adapts. This method is tested on three well-known benchmarks, which cover different navigation scenarios and instruction types. Theoretical Claims: was not reviewed in depth Experimental Designs Or Analyses: This paper tests on three benchmarks (REVERIE, R2R, and R2R-CE) and provides clear explanations of their results and why FEEDTTA performs as it does. The experimental design is sound. The comparison with both offline training methods and other test-time adaptation approaches provides a comprehensive evaluation of FEEDTTA's performance. Supplementary Material: was not reviewed in depth Relation To Broader Scientific Literature: I am not familiar with the literature in this area Essential References Not Discussed: I am not familiar with the literature in this area Other Strengths And Weaknesses: Strengths: The problem of test-time adaptation in VLN is interesting. 
The Feedback-based Test-Time Adaptation is simple and aligns well with how humans might interact with navigation agents in real-world scenarios. Weaknesses: Figures 2, 3, 5, and 7 are not vector graphics and appear blurry. Other Comments Or Suggestions: Improve the quality of figures to enhance readability Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 3
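The ±1 episodic feedback described in this review can be viewed as a terminal reward in a REINFORCE-style policy update. A minimal illustrative sketch (not the paper's implementation; the toy softmax policy over three candidate actions and the learning rate are assumptions) showing how binary feedback shifts action probabilities:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def episodic_update(theta, action, feedback, lr=0.5):
    """One REINFORCE step from binary episodic feedback (+1 success, -1 failure).

    Gradient of log-softmax w.r.t. the logits: one_hot(action) - softmax(theta).
    """
    grad_logp = -softmax(theta)
    grad_logp[action] += 1.0
    return theta + lr * feedback * grad_logp

theta = np.zeros(3)                                    # logits over 3 toy actions
theta = episodic_update(theta, action=0, feedback=+1)  # episode succeeded
theta = episodic_update(theta, action=2, feedback=-1)  # episode failed
probs = softmax(theta)  # action 0 reinforced, action 2 suppressed
```

The point of the sketch is that a single scalar per episode suffices to move probability mass toward rewarded behavior, which is the mechanism the paper exploits at test time.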
Rebuttal 1: Rebuttal: Thank you for your **positive evaluation** of our work. We are committed to enhancing the quality of the figures in the final version upon acceptance. If there are any further questions or suggestions during the discussion period, we would greatly appreciate the opportunity to address them and further refine our work **towards clear acceptance**.

---

Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. After reading the other reviews and the rebuttal, I recommend weak acceptance of this paper. I encourage the authors to revise the paper to incorporate the rebuttal, either in the main text or in the supplementary materials.

---

Reply to Comment 1.1.1: Comment: We greatly appreciate your **supportive evaluation and recommendation for acceptance**. We will incorporate the feedback from the rebuttal phase into the final version accordingly.
Summary: FeedTTA is a test-time adaptation (TTA) framework for online vision-language navigation (VLN). It utilizes an LLM for external interaction, providing binary feedback to the deployed navigation algorithm and establishing a feedback-based online reinforcement learning mechanism. By leveraging binary episodic feedback and gradient regularization, FeedTTA enhances adaptability while balancing plasticity and stability. Experiments demonstrate that FeedTTA outperforms state-of-the-art methods, excelling in unfamiliar environments. Claims And Evidence: 1. I notice that the paper deploys FeedTTA on an A100 GPU, which is a high-end server-grade GPU rather than typical mobile hardware. This raises concerns about its feasibility in real-world navigation scenarios with limited computational resources. It would be helpful to discuss whether FeedTTA increases the computational burden on the base navigation algorithm and compare its hardware requirements and inference speed with standard VLN models to enhance practicality. Methods And Evaluation Criteria: 1. Can we leverage large language models (LLMs) to provide more detailed scene-aware feedback or sub-goal feedback instead of simple binary feedback? This might help mitigate the issue of extreme binary feedback. 2. You mention the issue of extreme binary feedback—does this directly correspond to the sparse reward problem in reinforcement learning? 3. I notice that you propose using SGR to generate counterfactual gradient information to address the extremity of binary feedback. Could LLM-generated counterfactual evaluations replace this approach? If so, would slightly less accurate LLM-based navigation evaluations actually improve performance? 4. The paper suggests that TTA should flexibly handle different navigation outcomes, yet it still relies on an LLM-generated binary feedback mechanism, considering only success or failure. Would a more diversified navigation evaluation be more effective? For example, a failed navigation attempt that still gets close to the goal, or a successful navigation where the agent deviates from the ideal final position. Theoretical Claims: SGR (Stochastic Gradient Reversion) mitigates the extremity of binary feedback signals by introducing counterfactual gradient updates, which help smooth the learning process. However, its role in preventing catastrophic forgetting needs further clarification. Experimental Designs Or Analyses: Can the current method be applied to VN tasks, where there are no complex textual instructions, and test failures stem solely from visual observations and scene layout differences? Supplementary Material: Yes. Relation To Broader Scientific Literature: The paper proposes a novel feedback mechanism utilizing an LLM for TTA training and introduces the SGR module to address the binary extremity issue in feedback. Experimental results in the VLN setting outperform previous works. Essential References Not Discussed: None. Other Strengths And Weaknesses: Strengths: The paper proposes a novel approach using an LLM for TTA training, introducing an innovative feedback mechanism to enhance adaptation. Weaknesses: Would incorporating an LLM for feedback provide greater flexibility and lead to better performance? Does using an LLM introduce a higher computational burden for fine-tuning? Could modifying the feedback mechanism eliminate the need for the SGR module? These questions require further discussion. Other Comments Or Suggestions: See the above comments. Questions For Authors: See the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed feedback. Below, we provide our responses to each comment and hope they contribute to a better evaluation of our work.

### Q1. Computational feasibility of FeedTTA

> A : We first clarify that **FeedTTA does not require high-end server-grade GPUs and can be efficiently deployed on practical hardware (e.g., GTX 1080)**. The trained DUET policy consists of 181.08M parameters, whereas FeedTTA trains only 78.67M parameters for adaptation—just 43% of the total—making adaptation highly efficient. Furthermore, FeedTTA increases memory usage by only 0.67%, requiring 4.42 GB compared to the 4.39 GB used by baseline VLN models during inference. Lastly, while FeedTTA introduces some latency due to backpropagation, its episodic updates do not impact real-time navigation performance, making it **a practical and feasible solution for real-world deployment**.

---

### Q2. Can we leverage LLMs to provide more detailed feedback instead of simple binary feedback?

> A : As discussed in Section 5.3, LLMs achieve at most 73% accuracy in predicting simple navigational outcomes. While this is sufficient to guide baseline policies, errors can accumulate, hindering stable adaptation. In our response to Q3 of reviewer vESv, we showed that dense step-wise rewards, though less efficient than sparse goal-based feedback at test time, still significantly improve performance. As LLMs advance in navigational reasoning, their potential for richer feedback remains an exciting research direction.

---

### Q3. Could LLM-generated counterfactual evaluations replace SGR?

> A : The counterfactual reasoning of SGR is a regularization technique applied to a limited number of parameters, which means that a large portion of the parameters should be updated based on proper feedback for the intended functionality.
Furthermore, while LLMs can indeed reason about counterfactual scenarios, their reliability in predicting navigation outcomes itself remains a challenge, making them unsuitable as a direct replacement for SGR.

---

### Q4. Can FeedTTA be applied to VN tasks?

> A : **Yes, FeedTTA can be applied to VN tasks even in the absence of complex language instructions**, as it only requires determining success or failure within the navigation system. To identify the dominant modality influencing navigation outcomes, we analyze navigation consistency for each trajectory in the REVERIE dataset, where each trajectory is paired with multiple language instructions. Specifically, we compute the average success rate across different instructions for each trajectory. We then identify trajectories with consistent outcomes—defined as those with a high (> 0.8) or low (< 0.2) average success rate—and calculate their proportion within the validation set. Our experiment yields a ratio of 0.72, suggesting that **visual observations are a key factor not only in VN tasks but also in VLN**, where they play a more decisive role compared to language variations.

---

### Q5. How does SGR prevent catastrophic forgetting?

> A : The expected absolute value (EAV) of the gradients quantifies the deviation from the case where neither forgetting nor adaptation occurs, indicating the extent of policy forgetting and adaptation. For brevity, we omit the dimension index $m$ in subsequent derivations.
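The consistency analysis in Q4 can be reproduced mechanically: average the success outcomes per trajectory across its paired instructions, then measure the fraction of trajectories whose average is above 0.8 or below 0.2. A small sketch (the 0.8/0.2 thresholds come from the rebuttal; the toy outcome data are illustrative):

```python
def consistency_ratio(outcomes, hi=0.8, lo=0.2):
    """outcomes: dict mapping trajectory id -> list of 0/1 successes,
    one entry per paired instruction. Returns the fraction of
    trajectories with a consistently high (> hi) or low (< lo) mean."""
    consistent = 0
    for successes in outcomes.values():
        rate = sum(successes) / len(successes)
        if rate > hi or rate < lo:
            consistent += 1
    return consistent / len(outcomes)

toy = {
    "traj_a": [1, 1, 1],  # solved under all paired instructions
    "traj_b": [0, 0, 0],  # failed under all paired instructions
    "traj_c": [1, 0, 1],  # outcome depends on the instruction
    "traj_d": [0, 1, 0],
}
ratio = consistency_ratio(toy)  # 2 of 4 trajectories are consistent -> 0.5
```

The rebuttal's reported ratio of 0.72 on REVERIE corresponds to running this computation over the validation-unseen trajectories.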
>
> In a standard gradient update, the EAV is given by:
> $$\sum \mathbb{E}[|\nabla_{\theta} J(\theta)|] = \sum |g_{\theta}|.$$
>
> For small $\alpha$ and $p$ such that $|\alpha| < p$, the EAV for the SGR-modified gradients is:
> $$\sum \mathbb{E}\left[ \left|\nabla_{\theta} J(\theta)^{\prime} \right| \right] = \sum \left[ p \left|\alpha g_{\theta} \right| + (1 - p) \left| \frac{g_{\theta}}{\alpha p + (1 - p)} \right| \right].$$
>
> Using a first-order approximation:
> $$\sum \mathbb{E}\left[ \left|\nabla_{\theta} J(\theta)^{\prime} \right| \right] \approx \sum \left[ p \left|\alpha g_{\theta} \right| + (1 - p) \left| (1 + p) g_{\theta} \right| \right] = \sum (1 - p^2 - \alpha p) \left| g_{\theta} \right|.$$
>
> Applying SGR scales the EAV by a factor of $(1 - p^2 - \alpha p) \leq 1$, reducing the gradient magnitude compared to the standard gradient update.
> - **$\alpha = 0$ (Gradient Dropout):** The scaling factor is fixed at $1 - p^2$.
> - **$\alpha > 0$ (Gradient Scaling):** The scaling factor is controlled by $\alpha$, but remains bounded above: $1 - p^2 - \alpha p < 1 - p^2.$
> - **$\alpha < 0$ (Gradient Reversion):** The scaling factor is controlled by $\alpha$, bounded both above and below: $1 - p^2 < 1 - p^2 - \alpha p \leq 1.$
>
> This result demonstrates that **reversing a subset of gradients as proposed in SGR provides a strategic way to balance plasticity and stability in adapting to unseen environments.**
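One plausible reading of the SGR update described in this derivation (a sketch, not the authors' code; the global rescaling by $\alpha p + (1-p)$ is our interpretation of the expectation-consistency argument in the supplementary material) is: scale a randomly selected fraction $p$ of gradient entries by $\alpha$, reversing them when $\alpha < 0$, then divide everything by the mean scale so the update stays unbiased in expectation:

```python
import numpy as np

def sgr(grad, p=0.1, alpha=-0.5, rng=None):
    """Stochastic Gradient Reversion sketch: each entry is scaled by
    alpha with probability p and by 1 otherwise; dividing by the mean
    scale alpha*p + (1-p) keeps E[sgr(grad)] == grad."""
    if rng is None:
        rng = np.random.default_rng()
    mask = rng.random(grad.shape) < p
    scale = np.where(mask, alpha, 1.0)
    return grad * scale / (alpha * p + (1.0 - p))

# Unbiasedness holds exactly: E[scale] / (alpha*p + 1-p) == 1.
p, alpha = 0.1, -0.5
expected_scale = (p * alpha + (1 - p)) / (alpha * p + (1 - p))

g_mod = sgr(np.ones(8), rng=np.random.default_rng(0))
```

Under this reading, reversed entries inject the counterfactual signal while the normalization prevents the overall update magnitude from drifting.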
Summary: The paper introduces FEEDTTA, a test-time adaptation (TTA) framework for vision-language navigation (VLN) that uses binary episodic feedback to adapt navigation policies in unfamiliar environments. To maintain stability during learning from binary signals, the authors propose Stochastic Gradient Reversion (SGR), a technique that reverses gradient directions for randomly selected parameters. Experiments on REVERIE, R2R, and R2R-CE benchmarks show FEEDTTA outperforms other TTA methods and sometimes even surpasses offline training approaches. Claims And Evidence: Most claims are well-supported by empirical evidence: - FEEDTTA's superior performance over other TTA methods is demonstrated across multiple datasets and metrics. - The effectiveness of SGR compared to gradient dropout and scaling is shown through ablation studies. - The claim about LLMs as potential feedback oracles is supported, though with acknowledged limitations in reliability. However, some claims require closer scrutiny: - The claim of outperforming state-of-the-art offline methods is limited to specific configurations (primarily REVERIE with DUET) rather than being a general finding. - The interpretation of increased trajectory length as beneficial "exploration" lacks rigorous justification. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for online VLN: - Binary episodic feedback is a practical choice for real-world deployment scenarios. - The introduction of ASR effectively measures adaptation capability by examining both preserved and converted success rates. - The three benchmark datasets (REVERIE, R2R, R2R-CE) represent diverse VLN tasks. The evaluation could be strengthened by examining how the sequence of navigation episodes affects adaptation performance, as online learning is inherently sequence-dependent. Theoretical Claims: The theoretical claims are limited and mostly sound. 
The derivation of the scaling factor in SGR (in supplementary material) ensures consistency in expectation during gradient updates. The analysis of how SGR alleviates non-stationarity is reasonable but not formally proven. Experimental Designs Or Analyses: The experimental designs are generally sound: - Comparisons with other TTA methods (Tent and FSTTA) establish clear baselines. - Ablation studies on feedback quality and quantity provide useful insights. - The catastrophic forgetting analysis appropriately measures stability. One limitation is the lack of analysis regarding sequence effects in online learning - different orderings of test examples might yield different adaptation results. Supplementary Material: I reviewed the supplementary material, including: - LLM prompts for feedback oracles - SGR mathematical derivation - Hyperparameter sensitivity analyses - Trajectory visualizations These materials effectively complement the main paper and substantiate its claims. Relation To Broader Scientific Literature: The paper effectively bridges three research areas: - VLN: It addresses a gap in online adaptation where previous works focused primarily on offline training. - TTA: It identifies limitations of entropy minimization approaches for sequential decision-making tasks. - Feedback-based RL: It adapts concepts from RLHF literature to navigation tasks. The binary feedback mechanism builds on established sparse reward RL approaches, though the paper positions this in the novel context of test-time adaptation for VLN. Essential References Not Discussed: The paper would benefit from discussing: - Connections to continual reinforcement learning literature, particularly works addressing non-stationarity in online learning environments. - Prior work on sample ordering effects in online RL, such as curriculum learning approaches. - Research on uncertainty-aware navigation that could provide context for understanding where adaptation is most effective. 
Other Strengths And Weaknesses: Strengths: Practical approach that requires minimal feedback, making it feasible for real-world deployment Weaknesses: Limited novelty in the basic approach, as online RL with sparse rewards is well-established Other Comments Or Suggestions: N/A Questions For Authors: - How would different orderings of test examples affect FEEDTTA's performance? Have you experimented with different sequence orderings to quantify this effect? - The trajectory length often increases after adaptation. Could you provide more evidence that this represents beneficial exploration rather than inefficient navigation? - How does FEEDTTA compare to approaches that use more informative feedback (beyond binary signals) in terms of adaptation efficiency? Is there a trade-off between feedback simplicity and adaptation speed? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your **positive evaluation** of our work. We hope our response fully addresses all concerns and demonstrates the strength of our contributions.

### Q1. How do different sequence orderings affect adaptation?

> We agree that online learning is sequence-dependent. However, we show that the benefits of FeedTTA are invariant to sequence ordering through the following experiments with three different configurations. We use the 'validation unseen' split of the REVERIE dataset and compare with the DUET policy. For all configurations, the reported numbers of FeedTTA are the average of the results from 3 different seeds, with standard deviation reported in brackets.

> **1. General TTA**
> - In this configuration, all episodes are randomly ordered regardless of scene IDs, which corresponds to the experimental setting reported in Table 1 of our paper.

|Method|SR|SPL|RGSPL|
|---|---|---|---|
|DUET|46.98|33.73|23.03|
|FeedTTA|65.33 $(\pm1.10)$|42.63 $(\pm1.98)$|28.71 $(\pm1.45)$|

> **2. Per-Scene TTA**
> - Here, we analyze the effects of random episode orders for each scene ID. Note that in this setting, the adaptation is performed per-scene, and not throughout the entire validation set. The results below are in the form of (DUET / +FeedTTA).

|Scene ID | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
|SR | 48.63 / 60.78 $(\pm1.65)$ | 72.22 / 77.16 $(\pm0.87)$ | 32.65 / 37.51 $(\pm0.96)$ | 46.85 / 51.75 $(\pm1.00)$ | 43.34 / 51.09 $(\pm3.25)$ | 30.30 / 39.46 $(\pm4.49)$ | 44.84 / 56.76 $(\pm5.92)$ | 50.60 / 64.57 $(\pm7.48)$ | 45.89 / 71.54 $(\pm2.74)$ | 55.67 / 69.47 $(\pm1.53)$ |
|SPL | 30.74 / 40.80 $(\pm0.96)$ | 56.61 / 61.51 $(\pm1.73)$ | 19.88 / 22.79 $(\pm0.49)$ | 37.27 / 38.98 $(\pm1.30)$ | 25.55 / 29.31 $(\pm3.14)$ | 19.81 / 24.73 $(\pm3.57)$ | 35.21 / 30.53 $(\pm3.12)$ | 34.36 / 38.59 $(\pm5.51)$ | 29.54 / 50.29 $(\pm6.74)$ | 44.91 / 56.42 $(\pm2.87)$ |

> **3. Continual TTA**
> - For this configuration, we fix the episode orders for each scene ID, and set the adaptation sequence based on mixed scene ID orders, evaluating continual adaptation performances across different scenes.

|Method|SR|SPL|RGSPL|
|---|---|---|---|
|DUET|46.98|33.73|23.03|
|FeedTTA|54.81 $(\pm1.89)$|36.70 $(\pm0.87)$|23.74 $(\pm0.47)$|

> These experiments confirm that **sequence ordering does influence navigation outcomes**; however, the **benefits of FeedTTA remain consistent**, as evidenced by superior performances with low variations across different seeds.

---

### Q2. Does increased trajectory length represent beneficial exploration?

> A : We justify that the increased trajectory length (TL) indicates beneficial exploration by empirically testing the hypothesis: ***"The overall increase in TL primarily results from episodes that would have failed in the original navigation but succeeded after applying FeedTTA"***. In the table below, we compare the increase in TLs for the successful navigation episodes after adaptation, categorized based on the pre-tested results before applying FeedTTA. For this experiment, we use the 'validation unseen' split of the REVERIE dataset with DUET as the base policy. Here, we discover that **the average TL increase is significantly larger for fail-to-success cases than success-to-success cases**. This clearly highlights the **role of FeedTTA in overcoming failure cases through extended exploration in unseen environments.**

||Success -> Success|Fail -> Success|
|---|---|---|
|Increased TL|3.54 $(\pm1.45)$|10.65 $(\pm3.75)$|

---

### Q3. How does FeedTTA compare to approaches that use more informative feedback (beyond binary signals)?

> A : The rationale behind choosing a simple binary episodic feedback mechanism stems from the practical limitations of the online test-time navigation environment:
> - Human involvement should be minimal, as following every navigation step to provide rewards is infeasible in real-world environments.
> - Reward systems used in offline learning (e.g., step-wise distance-based rewards) are infeasible at test time, as we assume no access to ground-truth goal positions or pre-defined maps.
>
> We empirically evaluate the efficiency of the feedback system by comparing our method with the step-wise distance-based reward system used in HAMT, where the feedback is defined as the reduction in distance to the target at each step. Additionally, if the agent successfully arrives at the goal position, 2 is given as a success signal and otherwise -2 as a penalty. As we observe from the table below, **our binary episodic feedback surpasses the distance-based dense reward system, even without access to ground-truth information**. This clearly demonstrates that the **proposed feedback mechanism is simple, yet efficient and effective in improving navigation performance**.

|Feedback Strategy|SR|SPL|RGSPL|
|---|---|---|---|
|Distance-based (Dense)|63.25|42.89|28.46|
|Goal-based (Sparse)|66.49|45.38|30.75|
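The HAMT-style dense baseline in Q3 can be written out concretely. A sketch (illustrative only; the distance sequence is made up) that converts per-step distances to the goal into step-wise rewards plus the ±2 terminal signal described above:

```python
def dense_rewards(distances, success, bonus=2.0):
    """distances: distance to the goal after each step, including the start.
    Step reward = reduction in goal distance; the final step additionally
    receives +bonus on success or -bonus on failure, as in the
    HAMT-style dense baseline."""
    rewards = [distances[i - 1] - distances[i] for i in range(1, len(distances))]
    rewards[-1] += bonus if success else -bonus
    return rewards

# Agent moves from 5.0 m to 0.5 m from the goal over three steps and succeeds.
r = dense_rewards([5.0, 3.0, 1.5, 0.5], success=True)  # -> [2.0, 1.5, 3.0]
```

Note that this scheme needs the ground-truth goal distance at every step, which is exactly the information the rebuttal argues is unavailable at test time, motivating the sparse goal-based feedback instead.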
Efficient and Scalable Density Functional Theory Hamiltonian Prediction through Adaptive Sparsity
Accept (poster)
Summary: The paper presents a significant advancement in SE(3) equivariant neural networks by introducing a scalable and efficient approach for Hamiltonian prediction. Through innovative sparse gating mechanisms and an adaptive training scheduler, SPHNet achieves remarkable computational savings without sacrificing accuracy. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Experiments follow previous baseline methods. Supplementary Material: Yes Relation To Broader Scientific Literature: The Sparse TP and Pair Gates could be extended to other SE(3)-equivariant networks, benefiting a wide range of applications in computational chemistry. For example, molecular energy and force field predictions in addition to the Hamiltonian matrix prediction. Essential References Not Discussed: No Other Strengths And Weaknesses: 1. The paper addresses a critical bottleneck in SE(3) equivariant graph neural networks—high computational cost due to tensor product (TP) operations. 2. The Sparse Pair Gate filters out unimportant node pairs, reducing computational overhead. The Sparse TP Gate prunes insignificant interactions across different tensor product orders, improving efficiency while maintaining performance. 3. The proposed Three-phase Sparsity Scheduler enables stable training and convergence by progressively optimizing sparse representations, ensuring the balance between efficiency and precision. 4. Experimental results are good. But I'm curious about the effect of each module on the model performance. It would be better if authors can provide more experiments studying the contribution of each module. Other Comments Or Suggestions: N/A Questions For Authors: N/A Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank you very much for all your valuable comments, and would like to discuss the 4th question of 'Other Strengths And Weaknesses' in more detail: We conducted a series of ablation studies to examine the contribution of each module, and the results are included in Appendix B.2. As shown in Appendix Table 6 (we also list the results below), we removed the Sparse Pair gate, the Sparse TP gate, and the Vectorial Node Interaction block from the SPHNet model, respectively, to see their impact on model efficiency. We found that the two kinds of sparse gates together improve the speedup ratio from 1.73x to 7.09x, demonstrating the effect of the sparse gates on model acceleration. Also, the Sparse Pair gate and the Sparse TP gate each provide significant acceleration on their own, improving the speedup ratio from 3.98x to 7.09x and from 2.73x to 7.09x, respectively. You can find the detailed analysis in Appendix B.2. Additionally, we add a similar study here by adding the Sparse Pair gate and Sparse TP gate to the QHNet model. Specifically, we applied the Sparse Pair gate on the second Non-diagonal pair block and applied the Sparse TP gate on the Node-wise interaction blocks and both the Diagonal and Non-diagonal pair blocks. As shown in the table below, the sparse gates can effectively accelerate the training of the QHNet model and save computational consumption. We will also add this part of the experiment to the Appendix in our future version. **Table:** The effect of sparse gates on the SPHNet/QHNet models on the PubChemQH dataset.
| Model | Sparse Pair Gate | Sparse TP Gate | Vectorial Node Interaction block | Spherical Node Interaction block | *H* [$10^{-6}E_h$] | Memory [GB/Sample] ↓ | Speed [Sample/Sec] ↑ | Speedup Ratio ↑ |
|-------|------------------|----------------|----------------------------------|----------------------------------|--------------------|----------------------|----------------------|-----------------|
| SPHNet | ✓ | ✓ | 4 | 2 | 97.31 | 5.62 | 3.12 | 7.09x |
| QHNet | ✗ | ✗ | 0 | 5 | 123.74 | 22.50 | 0.44 | 1.00x |
| SPHNet | ✗ | ✓ | 4 | 2 | 94.31 | 8.04 | 1.75 | 3.98x |
| SPHNet | ✓ | ✗ | 4 | 2 | 87.70 | 6.98 | 1.20 | 2.73x |
| SPHNet | ✗ | ✗ | 4 | 2 | 86.35 | 10.91 | 0.76 | 1.73x |
| SPHNet | ✓ | ✓ | 0 | 5 | 97.08 | 8.47 | 1.08 | 2.45x |
| QHNet | ✗ | ✓ | 0 | 5 | 128.16 | 12.68 | 0.90 | 2.04x |
| QHNet | ✓ | ✗ | 0 | 5 | 126.27 | 10.07 | 0.73 | 1.66x |
| QHNet | ✓ | ✓ | 0 | 5 | 128.89 | 8.46 | 1.45 | 3.30x |

Besides, we conducted an extra ablation study to evaluate the effect of different modules in the SPHNet architecture. Specifically, the standard SPHNet model has 4 Vectorial Node Interaction blocks, 2 Spherical Node Interaction blocks, and 2 Pair Construction blocks. We removed all the sparse gates and reduced the number of these three kinds of modules to 1, respectively, and observed the model performance. As shown in the table below, we found that both the Vectorial Node Interaction block and the Spherical Node Interaction block significantly affect the model performance, indicating that the design of architectures with progressively increased irreps orders has an important positive impact on the models. Interestingly, we found that removing one Pair Construction block would not strongly affect the model accuracy, suggesting that there is actually room to further speed up the model. We will explore this further in our future work. **Table:** The effect of different modules on the SPHNet model on the PubChemQH dataset.
| Model | Sparse Pair Gate | Sparse TP Gate | Vectorial Node Interaction block | Spherical Node Interaction block | Pair Construction block | *H* [$10^{-6}E_h$] ↓ |
|--------|------------------|----------------|----------------------------------|----------------------------------|-------------------------|----------------------|
| SPHNet | ✗ | ✗ | 4 | 2 | 2 | 86.35 |
| SPHNet | ✗ | ✗ | 1 | 2 | 2 | 96.01 |
| SPHNet | ✗ | ✗ | 4 | 1 | 2 | 97.35 |
| SPHNet | ✗ | ✗ | 4 | 2 | 1 | 89.17 |
Summary: In this paper, the author proposes a new efficient equivariant operation based on Tensor Product (TP), named Sparse tensor product gate, to improve the efficiency of equivariant networks for Hamiltonian matrix prediction task. From the experiment, the proposed model achieves SOTA performance on QH9 and PubCHemQH, while improving the efficiency about 3-7 times faster than previous method. Claims And Evidence: The experiments are valid and well support the claim of this paper. Methods And Evaluation Criteria: Sound evaluations. A comprehensive benchmark on existing datasets including QH9 and PubChemQH with reasonable metrics including the MAE on Hamiltonian matrix, MAE on eigen energies, and cosine similarity on the electronic wavefunction. The proposed method greatly improves the model efficiency, which is important for the training and inference of Hamiltonian matrix prediction task. Theoretical Claims: N/A. Experimental Designs Or Analyses: Sound evaluations and experiments. Supplementary Material: N/A. Relation To Broader Scientific Literature: N/A. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strength: 1. The proposed technique is interesting and valid. From the experiments, the sparse adaptive TP can greatly improve the efficiency while the performance will not drop a lot through the ablation study in Appendix B.1. This well supports the main motivation of this paper. 2. The experimental results are strong, achieving the SOTA performance on QH9 and PubChemQH. 3. The writing and organization of this paper is clear. Weakness: 1. From the experiments, although the performance improves a lot, the reasons for such improvement have not been well discussed. Since the sparse adaptive TP mainly aims to improve efficiency, I assume the usage of such operation will not bring performance improvement. Therefore, more ablation studies should be included to discuss this. Other Comments Or Suggestions: N/A. 
Questions For Authors: Questions: 1. Would you mind sharing the source of the PubChemQH datasets? It seems they are not publicly available yet. 2. I find the time/sec is linearly decreased with the increasing of sparsity rate. How about the relationship of model performance with sparsity ratio? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments and suggestions. We address each of the comments individually below.

**Weakness:** Your assumption regarding the effect of the sparse gate is correct. As demonstrated in the ablation study presented in Table 6 of Appendix B.2, both types of sparse gates significantly improve speed with only a slight loss in accuracy. Therefore, compared to QHNet, the performance improvement in accuracy of our method primarily stems from SPHNet's architectural design. Given that SPHNet has already achieved strong performance, the minor accuracy loss due to sparsity is entirely acceptable. For more details on the ablation study of the different sparse gates, please refer to Appendix B.2 of the paper. Additionally, further experiments on applying the sparse gate to QHNet can be found in our response to the 4th Reviewer EBLX. Besides, to further address your concerns, we conducted an ablation study to evaluate the effect of different modules in the SPHNet architecture. Specifically, the standard SPHNet model has 4 Vectorial Node Interaction blocks, 2 Spherical Node Interaction blocks, and 2 Pair Construction blocks. We removed all the sparse gates and reduced the number of these three kinds of modules to 1, respectively, and observed the model performance. As shown in the table below, we found that both the Vectorial Node Interaction block and the Spherical Node Interaction block significantly affect the model performance, indicating that the design of architectures with progressively increased irreps orders has an important positive impact on the models. Interestingly, we found that removing one Pair Construction block would not strongly affect the model accuracy, suggesting that there is actually room to further speed up the model. We will explore this further in our future work.

**Table:** The effect of different modules on the SPHNet model on the PubChemQH dataset.
| Model | Sparse Pair Gate | Sparse TP Gate | Vectorial Node Interaction block | Spherical Node Interaction block | Pair Construction block | *H* [$10^{-6}E_h$] ↓ |
|--------|------------------|----------------|----------------------------------|----------------------------------|-------------------------|----------------------|
| SPHNet | ✗ | ✗ | 4 | 2 | 2 | 86.35 |
| SPHNet | ✗ | ✗ | 1 | 2 | 2 | 96.01 |
| SPHNet | ✗ | ✗ | 4 | 1 | 2 | 97.35 |
| SPHNet | ✗ | ✗ | 4 | 2 | 1 | 89.17 |

However, since our model has a very different architecture and components from other models, and the specific modules in the models are not in one-to-one correspondence, it is hard to substitute a specific module and carry out the ablation study with another model like QHNet. We are sorry that we are not able to carry out this ablation study with a different model and explain why SPHNet has such performance improvements compared to others. We would like to give it a try in the future and explore this further.

**Questions For Authors 1:** Thank you very much for your interest! We are very glad to share our data with the whole community. However, our organization requires a review process for open-sourcing data, which might take some time. We are already actively promoting this data open-sourcing process and hope to be able to share our data soon.

**Questions For Authors 2:** Thank you for your question. We have actually evaluated the relationship between model performance and sparsity ratio in Section 5.4. As shown in Figure 3, for all three datasets, the Hamiltonian MAE remained stable within a certain sparsity range. However, when the sparsity rate reached a particular threshold, we observed a significant increase in the Hamiltonian MAE, which we interpret as the upper limit of sparsity. This suggests that a suitable range of sparsity has little impact on model accuracy while significantly improving computational efficiency. We provide a detailed analysis of this in Section 5.4.
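The sparse-gate mechanism these ablations probe can be sketched as a learnable-score top-k mask: keep the (1 − k) fraction of tensor-product paths (or node pairs) with the highest gate scores and skip the rest, saving their tensor products entirely. This is an illustrative reconstruction from the paper's description (TOP(·) selecting the highest-weight elements at sparsity rate k), not SPHNet's actual code; the score values and sparsity rate below are assumptions:

```python
import numpy as np

def topk_mask(scores, sparsity):
    """Keep the (1 - sparsity) fraction of entries with the highest
    gate scores; masked-out entries would be pruned from the
    tensor-product computation."""
    n_keep = max(1, int(round((1.0 - sparsity) * scores.size)))
    keep_idx = np.argsort(scores)[::-1][:n_keep]
    mask = np.zeros(scores.size, dtype=bool)
    mask[keep_idx] = True
    return mask

gate_scores = np.array([0.9, 0.1, 0.7, 0.3, 0.5])  # one learned score per TP path
mask = topk_mask(gate_scores, sparsity=0.6)        # keep the top 40% => 2 paths
```

Because the selection is a hard top-k over learned scores, which paths survive can still change during training as the scores are updated, which is what the reviewer's question about TOP(·) touches on.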
Summary: This paper tackles the Hamiltonian prediction task. It proposes to learn a mask to select important pairs in the pair-wise interactions of both the node interaction and non-diagonal pair construction blocks. Moreover, the paper also uses similar techniques to select important paths in the tensor product during pair construction. The proposed SPHNet can reduce the computational cost. The experiments are conducted on the QH9, PubChemQH and MD17 datasets.

Claims And Evidence:
- The paper compares SPHNet and QHNet (as well as WANet) for speedup. However, the architectures of SPHNet and QHNet are not the same. As a result, the speed-up ratio may not entirely come from the sparse gates.
- Figure 3 shows the effect of sparsity rate on prediction accuracy; it would be interesting to see the effect of sparsity rate vs. computational cost for training and testing.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: The equations mainly serve to describe the method.

Experimental Designs Or Analyses:
- The result for the water molecule in Table 3 does not seem to be good. Is it because of using the sparse gates?

Supplementary Material: No.

Relation To Broader Scientific Literature: They are adequately discussed.

Essential References Not Discussed: None that I am aware of.

Other Strengths And Weaknesses:
- In Section 4.1: "TOP(·) is used to select the elements with the highest weights from the set with a given probability 1 − k" — what does this mean? Is there still randomness? Can the path selection still get updated during training when using TOP?
- In Equation 12, there are superscripts $\ell_1, \ell_2, \ell_3$ on the weights $w_{ij}$, but in Equation 9 there are none. What is the shape of $w_{ij}$?
- The proposed sparsity selection introduces additional hyperparameters, including the scheduler step $t$ and the sparsity rate. The sparsity rate may negatively affect the performance if not chosen adequately.

Other Comments Or Suggestions:
- The running title is not formatted.
- Section 3: $C \in \mathbb{R}^{n\times n}$ — is the dimension of the coefficient matrix correct?

Questions For Authors:
- Can the choices of retained paths be updated during the second phase of the sparsity scheduler?
- How are the sparsity weights initialized?
- In Equation 6, what does the superscript 0 of the inner product mean?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments and suggestions. Below are our detailed responses.

**Claims And Evidence**
1. Thank you for your question. As you noted, SPHNet’s lightweight design allows it to run 1.73× faster than QHNet on the PubChemQH dataset. However, the primary acceleration comes from the sparse gates. To isolate their effect, we included an ablation study in Appendix B.2. Besides, to further address your concerns, we tested these gates on QHNet; please refer to our answer to the 4th reviewer EBLX for the details, due to the length limitation of the response.
2. The computational cost at each sparsity level has already been included in Figure 3, where the number above each “$\times$” symbol represents the training speed at that specific sparsity level. In Appendix Figure 8, we provide a more detailed visualization of how time scales with sparsity. Additionally, we have listed the complete results for training speed and GPU memory usage at each sparsity level on the PubChemQH dataset in the [table](https://imgur.com/a/tM8STT4) below. The results indicate that as sparsity increases, both time and memory costs decrease in an approximately linear manner.

**Experimental Designs Or Analyses:**
1. We agree with your opinion that the sparse gates cause suboptimal results here. As analyzed in Section 5.3, since the water molecule is very small (only 3 atoms) compared to other molecules, there is not much room for reducing atom pairs and TP combinations. Therefore, the sparse gates may remove necessary interaction combinations within the system and cause poor results. However, we would like to note that the sparse gates are not designed for such extremely small molecules; we are more concerned with their performance on larger molecules, where they turn out to be very efficient, as shown on the large-molecule dataset PubChemQH.

**Other Strengths And Weaknesses**
1. Sorry for the confusion caused by the word "probability."
A clearer term would be "fraction." Specifically, in the second phase, TOP(·) selects the elements whose weights are within the top $(1 - k)$ fraction, with no randomness involved—selection is purely based on the learnable weights. We have revised the manuscript for clarity. Regarding path selection, the learnable weights continue to update in phase two, as they participate in subsequent operations (Equations 4 and 5). Only the selected paths' weights are updated, while TOP(·) consistently selects the highest-weighted elements. As the weights evolve, the selected paths adjust accordingly, allowing the sparse gate to gradually learn the optimal path set.
2. Thank you for the question. $w_{ij}$ is a vector in $\mathbb{R}^{k}$, where $k$ is the number of elements in the complete set $U_c = \{(\ell_1, \ell_2, \ell_3) \mid \ell_3 \in [|\ell_1 - \ell_2|, \ell_1 + \ell_2]\}$ of the tensor product, and $w_{ij}^{\ell_1, \ell_2, \ell_3}$ is a single entry of the vector $w_{ij}$. Note that in Equation 12 we select the $w^{\ell_1, \ell_2, \ell_3}_{ij}$ that lie within the set $U^{TSS}_p$.
3. We used a fixed scheduler step $t = 3$ across all datasets and experiments, as results remained stable, indicating minimal impact on performance. We recommend setting $t = 3$, but users can adjust it with minimal tuning cost. For the sparsity rate, Section 5.4 shows that performance remains stable within a reasonable range, with degradation occurring only beyond a certain upper limit, which depends on molecular size. Users can select this parameter based on our experimental results for their specific molecular sizes.

**Other Comments Or Suggestions:**
1. Thank you for pointing that out—the more precise shape should be $n \times n_0$, where $n_0$ corresponds to the number of occupied orbitals. The reason we sometimes write $C$ as $n \times n$ is that the KS equation is typically solved for all eigenstates, including virtual orbitals. We will revise this, along with the running title, in the revised version.
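To make the deterministic TOP(·) selection concrete, here is a minimal numpy sketch of a TOP-style gate; the function name, shapes, and toy weights are our own illustration, not the authors' implementation:

```python
import numpy as np

def top_gate(weights, k):
    """Keep the top (1 - k) fraction of elements ranked by their
    learnable weights; purely deterministic, no randomness involved."""
    n_keep = int(np.ceil((1.0 - k) * weights.size))
    keep = np.argsort(weights)[::-1][:n_keep]  # indices of largest weights
    mask = np.zeros(weights.size, dtype=bool)
    mask[keep] = True
    return mask

w = np.array([0.9, 0.1, 0.5, 0.7, 0.3])  # toy learnable weights
mask = top_gate(w, k=0.6)                # 60% sparsity -> keep 2 of 5
```

Because the mask is recomputed from the current weights at each step, gradient updates to the selected paths can change which elements the gate retains, matching the behavior described above.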
**Questions For Authors**
1. Please see the previous responses.
2. Thank you for raising this question. We addressed this in Section 4.1. To ensure that the combination selection is as unbiased as possible, we initialize the learnable matrix $W$ (the sparsity weights) as an all-ones vector. This means that, at the beginning, all combinations are considered to have the same importance.
3. Thank you very much for pointing this out. The result of the inner product is still an irrep feature with components of different orders, and the superscript stands for the order of the irrep feature we use (in fact, all superscripts in our paper denote the order of irrep features). However, we found a typo in the formula: the correct expression should be $<x_i, x_j>^{1:}$, where the superscript $1:$ stands for the irrep features of order greater than zero. We are very sorry about the mistake and have fixed it in the manuscript.
Summary: This paper introduces SPHNet, an SE(3) equivariant graph neural network designed to efficiently and scalably predict Density Functional Theory (DFT) Hamiltonian matrices. The core contribution is the incorporation of adaptive sparsity to address the significant computational cost associated with high-order tensor products (TP) operations in equivariant networks, which limits their application to large molecular systems. SPHNet employs two novel mechanisms: a Sparse Pair Gate to filter unimportant node pairs and a Sparse TP Gate to prune less significant interaction components within the tensor products. To manage the sparsity dynamically during training, a Three-phase Sparsity Scheduler (random, adaptive, fixed phases) is proposed to ensure stable convergence while achieving high sparsity levels (up to 70%). The main findings reported are that SPHNet achieves state-of-the-art accuracy on the QH9 and PubchemQH datasets while demonstrating significant computational improvements both in speedup and memory usage. ## Update after rebuttal Thank you for your thoughtful rebuttal. I appreciate the clarifications provided regarding your methodology. I look forward to the revised manuscript. Claims And Evidence: The main claims are: 1. SPHNet significantly improves computational efficiency (speed and memory) for Hamiltonian prediction compared to existing SE(3) equivariant models. 2. This efficiency gain is achieved through novel adaptive sparsity mechanisms (Sparse Pair Gate, Sparse TP Gate, Three-phase Sparsity Scheduler) that reduce tensor product operations. 3. SPHNet maintains or improves prediction accuracy despite the induced sparsity, achieving SOTA results on QH9 and PubchemQH datasets. The ablation study supports the claim that significant sparsity (up to 70% for PubChemQH) can be introduced without substantial accuracy loss. 
The evidence provided appears generally supportive:
* Experimental results on benchmark datasets (QH9, PubchemQH, MD17) are presented, comparing SPHNet against baselines like QHNet and WANet.
* Quantitative results are reported, claiming up to 7x speedup and 75% memory reduction.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria are appropriate for the problem of accelerating DFT Hamiltonian prediction using physics-informed machine learning.
* The core method involves introducing adaptive sparsity into an SE(3) equivariant GNN architecture. Targeting the tensor product operations, known computational bottlenecks in such networks, with sparsity is a reasonable strategy for improving efficiency. The specific gate mechanisms (Sparse Pair, Sparse TP) and the sparsity scheduler are novel contributions designed to implement this strategy effectively.
* Evaluation Criteria: The evaluation uses standard metrics:
  * MAE of the predicted Hamiltonian elements, observables, and similarity of coefficient matrix elements, compared to DFT calculations.
  * Inference speed (Samples/Sec) and GPU memory usage for assessing scalability, which is a primary goal of the paper.
  * Established datasets like QH9, PubchemQH, and MD17 are used. The larger basis set size of Def2-TZVP is important for showing scalability.

Theoretical Claims: The paper focuses primarily on algorithmic innovation and empirical validation rather than presenting novel theoretical claims or proofs within ML or DFT. The effectiveness of the Three-phase Sparsity Scheduler seems justified empirically.

Experimental Designs Or Analyses: As stated in methods and evaluation criteria, the experimental design and analyses are reasonable, with good baselines, datasets, metrics and ablation studies.

Supplementary Material: I went through the supplementary material (appendices) of the paper but did not go through the attached code.
Relation To Broader Scientific Literature: This paper fits within the active research area of applying geometric deep learning to directly predict the converged Hamiltonian under DFT. It improves the efficiency of SE(3) equivariant GNNs by reducing the tensor product operations needed through increased network sparsity. The effectiveness of this technique is shown on DFT Hamiltonian prediction given a molecule graph.

Essential References Not Discussed: From the perspective of PIML for DFT, the paper covers the key related areas reasonably well, citing major works in SE(3) equivariance, Hamiltonian prediction, and general network sparsification.

Other Strengths And Weaknesses:
* The authors show that by using the TSS-induced sparsity, the inference speed and memory usage of SE(3)-network-based Hamiltonian prediction are greatly improved while maintaining similar accuracy to baseline models.
* The reported efficiency improvements (speed and memory) are substantial (up to 7x speedup, 75% memory reduction) while maintaining competitive or SOTA accuracy, demonstrating practical value. This is especially important for the PubchemQH results.
* The paper is generally well-written and clearly structured. The methodology is explained with helpful diagrams. The experiments are comprehensive and well-documented in the main text and appendices.

I am a bit confused about the notation used in Equation (1). Shouldn't H and S be basis-size dependent ($n \times n$), $\epsilon$ be orbital-set-size dependent ($n_o \times n_o$), and C be $n \times n_o$? I apologize if I misunderstood the notation.

Other Comments Or Suggestions:
* SPHNet is a somewhat famous model used for 3D point cloud analysis (https://arxiv.org/abs/1906.11555). I think the domains are far enough apart that it can't be confused, but consider changing the name of the model.
* What do "higher coefficient rates" mean in this context? The term is used to claim even more performance gains (Page 7, above Table 3).
* Discuss the computational overhead of the sparsity gates and scheduler themselves, although this is likely small compared to the savings from reduced tensor products.
* Minor:
  * Use the draft/review LaTeX environment when submitting for review so that the line numbers are visible.
  * Page 6: Should be "The GTO orbital basis is used for MD17 and QH9 is used for ..."
  * Page 6: "Note" instead of "Noted"
  * Maintain consistent spacing for citations

Questions For Authors:
1. The selection probability for pairs *increases* with distance in the Sparse Pair Gate, which is counter-intuitive compared to typical distance cutoffs and goes against the nearsightedness principle (E. Prodan & W. Kohn, Nearsightedness of electronic matter, Proc. Natl. Acad. Sci. U.S.A. 102 (33) 11635-11638, https://doi.org/10.1073/pnas.0505436102 (2005)). Can you explain this further, and does this hold across different systems/basis sets?
2. What is the computational overhead introduced by the Sparse Pair Gate, Sparse TP Gate, and the adaptive phase of the scheduler relative to a dense model operating at the same FLOP count?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you very much for your valuable comments and suggestions. Below are our detailed responses.

**Other Strengths And Weaknesses**
Thank you for pointing that out—your notation is the more precise way to express it. $C$ should indeed be $n \times n_0$ in Equation 1, where $n_0$ corresponds to the number of occupied orbitals. We sometimes write $C$ as $n \times n$ because the KS equation is typically solved for all eigenstates, including virtual orbitals. We appreciate your keen attention to notation and will make sure to clarify this in the revised version.

**Other Comments Or Suggestions**
1. Thank you for pointing this out. We are carefully considering this, and while we may not be able to provide a definitive new name in the short term, we will update this in the subsequent versions.
2. The higher coefficient rates imply that as the molecular system grows larger, we can apply a higher sparsity rate to further accelerate the model without sacrificing accuracy. We apologize for the confusion and have revised this to 'higher sparsity rates' for better clarity.
3. As you mentioned, compared to the tensor product operation, all these operations only introduce minimal computational overhead and have little impact on the overall speed. However, this discussion is still meaningful, and we will include it in the subsequent version of the manuscript. In the three-phase sparsity scheduler, for a given unsparsified set $U$, the additional computational overhead in the first phase has a complexity of $\mathcal{O}(|U|)$, contributed by the RANDOM(·) operation. The second phase has a computational overhead of $\mathcal{O}(|U| \log{|U|})$, arising from the TOP(·) operation. Since we fix the learnable weight matrix and the selected elements, there is no additional computational overhead in the third phase. For detailed information, please refer to Equation 3.
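The three phases and their relative costs can be sketched in a toy numpy implementation; the phase boundaries `t1`, `t2`, the cache mechanism, and all names are illustrative assumptions of ours, not the paper's code:

```python
import numpy as np

def sparsity_mask(step, weights, k, t1, t2, rng, cache={}):
    """Toy three-phase sparsity scheduler.

    Phase 1 (step < t1):  RANDOM(.) selection, O(|U|).
    Phase 2 (step < t2):  TOP(.) selection by learnable weight, O(|U| log |U|).
    Phase 3 (otherwise):  reuse the last TOP selection, no extra overhead.
    """
    n = weights.size
    n_keep = int(np.ceil((1.0 - k) * n))
    if step < t1:
        idx = rng.choice(n, size=n_keep, replace=False)
    elif step < t2:
        idx = np.argsort(weights)[-n_keep:]
        cache["idx"] = idx            # remember the latest TOP selection
    else:
        idx = cache["idx"]            # fixed mask for the rest of training
    mask = np.zeros(n, dtype=bool)
    mask[idx] = True
    return mask

rng = np.random.default_rng(0)
w = np.arange(8, dtype=float)         # toy learnable weights
m_adaptive = sparsity_mask(3, w, 0.5, t1=2, t2=4, rng=rng)  # phase 2
m_fixed = sparsity_mask(10, w, 0.5, t1=2, t2=4, rng=rng)    # phase 3
```

The per-phase costs in the comments mirror the complexities quoted in the response: a random draw, a sort-based top-k, and a frozen mask with no additional work.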
For the sparse TP gate, the computational overhead comes from the element-wise multiplication of two weight vectors (Equation 5), so its complexity is $\mathcal{O}(|U_c|)=\mathcal{O}(L^3)$. For the sparse pair gate, the additional computational overhead mainly comes from the linear layer $F_p(\cdot)$ in Equation 7, with its complexity being $\mathcal{O}(d_{hidden}|U_p|)$, where $|U_p|$ is always the square of the number of atoms and $d_{hidden}$ is the hidden feature dimension. Other operations, including the inner product (Equation 6) and the weight calculation (Equation 9), are necessary operations in our framework, even without the sparse pairwise gate.

4.–7. Thank you for the comments. We have revised the manuscript accordingly.

**Questions For Authors**
1. Thank you for the question. We agree with your point. In fact, in our experiments, the number of selected short-distance pairs still constitutes the majority of all selected pairs, which aligns with the nearsightedness of electronic matter, as shown in [Figure A](https://imgur.com/a/0EGNbF8). Each bar represents the fraction of selected pairs within the range from k to k+1 Å relative to the total number of selected pairs. Therefore, when we say *“as the pair length increases, the probability of a pair being selected also rises,”* we mean that pairs at longer distances are indeed retained at a higher proportion in [Figure B](https://imgur.com/a/0EGNbF8)—though their absolute count remains significantly lower compared to short-distance pairs. For example, among 10 pairs at 25–26 Å, 6 may be selected, whereas among 300 pairs at 4 Å, 100 may be selected. There are two possible reasons for the observed tendency to retain long-distance pairs. First, long-range interactions, including electrostatic interactions and weak van der Waals forces, are crucial for accurately describing large molecules.
Consequently, incorporating such interactions could be beneficial for overall accuracy, a property that has been studied in previous research (Knörzer J, et al. https://arxiv.org/pdf/2202.06756; Li Y, et al. https://arxiv.org/pdf/2304.13542). Second, since long-distance pairs constitute only a small fraction of all pairs, once sufficient data has been collected to characterize short-range interactions, selecting more long-distance "hard samples" could contribute to improving final accuracy. Due to the lack of large molecular system datasets, we conducted a similar experiment on the QH9 dataset (Figure A-B), where we found that the selected ratio was similar to the proportion of short-distance pairs in the PubChemQH dataset. However, since the maximum atomic distance in this dataset is less than 8 Å, we did not observe a preference for retaining long-distance pairs. This is reasonable, as long-range atomic interactions typically become significant only when the interatomic distance exceeds 12 Å. As larger molecular datasets become available in the future, we hope to conduct further experiments to validate this hypothesis.
2. We have discussed this in the previous responses.

---

Rebuttal Comment 1.1: Comment: Thank you for your thoughtful rebuttal. I appreciate the clarifications provided regarding your methodology. I look forward to the revised manuscript.
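The per-distance statistics referenced in the rebuttal above (Figures A and B) amount to binning pairs by length and computing, per bin, the fraction that the gate retains. A toy sketch with entirely synthetic data (all lengths and gate decisions invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dist = rng.uniform(0.5, 26.0, size=2000)   # synthetic pair lengths in angstrom
selected = rng.random(2000) < 0.4          # synthetic gate keep/drop decisions

bins = np.arange(0, 27)                    # 1-angstrom distance bins
idx = np.digitize(dist, bins) - 1
total = np.bincount(idx, minlength=26)
kept = np.bincount(idx, weights=selected, minlength=26)
# "Figure B"-style curve: fraction of pairs kept within each distance bin
ratio = np.divide(kept, total, out=np.zeros_like(kept), where=total > 0)
```

With real gate outputs in place of the synthetic arrays, `ratio` would reproduce the per-bin retained fraction discussed in the response.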
IntLoRA: Integral Low-rank Adaptation of Quantized Diffusion Models
Accept (poster)
Summary: This paper proposes the IntLoRA quantization method, which adapts quantized diffusion models with integer-type low-rank parameters, providing inference efficiency in addition to training efficiency. The proposed IntLoRA enables the pre-trained weights to be quantized during training, and the IntLoRA weights can be seamlessly merged into the pre-trained weights to obtain the quantized downstream weights without PTQ.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: I have checked the proofs for the proposed IntLoRA quantization adaptation.

Experimental Designs Or Analyses: I have checked the soundness/validity of the quantitative comparison on different generation tasks and the comparison of training and inference efficiency.

Supplementary Material: Yes, at a glance of the code.

Relation To Broader Scientific Literature: The proposed method solves the problem of requiring additional PTQ for the adaptation of quantized diffusion models.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: Strengths: this paper is well-written and easy to follow. The proposed IntLoRA is a novel method that solves the problem of additional PTQ for the adaptation of quantized diffusion models. Extensive experiments have been conducted to verify the effectiveness of the proposed method. Besides, the code is also provided in the supplementary material. Weaknesses: It would be better to validate the effectiveness of IntLoRA on more diffusion models other than Stable Diffusion.

Other Comments Or Suggestions: (1) It would be better to also compare the inference speed of the SHIFT and MUL variants of IntLoRA. (2) Figures 4-6 should be termed qualitative comparisons rather than quantitative comparisons.

Questions For Authors: In Figure 1(a), why are the quantized merged weights in FP16?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:
> ### Results on More Diffusion Models.

Good comments! As suggested, we evaluate our IntLoRA on the FLUX.1-dev model. Since FLUX is notoriously costly to fine-tune even using LoRA, due to limited computational resources we only give the results of FP16 vanilla LoRA and our IntLoRA on 15 text-subject pairs of Dreambooth. The results are shown below.

\\begin{array}{l|ccccc}
\\hline
\text{methods} & \text{nbits} & \text{DINO} & \text{CLIP-I} & \text{CLIP-T} & \text{LPIPS} \\\\
\\hline
\text{FLUX-LoRA(FP)} & W16A16 & 0.3564 & 0.6490 & 0.2501 & 0.8383 \\\\
\text{FLUX-Ours(MUL)} & W8A8 & 0.3150 & 0.6272 & 0.2348 & 0.8372 \\\\
\\hline
\end{array}

It can be seen that the performance drop of our IntLoRA compared to the original LoRA is acceptable, while achieving inference speedup. This preliminary result shows the potential generalization of the proposed method to other diffusion model backbones.

> ### Efficiency Comparison.

In Tab. 3 of the paper, we give the forward time cost of MUL and SHIFT: 0.87 s/img for MUL, and 0.84 s/img for SHIFT. One can see that SHIFT is relatively faster than MUL. However, it is worth noting that the log2-quantized model can be further accelerated by hardware-level optimizations. Since this work focuses primarily on the algorithmic level, we leave further log2 acceleration for future work.

> ### Typos.

We will fix the typos in the captions of Figures 4-6 in the revision. Thanks for your suggestion!

> ### Responses to Other Questions.

In the weight merging of previous methods, i.e., $\mathbf{W'} = \mathcal{Q}(\mathbf{W}) + \mathbf{AB}$, $\mathcal{Q}(\mathbf{W})$ is the quantized INT8 weight, but $\mathbf{AB}$ is the low-rank FP16 weight. This arithmetic inconsistency between FP16 and INT8 forces $\mathcal{Q}(\mathbf{W})$ to be upcast back to FP16 to allow addition with the LoRA weights. As a result, the merged result $\mathbf{W'}$ is in FP16 and needs PTQ to accelerate inference.
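To make the dtype mismatch concrete, here is a small numpy sketch (our own illustration with a generic symmetric quantizer, not the IntLoRA code) of why merging an FP16 low-rank term into INT8 pre-trained weights forces the merged matrix back to FP16:

```python
import numpy as np

def quantize(w, nbits=8):
    """Simple symmetric uniform quantization to signed integers."""
    qmax = 2 ** (nbits - 1) - 1
    scale = np.abs(w).max() / qmax
    q = np.clip(np.round(w / scale), -qmax, qmax).astype(np.int8)
    return q, np.float16(scale)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4)).astype(np.float16)   # pre-trained weight
A = rng.standard_normal((4, 2)).astype(np.float16)   # LoRA factors
B = rng.standard_normal((2, 4)).astype(np.float16)

qW, s = quantize(W)
# Previous pipelines: the INT8 weight must be dequantized back to FP16
# before adding the FP16 low-rank term, so the merged weight is FP16
# again and a second PTQ pass is needed for integer inference.
merged = qW.astype(np.float16) * s + A @ B
```

Keeping the adaptation term in integer arithmetic, as IntLoRA does, is what removes this forced upcast and the extra PTQ step.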
Summary: The paper introduces IntLoRA, which uses integer-type LoRA weights to fine-tune quantized models directly, for both training and inference efficiency. To achieve this, the authors propose three novel techniques. First, the authors propose the Adaptation-Quantization Separation (AQS), which allows for the coexistence of zero-initialized gradients and a quantization-friendly distribution. Then, the Variance Matching Control (VMC) mechanism is developed to fine-tune the channel-aware variance, ensuring a favorable distribution shape for log2 quantization. Last, the Multiplicative Low-rank Adaptation (MLA) is used to allow independent optimization of the pre-trained and adaptation terms, enlarging the parameter space of LoRA fine-tuning. Extensive experimental results indicate that IntLoRA achieves state-of-the-art results on both efficiency and performance.

Claims And Evidence: The claims of this paper are well supported by evidence such as distribution visualizations for the quantized weights and the quantitative ablation results for the method's design rationality.

Methods And Evaluation Criteria:
• The proposed method makes sense. In the Method section, the motivation of each module is clearly stated before the technical details.
• The authors also propose the log2-quantized fine-tuning pipeline, which is hardware efficient.
• The evaluation metrics are commonly used in image generation tasks, such as FID and CLIP score, and the authors also give sufficient qualitative results.

Theoretical Claims: The process of quantization is well justified with the Adaptation-Quantization Separation, the Variance Matching Control, and the Multiplicative Low-rank Adaptation.

Experimental Designs Or Analyses:
• This work validates the effectiveness of IntLoRA on various diffusion generation tasks, including Dreambooth fine-tuning, ControlNet fine-tuning, and style customization, all tasks being widely adopted in previous diffusion fine-tuning works.
• The performance is noteworthy.
IntLoRA achieves state-of-the-art performance on FID and CLIP on the Dreambooth fine-tuning task, even under the more challenging integer-type low-rank weights.
• The experimental analysis is well supported by the empirical evidence.

Supplementary Material: The authors have attached supplementary material, which contains the code for reproduction.

Relation To Broader Scientific Literature: Integer-type LoRA fine-tuning without additional PTQ is almost unexplored by previous work. IntLoRA can inspire future works towards a LoRA tuning pipeline that is efficient in both training and inference.

Essential References Not Discussed: The authors give a detailed comparison with a highly related work, i.e., EfficientDM, with both functional analysis and solid experiments on the diffusion network quantization tasks.

Other Strengths And Weaknesses:
**Strengths:**
1) The paper is well-motivated, and the sufficient experiments validate the effectiveness of the proposed method.
2) The presentation is easy to follow and well-organized.

**Weaknesses:**
1) It would be better to give more discussion of the different parts of the proposed method, such as the Adaptation-Quantization Separation (AQS).
2) The relationship with other model compression methods is not clear.

Other Comments Or Suggestions: None

Questions For Authors:
1) What are the effects of the rank parameter and how is it determined?
2) What is the reason for using Variance Matching Control (VMC) to adjust the distribution of the adaptation term?
3) Can the proposed method be combined with other model compression methods, such as knowledge distillation?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal:
> ### More Discussion of Adaptation-Quantization Separation (AQS).

The main motivation of the proposed AQS is to **address the effect of the zero-initialized weights of LoRA during quantization**. Specifically, the zero-initialized weights lead to infinite results during quantization due to division by zero (please refer to Eq. 1 in the original paper). In order to maintain the zero-initialized weights while achieving a quantization-friendly distribution, we propose the AQS, which decomposes the LoRA weights into a zero-initialized part that requires gradients (corresponding to the vanilla LoRA) and a non-zero part that does not require gradients and is easy to quantize. In this way, AQS facilitates both model learning and quantization.

> ### Correlation with Other Compression Methods, e.g., Knowledge Distillation.

Our approach focuses primarily on quantization to reduce the memory footprint when fine-tuning large models. However, since quantization and other compression methods are technically orthogonal, **our approach can potentially be combined with other methods such as distillation** to achieve further computational cost reduction. Specifically, we can first use the large pre-trained model as a teacher model to obtain a small student model. Afterwards, we can apply the proposed IntLoRA to the student model to obtain quantized downstream task weights, thus realizing further speedup. Since knowledge distillation requires complete back-propagation of the model gradients, we could not finish this process due to limited computational resources. However, given the promising performance of IntLoRA on existing small models (which can be viewed as distilled student models), combining IntLoRA and knowledge distillation is practical. We will give a more detailed discussion in the revision.

> ### Responses to Other Questions.

- We assume you refer to the rank in the LoRA.
Following common practice, we determine the LoRA rank through the ablation experiments in Fig. 9 in the Appendix. Performance generally improves as we increase the rank, but the rate of growth varies. To balance this tradeoff, we choose $r=4$ in all of our experiments.
- The motivation of VMC is to adjust the distribution shape of the adaptation term for effective log2 quantization. In detail, log2 quantization requires that most of the values are distributed near zero so as to use as many log buckets as possible to reduce quantization error. As shown in Fig. 8 in the Appendix, the proposed VMC can control the distribution shape to achieve sharp peaks and light tails, thus utilizing more buckets on the logarithmic scale to reduce the quantization error.

---

Rebuttal Comment 1.1: Comment: Based on the authors' responses, my concerns have been addressed, and I can raise my score based on these considerations:
- The significance of directly obtaining the quantized merged weights is noticeable. I have noticed that the authors also demonstrated in the rebuttal that the low-rank FP matmul is even slower than INT8 dense matmul. This evidence makes the paper even stronger.
- The technique is impressive. The auxiliary matrix in AQS that addresses the zero-initialization problem, as well as the reformulation of LoRA in MLA, is novel. And these methods are well supported by empirical evidence.
- I appreciate the inclusion of more experiments on the generalization of this method during the rebuttal, which provides a more comprehensive evaluation. Given the growing popularity of combining different model compression methods for further acceleration, I recommend the authors incorporate this discussion in the revision, as it could serve as a useful reference for future researchers.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your positive feedback and raised rating score. We are delighted that our responses have addressed your concerns.
We also appreciate your recommendation to include more discussion of combining different acceleration methods. In the next version, we will provide a detailed discussion of other compression methods in relation to ours. Your valuable suggestions have significantly contributed to the improvement of our work, and we sincerely thank you once again!
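For context on the log2 quantization that VMC shapes the distribution for, here is a minimal sketch under our own assumptions (the function name, bucket range, and epsilon are illustrative, not the paper's implementation); each magnitude maps to a power of two, so a multiplication reduces to a bit-shift at inference:

```python
import numpy as np

def log2_quantize(x, nbits=4):
    """Round each magnitude to the nearest power of two.

    The exponent range acts as the set of 'log buckets'; distributions
    with sharp peaks and light tails near zero spread over more buckets
    and therefore suffer less quantization error.
    """
    exp = np.round(np.log2(np.abs(x) + 1e-12))
    exp = np.clip(exp, -(2 ** (nbits - 1)), 0)   # assumed bucket range
    return np.sign(x) * 2.0 ** exp

y = log2_quantize(np.array([0.3, -0.12, 0.8]))
```

Since every quantized value is a signed power of two, multiplying an activation by it can be implemented as a bit-shift, which is the basis of the SHIFT variant discussed in this thread.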
Summary: This work proposes a LoRA-based method to fine-tune the quantized weights of diffusion models. It consists of Adaptation-Quantization Separation (AQS), which addresses the issue of zero-initialized weights in LoRA tuning of a quantized pre-trained model, and Variance Matching Control (VMC), which determines an appropriate R to balance quantization difficulty against information retention. After using AQS and VMC to preprocess the weights, fine-tuning is performed on the adaptation term, and the result is merged into the original pre-trained term by either integer multiplication or bit-shifting to yield the quantized merged weights. The proposed method was tested on a number of diffusion-based tasks along with a couple of LLM tasks.

Claims And Evidence: The claims are somewhat confusing and may be misleading. This work is mainly focused on an adaptation method for pre-trained diffusion models. Only a couple of experiments with LLMs were presented, so it is hard to justify the scope of this work beyond diffusion models due to the lack of experiments and comparisons on LLMs. However, the introduction and the related work sections are very confusing, since many of the prior works cited were about LLMs with adaptations using LoRA variants, or about quantized diffusion models without adaptations, except for one work (EfficientDM, He et al., 2023). This confusion persists through the conclusion and makes the claims and conclusions very confusing, as if this work had solved the problems in LLMs in general. The proposed method was not fully investigated with LLMs in my view. However, the conclusion does not mention that this work is limited to diffusion models, which may mislead readers into thinking it works for all cases, including LLMs. The contributions of this work should be clearly described along with all supporting evidence.

Methods And Evaluation Criteria: This work reported performance in accuracy and what quantization was used.
However, this work did not report computation time and memory usage during fine-tuning and pre-processing, which seem important to fully understand the capability of the proposed methods. It is also important to see these results not only for the tuning stage, but also for the preprocessing stage as well as for the final merging stage. It seems that the proposed method has a quite heavy preprocessing stage with AQS and VMC.
Theoretical Claims: N/A
Experimental Designs Or Analyses: It is unclear why the selected experiments require quantization. Subject-driven generation and style-customized image generation do not seem to have large datasets (25 subjects, 18 styles), so it would be great if this work argued for, justified, and reported the reasons to perform these tasks with quantized models, along with clear, realistic evidence for them.
Supplementary Material: I did not review the supplementary material since the main paper was sufficient to assess this work - it was clear.
Relation To Broader Scientific Literature: This work was very confusing due to mixing references about LLMs with adaptation, diffusion models without adaptation, and so on. It would be great if this work focused on a single topic, justified and investigated it with full experiments, and drew concrete conclusions without over-claims. Alternatively, this work should focus on LLMs like other related works on LoRA variants to clearly show its advantage over prior works.
Essential References Not Discussed: 1) It seems that this work missed the most important work by Han Guo et al., LQ-LORA: LOW-RANK PLUS QUANTIZED MATRIX DECOMPOSITION FOR EFFICIENT LANGUAGE MODEL FINETUNING, ICLR 2024. Although the present submission is not about LLMs, LQ-LoRA addressed the issue of zero-initialization of LoRA by proposing a matrix decomposition, which might be related to the proposed method in this work. Comparing with this work seems critical for me to properly assess this submission. There is also another work called GPTQ-LoRA by Chai et al.
See the above work for more info. Thus, it seems very important to properly discuss and compare with this work. 2) While the fundamental basis of this work is in diffusion models and their adaptations, this work failed to properly cite and discuss these works. Without proper justification of quantized adaptation, it will be difficult to appreciate the current work. For example, subject-driven generation does not have to fine-tune the LoRA weights. See the following work: Rinon Gal et al., AN IMAGE IS WORTH ONE WORD: PERSONALIZING TEXT-TO-IMAGE GENERATION USING TEXTUAL INVERSION, ICLR 2023. This work achieved subject-driven generation by fine-tuning only a single token, which is much smaller than the LoRA weights. For cases like this, it will be very hard to justify the necessity of using quantized models for fine-tuning. Thus, this work should survey the literature on adapting diffusion models to downstream tasks, to see if the proposed method is indeed useful for many cases.
Other Strengths And Weaknesses:
- Does R introduce additional memory? If so, then the goal of quantizing diffusion models may not be satisfied well due to increased memory usage. It seems that R is quite large, actually the same size as W, so comparing with other methods in Table 1 and other tables may not be fair. Table 1 simply reports nbits, which cannot account for this additional memory, and thus the proposed method may use more memory. Therefore, please add info about actual GPU memory usage, the computation time of pre-training, fine-tuning, and merging, and so on.
- Sigma_R in Eq. (6) seems important to determine. How can we guarantee that the optimal one exists? How sensitive is it to other tasks and baseline pre-trained models? These properties do not seem to be well-investigated since similar baselines were used (SD and SDXL). Introducing an additional parameter on top of the rank of LoRA seems undesirable.
Other Comments Or Suggestions:
- It is unclear how important it is to solve the problem of fine-tuning quantized diffusion models, so it will be important to justify it with concrete examples and applications in the introduction.
- It is unclear if Figure 1 is accurate: A hat and B hat are INT4, but tuning with FP16 activations may require floating-point operations. Figure 1 should accurately reflect these facts if they are true.
- I strongly recommend revising this work to clearly differentiate among LLM works with adaptations, diffusion models without adaptations, and so on, and then to clearly indicate the contribution of this work.
- In Figure 3, is the Adaptation Term also INT4 or FP16? Is the fine-tuning then operating in FP16?
Questions For Authors:
- Was there any divide-by-zero issue in Eq. (3) in practical cases?
Code Of Conduct: Affirmed.
Overall Recommendation: 2
Rebuttal 1: Rebuttal:

> ### Why Introduce Many LLM Related Works?

Thanks for your comment! Although there is a lot of work on Diffusion quantization, **very little work** explores the adaptation of quantized diffusion, which is also recognized by Reviewer wfKN. Therefore, we introduce related methods in quantized LLM adaptation to **ensure a comprehensive baseline comparison**.

> ### Clarification of the Claims.

Consistent with the title, this work mainly focuses on Diffusion models. The experiments on NLP tasks are **preliminary explorations** to answer potential concerns about the method's generality. As suggested, we will make the relationship between Diffusion and LLM clearer to avoid any misleading claims or over-claims.

> ### Efficiency of Different Stages.

In Tab.3, we give the training speed and memory usage during the **fine-tuning stage**. Our IntLoRA is similar to QLoRA, but we can directly obtain the quantized merged weights. For the pre-processing and post-processing stages, we give the time cost as follows.

| stage | pre-processing | fine-tuning | post-processing |
|-------|----------------|-------------|-----------------|
| time  | 28.8s          | 128.2s      | 0.5s            |

The **pre-processing** accounts for 18% of the total time. However, this cost **can be shared** across different tasks. For **weight merging**, the latency is negligible.

> ### Justification of Quantized Adaptation & Why Dreambooth needs Quantization & Comparison with Text-Inversion.

Our quantized adaptation enjoys benefits in **both tuning and inference**. Specifically, 1) The low-bit adaptation **reduces GPU usage** during tuning. Although Dreambooth involves small samples, it still struggles when fine-tuning large models. For example, **loading FP16 FLUX consumes over 40GB, let alone fine-tuning with Text-Inversion or LoRA**. In contrast, our 4-bit quantized IntLoRA reduces the loading memory to <15GB, facilitating tuning on consumer-level GPUs.
2) The quantized downstream model **boosts inference efficiency**. As shown in the `Response to Reviewer 1Zzg`, the INT GEMM is even more efficient than FP low-rank matmul, allowing for fast inference. As a comparison, Text-Inversion has the same latency as the pre-trained large model. As shown in Tab.1, our IntLoRA can achieve acceleration with negligible performance loss. We will include more discussion and related works in the revision!

> ### Comparison with LQ-LoRA and GPTQ-LoRA.

Both **LQ-LoRA** and our IntLoRA share the idea of avoiding zero-initialized low-rank weights. But they have several essential differences. 1) The **formulation is different**. LQ-LoRA uses the decomposition $\mathbf{W'} = \mathcal{Q}(\mathbf{W}) + \mathbf{AB}$ to achieve non-zero LoRA weights, while our IntLoRA introduces the auxiliary matrix $\mathbf{R}$ and uses $\mathbf{W'} = \mathcal{Q}(\mathbf{W} + \mathbf{R}) - \mathbf{R} + \mathbf{AB}$ for this goal. 2) LQ-LoRA requires **multiple iterations** to search for the optimal approximation, while our IntLoRA does not. 3) We also **compare with LQ-LoRA on Dreambooth** as follows. One can see that our IntLoRA achieves better performance.

| methods | nbits | DINO | CLIP-I | CLIP-T | LPIPS |
|---------|-------|------|--------|--------|-------|
| LQ-LoRA  | W8A8 | 0.4056 | 0.6624 | 0.2824 | 0.8126 |
| Ours-MUL | W8A8 | 0.4498 | 0.6882 | 0.2858 | 0.8062 |
| LQ-LoRA  | W4A8 | 0.4022 | 0.6797 | 0.2680 | 0.8198 |
| Ours-MUL | W4A8 | 0.4242 | 0.6913 | 0.2710 | 0.8181 |

As for **GPTQ-LoRA**, it appends LoRA weights to GPTQ-quantized models, which requires an additional PTQ pass for inference deployment. Although GPTQ-LoRA was not accepted for publication and is not open-source, we are happy to add a related discussion in the revision.

> ### The Cost of R & Ablation on sigma_R.

- As stated in Line215, the $\mathbf{R}$ can be **generated on-the-fly** under the same random seed, and **can be deleted once used**.
The performance in all tables already includes the effects of $\mathbf{R}$.
- Since $\mathbf{R}$ is first normalized and then rescaled by $\alpha$, the ablation of $\alpha$ in Fig.7 can reflect the role of $\sigma_R$. One can see **the performance is robust** when $\alpha > 1.4$. We adopt a fixed $\alpha = 1.5$, i.e., the same $\sigma_R$, in all experiments and the results are satisfactory.

> ### Clarification on Fig.1 & Dtype of Adaptation-Term.

Similar to common QAT practice, the weights and activations (and the Adaptation term) during training are "simulatively quantized" in FP16 dtype, for accurate gradient backpropagation. However, **inference is strictly in INT dtype (or 2^X format for SHIFT).** Fig.1 depicts the **post-tuning weight merging stage**, so it is correct since both W and A are INT. We will clarify this in the revision.

> ### Solution to Divide-by-Zero.

Because $\mathbf{W}_{round}-z$ is ensured to be an integer, we replace the zeros with 1 and then zero-mask the division result at the corresponding slots.
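A minimal NumPy sketch of this replace-then-mask trick (toy values and hypothetical variable names; the actual implementation may differ):

```python
import numpy as np

# Toy integer denominators W_round - z; some slots are exactly 0.
w_minus_z = np.array([4, 0, -3, 0, 7])
numerator = np.array([8.0, 5.0, 6.0, 2.0, 14.0])

# Step 1: replace the zeros with 1 so the division is always defined.
safe_denom = np.where(w_minus_z == 0, 1, w_minus_z)
result = numerator / safe_denom

# Step 2: zero-mask the result at the slots whose denominator was 0.
result = np.where(w_minus_z == 0, 0.0, result)
print(result)  # elements: 2, 0, -2, 0, 2
```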
Summary: This paper proposes IntLoRA, which allows for seamless weight merging after efficient low-bit parameter-efficient fine-tuning (PEFT). The paper is motivated by the observation that existing low-bit PEFT (e.g., QLoRA) requires an additional round of PTQ due to a mismatch in precision between pre-trained and (low-rank) adapter weights. Towards this goal, the authors proposed various techniques like task-agnostic auxiliary weights, variance matching control, and multiplicative/bit-shifting LoRA. Finally, the authors demonstrate the compute/memory efficiency of IntLoRA as well as the superior quality of images generated from diffusion models trained with IntLoRA against several popular baselines.
Claims And Evidence: Technical claims/evidence regarding IntLoRA look impressive to me. However, I am not entirely sure if PTQ is necessary in, e.g., QLoRA. Given that adapter weights are generally much smaller than pre-trained weights, I believe the additional compute cost from the adapter forward-pass at inference time can be completely hidden by overlapping its computation with the forward-pass from pre-trained weights. In theory, this of course incurs more FLOPs and GPU memory usage from loading adapter weights, but this can be quite marginal compared to memory/compute costs from pre-trained weights in practice. If the authors can demonstrate the significance/necessity of weight-merging in general, it would make the paper stronger.
Methods And Evaluation Criteria: The proposed techniques--auxiliary weights, multiplicative/bit-shifting LoRAs, and VMC--are all well-motivated. Furthermore, the authors evaluated the effectiveness of IntLoRA in terms of compute/memory efficiency and image generation quality against reasonably chosen baselines.
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes. They look good to me.
Supplementary Material: I appreciate that the authors provided more qualitative analysis (i.e., image/text generation quality) in the Appendix.
Relation To Broader Scientific Literature: N/A
Essential References Not Discussed: I believe the authors have cited the major relevant work. However, I am not very familiar with quantization research, so I would count on other reviewers' opinions.
Other Strengths And Weaknesses: See "Claims And Evidence."
Other Comments Or Suggestions: See "Claims And Evidence."
Questions For Authors: See "Claims And Evidence."
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> ### The significance of weight merging

Good comments! In fact, INT-type matrix multiplication is extremely efficient in the highly optimized GEMM kernels on modern GPUs, and we demonstrate below that **INT8 matmul is even faster than FP32 low-rank matrix multiplication**. Given a weight matrix $\mathbf{W} \in \mathbb{R}^{M\times N}$, activation $\mathbf{X} \in \mathbb{R}^{N \times L}$, and the low-rank decomposition $\mathbf{W = AB}$ (where $\mathbf{A} \in \mathbb{R}^{M \times r}, \mathbf{B} \in \mathbb{R}^{r \times N}$), we adopt the shapes from SD-1.5, i.e., $M = N = 320, L = 4096, r=4$. We use an INT8 GEMM CUDA kernel to calculate $\mathcal{Q}(\mathbf{W})\mathcal{Q}(\mathbf{X})$ ($\mathcal{Q}$ denotes the quantization operator), and use the common FP32 `torch.matmul` in PyTorch to compute $\mathbf{ABX}$. The experimental results on an NVIDIA A5000 GPU are as follows:

| setups | runtime | GPU costs | FLOPs |
|--------|---------|-----------|-------|
| INT8 GEMM   | 48.71 ms | 1.34MB | 0.8388G |
| FP32 matmul | 52.16 ms | 5.01MB | 0.0209G |

Although full-rank matrix multiplication has 40x higher FLOPs than its low-rank counterpart, the INT matrix multiplication shows even faster speed and a smaller GPU footprint due to the quantized INT8 type on integer-optimized GEMM kernels. From the above analysis, **the cost of the low-rank FP32 matmul is not negligible compared to that of the INT8 pre-trained weights**. Therefore, it makes sense to apply the proposed IntLoRA to directly obtain the quantized merged weights.
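The FLOPs column above can be reproduced from the stated shapes; a quick sanity check in plain Python, assuming the usual convention of 2 FLOPs per multiply-accumulate:

```python
# Shapes from SD-1.5 as stated in the rebuttal.
M = N = 320   # weight matrix W is M x N
L = 4096      # activation X is N x L
r = 4         # LoRA rank

# Full-rank product W @ X: one multiply-accumulate per (m, n, l) triple.
full_rank_flops = 2 * M * N * L

# Low-rank product A @ (B @ X): two thin matmuls through the rank-r bottleneck.
low_rank_flops = 2 * r * N * L + 2 * M * r * L

print(f"full-rank: {full_rank_flops / 1e9:.4f} GFLOPs")   # ~0.839 G
print(f"low-rank:  {low_rank_flops / 1e9:.4f} GFLOPs")    # ~0.021 G
print(f"ratio: {full_rank_flops / low_rank_flops:.0f}x")  # ~40x
```

Both numbers match the table (up to rounding), confirming the roughly 40x FLOP gap that the INT8 kernel nevertheless overcomes in wall-clock time.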
AxBench: Steering LLMs? Even Simple Baselines Outperform Sparse Autoencoders
Accept (spotlight poster)
Summary: The paper proposes AXBENCH, a benchmark for evaluating steering and concept detection methods in LLMs using synthetic data.

## Data generation

For data generation, AXBENCH does the following:
* Start from a list of natural language descriptions of concepts; they used SAE concept lists for GemmaScope.
* Training and evaluation data are then generated as follows:
  * Given a concept, the LLM is prompted to select, from the three genres (text, code, and math), those related to that concept.
  * Given a genre, they randomly select seed instructions from an instruction pool belonging to that genre.
  * Three types of examples are generated:
    * Positive examples: prompt the LLM to respond to the instruction sampled for that genre while incorporating the concept c.
    * Negative examples: prompt the LLM to respond to the instruction sampled for another genre.
    * Hard negative examples: find contrasting concepts and use these contrasting words in sentences while ensuring the words are used in their alternative meaning.

## Evaluation:

### Concept detection:
The goal is to detect whether the concept can be found in the hidden layer of the model; this is done by training a concept classifier on a hidden representation (extracted by a given method) using the dataset described previously and evaluating it on a held-out test set.

### Model steering:
The goal is to measure the effectiveness of controlling the LLM output using a given method. They evaluate it as follows:
* For each concept, sample 10 instructions from Alpaca-Eval.
* Prompt the LLM to generate a response while intervening on its forward pass in-place using one of the steering methods.
* Use another LLM to measure the quality of the generated output after interventions by taking the harmonic mean of the following scores, each in [0, 2]:
  * Concept score: how well the concept is incorporated into the response.
  * Instruct score: how well the response relates to the instruction.
  * Fluency score: how fluent the response is.
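The harmonic-mean aggregation of the three judge scores can be sketched as follows (a hypothetical helper, not the paper's code; treating any zero sub-score as collapsing the total to 0, which is the limiting behavior of the harmonic mean):

```python
def steering_score(concept: float, instruct: float, fluency: float) -> float:
    """Harmonic mean of the three [0, 2] judge scores.

    A zero on any axis (e.g. a disfluent response) zeroes the whole score,
    which is why the harmonic mean is stricter than the arithmetic mean.
    """
    scores = (concept, instruct, fluency)
    if min(scores) == 0:
        return 0.0
    return len(scores) / sum(1.0 / s for s in scores)

print(steering_score(2, 2, 2))  # 2.0: perfect on all three axes
print(steering_score(2, 2, 0))  # 0.0: a fluency failure wipes out the score
print(steering_score(1, 2, 2))  # 1.5: vs. an arithmetic mean of ~1.67
```

This illustrates why the reviewer calls the harmonic mean "not super obvious but makes sense": a method cannot trade fluency away for concept strength.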
### Methods:
The paper compared the following methods:
* Difference-in-means: calculates $w_{\text{DiffMean}}$, the difference in means of hidden representations between the positive and negative class. The detection score is the dot product between $w_{\text{DiffMean}}$ and the hidden representation; for steering, $w_{\text{DiffMean}}$ is added to the hidden representation.
* Principal component analysis: subtract the mean of a given class H from each h, then find the top principal component $w_{\text{PCA}}$, the unit vector that captures the largest variance along its direction. For concept detection and intervention, the same procedure as in difference-in-means is used.
* Linear artificial tomography: compute PCA on the pairwise activation differences; $w_{\text{LAT}}$ captures the dominant direction of variation among the activation differences. For concept detection and intervention, the same procedure as in difference-in-means is used.
* Linear probe: learn a linear layer to classify the concept given the hidden layer. For concept detection and intervention, the same procedure as in difference-in-means is used.
* Supervised steering vector: learn an intervention that maximizes the LM probability of a given response. This is similar to supervised fine-tuning, but here they learn a vector instead of changing the weights of the model itself.
* Rank-1 representation finetuning, ReFT-r1 (**this method was introduced by the paper**): learns concept detection and steering on supervised data by combining the training objectives of linear probing and supervised steering.
* Sparse autoencoders: pretrained SAEs from GemmaScope were used.
* SAEs with ROC AUC selection: compute the ROC AUC of each SAE feature over the training dataset given the true labels, and select the best feature by this metric, i.e., instead of relying on the concept labels from the pretrained SAE, find the SAE features most correlated with the label and use them for concept detection and control.
* Gradient-based baselines (for concept detection only):
  * Train a linear classification head on the token representation to predict the ground-truth concept-presence class label y.
  * For an evaluation sentence x, the LM generates hidden representations h over n tokens.
  * Calculate the gradient of the output classification head with respect to each hidden representation; this yields token-level importances.
  * For concept detection, they then use max-pooling to get sequence-level predictions.
* Prompting: for concept detection, an LLM is used to evaluate whether the output of the prompt contains the concept. For model steering, they use an LLM to engineer a prompt given a concept, which they then use to steer the local model by prepending it to the actual instruction.
* Finetuning: here they look into full-parameter supervised fine-tuning, LoRA, and LoReFT.

## Models:
Evaluation was done on two open models, Gemma-2-2B-it and Gemma-2-9B-it.
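The difference-in-means baseline summarized above is simple enough to sketch directly (NumPy, toy shapes; all names and values here are illustrative, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16                                    # hidden size (toy)
H_pos = rng.normal(1.0, 1.0, (100, d))    # hidden states on positive examples
H_neg = rng.normal(0.0, 1.0, (100, d))    # hidden states on negative examples

# Direction: the difference of the two class means.
w_diffmean = H_pos.mean(axis=0) - H_neg.mean(axis=0)

# Concept detection: score a hidden state by its dot product with the direction.
h = rng.normal(0.0, 1.0, d)
detection_score = h @ w_diffmean

# Steering: add the direction (scaled by a factor alpha) to the hidden state.
alpha = 4.0
h_steered = h + alpha * w_diffmean
```

The PCA, LAT, and linear-probe baselines differ only in how the direction vector is obtained; detection and steering reuse the same dot-product and addition steps.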
Theoretical Claims: N/A
Experimental Designs Or Analyses: Yes
Supplementary Material: I have skimmed over the supplementary material.
Relation To Broader Scientific Literature: When SAEs were introduced, they seemed like a solution for most interpretability problems. This paper evaluates the claims in a systematic way, comparing SAEs with supervised dictionary-learning methods and showing that for both concept detection and concept steering, SAEs seem to be behind. The paper offers a good benchmark that can be used by future work to compare newly developed interpretability methods.
Essential References Not Discussed: No
Other Strengths And Weaknesses: Other Strengths
- The use of the harmonic mean for measuring model steering is quite interesting (not super obvious, but it makes sense for this evaluation).
- The benchmark is very comprehensive in terms of the methods evaluated, including most state-of-the-art methods and popular intervention methods.
- The paper offers useful and unintuitive insights like "better classification does not directly lead to better steering."
Other Comments Or Suggestions:
## Minor:
* Please define $n$ in the Section 4 notation.
Questions For Authors:
- How do you construct the instruction pool related to a given genre (lines 151 and 152)?
- The synthetic dataset constructed in Section 3.1 is only used for concept detection, not model steering, correct?
- The procedure for concept detection with gradient-based methods described in Appendix H doesn't really make sense; I am not sure how max-pooling on token importance for a given concept can be used for concept detection.
- For prompting, it's unclear why you needed different methods for concept detection and concept steering; it seems that both are somewhat the same task.
- In line 285, you mentioned, "we sample 10 instructions. We generate up to 128 tokens for each instruction over 14 steering factors." What does it mean to steer over 14 steering factors?
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your comments! We address them point-by-point below. **In our responses to other reviewers, we provide additional human evaluations on our LLM judges and updated, much higher quality SAE concepts**. > **Q1**: Please define $n$ in section 4 notation. **A1**: Thank you for the suggestion! In Section 4, we use $n$ to denote the number of tokens in a given sequence. We will clarify this in our next revision. > **Q2**: How do you construct the instruction pool related to a given genre (lines 151 and 152)? **A2**: Due to space limitations, we provide details in Appendix I linked from section 3.1. For text-based instructions, we sample from Dolly-15K. For code-based instructions, we sample from a collection of Python-code puzzles formatted in an Alpaca-style (i.e., instructions paired with corresponding responses). For math-based instructions, we sample from GSM8K. With additional page allowance, we will ensure these details are included in the next revision. > **Q3**: The synthetic dataset constructed in Section 3.1 is only used for concept detection, not model steering, correct? **A3**: Yes, that is correct—it is used exclusively for concept detection. For model steering, we use existing instructions and concepts. > **Q4**: The procedure for concept detection from gradient-based methods described in Appendix H doesn't really make sense. I am not sure how using max-pooling on token importance for a given concept can be used for concept detection. **A4**: Thank you for raising this point! We agree that aggregating gradients at the sequence level might provide additional insights into gradient-based sensitivity methods. The main reason for applying max-pooling for IxG and IG is to maintain consistency with the setup used in other methods, ensuring a fair comparison. If resources permit, we will consider adding IxG and IG baselines that aggregate sequence-level gradients via sum or mean in the next revision. 
> **Q5**: For prompting, it's unclear why you needed different methods for concept detection and concept steering in the prompting case—it seems that both are somewhat the same task. **A5**: Thank you for the question. For concept detection, we use a template-based prompt asking the model, “Do you think the concept is incorporated in the given sentence?” (see Appendix J.3 for the full prompt). For model steering, we use gpt-4o-mini (acting as our prompt engineer) to generate a steering prompt (e.g., “You must include terms related to planting Apple trees in your responses. Here are some examples...”), which is quite different from our concept detection prompt. > **Q6**: In line 285, you mentioned, "we sample 10 instructions. We generate up to 128 tokens for each instruction over 14 steering factors." What does it mean to steer over 14 steering factors? **A6**: Thank you for raising the question. For model steering methods, we follow an existing paradigm in which we insert activation addition interventions into the model’s forward pass as follows: $ h_{Steer} = h_{Original} + \alpha \cdot w_{Steer} $ Here, $\alpha$ represents the steering magnitude, and $w_{Steer}$ is the learned steering vector for a given method. As suggested by previous work, $\alpha$ is treated as a hyperparameter, and different values (steering factors) must be tested to select the optimal one for final evaluation. In our work, we use 14 different settings for $\alpha$ across all methods, and for each setting, we sample 10 instructions to steer. Due to space limitations, the discussion on steering factors is included in Appendix K. With additional page allowance, we will include further details in the main text in our next revision. --- Rebuttal Comment 1.1: Comment: Thank you for the response and for the clarification. My recommendation remains as is; I think it's a very good benchmark and recommend acceptance.
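The activation-addition intervention from A6, $h_{Steer} = h_{Original} + \alpha \cdot w_{Steer}$, swept over a grid of steering factors, can be sketched as follows (NumPy; the actual 14 factor values live in Appendix K, so the grid below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8
w_steer = rng.normal(size=d)      # learned steering vector (toy stand-in)
h_original = rng.normal(size=d)   # one hidden state from the forward pass

# Sweep alpha over 14 illustrative settings, as in the evaluation protocol;
# each setting yields a differently-steered hidden state.
steering_factors = np.linspace(0.0, 13.0, num=14)
steered_states = [h_original + alpha * w_steer for alpha in steering_factors]
```

In the benchmark, each of the 14 resulting models is evaluated on the sampled instructions and the best-performing $\alpha$ is kept per method.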
Summary: This work develops a benchmark for testing different concept detection and model steering methods in LLM representation spaces. They test many different types of methods such as SAEs, finetuning, and prompting on their benchmark, including novel methods and novel applications of existing methods. As noted in the title, SAEs perform less well than many other methods, especially in model steering.
## Update after rebuttal
The authors have addressed points raised in my rebuttal, and included nice additional empirical results for human eval, zero-shot prompting, and different SAE labels. These are nice additions to an already great paper, and I recommend acceptance.
Claims And Evidence: Yes, the submission uses solid empirical evidence to support its claims.
Methods And Evaluation Criteria: The dataset creation procedure and the steering / concept detection methods tested make a lot of sense. I would like to see more examples or other quality checks of generated data / judged responses. For instance, to what extent do the judgements agree with some trusted judgements (human or otherwise)? Many concept detection / steering methods are considered, and a novel method is even developed. This is impressive, especially given that slight modifications have to be made to some of them, and many different hyperparameters have to be searched.
Theoretical Claims: n/a; this is mostly an empirical work.
Experimental Designs Or Analyses: I read the experimental details in the main paper, and some of the details in the supplementary (e.g. on hyperparameters). The experimental setup seems reasonable and good.
Supplementary Material: I skimmed through most of the Appendix, but did not look into the many experimental results presented there. Appendix A is a great idea, and I found the data examples and prompts used illustrative.
The discussion in the hyperparameter Appendix is also useful, and shows that the authors did try to do solid hyperparameter tuning for all methods, and included caveats about their tuning (e.g. one steering factor for all prompts).
Relation To Broader Scientific Literature: This work is highly relevant to the mechanistic interpretability and model steering literature. It calls into question the utility of certain mechanistic interpretability techniques in model steering and even concept detection.
Essential References Not Discussed: n/a
Other Strengths And Weaknesses:
Strengths:
* Many other interesting related experiments in the appendix, such as learning to predict new concept subspaces.
* Ablations make the work more complete, e.g. comparing addition vs. clamping for SAE features, ablating $\alpha$ (it is interesting that high and low $\alpha$ have similar effects).
Weaknesses:
* Would be good to be clearer on the details of the prompting baseline in the main paper. This two-stage prompting technique is probably a bit more powerful than zero-shot prompting. It would be great to see how zero-shot prompting with a simple template works.
* Would prefer to see more examples as in Appendix N. The concept shown there is a bit vague, e.g. the first ReFT-r1 "failed to inject concept" response could arguably have a concept score of 2.
Other Comments Or Suggestions: n/a
Questions For Authors: Could you comment on concept discovery? And for instance the utility of SAEs or other methods for this? For instance, the concepts you consider in this paper are SAE features in the first place. Also, how would you expect the fact that concepts are taken from Gemma SAE features to affect your empirical results?
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your comments! We address them point-by-point below. **We supply preliminary human evaluations as well as results for a new prompting baseline.** > **Q1: Performing a proper human evaluation to ground AxBench.** **A1**: Great suggestion! We conducted a preliminary human evaluation by recruiting 5 participants, each of whom rated 30 steering generations from different methods (e.g., Prompt, ReFT‑r1, SAE) using the same scoring system as our LLM judges. We then computed two types of Pearson correlation coefficients to assess agreement: - **Human-Human Agreement:** For each pair of human participants, we computed the Pearson correlation between their ratings, applied Fisher’s Z transformation, averaged them in the transformed space, and converted the result back to a Pearson correlation coefficient, yielding a score of **0.57**. - **LLM-Human Agreement:** We also computed the Pearson correlation between the LLM’s ratings and each human’s ratings, applied Fisher’s Z transformation, and averaged these values across the 5 participants, yielding a score of **0.58**. These measures indicate that the LLM behaves much like just another human participant in terms of rating consistency. Given that steering is a highly subjective task about which individuals frequently disagree, these strike us as high correlation numbers overall. Moreover, prior work on similar annotation tasks [[1]](https://arxiv.org/abs/2406.06369)[[2]](https://arxiv.org/abs/2204.00447) has reported correlations of at most 0.6, further validating our findings. We have uploaded our anonymized human annotation interface [here](https://tinyurl.com/2xyd44d). > **Q2**: Would be good to be clearer on the details of the prompting baseline in the main paper. It would be great to see how zero-shot prompting with a simple template works. **A2**: This is a great suggestion! With additional page allowance, we will further clarify our prompting baseline in the main text. 
We also ran additional experiments with zero-shot prompting, using a template-based approach that directly asks the model to incorporate a concept in its response. Below are the results: | Method | 2B L10 | 2B L20 | 9B L20 | 9B L31 | Avg. | |----------------------|-------------------|-------------------|-------------------|-------------------|-------| | Prompt | 0.731 | 0.744 | 1.081* | 1.062* | 0.905*| | Prompt (Zero-shot) | 0.735 | 0.727 | 1.051 | 1.016 | 0.882 | where * means statistically significant. As expected, zero-shot prompting performs worse than LLM-aided prompting. The gap between these two variants is surprisingly small, indicating that instruction-tuned models can be easily steered without sophisticated prompt engineering. > **Q3**: Would prefer to see more examples as in Appendix N. **A3**: We will provide additional examples and improve their quality in our next revision. > **Q4**: Could you comment on concept discovery? **A4**: Although SAEs lag far behind supervised methods in empirical results, we believe they serve as excellent “hypothesis generators” for concept discovery in an unsupervised manner. However, these hypotheses must be causally verified through intervention-driven experiments, as we did for model steering in AxBench. The concepts identified by SAEs can naturally become learning targets for supervised methods, which yield better subspaces. On the other hand, supervised methods can utilize any concept list—not just those derived from SAEs. > **Q5**: How would Gemma SAE features to affect your empirical results? **A5**: Great point! We agree that the labels from the auto-interpretability pipeline contain noise. To validate our pipeline, we ran preliminary experiments using a set of high-quality SAE labels released recently [[1]](https://arxiv.org/abs/2501.08319) for both SAE and ReFT‐r1, and we observed similar performance patterns with the Gemma-2-2B model (i.e., ReFT-r1 is significantly better than SAE). 
Interestingly, SAE steering performance increased slightly, suggesting that better feature annotations yield improved steering results. Below are the results: | Method | 2B L10 | 2B L20 | |--------------------|-------------------|-------------------| | SAE (original) | 0.175 | 0.175 | | SAE (new) | 0.25 | 0.25 | | ReFT‐r1 (original) | 0.55 | 0.50 | | ReFT‐r1 (new) | 0.59 | 0.45 | Additionally, SAE’s performance on concept detection increased from 0.75 to 0.85 for both layers. In summary, **higher quality concept labels improve SAE’s performance without closing the gap to other supervised methods**. We will incorporate these newer findings into our next revision. Note that the original and new results are based on around 20 concepts. --- Rebuttal Comment 1.1: Comment: We thank the authors for their thorough rebuttal. These additional results (human eval, zero-shot prompting, and different SAE labels) add to the paper. I think this is a great paper, and recommend acceptance.
Summary: This paper proposes a new large-scale benchmark for steering and interpretability, with reported results on Gemma2. They find that prompting outperforms all existing interpretability and steering methods, followed by fine-tuning, for steering. They also find that SAEs are not competitive for either task. The authors propose a novel adaptation of Representation Finetuning called ReFT-r1 for steering models in an interpretable manner, with results validating its utility. Finally, the authors publicly release trained supervised dictionaries along with their benchmark. Claims And Evidence: The claims made in the paper are supported by very clear and convincing evidence. Methods And Evaluation Criteria: The proposed benchmark dataset along with the evaluation metrics are highly intuitive for the task at hand. - One concern is that the features chosen are the labels the authors found for the evaluated SAEs on Neuronpedia. However, these SAEs were labeled with an (admittedly barely) outdated auto-interpretability method. Thus, many of the labels may have been erroneous, not specific, or generally irrelevant; however, they were still the features by which all methods were compared. It would be beneficial for the authors to further discuss the limitations of these features, and whether other concepts may be important or interesting to consider and evaluate as well. - Furthermore, the authors may want to include discussion of how the choice of layer may affect the results. Perhaps intervention in later layers of the model would have been more effective after the model has transitioned to the next token prediction task and can be steered towards predicting tokens related to the desired behavior. - Finally, the authors may want to elaborate on how feature sets should be generated for training supervised dictionaries without having to first train an SAE, label it, and then use those labels as the feature set. 
Is there some unsupervised method to generate relevant features in an automatic manner? Theoretical Claims: This is not applicable as no theoretical claims or proofs were given in the paper. Experimental Designs Or Analyses: The soundness and validity of the experimental designs and empirical results were all validated. No issues were found beyond what was discussed in "Methods And Evaluation Criteria". Supplementary Material: All parts of the supplementary material were reviewed along with the main paper. Relation To Broader Scientific Literature: The key contributions of this paper relate to broader literature in mechanistic interpretability and model steering, particularly with respect to literature on sparse autoencoders, as this paper empirically shows that SAEs are limited in their capacity to steer and control models, as well as to predict the presence of concepts in latent representations. This paper is beneficial to the community in that it sets a more rigorous standard for thoroughly evaluating methods on desired downstream tasks and benchmark against existing methods. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - This paper is well-written, easy-to-understand, and thorough in its evaluation. - The results are novel and highly relevant to the community, with implications on future research on model steering and sparse autoencoders. Weaknesses: - This is not a significant weakness, but given the breadth of methods evaluated, I am curious why the authors stuck primarily to linear probes. It would have been interesting to see a comparison with nonlinear probes, where steering can be performed by increasing the activation while minimizing distance from the original representation. - The authors did not validate their LLM-as-a-judge setup. Given that this setup seemed to be highly unstable in other configurations, a small-scale validation would be beneficial. 
Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thanks for your comments! We address them point-by-point below. **We provide preliminary results by using a set of high-quality SAE labels released recently below.** > **Q1**: One concern is that the features chosen are the labels the authors found for the evaluated SAEs on Neuronpedia. However, these SAEs were labeled with an (admittedly barely) outdated auto-interpretability method. **A1**: Great point! We agree that the labels from the auto-interpretability pipeline contain noise. **To validate our pipeline, we ran preliminary experiments using a set of high-quality SAE labels released recently** [[1]](https://arxiv.org/abs/2501.08319) for both SAE and ReFT‐r1, and we observed similar performance patterns with the Gemma-2-2B model (i.e., ReFT-r1 is significantly better than SAE). Interestingly, SAE steering performance increased slightly, suggesting that better feature annotations yield improved steering results. Below are the results: | Method | 2B L10 | 2B L20 | |--------------------|-------------------|-------------------| | SAE (original) | 0.175 | 0.175 | | SAE (new) | 0.25 | 0.25 | | ReFT‐r1 (original) | 0.55 | 0.50 | | ReFT‐r1 (new) | 0.59 | 0.45 | Additionally, SAE’s performance on concept detection increased from 0.75 to 0.85 for both layers. In summary, **higher quality concept labels improve SAE’s performance without closing the gap to other supervised methods**. We will incorporate these newer findings into our next revision. Note that the original and new results are based on around 20 concepts. > **Q2**: Furthermore, the authors may want to include discussion of how the choice of layer may affect the results. **A2**: Yes, the choice of layer does seem to affect steering performance. For example, in larger 9B models, steering at earlier layers (e.g., L20 vs. L31) appears to be advantageous, which is not as evident for smaller 2B models. 
While later layers might be more effective for generating tokens, they involve fewer downstream layers, potentially reducing the expressivity of the steering vector. > **Q3**: Finally, the authors may want to elaborate on how feature sets should be generated for training supervised dictionaries without having to first train an SAE. Is there some unsupervised method to generate relevant features in an automatic manner? **A3**: Great suggestion! Yes, this would be an interesting extension study. For supervised dictionary learning methods (SDLs), obtaining labels from SAEs is not required; we use SAE labels solely to maximize SAE performance. One promising direction is to compile expert-level concept lists (e.g., reasoning-related concepts such as "backtracking") and then use SDLs to identify effective steering vectors for both concept detection and model steering.
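To make the steering discussion in this thread concrete, here is a minimal sketch of how a single learned concept direction could serve both AxBench tasks: concept detection (projecting a hidden state onto the direction) and rank-1 additive steering (shifting the hidden state along it). The function names and plain-list vectors are illustrative only, not the paper's implementation:

```python
import math

def _unit(v):
    # Normalize a concept direction to unit length.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def detect(hidden, concept):
    # Concept detection score: projection of a hidden state onto the concept direction.
    u = _unit(concept)
    return sum(h * x for h, x in zip(hidden, u))

def steer(hidden, concept, alpha):
    # Rank-1 additive steering: shift the hidden state along the concept direction.
    u = _unit(concept)
    return [h + alpha * x for h, x in zip(hidden, u)]
```

In practice the direction would be a learned dictionary column applied to a chosen layer's residual stream, and the steering factor alpha would be tuned per concept.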
Summary: - Authors introduce AxBench, a benchmark to evaluate LLM steering, i.e., capability at following instructions. In that context, authors evaluate multiple representation-based techniques, including sparse auto-encoders and linear probes, to achieve that goal. The findings show that representation techniques are still far behind simple prompting and traditional fine-tuning. In parallel, authors introduce a new steering technique, called ReFT-r1, that shows improvement upon other representation-based techniques, but still with a gap compared to prompting and fine-tuning. Claims And Evidence: - There are multiple claims made in this work. A. "There is no benchmark for making direct comparisons between" a variety of representation-based techniques. B. Representation steering, and sparse autoencoders among others, "is still far behind simple prompting and finetuning baselines". - While claim B is supported by multiple experiments throughout the paper, with experiments done on Gemma2 2B and 9B, claim A is, problematically, not supported in the paper. The Section 2 related work does not make any mention of concurrent benchmarks to evaluate model instruction following, and there is a mix of the two contributions (AxBench and ReFT-r1) throughout the overall paper. - For instance, Figure 2 is clearly mixing the two topics: the left and middle parts are about AxBench, while the right is about steering techniques. Methods And Evaluation Criteria: - Authors use Gemma-2 as the base for their experimental settings. The AxBench benchmark evaluates both Concept Detection and Model Steering. The authors provide a quick deep dive into the experimental details. Theoretical Claims: See 'Claims and Evidence' Experimental Designs Or Analyses: - One could have appreciated reporting results on other benchmarks, beyond just AxBench. Or alternatively, performing a proper human evaluation to ground AxBench. Otherwise, this makes it difficult to verify the claims in this work. 
AxBench is synthetically built and evaluated, and no point of comparison with other benchmarks is provided. Supplementary Material: - Readers appreciate the abundant material in the Appendix. Note that some could probably be removed. In particular, the historical notes would probably be a welcome addition in an introductory book or a thesis, but not in a conference paper. Relation To Broader Scientific Literature: - Surprisingly given the title of this paper, the Related Work section is only about steering techniques, and no other instruction following benchmarks are discussed or mentioned. This is problematic and makes it hard to understand this paper's contributions. See other sections of this review. Essential References Not Discussed: - The overall work focuses only on the single benchmark AxBench as introduced by the authors. One could have appreciated if the authors could link their work with other instruction following benchmarks. There are many, including the famous IFEval paper, which is surprisingly not mentioned in this work. Other Strengths And Weaknesses: - The work proposes a deep-dive analysis of the introduced AxBench, and how it helps measure model 'steering-ness'. However, one weakness of this work is that it fails to connect this in-depth analysis with other benchmarks, such as the widely adopted IFEval, and with human evaluation, i.e., AxBench is synthetically created (Concept Detection) and synthetically evaluated (Model Steering). Other Comments Or Suggestions: - The overall paper is not easy to read, with many abbreviations such as SDL and SAE. In particular, SAE is only introduced in L058, where it is merely mentioned as an unsupervised method, and it is only later in the document that one can assume that SAEs are sparse autoencoders. SDL in L058 likewise has no references attached to it, and the rest of Section 2 "Related Work" does not give more details about SDL. 
- More problematic perhaps than abbreviations, the paper seems unable to pick which contribution to highlight between AxBench and ReFT-r1. For instance, the title of the paper is about 'AxBench' but the related work section only focuses on steering techniques, lacking any references to other instruction following benchmarks and why they might fall short. Findings reported in the conclusion likewise continue mixing both concepts: on one hand ReFT-r1 is not as good as prompting, but on the other hand "No matter the outcome, (...) evaluation benchmarks like AxBench are necessary (...).". It is not clear what the value proposition is here. A clear articulation with the SOTA seems to be needed. Either this paper is about ReFT-r1, and then one could hope for competitive results compared to other methods, or it is about AxBench (as pointed to by the title), and then one could hope for a clear articulation with other Instruction Following Benchmarks. - The paper dives quickly into AxBench, and fails to connect it to other benchmarks or human validation. One could have appreciated a better articulation with the rest of the literature. - The legend of Figure 2 is a bit confusing. It mentions "(a)", "(b)" and "(c)", but those elements are not reported in the actual figure. Besides, it mixes AxBench (the actual benchmark) with explanations about how two families of steering techniques work. Questions For Authors: Dear authors, thank you for your work: - Could you please describe what the main significant contribution of this work is? It seems two contributions are mixed: AxBench (from the title) and ReFT-r1 (from the abstract). However, I found it difficult to follow the articulation with the rest of the literature. The Section 2 "Related Work" is only about other steering techniques, and ignores all the literature about Instruction Following Benchmarks. 
On the other hand, the title of the work starts with the token "AxBench", which leads the reader to expect a benchmark paper (like IFEval, MIABench, M-IFEval, etc.). Could you please clearly position this work? Is this a work that contributes a new benchmark, in which case Section 2 should probably be rewritten, or is it a work about ReFT-r1, in which case the title is probably not adequate? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your comments! We address them below. **We articulate our goal and how AxBench differs from IFEval-like benchmarks, and supply preliminary human evaluations.** > **Q1**: The Section 2 related work does not mention any concurrent benchmarks to evaluate model instruction following (e.g., IFEval). The paper dives quickly into AxBench and fails to connect it to other benchmarks. **A1:** We want to clarify the purpose of creating AxBench. First, **AxBench is not an instruction following benchmark for LLMs.** IFEval is intended to compare the instruction-following abilities of LLMs using *a small set of easily verifiable metrics*. AxBench is *an open-vocabulary evaluation benchmark* for concept-based steering and detection, which compares the efficacy of different steering/detection methods on a single LLM, and is not verifiable with simple rules. **Our concepts can be highly abstract** and are described using natural language descriptions (i.e., this is not comparable to asking LLMs to respond with X tokens or include keywords Y, as in IFEval). **Unlike IFEval, a rule-based evaluation metric is impossible for such open-ended concepts.** Moreover, **our concepts are not artificially created, and are grounded with interpretability methods**. They emerge in LLM's representations and are found in an unsupervised manner with SAE [[1]](https://arxiv.org/abs/2408.05147) [[2]](https://arxiv.org/abs/2410.13928). Another benefit of doing so is having access to thousands of such concepts, so that we can evaluate steering methods at scale. We will clarify the distinction between steering and instruction following in our next revision. > **Q2**: Performing a proper human evaluation to ground AxBench. **A2**: Great suggestion! We conducted a preliminary human evaluation by recruiting 5 participants, each of whom rated 30 steering generations from different methods (e.g., Prompt, ReFT‑r1, SAE) using the same scoring system as our LLM judges. 
We then computed two types of Pearson correlation coefficients to assess agreement: - **Human-Human Agreement:** For each pair of human participants, we computed the Pearson correlation between their ratings, applied Fisher’s Z transformation, averaged them in the transformed space, and converted the result back to a Pearson correlation coefficient, yielding a score of **0.57**. - **LLM-Human Agreement:** We also computed the Pearson correlation between the LLM’s ratings and each human’s ratings, applied Fisher’s Z transformation, and averaged these values across the 5 participants, yielding a score of **0.58**. These measures indicate that the LLM behaves much like just another human participant in terms of rating consistency. Given that steering is a highly subjective task about which individuals frequently disagree, these strike us as high correlation numbers overall. Moreover, prior work on similar annotation tasks [[3]](https://arxiv.org/abs/2406.06369)[[4]](https://arxiv.org/abs/2204.00447) has reported correlations of at most 0.6, further validating our findings. We have uploaded our anonymized human annotation interface [here](https://tinyurl.com/2xyd44d). > **Q3**: Some Appendix material could probably be removed. The overall paper is not easy to read. The legend of Figure 2 is a bit confusing. **A3**: We'll clarify our notations early and update Figure 2 and the Appendix in our next revision. > **Q4**: The paper seems unable to pick which contribution to highlight between AxBench and ReFT‐r1. **A4**: **Our goal is to evaluate whether existing representation-based steering methods are competitive in terms of concept detection and model steering compared to finetuning or prompting methods.** Most of our approaches produce rank-1 steering vectors, which is the simplest steering setting designed to minimize inference-time overhead while rivaling prompting in practical scenarios. 
To accomplish this goal, our contributions are: (1) **AxBench**, which, to the best of our knowledge, is the first benchmark for evaluating representation-based steering methods at scale; and (2) **ReFT‐r1**, a representation-based method derived from ReFT [[5]](https://arxiv.org/abs/2404.03592) that rivals finetuning or prompting methods. As shown in our Figure 1, without ReFT‐r1, representation-based methods significantly lag behind, making it nearly hopeless for them to catch up. **We believe that our two contributions work organically together** to provide a solid foundation for evaluating methods and paving the way forward for representation-based approaches. Our work shows that while SAEs were once seen as the only scalable method for training steering vectors, ReFT‐r1 disproves this for concept detection and model steering. AxBench is essential to evaluate interpretability methods as they scale against finetuning and prompting. Although ReFT‐r1 still lags behind, we think it provides important insights into how to train better steering vectors that can be useful in practice.
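The correlation-aggregation procedure described in A2 (pairwise Pearson correlations averaged in Fisher's Z space, then transformed back) can be sketched as follows. This is a generic reconstruction of the stated procedure, not the authors' actual analysis script:

```python
import math
from itertools import combinations
from statistics import mean

def pearson(xs, ys):
    # Pearson correlation coefficient between two rating lists.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def fisher_avg(corrs):
    # Average correlations in Fisher's Z space (atanh), then transform back (tanh).
    return math.tanh(mean(math.atanh(r) for r in corrs))

def inter_annotator_agreement(ratings):
    # ratings: one rating list per annotator, all over the same items.
    pairwise = [pearson(a, b) for a, b in combinations(ratings, 2)]
    return fisher_avg(pairwise)
```

The same `fisher_avg` step applies to LLM-human agreement: correlate the judge's ratings with each annotator's, then average the resulting coefficients in Z space.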
LLM Alignment as Retriever Optimization: An Information Retrieval Perspective
Accept (poster)
Summary: This paper establishes the connections between the formulations of LLM alignment and information retrieval (IR). Inspired by this discovery, it introduces various practices in information retrieval into LLM alignment, including hard negative mining and ranking loss functions. Empirical studies demonstrate the effectiveness of the proposed approach. ## update after rebuttal The additional empirical results and conceptual explanations by the authors have mostly addressed my earlier concerns. Claims And Evidence: The claims are mostly supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem or application at hand. Theoretical Claims: I checked the proofs in Appendix E and did not find issues. Experimental Designs Or Analyses: I checked the experimental designs and analyses. Possible issues include: **I1.** The authors only compared the baseline approaches with the weaker reward model, and it is unclear how the baseline methods perform with the stronger reward model. **I2.** The experiments are restricted to small models with 2B or 7B parameters. **I3.** SimPO also studies two Llama-3 models in addition. Supplementary Material: I reviewed the section E in Appendix. Relation To Broader Scientific Literature: To the best of my knowledge, - The IR perspective is novel. - The paper considers three IR-inspired objectives, including contrastive ranking and two list-wise ranking objectives (LambdaRank and ListMLE). [1] has previously explored using learning-to-rank objectives. The differences between the objectives employed by [1] and this paper are not sufficiently discussed. - The empirical studies follow the practice (setting, benchmarks, baselines) of existing efforts [2]. - The mining of hard negatives in iterative alignment is novel. [1] Liu et al. LiPO: Listwise Preference Optimization through Learning-to-Rank. [2] Meng et al. 
SimPO: Simple Preference Optimization with a Reference-Free Reward. Essential References Not Discussed: N.A. Other Strengths And Weaknesses: ### Other Strengths **S1.** The paper is well-written and easy to follow overall. ### Other Weaknesses **W1.** Notations in section 2 are not self-contained. Other Comments Or Suggestions: N.A. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback and address the points below: - **Baselines with Strong Reward Models.** We include iterative DPO with a strong reward model for fair comparison: | Model | ApEval 2 (LC WR) | ApEval 2 (WR) | MixEval | MixEval-Hard | |--|--|--|--|--| | **Mistral-Base (7B)** | | Iterative DPO | 32.85 | 35.2 | 0.6825 | 0.3835 | | LarPO (Contrastive) | 41.5 | 42.9 | 0.718 | 0.417 | | LarPO (LambdaRank) | 35.8 | 34.1 | 0.717 | 0.431 | | LarPO (ListMLE) | 36.6 | 37.8 | 0.73 | 0.423 | | **Mistral-Instruct (7B)** | | Iterative DPO | 38.95 | 47.89 | 0.698 | 0.3965 | | LarPO (Contrastive) | 43 | 53.8 | 0.718 | 0.425 | | LarPO (LambdaRank) | 41.9 | 48.1 | 0.74 | 0.44 | | LarPO (ListMLE) | 39.6 | 48.1 | 0.717 | 0.397 | LarPO consistently outperforms iterative DPO under the strong reward model, demonstrating the strength of our proposed frameworks. - **Model Size Limitation.** Training larger models is resource-intensive and cannot be easily handled by our current computational devices. Our current results on 2B and 7B models show LarPO’s consistent effectiveness across these model sizes and families. We leave scaling to larger models for future work. - **LLaMA-3 Results.** We include LLaMA-3-8B model results below. 
LarPO achieves the best performance across metrics, outperforming all baselines including SimPO: | Model | ApEval 2 (LC WR) | ApEval 2 (WR) | Arena-Hard | MTBench | MixEval | MixEval-Hard | |--|--|--|--|--|--|--| | SFT | 26.0 | 25.3 | 22.3 | 8.1 | 0.742 | 0.4005 | | RRHF | 31.3 | 28.4 | 26.5 | 7.9 | 0.743 | 0.4125 | | SLiC-HF | 26.9 | 27.5 | 26.2 | 8.1 | 0.752 | 0.4515 | | DPO | 40.3 | 37.9 | 32.6 | 8.0 | 0.7715 | 0.4675 | | IPO | 35.6 | 35.6 | 30.5 | 8.3 | 0.756 | 0.452 | | CPO | 28.9 | 32.2 | 28.8 | 8.0 | 0.7665 | 0.4225 | | KTO | 33.1 | 31.8 | 26.4 | 8.2 | 0.7645 | 0.461 | | ORPO | 28.5 | 27.4 | 25.8 | 8.0 | 0.7545 | 0.445 | | RDPO | 41.1 | 37.8 | 33.1 | 8.0 | 0.779 | 0.4645 | | SimPO | 44.7 | 40.5 | 33.8 | 8.0 | 0.732 | 0.4185 | | LarPO (Contrastive)| **47.72** | **49.81** | 35.2 | 8.3 | **0.7795** | 0.4555 | | LarPO (LambdaRank) | 46.2 | 49.07 | **36.5** | **8.4** | 0.779 | **0.4785** | | LarPO (ListMLE) | 44.67 | 47.51 | 35.6 | 8.3 | 0.7705 | 0.456 | - **Comparison to LiPO.** LiPO proposes a LambdaRank-based listwise objective, which can be viewed as a special case of LarPO. However, their objective designs are based on learning-to-rank heuristics without a grounded theoretical basis. In contrast, LarPO is supported by theoretical foundations and generalizes to multiple ranking objectives beyond LambdaRank. We will add this distinction in the revised manuscript. - **W1. Notations in section 2 are not self-contained.** We acknowledge this and will revise to ensure all notations are self-contained and clearly defined. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response, which has mostly addressed my concerns. I do not have further questions. --- Reply to Comment 1.1.1: Comment: Dear Reviewer ToBQ, Thank you once again for your thoughtful feedback, which has been invaluable in helping us improve our paper. We’re glad to hear that our response has addressed most of your concerns. Wishing you all the best.
Summary: This paper views LLM alignment as a retriever optimization problem, presenting a systematic framework that connects LLM alignment with information retrieval (IR) methodologies. The paper maps LLM generation and reward models to the retriever-reranker paradigm in IR. Based on three key IR principles—retriever optimization objectives, hard negative mining, and candidate list construction—the authors propose a novel alignment method, LLM Alignment as Retriever Preference Optimization (LARPO). Experimental results on AlpacaEval2 and MixEval-Hard validate the effectiveness of the proposed LARPO approach. ## update after rebuttal Thanks for the author's rebuttal and additional experiments. I appreciate the authors' conceptualization of LLM tuning as retriever optimization, where SFT is treated as direct retriever optimization and preference optimization is framed as reranker-retriever distillation. While this perspective is intuitive, the paper lacks a clear and convincing explanation of why IR principles are effective for LLM alignment. Unfortunately, this key concern was not addressed in the rebuttal. **I hope the authors can consider discussing this aspect more thoroughly in future revisions. I will raise my score to a 3, but I still retain this concern.** Claims And Evidence: The authors conceptualize LLM tuning as retriever optimization, where supervised fine-tuning (SFT) is treated as direct retriever optimization and preference optimization is framed as reranker-retriever distillation. While this perspective is intuitive, the paper lacks a clear explanation of why IR principles are effective for LLM alignment. Methods And Evaluation Criteria: 1. The authors adopt most of their offline experimental results directly from SimPO but use only the AlpacaEval2 dataset, replacing Arena-Hard and MT-Bench with MixEval. It is unclear why this dataset substitution was made. 
Given the reliance on SimPO's experimental setup, the authors should clarify why they did not use the same datasets entirely. 2. The paper lacks an analysis of the computational complexity associated with different optimization objectives. Theoretical Claims: The theoretical section appears to be correct. Experimental Designs Or Analyses: Some experimental comparisons seem unfair. I have the following concerns: **Lack of Ablation Studies**: While framing LLM alignment as retriever optimization is a valuable perspective, the proposed method combines elements of optimization objectives, hard negative mining, and candidate list construction. Each of these components has been explored in prior work, but the paper does not provide comparisons. + For listwise preference optimization, comparisons should be made against existing methods such as LiPO [1], DRPO [2], and MPPO [3]. + The baseline methods in Table 2 use different training datasets than LARPO. To fairly assess the effectiveness of the proposed preference optimization objective, Table 3 should include comparisons against other DPO-based baselines under the same data conditions. The current results do not convincingly demonstrate that the optimization objective itself is effective, as improvements may stem from external reward model filtering rather than the optimization technique. + For hard negatives mining, comparisons should be made with existing preference pair quality assessment techniques, such as explicit reward margin [4] and implicit reward margin [5]. This would provide stronger evidence for the effectiveness of the proposed hard negatives mining approach. **Temperature Coefficient in Figure 4(b)**: The experiment starts at a temperature of 0.8. It is unclear why lower values (0–0.8) were omitted. A complete trend analysis is necessary to understand the behavior across the full range. 
[1] LiPO: Listwise Preference Optimization through Learning-to-Rank [2] Optimizing Preference Alignment with Differentiable NDCG Ranking [3] MPPO: Multi Pair-wise Preference Optimization for LLMs with Arbitrary Negative Samples [4] Reward difference optimization for sample reweighting in offline rlhf [5] Not all preference pairs are created equal: A recipe for annotation-efficient iterative preference learning Supplementary Material: I reviewed all appendices, from Appendix A to Appendix H. Relation To Broader Scientific Literature: The authors establish a conceptual link between LLM alignment and retriever optimization, leading to the proposal of the LARPO method. Essential References Not Discussed: Please see above in Experimental Designs Or Analyses about the following papers: [1] LiPO: Listwise Preference Optimization through Learning-to-Rank [2] Optimizing Preference Alignment with Differentiable NDCG Ranking [3] MPPO: Multi Pair-wise Preference Optimization for LLMs with Arbitrary Negative Samples [4] Reward difference optimization for sample reweighting in offline rlhf [5] Not all preference pairs are created equal: A recipe for annotation-efficient iterative preference learning Other Strengths And Weaknesses: 1. The perspective introduced in this paper is valuable and contributes to a conceptual understanding of LLM alignment through the lens of retriever optimization. However, the proposed method primarily integrates existing approaches in optimization objectives, hard negative mining, and candidate list construction, serving more as an explanatory framework than introducing a fundamentally new method. This raises concerns about the validity of the claimed LARPO method, particularly due to the lack of thorough ablation studies to demonstrate its effectiveness. Consequently, while I believe this is a valuable perspective paper, I question whether it is suitable for a main conference paper. 2. 
The authors claim that the baseline checkpoints are from SimPO. Could you provide the corresponding checkpoint links? Additionally, will the authors open-source the code, experimental data, and checkpoints used in this paper, which would enhance reproducibility? Other Comments Or Suggestions: Please see above. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your insightful feedback and believe it has significantly strengthened our manuscript. We have carefully addressed each of your comments as detailed below: - **Use of SimPO Datasets.** We exclude MTBench and Arena-Hard because (1) SimPO shows minimal differences among methods on MTBench, and (2) Arena-Hard evaluation is costly ($30+ per run). In this rebuttal, we include both below using LLaMA-3 (8B) and Mistral (7B): Llama-3-8B models |Model|Arena-Hard|MTBench| |-|-|-| |RRHF|26.5|7.9| |SLiC-HF|26.2|8.1| |DPO|32.6|8.0| |IPO|30.5|8.3| |CPO|28.8|8.0| |KTO|26.4|8.2| |RDPO|33.1|8.0| |SimPO|33.8|8.0| |LarPO (Contrastive)|35.2|8.3| |LarPO (LambdaRank)|**36.5**|**8.4**| |LarPO (ListMLE)|35.6|8.3| Mistral-7B models |Model|Arena-Hard|MTBench| |-|-|-| |RRHF|5.8|6.7| |SLiC-HF|7.3|**7.4**| |DPO|10.4|7.3| |IPO|7.5|7.2| |CPO|6.9|6.8| |KTO|5.6|7.0| |RDPO|8.0|**7.4**| |SimPO|16.6|7.3| |LarPO (Contrastive)|15.4|7.2| |LarPO (LambdaRank)|**19.7**|6.9| |LarPO (ListMLE)|14.2|7.3| These results show LarPO performs comparably to or better than competitive baselines on these benchmarks. We will include these results in the revised manuscript. - **Computational Complexity.** (1) Pairwise Ranking: Computes preference between a single pair of responses per prompt, with complexity O(1). (2) Contrastive Ranking: Involves a softmax over a candidate list of size k, with complexity O(k). In practice, good performance is achieved with a small k (e.g., with k = 4, contrastive ranking achieves a better AlpacaEval 2 win rate than pairwise ranking), making it effective. (3) LambdaRank: Based on pairwise comparisons with position-aware weighting. Its worst-case complexity is O(k²) but can be reduced to O(k) via subsampling. (4) ListMLE: Computes the likelihood of a full ranking using sequential softmax, with O(k²) complexity. This can also be lowered to O(k) using subsampling strategies. 
- **Comparison to LiPO[1], DRPO[2], MPPO[3], and reward margin[4][5] papers.** We would like to first emphasize that our goal is to connect IR and alignment, not to propose new listwise or hard negative methods. However, we are happy to compare our method to the five mentioned papers and add discussion accordingly in our revision. (1) LiPO [1] formulates LLM alignment as listwise learning to rank and proposes a LambdaRank-based solution. LiPO is a special case of LarPO with LambdaRank. However, we provide extensive theoretical background and demonstrate that many other ranking assumptions can be adopted in LarPO in addition to LambdaRank. (2) We compare LarPO with [2], [4], and [5] in the table below, where LarPO consistently outperforms all methods. Specifically: [2] performs well on the HH dataset (as shown in the original paper) but shows limited effectiveness on Alpaca Eval 2 and MixEval. [4] introduces a reward margin coefficient, which can misguide the LLM if the reward scores lack sufficient granularity. [5] relies on optimizing Eq. 6, but we find its training to be unstable and less robust in our experiments. (3) MPPO [3] is concurrent (posted one month before the ICML deadline), and comparison is left to future work. |Model|Ap2 (LC WR)|Ap2 (WR)|MixEval|MixEval-Hard| |-|-|-|-|-| |[2]|9.00|5.11|0.6035|0.2865| |[4]|13.48|10.45|0.6785|0.3395| |[5]|9.34|6.21|0.6360|0.3285| |LarPO (Contrastive)|41.50|42.90|0.7180|0.4170| |LarPO (LambdaRank)|35.80|34.10|0.7170|0.4310| |LarPO (ListMLE)|36.60|37.80|0.7300|0.4230| - **Table 2 Dataset Consistency.** All baselines in Table 2 use the same UltraFeedback prompt set and PairRM reward model if needed (as in SimPO); thus, the comparison is fair. To further isolate objective effectiveness, we have added results with advanced reward models (i.e., FsfairX) under the same setup, shown in response 4 to Reviewer r2Em.
- **Temperature Study (Fig 4b).** We previously started from T=0.8 due to concerns that lower temperatures might reduce response diversity, potentially leading to lower-quality preference data. In this rebuttal, we add these results, as shown in the table below: |Temperature|Ap2 (LC Winrate)| Ap2 (Winrate)| |-|-|-| |0.2|55.47|62.74| |0.4|53.71|62.09| |0.6|55.45|62.30| Surprisingly, low temperatures yield strong performance—possibly because the lower the temperature is, the harder the negatives are—suggesting in-depth temperature analysis as an interesting direction for future investigation. We will add this discussion to the revised manuscript. - **Ablation studies.** Section 6 includes ablations on objectives, hard negatives, and candidate construction. Our work is the first to jointly explore all these components in connecting retrieval and LLM alignment, which we believe is a sufficient contribution for the main conference. - **Reproducibility.** SimPO paper checkpoints: https://huggingface.co/collections/princeton-nlp/simpo-66500741a5a066eb7d445889. We commit to releasing all code, data, and checkpoints upon acceptance.
Summary: This paper demonstrates that concepts from information retrieval can be ported over to shed light on numerous aspects of language model alignment tuning, including both RLHF-type functions and the data generation methods that feed those objectives. From a more technical perspective, this paper demonstrates empirically that replacing the Bradley-Terry pairwise preferences model used in the original DPO paper with listwise preference models like LambdaRank can result in improved alignment. # Update after rebuttal I appreciate the authors' thorough reply to my questions/comments, and maintain my recommendation that the paper be accepted. Claims And Evidence: Largely yes. The theoretical discussion (mapping ideas from information retrieval onto LLM alignment, both conceptually and mathematically) is sound, and the experiments are for the most part appropriately designed to measure natural axes of variation motivated by the theoretical discussion. I have the following specific comments/concerns: Fig 2/Section 2.4: I appreciate the analogy. However: - I wonder if the experiment could have been done on a single data set (of course 2 different models is unavoidable). E.g. make all answers in the dataset the corpus the retriever is retrieving from. - Both Recall@N and Pass@N are metrics that obviously monotonically increase with N. So this experiment provides limited support for the analogy between retrieval and LLM inference. - "Greedy decoding, equivalent to N = 1, is a prevalent LLM inference strategy. However, as shown in Figure 2(b), Pass@1 is often suboptimal". I don't agree with the word choice "suboptimal" since in many real life scenarios you only get to attempt to answer once. And the Pass@N metric just checks whether one of your N attempts was correct, and includes no mechanism for picking which of the N samples to submit as final answer.
**Table 2**: it's a nice sanity check that using a better reward model improves the performance of LARPO, however, the absence of comparison against any baselines (in particular DPO) using the better reward model makes the bottom of this table difficult to interpret. **Section 6.1**: the observations that list MLE and lambda rank improve over pairwise and contrastive objectives do not seem to transfer to table 2. At least some comment on this would be warranted. **Figure 4**: could you please comment on why GSM, Mathstral and iterative DPO are used in figure 4a (as they differ from the models/datasets/methods appearing elsewhere in the section)? I do not concur with the analysis of 4b. First, it compares temperature with win rate, not with example hardness. While it is plausible that example hardness is a latent factor and that the impact of temperature and win rate occurs *via* hardness, the experiment and figure simply don't show this. Moreover, when I see this figure I reach the reasonable conclusion that "oh, the simple choice of temperature 1 works best." The analysis "within a specific range, lower temperatures generate harder negatives" somehow presupposes that the range under consideration is [1,1.2], even though that's not even the entire $x$ axis of the figure. Methods And Evaluation Criteria: Yes, although as mentioned in the claims and evidence section of this review, I have some questions around consistency in models, datasets and optimization algorithms across the various experiments of the paper. Note: I'm not asking for more experiments during the rebuttal period. Further clarification in the paper around choice of model and data set and optimization algorithm where they differ from other experiments in the paper would be adequate. Theoretical Claims: No, I did not closely read the appendix. 
However, having read the DPO paper and the first few sections of this one, I readily believe the results linking ranking objective functions to their alignment-objective analogues. Experimental Designs Or Analyses: See my comments on claims and evidence. Supplementary Material: Not beyond skimming. Relation To Broader Scientific Literature: The contributions are highly relevant to language model alignment (potentially also to language model tuning more broadly, i.e. for purposes other than being pleasant to chat with). It's possible they also have impact in applications of language models for information retrieval (by showing LLM alignment research has overlap with LLM ranking research). Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: None beyond those discussed above. Other Comments Or Suggestions: Equations 1-3: I get what they're trying to say, but it appears odd that the document $d$ does not appear on the right hand side of the set notation. May I suggest either of a) a pseudo-code type `sorted(D, r(q,-))[:k]` or b) briefly defining an $\text{argmax}_k(r(q,\cdot), D)$ that returns the function inputs $d\in D$ resulting in top k values of $r(q,d)$ and then using that. >This analogy is further supported by the LLMs’ architecture. As illustrated in Figure 1, the generative modeling with LLMs can be interpreted as the matching process of a bi-encoder model ... This analogy seems fair for the first generated token, but not for subsequent tokens, where the last layer representation being decoded depends on both the query and previously generated tokens of $y$. In the equation on L131-136, is there an implicit assumption that the embedding and the embedding matrices of the LLM are transposes of each other? L154: since $r$ is already being used to denote re-ranker, it would be best to use something other than that for the "rule". For that matter $f_{\text{rerank}}$ could maybe just be $r$.
L255: it's not obvious to me as a reader how figure 2b supports this statement. L301: "More details of how the temperature " typo. Questions For Authors: Have the ranking objectives (contrastive, list MLE, lambda rank) considered in this paper, been used for the direct purpose of training LLM based re-rankers (I saw RankT5 in the related work, but it doesn't cover all of the ranking objectives in this paper and the underlying models are somewhat different)? If so, is there a correlation between the following? - the performance observed using a ranking objective for LLM-based reranker training - the performance observed using a ranking objective for alignment tuning Code Of Conduct: Affirmed. Overall Recommendation: 4
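For concreteness, the reviewer's suggested pseudo-code reading of Eqs. 1-3 (option a, retrieving the top-k documents under a relevance scorer) corresponds to a one-liner like the following. Here `score` stands in for the relevance function $r(q,\cdot)$ with the query fixed; this is only an illustration of the suggested notation, not code from the paper.

```python
def topk(D, score, k):
    # top-k documents of corpus D by relevance score, descending;
    # Python's stable sort keeps corpus order among ties
    return sorted(D, key=score, reverse=True)[:k]
```

For example, `topk(documents, lambda d: relevance[d], 5)` would return the five most relevant documents for the fixed query.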
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for the constructive feedback, which has strengthened our work. Please find our point-by-point responses below: - **Fig 2: Single dataset?** Thank you for the suggestion. We plan to incorporate this in the revised manuscript (e.g., using NQ for both 2(a) and 2(b)). However, note that Figure 2 is intended to highlight the analogy between retriever and LLM scaling behaviors, not to compare them directly, so using different datasets seems acceptable. We adopt NQ and GSM8K since NQ is a standard dataset for retrieval; GSM8K captures LLM reasoning capabilities. - **Fig 2: Metrics are monotonic.** We agree that Recall@N and Pass@N obviously increase with N. Our goal was to visualize similar trends in scaling behavior rather than make a strong claim. We will tone down the language in the revised manuscript. - **Fig 2: "Suboptimal" word choice.** We agree and will revise to replace "suboptimal." - **Table 2: DPO baseline with strong RM.** Thank you. We have now included iterative DPO with a strong reward model: | Model | ApEval 2 (LC WR) | ApEval 2 (WR) | MixEval | MixEval-Hard | |--|--|--|--|--| | **Mistral-Base (7B)** | | Iterative DPO | 32.85 | 35.2 | 0.6825 | 0.3835 | | LarPO (Contrastive) | 41.5 | 42.9 | 0.718 | 0.417 | | LarPO (LambdaRank) | 35.8 | 34.1 | 0.717 | 0.431 | | LarPO (ListMLE) | 36.6 | 37.8 | 0.73 | 0.423 | | **Mistral-Instruct (7B)** | | Iterative DPO | 38.95 | 47.89 | 0.698 | 0.3965 | | LarPO (Contrastive) | 43 | 53.8 | 0.718 | 0.425 | | LarPO (LambdaRank) | 41.9 | 48.1 | 0.74 | 0.44 | | LarPO (ListMLE) | 39.6 | 48.1 | 0.717 | 0.397 | LarPO consistently outperforms iterative DPO, validating its effectiveness with the strong reward model as well. - **Sec 6.1 vs. Table 2.** Section 6.1 isolates the effect of objectives under a fixed setup (e.g., hard negatives, candidate list construction) [1]. Table 2 reflects the joint effects of all factors. We’ll clarify this distinction in the revision. 
- **Fig 4a: Why GSM/Mathstral/iterative DPO?** We use GSM8K and Mathstral because such math problems have ground-truth labels, enabling clear negative type classification. Iterative DPO ensures a fixed candidate list to isolate the impact of negative difficulty. - **Fig 4b: Temperature vs. win rate.** We adjust the temperature to induce response hardness but agree other latent factors may play a role. We'll revise the explanation to focus on the importance of choosing an appropriate temperature rather than strong claims (“lower temperatures generate harder negatives”). - **Eq. 1–3: Missing document in set notation.** Thank you - we’ll update the notation accordingly with your suggestion (b). - **Fig 1: Analogy only partial.** Agreed. We’ll revise to clarify this is a similarity, not an identity. For later tokens, the prompt + decoded tokens act like a dynamic query in bi-encoder retrieval—but they differ fundamentally. - **L131–136: Embedding assumption.** Yes, we assume vocab embeddings and LLM hidden states share the same dimension, as in most LLMs. We’ll clarify this in the revision. - **r used for both re-ranker and rule.** Thank you - we’ll change the "rule" notation from r() to c() for clarity. - **L255: Fig 2b does not support the statement.** Thank you for the comment. The intended intuition is that lower temperatures yield more similar generated responses, increasing overlap between positive and negative samples. This effectively makes the negatives harder. We will revise the explanation to clarify this point in the revision. - **Correlation: reranker vs. alignment.** Great point. We believe a correlation may exist, but detailed exploration is beyond this paper’s scope and is an exciting direction for future work. [1] RLHF Workflow: From Reward Modeling to Online RLHF. TMLR 2024.
Summary: The paper established a connection between LLM alignment and IR, particularly the retriever-reranker framework. With such a connection, the paper applied multiple techniques used in IR for LLM alignment, specifically, (1) IR objectives; (2) use of hard negatives (from a reasonably good model); (3) candidate list construction. Under these techniques, the paper proposed LARPO, which aims to iteratively optimize an LLM in a direct preference optimization way, so that it can learn to rank its generated responses according to a given reward model. The paper is backed by extensive experiments against different baselines. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: n/a Experimental Designs Or Analyses: Yes, no issues observed. Supplementary Material: Most of them. Relation To Broader Scientific Literature: The paper discussed the connection between IR and LLM, which is novel. Essential References Not Discussed: No Other Strengths And Weaknesses: The paper is well motivated and has a great presentation to connect IR and LLM alignment, which is novel. Such a connection was not only motivated conceptually, but also inspired new algorithms for LLMs equipped with existing techniques from IR. However, as many techniques are applied for the optimization, it would be very interesting to see some ablation studies, say: 1. Without hard negatives, how's the performance? 2. Without inclusiveness/memorization, how's the performance? 3. How does the number of generated responses affect the performance? 4. Compared with traditional methods, how would the introduction of the candidate list affect the total training time? Other Comments Or Suggestions: Algorithm 1 line 13, $\mathcal D_s$ should be $\mathcal D_i$ Questions For Authors: See Other Strengths And Weaknesses Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your insightful feedback, which has significantly strengthened our manuscript. We address each comment below. - **Without hard negatives, how's the performance?** Thank you for the question. As shown in Section 6.2 and Figure 4(a), removing hard negatives (i.e., using only easy/easiest ones) leads to significantly worse performance. This highlights the importance of hard negatives in enhancing LLM capabilities. - **Without inclusiveness/memorization, how's the performance?** Thank you for the question. We have conducted a detailed study of inclusiveness and memorization in Figure 4(c) and Table 4. From Figure 4(c), an increase in the candidate list size can contribute to improved performance (inclusiveness). From Table 4, incorporating previous iteration responses can further enhance the LLM’s capability (memorization). - **How does the number of generated responses affect the performance?** The study of the number of generated responses can be found in Figure 4(c) and also shown below. We find that increasing the number of responses improves the win rate, indicating that our method can utilize and benefit from more diverse candidates: | Length | LC Winrate (%) | Winrate (%) | |--|--|--| | 2 | 49.75 | 55.07 | | 4 | 50.02 | 61.72 | | 6 | 52.56 | 63.59 | | 8 | 55.21 | 64.88 | | 10 | 55.52 | 64.42 | - **How does the candidate list affect training time?** We evaluate training time per iteration of LarPO (contrastive variant) with varying candidate list sizes: | Length | time (min) | Winrate (%) | |--|--|--| | 2 | 25 | 55.07 | | 4 | 45 | 61.72 | | 6 | 65 | 63.59 | We observe that larger candidate lists incur linearly higher costs but yield substantial performance gains, allowing users to balance the trade-off between efficiency and effectiveness. - **Algorithm 1 line 13.** Thank you for pointing this out. The operation merges all responses from previous iterations. We will revise accordingly.
WeGeFT: Weight‑Generative Fine‑Tuning for Multi‑Faceted Efficient Adaptation of Large Models
Accept (poster)
Summary: The paper proposes Weight-Aware Fine-Tuning (WAFT) to generate fine-tuning weights directly from the pretrained weights. Experiments show promising results on three datasets based on the LLaMA series of models. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes. Table 1 shows the results of fine-tuning Llama 2 (7B) on the MetaMathQA dataset and evaluating it on the GSM8k test set, while Table 2 presents the results of fine-tuning Llama 1 and 2 (7B) on the Math10k benchmark. In addition, Table 3 gives the results of fine-tuning LLaMA-1 7B, Llama 2 7B and Llama 3 8B on Commonsense170k. The rationale behind the choice of foundation models and benchmarks is not clear. Supplementary Material: Yes, all. Relation To Broader Scientific Literature: PEFT methods are meaningful for various applications based on foundation models. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. Weight-Aware Fine-Tuning Framework is novel. 2. Experimental results are presented clearly. Weaknesses: 1. The experimental design, especially the choice of foundation models and datasets, is not clearly justified. 2. The claimed scalable performance is not validated by experiments. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer Zn72, Thank you for your feedback and efforts in reviewing our submission. We address your concerns as follow, which will be carefully incorporated in the revision. > **Experimental designs, especially for using what foundation models and on what datasets, are not logically clear** We follow prior works [1, 2, 3] in selecting foundation models and datasets, all of which are standard in the PEFT literature. We appreciate any suggestions on how to further clarify our choices. > **The claimed scalable performance is not validated by experiments** We have evaluated WAFT at scales of 7–8B parameters on large fine-tuning datasets for arithmetic reasoning, code completion, commonsense reasoning, and instruction following, all of which are tasks of practical interest. Based on these results, we believe our experiments demonstrate scalability. However, we welcome further insights on how to strengthen this validation. **References** [1] Zhiqiang Hu, Lei Wang, Yihuai Lan, Wanyu Xu, Ee-Peng Lim, Lidong Bing, Xing Xu, Soujanya Poria, Roy Ka-Wei Lee: LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models. EMNLP 2023: 5254-5276 [2] Zhengxuan Wu, Aryaman Arora, Zheng Wang, Atticus Geiger, Dan Jurafsky, Christopher D. Manning, Christopher Potts: ReFT: Representation Finetuning for Language Models. NeurIPS 2024 [3] Shaowen Wang, Linxi Yu, Jian Li: LoRA-GA: Low-Rank Adaptation with Gradient Approximation. NeurIPS 2024
Summary: The authors: 1. Identify a current limitation of existing PEFT methods -- the replicated structures compromise time and memory efficiency. 2. Propose a novel approach that shares PEFT parameters across layers. 3. Evaluate the proposed method on different testbeds, including image classification, reasoning and generation tasks. 4. Perform an ablation study on the effect of different latent operators g, ultimately finding that the identity operator is sufficient. Claims And Evidence: All of the authors’ claims are generally convincing. Methods And Evaluation Criteria: The evaluation benchmark is comprehensive, providing convincing evidence for the performance of the proposed method. Theoretical Claims: N/A Experimental Designs Or Analyses: The experiments cover a wide range of tasks for foundation models, offering a comprehensive and sufficiently thorough evaluation. Supplementary Material: The main paper is clearly self-contained and well-explained, no need to consult the appendix for additional details. Except for hyperparameter details. Relation To Broader Scientific Literature: The idea of sharing weights between PEFT layers is enlightening. I believe this methodology offers valuable insights for future PEFT research, specifically, informing the design of more effective and efficient architectures. Essential References Not Discussed: See section "Questions For Authors". Other Strengths And Weaknesses: Strengths: A nice work overall. 1. The paper is well-written and well-organized. 2. The motivation is clear and the proposed method is effective and easy to understand. 3. The experiment design is comprehensive and convincing. Weaknesses: See section "Questions For Authors". Other Comments Or Suggestions: N/A Questions For Authors: 1. The concept of sharing information across PEFT layers appears very similar to the approach proposed in [1]. I'm kind of surprised the authors cite that work without comparing against it.
I would be very interested in seeing a detailed comparison between these two methods, particularly how any difference in performance can be explained. 2. I'm interested in the performance of the identity transformation studied in section 5.1. The authors seem to present these results without much explanation. Do the authors have any hint on its superior performance versus other operators? I don't mind raising my score if my questions (especially 1) are well-addressed. [1] Jie, S. and Deng, Z. Fact: Factor-tuning for lightweight adaptation on vision transformer. In Williams, B., Chen, Y., and Neville, J. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Dear reviewer M1uG, Thank you for your feedback and efforts in reviewing our submission. We address your concerns as follow, which will be carefully incorporated in the revision. > **Comparison with FacT** We acknowledge that Factor Tuning (FacT) shares a similar parameter-sharing concept with WAFT. However, unlike WAFT, FacT shares parameters across both layers and modules (Self-Attention and MLP blocks). While this reduces trainable parameters, it has a drawback: Transformer modules serve distinct functions - attention layers largely handle syntactical and in-context learning abilities [1, 2], while MLP layers encode factual knowledge [3]. Sharing parameters across these modules may therefore be suboptimal. For a comprehensive evaluation, we compare FacT on the VTAB and Math10k benchmarks. We use the open-source implementation from the original paper for VTAB, employing the same model as in our FGVC experiments. For Math10k, we adapt FacT to the HuggingFace PEFT package and evaluate it on LLaMA-1 and LLaMA-2 (7B). The table below shows that WAFT outperforms both FacT variants on VTAB while using fewer parameters. Method|Params (M)|Natural|Specialized|Structured|Avg -|-|-|-|-|- VPT|0.046|81.0|85.7|58.9|72.7 BitFit|0.083|81.8|85.2|57.8|72.4 LoRA|0.147|**82.0**|85.9|61.0|74.0 FacT-TT|0.040|79.8|86.0|58.0|71.9 FacT-TK|0.069|80.0|**86.8**|60.9|73.4 WAFT |0.025|**82.0**|86.3|**61.1**|**74.1** WAFT outperforms FacT on the Math10k benchmark, with a larger performance gap on LLaMA-1 than Llama 2. Since LLaMA-1 is weaker and requires more adaptation for arithmetic reasoning, this highlights WAFT’s ability to adapt both weak and strong models effectively, whereas FacT struggles with weaker models. Method|Params (%)|Mem. 
(GB)|Wall Time|AQuA|GSM8k|MAWPS|SVAMP|Avg -|-|-|-|-|-|-|-|- -|-|-|-|LLaMA 1|-|-|-|- FacT (TT) |0.051|17.74|0.52|21.5|30.7|80.3|50.3|45.7 FacT (TK) |0.062|17.75|0.59|21.3|34.8|82.2|51.9|47.5 WAFT|0.052|17.74|0.51|**24.3**|**36.5**|**82.4**|**56.9**|**50.0** -|-|-|-|Llama 2|-|-|-|- FacT (TT) |0.051|17.74|0.52|**24.9**|38.3|81.9|56.2|50.3 FacT (TK) |0.062|17.75|0.59|24.5|41.0|**85.7**|54.4|51.4 WAFT|0.052|17.74|0.50|23.6|**42.4**|84.2|**57.4**|**51.9** > **Effectiveness of the identity transformation** While we do not have a theoretical understanding yet, we hypothesize that the superior performance of the identity operation over more complex and non-linear operations is because of difficulty in optimization. --- **References** [1] Alberto Bietti, Vivien Cabannes, Diane Bouchacourt, Hervé Jégou, Léon Bottou: Birth of a Transformer: A Memory Viewpoint. NeurIPS 2023 [2] Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, Ivan Titov: Analyzing Multi-Head Self-Attention: Specialized Heads Do the Heavy Lifting, the Rest Can Be Pruned [3] Kevin Meng, David Bau, Alex Andonian, Yonatan Belinkov: Locating and Editing Factual Associations in GPT. NeurIPS 2022 --- Rebuttal Comment 1.1: Comment: Thanks the authors for addressing my concerns. i'll change my evaluation to accept.
Summary: 1. Fine-tuning large pretrained Transformer models has two main focuses: parameter-efficient and representation-efficient fine-tuning. LoRA is a pioneering method, but its variants often sacrifice compute and memory efficiency or performance. ReFT is another approach, yet it has performance limitations. 2. The paper proposes Weight-Aware Fine-Tuning (WAFT), which generates fine-tuning weights from pretrained weights. It uses a simple low-rank formulation with two shared linear layers, aiming for multi-faceted efficiency in parameters, representations, compute, and memory. 3. WAFT is related to hypernetworks and neural functionals. It innovatively tunes models using their own weights. Its contributions include a novel framework, multi-faceted efficiency, unifying parameter and representation efficiency, and strong and scalable performance. Experiments on arithmetic reasoning, commonsense reasoning, instruction following, code generation, and visual recognition show that WAFT outperforms many baseline methods. It can achieve better performance with fewer parameters while maintaining LoRA's compute and memory efficiency. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Complete and correct proofs. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: 1. The paper's WAFT approach builds on the prior work of LoRA and its variants. While LoRA achieved efficiency in parameters, compute, and memory, its successors often traded off these aspects. WAFT addresses this by directly generating fine-tuning weights from pretrained weights, offering multi-faceted efficiency and better performance. 2. ReFT, another related method, focused on lightweight representation-editing modules. However, it had performance issues.
WAFT unifies parameter-efficient and representation-efficient fine-tuning, improving upon ReFT by leveraging pretrained weight knowledge and achieving on-par or better performance. Essential References Not Discussed: No. Other Strengths And Weaknesses: Novelty: 1. WAFT innovatively generates fine-tuning weights from pretrained weights. This is a new approach in fine-tuning Transformer models. It sets itself apart from LoRA and its variants by using weight-aware parameterization, which enables more effective utilization of pretrained knowledge. It unifies parameter-efficient and representation-efficient fine-tuning. By combining these two aspects, WAFT offers a more comprehensive solution. This unified approach simplifies the fine-tuning process and can potentially be applied to a wider range of tasks. The design of WAFT, with two shared linear layers, is simple yet effective. It reduces the number of learnable parameters while maintaining or improving performance. This simplicity also contributes to its scalability and ease of implementation. 2. Experiments The paper conducts extensive experiments across multiple tasks. It tests WAFT on arithmetic reasoning, commonsense reasoning, instruction following, code generation, and visual recognition. This wide range of tasks validates the method's generality and effectiveness in different scenarios. The experiments include comparisons with various baseline methods. By comparing WAFT with LoRA, DoRA, VeRA, and ReFT, among others, the paper clearly demonstrates WAFT's advantages. The results show that WAFT can achieve better performance with fewer parameters and maintain compute and memory efficiency. Ablation studies are carried out to verify the effectiveness of WAFT's components. These studies test different parameterization schemes and alternative formulations. They help to understand the impact of each part of WAFT, providing insights into its design and performance. 3.
Reference Integrity The paper comprehensively reviews the related literature. It covers parameter-efficient fine-tuning methods, hypernetworks, and neural functionals. This shows a deep understanding of the field and places WAFT in the context of existing research. All the references are properly cited. The authors use a wide range of sources, from conference papers to arXiv preprints. This gives credit to the original work and allows readers to further explore related research. The reference list is up-to-date. It includes recent papers published in 2024, ensuring that the research builds on the latest developments in the field. This indicates the paper's relevance and timeliness. Weakness 1. Limited Generalization Exploration Although WAFT shows good performance on the tested tasks, its generalization to other architectures and tasks is not fully explored. The paper mainly focuses on Transformer-based models. It's unclear how well WAFT would work in models with different structures or in emerging applications. There is a lack of analysis on the long-term stability of WAFT. As the field of deep learning evolves, new data and tasks may emerge. It's not clear if WAFT can maintain its performance and efficiency over time without further adjustments. WAFT's performance in resource-constrained environments is not well-studied. The experiments are mostly conducted on a single Nvidia A100 GPU. It's unknown how the method would perform in devices with limited computational power and memory, which are common in real-world applications. Other Comments Or Suggestions: N.A. Questions For Authors: N.A. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer NsTU, Thank you for your feedback and efforts in reviewing our submission. We address your concerns as follows, which will be carefully incorporated in the revision. > **Performance in constrained environments** We use an A100 GPU to accelerate experiments, but the implementation is not tied to any specific GPU architecture. WAFT’s memory and wall-time requirements are comparable to or lower than LoRA, while VeRA and DoRA significantly increase both. Table 2 in the manuscript provides a detailed analysis, confirming WAFT’s suitability for any device that supports LoRA. For further verification, we conduct additional experiments with LLaMA-1 (7B) using mixed-precision float16 instead of mixed-precision bfloat16. The table below shows that WAFT and LoRA experience a similar relative performance drop with float16 while maintaining comparable memory and wall-time, consistent with float16’s known limitations as compared to bfloat16. The performance drop due to float16 can be offset by increasing the number of trainable parameters in WAFT. As shown in the table, WAFT with a rank of 128 outperforms LoRA even with float16, while using four times fewer parameters. **These results further confirm WAFT’s compatibility with any device that supports LoRA**. Method|Params (%)|Mem. (GB)|Wall Time|AQuA|GSM8k|MAWPS|SVAMP|Avg. Acc. -|-|-|-|-|-|-|-|- LoRA $^{r=16}$ (bfloat16)|0.416|18.01|0.43|23.5|38.5|85.3|56.4|50.9 WAFT $^{r=64}$ (bfloat16)|0.052|17.74|0.51|24.3|36.5|82.4|56.9|50.0 LoRA $^{r=16}$ (float16)|0.416|18.07|0.42|21.8|37.9|84.7|57.1|50.4 WAFT $^{r=64}$ (float16)|0.052|17.75|0.43|22.7|36.0|83.5|54.9|49.3 WAFT $^{r=128}$ (float16)|0.104|17.80|0.43|22.7|38.5|84.6|56.1|50.5 > **Generalization to other architectures** We recognize the importance of exploring generalization. However, Transformers have proven to be highly scalable and flexible, and have emerged as the standard architecture due to their scalability and adaptability. 
Consequently, both our study and most prior PEFT research focus on Transformers. We believe extending WAFT to other architectures is a valuable future direction, and its strong performance in the context of prior research on fine-tuning Transformer models makes it a meaningful contribution.
Summary: This paper introduces Weight-Aware Fine-Tuning (WAFT), a novel parameter-efficient fine-tuning method for large pre-trained Transformer models. WAFT proposes to generate fine-tuning weights directly from the pre-trained weights using a low-rank formulation with shared linear layers across multiple Transformer layers. By sharing parameters across layers, the authors claim that WAFT achieves multi-faceted efficiency in parameters, representations, compute, and memory, while maintaining or exceeding the performance of LoRA and its variants. Experimental results are presented for various NLP tasks (commonsense reasoning, arithmetic reasoning, instruction following, code generation) as well as visual recognition to demonstrate the effectiveness of WAFT.

## update after rebuttal

I thank the authors for their rebuttal and willingness to improve clarity on the weight-aware term. I will maintain my score. My concern (2) is indeed about having a $\mathbb{W}^l$ in the weight update that is not the pretrained weight, which I suppose bears similarities to VeRA, as suggested. There is no clear reason why this approach would not converge, since VeRA does. Such a study would help enhance the insights of WAFT by clarifying whether the algorithm actually uses knowledge from the pretrained weight or simply exploits properties of the learned matrix that arise during the pre-training stage (independence of features, for example). I suggest the authors include such a study in the paper (or supplementary material) if they obtain interesting results for the camera-ready.

Claims And Evidence: This paper proposes to modify LoRA's weight-additive paradigm to a matrix-multiplication one. This is a valid idea to pursue, although I do not feel that the weight-aware narrative pushed by the authors is very convincing.
I have two main concerns about the authors' weight-aware claims: (1) LoRA is weight-aware to an extent, as the update is formulated as (W + BA), so the BA update takes W into consideration when learning a solution. This is a bit hand-wavy on my part; maybe the weight-aware term is not characteristic enough in this scenario, which weakens the narrative pushed by the paper to justify the multiplicative update. (2) It is not always clear that the pre-trained weights are a good starting basis for fine-tuning. What about tasks where the zero-shot model is highly unadapted? In this case, starting without the pre-trained weight constraints may be an advantage. The Cars dataset in Table 6 could be an example, as could any of the VTAB-1k [1] specialized or structured subsets. More exploration could be done on the relevance of using the pre-trained weights for the matrix multiplication; for example, does a properly scaled random matrix perform close to the pretrained weights? [1] Zhai, Xiaohua, et al. "A large-scale study of representation learning with the visual task adaptation benchmark." ICLR 2020 Methods And Evaluation Criteria: Good range of test datasets which adequately justifies the performance of the proposed algorithm. Theoretical Claims: Did not check the correctness of proofs. Experimental Designs Or Analyses: Good design, but more ablation studies should have been conducted on using the pre-trained weight matrix for the multiplicative update. Supplementary Material: Reviewed the gradient derivations for LoRA and WAFT. Relation To Broader Scientific Literature: PEFT is a relevant field; it is good to see contributions that veer away from LoRA's formulation. Essential References Not Discussed: SOTA is well represented. Other Strengths And Weaknesses: Discussed above. Other Comments Or Suggestions: In general the idea of a multiplicative update is valid and appears to be effective experimentally.
The current justification as to why this is a good idea theoretically is not strong enough, as I do not find the weight-aware narrative compelling, and ablations are missing on using other types of matrices for the multiplicative updates. Questions For Authors: I was surprised by the 9 hours per epoch of VeRA (line 307), which is almost 20 times longer than LoRA. Is this because of the very large intermediate dimension of 12k? What about using a representation of 1024 as suggested in the original VeRA paper? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Dear reviewer UHSy, Thank you for your feedback and efforts in reviewing our submission. We address your concerns (C1-C5) as follows, which will be carefully incorporated in the revision.

> **C1: Weight-awareness in LoRA and WAFT**

In general, any PEFT method has to be (pretrained) weight-aware to be effective for downstream tasks. However, prior art like LoRA has `implicit` weight-awareness between the fine-tuning residual ($\Delta W^l = B^l\cdot A^l$) and the pretrained weights ($W^l$) via gradient backpropagation in learning (see Eqn. 10 for a toy example in Appendix A). Our proposed WAFT harnesses `explicit` weight-awareness (in the forward computation) via learning $\Delta W^l = W^l\cdot \phi \cdot \psi$. In the abstract (lines 025-027), we define **WAFT, a novel approach that learns to generate fine-tuning weights directly from the pretrained weights** (the generic formulation is in Sec. 3.1). To address your concerns, we propose to use **generative weight-aware fine-tuning (GenWAFT)** to better highlight the characteristics of our proposed method in the revision. The comprehensive experiments have verified its effectiveness.

> **C2: Training time for VeRA**

VeRA's long training time results from the large intermediate dimension used in our experiments. Since VeRA relies on fixed random matrices, a large dimension is necessary for reasonable accuracy. As shown in the table below, **reducing the dimension to 1024 (as in the original paper) significantly degrades performance**, despite lowering training time and parameter count. In contrast, our proposed WAFT maintains parameter and training efficiency while achieving comparable or superior performance to state-of-the-art methods.

Method|Params (%)|Mem. (GB)|Wall Time|AQuA|GSM8k|MAWPS|SVAMP|Avg
-|-|-|-|-|-|-|-|-
-|-|-|-|LLaMA 1|-|-|-|-
VeRA $^{r=12288}$|0.042|20.65|9.01|21.3|34.0|82.8|50.7|47.2
VeRA $^{r=1024}$|0.015|17.80|1.15|23.0|30.5|79.1|48.4|45.2
-|-|-|-|LLaMA 2|-|-|-|-
VeRA $^{r=12288}$|0.042|20.65|9.00|23.5|38.7|85.3|54.3|50.4
VeRA $^{r=1024}$|0.015|17.80|1.15|23.6|35.5|82.1|53.3|48.6

> **C3: Effect of weight-aware fine-tuning in cases of weak zero-shot performance of models**

This is certainly a valid concern, especially when evaluating methods that reduce parameter count. Our experiments on the Math10k benchmark with LLaMA-1 (7B) address this issue (Table 2). Given LLaMA-1 7B’s low zero-shot accuracy of 11.0 on GSM8k [1], it can be considered weak for arithmetic reasoning, which would require significant adaptation to perform well on Math10k. The results show:
- Simply reducing parameters by lowering the rank in LoRA and DoRA leads to a significant performance drop. LoRA's accuracy with rank=16 is 50.9 vs. 48.9 with rank=2.
- With an equivalent parameter count (~0.05%), WAFT outperforms all baselines (LoRA, DoRA, VeRA, and VB-LoRA), and achieves accuracy close to LoRA $^{r=16}$. This demonstrates that generating the fine-tuning residuals from the pretrained weights improves adaptation with minimal parameters, whereas LoRA $^{r=2}$ struggles to achieve good performance.

We have also included further experiments with the VTAB benchmark as suggested. We follow the same settings as the FGVC experiments (Section 4.5). The table below shows that WAFT can perform on par with or better than baseline methods even on the Specialized and Structured tasks of the VTAB benchmark.
Method|Params (M)|Natural|Specialized|Structured|Avg
-|-|-|-|-|-
VPT|0.046|81.0|85.7|58.9|72.7
BitFit|0.083|81.8|85.2|57.8|72.4
LoRA|0.147|**82.0**|85.9|61.0|74.0
WAFT|0.025|**82.0**|**86.3**|**61.1**|**74.1**

> **C4: More exploration could be done on the relevance of using the pre-trained weight for the matrix multiplication, for example is a properly scaled random matrix performing close to the pretrained weights?**

We are not certain that we fully understand your concern. Were you suggesting testing a variant like this: converting the WAFT update from $\Delta W^l= W^l\cdot \phi\cdot \psi$ to $\Delta W^l= \mathbb{W}^l\cdot \phi\cdot \psi$ in learning fine-tuning residual weights, where $\mathbb{W}^l$ is "a properly scaled random matrix"? It seems very unlikely that a random matrix can do better than the pretrained weights. With the random matrix, the overall fine-tuning residual weights $\Delta W^l$ would behave similarly in spirit to VeRA. Could you kindly elaborate on your concern?

> **C5: Ablations on using other types of matrices for the multiplicative updates**

The purpose of our simple formulation is that it enables memory- and computationally efficient updates, since the low-rank structure and linear updates result in smaller matrix multiplications. Our ablation studies (Table 7) show that more complex updates are not necessary. Hence, we leave further exploration to future work, and we would welcome any suggestions from the reviewers and the broader community.

---

[1] Touvron et al., "LLaMA: Open and Efficient Foundation Language Models", ArXiv 2023
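To make the contrast between the `implicit` weight-awareness of LoRA and the `explicit` weight-awareness discussed in C1/C4 concrete, here is a minimal NumPy sketch of the two residual parameterizations. All dimensions, initializations, and the layer count `L` are illustrative assumptions, not the paper's actual configuration; the sketch only mirrors the formulas $\Delta W^l = B^l A^l$ (LoRA) and $\Delta W^l = W^l \phi \psi$ (WAFT) quoted in the rebuttal.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 8                            # illustrative layer dims and low rank

W = rng.standard_normal((d, k)) / np.sqrt(k)   # frozen pretrained weight

# LoRA-style additive residual: delta_W = B @ A; the pretrained W only
# influences B and A implicitly, through backpropagated gradients.
B = rng.standard_normal((d, r)) * 0.01
A = np.zeros((r, k))                           # zero-init so training starts at W
delta_lora = B @ A

# WAFT-style residual per the rebuttal: delta_W = W @ phi @ psi, i.e. the
# residual is generated from the pretrained weight itself in the forward pass.
phi = rng.standard_normal((k, r)) * 0.01
psi = np.zeros((r, k))
delta_waft = W @ phi @ psi

x = rng.standard_normal((5, k))
# At initialization both parameterizations reproduce the pretrained layer exactly.
assert np.allclose(x @ (W + delta_lora).T, x @ W.T)
assert np.allclose(x @ (W + delta_waft).T, x @ W.T)

# Trainable parameters across L layers: per-layer (B^l, A^l) grow with L,
# while a shared (phi, psi) pair does not.
L = 32
lora_params = L * (d * r + r * k)
waft_params = k * r + r * k
```

Because $\phi$ and $\psi$ can be shared across layers while LoRA's $B^l, A^l$ are per-layer, the trainable-parameter count of the multiplicative update stays constant in the number of layers, which is consistent with the parameter ratios reported in the rebuttal tables.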
PDUDT: Provable Decentralized Unlearning under Dynamic Topologies
Accept (poster)
Summary: The paper proposes PDUDT, a Provable Decentralized Unlearning algorithm under Dynamic Topologies. The key contribution is a decentralized method that eliminates the influence of a specific client without additional communication or retraining. The authors provide rigorous theoretical guarantees demonstrating that PDUDT is statistically indistinguishable from perturbed retraining and achieves a convergence rate of $\mathcal{O}(1/T)$, matching state-of-the-art results. Experimental results show that PDUDT saves over 99% unlearning time compared to retraining while maintaining comparable unlearning performance. Claims And Evidence: The claims made in the paper are largely supported by theoretical analysis, experimental results, and comparative evaluations. Here’s an assessment of the key claims and their supporting evidence: 1. PDUDT enables decentralized unlearning without additional communication or retraining: The proposed algorithm uses historical gradient information to approximate gradient residuals and eliminate a client’s contribution without retraining. The method does not require extra communication beyond standard decentralized learning updates. The algorithm’s efficiency is demonstrated theoretically and empirically, showing that it performs unlearning in $\mathcal{O}(t_1)$ time, significantly reducing computational overhead compared to retraining-based methods. 2. PDUDT is statistically indistinguishable from the perturbed retrained method: The paper provides a rigorous proof that PDUDT satisfies $(\epsilon, \beta)$-machine unlearning, ensuring that the output model is statistically close to one that would have been obtained by retraining from scratch. A Gaussian noise mechanism is added to further enhance indistinguishability, following the principles of differential privacy. 3.
PDUDT effectively removes the influence of the unlearned client: The membership inference attack (MIA) evaluation shows that after unlearning, the attack success rate drops to approximately 50%, indicating that the model has effectively "forgotten" the removed client’s data. The accuracy on the unlearned client’s class drops significantly, while accuracy on other classes remains stable, reinforcing that the client's influence has been successfully removed. Methods And Evaluation Criteria: This paper effectively tackles the challenge of data unlearning in decentralized machine learning environments. The proposed methods are well-structured, and the evaluation criteria are comprehensive. Here’s why: 1. The paper focuses on decentralized unlearning under dynamic topologies, a complex and underexplored issue. The proposed PDUDT introduces a gradient residual approximation technique that enables unlearning without retraining or additional communication, ensuring scalability and efficiency. Theoretical analysis guarantees that PDUDT is statistically indistinguishable from retraining-based unlearning. 2. The paper employs multiple experimental metrics to assess the performance of the proposed unlearning algorithm, including: — Accuracy: Ensuring that the model retains high performance on unaffected data while effectively forgetting the targeted data. — Unlearning Time: Demonstrating that PDUDT achieves significant efficiency gains compared to retraining-based methods. — Attack Resistance: Validating unlearning effectiveness using membership inference attacks (MIA) to confirm that the removed data is no longer recoverable. Overall, the paper provides a well-founded, efficient, and theoretically sound approach to decentralized unlearning, supported by rigorous evaluation and empirical evidence. 
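The attack-resistance criterion above (MIA accuracy falling to roughly 50% after unlearning) can be illustrated with a toy loss-threshold attack. The loss distributions, sample sizes, and threshold grid below are synthetic assumptions chosen only to show why near-chance accuracy signals forgetting; they are not the paper's actual attack setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical per-sample losses: after successful unlearning, the target
# client's data should be statistically indistinguishable from held-out data.
member_losses = rng.normal(1.0, 0.3, 500)      # unlearned client's samples
nonmember_losses = rng.normal(1.0, 0.3, 500)   # held-out samples

def mia_accuracy(member, nonmember, threshold):
    # Predict "member" when the loss is below the threshold.
    correct = (member < threshold).sum() + (nonmember >= threshold).sum()
    return correct / (len(member) + len(nonmember))

# Best threshold attack over a grid of candidate thresholds.
acc = max(mia_accuracy(member_losses, nonmember_losses, t)
          for t in np.linspace(0.2, 1.8, 50))
# With identical loss distributions, even the best threshold hovers near
# chance (~0.5), which is the signature of effective unlearning.
```

If the member losses were shifted lower (as for memorized training data), the same attack would exceed chance, which is why a post-unlearning rate near 50% supports the forgetting claim.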
Theoretical Claims: The theoretical claims in the paper are well-supported and logically consistent, following standard mathematical frameworks from differential privacy and decentralized optimization. The proofs provide strong guarantees on statistical indistinguishability and convergence: 1. The $(\epsilon, \beta)$-machine unlearning guarantee is based on differential privacy principles, ensuring that the model after unlearning is statistically indistinguishable from a retrained model. The proof constructs an upper bound on the difference between PDUDT’s output and a fully retrained model and then introduces Gaussian noise to mask this difference. By leveraging the Gaussian mechanism theorem, the authors establish that no one can reliably distinguish between the two models within a given probability bound. 2. The convergence analysis aims to prove that PDUDT maintains an $\mathcal{O}(1/T)$ convergence rate after an unlearning operation. The proof follows a decentralized gradient descent approach, deriving an upper bound on the expected gradient norm over $T$ communication rounds. By incorporating assumptions of Lipschitz smoothness and bounded variance, the analysis confirms that the learning process continues to converge at the same rate as standard decentralized optimization methods. The step-size conditions are carefully derived to ensure stability. Experimental Designs Or Analyses: The experimental designs and analyses in this paper appear well-structured and methodologically sound. The paper employs a variety of evaluation metrics and benchmark datasets to assess the effectiveness of the proposed PDUDT algorithm. Below is an assessment of the soundness and validity of these experiments: 1. The experiments are conducted on standard benchmark datasets: MNIST, Fashion-MNIST, CIFAR-10, and SVHN, ensuring generalizability. The models used (CNN for simpler datasets and ResNet-18 for more complex datasets) align well with the dataset complexity.
The experiments simulate decentralized environments with dynamic topologies, making the setup realistic for real-world applications. PDUDT is compared against multiple baseline methods (Retrain, FATS-Unl, FedRecovery, and HDUS), allowing for a comprehensive performance evaluation. While the decentralized learning setup is reasonable, the details on how the topology evolves dynamically over time are limited. More information on how client connections change could improve reproducibility. 2. The paper evaluates PDUDT using three primary metrics: — Accuracy: To assess whether the model successfully forgets the unlearned data while maintaining overall performance. — Unlearning Time: To measure computational efficiency, showing that PDUDT is significantly faster than retraining-based methods. — Attack Resistance: Using membership inference attacks (MIA) to verify if the target client’s data has truly been forgotten. These metrics effectively capture both efficiency and security aspects of unlearning. However, the impact of different levels of noise injection (used for statistical indistinguishability) on model utility and unlearning effectiveness could be further analyzed. In summary, the experimental design is methodologically sound, and the analyses provide strong empirical evidence supporting PDUDT’s effectiveness. While the paper presents robust results, additional details on topology evolution and noise impact could further enhance validity. Supplementary Material: The supplementary material in the paper primarily comprises theoretical proofs, additional experimental details, and methodological clarifications, providing deeper insights into the theoretical guarantees and practical implementation of PDUDT. Specifically, I reviewed the following sections: 1. 
Proof of Theorem 4.9 (Appendices A–D): Establishes an upper bound on the difference between the retrained model and the output model of PDUDT, quantifying the gap between the proposed unlearning approach and retraining-based unlearning. 2. Proof of Corollary 4.10 (Appendix E): Demonstrates the statistical indistinguishability of PDUDT from retraining by leveraging the Gaussian mechanism, ensuring that the unlearned model remains indistinguishable from the retrained counterpart with well-defined privacy guarantees. 3. Proof of Theorem 4.11 (Appendix F): Analyzes the convergence behavior of decentralized learning following the unlearning process, ensuring that the system continues to learn efficiently even after removing a client’s influence. 4. Proof of Corollary 4.12 (Appendix G): Establishes that PDUDT achieves an O(1/T) convergence rate, aligning with state-of-the-art results under specific step size conditions, thereby confirming its theoretical soundness and efficiency in decentralized settings. These supplementary materials reinforce the robustness of PDUDT, validating its efficiency, effectiveness, and scalability within dynamic decentralized learning environments. Relation To Broader Scientific Literature: The key contributions of PDUDT relate to several areas of research, including machine unlearning, decentralized learning, efficient model updates, and theoretical guarantees. Prior works in machine unlearning, such as FedEraser (Liu et al., 2021) and FedRecovery (Zhang et al., 2023), primarily focus on removing client contributions in federated learning, often relying on gradient residual removal or retraining, but they assume the presence of a central server, limiting their applicability to fully decentralized systems. Recent decentralized unlearning approaches, such as HDUS (Ye et al., 2024) and BlockFUL (Liu et al., 2024a), propose heuristic solutions using distilled models or blockchain structures, but they lack rigorous theoretical guarantees. 
PDUDT addresses these limitations by introducing the first provable decentralized unlearning method, leveraging gradient residual approximations and a weighted removal mechanism to eliminate a client’s influence without requiring retraining or additional communication overhead, making it highly efficient in dynamic decentralized networks. Essential References Not Discussed: The paper provides a comprehensive discussion of prior work in federated/decentralized unlearning, and there do not appear to be critical missing references that would significantly alter the context or evaluation of the paper’s contributions. The theoretical and experimental analysis is well-grounded in previous research, making the discussion of related work sufficient and appropriate for the scope of the study. Other Strengths And Weaknesses: Strengths: 1. PDUDT is the first provable decentralized unlearning algorithm that operates under dynamic topologies, addressing a significant gap in privacy-preserving decentralized learning. 2. The theoretical results of this article are detailed and in-depth. The paper derives an upper bound on the deviation between PDUDT’s output and a retrained model, providing rigorous guarantees on its unlearning performance (Theorem 4.9). The convergence proof (Theorem 4.11) ensures that after unlearning, the decentralized learning process continues to perform optimally, matching state-of-the-art decentralized learning results. 3. The experimental section is comprehensive, covering multiple datasets and comparing against strong baselines. The performance advantages of the proposed PDUDT are demonstrated from multiple perspectives. 4. The paper is well-structured, with clear theoretical formulations, algorithm descriptions, and proofs that make the methodology easy to follow. Weaknesses: 1. Some experimental details are missing.
For example, the paper does not explicitly clarify whether the experiments fully simulate dynamic network connections, which is crucial for demonstrating the adaptability of PDUDT in real-world decentralized settings. 2. While the theoretical analysis is rigorous, some notations in the proofs are complex and could be slightly simplified for readability, especially for readers less familiar with decentralized optimization. Other Comments Or Suggestions: 1. Given the extensive use of mathematical symbols throughout the paper, a notation table summarizing key variables and parameters could improve readability and accessibility. 2. Since the appendix contains many sections with detailed proofs and experimental setups, it would be helpful to add a brief overview at the beginning of the appendix summarizing the role of each section. This would make it easier for readers to navigate the supplementary material. 3. A careful proofreading of the paper may help identify and correct any minor spelling or grammatical errors to ensure clarity and professionalism. Questions For Authors: 1. How is network dynamism reflected in the theoretical results? The paper discusses dynamic topologies, but could you clarify where this is explicitly incorporated into the theoretical analysis? 2. Additionally, was the dynamic nature of network links simulated in the experiments? If so, how frequently were connections updated or changed? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer 6f9d for the time and valuable feedback! We will try our best to address the comments one by one. **1. Response to “Experimental Designs Or Analyses”:** Thank you for your constructive feedback. We agree that providing more details on topology evolution and the impact of noise would further strengthen our results. In the experiments, we randomly generate connections among clients and then assign communication weights using the Metropolis-Hastings method. More details are provided in our subsequent response (Response to “Other Weaknesses 1.” & “Questions For Authors 2.”). As for the impact of noise, our Figure 1 demonstrates that increased noise leads to a decline in model accuracy. However, under all noise conditions, the performance of PDUDT and the perturbed retraining method is always comparable. This indicates that under the same noise conditions, these two approaches can achieve statistical indistinguishability. We hope our response addresses your concerns. **2. Response to “Other Weaknesses 1.” & “Questions For Authors 2.”:** Thank you for your constructive feedback. In fact, our experiments are based on dynamic connections. In each round, the presence of a connection between any two clients is randomly generated. Then, to ensure that the communication pattern can be modeled as a doubly stochastic matrix, we use the Metropolis-Hastings method cited in the paper to generate the communication weights between clients. We have provided this detail of the dynamic topology construction in the revised version. **3. Response to “Other Weaknesses 2.” & “Other Comments Or Suggestions 1.”:** Thank you for your valuable feedback. We agree that the extensive use of mathematical symbols can be challenging for readers. In our revised version, we have added a comprehensive notation table that summarizes all key variables and parameters, which enhances both readability and accessibility. Thank you again! **4.
Response to “Questions For Authors 1.”:** Thank you for your insightful question. Network dynamism is explicitly modeled in our theoretical analysis by allowing the communication matrices $\mathcal{W}^t$ (and the corresponding retraining matrices $\tilde{\mathcal{W}}^t$ ) to be time-varying. This reflects the fact that the network topology can change at each round. We introduce spectral gap upper bounds (e.g., $\rho_1$ and $\rho_2$) in our Assumption 4.8, which quantify the connectivity properties of these matrices. These bounds directly influence the convergence rate and error bounds derived in Theorem 4.9 and subsequent results. We hope our answer can clarify your confusion. **5. Response to “Other Comments Or Suggestions 2.&3.”:** We thank the reviewer for these valuable suggestions. In response, we have added a concise overview at the beginning of the appendix that outlines the contents and purpose of each section, making it easier for readers to navigate the supplementary material. Additionally, we carefully proofread the entire paper to correct any minor spelling or grammatical errors, ensuring clarity and a professional presentation. We hope these improvements meet your expectations. If there are any further confusions/questions, we are happy to clarify and try to address them. Thank you again and your recognition means a lot for our work.
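As a concrete illustration of the topology construction described in this rebuttal, the sketch below draws one round's random undirected links and assigns Metropolis-Hastings weights, yielding a symmetric doubly stochastic mixing matrix. The edge probability 0.4 and the 10-client network are illustrative assumptions; the rule $W_{ij} = 1/(1+\max(d_i, d_j))$ is the standard Metropolis-Hastings weighting, which may differ in constants from the paper's cited variant.

```python
import numpy as np

def metropolis_hastings_weights(adj):
    """Symmetric doubly stochastic mixing matrix from an undirected adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j]:
                W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        W[i, i] = 1.0 - W[i].sum()   # self-weight absorbs the remainder
    return W

rng = np.random.default_rng(1)
n = 10
# One round of a dynamic topology: each undirected link is present at random,
# so the mixing matrix changes from round to round.
upper = rng.random((n, n)) < 0.4
adj = np.triu(upper, 1)
adj = (adj | adj.T).astype(int)

W = metropolis_hastings_weights(adj)
# Rows and columns sum to 1 and W is symmetric: doubly stochastic by construction.
assert np.allclose(W.sum(axis=1), 1.0) and np.allclose(W.sum(axis=0), 1.0)
```

Redrawing `adj` each communication round gives exactly the time-varying matrices $\mathcal{W}^t$ that the theoretical analysis allows.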
Summary: The authors introduce a novel algorithm, PDUDT, which is designed to enable efficient and provable unlearning in decentralized learning systems with dynamic network topologies. PDUDT allows clients to remove the influence of a specific client without retraining or additional communication by using historical gradient submissions to approximate gradient residuals and adjusting local models accordingly. Theoretical guarantees show PDUDT achieves an $\mathcal{O}(\frac{1}{T})$ convergence rate. Experiments on datasets like MNIST, CIFAR-10, and SVHN demonstrate that PDUDT reduces unlearning time by over 99% compared to retraining. Claims And Evidence: This work is generally supported by rigorous theoretical analysis. However, I still have some concerns: (1) The authors claim that they are the first provable decentralized unlearning algorithm under dynamic topologies. Some related works, such as “Decentralized Federated Unlearning on Blockchain, arXiv 2024; Decentralized Unlearning for Trustworthy AI-Generated Content (AIGC) Services, IEEE Network 2024; Heterogeneous Decentralized Machine Unlearning with Seed Model Distillation, arXiv 2023”, need to be carefully addressed and compared. (2) Regarding the claim that PDUDT is statistically indistinguishable from perturbed retraining, the authors utilize the Gaussian mechanism to achieve it. However, the connection between the Gaussian noise and the indistinguishability guarantee could be explained more clearly. Methods And Evaluation Criteria: The chosen baselines and the evaluation criteria are well-suited for decentralized unlearning problems. Theoretical Claims: I mostly checked the correctness. I have questions about Corollary 4.10: the choice of the noise scale $\sigma$ is based on the upper bound from Theorem 4.9. If the bound in Theorem 4.9 is not accurate, the noise scale might not be appropriate, and the statistical indistinguishability claim could be affected.
Also, in practice, the Gaussian mechanism's effectiveness might be limited by the actual distribution of the data and model parameters. Experimental Designs Or Analyses: (1) While the paper demonstrates the effectiveness of PDUDT on a network of 10 clients, the scalability to much larger networks (e.g., hundreds of clients) is not thoroughly explored. (2) The experiments are conducted on specific models and datasets. While the results are convincing for these cases, the generalizability to other models and tasks (e.g., natural language processing or more complex architectures) is not fully addressed. Supplementary Material: None. Relation To Broader Scientific Literature: Theoretical guarantees for unlearning have been explored in centralized settings, with methods like Exact Unlearning (Guo et al., 2020) and Approximate Unlearning (Sekhari et al., 2021) providing formal guarantees for removing data points from trained models. PDUDT provides provable guarantees for decentralized unlearning, including statistical indistinguishability and convergence rates. This extends prior theoretical work to the decentralized setting, where the lack of a central server introduces additional challenges. Essential References Not Discussed: Some decentralized unlearning algorithms have also been proposed, such as “Decentralized Federated Unlearning on Blockchain, arxiv 2024; Decentralized Unlearning for Trustworthy AI-Generated Content (AIGC) Services, IEEE Network 2024; Heterogeneous Decentralized Machine Unlearning with Seed Model Distillation, arxiv 2023”, what are the main contributions of PDUDT? Other Strengths And Weaknesses: While the early stopping strategy is introduced to save memory, the paper does not provide a detailed analysis of its impact on unlearning performance. A more thorough discussion of the trade-offs between memory savings and performance would enhance clarity. Other Comments Or Suggestions: None. 
Questions For Authors: (1) Some decentralized unlearning algorithms, such as “Decentralized Federated Unlearning on Blockchain, arxiv 2024; Decentralized Unlearning for Trustworthy AI-Generated Content (AIGC) Services, IEEE Network 2024; Heterogeneous Decentralized Machine Unlearning with Seed Model Distillation, arxiv 2023”, need to be addressed. (2) While the paper demonstrates the effectiveness of PDUDT on a network of 10 clients, the scalability to much larger networks (e.g., hundreds of clients) is not thoroughly explored. The experiments are conducted on specific models and datasets. While the results are convincing for these cases, the generalizability to other models and tasks (e.g., natural language processing or more complex architectures) is not fully addressed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer vxha for the valuable feedback! We would try our best to address the comments one by one. **Response to “Claims And Evidence (1)” & “Essential References Not Discussed” & “Questions For Authors (1)”:** We have carefully examined these works and provide a detailed comparison below to clarify our unique contributions. **(1) vs. HDUS (Ye et al., 2023):** HDUS relies on seed model distillation and additional training steps, without theoretical guarantees. PDUDT, by contrast, eliminates influence using saved historical gradients—without retraining—and offers formal guarantees on unlearning and convergence under dynamic topology. **(2) vs. AIGC Unlearning (Lin et al., 2024):** The AIGC work focuses on privacy-preserving AIGC systems via coded computing, incurring storage and reconstruction overhead. It lacks theoretical unlearning guarantees and is tailored to AIGC tasks. In contrast, PDUDT is application-agnostic, retraining-free, and provides provable client-level unlearning. **(3) vs. BlockFUL (Liu et al., 2024):** BlockFUL depends on blockchain infrastructure and consensus, which may be impractical in dynamic or asynchronous networks. In contrast, PDUDT is lightweight and naturally supports dynamic peer-to-peer topologies. Therefore, we claim that our work presents the first **provable** decentralized unlearning algorithm under **dynamic topologies**. In our current manuscript, although HDUS and BlockFUL have been cited, we agree that adding more detailed discussion will improve the quality. More generally, we also fill in the missing discussion on AIGC Unlearning (Lin et al., 2024). 
**Response to “Claims And Evidence (2)”:** The statistical indistinguishability guarantee in Corollary 4.10 relies on the fact that the difference between the PDUDT output (model corrected by gradient residual estimation) and the output of retraining can be bounded (as shown in Theorem 4.9), and then obfuscated by adding Gaussian noise with a corresponding scale. Specifically, the Gaussian mechanism ensures that, after adding noise drawn from $\mathcal{N}(0,\sigma^2\mathbb{I}_d)$, the two outputs become close in distribution—making them statistically indistinguishable. This approach follows the standard principles of the Gaussian mechanism in differential privacy. In our case, Theorem 4.9 gives an upper bound on this difference, and we use it to determine a noise scale, thereby masking any deviation between the PDUDT output and the retraining output. **Response to “Theoretical Claims”:** We fully understand your concern about the accuracy of the difference between the retrained and the unlearned models, as it affects the subsequent noise scale. But in fact, we cannot get the exact value because $\tilde{\theta}_i^{t_1}$ is a retrained model. Therefore, we need to approximately seek the upper bound of it through the model iteration rule. Our Theorem 4.9 provides a conservative upper bound on the difference between the retrained and the unlearned models, which ensures that the noise scale $\sigma$ selected suffices to achieve the desired level of statistical indistinguishability. While the bound might not be tight in some scenarios, this conservativeness is intentional and aligns with the standard approach in differential privacy, where upper bounds are used to ensure worst-case guarantees. We also highlight that the Gaussian mechanism is chosen due to its robustness under such uncertainty—it provides meaningful guarantees as long as the upper bound is valid. 
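The noise calibration described above can be sketched in a few lines. This is a minimal illustration of the classical Gaussian-mechanism calibration $\sigma = \Delta\sqrt{2\ln(1.25/\beta)}/\epsilon$, assuming the Theorem 4.9 bound is available as a scalar `delta_bound`; the function name, interface, and exact constant are illustrative, not taken from the paper:

```python
import numpy as np

def perturb_unlearned_model(theta_unlearned, delta_bound, eps, beta, seed=None):
    """Add Gaussian noise so the unlearned model becomes (eps, beta)-
    indistinguishable from retraining, given the worst-case bound
    ||theta_unlearned - theta_retrained|| <= delta_bound.

    Uses the classical Gaussian-mechanism scale
    sigma = delta_bound * sqrt(2 * ln(1.25 / beta)) / eps; the paper may
    use a different constant, so treat this as a sketch only.
    """
    rng = np.random.default_rng(seed)
    sigma = delta_bound * np.sqrt(2.0 * np.log(1.25 / beta)) / eps
    noisy = theta_unlearned + rng.normal(0.0, sigma, size=theta_unlearned.shape)
    return noisy, sigma
```

Because the bound is conservative, any over-estimate of `delta_bound` only increases `sigma`, so the worst-case indistinguishability guarantee is preserved.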
**Response to “Experimental Designs Or Analyses” & “Questions For Authors (2)”:** Due to limited computational resources and funding constraints, we are currently unable to simulate scenarios involving hundreds of clients. However, to demonstrate scalability, we conducted experiments with 20 clients on an NLP task. Specifically, we evaluated PDUDT using the Yahoo! Answers dataset with the Bert-tiny model. The results can be found at: https://anonymous.4open.science/r/Unlearning-47E3, confirming that PDUDT scales well to larger networks and to NLP tasks. **Response to “Other Strengths And Weaknesses”:** We would like to clarify that Corollary 4.10 in our paper also applies to the early stopping variant PDUDT (ES). The key difference between them lies in their cumulative weighted gradient residual approximation ($\frac{1}{n-1}\sum\limits_{i=1}^{n-1}\sum\limits_{t=0}^{t_1-1}p_i^t \delta_i^t$ for PDUDT and $\frac{1}{n-1}\sum\limits_{i=1}^{n-1}\sum\limits_{t=0}^{t_{1,i}-1}p_i^{'t} \delta_i^t$ for PDUDT (ES)). Despite this difference, both can be bounded by the same upper bound in Theorem 4.9. This is formally justified in Appendix A (Lemma A.2) and further supported by the derivations in Appendix C. We note that the current manuscript did not explicitly state that Corollary 4.10 also holds for PDUDT (ES). We have clarified this in the revised version. If you have further questions, we are happy to clarify them. Thank you again for your support.
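The two correction terms above share the same algebraic shape. A toy sketch of applying such a cumulative weighted residual correction to a local model follows; the `weights` and `residuals` arrays are placeholders for the paper's $p_i^t$ and $\delta_i^t$, whose actual construction follows the paper, not this sketch:

```python
import numpy as np

def apply_residual_correction(theta, weights, residuals):
    """Subtract the cumulative weighted gradient-residual term from a local
    model: theta - (1/(n-1)) * sum_i sum_t weights[i][t] * residuals[i][t].
    This only shows where the correction is applied; the quantities
    themselves come from saved historical gradients in the paper.
    """
    n_minus_1 = len(residuals)  # number of remaining clients
    correction = sum(
        w * d for ws, ds in zip(weights, residuals) for w, d in zip(ws, ds)
    ) / n_minus_1
    return theta - correction
```

PDUDT and PDUDT (ES) would differ only in how many terms each inner list contains ($t_1$ versus the client-specific $t_{1,i}$).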
Summary: This work focuses on provable unlearning in decentralized learning under dynamic topologies. The proposed PDUDT algorithm addresses this by using historical gradient information of clients and their neighbors to eliminate a specific client's influence without extra communication or retraining. The authors provide rigorous theoretical guarantees, showing its statistical indistinguishability from perturbed retraining and an $O(\frac{1}{T})$ convergence rate. Claims And Evidence: The claim that PDUDT can eliminate the influence of a specific client without additional communication or retraining is supported by the algorithm design. The algorithm uses historical gradient submissions to compute gradient residual approximations and weights, which are then used to adjust the local models. However, the approximation of gradient residuals may not fully capture the complex interactions among clients in all cases. This could potentially lead to incomplete elimination of the client's influence, especially in scenarios with highly non-linear model dynamics. Methods And Evaluation Criteria: Yes, I think the experiments are sufficient; the proposed methods and evaluation criteria in the paper are highly relevant and make sense. Theoretical Claims: The proof assumes that the communication matrices are doubly stochastic and symmetric (Assumption 4.7). While this is a common assumption, it may not hold in all real-world decentralized systems, especially under dynamic topologies. A discussion of how the proof generalizes to non-symmetric or non-doubly stochastic matrices would be beneficial. Experimental Designs Or Analyses: Yes, the paper's experimental designs and analyses generally show soundness. The selection of datasets (MNIST, FashionMNIST, CIFAR-10, and SVHN), models (CNN and ResNet-18), and baselines is appropriate. But I do not understand the attack precision design, which needs more details.
Supplementary Material: I checked the main results; the derivation details are too long. Relation To Broader Scientific Literature: The key contribution is that this work is the first provable decentralized unlearning algorithm under dynamic topologies. The concept of $(\epsilon, \beta)$-indistinguishability (Neel et al., 2021) is used in the paper to prove that PDUDT is statistically indistinguishable from perturbed retraining. This builds on the existing literature on differential privacy in distributed learning. Essential References Not Discussed: I think the highly related works have been cited and discussed. Other Strengths And Weaknesses: No other comments or suggestions. Other Comments Or Suggestions: No other comments or suggestions; overall, I think this work is well-written. Questions For Authors: My major concern remains the PDUDT algorithm design. The claim that PDUDT can eliminate the influence of a specific client without additional communication or retraining is supported by the algorithm design. The algorithm uses historical gradient submissions to compute gradient residual approximations and weights, which are then used to adjust the local models. However, the approximation of gradient residuals may not fully capture the complex interactions among clients in all cases. This could potentially lead to incomplete elimination of the client's influence, especially in scenarios with highly non-linear model dynamics. If this can be well addressed, I can consider changing my evaluation. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer rNoR for the time and valuable feedback! We would try our best to address the comments one by one. **1. Response to “Claims And Evidence”& “Questions For Authors”:** Thank you for your insightful feedback. As we discussed in Related Work, approximate unlearning has demonstrated sufficient effectiveness in many practical scenarios and has thus motivated further work by many researchers, such as Liu et al. (2021) and Ye et al. (2024). Therefore, in this work, we primarily employ the weighted gradient residual approximation to adjust the local model, further achieving approximate unlearning in the decentralized paradigm. Corollary 4.10 shows the unlearning performance that our algorithm can achieve, which is statistically indistinguishable from perturbed retraining. In addition to the theoretical guarantee, we experimentally verify its statistical indistinguishability and further analyze the utility and efficiency of PDUDT. We hope our answer can clarify your confusion. **2. Response to “Theoretical Claims”:** Thank you for your valuable feedback. We note that Assumption 4.7 is a standard and widely adopted assumption in decentralized learning frameworks. Furthermore, many methods have been proposed to construct a doubly stochastic communication matrix, such as the Metropolis-Hastings method. However, under extremely harsh communication conditions, the use of Assumption 4.7 has certain limitations, since it may be challenging to guarantee the doubly stochastic nature of the communication matrix. We have included a discussion on this limitation in the revised version. Thank you once again for your insightful comments. **3. Response to “Experimental Designs Or Analyses”:** Thank you for your insightful feedback. 
We hope that the following explanation of MIA design can clarify your confusion: Membership Inference Attack (MIA) is employed to determine whether the data samples of a client slated for forgetting were part of the training process. A higher MIA success rate indicates that the global model still retains considerable information about this client's training data, signifying an inadequate unlearning effect. In contrast, a success rate of 50%—equivalent to random guessing—implies that the model no longer carries exploitable traces of the client's data, thereby demonstrating effective client removal. Thank you again for your valuable comments; we have added this explanation in the revised version. If there are any further questions, we are happy to clarify and address them. Thank you again; your recognition means a lot to us.
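The 50% baseline discussed above can be made concrete with a toy threshold-based membership inference evaluation. The confidence-score attack here is a common simplification for illustration, not the specific attack used in the paper:

```python
import numpy as np

def mia_success_rate(member_scores, nonmember_scores, threshold=0.5):
    """Toy membership-inference evaluation.

    A sample is predicted 'member' when its attack score (e.g. the model's
    confidence on that sample) exceeds `threshold`. A rate near 0.5 means
    the attack does no better than random guessing, the criterion used as
    evidence of effective unlearning.
    """
    member_scores = np.asarray(member_scores)
    nonmember_scores = np.asarray(nonmember_scores)
    correct = np.sum(member_scores > threshold) + np.sum(nonmember_scores <= threshold)
    return correct / (member_scores.size + nonmember_scores.size)
```

A perfectly separable score distribution yields a rate of 1.0 (information retained), while indistinguishable score distributions push the rate toward 0.5 (effective removal).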
Learning Changes in Graphon Attachment Network Models
Accept (poster)
Summary: This paper introduces the Graphon Attachment Network Models (GAN-M), a new framework for modeling evolving networks by utilizing attachment probabilities inspired by graphon analysis. The authors extend the classical CUSUM method to accommodate graphon-inspired attachment probabilities through the use of subgraph counts, proposing a new statistic called WESUM. The paper claims that WESUM effectively detects structural changes in dynamic networks and provides theoretical guarantees for its performance. Claims And Evidence: Claim 1: GAN-M extends graphon models to dynamic networks. Evidence 1: The paper clearly outlines the limitations of static graphon models and shows how GAN-M integrates graphon functions with dynamic attachment processes. Additionally, the theoretical formulation provides a generalized approach that unifies both static and dynamic regimes. Claim 2: WESUM effectively detects change-points in dynamic networks. Evidence 2: Theoretical derivations establish consistency, and simulation results demonstrate high accuracy. Claim 3: WESUM outperforms traditional change-point detection methods. Evidence 3: The paper does not comprehensively compare WESUM against baseline methods beyond the synthetic cases. I recommend evaluations on common benchmarks to improve the experimental analysis. Methods And Evaluation Criteria: The evaluation metrics used—change-point estimation error, normalized Hausdorff distance, and Averaged Rand Index (ARI)—are well-established in the field and effectively measure the accuracy of change-point detection. However, the evaluation is limited to a small number of simulated datasets, with no experiments conducted on real-world networks. Theoretical Claims: The proofs for consistency and convergence rates are novel and well-structured. However, the assumptions underlying key theoretical claims are not explicitly discussed, making it difficult to assess their generality and practical applicability. 
For instance, Theorem 3.1 establishes error bounds and detection consistency, but it lacks a clear discussion on the realistic feasibility of its conditions. Clarifying these assumptions and their implications would improve the interpretability of the theoretical results. Adding examples or empirical tests of these assumptions would also improve clarity. Experimental Designs Or Analyses: The experimental findings seem to confirm the theoretical predictions, but additional experiments with real-world dynamic networks would strengthen the claims. Supplementary Material: The supplementary material was not reviewed in full. Relation To Broader Scientific Literature: The change-point detection literature is well-referenced, but the paper lacks citations and references for the use of graphons in other domains. Essential References Not Discussed: The paper has a general lack of citations. Specifically, there is a notable absence of references to prior works that integrate graphon theory in (graph) machine learning, despite the growing body of works [1,2,3,4,5,6] -- many recent studies leverage graphon-based representations for tasks such as graph classification, generative modeling, and large-scale network learning, yet none of these are cited or discussed. Moreover, a broader discussion of related works—especially those addressing learning changes in evolving graphs using deep learning, probabilistic models, or spectral methods—would help clarify how this work improves upon prior research. [1] Levie et al. A graphon-signal analysis of graph neural networks. NeurIPS 2023. [2] Finkelshtein et al. Learning on large graphs using intersecting communities. NeurIPS 2024. [3] Herbst et al. Higher-order graphon neural networks: approximation and cut distance. ICLR 2025. [4] Ruiz et al. Graphon Neural Networks and the Transferability of Graph Neural Networks. NeurIPS 2021. [5] Keriven et al. On the universality of graph neural networks on large random graphs. NeurIPS, 2021. 
[6] Maskey et al. Generalization analysis of message passing neural networks on large random graphs. NeurIPS, 2022. Other Strengths And Weaknesses: Strengths: 1. The paper is novel, presenting the first integration of graphons and attachment models. 2. Despite the limited number of experiments, the empirical validation demonstrates the efficiency and practicality of WESUM. Weaknesses 1. The introduction and motivation lacks a compelling real-world justification for why this specific integration of graphon and attachment models is needed. Furthermore, the writing style often seem dense with a short introduction and motivations to new terms—perhaps including an illustrative example would improve accessibility. 2. The paper does not discuss prior works that incorporate graphon theory into machine learning. It also sparsely discusses alternative approaches for dynamic graph modeling and change detection. Without this context, it is difficult to distinguish the novelty and significance of the proposed method. 3. The evaluation is restricted to a small number of simulated datasets, with no experiments conducted on real-world networks. This lack of practical validation raises concerns about the applicability of GAN-M and WESUM. 4. While the theoretical analysis is well-structured, some assumptions (e.g., Theorem 3.1) are not well explained, making it difficult to assess the practical feasibility of the theoretical results. A discussion on when these assumptions hold in real-world settings would enhance the paper’s interpretability. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time and effort in reviewing our manuscript, and for your recognition of our methods and theoretical results. We appreciate your acknowledgment of the novelty of our approach, as well as the effectiveness and practical applicability of our algorithm. Below, we will address the questions you raised. 1.**Regarding the benchmarks.** **Reply:** Thank you for your question. We would kindly point out that we are unable to find a fair method for comparison. This is because the problem addressed in our paper presents the following three challenges: 1. The network size is varying; 2. The networks at different time points exhibit high correlation; 3. At the time points after the change, the network is generated by a mixture of multiple graphon functions. Most existing methods with theoretical guarantees typically require one of the following conditions: 1. The node labels across different time points must match; 2. The network must be generated by a single graphon function; 3. The network sequence must be independent or stationary. As we mentioned in Section 2.2, these methods are not directly applicable to our problem. We will clearly state this in the introduction. 2.**Regarding the scale of the experiments and a real data example.** **Clarification:** Thank you for your question. We have added a larger real data example to demonstrate the effectiveness of our method on real-world data. We kindly refer you to Reviewer pfb1's first question for details. Additionally, we would like to point out that many similar studies in the literature adopted experiments with network sizes in the range of a few hundred, see, for example, [1,2]. 3.**Regarding the essential references.** **Clarification:** Thank you for your suggestion. Our paper lies at the intersection of graphon theory, dynamic network and change-point detection, thus, we focus on analyzing and comparing closely related works. 
As for the general literature on graphon representations used for graph classification, generative modeling, and large-scale network learning, as well as the use of deep learning, probabilistic models for change-point detection, we will provide a brief discussion in the appendix. 4.**Regarding the motivation and an illustrative example.** **Reply:** Thank you for your question. We have added a real data example (see the response to question 2), which helps illustrate the motivation behind our paper and serves as a concrete example. 5.**Regarding the discussion on alternative approaches for dynamic graph modeling and change detection.** **Reply:** Thank you for your question. Due to the vast literature and space limitations, we are unable to cover all the details. However, we will add some review papers to help readers gain a better understanding of the related background. 6.**Regarding the discussion on assumptions.** **Clarification:** Thank you for your question. We kindly point out that we provide some comments after our theorem (see page 7). These assumptions and discussions are standard in the change-point literature (see, e.g., [2]) and are generally accepted (see, for example, Reviewer EyQT’s second paragraph on section "Theoretical Claims": "they are clear in the discussion: condition (6) is essentially a high-level requirement that jump size × sqrt(interval length) exceeds noise, a familiar trope in change detection"). Therefore, we believe that our discussion is generally sufficient. [1] Peel, L., & Clauset, A. (2015). Detecting change points in the large-scale structure of evolving networks. In Proceedings of the AAAI conference on artificial intelligence (Vol. 29, No. 1). [2] Wang, D., Yu, Y., & Rinaldo, A. (2021). Optimal change point detection and localization in sparse dynamic networks. The Annals of Statistics, 49(1), 203-232.
Summary: The authors propose an extension of graphons to model growing networks, where nodes are added over time and each new node forms connections to existing nodes. To this end, the authors define a so-called Graphon Attachment model, where starting from an initial graph new nodes iteratively generate undirected edges to existing nodes with a probability given by a function that depends on uniformly drawn real-valued node features. This model mimics existing uniform or preferential attachment models for growing networks, but allows modulating attachment probabilities based on a customizable graphon function. The authors use this model to formulate and address the change point detection problem in growing networks. The proposed approach WESUM (which is an extension of CUSUM) is evaluated in a set of four synthetically generated growing networks with artificially introduced change points. Claims And Evidence: The authors' strong claims regarding the real-world applicability of the proposed method, e.g. in online social networks or citation networks and its "wide-ranging implications across fields such as epidemiology, finance and the social sciences", are not substantiated by the experimental evaluation, which is limited to small-scale synthetically generated graphs. Methods And Evaluation Criteria: I am concerned that the paper does not evaluate the proposed method against existing methods for change point detection, which makes it hard to judge its contributions compared to the state-of-the-art. Moreover, the experimental evaluation is limited to rather small synthetic graphs and there is no analysis in real-world networks. Theoretical Claims: I did not perform a detailed check of the theoretical claims. Experimental Designs Or Analyses: I checked the soundness of the experimental analysis; see my comments below. Supplementary Material: I checked the supplementary material, specifically the description of the model parameters used in the experimental evaluation.
Relation To Broader Scientific Literature: The paper is broadly related to graphon models, network growth models studied in network science, as well as existing techniques for change point detection in time series (on graphs). Given the broad existing literature in these three areas, I found the reference list rather limited, which is also due to the fact that there is no related work section that works out the research gap addressed by this work. In particular, there are closely related works on modelling growing networks and change point detection in dynamic networks that have not been discussed (see comments below). Essential References Not Discussed: The proposed graphon attachment model is closely related to the fitness-based model for scale-free networks that has been proposed in: G Caldarelli et al: Scale-Free Networks from Varying Vertex Intrinsic Fitness, PRL, 2002. It is crucial that the authors discuss how their paper relates to this work. Moreover, other well-known works have previously studied (heuristic) methods to detect change points in general dynamic networks or - more generally - explored topological consequences of different growth models, e.g. Sun et al.: GraphScope: parameter-free mining of large time-evolving graphs, KDD 2007 Rossi et al.: Modeling dynamic behavior in large evolving graphs, WSDM 2013 L Peel and A Clauset: Detecting change points in the large-scale structure of evolving networks, 2014 J Leskovec et al.: Graph evolution: Densification and shrinking diameters, TKDD 2007 The problem and method considered in the present work should be contrasted to those earlier works. Other Strengths And Weaknesses: The theoretical approach to link subgraph count statistics to changes in the graphon function in section 2.1 is interesting and seems to be a contribution of this work.
Other Comments Or Suggestions: I am a bit puzzled by the guiding question of the paper, which is stated in section 2: "Do structural changes in the function h_{T,t} result in shifts in the stochastic behavior of G_t and vice versa?". I do not think that this is an interesting research question, since it is h_{T,t} that defines the connection probability in the graph, which thus clearly affects the "stochastic behavior" of the graph. Also, the question *how* differences in the connection probability function (e.g., in the fitness model, see above) affect the connectivity patterns of growing networks has been studied extensively both analytically as well as experimentally in the network science literature. I would thus recommend reformulating the guiding research question of the paper. - The authors seem to confuse some terms from the network science literature. They state that GAN-M allows generating networks with heavy-tail degree distributions that "inherit small-world properties". However, the term small-world networks refers to networks that combine a small diameter or avg. shortest path length with a larger than expected clustering coefficient (see Watts and Strogatz, 1998), which is neither an implication of the heavy-tail degree distribution nor can it be reproduced by the preferential attachment model. Questions For Authors: - The proposed model seems to be related to the fitness model proposed by Caldarelli more than 20 years ago. Here each node is assigned a real-valued fitness score (similar to the values U_i in the proposed model) and the link probability between nodes is calculated based on a symmetric function f(x,y) that takes into account the fitness scores of node pairs x,y. This paper has analytically studied the conditions for the distribution of fitness scores under which scale-free degree distributions (similar to those in the BA model) arise.
Similar to the present work, this approach also generalizes a simple random graph model, which corresponds to a constant function f(x,y) that yields a single connection probability p. I would suggest that the authors discuss this in the paper. - Related to the previous point, I cannot follow the claim that the proposed graphon model could be used to mimic the preferential attachment rule that is the basis of the BA model. A key aspect of this model is a positive feedback of node degrees, i.e. each node forming a link to a node x will further increase the probability that subsequently added nodes form an edge to x. Since the attachment probability given by the graphon exclusively depends on a real value uniformly drawn at the "birth time" of a node, such feedback is impossible for the proposed model. The reference to Borgs et al., 2014 does not help to answer this question. Could the authors explain this? - The previous point is crucial since in the introduction the authors state that their framework provides a "unified approach to analyzing networks, addressing both static and dynamic regimes, and opens the door to modeling a broader range of real-world applications, such as evolving citation networks or online social platforms.". In fact, it seems that the proposed graphon attachment model would not be suitable to model any growing network that is influenced by positive feedback of node degrees (which is the case for many real-world networks). Please clarify this. - Why is there no comparison to existing methods for change point detection in growing graphs, e.g. those that I mentioned above? Do you address a novel problem that cannot be addressed with any existing technique? If so, you should clearly state this in the introduction. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort you dedicated to reviewing our manuscript and providing insightful feedback. We are grateful for your recognition of the interest and soundness of our work. In the following, we will address the questions you raised. 1.**Regarding the comparison with existing methods.** **Clarification:** Thank you for your question. We would kindly point out that we are unable to find a fair method for comparison due to our innovative formulation. For more details, we kindly refer you to Reviewer VGrc's first question. 2.**Regarding a real data example.** **Clarification:** Thank you for your question. We have added a larger real data example and we kindly refer you to Reviewer pfb1's first question for details. 3.**Regarding the connection to fitness-based models.** **Reply:** Thank you for your question. After careful examination, we found that the fitness model in G. Caldarelli et al., 2002 shares certain connections with the graphon model in our paper. The key difference lies in that, in the fitness model, the link function can depend not only on the fitness values of nodes $i$ and $j$ but also on additional factors, such as the maximum fitness in the entire network or a threshold related to network size. This additional dependence is what enables the fitness model to generate power-law degree distributions. In contrast, within the graphon framework, power-law degree distributions can be achieved through unbounded graphon functions, as discussed in Borgs et al., 2014. Despite this distinction, the underlying ideas are similar. Meanwhile, we would like to mention that the graphon function can be viewed as a graph limit (Lovász, 2012). This serves as a key reason for grounding our model within graphon theory. In summary, we appreciate your insightful observation and will incorporate a discussion on the connection between our model and the fitness model into the final manuscript. 
4.**Regarding the essential references.** **Reply:** Thank you for your suggestion, which enriched the comprehensiveness of our reference. We will add a brief discussion of these papers into our manuscript. 5.**Regarding "Do structural changes in the function $h_{T,t}$ result in shifts in the stochastic behavior of $G_t$ and vice versa?".** **Clarification:** We must kindly point out that your statement - "since it is $h_{T,t}$ that defines the connection probability in the graph, which thus clearly affects the 'stochastic behavior' of the graph" - is not entirely accurate. In fact, there exist cases where $h_{T,t}$ changes, yet the stochastic behavior of the graph remains unchanged. Examples of such cases can be found in Remark 2.2. This is precisely why we define the change-points based on a non-zero cut norm (see also Lovász (2012)), rather than simply setting $h_{T,t} \neq h_{T,t+1}$. In this paper, our objective is to detect changes in the stochastic behavior of the graph. 6.**Regarding "inherit small-world properties".** **Reply:** Thank you for your question. Considering Watts and Strogatz (1998), we have decided to remove this phrase from the paper to avoid any potential misunderstanding. We appreciate your insightful comment. 7.**Regarding the connection to the preferential attachment model and the "positive feedback" mechanism.** **Clarification:** Thank you for your question. We first kindly clarify that our model is grounded in graphon theory rather than designed to mimic the preferential attachment model. In fact, our model (including the previously mentioned fitness model) is fundamentally different from the preferential attachment model (see also [1]). Our framework is broad enough to encompass standard random graph models (e.g., SBM, RDPG) while also generating networks with power-law degree distributions. 
The "positive feedback" mechanism is one of many important network mechanisms, but there are also networks that possess characteristics where positive feedback seems inappropriate (see also [2]). While our model has the potential to reflect positive feedback on node degrees (e.g., by introducing dependence between edges), this is beyond our scope and will be a promising future work. [1] Nguyen, K., & Tran, D. A. (2011). Fitness-based generative models for power-law networks. [2] Broido, A. D., & Clauset, A. (2019). Scale-free networks are rare.
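As background for the cut-norm criterion in point 5 above: the standard cut norm from graph limit theory (Lovász, 2012), for a bounded symmetric kernel $W$ on $[0,1]^2$, is

```latex
\|W\|_{\square} \;=\; \sup_{S,\,T \subseteq [0,1]}
  \left| \int_{S \times T} W(x, y) \, \mathrm{d}x \, \mathrm{d}y \right|,
```

so under this criterion a time $t$ qualifies as a change-point only when $\|h_{T,t+1} - h_{T,t}\|_{\square} > 0$; modifications of $h_{T,t}$ on measure-zero sets, for instance, change the function but not the stochastic behavior of $G_t$. This display shows only the standard norm; the paper's exact criterion (Remark 2.2) should be consulted for whether measure-preserving relabelings are additionally quotiented out via the cut distance.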
Summary: This work proposes a methodology for learning structural changes in evolving networks over time. The approach uses subgraph counts—frequencies of substructures such as triangles and stars—to capture shifts in network topology, within a proposed framework called GAN-M. ## update after rebuttal The authors provided two useful examples in their rebuttal. I remain positive on this work and my original review. Claims And Evidence: Yes Methods And Evaluation Criteria: It is difficult to evaluate such dynamic approaches within the scope of publicly available large-scale data; therefore, IMHO, the current evaluation is good enough. Theoretical Claims: No Experimental Designs Or Analyses: It is difficult to evaluate such dynamic approaches within the scope of publicly available large-scale data; therefore, IMHO, the current evaluation is good enough. Supplementary Material: No Relation To Broader Scientific Literature: The discussed problem of detecting changes in the graph is interesting and connects well to recent literature on dynamic graphs. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: One thing I am missing is the final objective of detecting the changes, e.g., fraud detection. What is the formulation of a task where one wants to "automatically" detect changes in the graph? The changes themselves, I assume, are not of interest, but rather whether they are correlated with some label. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for taking the time and effort to review our manuscript and for providing positive feedback. We appreciate your recognition of the interesting nature of our work. Below, we address the questions you raised. 1. **Regarding the final objective of detecting the changes.** **Reply:** Thank you for your question. The detection of change-points provides valuable information about the dynamics of the system, which supports more precise piecewise modeling, retrospective analysis, and other applications. **To better answer your question, we first give two motivating examples and then provide a real-data example for further illustration.** The first example is the citation network, where two nodes (papers) are connected when there is a citation relationship between them. It is a growing network that reflects the academic community. A change-point could indicate a shift in the research landscape, which might result from a groundbreaking paper that reshapes the direction of research in a field, or from changes in legislation and regulations affecting the academic community. Our second example is the email network constructed by connecting two nodes (individuals) if they have exchanged emails. This network represents communication relationships within an institution or a company, and it grows continuously as the number of individuals and emails increases. Detecting jump points in an email network can reveal changes in communication patterns, which may be caused by organizational restructuring, significant events, or the retirement of influential individuals. In the following, we provide a real-data example. The email-Eu-core network data, previously used in [1], was constructed using email data from a large European research institution. The dataset captures all emails between members over a period of 803 days, involving 986 individuals (nodes).
In our analysis, individuals are sequentially added to the network based on the time of their first email connection with colleagues. Two individuals are connected in the network if they have an email interaction between them. From this, we can observe that the dataset closely aligns with the proposed model. Both exhibit a growing number of nodes, and incoming nodes have a likelihood of establishing connections with existing ones. The network exhibits substantial variability in node degrees. Specifically, the maximum degree is 345, the mean degree is 22, and the minimum degree is 1. This highlights the heterogeneous nature of the nodes. For the threshold, we set $\tau = \max_{j=1,\dots,T-h} \max_{j<t<j+h} |\tilde{X}_{j,j+h}^t(H)| \log(T)$, with $h=6\log(T)$. We kindly refer you to our Section 4 for more details regarding choosing the threshold. We applied our algorithm using subgraph patterns, including line segments, triangles, and rectangles. For subgraph line segments, three change points were detected at nodes 166, 422, and 759. For subgraph triangles, two change points were identified at nodes 432 and 795. Similarly, for rectangles, two change points were detected at nodes 434 and 795. The plots of counted subgraphs, including line segments, triangles, and rectangles, along with the jump point detection results, are presented in the following three figures available at the provided link: https://imgur.com/a/ib3Sx8H. These plots provide valuable insights into the observed shifts. For triangles and rectangles, the rate of subgraph count growth before the first change point is notably higher than after it. This likely reflects an initial surge in network activity, where frequent interactions and tasks lead to a denser connection structure in the early stages. The second change point, occurring at node 795, marks a significant increase in subgraph counts.
This shift may correspond to the entry of a key figure into the network, altering connectivity dynamics and driving the observed structural change. For line segment subgraphs, the last two detected change points closely align with those identified using triangles and rectangles, demonstrating the robustness of our method to subgraph selection. However, an additional change point is detected at node 166 when using line segments. This provides a more detailed view of early-stage connectivity changes, supporting our claim that subgraphs with fewer nodes can be particularly effective if they capture critical structural shifts. [1] Ashwin Paranjape, Austin R. Benson, and Jure Leskovec. "Motifs in Temporal Networks." In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining, 2017.
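The subgraph-count trajectories described in this rebuttal can be reproduced in miniature. The sketch below is illustrative only: the toy independent-attachment rule and the reading of "line segments" as single edges are assumptions, not the GAN-M model itself. It tracks edge and triangle counts as nodes join a growing graph, producing exactly the kind of monotone count series the change-point analysis scans.

```python
import random
from itertools import combinations

def count_edges(adj):
    # Each undirected edge appears in two adjacency sets.
    return sum(len(nbrs) for nbrs in adj.values()) // 2

def count_triangles(adj):
    # A triangle is a set of three mutually adjacent nodes.
    return sum(1 for u, v, w in combinations(adj, 3)
               if v in adj[u] and w in adj[u] and w in adj[v])

def grow_network(n_nodes, p=0.3, seed=0):
    # Toy growth rule (an assumption): each arriving node links to each
    # existing node independently with probability p.
    rng = random.Random(seed)
    adj = {0: set()}
    edge_counts, tri_counts = [], []
    for new in range(1, n_nodes):
        adj[new] = set()
        for old in range(new):
            if rng.random() < p:
                adj[new].add(old)
                adj[old].add(new)
        edge_counts.append(count_edges(adj))
        tri_counts.append(count_triangles(adj))
    return edge_counts, tri_counts

edge_counts, tri_counts = grow_network(30)
```

Because edges are only ever added, both series are nondecreasing; a change in the attachment probability would show up as a kink in their growth rate, which is what the detection procedure looks for.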
Summary: The authors introduce the Graphon Attachment Network Model (GAN-M), a new theoretical framework for evolving graphs that integrates graphon theory with network growth dynamics. In GAN-M, nodes arrive sequentially and form edges with existing nodes according to a time-dependent graphon function $h_{T,t}(x,y)$. The paper’s theoretical contribution is a change-point detection methodology for GAN-M: it leverages subgraph counts (frequencies of motifs like edges, stars, triangles) as time-series signals to detect structural changes in the underlying graphon. The authors prove that the expected count of any fixed subgraph evolves as a polynomial in time within each phase where the graphon is static. The paper also proposes a Weighted CUSUM (WESUM) statistic tailored to handle the polynomial trends and strong temporal dependencies in subgraph count data. Finally, the authors propose an efficient multi-scale detection algorithm to estimate both the number and locations of change-points, with theoretical guarantees on its performance. In summary, the paper provides a strong theoretical framework for modeling dynamically evolving networks via GAN-M, and offers provably consistent methods to learn if and when the underlying graphon function changes over time. Claims And Evidence: The paper’s theoretical claims are well-articulated and generally well-supported by proofs. The central claim is that the proposed WESUM-based procedure can consistently detect change-points in GAN-M. This is shown in Theorem 3.1, which under certain conditions guarantees that with high probability the estimated number of change-points $\hat K$ equals the true number $K$, and each estimated change time is within a small error $\epsilon$ of the truth.
The conditions include a signal-to-noise ratio (SNR) assumption requiring a minimum jump size in the graphon’s subgraph densities and a minimum spacing between changes relative to the time horizon, thus ensuring changes are distinguishable from random fluctuations. These conditions are standard in change-point analysis and are well-justified here given the strong temporal dependencies. The proofs of theoretical results are provided in a detailed appendix, and they appear sound. For instance, the proof of Theorem 3.1 handles the dependency across time by introducing the $T^{|V(H)|-1/2}$ term in the SNR condition, reflecting the variance inflation due to cumulative network growth. The authors reference established results (e.g., building on techniques from Fan & Wu (2024) and Yu et al. (2022)) to support their argument. I haven't found any gaps in these proofs. Also, all key lemmas (such as Lemma 2.5 on the polynomial form of subgraph counts) are stated clearly and proved in the supplementary material. The assumptions made (e.g., piecewise-constant graphon segments, no edge deletions, bounded number of change-points) are reasonable for theory development, though they may restrict applicability slightly (see Weaknesses). Methods And Evaluation Criteria: The framework is well-suited to the problem of dynamic graphon modeling and change detection. By using graphon theory, the authors describe large random graphs and connect subgraph counts to underlying graphon parameters (homomorphism densities). I find the use of subgraph counts as summary statistics a smart choice: subgraph frequencies (like edges, wedges, triangles) are sensitive to structural patterns and have known expectations in graphon models, which makes them effective signals for detecting changes in $h_{T,t}$. The choice to use a multi-scale interval scanning algorithm (Algorithm 1’s random interval distillation) as the search procedure is also well-motivated.
It resembles Wild Binary Segmentation or seeded segmentation techniques designed for detecting multiple changes efficiently. This approach provides a nice way to search for breaks without brute-force checking every time split. The evaluation criteria for success are clearly defined in theory (correct recovery of change-points and bounded localization error) and align with how change-point methods are typically assessed. Theoretical Claims: The paper’s theorems and lemmas appear to be correct and well-proved. Lemma 2.5 is based on careful reasoning about the attachment process and seems correct (the result aligns with intuition, since adding $n$ new nodes to a graph should produce subgraph counts that are polynomial in $n$). Theorem 3.1 is the main theoretical result, and its statement is plausible and consistent with known results in change-point theory. It effectively gives consistency (no false/missed change-points in the large-$T$ limit) and a convergence rate for localization error. The proof (provided in the appendix) is quite involved but builds on established techniques: it decomposes the problem into showing that with high probability each true change-point will be triggered by the WESUM statistic on some interval, and false alarms are controlled by the threshold choice. The authors cite and adapt the proof strategy from recent literature (e.g., Fan & Wu, 2024; Yu et al., 2022) to handle the dependencies. I did not find any obvious mathematical errors in the statements. The conditions (i) and (ii) in Theorem 3.1 are a bit complex, but they are clear in the discussion: condition (6) is essentially a high-level requirement that jump size × sqrt(interval length) exceeds noise (a familiar trope in change detection). One minor point is that the theory assumes the number of change-points $K$ is not too large (they mention the case “when $K$ is bounded” in commentary).
This is common in proving consistency, but it would be interesting to know if the result could extend to growing $K$ (e.g., $o(T)$) – the paper does not explicitly address that scenario. Another subtlety is the selection of the threshold $\tau$ in Algorithm 1: Theorem 3.1 assumes $\tau$ falls in a certain range dependent on unknown constants, but in practice one must set $\tau$ without knowing $\kappa_k$. The authors might rely on theory to choose $\tau$ conservatively, but a more data-driven threshold choice could be discussed. Despite these minor considerations, the theoretical claims are solid. All major results are either proved or referenced, and I did not spot any gaps. The appendix appears to contain full proofs (the complete proof of Theorem 3.1 is given, building on Lemma C.1 from a cited source for the random interval coverage argument). Experimental Designs Or Analyses: The paper includes simulation experiments (Section 4) that empirically validate the theoretical claims. If anything, adding some real-world data or at least a synthetic example with a known ground truth graphon change could further strengthen the paper. Supplementary Material: The appendix and supplementary materials are thorough. The authors include pseudocode for their algorithms (Algorithms 1 and 2) and detailed descriptions of the simulation settings in Appendix A. Relation To Broader Scientific Literature: This work lies at the intersection of graph theory (graphons, network models) and statistical change-point detection. Essential References Not Discussed: One omission of references is the work by Peel and Clauset (2015) on detecting change-points in the large-scale structure of evolving networks. Other Strengths And Weaknesses: Check my previous comments. Other Comments Or Suggestions: 1. It would be valuable if the authors include a brief discussion on choosing the subgraph statistic $H$. 2.
If possible, the authors should demonstrate the method on a small real dynamic network dataset. Questions For Authors: q1. How sensitive is the change detection performance to the choice of subgraph $H$? In particular, could there be a situation where a change in the graphon affects certain motifs but not others? q2. The theoretical results assume a threshold $\tau$ that depends on unknown model parameters. In practice, how do the authors suggest setting $\tau$? Code Of Conduct: Affirmed. Overall Recommendation: 4
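The random-interval search this review compares to Wild Binary Segmentation can be sketched with a plain CUSUM contrast maximized over randomly drawn intervals. This is a generic mean-shift toy on a synthetic series — not the paper's WESUM statistic, which additionally handles polynomial trends — but it shows the scanning idea: draw many intervals, evaluate every split inside each, and keep the best-scoring split.

```python
import random

def cusum(x, s, e, t):
    # Standard CUSUM contrast for a split at t within the interval x[s:e].
    n_l, n_r = t - s, e - t
    mean_l = sum(x[s:t]) / n_l
    mean_r = sum(x[t:e]) / n_r
    return ((n_l * n_r / (n_l + n_r)) ** 0.5) * abs(mean_l - mean_r)

def best_split_over_random_intervals(x, n_intervals=200, seed=1):
    # Wild-binary-segmentation-style search: maximize CUSUM over random
    # intervals rather than only over the full series.
    rng = random.Random(seed)
    best = (0.0, None)
    for _ in range(n_intervals):
        s = rng.randrange(0, len(x) - 2)
        e = rng.randrange(s + 2, len(x) + 1)
        for t in range(s + 1, e):
            stat = cusum(x, s, e, t)
            if stat > best[0]:
                best = (stat, t)
    return best  # (max statistic, estimated change location)

# Piecewise-constant toy signal with a mean jump at index 60.
rng = random.Random(0)
x = [rng.gauss(0.0, 1.0) for _ in range(60)] + [rng.gauss(3.0, 1.0) for _ in range(60)]
stat, t_hat = best_split_over_random_intervals(x)
```

In a full procedure, splits whose statistic exceeds a threshold $\tau$ are accepted and the search recurses on the flanking segments; here a single pass suffices to localize the one jump.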
Rebuttal 1: Rebuttal: We sincerely appreciate your review of our manuscript and your positive evaluation of our work. We are grateful for your recognition of the theoretical contributions of our paper, including your comments and affirmations regarding our theorem conditions, proofs, and underlying ideas. Additionally, we appreciate your acknowledgment of the soundness of our methodology and the validity of our simulation approach, as well as your careful examination and insightful feedback on our proofs. We fully acknowledge and value all these aspects. Furthermore, we thank you for the constructive suggestions for improvement, and we discuss them in detail below. 1. **Regarding the allowance for the divergence of $K$.** **Clarification**: Thank you for your insightful question. When $K$ diverges, our theoretical framework *remains valid without any modifications*. Our condition (6), which is a signal-to-noise ratio requirement, does not necessitate the boundedness of any term in it; rather, it only requires that the combined terms collectively satisfy the signal-to-noise ratio condition. For instance, in a classical case where change-points are equally spaced and the jump magnitude is bounded away from zero, our framework allows $K$ to diverge at a rate of $T^{1/(|V(H)|+0.5)}$, up to logarithmic factors. We will incorporate this discussion into the paper to provide a clearer commentary. Thank you again for your valuable insight. 2. **Regarding the threshold selection challenge.** **Reply:** Thank you for the opportunity to clarify the challenge of choosing the threshold. As stated in the last paragraph of page 7, our choice was guided by theoretical considerations that hold asymptotically under standard scenarios. Empirically, it has shown satisfactory performance. However, we acknowledge that selecting an appropriate threshold remains an active area of research in change-point detection (e.g., [1,2,3]). 3.
**Regarding the omission of references.** **Reply:** Thank you for your valuable suggestion. We will add this reference to our manuscript to enhance the comprehensiveness of our bibliography. 4. **Regarding the choice of the subgraph statistic $H$, its sensitivity, and the possibility of different motifs being affected differently by changes.** **Reply:** From our theory, it is evident that a smaller size for $H$ leads to a better detection bound. Considering that subgraph counts serve as the moments of the network, this selection method also aligns with the lower-order principle in moment estimation. Therefore, we recommend using smaller subgraphs, such as line segments or triangles, for change-point detection, which also aligns with the findings in our simulations. Furthermore, we can integrate information from various subgraphs, as described in Remark 3.2. Regarding sensitivity, after further simulations, we found that when the size of the subgraph $H$ remains the same (although its specific shape may vary), our method exhibits robustness. Moreover, when the size of the subgraph increases, a larger sample size is needed to achieve similar performance, which is also consistent with our theoretical results. For instance, in Scenario 3 (SBM model), we tried five subgraphs: line segments, triangles, connected triples, rectangles, and closed paths of length 4. It turned out that the results for the triangles and the connected triples are similar, while those for the rectangles and the closed paths of length 4 also exhibit similarity. Additionally, the triangle slightly outperforms the rectangle. We will add these discussions to the appendix of our paper. Thank you for your valuable feedback. Finally, there are cases where some motifs are affected while others are not. This is a common limitation of statistical methods based on subgraphs (for example, a similar issue occurs in [4]).
However, when integrating information from multiple motifs (see Remark 3.2), as long as one of these motifs is affected, our method is capable of detecting the change-point. Therefore, in practice, this limitation can be alleviated. Thank you for your valuable suggestions, which help to improve our paper. 5. **Regarding a real-world data or synthetic example.** **Reply:** Thank you for your suggestion. We have applied our method to a real dataset. We kindly refer you to our response to the first question of Reviewer pfb1 for details. [1] Zhao, W., Zhu, X., & Zhu, L. (2023). Detecting multiple change points: The PULSE criterion. [2] Baranowski, R., Chen, Y., & Fryzlewicz, P. (2019). Narrowest-over-threshold detection of multiple change points and change-point-like features. [3] Cho, H., & Fryzlewicz, P. (2024). Multiple change point detection under serial dependence: Wild contrast maximisation and gappy Schwarz algorithm. [4] Shao, M., Xia, D., Zhang, Y., Wu, Q., & Chen, S. (2022). Higher-order accurate two-sample network inference and network hashing.
Adaptive Learn-then-Test: Statistically Valid and Efficient Hyperparameter Selection
Accept (spotlight poster)
Summary: This paper makes progress on the problem of hyperparameter selection with statistical guarantees. The work provides a new, adaptive algorithm for hyperparameter selection that presents an extension of the Learn-Then-Test (LTT) framework that was previously introduced. The LTT framework uses p-values of hypotheses corresponding to hyperparameter selections to obtain high-probability bounds on the selection of reliable hyperparameters. The present manuscript extends this framework by utilizing e-values rather than p-values. E-values are used to accumulate evidence measures multiplicatively, which allows for combining evidence across studies, or in this case, across sets of collected data. The paper provides some theoretical intuition first and demonstrates the utility of the proposed approach in offline-to-online RL and prompt engineering. Claims And Evidence: The main contribution of this paper is a novel algorithm for hyperparameter testing called adaptive LTT (aLTT). The main claim about this algorithm is that it guarantees rigorous control over the FWER and FDR metrics while reducing the number of testing rounds compared to the non-adaptive version of the algorithm. To provide evidence for this claim, the work outlines the mathematical properties of e-values and then provides an application study which includes experiments in offline RL as well as prompt engineering. Even though the paper claims that the algorithm provides guarantees, there is no formal proposition statement that would quantify that. In line 246R, the text states that “if an FWER-controlling method [...] is used, the resulting set [hyperparameter set] is (α, δ)-FWER-controlling”. I believe it would be beneficial to state this as an explicit theorem about the algorithm and provide a proof, even if this proof simply refers to properties or propositions analyzed by other works.
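For reference, the two standard selection rules behind the FWER and FDR guarantees discussed here — Bonferroni for $(\alpha,\delta)$-FWER control and Benjamini–Hochberg for $(\alpha,\delta)$-FDR control — can be sketched as follows. These are textbook procedures applied to per-hyperparameter p-values, not the paper's exact implementation:

```python
def bonferroni_select(p_values, delta):
    # FWER control: keep hypothesis i iff p_i <= delta / N.
    n = len(p_values)
    return [i for i, p in enumerate(p_values) if p <= delta / n]

def benjamini_hochberg_select(p_values, delta):
    # FDR control: find the largest rank k with p_(k) <= k * delta / N,
    # then keep the k hypotheses with the smallest p-values.
    n = len(p_values)
    order = sorted(range(n), key=lambda i: p_values[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank * delta / n:
            k = rank
    return sorted(order[:k])

# Illustrative p-values, one per candidate hyperparameter.
p = [0.001, 0.2, 0.004, 0.9, 0.03]
fwer_set = bonferroni_select(p, delta=0.05)          # [0, 2]
fdr_set = benjamini_hochberg_select(p, delta=0.05)   # [0, 2, 4]
```

As expected, the FDR-controlling set is larger: trading family-wise error for false discovery rate admits more candidates, which is one reason the paper reports both regimes.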
For the experimental section, the claim that aLTT requires fewer rounds is supported in both applications in Figures 1-4. In both cases, the algorithm also achieves a higher true positive rate, indicating it selected fewer hyperparameters that have poor performance. What would be nice to see, and is missing, is a measure of the achieved reliability. Methods And Evaluation Criteria: The proposed benchmarks are both problems where hyperparameter tuning is relevant. However, in both cases, the hyperparameter problem becomes significantly more difficult when one does not have access to an online version of the problem. This is not being addressed, and it seems that unlimited access is assumed. As a result, e.g., in Figure 2, there are 5000 rounds. If each round collects only a single trajectory of, say, 500 steps in Half-Cheetah, one gets access to 2.5 million environment steps. This is sufficient to train online agents rather than tune hyperparameters on offline data. Theoretical Claims: As mentioned above, the theoretical claims are solely in prose, and I think a formal proposition would round out Section 4 nicely. Since things are only in prose, there are no proofs to evaluate. I am not very familiar with this type of work, but I am willing to believe that the algorithm is correct to the extent that it follows from direct application of e-value properties and corresponding optimization algorithms. Experimental Designs Or Analyses: The experimental design is mostly reasonable, I believe. The paper evaluates the TPR, which seems to be the key indicator relevant to the approach. However, I think there is a lack of simple baselines to put things into perspective. For instance, how well could I do with a simple grid search? What happens if I use a very small $\varepsilon$ such as 0.01?
While the main claims are supported, I think more ablations to understand the relative behavior of the approach and to put it into perspective with other approaches in the domain would be quite useful. It is also not clear to me how the choices for $\alpha$ were made. $0.57$ in the RL settings seems quite arbitrary. An ablation varying $\alpha$ might be useful to remove any concerns about the choice of this value. Supplementary Material: I looked at some of the experiments in the Appendix to get a better understanding of the individual performance. Relation To Broader Scientific Literature: It seems to me that the closer scientific literature has been discussed. However, the introduction in line 10R states "hyperparameter optimization is typically modeled as a bandit problem". This is a very broad statement. There are many other approaches for hyperparameter optimization besides bandits. There is a lot of work on Bayesian optimization for adaptive hyperparameter selection and on evolutionary strategies. There is also work on PAC-Bayes hyperparameter selection. All these are not mentioned. Essential References Not Discussed: I’m not sufficiently familiar with the literature to make a statement here. Other Strengths And Weaknesses: Weaknesses: * It is unclear to me how this approach could handle multiple correlated hyperparameters. The experimental setting only considers a single hyperparameter to be searched over. In practice, the problem is much more high-dimensional. Other Comments Or Suggestions: I believe the text would benefit from a better description of e-values and e-processes. I had to conduct my own search to gain clarity on what these are. Specifically, in line 100L, the text states the key property of e-values is that they can be combined to obtain e-processes, which are then not explained. This section would benefit from conciseness about what the property is that makes this possible and an explanation of e-processes.
I think line 162L is supposed to be "with the expectation". Overall, I think this is a decently written paper that requires some fine-tuning here and there; thus, I am recommending weak accept. Questions For Authors: Q1. What would happen if I, for instance, applied a simple grid search and took an argmax? I might be able to achieve TPR=1, correct? Q2: Why is the reliability in line 313L approximated over a single rollout and not an expectation over multiple? That seems like an odd choice, especially for highly stochastic domains such as those considered in the paper, but maybe I am missing something. Q3. The work states that there are other methods without guarantees (line 010). Could you elaborate on why there are no comparisons against such methods? Code Of Conduct: Affirmed. Overall Recommendation: 4
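Since this review notes that e-values and e-processes go unexplained, a minimal sketch of the standard testing-by-betting construction may help. For bounded losses in $[0,1]$ and the null $H_0:\mathbb{E}[\text{loss}]>\alpha$, each factor $1+\text{bet}\cdot(\alpha-\text{loss})$ has expectation at most one under the null, so the running product is an e-process, and Ville's inequality turns its running maximum into an anytime-valid p-value. This is a generic construction, not necessarily the exact betting scheme used in the paper:

```python
def e_process(losses, alpha, bet=0.5):
    # Under H0: E[loss] > alpha, each factor 1 + bet*(alpha - loss) has
    # expectation <= 1, so the running product is an e-process. Keeping
    # bet in [0, 1/(1 - alpha)) guarantees every factor stays positive.
    e, running_max = 1.0, 1.0
    for loss in losses:
        e *= 1.0 + bet * (alpha - loss)
        running_max = max(running_max, e)
    return e, running_max

def anytime_p_value(losses, alpha, bet=0.5):
    # Ville's inequality: P(sup_t E_t >= 1/p) <= p under the null, so
    # 1 / max_t E_t is a p-value that is valid at any stopping time.
    _, running_max = e_process(losses, alpha, bet)
    return min(1.0, 1.0 / running_max)

# A hyperparameter whose observed losses sit well below alpha = 0.5
# accumulates multiplicative evidence against H0: E[loss] > 0.5.
losses = [0.1, 0.2, 0.05, 0.15, 0.1, 0.2, 0.1, 0.05]
p = anytime_p_value(losses, alpha=0.5)
```

The anytime validity is the key property: because the p-value holds at every time simultaneously, the testing loop may peek at the evidence, stop early, or adaptively reallocate samples without invalidating the guarantee.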
Rebuttal 1: Rebuttal: Thank you for your insightful comments. Below we address your questions point by point: **Claims And Evidence** - **Missing Theorem**: We will add the following theorem on the statistical validity of aLTT: “*Given a hyperparameter set $\Lambda$, a reliability level $\alpha \in [0,1]$, and an error level $\delta \in [0,1]$, aLTT with an FWER- or FDR-controlling selection rule returns a final prediction set $\hat{\Lambda}^{{\rm aLTT},T}$ that satisfies $(\alpha,\delta)$-FWER control or $(\alpha,\delta)$-FDR control, respectively.*” and provide a proof in the Appendix. - **Reliability of selected parameters**: Models in the predicted set have reliability above $\alpha$ with probability at least $1-\delta$, ensuring valid hyperparameter selection. However, in our experiments, realized performance often exceeds the target reliability ( https://ibb.co/CpGgwVtP ). We agree that reporting the best model’s reliability is insightful and will include it in the revised version. - **Methods And Evaluation Criteria**: Training an agent online is not always feasible due to safety concerns (e.g., robotic surgery). Alternatively, training with off-policy data requires accounting for covariate shift using propensity scores [1]. Even with known propensity scores, valid performance guarantees must be established via policy rollouts, as done in aLTT. [1] Uehara M, Shi C, Kallus N. A review of off-policy evaluation in reinforcement learning. **Experimental Designs Or Analyses**: - A single grid search has poor TPR and lacks FWER/FDR control, making it unsuitable for statistically valid hyperparameter selection. As an alternative, we benchmark validation-based selection, which tests hyperparameters on a grid and selects those with empirical risk below $\alpha$. Our experiments show that this approach results in high empirical FWER/FDR, making it unreliable.
- The reliability level $\alpha$ must be chosen by the user based on the desired level of reliability for the application. Accordingly, the choice of this parameter must be informed by some knowledge about realistic values for the average risk. In the RL example, one could use the off-policy data log to estimate a reasonable target reliability level $\alpha$. To evaluate the impact of $\alpha$, we perform an ablation study by varying its value ( https://ibb.co/xq5fxXTn ). **Relation To Broader Scientific Literature**: In the paper we mention various approaches based on Bayesian optimization [1-4]; however, we do not mention evolutionary strategies or PAC-Bayes methods. Thanks for pointing this out; in the revised paper we will add references to [5-8]. ( References: https://ibb.co/twNJDDCd ) **Weaknesses**: Our approach extends to multiple correlated hyperparameters by associating a multivariate parameter with a single hypothesis. Appendix B.2 presents a wireless resource allocation experiment where hyperparameters include the transmit power level and the scheduling policy, which are inherently correlated. Even in such cases, aLTT significantly improves testing efficiency and system performance. If space allows, we will move this discussion to the main text. **Clarifying e-values**: We acknowledge that readers unfamiliar with these concepts may need more guidance. In the revised version, we will explicitly direct them—already in the introduction—to Section 4.1, which explains e-values, e-processes, and their role in testing. **Questions For Authors:** - A simple grid search selecting the hyperparameter with the lowest empirical risk yields a maximum TPR of 1/|reliable hyperparameters|, which can be much lower than 1 when reliable hyperparameters are numerous. For instance, in our RL experiment, with nine reliable hyperparameters, grid search would achieve at most 0.11 TPR, far below aLTT’s 0.85.
Moreover, this approach lacks FWER/FDR guarantees, as shown in our experiment. - The reliability requirement is an expectation over multiple roll-outs. The definition in (15) is the reward for a single roll-out, while the expectation of (15) over the roll-out distribution defines the reliable hyperparameters, i.e., $\Lambda^{\rm rel}=\{\lambda\in\Lambda: R(\lambda)=\mathbb{E}_{P_Z}[R(\lambda,Z)]> \alpha\}$. We will clarify this in the main text as “*The reliability for a single roll-out is measured via the cumulative reward obtained by a policy $\lambda$ on the MDP $\mathcal{E}$, which is defined as…*” - We choose to compare aLTT only against methods that are statistically valid, as the goal of this paper is to perform ‘statistically valid hyperparameter selection.’ To illustrate that methods without formal guarantees can violate the target FWER and FDR levels, we consider a validation-based baseline that picks a hyperparameter if its empirical risk is below $\alpha$. Figures: RL experiment ( https://ibb.co/xq2KnrGH ) and the APE experiment ( https://ibb.co/QFxgkc0G ). --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for the proposed changes to the manuscript. I'm going to be trusting that these are included appropriately. Especially the experiment validating the parameter choice seems quite insightful. That being said, I'm happy to recommend acceptance. One caveat I would like to emphasize again is that I am not very familiar with the literature. Some reviewers have raised concerns here and I want to clearly state that my recommendation largely excludes an assessment of the relation to the literature. --- Reply to Comment 1.1.1: Comment: Thank you very much for your response and for taking the time to review our rebuttal.
We appreciate your positive feedback and we will make sure that the proposed changes and the additional material, including the experiment validating the parameter choice, are appropriately incorporated into the final version of the manuscript.
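The rebuttal's point that validation-based selection (keep any $\lambda$ whose empirical risk falls below $\alpha$) can violate the target FWER is easy to illustrate with a toy Monte Carlo. The setup below is a synthetic Bernoulli-loss example of our own, not the paper's benchmarks; it compares the naive rule against a Hoeffding upper bound with Bonferroni correction when every candidate is in fact unreliable:

```python
import random, math

def trial(rng, n_params=20, n_samples=50, alpha=0.5, delta=0.1):
    # All hyperparameters are unreliable: true risk equals alpha exactly,
    # so any selection is a false discovery.
    naive_fd = bonf_fd = False
    for _ in range(n_params):
        mean = sum(rng.random() < alpha for _ in range(n_samples)) / n_samples
        if mean < alpha:                       # naive validation-based rule
            naive_fd = True
        # Hoeffding upper confidence bound, Bonferroni-corrected over
        # the n_params simultaneous tests.
        ub = mean + math.sqrt(math.log(n_params / delta) / (2 * n_samples))
        if ub < alpha:
            bonf_fd = True
    return naive_fd, bonf_fd

rng = random.Random(0)
results = [trial(rng) for _ in range(200)]
naive_fwer = sum(r[0] for r in results) / len(results)
bonf_fwer = sum(r[1] for r in results) / len(results)
```

With 20 candidates each hovering at the boundary, the naive rule almost always flags at least one of them (empirical FWER near 1), while the corrected test stays below the target error level $\delta$.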
Summary: This paper extends the LTT strategy to accommodate adaptive MHT procedures. The resulting procedure is more data-efficient and can be used in situations where the loss function as a function of lambda is not a-priori known, but must be evaluated (with some non-negligible associated cost) on a per-lambda basis. Experiments are performed on offline RL via MuJoCo, prompt engineering, and wireless resource allocation (Appendix). Claims And Evidence: For the most part, yes. It is clear to me why this procedure works. However, why no main theorem/proof? To a reader who is an expert in both LTT and e-processes, it is clear that combining these tools leads to a stronger result than in the original LTT paper (accounting for temporal dependencies, allowing early stopping, etc.) But a theorem would substantially help to clarify this point and speak to the power of aLTT in comparison. And a proof, as well. Right now it feels like the paper is incomplete if there’s no theorem or proof of the main theoretical claim (that this is a valid risk-controlling procedure) even though I believe this to be true based on the arguments made in the text of the paper. Methods And Evaluation Criteria: I believe the method is very well evaluated. Theoretical Claims: Copy-pasting above: Why no main theorem/proof? To a reader who is an expert in both LTT and e-processes, it is clear that combining these tools leads to a stronger result than in the original LTT paper (accounting for temporal dependencies, allowing early stopping, etc.) But a theorem would substantially help to clarify this point and speak to the power of aLTT in comparison. And a proof, as well. Right now it feels like the paper is incomplete if there’s no theorem or proof of the main theoretical claim (that this is a valid risk-controlling procedure) even though I believe this to be true based on the arguments made in the text of the paper. Experimental Designs Or Analyses: Yes. Seems sound. Supplementary Material: Yes. 
Appendix B.2. Relation To Broader Scientific Literature: It may be a good idea to reference SelectiveNet: https://proceedings.mlr.press/v97/geifman19a At this point, it is known that the SelectiveNet procedure is not statistically correct. In other words, the guarantee does not hold. However, it does have a similar adaptive search through $\Lambda$-space idea to the one presented here, so I believe it is worth a cite. Essential References Not Discussed: See above: https://proceedings.mlr.press/v97/geifman19a Other Strengths And Weaknesses: I think the paper is very strong. The idea is truly great, and can really improve the statistical and computational efficiency of LTT. The experiments are fantastic, very strong, relevant, and modern. My main gripe is the lack of formal theoretical statement being made. It makes it unclear what new algorithm with accompanying theory the paper is proposing. Other Comments Or Suggestions: As stopping criteria, the authors suggest (1) stopping once $|\hat{\Lambda}^{\rm aLTT, t}| \geq d$ or (2) stopping once $t = t_{\rm max}$. Are these the most natural stopping criteria? I would have thought there would be a different criterion. For example, in some cases you may want the smallest or largest $\lambda$ satisfying the constraint of having a population risk less than alpha. Is there any way to show/argue that this strategy is tight? For example, what does it do in the monotone case? (Can it somehow reduce to RCPS, or something close to RCPS?) “potentially resulting in empty calibration sets” —> “potentially resulting in empty set of selected prompts $\hat\Lambda^{\rm rel}$.” Questions For Authors: No further questions. Great job, this is a wonderful paper! Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper and for your positive comments! Below, we address each comment point by point: - *"Why no main theorem/proof?..."*. - Thank you for the suggestion! In the revised version, we will add the following theorem on the statistical validity of aLTT: “Given a hyperparameter set $\Lambda$, a reliability level $\alpha \in [0,1]$, and an error level $\delta \in [0,1]$, aLTT with an FWER- or FDR-controlling selection rule returns a final prediction set $\hat{\Lambda}^{{\rm aLTT},T}$ that satisfies $(\alpha,\delta)$-FWER control or $(\alpha,\delta)$-FDR control, respectively.” and provide a proof in the Appendix. - *"It may be a good idea to reference selectivenet..."*. - Thanks for bringing this related work to our attention, we will include it in the related work. **Other Comments Or Suggestions** - We consider the stopping criterion $|\hat\Lambda^{\rm aLTT,t}| \geq d$ or $t=t_{\rm max}$ because we don’t make any assumption on the relation between the value of $\lambda$ and its risk (e.g. monotonicity as in RCPS). For example, in the case of prompt selection, there is no clear notion of a larger or smaller prompt. However, if some preference or information about the parameters is available, it is possible to include this information in the design of the stopping criterion. We will clarify this in the revised text. - Yes, assuming that the loss function is monotone, one can use aLTT in a similar manner to RCPS. The e-processes in aLTT can be used to obtain upper confidence bounds on the risk, which can then be used to implement the selection procedure as in RCPS. The validity would follow as in the RCPS paper. However, note that our paper operates in the most general setting, where we remain agnostic to any relationship between risk and hyperparameters. - Thanks for spotting the typo!
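For readers less familiar with testing by betting, the multiplicative wealth update behind the e-processes discussed above can be sketched as follows. This is an illustrative fixed-bet variant (the bet `v` and the toy losses are assumptions made here for the sketch); the paper itself relies on the adaptive hedged-capital bets of Waudby-Smith & Ramdas (2024).

```python
def eprocess_update(e, loss, alpha, v=None):
    """One multiplicative wealth update for testing H0: E[loss] >= alpha.

    Losses are assumed to lie in [0, 1]; keeping the bet v in
    [0, 1/(1 - alpha)] ensures the wealth can never go negative.
    A fixed bet is used here for simplicity only.
    """
    if v is None:
        v = 0.5 / (1.0 - alpha)  # conservative fixed bet (assumption)
    return e * (1.0 + v * (alpha - loss))

# Evidence against H0 accumulates when observed losses fall below alpha:
e = 1.0
for loss in [0.02, 0.05, 0.0, 0.1]:   # toy losses well below alpha = 0.2
    e = eprocess_update(e, loss, alpha=0.2)
# e > 1: the wealth grew, i.e. evidence that the true risk is below alpha
```

By Ville's inequality, under the null the probability that such a nonnegative wealth process ever exceeds $1/\delta$ is at most $\delta$, which is what makes the evidence anytime-valid.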
Summary: This paper essentially proposes the following method, as summarized in Section 2.3: Sequential and Adaptive Hyperparameter Selection. Given candidate hyperparameters $\Lambda = \lambda_1, \dots, \lambda_N$, for rounds $t = 1, \dots, T$, where $T$ is some stopping time: - Choose $\mathcal{I}^t \subset \Lambda$ based on an adaptive acquisition policy (that is, based only on information seen so far.) For each $\lambda_i^t \in \mathcal{I}^t$, sample new data $Z_i^t$ independent of the past. - Return a set $\hat \Lambda^{\textup{aLTT}, t}$ according to some decision rule on the data, which is an estimate of $\{\lambda_i \in \Lambda : E_{P_Z}[R(\lambda_i, Z)] \leq \alpha\}$. The rule is in the form of a multiple testing algorithm for the $N$ null hypotheses that $E_{P_Z}[R(\lambda_i, Z)] > \alpha$, which corrects for multiple testing over all of $\Lambda$. In this work, the authors are free to use any adaptive acquisition policy and stopping time $T$, due to the special form of the decision rule, which is based on multiple testing algorithms applied to e-processes. The theoretical observation is that the final set $\hat \Lambda^{\textup{aLTT}, T}$ has the risk controlling properties of the original LTT proposal, and the main, empirical observation is that it has much better sample efficiency, which is useful for scenarios where testing is costly. Claims And Evidence: In Section 4.3 it is claimed that $\hat \Lambda^{aLTT, T}$ is $(\alpha, \delta)$-FWER or FDR controlling as defined in Definition 2.1 and 2.2. No proof is provided, as it follows immediately from the anytime-valid properties of e-processes and their resulting p-values. Regarding this claim, however, it is important that the multiple testing procedure involves a correction over *all* of $\Lambda$ at each step, rather than just $\mathcal{I}^t$. But Section 2.3 states, as item 1, that $\mathcal{I}^t \subset \Lambda$ is selected for "testing". 
This is misleading; they are for sampling the $Z_i$'s, not for testing, as evidenced by the notation in Section 5, which refers to the p-values $1,\dots, N$. On the contrary, every parameter in $\Lambda$ is tested at every time step. On the empirical side, in Section 5 they show that their method indeed performs as expected w.r.t FWER and FDR, and has good performance in terms of some metric (true positive rate, length of the prompt) with few rounds of sampling required. These claims are well supported, though it should be mentioned in Figure 3 that the FDR of the FDR controlling method is nearly zero, while its FWER is less than $\delta$---a curious behavior which is *not* implied by the theory. However, the claim that aLTT is much better than LTT is at least partly unfair to LTT. In the experiment of Section 5.1, LTT is the same algorithm as aLTT at the final time step, where the stopping time $T = 5000$ is a fixed constant. However, even though LTT cannot use adaptive acquisition policies and stopping times (decreasing its power), it can take p-values as input that were not derived from e-processes (increasing its power). I suggest specific comparisons to LTT later. Methods And Evaluation Criteria: The original LTT motivation was to give a guarantee of absolute safety for high risk applications like medical imaging, which was why the FWER guarantee which appears in that paper has to hold in finite samples. In the RL and LLM examples, it's unclear why a finite-sample guarantee would be absolutely required. That caveat aside, the method itself and the FWER criteria makes sense for these problems, and the TPR and prompt length criteria of Section 5 seem like good measurements (though I would defer to the opinion of an applied MLer here.) Less clear to me is why the FDR criterion would make sense here---if $\alpha = 0.2$, then on average, 20% of the $\lambda$'s might be unreliable, which doesn't seem attractive if the goal is to pick a final reliable one. 
As for the evaluation criteria, perhaps this is my unfamiliarity with RL, but it would seem that Section 5.1 requires the MDP to be stationary every time it is queried, since aLTT needs the transcripts $Z$ of state/action/rewards to be iid. I would think that in the real world, interacting with the environment repeatedly changes the underlying MDP. Also, perhaps the loss function in Section 5.2.1 should be stated explicitly? Theoretical Claims: No proofs to check, though as previously mentioned, ideally it's stated clearly that $\mathcal{I}^t$ is only for sampling, not for testing. Experimental Designs Or Analyses: The experiments showcase well the utility of aLTT on the example problems. Though, as previously mentioned, the RL problem may be too stylized. Supplementary Material: Section A seems fine. Relation To Broader Scientific Literature: Xu et al. (2021) stated the e-process bandit framework from which this article inherits. Waudby-Smith and Ramdas (2024) proposed the particular e-process that is used here. And Angelopoulos et al. (2021) cast the hyperparameter optimization problem as a multiple testing problem in the form of LTT. This paper might be distinguished by the recognition that these tools can be fruitfully applied to settings that require expensive queries from an environment, such as policy selection in offline RL and prompt engineering for LLMs. Even if these methods are not directly applied (due to the conservatism displayed in Figure 3), perhaps they could be adjusted in practice and used as a heuristic when the finite sample guarantee is not precisely required. Essential References Not Discussed: Xu et al. (2021) is cited, but the bandit multiple testing setup is precisely the one for which the current one is a special case, and this framing is not discussed. 
Other Strengths And Weaknesses: The paper is relatively clear, but there was a point of frustration regarding whether $\mathcal{I}^t$ is tested over or simply used for sampling, as previously mentioned. I think this is an interesting paper with an application to a domain not previously considered by Xu et al (2021) and Angelopoulos et al (2021), but methodologically it is not new. Its strength would then lie in its impact to, e.g. prompt engineering or RL policy selection, though I don't think I can evaluate this area. Other Comments Or Suggestions: - To remind the reader, you could refer to $N$ as the cardinality of $\Lambda$ wherever it appears. - Equations (4) and (5) should be $\alpha$, not 1. - Box in Section 2.3: change "selected for testing" to something like "selected for estimating the risk more carefully"? - Figure 3: Dashed lines are hard to see, why not say "triangles"? Also, it might be nice to walk the reader through this figure more. Questions For Authors: To more convincingly show this method's utility compared to LTT, we should equip LTT with fixed-time valid $p$-values, which are more powerful than anytime-valid. So: (1) Does your method still dominate LTT with non-adaptive acquisition, when LTT uses $p$-values based on the hedged capital process of Waudby-Smith and Ramdas (2024), which Angelopoulos et al (2021) call the WSR bound? (2) What about $p$-values based on the central limit theorem, since $\frac{1}{\sqrt T} \sum_{t = 1}^T R(\lambda_i^t, Z_i^t)$ is asymptotically normal? This would serve as a more fully convincing demonstration that aLTT can outdo LTT. When the stopping time $T$ is fixed, my hunch is that it cannot always be the case, but may be the case in your applications of interest. I would change my score from Reject to Weak Reject if this is confirmed to be indeed the case (and possibly more if other reviewers concur that the application examples are compelling). Code Of Conduct: Affirmed. Overall Recommendation: 3
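The round-based procedure summarized in this review can be tied together in a short sketch. Everything below is illustrative rather than the paper's exact design: the fixed bet, the single-arm $\epsilon$-greedy policy, the deterministic toy losses, and the Bonferroni-corrected FWER selection rule (via anytime-valid p-values $p_i = 1/\max_s E_i^s$) are simplifying assumptions.

```python
import random

def altt(sample, N, alpha, delta, d, t_max, eps=0.25, seed=0):
    """Illustrative aLTT-style loop (simplified; not the paper's exact design).

    sample(i) -> a fresh loss in [0, 1] for hyperparameter i.
    Each round an eps-greedy policy decides which arm to query, only the
    queried e-process is updated, and an FWER-controlling selected set is
    read off via anytime-valid p-values p_i = 1/max_s E_i^s with a
    Bonferroni correction over all N hypotheses.
    """
    rng = random.Random(seed)
    e = [1.0] * N                 # one e-process per hyperparameter
    e_max = [1.0] * N             # running maxima -> anytime-valid p-values
    v = 0.5 / (1.0 - alpha)       # fixed bet in [0, 1/(1 - alpha)] (assumption)
    selected = set()
    for t in range(t_max):
        if rng.random() < eps:    # explore: a uniformly random arm
            i = rng.randrange(N)
        else:                     # exploit: the currently largest e-process
            i = max(range(N), key=lambda j: e[j])
        e[i] *= 1.0 + v * (alpha - sample(i))
        e_max[i] = max(e_max[i], e[i])
        selected = {j for j in range(N) if 1.0 / e_max[j] <= delta / N}
        if len(selected) >= d:    # early stopping once d arms are certified
            break
    return selected

# Toy run with deterministic losses: arm 0 is reliable (risk 0.05 < alpha),
# arm 1 is not (risk 0.5 > alpha); only arm 0 should be selected.
sel = altt(lambda i: [0.05, 0.5][i], N=2, alpha=0.2, delta=0.1, d=1, t_max=500)
```

Because every hypothesis in $\Lambda$ is re-examined at every round, while only the queried arm's e-process moves, this sketch also illustrates the sampling-versus-testing distinction raised in the review.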
Rebuttal 1: Rebuttal: Thank you for your insightful comments. Below we address your question: **Questions For Authors:** Yes, adaptive LTT (aLTT) substantially outperforms LTT, even when LTT uses fixed-time valid p-values. The table below reports final TPR for aLTT and LTT using p-values from Waudby-Smith & Ramdas (2024) and Hoeffding-Bentkus p-values (Angelopoulos et al., 2021). The latter is tight for 0-1 losses. In all cases, aLTT with adaptive acquisition ($\epsilon<1$) achieves significantly higher true positive rates (TPR), demonstrating that its advantage stems from data-adaptive acquisition rather than LTT’s use of weaker p-values. We will include these results in the revised paper. | | LTT WSR p-value | LTT HB p-value | aLTT $\epsilon=1$ (non-adaptive) | aLTT $\epsilon=0.25$ (adaptive) | |-----------|---------|----------------|--------------------|-------------------------| | Online RL | 0.14 $\pm$ 0.07 | 0.00 $\pm$ 0.00 | 0.15 $\pm$ 0.06 | **0.86** $\pm$ 0.05 | | Automated Prompt Engineering | 0.27 $\pm$ 0.06 | 0.27 $\pm$ 0.06 | 0.27 $\pm$ 0.06 | **0.65** $\pm$ 0.05 | P-values based on the central limit theorem are asymptotically valid but unsuitable for rigorous finite-sample guarantees. Such p-values can inflate FDR and lack statistical soundness—for instance, prediction sets from these p-values yield an empirical FWER of 0.154, exceeding the intended $\delta=0.10$. **Claims and Evidence** - We acknowledge the ambiguity in the term “testing” and will replace it with “evaluation” when referring to the data-adaptive acquisition policy. Specifically, at each time step $t$, a subset of hyperparameters $\mathcal{I}^t \subseteq \Lambda$ is selected for evaluation, updating only the e-processes in $\mathcal{I}^t$ (see Eq. 14). This clarifies that all hypotheses in $\Lambda$ are tested, but only a subset is evaluated at each step. - We believe Figure 3 aligns with theory, as both empirical FWER and FDR stay below $\delta$. 
It shows FWER (blue) and FDR (red) for an FWER-controlling procedure (dashed triangle) and an FDR-controlling procedure (solid square). Since FWER is stricter than FDR, for either scheme (triangle or square), the FWER curve (blue) is always higher than the FDR curve (red). Likewise, as the FWER-controlling procedure (dashed triangle) is stricter, its FDR and FWER curves are lower than those of the FDR-controlling procedure (solid square). **Methods and Eval** - RL and LLMs are widely used in high-stakes and high-risk applications. For example, in the field of robotics, RL algorithms are used for robotic surgery, autonomous navigation, robotic arm control, and more. In these cases, policies are often trained offline using simulators to mitigate the risks associated with training policies in the real world. However, due to the mismatch between simulators and the real world, the actual performance of these policies is often unknown. In Section 5.1 we exemplify how aLTT can be used to identify policies that satisfy real-world performance guarantees, while limiting the number of real-world interactions. Similarly, LLMs are used in critical fields such as medical diagnosis and legal applications. For example, Med-PaLM is a large language model (LLM) designed to provide high-quality answers to medical questions. The performance of these models can vary depending on how they are prompted, rendering their use potentially harmful. In the automated prompt engineering example proposed in Section 5.2 we showcase how aLTT can be used to identify prompt templates whose accuracy falls above the minimum reliability level. - The FWER criterion ensures high-confidence reliability, whereas FDR prioritizes statistical power in large-scale testing. FDR control is common in exploratory research, including genomics, neuroimaging, and social sciences, enabling broader discovery while accepting a higher false positive rate. 
- We assume the standard episodic RL setting [2], where agent interactions occur in episodes with resets to the initial state. Risk (15) measures average episodic performance, and reliability requirements apply over the initial state distribution $P_{s^1}$ and policy roll-out. [2] Sutton RS, Barto AG. Reinforcement learning: An introduction. **Related Work** - In the revised text, this relation will be further clarified as “This work aims to improve the data efficiency of LTT by leveraging recent advances in sequential multiple hypothesis testing (MHT) based on e-processes (Vovk & Wang, 2021; Waudby-Smith & Ramdas, 2024; Xu et al., 2021). Unlike LTT, which employs standard p-value hypothesis testing, we propose an adaptive learn-then-test strategy based on the hypothesis testing framework proposed in Xu et al. (2021) to perform statistically valid hyperparameter selection for machine learning models and generate finite-sample performance certificates.” - Thanks for bringing Jamieson and Jain (2018) to our attention! --- Rebuttal Comment 1.1: Comment: Thank you for acknowledging my comments! To a few of your points, here are just a few followups, which may not require a response. * Thank you, it makes sense that the LTT + WSR procedure will be dominated. But for LTT + CLT, after T = 5000 rounds, I would have expected normality to kick in, and its small sets may be attractive to a practitioner... this comparison may be worth adding as well. I would find it compelling that it dominates the CLT by so much. * In Figure 3, what I had meant is that it should not be construed that the FDR controlling approach also controls FWER. The FWER is always below $\delta$, which is not guaranteed by the FDR control. Perhaps this is also worth mentioning. * Thank you for your explanation in Methods and Eval. I actually do find the RL and LLMs example compelling, especially Med-PaLM. 
In robotic surgery and autonomous navigation, the environment does not seem static, leading to a slightly different MDP from episode to episode (that is, every time $Z$ is queried). But the point that it is a standard assumption is well taken. * I am not altogether convinced by the utility of FDR---the reasons for its appeal in a genomics or neuroimaging scenario do not seem to apply here---but perhaps a practitioner may find it appealing as a heuristic way to narrow down the $\lambda$ space. Also in "Other Comments Or Suggestions", please disregard my note on Equations (4) and (5), which is incorrect, though it might be nice to remark that the alpha dependence lies in $\Lambda^\textup{unrel}$. The other comments you can take or leave. I have improved my score to Weak Accept (originally Reject). --- Reply to Comment 1.1.1: Comment: Thank you for the thoughtful follow-up comments. We agree that including a comparison with LTT + CLT could be a valuable addition—especially to illustrate its power and practical appeal, even if it lacks formal statistical guarantees. We’ll consider adding this comparison in the revised version. Thank you as well for the clarification regarding the potential misinterpretation of Figure 3. You’re absolutely right that one should not conclude that FDR control implies FWER control, as the observed behavior is empirical and specific to our experimental setup. We’ll make sure to clarify this point in the revision.
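The ordering invoked in this exchange, namely that FWER upper-bounds FDR so any FWER curve must lie above the matching FDR curve, follows in one line. With $V = \lvert S \cap S_0 \rvert$ false discoveries among $\lvert S \rvert$ selections,

```latex
\mathrm{FDP} = \frac{V}{\lvert S \rvert \vee 1} \le \mathbf{1}\{V \ge 1\}
\quad\Longrightarrow\quad
\mathrm{FDR} = \mathbb{E}[\mathrm{FDP}] \le \mathbb{P}(V \ge 1) = \mathrm{FWER}.
```

Hence an FWER-controlling procedure at level $\delta$ automatically controls FDR at $\delta$, while the reverse implication does not hold in general, which is why the low empirical FWER of the FDR-controlling scheme in Figure 3 is an empirical observation rather than a guarantee.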
Summary: The authors propose Adaptive Learn-Then-Test (aLTT), a method designed for calibration of AI applications over discrete sets of hyperparameters. The problem is formulated as a multiple hypothesis testing (MHT) task. Hypotheses are tested using e-processes, enabling adaptive and sequential selection of hyperparameters while updating the e-processes based on accumulating evidence over time. The method is applied to two settings: policy selection for offline Reinforcement Learning and prompt engineering, showing improvements in true positive rate compared to non-adaptive baselines. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria are appropriate and well-justified for the calibration of AI applications. Theoretical Claims: There are no formal theoretical proofs presented in the paper. However, I reviewed the formulas provided and verified that they are consistent with their descriptions in the text. The claims regarding the algorithm’s behavior, as derived from these formulas, appear sound and align with the expected outcomes. Experimental Designs Or Analyses: The experimental design and analyses are sound, clearly described, and well-aligned with the goals of the paper. Supplementary Material: I reviewed the entirety of the supplementary material. I appreciated the detailed discussion of FWER and FDR-controlling procedures, as well as the additional experiments, including: - Ablation studies on the betting strategy, - Quantile risk control, - Wireless resource allocation, - Task-level results on the instruction induction dataset. Relation To Broader Scientific Literature: Key contributions: - Introduction of e-processes for calibration in AI applications. - Adaptive, sequential selection of hyperparameters in a statistically principled manner. 
Essential References Not Discussed: There are no essential references missing from the current discussion. Other Strengths And Weaknesses: Strengths: - Clear and accessible writing. - Well-explained and principled algorithm design. - Strong experimental evidence. Weaknesses: - No noticeable weaknesses. Other Comments Or Suggestions: No additional comments or suggestions. Questions For Authors: 1. What is the intuition behind the inability of aLTT to uncover the entire set of valid hyperparameters in the experiments? Understanding this limitation more deeply could help assess the practical utility and potential extensions of the method. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for taking the time to review our paper! Below is our response to your question and your comment regarding the absence of formal proofs: - The intuition behind aLTT’s inability to discover all hyperparameters is that, in our experiments, we use finite datasets, which may sometimes be insufficient for aLTT to accumulate enough evidence to identify the entire set of valid hyperparameters. - In the revised text we have decided to formally state the validity of aLTT with a theorem as follows: “Given a hyperparameter set $\Lambda$, a reliability level $\alpha \in [0,1]$, and an error level $\delta \in [0,1]$, aLTT with an FWER- or FDR-controlling selection rule returns a final prediction set $\hat{\Lambda}^{{\rm aLTT},T}$ that satisfies $(\alpha,\delta)$-FWER control or $(\alpha,\delta)$-FDR control, respectively.”
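For concreteness, the FDR-controlling selection rule referenced in the theorem can be instantiated with the e-BH procedure of Wang & Ramdas; a minimal sketch, where the example e-values are made up purely for illustration:

```python
def e_bh(e_values, delta):
    """e-BH: reject the k* hypotheses with the largest e-values, where k* is
    the largest k such that the k-th largest e-value is >= N / (k * delta).
    This controls FDR at level delta, even under arbitrary dependence
    between the e-values."""
    N = len(e_values)
    order = sorted(range(N), key=lambda i: e_values[i], reverse=True)
    k_star = 0
    for k in range(1, N + 1):
        if e_values[order[k - 1]] >= N / (k * delta):
            k_star = k
    return set(order[:k_star])

# N = 4 hypotheses at delta = 0.1: the k-th threshold is 4/(0.1*k),
# i.e. 40, 20, 13.3, 10, so the two largest e-values are rejected.
sel = e_bh([50.0, 45.0, 3.0, 0.2], delta=0.1)  # -> {0, 1}
```

Applied to the running-maximum (or stopped) e-processes of aLTT, this rule yields the $(\alpha,\delta)$-FDR control stated above.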
Summary: The submission proposes an adaptive algorithm for bandit multiple hypothesis testing with e-processes and anytime-valid false discovery rate control. The algorithm is dubbed "adaptive learn then test" (aLTT) for its connections to the "learn then test" (LTT) framework, which finds a subset of hyper-parameters that provide risk control, but in a non-adaptive fashion. Experiments on a reinforcement learning and a prompt selection task showcase the behavior of the proposed method in comparison to LTT. aLTT displays larger true positive rates across experiments, while controlling false discovery rate (or family-wise error rate). Claims And Evidence: The connections between this submission and Xu et al. (2021) should be clarified. The proposed aLTT algorithm boils down to Algorithm 1 in the reference. It should be made clear whether the contribution of the submission is to apply the algorithm of Xu et al. to machine learning settings, or to propose a novel algorithm. Methods And Evaluation Criteria: The evaluation criteria are appropriate Theoretical Claims: Theoretical claims are mentioned in the main text without formal proofs. Anytime FDR validity does not immediately follow from validity of the e-BH procedure in the general case (see Wang et al., "Anytime-valid FDR control with the stopped e-BH procedure"). Within the context of the submission, the arms are independent so FDR validity should be satisfied. It should be discussed when the proposed algorithm fails, so that misuse can be avoided. Experimental Designs Or Analyses: Could the authors expand on how ground-truth is defined in each experiment to evaluate true positive rate? Supplementary Material: No, I did not review supplementary material Relation To Broader Scientific Literature: The submission builds upon bandit multiple hypothesis testing, conformal risk control, and e-processes, which are well-established ideas in their research areas. Essential References Not Discussed: Connections with Xu et al. 
(2021) should be discussed Other Strengths And Weaknesses: **Strengths** * The problem of statistically-valid hyper-parameter selection is compelling * The idea of using anytime valid inference for this problem is neat **Weaknesses** * Presentation is hard to follow at times * Certain claims are hand-wavy and should be made precise I will expand on my comments and I am looking forward to discussing with the authors! Other Comments Or Suggestions: **Definition of hyper-parameter** Hyper-parameters are introduced in the submission and generally understood by the community as train-time quantities. The LTT framework and the proposed method, however, focus on calibration of fixed, pre-trained models, or am I missing something? In the offline reinforcement learning setting, it was somewhat confusing to see the policies themselves being treated as hyper-parameters. I thought the experiment was going to tackle the problem of choosing, for example, the strength of a regularization parameter to guarantee risk control after training. **Clarification questions on proposed method** The considered problem boils down to the bandit multiple hypothesis testing problem described in Xu et al. (2021). That is, at each round of the algorithm, the adaptive mechanism chooses a subset of arms to query, and then updates their respective wealth processes according to the outcomes of the queries. It should be clarified and discussed how aLTT differs from the previous work of Xu et al. The aLTT procedure is introduced as querying a subset of the arms at each round. In practice, the $\epsilon$-greedy procedure only selects one arm at a time. This comes short of showcasing the proposed method in full. Could the authors expand on reasonable choices of adaptive mechanisms different from the greedy one? 
It should be discussed under which assumptions aLTT provides anytime valid FDR control, as this property does not immediately follow from validity of the e-BH procedure in a general setting (see Wang et al., "Anytime-valid FDR control with the stopped e-BH procedure"). Could the authors expand on the choice of providing FWER control by converting the e-processes to p-processes? Could the authors expand on the message they are trying to convey in point 2 of the highlighted box on page 3? I am not sure I understand with respect to what it is claimed that the $Z_i$ "can be arbitrarily dependent". $Z_i^t \sim P_Z$, so the $Z_i^t$'s are i.i.d. **Experiments** How is ground-truth determined to compute TPR in both experiments? I have a few questions on the prompt engineering experiment: * How many prompts are being tested? * What kind of tasks are being considered? It would be interesting to include some examples of good and bad prompts to verify whether there are distinctive features in the two groups. * I am not sure how to verify the claim that "stricter accuracy requirements are seen to reduce the number of discovered reliable prompts, leading to an increase in the minimal instruction length". I could have a set {$p_1, p_2$} with minimal prompt length $\ell_1$, and a larger set {$p_3, p_4, p_5$} with minimal instruction length $\ell_2 > \ell_1$. * Instead of minimal instruction length, it might be more informative to report the bottom $\delta$ quantile in order to account for the failure probability. --- **Minor comments** * Lines 16-17, right column: repetition of "or" * Lines 99-100, left column: e-values are not p-values. * Definition 2 is not the usually considered notion of FDR: $\mathrm{FDR} = \mathbb{E}[\mathrm{FDP}] = \mathbb{E}\left[\frac{\lvert S \cap S_0 \rvert}{\lvert S \rvert \vee 1}\right] = \mathbb{E}\left[\frac{\lvert S \cap S_0 \rvert}{\lvert S \rvert} \,\middle|\, \lvert S \rvert > 0\right] \cdot \mathbb{P}[\lvert S \rvert > 0]$. 
* Algorithm 1: the mechanism $\mathcal{Q}$ should also be a function of $\mathcal{D}^{t-1}$ to reflect adaptability? * Line 296, left column: $\mathcal{E}$ is overloaded with the e-processes * Figures: it might be more readable to report LTT performance as a horizontal line since it does not depend on $t$ Questions For Authors: I have no further questions for the authors Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful comments. Below we address your question point by point: **Theorem and Stopped e-process**: In the revised version, we will formally state the aLTT guarantee in a theorem and provide a proof in the Appendix, which follows from the validity of the e-processes. “*Given a hyperparameter set $\Lambda$, a reliability level $\alpha \in [0,1]$, and an error level $\delta \in [0,1]$, aLTT with an FWER- or FDR-controlling selection rule returns a final prediction set $\hat{\Lambda}^{{\rm aLTT},T}$ that satisfies $(\alpha,\delta)$-FWER control or $(\alpha,\delta)$-FDR control, respectively.*” Regarding the validity of the e-BH procedure, we will add the following statement clarifying that e-BH maintains statistical validity under the scenario discussed in the paper, while for the alternative scenario in which data is re-used at later testing rounds it is necessary to use the procedure of [1]: “*Finally, note that while the e-BH procedure directly applies to the sequential testing of Section 2.3, where we allow risk estimates $\mathcal{R}^t=\{R(\lambda_i, Z_i^t)\}_{\lambda_i\in \mathcal{I}^t}$ to be arbitrarily dependent --- for example, by using the same data to evaluate all models in $\mathcal{I}^t$ --- reusing the same data for evaluation of different hyperparameters at later stages can invalidate the e-process validity unless this is properly corrected [1]*" [1] Wang et al. Anytime-valid FDR control with the stopped e-BH procedure. arXiv preprint arXiv:2502.08539. 2025 Feb 12 **Hyperparameter Definition**: We consider a broad definition of hyperparameters, including train-time, architectural, and post-processing parameters. For example, in the RL experiment (Sec. 5.1), “*We produce $N = 20$ control policies by setting the hyperparameter $\lambda$ in the TD3+BC training objective.*” The revised text will clarify that aLTT applies to both train-time and inference-stage hyperparameters. **Xu et al. 
:** We agree that aLTT relies on the sequential multiple hypothesis testing framework based on e-processes introduced by Xu et al. (2021). In this sense, aLTT is to Xu et al. (2021) what the original LTT is to multiple hypothesis testing with FWER control. In the original text we wrote “*This work aims at improving the data efficiency of LTT by leveraging recent advances in sequential MHT based on e-processes (Vovk & Wang, 2021; Waudby-Smith & Ramdas, 2024; Xu et al., 2021)*”. In the revised text, this relation will be further clarified as “*This work aims to improve the data efficiency of LTT by leveraging recent advances in sequential multiple hypothesis testing (MHT) based on e-processes (Vovk & Wang, 2021; Waudby-Smith & Ramdas, 2024; Xu et al., 2021). Unlike LTT, which employs standard p-value hypothesis testing, we propose an adaptive learn-then-test strategy based on the hypothesis testing framework proposed in Xu et al. (2021) to perform statistically valid hyperparameter selection for machine learning models and generate finite-sample performance certificates.*” **Alternative Data Acquisition:** The $\epsilon$-greedy strategy generalizes to a top-$K$ approach, where the $K$ largest e-processes are selected with probability $1-\epsilon$, and $K$ random hyperparameters are chosen otherwise. We will revise the text accordingly and add an experiment ( https://ibb.co/d46Q2yPK ). **Why p-values from e-processes**: While for FDR control there exist procedures that directly operate with e-values, for FWER control e-values are first converted to p-values using e-to-p calibration, and then p-value-based FWER procedures are used [1]. [1] Vladimir Vovk and Ruodu Wang. E-values: Calibration, combination and applications. **Highlighted box page 3:** We clarify that while data for each hyperparameter is i.i.d., risk estimates across hyperparameters may be dependent due to shared data reuse (see also first point). 
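The top-$K$ generalization described under “Alternative Data Acquisition” fits in a few lines; the function name and example values below are illustrative, not taken from the paper:

```python
import random

def top_k_eps_greedy(e_values, K, eps, rng=random):
    """With probability 1 - eps, query the K hyperparameters whose
    e-processes are currently largest (exploit); otherwise query K
    uniformly random hyperparameters (explore). K = 1 recovers the
    plain eps-greedy acquisition policy."""
    N = len(e_values)
    if rng.random() < eps:
        return rng.sample(range(N), K)
    return sorted(range(N), key=lambda i: e_values[i], reverse=True)[:K]

# With eps = 0 the choice is purely greedy:
picked = top_k_eps_greedy([0.5, 8.0, 1.0, 3.0], K=2, eps=0.0)  # -> [1, 3]
```

Any policy of this form is admissible because the e-process guarantees hold regardless of how the queried subset is chosen, provided the choice depends only on past observations.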
**Experiments**: - Unless the experiments are conducted using synthetic data, the ground truth is generally unknown. In real-data experiments, the ground truth is estimated from large hold-out samples, which will be clarified in the revision. - The average number of tested prompts is 66 across tasks. - We consider tasks from the Instruction induction dataset; in Appendix B.3 we report the performance per task, specifying which tasks we consider. In the revised version we will subsample some interesting good/bad prompt examples from the exhaustive list ( https://ibb.co/1GV2MHyj ). - In the revised text, we will clarify that this statement is made in an empirical and average sense. In fact, due to the randomness of testing, it is possible that the discovered sets are not exactly subsets of one another. However, we find that a stricter requirement leads to a smaller predicted set of reliable prompts (on average) and an increase in the minimal instruction length (on average). - Here we plot failure probability instead of minimal length (https://ibb.co/QSCN4sy). --- Rebuttal Comment 1.1: Comment: I sincerely thank the authors for their consideration of my questions and comments, which have been addressed. I thank the authors for clarifying the connections between their proposed algorithm and existing methods, I suggest this connection be made crystal clear in order not to confuse readers coming from the broader ML community who may be unaware of the testing-by-betting literature. The contribution of the paper is a neat and useful application of sequential testing ideas which would be of interest to the community. I still remain unconvinced on the statement that the proposed method applies to train-time quantities. Selecting policies post-hoc still requires training a policy for each hyperparameter value, which may be prohibitive. I suggest smoothing out this claim, since all experiments presented in the paper apply to post-training quantities. 
I believe FWER could also be controlled by an equivalent e-Bonferroni correction without converting to anytime-valid p-values? See Hartog et al., "Family-wise Error Rate Control with E-values". Thank you for including an example with a top-$K$ version of the selection mechanism and example prompts! Some of them definitely do not seem relevant to the task at all, e.g. "choose the better word, but my friend doesn't" for the "larger animal" task. I was somewhat surprised to see these prompts being evaluated at all. Could the authors expand on their findings regarding useful discoveries of prompts that "sound good" but actually are not selected by the testing mechanism? I think this could be a compelling use case. I updated my score to reflect the rebuttal and updated claims. --- Reply to Comment 1.1.1: Comment: Thank you for your valuable feedback and responsiveness: - We agree with the suggestion to rectify the claim regarding train-time quantities. We will clarify that aLTT naturally applies to hyperparameter selection when testing is performed post-training (as in the experiments). In contrast, if testing is conducted during training, aLTT can still be applied, but with the caveat that any evidence obtained from testing past versions of the model must be discarded, as it does not provide valid risk estimates for the latest model (since we are effectively testing a new model or hypothesis). - You are absolutely right! Hartog and Lei (2025) provide an FWER-controlling procedure that directly uses e-values. This is a very recent reference, and we will make sure to include it in the revised version, as it strongly supports the proposed approach based on e-value testing. Thanks a lot for bringing this up! - The reason for evaluating seemingly irrelevant prompts stems from the way candidate prompts are generated. We followed the forward generation approach from Zhou et al.
(2023), where (i) a few-shot example is first provided to a larger LLM, and (ii) the model is then asked to generate prompts that effectively describe the given data. Given this approach, there is a non-negligible chance that some prompts may be less relevant—or even entirely irrelevant—due to imperfections in the larger LLM, despite using a strong model (Llama3.3 70B Instruct, released last December). That said, we fully agree with your point that our testing mechanism successfully filters out prompts that may sound appropriate but perform poorly in actual deployment. For instance, in the first_word_letter task—where the goal is to extract the first letter from a given word (e.g., “y” from “year”)—the prompt “*Extract the first letter from each word*” initially appears ideal but achieves a test score of only 0.67, leading to its rejection by our mechanism. In contrast, a similar prompt, “*Output the first letter of the word,*” achieves a test score of 0.89 and is selected. We will make sure to emphasize this important point in the revised version.
Preference Optimization for Combinatorial Optimization Problems
Accept (poster)
Summary: The paper studies the problem of solving combinatorial optimization problems with reinforcement learning, focusing on two key issues: diminishing reward signals that slow down learning and inefficient exploration in large solution spaces. To address this, it introduces preference optimization, a method that replaces numerical rewards with preference-based signals. It also integrates local search into training rather than post-processing, aiming to improve solution quality without extra inference cost. The approach is tested on problems like the traveling salesman problem, claiming faster convergence and better solutions compared to standard RL methods. Claims And Evidence: In contribution 1, the authors mentioned optimal solutions. However, the paper is entirely empirical; I believe there is no evidence showing optimality. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper does not include theoretical claims or proofs, except for a light proposition. Experimental Designs Or Analyses: The experiment section is well presented without significant issue. Supplementary Material: Yes, the formulation of TSP and CVRP. Relation To Broader Scientific Literature: The paper adapts preference-based learning, which is currently very popular in the LLM era, to combinatorial optimization problems. This can potentially open new chances for preference learning outside LLMs. Essential References Not Discussed: I am not aware of any missing references. Other Strengths And Weaknesses: Strengths: - The paper presents experimental results on several combinatorial optimization problems, showing improvements over the baselines Weaknesses: - A key concern with this paper is the lack of a natural justification for applying preference learning to combinatorial optimization problems.
In language models, preference learning arises organically because human feedback provides qualitative judgments on responses, making it a natural fit for reinforcement learning from human feedback. However, in COPs, there is no intrinsic notion of human preference—solutions are typically evaluated using well-defined numerical objective functions. It is unclear whether PO’s improvements stem from methodological advantages or better-tuned training procedures. Other Comments Or Suggestions: None Questions For Authors: 1. Can the authors clarify what "fine-tuning" refers to in detail? 2. Can the authors provide a side-by-side comparison between the preference model used in this combinatorial optimization framework and other preference learning applications? I am particularly interested in understanding why preference models are the right tool for this setting. Unlike RLHF, where preference conflicts are often allowed, combinatorial optimization problems typically have well-defined objective functions. Given this distinction, how does preference learning in this work align with or differ from existing applications, and what makes it particularly suitable here? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your thoughtful comments, which help us improve our presentation clarity. Below, we address your concerns regarding terminology and methodology. --- ### **1. Regarding the Claims of Optimality** We apologize for the confusion in Contribution 1 regarding *optimal solutions*. This term was used to indicate that our model preserves preference relations among candidate solutions—when solution $\tau_1$ is better than $\tau_2$ (the true relation), our model consistently assigns a higher (implicit) reward value to $\tau_1$. This property is approximately ensured by preference modeling and shift-invariance. We will revise this description as *consistently emphasizing better solutions with a relation-preservation property* and justify this term in the method section in detail. --- ### **2. Rationale of PO** + **Background: Reinforcement Learning for Combinatorial Optimization (RL4CO)** In the RL4CO paradigm, a neural policy iteratively samples a batch of feasible solutions and updates itself to favor better solutions according to the objective function (e.g., route length, makespan), naturally establishing preference relationships. Our work focuses on *algorithmic improvements for training RL-based policies to enhance solution quality and learning efficiency* in RL4CO, further improving the current REINFORCE-based algorithms that rely on numerical rewards; this is one of the reasons for considering PO for COPs. + **Key Challenge: Diminishing Rewards** A critical issue in RL4CO is the diminishing reward signal. As solutions approach near-optimal, the numerical differences between their rewards become extremely small (e.g., 0.001), making it difficult for a policy trained with existing RL algorithms to distinguish better solutions from worse ones. The weakened signals cause premature convergence, preventing the policy from finding better solutions.
Fortunately, PO can alleviate such issues thanks to the stability of preference relations, which is also the reason for choosing PO. + **Introducing Preference Optimization** To address this, we *convert the batch of numeric rewards into pairwise preferences*. Even if the reward difference is small, a preference-based approach gives a *categorical* label: B > A. This binary label can offer a more stable learning signal than the numerical one. Further, the learning process can be briefly summarized as: > 1. Sample a batch of feasible solutions from the current policy. > 2. Compare them by the objective function (e.g., cost, makespan) and label the better one as "preferred." > 3. Update the policy by *increasing the probability* of generating solutions that are "preferred (winner)" over others (loser), then go back to Step 1. More details are provided in Algorithm 1 in our manuscript. + **Distinction from RLHF** 1. *Objective vs. Subjective Preferences*: In RLHF, human annotators may disagree or change their minds, creating conflicts. In PO, the grounding metric (e.g., route length) is naturally fixed, ensuring consistent and transitive preference signals. 2. *Online vs. Offline*: RLHF often uses a fixed dataset of human comparisons. In PO, our policy *actively* samples solutions, which are then compared by the objective function to produce preference labels. 3. *Entropy vs. KL Regularization*: Our approach stems from an *entropy*-regularized RL objective, encouraging exploration in large discrete spaces, whereas RLHF typically employs KL-divergence to maintain proximity to a pretrained model. --- ### **3. Explanation of "Fine-Tuning"** In deep learning, *fine-tuning* typically means continuing to train an already-trained model with specialized data or new techniques to improve or adapt its performance. In our model, we similarly adopt this term to represent a two-stage training framework.
Moreover, note that stage 2 (fine-tuning) is optional, and the model after stage 1 alone already admits significant improvements over SOTA baselines; thus, the performance gain is indeed ensured by PO. In our context: 1. We *first train* a policy $\pi_\theta$ using PO. 2. Then, we *"fine-tune"* that policy by *incorporating local search into the training loop*. **Local search (LS)** is a standard technique in CO that makes small changes to a solution with a non-degradation guarantee, so it is generally fair to introduce this process into the solver. The detailed fine-tuning process can be summarized as: > 1. Take the policy's output $\tau$. > 2. Run local search on $\tau$ to get a slightly improved solution $\mathrm{LS}(\tau)$. > 3. Update the policy to favor $\mathrm{LS}(\tau)$ and go back to Step 1. This way, the policy *learns* from the local search refinements. This is what we refer to as "fine-tuning": continuing to refine the learned policy under the guidance of LS-based preference pairs. --- We appreciate the opportunity to clarify our work and will revise our manuscript accordingly. Thank you again for your valuable time and feedback; we openly welcome any further discussions.
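To make step 2 of the fine-tuning loop above concrete, here is a minimal, self-contained sketch (our illustration, not the authors' code): a naive 2-opt pass on a random Euclidean TSP tour. Because 2-opt only accepts improving moves, the pair $(\tau, \mathrm{LS}(\tau))$ is always a valid (loser, winner) preference pair.

```python
import numpy as np

def tour_length(coords, tour):
    # Total length of the closed tour visiting coords[tour] in order.
    pts = coords[np.asarray(tour)]
    return float(np.linalg.norm(pts - np.roll(pts, -1, axis=0), axis=1).sum())

def two_opt(coords, tour):
    # Repeatedly reverse the segment tour[i:j+1] whenever doing so
    # shortens the tour; 2-opt never degrades the solution.
    best = list(tour)
    improved = True
    while improved:
        improved = False
        for i in range(1, len(best) - 1):
            for j in range(i + 1, len(best)):
                cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
                if tour_length(coords, cand) + 1e-12 < tour_length(coords, best):
                    best, improved = cand, True
    return best

rng = np.random.default_rng(0)
coords = rng.random((10, 2))
tau = list(range(10))          # stand-in for the policy's sampled tour
tau_ls = two_opt(coords, tau)  # LS(tau): labeled "preferred" during fine-tuning
```

Since `tau_ls` is never longer than `tau`, labeling it the winner is safe by construction, which is what makes LS-guided preference pairs cheap to generate.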
Summary: This paper proposes a way to modify reinforcement learning (RL) so that it can deal with diminishing reward signals. The key idea is to turn the reward signals into pairwise preferences from which an underlying reward function can be learned. Another contribution of the paper is to use local search to generate additional pairwise preference relations (solution after local search is preferred over solution before local search). The ideas are applied to combinatorial optimization problems, and it is shown that the conversion of reward to preference function is helpful on different problems (TSP, CVRP, FFSP) and for different underlying Neural RL approaches (AM, Pointerformer, Sym-NCO and POMO). ## update after rebuttal The authors have addressed my questions and comments. I retain my score and recommend accepting the paper. Claims And Evidence: The main claims are - that the proposed transformation of reward signals into preference signals addresses the issue of diminishing reward signals and inefficient exploration. - That the reparameterized entropy-regularized objective bypasses the enumeration of the entire action space - That the integration with local search helps during training. The paper shows on different neural solvers that the transformation of reward signal into preference signal is beneficial. It is also demonstrated that the fine-tuning helps. However, here it is not clear whether the fine-tuning is done in addition or instead of the final iterations of the algorithm without fine-tuning. In the former case, one may wonder whether simply running the algorithm without fine-tuning for some additional generations would have yielded similar results. Methods And Evaluation Criteria: The method is compared with the default method from the literature and with state-of-the-art heuristics such as LKH3 and HGS. The metrics used are optimality gap and inference time that are commonly used in the field. 
Theoretical Claims: I did not review the mathematical derivations in Appendix C. Experimental Designs Or Analyses: It is not said explicitly, but I assume the training and test set are different (albeit drawn from the same distribution). Results seem to be based on single training runs, no standard errors are provided, which is not best practice. However, given the consistency of the results across problems, it seems unlikely that replicating the experiments would lead to very different results. Thus, it does not raise a major concern in my eyes. Supplementary Material: All except for Section C. Relation To Broader Scientific Literature: The proposed mechanism is shown to improve results when integrated into three different methods from the literature, and tested on three different combinatorial problems. There is some relation to Preference-based RL which is discussed. Essential References Not Discussed: I am not aware of missing references. Other Strengths And Weaknesses: - I understand the issue of diminishing reward signals. However, there are many possible solutions, e.g., simply replacing the reward by the rank of the solution. It is not intuitive why the proposed transformation into pairwise preferences is a natural choice. However, it seems to work. - The name “Preference optimization” seems misleading, as it usually refers to optimization based on preference information where only such information is available, whereas in the current paper, the actual reward function is available but turned into preferences. - It is not clear how parameters were chosen. I assume parameters for the baseline algorithms are taken from the corresponding original references? How were the alpha and the preference function chosen? How would they be chosen for a new problem (since different settings are chosen for different problems)? - The paper doesn’t provide any information on e.g. the graph embedding or decoder used. It would be nice to have a summary in the appendix. 
- While I believe it is not explicitly stated, the models are trained separately for different problem sizes. In practice, a heuristic that only works on a specific size seems very limited. Some experiments to see how the trained models generalize to other problem sizes would have been nice. Other Comments Or Suggestions: NA Questions For Authors: 1. Could you please confirm that the test set is different from the training set? 2. Could you clarify whether the fine-tuning is in addition to the non-fine-tuning iterations, or replacing the last few non-fine-tuning iterations? 3. Do you have an idea how the approach generalizes across differently sized problems? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your affirmation of our work, and are grateful for the insightful review and constructive comments. We provide point-to-point responses as follows. --- ### **1. Math implications behind the proposed PO algorithm** We appreciate this insightful question on the rationale of the PO model, which can be justified from the following aspects. *Transform numerical signals into ranks*. First, consider the REINFORCE algorithm: $$ \nabla_{\theta} J(\theta)= \mathbb{E}_{x \sim \mathcal{D}, \tau \sim \pi_{\theta}(\tau \mid x)} \left[\left(r(x, \tau)-b(x)\right) \nabla_{\theta} \log \pi_{\theta}(\tau \mid x)\right]. $$ The policy is trained to increase the probability of solutions better than the baseline (where $r(x, \tau)-b(x) > 0$) and decrease the probability of worse solutions (where $r(x, \tau)-b(x) < 0$). This implies that *the policy mathematically acts as a ranking model that ranks solutions by their rewards*. Besides, explicitly learning the reward with traditional RL algorithms has two key weaknesses: 1) sensitivity to baseline selection; 2) sensitivity to the numerical scale of reward signals. *Transform ranks into preference relations*. Based on this mathematical implication, we naturally connect the essential goal (discriminative ranking) with a ranking model. Since a rank involves many possible solutions, we adopt the preference model to retrieve the rank. Specifically, as long as the training pairs are sufficient, it is clear that the true ranks can be provably recovered from the pairwise relations. Moreover, preference relations admit appealing properties, i.e., invariance to both the baseline choice and the numerical scale of rewards. Thus, PO-based modeling of ranks instead of numerical signals is generally reasonable from the viewpoints of methodology and numerical approximation. *The name 'PO'*. Note that PO here refers only to the methodology; thus, a more accurate description could be 'PO-based modeling for COPs'. We will clarify this point in the revision.
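The two invariance properties claimed above can be checked numerically. The following is our illustrative sketch (not the paper's implementation): a Bradley-Terry pairwise loss that depends only on which solution wins and on the policy log-probabilities, alongside a demonstration that the induced preference labels are unchanged by any shift or positive rescaling of the rewards, while REINFORCE advantages are not.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bt_loss(logp_w, logp_l, alpha=0.05):
    # Bradley-Terry pairwise loss -log sigmoid(alpha * (logp_w - logp_l)):
    # only the (winner, loser) label and policy log-probabilities enter;
    # raw reward magnitudes do not.
    return -np.log(sigmoid(alpha * (logp_w - logp_l)))

def pref_labels(rewards):
    # Pairwise preference labels induced by the rewards: (i, j) means
    # solution i is preferred to solution j.
    r = np.asarray(rewards, dtype=float)
    return {(i, j) for i in range(len(r)) for j in range(len(r)) if r[i] > r[j]}

# Near-optimal solutions: reward gaps of ~1e-3 make REINFORCE advantages
# tiny, and their values change with the baseline and the reward scale.
rewards = np.array([-7.773, -7.774, -7.775])
adv_mean = rewards - rewards.mean()   # baseline = batch mean
adv_best = rewards - rewards.max()    # baseline = batch best

# Preference labels are invariant to any shift and positive rescaling:
assert pref_labels(rewards) == pref_labels(100.0 * rewards + 42.0)
```

The categorical label stays informative however small the numerical gap becomes, which is the stability argument made above.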
--- ### **2. Parameter setting** All parameters in the baselines were adopted from their official codes, and we only retrained all models from scratch for comparison. --- ### **3. Choice of $\alpha$ in PO** We selected $\alpha$ through grid search over several training steps and found that the suitable value is both problem-dependent and model-dependent. This is why different $\alpha$ values are adopted for different problems and models. --- ### **4. Graph embedding and decoder** We utilize the same graph embedding and decoder as in the original works. We will include a comprehensive summary in the revision. --- ### **5. Generalization of PO** Methodologically, as PO is a general algorithmic improvement that can be applied to RL-based solvers, it inherits the advantages of the baselines and enhances them. To empirically validate this, we apply it to a baseline, i.e., ELG[r1], that is specifically designed for generalization. The results on TSPLib and CVRPLib demonstrate this capability:

**TSPLib:**

|Method|(0, 200]|(200, 1002]|Total|Time|
|-|-|-|-|-|
|LKH3|0.00%|0.00%|0.00%|24s|
|POMO|3.07%|13.35%|7.45%|0.41s|
|Sym-POMO|2.88%|15.35%|8.29%|0.34s|
|Omni-POMO|1.74%|7.47%|4.16%|0.34s|
|Pointerformer|2.31%|11.47%|6.32%|0.24s|
|LEHD|2.03%|3.12%|2.50%|1.28s|
|BQ-NCO|1.62%|**2.39%**|**2.22%**|2.85s|
|DIFUSCO|1.84%|10.83%|5.77%|30.68s|
|T2TCO|1.87%|9.72%|5.30%|30.82s|
|ELG-POMO (RF)|1.12%|5.90%|3.08%|0.63s|
|ELG-POMO (PO)|**1.04%**|5.84%|3.00%|0.63s|

**CVRPLib-Set-X:**

|Method|(0, 200]|(200, 1000]|Total|Time|
|-|-|-|-|-|
|LKH3|0.36%|1.18%|1.00%|16m|
|HGS|0.01%|0.13%|0.11%|16m|
|POMO|5.26%|11.82%|10.37%|0.80s|
|Sym-POMO|9.99%|27.09%|23.32%|0.87s|
|Omni-POMO|5.04%|6.95%|6.52%|0.75s|
|LEHD|11.11%|12.73%|12.25%|1.67s|
|BQ-NCO|10.60%|10.97%|10.89%|3.36s|
|ELG-POMO (RF)|4.51%|6.46%|6.03%|1.90s|
|ELG-POMO (PO)|**4.39%**|**6.37%**|**5.94%**|1.90s|

Therefore, these findings verify that PO is also effective in enhancing the generalization ability of RL-based solvers.
We will incorporate them into revision. --- **Q1:** Yes, the test set differs from the training set. Training data is generated on-the-fly while test data uses different seeds. --- **Q2:** Fine-tuning is applied after policy convergence with PO, when standard training would not improve performance. Fine-tuning iterations are additional to non-fine-tuning iterations. We will clarify this in revision. --- **Q3:** As PO provides a general algorithmic improvement over REINFORCE variants, integrating it with generalizable RL methods is natural. This includes combining PO with meta-learning like Omni-VRP[r2] or with local policy like ELG[r1]. --- We sincerely thank you for your valuable time and insightful comments. We kindly welcome any further questions. **Reference** >[r1] Towards Generalizable Neural Solvers for Vehicle Routing Problems via Ensemble with Transferrable Local Policy. arXiv:2308.14104. > >[r2] Towards Omni-generalizable Neural Methods for Vehicle Routing Problems. ICML 2023. --- Rebuttal Comment 1.1: Comment: I appreciate that the authors show now more examples on how their idea can be used in combination with different baseline algorithms, and it improves results in all cases. I don’t follow their response to why one cannot just use the ranks instead of the actual reward values to avoid the issue of diminishing rewards. Most modern evolutionary algorithms use ranks rather than fitness values. Is the issue that it is more challenging to compute the gradient, or because the rank size grows in RL but not in EAs? The authors tune their parameter alpha for every problem but don’t provide guidance on how to set alpha for a new problem in practice. In other cases, I have seen this lead to underwhelming performances when practitioners applied the methods to new applications. The authors compare their algorithm with an additional fine-tuning phase with the algorithm without fine-tuning. 
At first glance, this doesn’t seem appropriate because the fine-tuning algorithm is allowed to do more learning steps. However, the authors say that the standard algorithm has converged, and so running it longer may not have been beneficial. I am fine with it because the authors agree to make this explicit in the paper. --- Reply to Comment 1.1.1: Comment: We are deeply grateful for your valuable feedback and affirmation of the responses. We really appreciate the opportunity to respond to the further comments, which are presented as follows. --- ### **1. Regarding the comparison with rank-based rewards** We appreciate your comment regarding the potential use of rank-based rewards to address the diminishing reward issue. We agree that utilizing ranks instead of actual reward values can indeed circumvent diminishing differences, analogous to *reward shaping* techniques commonly employed in the RL community. For instance, if we have five sampled solutions with very similar costs (e.g., 7.773, 7.774, 7.775, 7.776, and 7.777), rank-based methods could reassign their rewards as (5, 4, 3, 2, 1), respectively, to enhance gradient signals in traditional REINFORCE-based algorithms. The proposed PO framework offers two key advantages over standard rank-based reward shaping methods: 1) **Integration of a probabilistic framework for exploration.** PO naturally stems from entropy regularization within its probabilistic framework to enhance exploration within the solution space. Thereby, PO can *simultaneously address the diminishing reward issue while promoting diverse solution sampling*, as the quantifiable improvements in policy entropy are evidenced in Figures 3(c) and 3(d) of our manuscript. 2) **Flexibility without reward engineering.** Rank-based rewards often require careful manual scaling and tailored reward assignments, as gradients in RL are sensitive to reward magnitudes.
PO mitigates this issue by relying on flexible and generalizable preference models rather than manual reward engineering. --- ### **2. Our practice of parameter tuning** You've identified an important aspect that we should clarify for practical modeling. We appreciate your concern regarding the practical application of our method to new problems, which can be addressed from the following aspects. + **Methodological view.** In PO's entropy-regularized objective, the parameter $\alpha$ represents the exploration-exploitation trade-off. Higher $\alpha$ values promote exploration, while lower values emphasize exploitation. In our experiments, we employed a grid search within {0.005, 0.01, 0.05, 0.1, 0.5, 1.0, 2.0} for each problem-model combination. + **Empirical view.** Our empirical findings suggest that PO's performance is influenced by two additional factors: 1) *The network architecture's inherent exploration capacity:* Models with built-in exploration mechanisms (e.g., POMO and its variants with multi-start approaches) typically benefit from lower $\alpha$ values that prioritize exploitation, while models like DIMES require more exploration with higher $\alpha$ values. For POMO and its variants on different problems, we observed that routing problems typically perform well with $\alpha$ in the range of 1e-2 to 1e-3, while FFSP benefits from $\alpha$ values between 1.0 and 2.0. 2) *Preference model selection:* As PO serves as a flexible framework, different preference models can yield distinct parameterized reward assignments (as shown in Figure 3(b)), necessitating different $\alpha$ calibrations. The Exponential model can be a good candidate when the Bradley-Terry model underperforms on new problems, particularly challenging ones, before exploring alternatives like the Thurstone or Plackett-Luce models (which generalize Bradley-Terry beyond pairwise comparisons). Besides, we also provide a detailed analysis of different preference models in Appendix E2.
+ **Adapting to new problems.** For new applications, there are two intuitions for practical extensions: 1) *Length-control regularization:* For problems where sampled solutions have varying lengths and shorter solutions with lower costs are preferred, a length-control regularization factor $\frac{1}{|\tau|}$ can be effective, resulting in: $f(\alpha \left[ \frac{1}{|\tau_1|} \log \pi_\theta(\tau_1 | x)- \frac{1}{|\tau_2|} \log \pi_\theta(\tau_2 | x) \right])$. 2) *Margin enhancement:* For models with limited capacity, a margin enhancement term $f(\alpha \left[\log \pi_\theta(\tau_1 | x)- \log \pi_\theta(\tau_2 | x) \right] - \gamma)$ can help prioritize better solutions, where $\gamma$ serves as a margin parameter when $f(\cdot)$ is a non-linear function. --- We sincerely thank you for your insightful comments, and thorough review of our work. Your academic rigor and thoughtful questions have helped us significantly improve both the clarity and quality of our research. We will incorporate your valuable suggestions in our revised manuscript and welcome any further inquiries you may have.
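The two practical extensions above can be written down directly. The following is a hedged sketch (our illustration, not the released code) with $f$ instantiated as the sigmoid, i.e. the Bradley-Terry link, and made-up log-probabilities and lengths, only to show the effect of each term:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def length_controlled_pref(logp1, len1, logp2, len2, alpha=0.05):
    # f(alpha * [logp1/|tau1| - logp2/|tau2|]) with f = sigmoid:
    # per-step log-probabilities remove the bias toward shorter solutions
    # when sampled solutions have varying lengths.
    return sigmoid(alpha * (logp1 / len1 - logp2 / len2))

def margin_pref(logp1, logp2, alpha=0.05, gamma=0.1):
    # f(alpha * [logp1 - logp2] - gamma): the margin gamma demands that
    # tau1 beats tau2 by a clear gap before the pair counts as "won".
    return sigmoid(alpha * (logp1 - logp2) - gamma)

# A positive margin strictly lowers the modeled win probability of tau1,
# so the loss keeps pushing the policy toward clearly better solutions:
p_no_margin = margin_pref(-10.0, -11.0, gamma=0.0)
p_margin = margin_pref(-10.0, -11.0, gamma=0.1)
assert p_margin < p_no_margin
```

Other choices of $f$ (e.g. the exponential model mentioned earlier in the rebuttal) slot into the same two templates.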
Summary: This paper attempts to improve the training paradigm of end-to-end deep learning solvers for combinatorial optimization problems. It introduces Preference Optimization (PO) to alleviate the issue of diminishing advantage signals in the later stages of reinforcement learning (RL) training, thereby preventing models from getting stuck in local optima. Claims And Evidence: Yes, I believe the evidence is sufficient. Methods And Evaluation Criteria: Yes Theoretical Claims: n/a Experimental Designs Or Analyses: The experiment is reliable. Supplementary Material: I have browsed through most of it. Relation To Broader Scientific Literature: See strength Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: - The problem addressed by the authors, namely improving the training paradigm of end-to-end neural solvers, is interesting. Introducing preference optimization into this domain is novel. - The proposed method is plug-and-play and demonstrates performance improvements across multiple RL-based neural solvers. Weaknesses: - The paper only compares RL algorithms enhanced with PO to their original RL counterparts. However, it does not benchmark the improvements against other state-of-the-art (SOTA) approaches from different paradigms, such as: supervised learning-based methods (e.g., BQ-NCO), diffusion model-based methods (e.g., DIFUSCO), and heuristic learning-based solvers (e.g., NeuroLKH). - Even if some comparisons do not favor the RL approach, acknowledging the inherent limitations of end-to-end RL methods in this domain and discussing RL’s potential advantages over other paradigms—as well as ways to narrow the performance gap—would add significant value to the paper. - The problem scale studied in the main text is too small. The authors are encouraged to incorporate the large-scale generalization experiments from the appendix into the main text and compare against relevant SOTA methods to provide a more comprehensive evaluation.
Other Comments Or Suggestions: see above -------------- Update: After rebuttal, I raised the score from 2 to 3. Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your thorough review and valuable insights. Your constructive feedback has significantly improved our manuscript. We present the detailed responses as follows and hope they address your concerns. --- ### **1. Broadening comparison with SOTA solvers on large-scale problems and generalization** We appreciate your suggestion to enhance the experimental evaluation. In response, we have included additional comparisons with SOTA solvers LEHD [r1], BQ-NCO [r2], DIFUSCO [r3], NeuroLKH [r4], T2TCO [r5], ELG [r6] on large-scale problems and generalization tasks. Specifically, recall that our PO approach serves as an algorithmic improvement over baselines, which implies it is applicable to RL-based solvers for large-scale and generalization problems. To validate this claim, we conducted additional experiments with ELG, a neural solver designed for such challenges. Below are the optimality-gap results on TSPLib and CVRPLib with problems scaling up to 1002:

**Results on TSPLib:**

|Method|(0, 200]|(200, 1002]|Total|Time|Paradigm|
|-|-|-|-|-|-|
|LKH3|0.00%|0.00%|0.00%|24s|Heuristic|
|NeuroLKH|0.00%|0.00%|0.00%|24s|Heuristic+SL|
|POMO|3.07%|13.35%|7.45%|0.41s|RL|
|Sym-POMO|2.88%|15.35%|8.29%|0.34s|RL|
|Omni-POMO|1.74%|7.47%|4.16%|0.34s|RL|
|Pointerformer|2.31%|11.47%|6.32%|0.24s|RL|
|LEHD|2.03%|3.12%|2.50%|1.28s|SL|
|BQ-NCO|1.62%|**2.39%**|**2.22%**|2.85s|SL|
|DIFUSCO|1.84%|10.83%|5.77%|30.68s|SL|
|T2TCO|1.87%|9.72%|5.30%|30.82s|SL|
|ELG-POMO (RF)|1.12%|5.90%|3.08%|0.63s|RL|
|ELG-POMO (PO)|**1.04%**|5.84%|3.00%|0.63s|RL|

**Results on CVRPLib-Set-X:**

|Method|(0, 200]|(200, 1000]|Total|Time|Paradigm|
|-|-|-|-|-|-|
|LKH3|0.36%|1.18%|1.00%|16m|Heuristic|
|NeuroLKH|0.47%|1.16%|0.88%|16m|Heuristic+SL|
|HGS|0.01%|0.13%|0.11%|16m|Heuristic|
|POMO|5.26%|11.82%|10.37%|0.80s|RL|
|Sym-POMO|9.99%|27.09%|23.32%|0.87s|RL|
|Omni-POMO|5.04%|6.95%|6.52%|0.75s|RL|
|LEHD|11.11%|12.73%|12.25%|1.67s|SL|
|BQ-NCO|10.60%|10.97%|10.89%|3.36s|SL|
|ELG-POMO (RF)|4.51%|6.46%|6.03%|1.90s|RL|
|ELG-POMO (PO)|**4.39%**|**6.37%**|**5.94%**|1.90s|RL|

Consequently, the results demonstrate that the solver trained with our PO method consistently outperforms the original counterpart, verifying that PO enhances generalizability across diverse problem scales. We will incorporate these findings into the main text in the revision. --- ### **2. Pros and cons of RL and other paradigms** We sincerely appreciate your suggestion. We expand our discussion of the pros and cons of RL relative to other paradigms as follows. Generally, RL-based approaches offer flexible training without requiring expert knowledge or high-quality reference solutions. As demonstrated in Table 2, our PO-based solver achieves better performance on FFSP instances where heuristic solvers struggle. We acknowledge that RL may face challenges such as slower convergence rates and longer training times compared to supervised learning (SL) approaches, while SL in turn faces the often intractable problem of acquiring supervision, e.g., high-quality annotations. Note that RL-based methods are free of this limitation, and our PO-based algorithmic improvement can further improve the RL baselines with significantly better results. Besides, a promising future research direction lies in hybrid approaches combining RL with heuristic methods like MCTS and 2-opt, as in DIMES. Our experimental results also preliminarily verify this point, i.e., the experiment and analysis of training DIMES with PO (included in Appendix E.3) demonstrate that PO can further enhance such hybrid methods. --- We are deeply grateful for your thorough review and insightful feedback. We sincerely hope that these responses address your concerns, and kindly welcome any further discussions. **References** >[r1] [Fu Luo et al., 2023]. Neural Combinatorial Optimization with Heavy Decoder: Toward Large Scale Generalization. NeurIPS 2023. > >[r2] [Drakulic et al., 2023].
BQ-NCO: Bisimulation Quotienting for Efficient Neural Combinatorial Optimization. NeurIPS 2023.
>
>[r3] [Sun et al., 2023]. DIFUSCO: Graph-based Diffusion Solvers for Combinatorial Optimization. NeurIPS 2023.
>
>[r4] [Xin et al., 2021]. NeuroLKH: Combining Deep Learning Model with Lin-Kernighan-Helsgaun Heuristic for Solving the Traveling Salesman Problem. NeurIPS 2021.
>
>[r5] [Li et al., 2023]. T2T: From Distribution Learning in Training to Gradient Search in Testing for Combinatorial Optimization. NeurIPS 2023.
>
>[r6] [Liu et al., 2023]. Towards Generalizable Neural Solvers for Vehicle Routing Problems via Ensemble with Transferrable Local Policy. arXiv:2308.14104.
Summary: This paper introduces Preference Optimization (PO), a novel method for solving CO problems like TSP and CVRP. Key contributions:

1. The authors transform quantitative reward signals into qualitative preference signals, addressing two major challenges in reinforcement learning for COPs:
   - Diminishing reward signals as the policy improves
   - Inefficient exploration in vast combinatorial action spaces
2. They reparameterize the reward function in terms of policy and use statistical preference models (like Bradley-Terry) to formulate an entropy-regularized objective that aligns the policy directly with preferences.
3. They integrate local search techniques during fine-tuning rather than as post-processing, helping policies escape local optima without adding inference time.

## Update after rebuttal

The authors have partially answered my questions but failed to address all my concerns during the rebuttal. I maintain my rating as weak accept.

Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed method aims at CO problems, for which TSP, CVRP, and FFSP are standard benchmarks. Theoretical Claims: I have not found any concerns. Experimental Designs Or Analyses: Yes. Supplementary Material: I reviewed the supplementary material and did not find any concerns. Relation To Broader Scientific Literature: This work brings ideas from preference-based optimization (e.g., in RLHF) to combinatorial optimization.

Essential References Not Discussed:
- [Grinsztajn et al., 2023] Winner Takes It All: Training Performant RL Populations for Combinatorial Optimization.
- [Chalumeau et al., 2023] Combinatorial Optimization with Policy Adaptation using Latent Space Search

Other Strengths And Weaknesses: Strength: formulation of the method. Weakness: lacks more recent baselines (e.g., Poppy [Grinsztajn et al., 2023] and COMPASS [Chalumeau et al., 2023]). Other Comments Or Suggestions: None. Questions For Authors: 1. Optimizers are mostly scale-invariant, e.g.
Adam scales updates to be loss scale invariant. The proposed method can be compared to reward shaping. Would you have ablation studies using reward shaping or reward normalization to understand how PO compares to that? 2. Equation 3: should the entropy term not be inside the expectation over x? (max-entropy framework) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank you for your valuable feedback and constructive comments. Your insightful suggestions are invaluable for improving our work. Below, we address your concerns in detail.

---

### **1. Including comparison with Poppy and COMPASS**

Thank you for providing the related references, i.e., Poppy [r1] and COMPASS [r2]. We discuss them from the following aspects and will incorporate them into the revision.

*Methodological Comparison.* In general, both these works and ours focus on RL for COPs; the major difference is that the provided works focus on framework design for enhancing the diversity of learned policies, whereas our proposed PO algorithm concentrates on algorithmic improvements to the quality of policies and the efficiency of learning, and can serve as a plug-and-play objective for baselines with a REINFORCE-based objective.

*Experimental Validation.* To validate our plug-and-play PO objective on these baselines, we reproduced the baselines with the official open-source code and adapted our objective for further algorithmic improvement, i.e., replacing the policy-gradient-based training with PO-based training.
Due to computational constraints (each experiment requires approximately 50 hours on a single 80GB-A800 GPU for 1e5 training steps, with complete training requiring over 60 days), we provide preliminary comparison results from the current training records, including models trained from scratch with the original baselines and with our PO-based improvement:

*Results on TSP-100:*

|Step|10k|20k|30k|40k|50k|60k|70k|80k|90k|100k|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|COMPASS(RF)|75.29%|29.69%|32.43%|51.36%|34.44%|39.23%|27.14%|17.22%|14.02%|10.93%|
|COMPASS(PO)|9.51%|7.15%|5.89%|4.95%|4.50%|4.32%|4.14%|3.90%|3.76%|3.61%|
|Poppy(RF)|4.57%|3.85%|3.17%|2.83%|2.59%|2.39%|2.39%|2.24%|2.05%|1.95%|
|Poppy(PO)|2.28%|1.56%|1.23%|1.03%|0.92%|0.78%|0.71%|0.69%|0.66%|0.63%|

*Results on CVRP-100:*

|Step|10k|20k|30k|40k|50k|60k|70k|80k|90k|100k|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|COMPASS(RF)|120.82%|58.17%|43.11%|20.38%|14.80%|12.78%|11.46%|10.84%|10.40%|9.79%|
|COMPASS(PO)|18.84%|10.26%|8.99%|8.10%|7.47%|7.19%|6.91%|6.53%|6.04%|6.50%|
|Poppy(RF)|14.15%|11.13%|9.42%|8.52%|8.63%|7.16%|6.94%|6.15%|5.91%|5.42%|
|Poppy(PO)|6.25%|4.54%|3.77%|3.43%|3.11%|2.95%|2.78%|2.65%|2.53%|2.43%|

The results above demonstrate that 1) PO achieves a significantly lower optimality gap at the same iteration number; and 2) PO ensures much faster convergence to the same gap, along with higher stability during optimization. Moreover, these results also validate that our proposed algorithmic improvement is consistently effective across various RL-based baselines.

---

### **2. Regarding comparison to reward shaping**

We fully agree with you about the importance of comparing PO with reward shaping methods, which is indeed already considered in the experiment section.
Specifically, the original Pointerformer implementation already incorporated reward normalization techniques: $\nabla_{\theta} J(\theta) \approx \frac{1}{B \times N} \sum_{i=1}^{B} \sum_{j=1}^{N}\left(\frac{R(\tau_i^j)-\mu(\tau_i)}{\sigma(\tau_i)}\right)\nabla_{\theta}\log p_{\theta}\left(\tau_i^j \mid s\right)$. As shown in Table 1 and Figure 2(c) of our manuscript, the results consistently show that PO outperforms its REINFORCE variant with reward shaping. Figure 3(b) also illustrates how PO re-distributes the advantage values.

Besides, from a theoretical view, the fundamental difference between PO and reward shaping lies in the fact that **PO is derived from an entropy-regularized objective and is invariant to reward shaping, since shaping does not change the preference relationship between solutions**, making it more suitable for exploring large discrete spaces, while reward shaping functions more as a technique to stabilize the training of RL policies. In the current manuscript, this analysis is already included in Appendix E.2, i.e., the different preference models (e.g., Bradley-Terry, Thurstone, Exponential), which are used to control the assignment of advantage values, analogously to reward shaping.

---

### **3. Correction of Equation 3**

We sincerely thank you for your careful examination and valuable suggestion. We will revise Equation 3 to the more precise form: $\max_{\pi_{\theta}} E_{x \sim \mathcal{D}, \tau \sim \pi_{\theta}(\tau|x)}\left[ r(x,\tau) + \alpha \mathcal{H}\left(\pi_{\theta}(\tau|x)\right)\right]$.

---

We are deeply grateful for your thoughtful feedback, and sincerely hope that these responses address your concerns. We welcome any further discussion or suggestions.

### **Reference**

>[r1] [Grinsztajn et al., 2023] Winner Takes It All: Training Performant RL Populations for Combinatorial Optimization. NeurIPS 2023
>
>[r2] [Chalumeau et al., 2023] Combinatorial Optimization with Policy Adaptation using Latent Space Search. NeurIPS 2023
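As an editorial aside, the shared-baseline reward normalization quoted in point 2 above can be sketched in a few lines. This is an illustrative sketch of the standardization formula only (not the authors' code); it also shows the invariance argument from the rebuttal, namely that a positive affine reward transform leaves the per-instance preference ordering of solutions unchanged.

```python
import numpy as np

def normalized_advantages(rewards: np.ndarray) -> np.ndarray:
    """Standardize rewards per instance: rewards has shape (B, N),
    with N sampled solutions for each of B instances."""
    mu = rewards.mean(axis=1, keepdims=True)
    sigma = rewards.std(axis=1, keepdims=True) + 1e-8  # avoid division by zero
    return (rewards - mu) / sigma

# Two instances, three sampled tours each (negative tour lengths as rewards).
rewards = np.array([[-3.0, -2.5, -4.0],
                    [-1.0, -1.2, -0.9]])
adv = normalized_advantages(rewards)

# Positive affine reward shaping preserves the per-instance ranking of
# solutions -- the preference relation that PO relies on.
shaped = 2.0 * rewards + 5.0
assert (np.argsort(rewards, axis=1) == np.argsort(shaped, axis=1)).all()
```

Each row of `adv` has (approximately) zero mean and unit variance; these values would weight the log-probability gradients in the REINFORCE update above.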
QuEst: Enhancing Estimates of Quantile-Based Distributional Measures Using Model Predictions
Accept (poster)
Summary: This paper introduces the QuEst method, which combines a small amount of observed data with a large quantity of imputed data to derive enhanced estimates and reliable confidence intervals for quantile-based distribution metrics (QBDM). The method demonstrates its value in real-world applications, such as LLM Auto-Evaluation, with both theoretical and experimental evidence supporting its effectiveness.

## Update after rebuttal

In the rebuttal, the authors clarified that their method achieves variance reduction without extra assumptions and degrades gracefully when imputation quality is low, addressing my main concern. Hence, I maintain a positive rating.

Claims And Evidence:
- The claim that "QuEst can provide more accurate quantile-based distribution metric (QBDM) estimates by integrating observed and model-predicted data" is substantiated by theoretical and experimental results.
- The theoretical and empirical findings support the claim that "QuEst performs well in multidimensional and complex scenarios." However, additional experiments in high-dimensional settings would further validate the method's efficacy.

Methods And Evaluation Criteria: The proposed method and evaluation criteria are reasonable and effectively address the research problem. QuEst combines observed and imputed data to provide more accurate QBDM estimates, and experimental results across various domains demonstrate its effectiveness. Theoretical Claims: I have not verified the correctness of the theoretical claims in the paper. Experimental Designs Or Analyses: The experimental design and analysis are sound and comprehensively validate QuEst's performance. However, incorporating comparisons with more advanced methods, such as those discussed in Section 3.2, would enhance the paper's persuasiveness. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: The proposed QuEst is a rigorous extension of the Prediction-Powered Inference (PPI) framework.
By introducing L-estimators, it improves the estimation of quantile-based metrics, showcasing advantages across multiple application areas. Essential References Not Discussed: The paper thoroughly discusses previous studies. While no critical references appear missing, the possibility of omissions cannot be completely ruled out due to my limited familiarity with this area.

Other Strengths And Weaknesses:

**Strengths**:
- S1: The topic is valuable, especially the application to "LLM Auto-Evaluation."
- S2: The paper is well-written and easy to follow.
- S3: Experimental results support the proposed claims and demonstrate the method's effectiveness.

**Weaknesses**:
- W1: More comparisons with advanced methods, like those in Section 3.2, would strengthen the paper.
- W2: The motivation and advantages of tuning a set of basis function coefficients rather than the original $\lambda$, as presented in Section 5 ("Extension"), need further discussion.

Other Comments Or Suggestions: There are some duplicated references that should be checked.

Questions For Authors:

**Questions**:
- Q1. Could the authors further discuss the requirements for imputed data to assist in estimating the target statistics? Intuitively, when imputed data significantly deviates from the true values, the assumptions needed for certain conclusions (e.g., variance reduction) may be violated, raising questions about the necessity of imputed data.
- Q2. I wonder whether the proposed estimator $\hat Q_{\psi}(\lambda)$ achieves asymptotic variance reduction while maintaining asymptotic unbiasedness without any additional assumption, compared to the previous estimator $Q_{\psi}(F_n)$. If additional assumptions are introduced, please discuss what they are and whether they hold.
- Q3. Could the authors briefly discuss the finite-sample properties of the proposed estimator?

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the careful assessment of our work and the thoughtful suggestions. Below, we respond to particular concerns.

[Q1: Comparisons with more methods, such as those in Section 3.2]

With respect to baselines, prior PPI variants are incompatible with the measures considered in our experiments; QuEst is the only existing specialized method for rigorously combining labeled and imputed data to estimate and provide CIs on these measures. We will clarify these choices in the final draft, specifying our exact baselines and emphasizing why specialized PPI methods do not directly apply to quantile-based distributional measures. This should ensure readers understand both the rationale behind our baseline selection and the distinct advantages offered by QuEst.

Also, in order to further respond to these concerns, we have completed additional experiments on estimating the $\beta$-VaR QBDM (Table 1). This experiment does allow for a direct comparison to PPI, since it is estimating a single quantile. We find that QuEst produces higher quality estimates than our usual suite of observed and imputed baselines as well as the PPI method from Angelopoulos et al. Representative figures will be included in the camera-ready version of the paper.

| Num. Data | Observed | Imputed | QuEst | PPI |
|-----------|------------------------|-----------------------|-----------------------|-----------------------|
| 250 | 9.2e-02 ± 6.7e-03 | **6.3e-02 ± 1.8e-03** | 6.5e-02 ± 4.8e-03 | 6.7e-02 ± 4.9e-03 |
| 500 | 6.5e-02 ± 4.2e-03 | 6.6e-02 ± 1.9e-03 | **4.3e-02 ± 2.9e-03** | 4.5e-02 ± 3.4e-03 |
| 1000 | 4.3e-02 ± 3.2e-03 | 6.6e-02 ± 1.7e-03 | **3.0e-02 ± 2.0e-03** | 3.5e-02 ± 2.4e-03 |
| 1500 | 3.5e-02 ± 2.2e-03 | 6.6e-02 ± 2.0e-03 | **2.4e-02 ± 1.8e-03** | 3.2e-02 ± 2.4e-03 |

[Q2: The motivation and advantages of our extension in Section 5.]
Here we address the reviewer's question from two aspects:

- Motivation: We leverage the fact that our estimator remains asymptotically unbiased regardless of which weighting function $\tilde{\psi}$ is chosen, because the expectations of its two terms cancel out. Thus, by letting $\tilde{\psi}$ be a flexible tuning function---rather than simply setting $\tilde{\psi} = \psi$---we can optimize for better debiasing and variance reduction.
- Advantage: Instead of tuning a single scalar, we introduce a basis-function approach (e.g., a polynomial or "spline" basis) and optimize the associated coefficient vector $\xi$. This advanced method, backed by new theorems and techniques, can yield more efficient estimators in both synthetic and real-world settings, as it provides added flexibility in controlling bias and variance.

[Q3: Imputed data quality]

With respect to imputation quality, we emphasize that QuEst's adaptive nature ensures that it remains valid even if the imputed data deviates substantially from the ground truth: if the imputed data are of high quality, we choose a large $\lambda$ to weight the imputed data more heavily in the estimator. On the other hand, if the imputed data are of low quality, we set a smaller weight on the imputed data (if $\lambda=0$, the estimator just uses the gold-standard data alone). Since we are optimizing the tuning parameter $\lambda$, we expect our estimator to always perform at least as well as using the gold-standard labels alone. Empirically (as in Fig. 4 with correlation $<0.1$), the estimator self-corrects by increasingly falling back on the smaller observed dataset if predictions are poor (with the $\alpha$ parameter).
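As an illustrative aside, the $\lambda$-weighted combination of observed and imputed estimates described above can be sketched on synthetic data. Everything below is our own simplification, not the authors' implementation: it treats the single-quantile ($\beta$-VaR) case, and a crude bootstrap grid search over $\lambda$ stands in for the paper's closed-form tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# Small gold-standard set with paired model predictions, plus a large
# unlabeled set with predictions only (synthetic stand-ins).
n, N, beta = 200, 5000, 0.9
y_small = rng.normal(size=n)                       # gold-standard labels
f_small = y_small + rng.normal(scale=0.3, size=n)  # predictions, paired with y
f_large = rng.normal(size=N) + rng.normal(scale=0.3, size=N)  # predictions only

def combined_quantile(lam: float) -> float:
    # Observed-only estimate plus lam times the large-vs-small imputed shift;
    # lam = 0 recovers the gold-standard-only estimator.
    return (np.quantile(y_small, beta)
            + lam * (np.quantile(f_large, beta) - np.quantile(f_small, beta)))

def boot_var(lam: float, reps: int = 200) -> float:
    # Bootstrap variance of the combined estimator (paired resampling).
    idx = rng.integers(0, n, size=(reps, n))
    ests = [np.quantile(y_small[i], beta)
            + lam * (np.quantile(f_large, beta) - np.quantile(f_small[i], beta))
            for i in idx]
    return float(np.var(ests))

lam_star = min(np.linspace(0.0, 1.0, 11), key=boot_var)
estimate = combined_quantile(lam_star)
```

When predictions correlate well with the true labels, the shift term cancels much of the sampling noise and the grid search picks a large $\lambda$; with uninformative predictions, the variance criterion drives $\lambda$ toward 0, recovering the observed-only estimate.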
As mentioned in the previous question, our framework can automatically adapt to the quality of the imputed data, and we expect our estimator to always perform at least as well as using the gold-standard labels alone.

[Q5: Finite-sample properties of the proposed estimator]

There are two points we want to address.

1. Establishing the central limit theorems actually requires only a couple of hundred gold-standard labeled data points, so in practice our framework works quite well with limited gold-standard labeled data.
2. If we fall into the regime of extremely small data, i.e., fewer than 100 labeled points, then the data uncertainty is too large and **no** statistically principled method would work in the distribution-agnostic setting (this can be proved using the no-free-lunch theorem in learning theory). We would have to impose more assumptions to obtain stronger results.

---

Rebuttal Comment 1.1: Comment: Thank you for your rebuttal, and I will maintain my positive score.
Summary: The authors propose a novel framework—QuEst—that enhances quantile-based distributional measure estimation by combining scarce high-quality observations with abundant model-predicted data. It produces an asymptotically unbiased estimator with reduced variance and can be further optimized using spline functions. Empirical results demonstrate that QuEst achieves accurate estimates and tighter confidence intervals than previous methods across different domains.

## Update after rebuttal

I thank the authors for their responses. My overall rating and assessment of the work remains positive, and so I maintain my score and recommend weak accept.

Claims And Evidence: QuEst can achieve more accurate and reliable estimates for quantile-based distributional measures than methods relying solely on either observed or imputed data. Methods And Evaluation Criteria: The method and evaluation in the paper are well-motivated and comprehensive. Theoretical Claims: The theoretical claims appear well-founded and are built upon established statistical principles.

Experimental Designs Or Analyses:
- Overall, the experimental evaluation is thorough and well-designed, but there are a few areas worth noting: QuEst's performance improves as the quality (i.e., the correlation between observed and imputed data) increases. However, the experiments do not fully explore scenarios where the imputed data is of lower quality or noisy.
- The method hinges on the proper tuning of the λ parameter (or its optimized variant in QuEst-Opt). While the paper provides closed-form expressions and some empirical evidence of optimal tuning, additional details on hyperparameter sensitivity and robustness across a wider range of settings might further strengthen the evaluation.

Supplementary Material: I only roughly read the supplementary material and did not check the proofs. Relation To Broader Scientific Literature: The impact statement in this paper states the broader impact of this paper well.
Essential References Not Discussed: All good.

Other Strengths And Weaknesses:
- As the authors mention in the limitations section, a key weakness is the heavy reliance on high-quality imputed data, which may reduce robustness when predictions are noisy. However, this paper follows a similar style to PPI, which leverages pretrained model information and provides valid confidence intervals for estimating certain quantities. In this context, the approach is reasonable.
- In Figure 2, only the mean error is reported. Please include variance by running additional experiments.
- Does the prompt design impact the performance of the proposed method?
- Can more baseline methods be included? Currently, only two naive baselines are presented in the paper.

Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful remarks. Please see below for responses to individual concerns.

[Q1: Imputed data quality]

First, we would like to clarify that our remark in the limitations section concerned the fact that **if we want a significant gain in variance reduction**, we need **relatively** high-quality predictions. There are two points we want to address here regarding methodology:

- We want to emphasize that **relatively** high-quality imputed data is enough to observe a significant gain in variance reduction; we do not need $100\%$ correct annotations for the unlabeled data. Indeed, in our experiments (Figure 4), we show that QuEst performs effectively with a correlation of less than 0.1 (a very noisy regime) between the observed and imputed labels, and we still observe a significant benefit from incorporating the imputed data in the estimator.
- Another benefit of this framework is that it can automatically adapt to the quality of the imputed data: if the imputed data are of high quality, our algorithm will choose a large $\lambda$ to weight the imputed data more heavily in the estimator. On the other hand, if the imputed data are of low quality, our algorithm will set a smaller weight on the imputed data (if $\lambda=0$, the estimator just uses the gold-standard data alone). Since our algorithm optimizes the tuning parameter $\lambda$, we expect our estimator to always perform at least as well as using the gold-standard labels alone. We will explicitly clarify this distinction in the final draft and emphasize that QuEst automatically debiases poor imputation quality and gracefully reverts to using the observed data as the primary signal.

[Q2: Hyperparameters]

Also, in response to the reviewer's concern, we will further clarify that our core estimator does not involve any hyperparameters in the closed-form tuning for $\lambda$, which we derived theoretically.
We believe that our empirical results across five distinct datasets and a wide range of labeled/unlabeled splits suggest this tuning is robust.

[Q3: Variance in Figure 2]

With respect to Figure 2, we ran 2000 trials, and the resulting variance in Figure 2 is extremely small, rendering the error bars nearly invisible for most of the plots.

[Q4: Prompt design]

Although we did not vary prompt design in our study, QuEst remains valid regardless of how the imputed data is generated—it will simply adapt to the correlations observed between imputed and true labels. Empirically, we show that even inaccurate predictions can be used effectively by QuEst: for example, accuracy on OpinionQA is roughly 50\%, but the QuEst estimate still consistently beats baselines.

[Q5: Other baselines]

With respect to baselines, prior PPI variants are incompatible with the measures considered in our experiments; QuEst is the only existing specialized method for rigorously combining labeled and imputed data to estimate and provide CIs on these measures. Thus we compared against the "classic" approach of using only the labeled data, as well as the use of imputed data only. For the confidence intervals, our "classic" baseline still requires our highly non-trivial CLT derivation (with $\lambda=0$). We will clarify these choices in the final draft, specifying our exact baselines and emphasizing why specialized PPI methods do not directly apply to quantile-based distributional measures. This should ensure readers understand both the rationale behind our baseline selection and the distinct advantages offered by QuEst.

Also, in order to further respond to these concerns, we have completed additional experiments on estimating the $\beta$-VaR QBDM (Table 1). This experiment does allow for a direct comparison to PPI, since it is estimating a single quantile.
We find that QuEst produces higher quality estimates than our usual suite of observed and imputed baselines as well as the PPI method from Angelopoulos et al. Representative figures will be included in the camera-ready version of the paper.

| Num. Data | Observed | Imputed | QuEst | PPI |
|-----------|------------------------|-----------------------|-----------------------|-----------------------|
| 250 | 9.2e-02 ± 6.7e-03 | **6.3e-02 ± 1.8e-03** | 6.5e-02 ± 4.8e-03 | 6.7e-02 ± 4.9e-03 |
| 500 | 6.5e-02 ± 4.2e-03 | 6.6e-02 ± 1.9e-03 | **4.3e-02 ± 2.9e-03** | 4.5e-02 ± 3.4e-03 |
| 1000 | 4.3e-02 ± 3.2e-03 | 6.6e-02 ± 1.7e-03 | **3.0e-02 ± 2.0e-03** | 3.5e-02 ± 2.4e-03 |
| 1500 | 3.5e-02 ± 2.2e-03 | 6.6e-02 ± 2.0e-03 | **2.4e-02 ± 1.8e-03** | 3.2e-02 ± 2.4e-03 |

---

Rebuttal Comment 1.1: Comment: Thanks for your detailed response and I will keep my score.
Summary: The authors propose QuEst to estimate a quantile-based distributional measure (see Definition 1) in a setting where there is a small set of high-quality data and a large set of low-quality data. For the high-quality data, it assumes that their low-quality estimates are also known and follow the same low-quality distribution that governs the low-quality data. In this setting, the shift between the high- and low-quality distributions can be estimated using the small set of data, and then be used to correct the estimate computed using only the large set of low-quality data. This correction is the key step of their proposed QuEst, as described in Eq. (2). The hyperparameter $\lambda$ in Eq. (2) is optimized using Theorem 3.1, which essentially balances the contribution of the high- and low-quality data based on their proportions. By using the optimized $\lambda$ in Eq. (2), the authors arrive at their proposed QuEst.

## Update after rebuttal

- Despite the heavy theoretical background, the high-level intuition seems simple to me. According to the assumptions made in Section 3.1, it is expected that $Q\_{\psi}(F_n) - \lambda Q\_{\psi}(\widetilde{F}_n) \approx Q\_{\psi}(F\_{N}\^{u}) - \lambda Q\_{\psi}(\widetilde{F}\_{N}^{u})$ because $\lim\_{n\to\infty}Q\_{\psi}(F_n) = \lim\_{N\to\infty}Q\_{\psi}(F\_{N}\^{u})$ and $\lim\_{n\to\infty}Q\_{\psi}(\widetilde{F}_n) = \lim\_{N\to\infty}Q\_{\psi}(\widetilde{F}\_{N}^{u})$, which is roughly an application of some variant of the law of large numbers. Rearranging this approximate equality yields Eq. (2), which underpins the proposed method. After reading the first rebuttal, I realized that the primary difference lies in the specific variant of the law of large numbers employed. The authors distinguish the two variants using the concepts of M-estimators and L-estimators. However, I am uncertain about the novelty of this distinction in the present context.
- Regarding the spline-function-based method, I think the authors' initial rebuttal was weak, as the relevant content is entirely missing. After reading their second response (which contains information absent from the submission), I agree that using a function basis could potentially improve the quality of estimation. However, the submission lacks concrete theoretical support for evaluating its efficiency, and I do not find the experimental results in Table 3 sufficient to empirically validate the claimed superior efficiency. Moreover, how their spline-function-based method was implemented is also missing. Overall, I will raise my score to 2. The main reason it is not higher is that the section on the spline-function-based method is clearly incomplete, considering that the authors highlight it as a key contribution in the submission. Claims And Evidence: One of the contributions they claim in the abstract is that "Further, we offer a novel spline function based method for optimizing our method." However, the word "spline" does not appear outside the abstract and the introduction section. Methods And Evaluation Criteria: The proposed QuEst makes sense in their specified setting. Theoretical Claims: I did not verify the proofs of their theoretical claims, as they fall outside my area of expertise. Experimental Designs Or Analyses: One issue is that they do not clearly define what the "classic method" is, which is used in the experiments comparing confidence intervals. Another *possible* issue is the absence of other baselines in their experiments comparing the quality of estimates. Supplementary Material: I did not review the appendix, as I am not familiar with the theories presented therein. Relation To Broader Scientific Literature: I am not familiar with the context. Essential References Not Discussed: I am not familiar with the context. Other Strengths And Weaknesses: The major concern I have is what is novel in this paper compared to the cited references. 
The key step of their proposed QuEst, as described in Eq. (2), already appears in (Angelopoulos et al. 2023, Section 2.1). Moreover, the idea of optimizing $\lambda$ in Eq. (2) can also be found in (Angelopoulos et al. 2023, Section 6). The only difference I noticed is the quantity to estimate. **References** Angelopoulos, A. N., Duchi, J. C., & Zrnic, T. Ppi++: Efficient prediction-powered inference, 2023. URL https://arxiv.org/abs/2311.01453. Other Comments Or Suggestions: NA Questions For Authors: - For the proposed QuEst, what is novel compared to the theoretical framework established in (Angelopoulos et al. 2023)? - It is possible that I missed it, but where is the proposed so-called spline function based method? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their time and consideration in reviewing our work. Here we respond to specific concerns raised in the review. We hope that our answers can resolve the misunderstandings and help to increase our score.

[Q1: Novelty compared to existing work.]

At first glance, the surface form of Eq. (2) might resemble that of Angelopoulos et al. (2023). However, there is a significant difference between our frameworks. QuEst advances the prediction-powered inference paradigm beyond M-estimators (e.g., means, single quantiles) to fully encompass L-estimators, which can express diverse distributional measures like VaR, CVaR, and multi-quantile segments. This expansion requires new theoretical machinery, as our estimator cannot be straightforwardly written as sums of i.i.d. random variables. We rely on techniques from L-statistics, e.g., the Hajek decomposition, and we also need to prove that $\mathbb{E}Q_\psi(F_n)-Q_\psi(F)=o_p(1/\sqrt{n})$ for certain $\psi$ functions, thereby establishing asymptotic normality under suitable conditions. Furthermore, QuEst integrates the concept of "power tuning" into this broader class of estimators and extends it with more sophisticated (spline-function-based) corrections in Sec. 5, improving statistical efficiency in scenarios where a simple scalar multiplier would be insufficient. We also address multidimensional estimation, allowing simultaneous handling of multiple distributional measures (e.g., tail segments of multiple metrics) in a way that is not possible in prior PPI methods.

From a practical standpoint, this generalization to L-estimators is essential for real-world tasks where the distributional properties themselves matter, such as identifying extreme behaviors or analyzing key population subgroups. QuEst provides asymptotically unbiased estimates and valid confidence intervals for quantities of substantial interest in domains like finance, public policy, and healthcare.
Alongside the theoretical results, we present new variance formulas, covariance expressions, and performance bounds tailored to the L-estimator setting, offering a blueprint for rigorous inference on an array of tail or segment-based metrics. In sum, QuEst goes far beyond merely adapting an existing correction term; it is a comprehensive framework that systematically enables distributional inference where M-estimator-based prediction-powered approaches do not apply.

[Q2: Use of splines in extension]

We apologize for the confusion. We are referring to the result in our "Extension" section. There, we introduce a more sophisticated version of power tuning instead of just tuning a multiplicative scalar $\lambda$. Specifically, we consider tuning the coefficient vector for a given basis function in $\xi^T\phi(\cdot)$ (the tuning parameter is $\xi$ here). We specifically tried a polynomial function basis, which is why we call this "spline function based". Notice that this is a completely new and more advanced tuning method, and we introduce new theorems and new techniques to obtain **more statistically efficient estimators**.

[Q3: "Classic" baseline for confidence intervals]

With respect to baselines, prior PPI variants are incompatible with the measures considered in our experiments; QuEst is the only existing specialized method for rigorously combining labeled and imputed data to estimate and provide CIs on these measures. Thus we compared against the "classic" approach of using only the labeled data, as well as the use of imputed data only. For the confidence intervals, our "classic" baseline still requires our highly non-trivial CLT derivation (with $\lambda=0$). We understand that this was not clear in the original submission, and we will definitely clarify these details in our future revision.

[Q4: Other baselines]

In order to respond to these concerns, we have completed additional experiments on estimating the $\beta$-VaR QBDM (Table 1).
This experiment does allow for a direct comparison to PPI, since it is estimating a single quantile. We find that QuEst produces higher-quality estimates than our usual suite of observed and imputed baselines, as well as the PPI method from Angelopoulos et al. Representative figures will be included in the camera-ready version of the paper.

| Num. Data | Observed | Imputed | QuEst | PPI |
|-----------|------------------------|-----------------------|-----------------------|-----------------------|
| 250 | 9.2e-02 ± 6.7e-03 | **6.3e-02 ± 1.8e-03** | 6.5e-02 ± 4.8e-03 | 6.7e-02 ± 4.9e-03 |
| 500 | 6.5e-02 ± 4.2e-03 | 6.6e-02 ± 1.9e-03 | **4.3e-02 ± 2.9e-03** | 4.5e-02 ± 3.4e-03 |
| 1000 | 4.3e-02 ± 3.2e-03 | 6.6e-02 ± 1.7e-03 | **3.0e-02 ± 2.0e-03** | 3.5e-02 ± 2.4e-03 |
| 1500 | 3.5e-02 ± 2.2e-03 | 6.6e-02 ± 2.0e-03 | **2.4e-02 ± 1.8e-03** | 3.2e-02 ± 2.4e-03 |

---

Rebuttal Comment 1.1: Comment: Thanks for your detailed response; it addresses some of my concerns. Here are my additional concerns:

- As suggested in the response, $\phi(\cdot)$ is supposed to be a polynomial function basis. I looked into Appendices C.4, C.5 and Section 5, and I did not find a precise definition of $\phi(\cdot)$. In my view, it is still unclear what the polynomial function basis is, how it contributes to the developed theory, and why it is supposed to be more efficient, as stated in lines 408-409 of the left column. Do the authors mean $\phi_{k}(p) = \psi(p)^{k}$?
- Additionally, I noticed that the authors made an assumption in a proof; see line 1017. Although Theorem 5.1 states, "Under certain regularity conditions," I think it would be clearer to explicitly state the assumption, especially since it is brief. Moreover, in Table 1, to my understanding, the second measure does not satisfy this assumption, which might be seen as a limitation of their developed theory.

I am quite hesitant to raise my score because the part related to the so-called "spline function-based" approach seems incomplete.
In particular, the authors claim this as one of their contributions in the abstract.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for their continued engagement with our work. We are glad that we were able to address your major concern with respect to the novelty of QuEst with respect to existing methods. The main significance of our paper is in generalizing PPI and existing techniques to L-estimators. This innovation requires highly non-trivial derivations, and enables a wide range of important applications, including those highlighted in our comprehensive experiment section. Given that the reviewer expresses no more concerns with respect to the main contribution of our idea, we respectfully and sincerely request that they reconsider their score of 1. We also thank the reviewer for their close reading of our extension section. While this is not part of our core QuEst method, we believe that this add-on also holds value in allowing for better estimates and tighter confidence intervals. We agree that this section would benefit from enhanced clarity in a final draft, and plan to address this. However, we do maintain that the algorithm as presented in this section is both complete and correct. Please see below for responses to your particular concerns; hopefully these may also lead you to consider raising your score.

**Q1**: Use of polynomial basis function.

**A1**: Here we will clarify more about the polynomial function basis. We are sorry about the confusion caused, and will include further clarification in our revised version.

1. We would like to point out that our extension in Section 5 is for **general** basis functions $\phi$ such that $\tilde{\psi}(\cdot)=\xi^T\phi(\cdot)$. Here, $\phi(\cdot)$ can be any basis function, like a Fourier basis or polynomial function basis. The proof of the theory **does not** depend on the polynomial function basis; it is just an instance of one choice of $\phi(\cdot)$.
2. Here, "polynomial function basis" is commonly used terminology in mathematics, which means $\phi(x)=(1,x,x^2,\cdots,x^k)^T$, where $k$ is our choice. We specifically used that in our experiment in Appendix B, Figure 6.
3. Regarding efficiency, this is because we can specify $\phi(\cdot)$ to be complex enough so that optimizing $\tilde \psi$ can reach a lower variance. Notice that as long as we choose $\phi$ to be a family of expressive enough basis functions, finetuning $\tilde \psi$ is more flexible than just finetuning a multiplicative parameter $\lambda$, as we mentioned in line 416. Note that by classic approximation theory, a polynomial function basis can approximate a wide class of functions very well if we choose $k$ large enough.

**Q2**: Assumptions and application to measures in Table 1.

**A2**: Thank you for the suggestion regarding moving assumptions to the main paper. We have stated the clear conditions in Appendix C.5; we will also include them in the main statement in the revised version. Previously, our goal given space constraints was to briefly summarize the argument in the main paper, and leave highly technical details in the appendix for interested readers. Based on the reviewer's concern, we can move some important details forward. Additionally, we note that we **did not** claim that our theory in Section 5 can cover all the cases in Table 1. But our theory is general enough to cover CVaR and Interval VaR, which already give rise to a lot of interesting applications as highlighted throughout the paper. Most importantly, Section 5 is just an extension section; it is **not** our main contribution. Our main contribution is greatly generalizing PPI to L-estimators with new techniques and applying it to new, interesting applications in genomics, social science, and LLM evaluation. Again, we thank the reviewer for the time and consideration in reviewing our work.
Once again, we are pleased that the reviewer seems satisfied with the novel contribution of our main QuEst method, and hope that we have been able to address your questions about the extension section, which we will thoroughly revise based on the reviewer's feedback. We would greatly appreciate it if the reviewer would reconsider their evaluation based on our discussion, and potentially also in light of the feedback of other reviewers.
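To make the A1 discussion concrete, here is a minimal editorial sketch of the polynomial basis $\phi(p)=(1,p,p^2,\cdots,p^k)^T$ with the coefficient vector $\xi$ fit by least squares so that $\xi^T\phi(\cdot)$ approximates a tail-weighting function. (The paper tunes $\xi$ to minimize the estimator's variance; the plain function fit below, with a hypothetical target, is used only to show what the basis is.)

```python
import numpy as np

# Degree-k polynomial basis phi(p) = (1, p, ..., p^k) evaluated on a grid.
k = 5
p = np.linspace(0.0, 1.0, 200)
Phi = np.vander(p, k + 1, increasing=True)  # columns: p^0, p^1, ..., p^k

# Hypothetical target weighting: a CVaR-like indicator on the upper tail.
target = np.where(p >= 0.9, 10.0, 0.0)

# Least-squares fit of xi so that xi^T phi(p) approximates the target.
xi, *_ = np.linalg.lstsq(Phi, target, rcond=None)
psi_tilde = Phi @ xi                        # xi^T phi(p) on the grid
```

Larger $k$ yields a more expressive $\tilde\psi$, which is the flexibility-vs-scalar-$\lambda$ point made in the rebuttal.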
Summary: The authors introduce QuEst, a framework for estimating quantile-based distributional measures (e.g., VaR and CVaR in financial mathematics), which combines a smaller set of real data with a much larger set of output data from machine learning predictions, in a similar fashion to the prediction-powered inference framework introduced by Angelopoulos et al. (2023). The authors extend QuEst to be able to provide point estimates and prediction intervals in multivariate settings, further providing a more efficient implementation using a spline-function-based method. The authors showcase the efficacy of their approach on various real datasets, as well as in red-teaming and news summarization experiments using various LLMs. Claims And Evidence: The claims made by the authors are supported by convincing evidence, as the experiments provide validation of their technique in various settings. Methods And Evaluation Criteria: I am not an expert in prediction-powered inference, but as far as my understanding goes the methods and evaluation criteria are sensible. Theoretical Claims: I have not checked the proofs closely, just skimmed through them. The application of the limiting theorem from Van der Vaart (2000), which I am familiar with, seems sensible and I have not identified any glaring mistake in the proof structure and reasoning. Experimental Designs Or Analyses: I went through the experiments and their setup, which I think are solid and motivate the presented work well. Supplementary Material: I reviewed A and B, just skimmed C. Relation To Broader Scientific Literature: The authors do a good job citing relevant literature in Section 2. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Overall, I've found the paper to be well written and well structured, with the idea of QuEst providing an interesting approach to obtaining estimates of quantile-based distributional measures.
As I am not an expert in this literature, my main question is about the degree of innovation provided by the paper: after going through the manuscript, my understanding is that QuEst provides (a) a better alternative to existing approaches for point estimation of specific quantiles, (b) extends to measures beyond M-estimators, and (c) further extends this setup to high-dimensional settings. If this is correct, how does the estimation of (a) compare with existing PPI approaches, like the ones mentioned in line 45 (second column)?

Other Comments Or Suggestions:

- Can the authors comment on whether it is fair to say that this work extends the work of PPI on M-estimators to L-estimators? (This seems to be implied by line 155, second column.) Can it be generalized further than quantile-based distributional measures?
- The baselines in experiments are not clearly explained, which I appreciate is due to the lack of space. I suggest the authors include a more detailed explanation in the final version of the paper.

Line 137. "M-estiamtors" -> "M-estimators"
Line 199, second column. "simultaneously" -> "simultaneously."

Questions For Authors: Please see the questions above. Overall I think this is a good paper, with my overall recommendation being influenced by some of the points highlighted above and by my being outside of the prediction-powered inference literature. (Finally, as a note outside this review, it is a shame that after using CVaR as an example there was no comparison of CVaR from QuEst versus more traditional financial mathematics approaches, but I am sure someone might work on that once the paper is out.) Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the thoughtful feedback and for recognizing the contribution of QuEst. Below we address your concerns and questions.

[Q1: Relation to existing approaches] This is correct. Existing PPI methods deal with estimators written in the form of sums of i.i.d. random variables. We strictly generalize this by moving beyond M-estimators to a new class of L-estimators (i.e., quantile-based distributional measures). Specifically, if we choose the weighting function as 1, it becomes the mean estimator of a loss function, which is the objective studied in PPI. So the family of estimators we study is a strict generalization of the previous result. The family of estimators we investigate is no longer straightforwardly of the form of a sum of i.i.d. random variables as in PPI; thus, it incurs new challenges and requires new techniques to build central limit theorems. For example, in Appendix C.1.1, in order to prove the CLT, we need to prove $\mathbb{E}Q_\psi(F_n)-Q_\psi (F)=o_p(1/\sqrt{n})$ for certain $\psi$ functions. Then, we need to use techniques in L-statistics such as the Hajek decomposition to build the final CLT. In practice, this shift addresses important real-world needs, such as assessing tail behaviors, population subgroups, or other segments of interest that existing PPI methods do not handle. Our derivations yield asymptotically unbiased estimators and valid confidence intervals for these more complex target measures. Thus, QuEst closes a gap in the literature by providing a unified approach that captures not only means or single quantiles, but also richer distributional statistics of concern in diverse and important fields like biology, economics, and healthcare.

[Q2: Handling L-Estimators] This is mostly correct but not precise.
What we mean in line 155 is that if we choose the weighting function as a special one (the constant function 1), it becomes the mean estimator of a loss function, which is the objective studied in PPI. Instead of claiming we can handle all L-estimators, our claim is that the family of estimators we proposed for QBDMs falls into the category of L-estimators. As for further generalization beyond QBDMs, we can indeed generalize a bit to handle a wider range of other L-estimators (more general weighting functions, and special nonlinear functionals of $Q$). But our main focus here is on quantile-based distributional measures, as they are especially relevant in high-stakes applications like tail risk analysis and policy decisions.

[Q3: Clarifying baselines] For baselines, prior PPI variants are incompatible with the measures considered in our experiments; QuEst is the only existing specialized method for rigorously combining labeled and imputed data to estimate and provide CIs on these measures. Thus we compared against the "classic" approach of using only the labeled data, as well as the use of imputed data only. For the confidence intervals, our "classic" baseline still requires our highly non-trivial CLT derivation (with $\lambda=0$). We will clarify these choices in the final draft, specifying our exact baselines and emphasizing why specialized PPI methods do not directly apply to quantile-based distributional measures. This should ensure readers understand both the rationale behind our baseline selection and the distinct advantages offered by QuEst.

To further address these concerns, we have completed additional experiments on estimating the $\beta$-VaR QBDM (Table 1). This experiment does allow for a direct comparison to PPI, since it is estimating a single quantile, and is thus the only QBDM computable with PPI.
We find that QuEst produces higher quality estimates than our usual suite of observed and imputed baselines as well as the PPI method from Angelopoulos et al. Representative figures will be included in the revised version of the paper.

| Num. Data | Observed | Imputed | QuEst | PPI |
|-----------|------------------------|-----------------------|-----------------------|-----------------------|
| 250 | 9.2e-02 ± 6.7e-03 | **6.3e-02 ± 1.8e-03** | 6.5e-02 ± 4.8e-03 | 6.7e-02 ± 4.9e-03 |
| 500 | 6.5e-02 ± 4.2e-03 | 6.6e-02 ± 1.9e-03 | **4.3e-02 ± 2.9e-03** | 4.5e-02 ± 3.4e-03 |
| 1000 | 4.3e-02 ± 3.2e-03 | 6.6e-02 ± 1.7e-03 | **3.0e-02 ± 2.0e-03** | 3.5e-02 ± 2.4e-03 |
| 1500 | 3.5e-02 ± 2.2e-03 | 6.6e-02 ± 2.0e-03 | **2.4e-02 ± 1.8e-03** | 3.2e-02 ± 2.4e-03 |

[Q4: CVaR] We do not fully understand this point. Our focus on CVaR as one of our primary QBDMs of interest was inspired by its use in the financial mathematics literature. Our innovation applies to this measure; our "classic" baseline corresponds to its application based on that literature. To our knowledge, there are no other known techniques for combining gold-standard and imputed data to enhance CVaR estimates.
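The "classic" baseline described in Q3 corresponds to $\lambda=0$ in a power-tuned prediction-powered estimate. For the simplest case recovered by a constant weighting function (the mean, per Q1), the combination takes the familiar PPI form; the sketch below illustrates that generic form (after Angelopoulos et al., 2023), not the QuEst L-estimator itself:

```python
import numpy as np

def power_tuned_mean(y_labeled, f_labeled, f_unlabeled, lam):
    """Power-tuned prediction-powered estimate of a mean:
        lam * mean(f(X_unlabeled)) + mean(Y_labeled) - lam * mean(f(X_labeled)).
    lam = 0 recovers the labeled-only ("classic") estimate; lam = 1 is the
    standard PPI combination; intermediate lam trades off the two sources."""
    y_labeled = np.asarray(y_labeled, dtype=float)
    f_labeled = np.asarray(f_labeled, dtype=float)
    f_unlabeled = np.asarray(f_unlabeled, dtype=float)
    return lam * f_unlabeled.mean() + y_labeled.mean() - lam * f_labeled.mean()

y = [1.0, 2.0, 3.0]      # gold-standard labels
f_lab = [1.5, 2.5, 3.5]  # model predictions on the labeled points
f_unl = [10.0, 20.0]     # model predictions on the unlabeled points
print(power_tuned_mean(y, f_lab, f_unl, 0.0))  # 2.0: labeled-only baseline
```

With $\lambda=0$ the imputed terms cancel exactly, which is why the classic baseline still needs the same CLT machinery for its confidence intervals.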
Neural Interpretable PDEs: Harmonizing Fourier Insights with Attention for Scalable and Interpretable Physics Discovery
Accept (poster)
Summary: This paper introduces Neural Interpretable PDEs (NIPS), a novel neural operator architecture that enhances both predictive accuracy and computational efficiency in modeling complex physical systems. NIPS builds upon Nonlocal Attention Operators (NAO) by employing a linear attention mechanism combined with a learnable kernel network that functions as a channel-independent convolution in Fourier space. This design eliminates the need to explicitly compute and store large pairwise interactions, effectively amortizing the cost of spatial interactions through the Fourier transform. Claims And Evidence: The claims made in this submission are generally well-supported by empirical evidence and theoretical analysis. The authors provide comprehensive quantitative results across three distinct experimental setups (Darcy flow, MMNIST, and synthetic tissue learning), consistently demonstrating NIPS's superior performance compared to baseline models. Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper are well-aligned with the challenges of learning interpretable operators for physical systems. The NIPS architecture logically addresses the limitations of existing approaches by combining Fourier techniques with linear attention, which is appropriate for handling PDE operators efficiently while maintaining interpretability. Theoretical Claims: The paper does not present formal mathematical proofs in the traditional sense with explicitly stated theorems and detailed proof steps Experimental Designs Or Analyses: The experimental designs in this paper are generally sound and well-executed. The Darcy flow experiments (Section 4.1) use appropriate synthetic data generation procedures with clear controls across model variants while maintaining comparable parameter counts. The parametric studies in Table 2 systematically explore the effects of random permutations and projection dimensions with sufficient sampling to establish trends. 
One limitation is that statistical significance of the performance differences is not explicitly established through multiple runs with different random seeds. Supplementary Material: I reviewed the supplementary material provided in the paper. The first section provides valuable information about data generation procedures for all three experimental settings. The second section elaborates on the implementation configurations of all baseline models, ensuring comparability in the experiments. Relation To Broader Scientific Literature: The key contributions of this paper build upon several important strands of research in the computational physics and machine learning literature. NIPS integrates two significant developments in neural operators: the Fourier Neural Operator (FNO) framework introduced by Li et al. (2020c), which leverages spectral methods for efficient computation, and the Nonlocal Attention Operators (NAO) recently proposed by Yu et al. (2024), which focus on interpretability through attention mechanisms. Essential References Not Discussed: While the authors cite linear attention work by Cao (2021), they do not discuss Performer (Choromanski et al., 2020, NeurIPS) which introduced kernel-based approximations for efficient attention that are conceptually related to their Fourier-domain approach. Second, the paper could benefit from citing Graph Neural Operators (GNOs) by Li et al. (2020, NeurIPS), as these provide an alternative perspective on handling non-local interactions in operator learning that would help contextualize NIPS's approach. Other Strengths And Weaknesses: Pros: The harmonization of Fourier insights with attention mechanisms represents a creative combination of existing ideas that yields tangible performance improvements. The dual capability of NIPS to solve both forward and inverse problems within a unified framework is especially significant, as it addresses a fundamental challenge in computational physics. 
Cons: While empirical results are strong, a more rigorous mathematical foundation would strengthen the paper's contribution. Additionally, the experimental evaluation could benefit from a more detailed analysis of failure cases or boundary conditions where the method might underperform. The paper would also be strengthened by exploring more diverse or complex physical systems beyond the three test cases presented, particularly systems with higher dimensionality or stronger nonlinearities. Other Comments Or Suggestions: NA Questions For Authors: 1. Your experiments show AFNO consistently achieving error rates exceeding 50% across all scenarios, which seems unusually poor for an established method. Could you clarify whether this reflects a fundamental limitation of AFNO for these specific tasks, or if there might be implementation factors affecting its performance? 2. While you demonstrate strong zero-shot generalization capabilities within similar PDE systems, how would NIPS perform when transferring to fundamentally different classes of PDEs? For instance, could a model trained on diffusion equations generalize to wave equations or advection-dominated systems? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. Our response:

**Multiple runs with different random seeds**: We repeat the experiments three times with randomly selected seeds in the first case of experiment 1, using NIPS and the major baseline models including NAO and AFNO. The results are reported in the table below. Consistent with the findings in the original manuscript, NIPS outperforms the best baseline by 28.9\%.

Table 1: Test errors and number of trainable parameters for the Darcy flow problem. Bold numbers highlight the best method.

| Model | NIPS | NAO | AFNO |
| :------------- | :-----------: | :-----------: | :-----------: |
| Test error | **2.31\%**$\pm$**0.03\%** | 3.25\%$\pm$0.14\% | 52.92\%$\pm$0.72\% |

**Additional discussion on Performer and GNO**: We appreciate the suggestion and agree that additional discussion on these works will help contextualize NIPS. NIPS is conceptually related to Performer, which introduces kernel-based approximations for efficient self-attention by replacing softmax attention with positive orthogonal random features, achieving linear scaling with sequence length. While both aim to reduce the quadratic complexity of self-attention, NIPS employs a Fourier-domain reformulation that leverages convolutional structure. Unlike Performer’s randomized feature mappings, NIPS deterministically decomposes attention using spectral representations, making it well-suited for physics-based PDE learning. Our work is also related to GNOs, which use graph message passing to model nonlocal interactions in operator learning tasks. While GNOs capture spatial dependencies over irregular domains, NIPS operates in the Fourier domain, leveraging spectral representations to encode long-range dependencies in a structured manner. Whereas graph-based architectures like GNOs and NAO learn operators on discretized graph structures, our Fourier-based approach is particularly efficient on structured grids.
We will incorporate this discussion in the revised manuscript.

**More rigorous mathematical foundation**: We appreciate the suggestion. NIPS inherits resolution invariance and improved identifiability from NAO, and we will add this discussion to the manuscript. While further mathematical analysis is an interesting future direction, our current focus is on developing a novel architecture that enhances predictive accuracy and computational efficiency for both forward and inverse PDE problems.

**More detailed analysis of challenging and failure cases**: To explore where NIPS might fail, we consider two out-of-distribution scenarios. The training dataset uses a microstructure $b$ generated by a Gaussian random field with covariance operator $(-\Delta + 5^2)^{-4}$ and a loading field $g$ generated by $(-\Delta + 5^2)^{-1}$. The out-of-distribution scenarios are: 1) in-distribution (ID) microstructure $b$ and out-of-distribution (OOD) $g$ using $(-\Delta + 5^2)^{-4}$; 2) OOD microstructure $b$ using $(-\Delta + 5^2)^{-1}$ and ID $g$. Results below show that both NIPS and NAO perform well across these scenarios in terms of forward operator errors and kernel errors, but microstructure errors increase, especially in scenario 2, where the out-of-distribution microstructure is rougher and more detailed. This complexity makes it harder to recover the ground truth, particularly in an unsupervised learning context.

Table 2: Test, kernel, and microstructure errors for Darcy flow. Bold numbers highlight the best method.
| Data Setting | Case | Model | \#param | Test error | Kernel error | Microstructure error |
| :------------- | :-----------: | :-----------: | :-----------: | :-----------: | :------------- | :-----------: |
| ID | No noise | NIPS | 327,744 | **4.09\%** | **9.24\%** | 7.92\% |
| | | NAO | 331,554 | 4.15\% | 10.35\% | **7.09\%** |
| OOD, Scenario 1 | No noise | NIPS | 327,744 | **3.40\%** | **10.63\%** | **11.23\%** |
| | | NAO | 331,554 | 6.86\% | 14.46\% | 20.37\% |
| OOD, Scenario 2 | No noise | NIPS | 327,744 | **4.98\%** | **9.68\%** | **16.06\%** |
| | | NAO | 331,554 | 5.57\% | 10.94\% | 20.30\% |

**Explanation on AFNO performance**: AFNO performs poorly because it is designed to learn solutions for a single PDE system and cannot handle inputs from multiple systems. In contrast, both NAO and NIPS leverage global prior knowledge from training data of multiple systems, allowing them to generalize across unseen system states.

**Transferring NIPS to fundamentally different PDEs**: As mentioned in NAO, attention operators extract prior knowledge in the form of identifiable kernel spaces from multiple PDE tasks. This approach may not work well if the kernel for a new problem differs significantly from those in the training data. For example, if NIPS is trained on diffusion problems with symmetric kernels (stiffness matrices), it will struggle to predict stiffness matrices for non-symmetric systems, like those in advection problems. We will include this discussion in the revised paper.
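The Fourier-domain reformulation discussed in this rebuttal rests on the convolution theorem: a circular convolution, which naively requires $O(N^2)$ pairwise interactions, equals pointwise multiplication of spectra at $O(N\log N)$ cost. A minimal 1-D sketch (illustrative only, not the NIPS architecture):

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
u = rng.standard_normal(n)   # input field on a periodic grid
ker = rng.standard_normal(n) # convolution kernel

# Direct circular convolution: explicit O(n^2) pairwise interactions.
direct = np.array([sum(ker[(i - j) % n] * u[j] for j in range(n))
                   for i in range(n)])

# Fourier route: pointwise multiplication of spectra, O(n log n) via the FFT.
via_fft = np.fft.ifft(np.fft.fft(ker) * np.fft.fft(u)).real

assert np.allclose(direct, via_fft)
```

Amortizing the pairwise interaction through the FFT in this way is what lets the kernel be applied without ever materializing the full token-by-token interaction matrix.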
Summary: This article introduces an operator learning method, called NIPS, that computes the solution to a PDE with a data-dependent kernel. The kernel comes from a linear attention mechanism performed in Fourier space, which improves the efficiency of the method. The method can also be used for the inverse problem of discovering the governing parameters of a PDE given the solution. This method is tested on Darcy flow, mechanical MNIST, and synthetic tissue models. The method is claimed to be accurate, efficient, and interpretable. Claims And Evidence: The claim of efficiency is not substantiated clearly. The computational domains of the computer experiments remain fairly small. All discretizations are under 2000 points, which does not allow for large computational domains. Larger domains need to be studied to illustrate the efficiency and scalability of the method. Methods And Evaluation Criteria: The chosen applications are also weak baselines [1]. More computational experiments are needed to benchmark the method. [1] McGreivy, Nick, and Ammar Hakim. "Weak baselines and reporting biases lead to overoptimism in machine learning for fluid-related partial differential equations." Nature Machine Intelligence 6.10 (2024): 1256-1269. Theoretical Claims: This is an empirical study. Experimental Designs Or Analyses: See concern above. Supplementary Material: The supplementary material presents some details on the computational experiments. Relation To Broader Scientific Literature: This work is an extension of Nonlocal Attention Operators with linear attention in the Fourier domain. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments and questions. Our response:

**Larger discretization**: We demonstrate the scalability of NIPS using larger discretizations (e.g., $121\times121$ with 14,641 tokens in total), and compare its per-epoch runtime and peak GPU memory usage with the best-performing baseline, NAO. The results are summarized in the table below, where "x" indicates that the method exceeds memory limits on a single NVIDIA A100 GPU with 40 GB memory due to the explicit computation of spatial token interactions. Compared to the quadratic NAO that fails at 3.7k tokens and the linear NAO that fails at 6.5k tokens, NIPS can easily scale up to 15k tokens on a single A100 GPU. Note that the analysis is performed on a single GPU. To further accelerate training, one can utilize multiple GPUs and leverage distributed training frameworks, such as PyTorch’s Distributed Data Parallel (DDP) module, which will be an interesting direction for future work.

Table 1: Scalability demonstration via per-epoch runtime and peak memory footprint for the Darcy flow problem. All models are 4-layer. The per-epoch runtimes for both the original quadratic attention and the re-formulated linear attention are reported, separated by /. "x" denotes exceeding memory limits on a single NVIDIA A100 GPU with 40 GB memory.

| | \# tokens | 441 | 1,681 | 3,721 | 6,561 | 10,201 | 14,641 |
|-------------------------|-----------:|--------:|--------:|--------:|--------:|--------:|--------:|
| **Per-epoch runtime (s)** | **NIPS** | 2.5 | 8.4 | 17.1 | 25.1 | 45.6 | 65.66 |
| | **NAO** | 7.8/2.5 | 58.8/11.8 | x/20.9 | x/x | x/x | x/x |
| **Peak memory usage (GB)** | **NIPS** | 0.46 | 1.68 | 3.68 | 6.46 | 10.04 | 14.39 |
| | **NAO** | 2.47/0.66 | 32.61/5.69 | x/25.53 | x/x | x/x | x/x |

**Weak-baseline applications**: Thank you for pointing us to this insightful work.
The referenced paper highlights two key principles for fair comparisons: (1) evaluating models at either equal accuracy or equal runtime, and (2) comparing against an efficient numerical method. First, we clarify that our comparisons focus on state-of-the-art (SOTA) machine learning methods rather than standard numerical solvers used in data generation. Specifically, we evaluate our proposed model, NIPS, against NAO, its variants, and AFNO, which are established baseline models in the field. To ensure fairness (rule 1), we maintain a comparable total number of trainable parameters across models, as large discrepancies could lead to misleading conclusions. Within this constraint, we assess (1) per-epoch runtime to measure computational efficiency and (2) test errors to evaluate prediction accuracy. Regarding rule 2, we select SOTA neural operator models as baselines to demonstrate the advantages of NIPS in accuracy and efficiency. These models represent the strongest available baselines for learning-based PDE solvers. Therefore, our experimental setup adheres to the principles outlined in the referenced paper. We will incorporate this discussion into our revised manuscript to further clarify our evaluation methodology.
Summary: The paper introduces Neural Interpretable PDEs (NIPS), an attention-based neural operator architecture for solving forward and inverse PDE problems. NIPS utilizes a learnable kernel network to optimize efficiency in Fourier transform interactions by avoiding the explicit computation and storage of pairwise interactions. Building upon Nonlocal Attention Operators (NAO), the authors incorporate linear attention to improve scalability and interpretability when learning large physical systems. The authors demonstrate that NIPS surpasses NAO and other baselines across multiple physics modeling benchmarks. Claims And Evidence: The authors claim efficiency in Fourier transform interaction, interpretability and scalability based on the improved attention structure, and generalizability to zero-shot unseen physical systems. The numerical results in Table 1 provide evidence that NIPS has lower test errors and faster training times in the Darcy flow problem compared to the baselines. Interpretability is primarily demonstrated through visualizations in the Darcy flow section. While these visualizations provide a qualitative explanation, it is suggested to extend them to other tested datasets or incorporate formalized metrics, as interpretability is a relatively subjective term. Methods And Evaluation Criteria: The benchmark datasets utilized are 2D Darcy Flow, Mechanical MNIST, and Synthetic Tissue Learning. Not limited to fluid dynamics, the benchmarks also incorporate material and biomechanics datasets. The evaluation metric utilized is mainly relative mean squared error (rMSE), which is a standardized metric for evaluating prediction performance. Theoretical Claims: The theoretical claims in Section 3.2 and Section 3.3 introduce the expression of the attention-mechanism-based kernel map and the reformulation of the Fourier-domain kernel to enhance efficiency. Specifically, the reformulation and the big-O assessment in Section 3.3 look correct to me.
Experimental Designs Or Analyses: The experiments confirm NIPS' capability in forward and inverse PDE solving as well as zero-shot generalization. Some vital baselines for this method, such as NAO and its relatives, are included. Additionally, multiple cases of token and layer numbers are discussed. One suggestion is to include real-world PDE data to further demonstrate NIPS' practical applicability, such as airfoil flow and climate data. They could provide additional insights into how well NIPS generalizes to noisy data beyond synthetic PDE problems. Supplementary Material: Supplementary materials include the dataset and implementation specifications. These sections are important to make NIPS reproducible. One suggestion is to add the model architectures as an additional section for clarity. Relation To Broader Scientific Literature: This paper proposes a useful method that is closely related to multiple facets of PDE learning, including Neural Operators (NO), Fourier Neural Operators (FNO), and its prior work Nonlocal Attention Operators (NAO). It is also related to both forward and inverse PDE solving in efficiency and effectiveness considerations. Essential References Not Discussed: The authors discuss the majority of essential references. Other Strengths And Weaknesses: Clarity: The paper flows smoothly, making it easy for readers to follow. Other Comments Or Suggestions: Suggestions are mentioned in the earlier sections. Questions For Authors: Question 1: How would interpretability vary across different zero-shot conditions? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful comments. Our response: **Formalized interpretability metric**: We thank the reviewer for this valuable suggestion. A quantitative evaluation of the interpretability can be provided by the recovered kernel. Taking the Darcy flow problem for instance, the discovered kernel should correspond to the stiffness matrix $K$, and the underlying permeability field $b(x)$ can be obtained by solving an optimization problem: $B^*=\operatorname{argmin}_B \sum_{i,j}\|K_B[i,j]-K[i,j]\|^2$. Here $B=[b(x_1),\cdots,b(x_N)]$ denotes the pointwise values of $b$ at the grid points, and $K_B$ is the corresponding stiffness matrix obtained using a finite difference solver. In the table below, we take an $11\times11$ grid, and calculate the relative errors of $K$ and $b$ to provide a quantitative evaluation of the interpretability. Beyond the in-distribution (ID) scenario where the microstructure $b$ is generated using a Gaussian random field with covariance operator $(-\Delta + 5^2)^{-4}$ and the loading field $g$ is generated with $(-\Delta + 5^2)^{-1}$, two out-of-distribution (OOD) scenarios are included: 1) data with ID microstructure $b$ and OOD $g$ using covariance operator $(-\Delta + 5^2)^{-4}$; 2) data with OOD microstructure $b$ using covariance operator $(-\Delta + 5^2)^{-1}$ and ID $g$. Intuitively, these two OOD tasks are more challenging. One can see that NIPS achieves the smallest error in both test errors and kernel/microstructure errors in almost all scenarios. Table 1: Test errors, kernel errors, and microstructure errors for the Darcy flow problem. Bold numbers highlight the best method. 
|Data Setting | Case | Model | \#param | Test error | Kernel error | Microstructure error |
| :------------- | :-----------: | :-----------: | :-----------: | :-----------: | :------------- | :-----------: |
|ID | No noise | NIPS | 327,744 | **4.09\%** | **9.24\%** | 7.92\% |
| | | NAO | 331,554 | 4.15\% | 10.35\% | **7.09\%** |
|OOD, Scenario 1 | No noise | NIPS | 327,744 | **3.40\%** | **10.63\%** | **11.23\%** |
| | | NAO | 331,554 | 6.86\% | 14.46\% | 20.37\% |
|OOD, Scenario 2 | No noise | NIPS | 327,744 | **4.98\%** | **9.68\%** | **16.06\%** |
| | | NAO | 331,554 | 5.57\% | 10.94\% | 20.30\% |
|ID | Noise $\epsilon\sim\mathcal{N}(0,0.01^2)$ | NIPS | 327,744 | 4.40\% | 9.47\% | 7.97\% |
|OOD, Scenario 1 | Noise $\epsilon\sim\mathcal{N}(0,0.01^2)$ | NIPS | 327,744 | 3.88\% | 11.14\% | 13.63\% |
|OOD, Scenario 2 | Noise $\epsilon\sim\mathcal{N}(0,0.01^2)$ | NIPS | 327,744 | 5.65\% | 10.21\% | 17.36\% |
|ID | Noise $\epsilon\sim\mathcal{N}(0,0.1^2)$ | NIPS | 327,744 | 9.98\% | 21.77\% | 14.84\% |
|OOD, Scenario 1 | Noise $\epsilon\sim\mathcal{N}(0,0.1^2)$ | NIPS | 327,744 | 3.84\% | 21.08\% | 19.87\% |
|OOD, Scenario 2 | Noise $\epsilon\sim\mathcal{N}(0,0.1^2)$ | NIPS | 327,744 | 11.67\% | 22.77\% | 26.54\% |

**Real-world data to demonstrate practical applicability**: We appreciate the reviewer's valuable suggestion. To the best of our knowledge, most available airfoil flow and climate datasets [1-2] are also generated synthetically by solving PDEs. To address the reviewer's request, we instead emulate a real-world data setting by including additional noise in the dataset. In particular, we perturb the training data with additive Gaussian noise: \begin{equation} \widetilde{g}(x) = g(x) + \epsilon(x), \quad \epsilon \sim \mathcal{N}(0, \sigma^2), \end{equation} where $g(x)$ is the true source field, and $\epsilon(x)$ represents zero-mean Gaussian noise with variance $\sigma^2$. The results are presented using the Darcy flow experiment, with $\sigma=0.01$ and $0.1$. 
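The noisy-data setting described in this rebuttal amounts to the following minimal numpy sketch (the field `g`, the grid size, and the relative-error metric are illustrative assumptions, not the authors' exact code):

```python
import numpy as np

rng = np.random.default_rng(0)
g = rng.standard_normal((11, 11))  # stand-in for the true source field on a grid

def perturb(g, sigma, rng):
    """Return g(x) + eps(x) with eps ~ N(0, sigma^2), as in the displayed equation."""
    return g + rng.normal(0.0, sigma, size=g.shape)

def rel_error(estimate, truth):
    """Relative error in the Frobenius norm, the kind of metric behind the % columns."""
    return np.linalg.norm(estimate - truth) / np.linalg.norm(truth)

g_noisy = perturb(g, sigma=0.1, rng=rng)
print(f"relative size of the perturbation: {rel_error(g_noisy, g):.3f}")
```

With a standard-normal field, a noise level of $\sigma=0.1$ corresponds to roughly a 10% relative perturbation, which is the scale of the second noise setting in the table.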
From the results, we can see that NIPS's predictions unavoidably deteriorate under increased levels of observational noise, but they remain robust. [1] Z. Li. Fourier neural operator with learned deformations for PDEs on general geometries. JMLR, 2023. [2] J. Gupta. Towards multi-spatiotemporal-scale generalized PDE modeling. TMLR, 2022. **Add model architecture to appendix**: We thank the reviewer for the valuable suggestions. Besides Fig. 1 and Algorithm 1, we will add an additional section to provide details on the model architecture. We will also release source code and datasets upon paper acceptance to guarantee clarity and reproducibility of all experiments. **Interpretability variation across different zero-shot conditions**: As discussed above, we consider three zero-shot generalization scenarios (ID, OOD for loading $g$, and OOD for microstructure $b$), with results listed in the above table. Both NIPS and NAO generalize well across these scenarios in forward operator errors and kernel errors. However, microstructure errors may deteriorate, particularly in scenario 2 (OOD microstructure $b$). This is expected, as the microstructure in this scenario is not only OOD but also rougher, incorporating more fine-grained details. This added complexity makes recovering the ground truth more challenging, especially in an unsupervised learning setting.
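The microstructure-recovery optimization behind these interpretability numbers can be sketched in a toy 1D analogue (the finite-difference discretization, grid size, and all names here are illustrative assumptions; the rebuttal's actual setup is 2D Darcy flow):

```python
import numpy as np
from scipy.optimize import least_squares

# Given a "discovered" kernel K, recover pointwise conductivities b by
# minimising sum_{i,j} |K_b[i,j] - K[i,j]|^2, as described in the rebuttal.
N = 11

def stiffness(b):
    """Finite-difference stiffness matrix on N grid points, assembled from the
    N-1 cell values in b (simple Dirichlet-style boundary rows)."""
    K = np.eye(N)
    for i in range(1, N - 1):
        K[i, i - 1] = -b[i - 1]
        K[i, i] = b[i - 1] + b[i]
        K[i, i + 1] = -b[i]
    return K

rng = np.random.default_rng(1)
b_true = 1.0 + 0.5 * rng.random(N - 1)
K_target = stiffness(b_true)  # stands in for the kernel recovered by the model

fit = least_squares(lambda b: (stiffness(b) - K_target).ravel(), x0=np.ones(N - 1))
rel_err = np.linalg.norm(fit.x - b_true) / np.linalg.norm(b_true)
print(f"microstructure relative error: {rel_err:.2e}")
```

Because the stiffness matrix is linear in `b` and every cell value appears in it, the least-squares fit recovers the toy microstructure essentially exactly; with a learned, noisy kernel the residual error becomes the "microstructure error" reported in the table.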
Conformal Prediction for Hierarchical Data
Reject
Summary: This paper studies the problem of conformal prediction for hierarchically-structured multivariate regression datasets. Hierarchical coherence of the different outputs is encoded via a projection matrix. A novel split conformal prediction algorithm is introduced with two objectives in mind: marginal coverage for each of the output dimensions, and small intervals for each of the output dimensions. The authors present extensive theoretical results and limited experimental results for their work. The main theoretical result is Theorem 3.7, which shows that the hierarchical conformal prediction algorithm presented by the authors leads to a smaller expected set size compared to the plain multivariate version. In the experiments a synthetic dataset and one real-world dataset are considered. Claims And Evidence: I have concerns about the theoretical and experimental claims. See below for more details. Methods And Evaluation Criteria: I find the experimental results quite weak. Only one real-world dataset is considered, and this dataset is not convincing for me. Energy-demand forecasting is a time series problem, so the nonconformity scores will not be i.i.d. This is problematic, since the authors are not capable of showing that their method is useful for a practical problem. Overall, the motivation for extending conformal prediction to the hierarchical multivariate case is quite weak. Are there actually real-world static regression problems with i.i.d. data and multivariate hierarchically-structured outputs? The minimum I expect is that at least one problem of that kind is studied in the experiments. Theoretical Claims: I did not check the proofs carefully. (I have to review 6 papers for ICML, so I don't have time to read appendices or proofs) In Theorem 3.7 it is not clear to me whether the theoretical improvement over the plain multivariate conformal prediction algorithm comes from Assumption 3.6 or from imposing the hierarchical structure. 
My gut feeling is that it comes from Assumption 3.6. It is quite obvious that, with stronger assumptions than i.i.d. data or exchangeability on the nonconformity scores, stronger theoretical guarantees in terms of coverage or interval length can be obtained. Can theoretical improvement of the presented method be proven without introducing additional assumptions? In fact the elliptical distribution assumption of the nonconformity scores is a very strong assumption that is somewhat against the spirit of the conformal prediction literature (i.e. distribution-free guarantees). Experimental Designs Or Analyses: Apart from the remark on the non i.i.d. nature of the real-world dataset, I don't have other remarks about the experiments. Supplementary Material: Only the experimental setup. That part was clear. Relation To Broader Scientific Literature: Looks ok. Essential References Not Discussed: No missing references. Other Strengths And Weaknesses: Strengths: - The theoretical analysis looks strong - The paper is well written (from a math perspective) - The authors understand the topic very well Weaknesses: - Very specific problem setting - Experiments and assumptions for theory not convincing - The paper is quite technical Other Comments Or Suggestions: In Section 2.2 I did not understand the motivation for using signed nonconformity scores. It is unclear to me what advantage these have over absolute residuals. Please motivate better. In the introduction the notion "data series" is somewhat misleading. Initially I thought that the paper was about hierarchically-structured time series (as in the energy-demand dataset). Questions For Authors: The authors are welcome to give feedback to my comments. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments; as listed in the strengths, our main objective was indeed to develop a theory (of conformal prediction for hierarchical data). This endeavour turns out to be much different from the theory of conformal prediction for multivariate data, as we detail in Issue 3 below. **Issue 1. Very specific problem setting** Real-world static regression problems that fit the setting exist --- for instance, survey data with hierarchically structured answers (e.g., household budget surveys, where expenditures are decomposed across various categories, as https://ec.europa.eu/eurostat/web/microdata/household-budget-survey), and more generally, any regression task for which the answers are divided into subcategories. We did not consider such real-world data sets because of the wish to test robustness, detailed below. &nbsp; **Issue 2. Experiments** The study with synthetic data is meant to exactly illustrate our theoretical claims, while the study based on the real data set shows that the results are robust in practice and should hold beyond the i.i.d. case (and they also show that the elliptical-distribution assumption is reasonable, see Figure 5, page 28). Admittedly, we however did not underline enough that the real case was for the sake of testing robustness, and will correct that. Our approach with this article is basically to derive theoretical results in an ideal setting (i.i.d. and elliptically-distributed forecast errors) so as to establish the theoretical foundations for future research. &nbsp; **Issue 3. Assumptions for theory** **3.1** Actually, the most important source for the improvement is the hierarchical structure. This is best illustrated in Appendix E, where the objective is about joint coverage and where the improvement in efficiency is achieved with no assumption on the distribution of scores (see Theorem E.3). 
**3.2** Now, the main challenge in our setting is that we do not just want some joint coverage, but target a more ambitious goal of coverage for each variable i of the hierarchy, referred to as individual coverages in the sequel. (This wish of individual coverages makes sense for hierarchical data but would not make sense for plain multivariate data.) For individual-coverages results, we do not need any assumption beyond the typical i.i.d. assumption. **3.3** The efficiency results for individual-coverages guarantees, however, do require both, and equally importantly, the hierarchical structure (as in the case of joint coverage) and the distributional assumption of elliptical distribution (as in the literature of forecast reconciliation, where it is standard). Fully distribution-free results would of course have been preferred, but first, elliptical distributions form a vast set of distributions, second, they appear naturally as distributions for residuals, and third, results of efficiency in conformal prediction are most of the time only empirical, while theoretically grounded results are only achieved under additional assumptions. One of the (very) few examples of such theoretical results consists of the article by Le Bars and Humbert (2025, On volume minimization in conformal regression) and requires additional assumptions on the forecast model. (We prefer assumptions on the distribution of scores.) Another reference is Dhillon et al. (AIStats 2024, On the expected size of conformal prediction sets): they pave a way for obtaining theoretical results on the efficiency in terms of some complex multiplicative factor suffering from complex interdependencies between the base forecasting method, the choice of the score, and the distribution of data --- controlling this factor would require complex additional assumptions. 
**3.4** Most of the articles on conformal prediction for multivariate data (including Johnstone and Cox, 2021, and Messoudi et al., 2022, which we both cite) derived empirical efficiency results based on the intuition that elliptical predictive regions will fit the underlying distribution well. To us, this intuition hints at an implicit assumption of elliptical distribution of the scores (which we made explicit). **3.5** All in all, we commit to incorporate the comments above in a revision, and to better insist on the differences in nature between the aims of joint coverage and individual coverages. &nbsp; **Why signed non-conformity scores:** From a practical viewpoint, absolute residuals produce prediction sets centered on the forecasts, which are poor when forecasts are biased. From a technical viewpoint, the projection of signed non-conformity scores corresponds to the non-conformity score of the projected forecasts, which is important in our proofs (e.g., in the first lines of the proof of Theorem 3.2, in Appendix A). Absolute values being non-linear, we do not have that the projection of absolute non-conformity scores corresponds to the absolute values of the projected forecast errors. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response. It made a few things more clear to me.
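The practical point about signed scores made in this rebuttal can be illustrated with a toy split-conformal computation (a hypothetical constant-bias forecaster; plain empirical quantiles are used, without the finite-sample conformal correction, for brevity):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n = 0.1, 5000

# Hypothetical biased forecaster: the truth is N(0, 1), the prediction is 0.5.
y_cal, y_test = rng.normal(size=n), rng.normal(size=n)
pred = 0.5

# Signed scores: two one-sided empirical quantiles give an interval that is
# NOT centred on the forecast, so it absorbs the bias.
s = y_cal - pred
lo, hi = np.quantile(s, alpha / 2), np.quantile(s, 1 - alpha / 2)
signed_len = hi - lo

# Absolute residuals: the interval is forced to be symmetric around the
# biased forecast, hence wider for the same nominal coverage.
abs_len = 2 * np.quantile(np.abs(s), 1 - alpha)

cover = np.mean((y_test >= pred + lo) & (y_test <= pred + hi))
print(f"signed length {signed_len:.2f} vs absolute length {abs_len:.2f}; coverage {cover:.3f}")
```

On this biased toy forecaster, the signed-score interval is noticeably shorter than the absolute-residual interval while achieving the same (roughly 90%) empirical coverage, which is exactly the practical advantage the rebuttal describes.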
Summary: This paper addresses an important question in distribution-free inference--constructing prediction intervals for the multivariate response. Motivated by forecast reconciliation, this paper proposes to utilize the hierarchical information among multivariate responses to enhance the accuracy of prediction sets. Claims And Evidence: Claims are supported by both theoretical and numerical results. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, I have checked the appendix for both technical proofs and additional numerical results. Relation To Broader Scientific Literature: Constructing distribution-free prediction sets for multivariate random vectors is important in practice. Essential References Not Discussed: There are other related papers on conformal prediction for multivariate responses: 1. Xu, Chen, Hanyang Jiang, and Yao Xie. "Conformal prediction for multi-dimensional time series by ellipsoidal sets." arXiv preprint arXiv:2403.03850 (2024). 2. Dheur, Victor, Matteo Fontana, Yorick Estievenart, Naomi Desobry, and Souhaib Ben Taieb. "Multi-Output Conformal Regression: A Unified Comparative Study with New Conformity Scores." arXiv preprint arXiv:2501.10533 (2025). 3. Henderson, Iain, Adrien Mazoyer, and Fabrice Gamboa. "An adaptive covariance based score for conformal inference in multivariate regression." (2024). Other Strengths And Weaknesses: Strength 1. This paper is well-organized, technically solid, and studies an important question. 2. The connection built between conformal prediction and forecast reconciliation is novel. Weakness 1. Overall, the contribution of this paper is limited relative to the literature. Essentially, the projection of scores can be roughly related to the de-correlation of multivariate scores, which is actually already explored in the literature of conformal prediction (please see aforementioned papers). 2. 
The assumption that scores are from an elliptical distribution is limited, which may hurt the distribution-free nature of conformal prediction. Other Comments Or Suggestions: N/A Questions For Authors: 1. Section 2-Settings. Is $m \geq 3$ essential for the theoretical guarantee? More concretely, when $m=2$, can we still apply the current approach? 2. The final output is a hyper-rectangle. I wonder if this could be optimal. Intuitively speaking, if score vector $s$ is a multivariate Gaussian random vector with mean vector $0$ and covariance matrix $\Sigma$, the projection we are looking for should transform the covariance to identity, in which sense, this "reconciliation" approach is the same as "de-correlation". In addition, scores based on Mahalanobis norms are explored in the aforementioned papers, and it would be beneficial to compare the current approach with those scores empirically. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed reading, and also for considering the connections built between conformal prediction and forecast reconciliation as one of the strengths of this submission. We go over the two issues pointed out. &nbsp; **Issue 1 - Projection vs. de-correlation and answer to Question 2** **1.1** The articles cited are all about joint coverage, an objective that we acknowledge before Equation (2) and discuss in detail in Appendix E. We will discuss these articles in these places (and also in the introduction). However, we must underline that joint coverage is a much different objective than the individual coverages stated in Equation (*). It is easy to leverage the hierarchical structure for the objective of joint coverage (see Appendix E), but less so for individual coverages. To us, targeting joint coverage or individual coverages are two distinct problems, which call for different approaches, though possibly based on similar underlying intuitions. **1.2** Now, about de-correlation and the central part of Question 2: does the reviewer refer to using the oracle projection matrix stated in Eq. (6) of Section 3.3? If so, our answer is that the two approaches are conceptually very different or that at least, we do not see the connection. Indeed, when H is the identity matrix (which corresponds to the classic multivariate setting with no hierarchical constraints), then the projection (6) is the identity and the covariance matrix is not involved at all. **1.3** As discussed below in our answer to Question 1, forecast reconciliation can be interpreted as forecast combination under constraints. Thus, using the oracle projection matrix stated in Eq. (6) rather corresponds to projecting without changing too much the nodes where the forecast errors are low compared to the others, which is encoded by the covariance matrix (and not to a de-correlation). 
**1.4** To now answer the first part of Question 2, we target hyper-rectangles because we want individual coverage guarantees (for each element i of the hierarchy), and such individual guarantees make sense for hierarchical data (while they would not for truly multivariate data). We thus cannot wonder what the optimal shape of the prediction region is: it has to be some hyper-rectangle. **1.5** For the last part of Question 2, note that we consider scores based on the Mahalanobis norms in Appendix E, as they relate to joint coverage -- but did not consider them in our experiments, which focus on individual coverages. **1.6** All in all, we will incorporate the comments above and better insist on the differences in nature between the aims of joint coverage and individual coverages. &nbsp; **Issue 2 - Elliptical distribution for scores** **2.1** Yes, fully distribution-free results would of course have been preferred, but first, elliptical distributions form a vast set of distributions, second, they appear naturally as distributions for residuals, and third, results of efficiency in conformal prediction are most of the time only empirical, while theoretically grounded results are only achieved under additional assumptions. One of the (very) few examples of such theoretical results consists of the article by Le Bars and Humbert (arXiv 2025, On Volume Minimization in Conformal Regression) and requires additional assumptions on the forecast model. (We prefer assumptions on the distribution of scores.) Another reference is Dhillon et al. 
(AIStats 2024, On the expected size of conformal prediction sets): they pave a way for obtaining theoretical results on the efficiency in terms of some multiplicative factor, but the latter suffers from complex interdependencies between the base forecasting method, the choice of the score, and the distribution of data --- handling this multiplicative factor would require complex additional assumptions to get any efficiency results (and it is not even clear which assumptions would work). **2.2** Most of the articles on conformal prediction for multivariate data (including Johnstone and Cox, 2021, and Messoudi et al., 2022, which we both cite, and the articles mentioned by the reviewer) derived empirical efficiency results based on the intuition that elliptical predictive regions will fit the underlying distribution well. To us, this intuition hints at an implicit assumption of elliptical distribution of the scores (which we made explicit). &nbsp; **Question 1:** Yes, the theoretical results hold for m=2 and are actually interesting: in this case, the observations are equal for both nodes but the base forecasts can be different, and thus, our approach corresponds to finding the best combination of two forecasts (where 'best' refers, in our setting, to the interval lengths). The links between forecast combination and forecast reconciliation were studied, as far as point forecasts are concerned (not conformal prediction), by Hollyman et al. (EJOR 2021, Understanding forecast reconciliation). --- Rebuttal Comment 1.1: Comment: Thank you for the clarification! I have another question regarding the coverage guarantee: as the individual guarantee for multivariate response is related to individual p-values in multiple testing problems, if we are interested in a set of coordinates in Y and would like to combine those individual prediction sets, would it be loose in terms of coverage guarantee. 
Overall, the authors’ response is very helpful and I will adjust my score based on later discussion with other reviewers. --- Reply to Comment 1.1.1: Comment: Indeed, Bonferroni-correction approaches (or copula-based approaches) can be used to obtain simultaneous coverage guarantees for several (and possibly all) coordinates. However, the resulting prediction regions, for similar joint coverage levels, are likely to be less efficient as their shapes are more constrained (e.g., they would be rectangles in case Bonferroni corrections are used).
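For concreteness, the kind of reconciliation projection debated in this thread can be sketched on a toy two-leaf hierarchy (this uses the standard trace-minimising, MinT-style projection from the forecast-reconciliation literature; the paper's oracle matrix in Eq. (6) may differ in its exact weighting, and the covariance W here is a made-up example):

```python
import numpy as np

# Toy hierarchy: y_total = y_a + y_b, stacked as (total, a, b).
S = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])
W = np.diag([2.0, 1.0, 1.0])  # hypothetical error covariance of the base forecasts

# Trace-minimising reconciliation projection onto coherent vectors.
Winv = np.linalg.inv(W)
P = S @ np.linalg.inv(S.T @ Winv @ S) @ S.T @ Winv

base = np.array([10.0, 4.0, 5.0])  # incoherent base forecasts: 4 + 5 != 10
rec = P @ base
print(rec, "coherent:", bool(np.isclose(rec[0], rec[1] + rec[2])))
```

Note that P is idempotent (a genuine projection) and, consistent with the rebuttal's point, it is driven by the hierarchy matrix S: the covariance only decides how the incoherence is distributed across nodes, it is not a de-correlation of the scores.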
Summary: The authors extend the conformal prediction framework to hierarchical data, defined as multivariate data where some variates are linear combinations of covariates. The authors establish tighter efficiency bounds on the size of the conformal intervals, and also establish new bounds on a component-wise coverage objective. They build upon split conformal prediction, and make a simple modification by constructing a projection matrix $P$ to take the original SCP prediction $\hat{u}$ to a full hierarchical prediction $\tilde{u}$. This is inspired by existing literature in forecast reconciliation. Claims And Evidence: Yes. All claims made are well-supported by clear proofs. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not carefully check the proofs. Experimental Designs Or Analyses: The experimental design was done rigorously, with 1000 runs over artificial data as well as 360 runs over real data. However, the experiments are lacking in breadth, as both settings only consider the somewhat trivial 5-2-1 hierarchy. It would be more interesting to conduct a scaling experiment to see how the proposed algorithms scale in both time and sample efficiency with respect to greater complexity of hierarchical levels. Supplementary Material: N/A Relation To Broader Scientific Literature: This paper helps unite the previous somewhat disparate areas of multivariate conformal prediction and forecast reconciliation. In addition, they introduce for the first time the component-wise coverage objective, which will be a useful objective for future work in multivariate conformal prediction. Essential References Not Discussed: There is no related-works section, although the introduction gives a comprehensive exposition. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed reading and the positive comments w.r.t. experiments. We actually conducted the ones with synthetic data on larger hierarchies (and obtained similar results) but found it difficult to report the results. Figure 2 is already quite complex with the simple 5-2-1 hierarchy. On second thoughts, we could have included a table summarizing the performance on such larger hierarchies (like averages of average coverage and lengths, per layer of the hierarchy). We will do so in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the reply.
Summary: The work proposes application of conformal prediction to hierarchical data. The idea is a combination of two approaches, i.e. split conformal prediction and forecasting reconciliation. The proposed method not only provides global coverage but also component-wise coverage, with the computed prediction regions efficient in size. ## update after rebuttal I think the authors adequately addressed my questions, but I think they could have done a better job at evaluation, hence I will keep my score. Claims And Evidence: Yes, there is convincing evidence on the efficacy of the method. Methods And Evaluation Criteria: Yes Theoretical Claims: Yes! Proofs such as coverage guarantees. Experimental Designs Or Analyses: Yes! Supplementary Material: Only additional results. Relation To Broader Scientific Literature: Whilst there are not a lot of works considering conformal prediction, there is no coherent discussion of related works. Essential References Not Discussed: A key work that discusses conformal prediction in the hierarchical setting is missing: [1] Conformal Prediction in Hierarchical Classification. Thomas Mortier, Alireza Javanmardi, Yusuf Sale, Eyke Hullermeier, Willem Waegeman. Other Strengths And Weaknesses: The theory seems sound behind the method and the paper is well-written and easy to follow. The experimental setting consisted of 1000 runs on the artificial data, which substantiates the coverage findings, something many conformal works lack. Other Comments Or Suggestions: NA Questions For Authors: I have a couple of concerns, 1. I don't see a coherent discussion about related works such as [1] 2. The figure 2 is quite busy and could be made more presentable. 3. More datasets could be considered. 4. The baselines are restricted, it might be possible to add other baselines, such as CopulaCPTS as discussed in the work. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed reading and the positive comments --- especially the ones relative to the well-conducted and neat experiments. On the issues raised: &nbsp; **1. I don't see a coherent discussion about related works such as [1]** We will discuss [1] in the revised version. It indeed forms the only other article on conformal prediction that deals with hierarchical data in the way we mean it, but its setting and objectives are much different. Note: [1] was posted on arXiv on January 31, 2025, the day of the ICML submission deadline. &nbsp; **2. The figure 2 is quite busy and could be made more presentable** This figure is dense, indeed. Since we are providing both graphs in larger sizes in the appendix, we could keep only one in the main body (the one for synthetic data) and, for instance, omit the y-axis graduations. This could help in getting a more readable picture. &nbsp; **3. More datasets could be considered** Yes, except that creating good base forecasts requires some expertise, which we have as far as electricity load forecasting is concerned, but do not necessarily have for other applications. Due to the nature of our contribution, we focused our submission on the theoretical findings and included a limited number of experiments. &nbsp; **4. The baselines are restricted, it might be possible to add other baselines, such as CopulaCPTS as discussed in the work** Actually, CopulaCPTS (and other methods, which we discuss in the article) targets joint coverage (see Equation 2 and Appendix E), instead of the individual coverages stated as the objective (*). This makes these methods not directly comparable. See our answers to Reviewers KKXz and mvP5 for the deep differences between the aims of joint coverage vs. individual coverages. --- Rebuttal Comment 1.1: Comment: Thanks for pointing out the publication date of the work. Indeed, you couldn't mention the work earlier. 
Creating good base forecasts requires some expertise-- While this is true, the focus of the work is on calibration and not on having good base forecasts; the qualitative assessment can be done easily as long as the base forecaster remains the same, which is typically what most papers on Conformal Prediction do. Also, the work is not a purely theoretical one, it has practical applications, so I disagree on having limited experiments. As for CopulaCPTS, it might not achieve marginal coverages but it could still be compared with. Another work, which I believe provides marginal coverages, is: Zhou, Yanfei, Lars Lindemann, and Matteo Sesia. "Conformalized adaptive forecasting of heterogeneous trajectories." Proceedings of the 41st International Conference on Machine Learning. 2024. Just to clear my stance, I am mostly positive about this work, but I am a little disappointed from an evaluation perspective. --- Reply to Comment 1.1.1: Comment: **Experiments** To be clear, we plan to enrich the experimental section to include real-world i.i.d. datasets (as it also was a concern for Reviewer KKXz). We plan to additionally include larger synthetic hierarchies to illustrate the scalability of our results to complex hierarchies (we already have partial results for this, as discussed in the rebuttal for Reviewer 4SfW). We alas will not be able to show tables or other summaries of results by the end of the discussion phase, but would be able to include such extended results at the time of final ICML publication (which is in several weeks). **CopulaCPTS** On second thoughts, we could also include experimental results for joint coverage: therein, we would show the impact of the projection step (see Appendix E) and illustrate that we systematically obtain improvements. We could also indeed check how methods targeting joint coverage perform in terms of simultaneous individual coverages, but these will be overly conservative, thus much worse in terms of efficiency. 
**On the reference mentioned** We checked the mentioned reference by Zhou et al. (2024): what they refer to as "simultaneous marginal coverage" is not the component-wise coverages we are targeting in our submission (that we also refer to as individual coverages in this review thread); see their Equation (1) and the paragraph above: they still target joint coverages, but simultaneously over several data points (simultaneously over time). **As a conclusion** We are pleased that you appreciate the theoretical part of our work and the experimental study already included. All reviewers of this thread seem to do so, while asking for an extended section of experimental results. This is something we commit to perform (and the submitted work should prove that this commitment is feasible).
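For readers of this thread, the two coverage notions being contrasted can be written out explicitly. This is our notation, not taken from the paper: $C_i$ denotes the prediction interval for component $i$ of the hierarchy, $m$ the number of components, and $\alpha$ the miscoverage level.

```latex
% Joint coverage (the target of CopulaCPTS and related methods):
\mathbb{P}\bigl(Y_i \in C_i \ \text{for all } i = 1,\dots,m\bigr) \;\ge\; 1 - \alpha

% Simultaneous individual (component-wise) coverages
% (the objective referred to as (*) in this thread):
\mathbb{P}\bigl(Y_i \in C_i\bigr) \;\ge\; 1 - \alpha \quad \text{for each } i = 1,\dots,m
```

Since the joint event is contained in each individual event, a method calibrated for joint coverage automatically satisfies every individual coverage, but with intervals wider than necessary; this is the conservativeness and efficiency loss mentioned in the reply above.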
Understanding and Mitigating Memorization in Diffusion Models for Tabular Data
Accept (poster)
Summary: This paper investigates memorization behavior in diffusion models for tabular data generation. The authors examine how factors like dataset size, feature dimensionality, and model architecture influence memorization. They propose a data augmentation method called TabCutMix and an enhanced version, TabCutMix-Plus, that additionally mitigates out-of-distribution generations. The paper provides a theoretical explanation for why diffusion models tend to memorize training data in the tabular domain and demonstrates the effectiveness of their proposed methods through empirical evaluations. Claims And Evidence: The paper's theoretical claims about memorization in diffusion models for tabular data lack novelty and are insufficiently substantiated. The authors claim theoretical contributions that appear to be direct adaptations of existing results from [Gu et al., 2023] without any modifications for the tabular domain. The claim that their analysis "specifically addresses tabular data with mixed feature types" is not well supported, as the core theoretical results (Propositions 3.1 and 3.2) apply to points in Euclidean space regardless of whether they represent images or tabular data's latent encodings. The evidence for the effectiveness of TabCutMix and TabCutMix-Plus in mitigating memorization under a specific distance-based metric is presented, but it is unclear whether this metric is well suited for the task, especially given that the VAE's latents are used for the final generation. Methods And Evaluation Criteria: The method for determining memorization in tabular data is problematic. The paper uses a mixed distance metric in the data space without adequate justification for why this specific distance and the empirical 1/3 hyperparameter align with meaningful memorization phenomena, especially regarding the utility and quality of generated data. 
Additionally, since the models evaluated are latent diffusion models where the final generation requires decoding through a VAE, measuring distance in data space fails to capture potential memorization effects introduced by the VAE itself. This creates a fundamental issue with the evaluation criteria that isn't addressed. The proposed mitigation method relies solely on data augmentation rather than addressing model architecture or training procedure issues, which limits its applicability as memorization can still occur with the augmented data. Theoretical Claims: I checked the theoretical claims in Propositions 3.1 and 3.2 and found them to be essentially identical to results already established in [Gu et al., 2023] for image generation. The authors' statement that "The analysis is closely related to the work of Gu et al. (2023)" understates the similarity; in fact, the exact same analysis applies directly to tabular data without modification because both domains ultimately work with points in Euclidean space. Therefore, the claim that this represents a novel theoretical contribution specifically for tabular data is overstated. Experimental Designs Or Analyses: There's no ablation study examining how different memorization thresholds would affect the results or analysis of the VAE's potential contribution to memorization. Supplementary Material: I did not review the supplementary material in detail. Relation To Broader Scientific Literature: This paper's theoretical contributions heavily overlap with those of [Gu et al., 2023], which already established similar results stated for image generation. The authors acknowledge this relationship but understate the extent of the similarity. The paper would benefit from more clearly positioning its contributions relative to existing work on memorization in generative models across domains. 
Essential References Not Discussed: No major omissions noted, though a broader discussion of memorization metrics across different domains would strengthen the paper. Other Strengths And Weaknesses: Strengths: - The problem of generating tabular data while avoiding memorization is important and interesting. - The empirical investigation of factors influencing memorization provides useful insights. Other Comments Or Suggestions: N/A Questions For Authors: How does the quality and utility of data generated using TabCutMix compare to data generated by the baseline model? Can you provide metrics beyond memorization that demonstrate the practical benefits of your approach? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. All responses and corresponding revisions have been incorporated into the updated manuscript, available at: https://anonymous.4open.science/r/TabCutMix-3F7B/TabCutMix.pdf. [**Q1**] The paper's theoretical claims about memorization in diffusion models for tabular data lack novelty and are insufficiently substantiated. Propositions 3.1 and 3.2 appear essentially identical to results already established in [Gu et al., 2023] for image generation. **Ans**: We acknowledge that the theoretical results presented in Proposition 3.2 build on existing foundations (e.g., [1]). Our goal, however, is not to claim novelty in the theoretical derivation itself, but rather to contextualize and apply these results specifically to tabular diffusion models. As discussed in the manuscript (see the paragraph surrounding Proposition 3.2), we leverage these theoretical insights to explain why memorization can arise in the latent space of tabular data models such as TabSyn, and why it does not necessarily lead to exact duplication in practice, due to factors like imperfect score matching and the stochasticity of VAE decoding. Our primary contribution is the first empirical investigation of memorization in tabular diffusion models, a setting that has been underexplored in the literature. Furthermore, we propose two novel and effective mitigation strategies, TabCutMix and TabCutMixPlus, which address memorization in a model-agnostic and structure-aware way. We have revised the manuscript to more clearly distinguish between the reused theoretical insights and our original empirical and methodological contributions. [**Q2**] The evidence for the effectiveness of TabCutMix and TabCutMix-Plus in mitigating memorization regarding a specific distance-based metric is presented, but it is unclear whether this metric is well suited for the task, especially given that the VAE's latents are used for the final generation. 
**Ans**: Prior work on image generation [1,2] commonly uses L2 distance to assess similarity in the input space. Building on this and mixed-type data clustering techniques [3], we adopt a standard mixed-type distance metric suitable for both continuous and categorical features in tabular data. We also clarify that our memorization metric is computed in the original input space, not in any model's latent space. Although TabSyn uses a VAE, our method is model-agnostic, and the evaluation remains unaffected. We will clarify this further in the revised manuscript. [1] Diffusion probabilistic models generalize when they fail to memorize. In ICML 2023 Workshop. [2] On memorization in diffusion models. arXiv:2310.02664. [3] An improved k-prototypes clustering algorithm for mixed numeric and categorical data. Neurocomputing 2013. [**Q3**] The method for determining memorization in tabular data is problematic. Why use 1/3 as the threshold to judge whether a sample is memorized? **Ans**: While we initially use 1/3 as a representative threshold to determine whether a sample is memorized, we agree that threshold selection can influence interpretation. To address this, we conducted a correlation analysis across multiple thresholds (e.g., 1/4, 1/3, and 1/2) and with Mem-AUC (which averages memorization ratios over all thresholds). As shown in Figure 11, the memorization ratio at threshold 1/3 exhibits extremely high correlation (e.g., >0.99) with both Mem-AUC and other thresholds, demonstrating that 1/3 serves as a reliable and representative choice. We have added this discussion and the supporting analysis to the revised manuscript. [**Q4**] The proposed mitigation method relies solely on data augmentation rather than addressing model architecture or training procedure issues, which limits its applicability as memorization can still occur with the augmented data. 
**Ans**: We respectfully disagree with the concern that our method is limited in applicability due to its focus on data augmentation. Our approach is model-agnostic and has been successfully applied across a diverse set of generative models and training procedures, including StaSy, TabDDPM, TabSyn, CTGAN, and TVAE. This demonstrates its broad compatibility and effectiveness regardless of the underlying architecture. Furthermore, we view our method as orthogonal to model design and training objectives—it can be readily integrated into more advanced generative models or used alongside other memorization control techniques. In fact, we believe it can further enhance models that already include architectural or optimization-based memorization defenses. We have included a discussion of these complementary directions in Appendix G (Future Work) of the revised manuscript.
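To make the memorization criterion discussed in this exchange concrete, here is a minimal sketch. This is our illustrative reconstruction, not the authors' code: the mixed-type distance (L2 on numerical columns plus a 0/1 mismatch count on categorical ones) follows the k-prototypes style metric cited above, and the ratio form of the 1/3 test (distance to the nearest training row below one third of the distance to the second nearest) follows the image-domain convention of the cited prior work; both specifics are assumptions.

```python
import numpy as np

def mixed_distance(a, b, num_idx, cat_idx):
    """L2 distance on numerical columns plus a 0/1 mismatch count on
    categorical columns -- one simple instance of a mixed-type metric."""
    num = np.linalg.norm(a[num_idx] - b[num_idx])
    cat = sum(int(a[i] != b[i]) for i in cat_idx)
    return num + cat

def is_memorized(sample, train, num_idx, cat_idx, thresh=1/3):
    """Flag a generated sample as memorized when its distance to the
    nearest training row is below thresh times the distance to the
    second-nearest row (the ratio criterion, assumed here)."""
    d = sorted(mixed_distance(sample, row, num_idx, cat_idx) for row in train)
    return bool(d[0] < thresh * d[1])

# Toy data: columns 0-1 numerical, column 2 a categorical code.
train = np.array([[0.0, 0.0, 0], [5.0, 5.0, 1], [9.0, 1.0, 0]])
near_copy = np.array([0.1, 0.0, 0])  # almost a duplicate of train[0]
novel = np.array([4.0, 2.0, 1])      # lies between training rows

print(is_memorized(near_copy, train, [0, 1], [2]))  # True
print(is_memorized(novel, train, [0, 1], [2]))      # False
```

Averaging this flag over a batch of generated samples gives the memorization ratio; sweeping `thresh` and averaging the resulting curve corresponds to the Mem-AUC quantity mentioned in the rebuttal.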
Summary: This paper introduces TabCutMix and TabCutMixPlus, two augmentation methods designed to reduce memorization in tabular diffusion models. The authors demonstrate that state-of-the-art diffusion models tend to memorize tabular datasets and show that their proposed augmentations mitigate this issue. They further evaluate the effectiveness of both methods across multiple datasets and diffusion models. Claims And Evidence: The empirical results are well-supported through various experiments. However, there is a concern regarding the practical significance of the proposed augmentation methods. The results in Table 1 show that, across different datasets, the memorization ratio decreases by only a few percentage points. This raises questions about the overall impact and effectiveness of the method in reducing memorization. Methods And Evaluation Criteria: The methods are evaluated using standard metrics commonly used in the field. Theoretical Claims: The theoretical result in Section 3.2 appears trivial. The authors assume a delta distribution for the data points and derive the ground truth score functions based on this assumption. Consequently, if the reverse diffusion process is run using these ground truth scores, the generated samples will be drawn directly from this delta distribution—essentially replicating a training sample. Under this assumption, it is evident that the sampling process would lead to 100% memorization. Experimental Designs Or Analyses: The experiments are well-designed and effectively demonstrate the impact of each component in the proposed methods. Supplementary Material: I have reviewed most sections in the supplementary material. Relation To Broader Scientific Literature: While memorization has been extensively studied in image and video diffusion models, it remains an underexplored issue in tabular data generation. 
Given the significance of tabular data generation across various domains, this paper is well situated within the broader literature and addresses an important gap in the field. Essential References Not Discussed: All essential references are discussed in the main text. Other Strengths And Weaknesses: ### **Strengths** * The paper is well-written and easy to follow. * The experiments are extensive and conducted with clear purpose. * The authors provide ample details on various experiments, metrics, methods, and implementation details. ### **Weaknesses** * The proposed method shows only a modest reduction in the memorization rate, raising questions about its practical significance. * The theoretical result in Section 3.2 is somewhat straightforward, as it directly follows from the assumed delta distribution. Other Comments Or Suggestions: N/A Questions For Authors: 1) Have you experimented with adding a flag to the model to indicate whether it is processing real or augmented data? This type of augmentation conditioning has been explored in EDM networks [1]. 2) How do you handle discrete features in the diffusion models, given that the framework is mainly designed for continuous signals? [1] Karras T, Aittala M, Aila T, Laine S. Elucidating the design space of diffusion-based generative models. Advances in neural information processing systems. 2022 Dec 6;35:26565-77. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. All responses and corresponding revisions have been incorporated into the updated manuscript, available at: https://anonymous.4open.science/r/TabCutMix-3F7B/TabCutMix.pdf. [**Q1**] The results in Table 1 show that, across different datasets, the memorization ratio decreases by only a few percentage points. [Figure 6. Add the ratio for different thresholds] **Ans**: We respectfully disagree that the gains in memorization ratio reduction are not significant. In Table 1, we report performance with a balanced trade-off between memorization ratio and data utility. Our proposed methods can achieve significant memorization ratio reduction. For example, on the Default dataset with the TabSyn model, TCM and TCMP reduce the memorization ratio by 16.16% and 12.48%, respectively, which is significant compared with the Mixup and SMOTE baselines (2.65% and 6.93%). Additionally, as shown in Table 3 and Figure 6 with different augmentation ratios, TabCutMix provides a clear and tunable reduction in memorization via its augmentation ratio from 0% to 100%. For example, on the Default dataset, the memorization rate drops from 20.11% to 15.34%, achieving a relative reduction of 23.7%. [**Q2**] The theoretical result in Section 3.2 is somewhat straightforward, as it directly follows from the assumed delta distribution. **Ans**: We have trimmed down the theoretical part and added more details and references for additional experiments in the appendix. Our goal, however, is not to claim novelty in the theoretical derivation itself, but rather to contextualize and apply these results specifically to tabular diffusion models. We clarify that our main contribution lies in the empirical investigation of memorization phenomena within tabular diffusion models, a previously unexplored area. Moreover, our proposed novel methods, TabCutMix and TabCutMixPlus, effectively mitigate memorization issues specific to tabular data. 
We have clarified this distinction explicitly in the revised manuscript to better highlight our empirical findings and methodological contributions. [**Q3**] Have you experimented with adding a flag to the model to indicate whether it is processing real or augmented data? This type of augmentation conditioning has been explored in EDM networks [1]. **Ans**: We thank the reviewer for this insightful suggestion. We have not yet experimented with adding an explicit flag or conditioning signal to indicate whether a sample is real or augmented. We agree that this augmentation-aware conditioning, as explored in EDM networks [1], could help the model better distinguish between real and synthetic patterns and improve both generalization and memorization control. We view this as a promising direction and will include it in the Future Work discussion of the revised manuscript. [**Q4**] How do you handle discrete features for the diffusion models, as the framework is mainly designed for continuous signals? **Ans**: Our proposed methods, TabCutMix and TabCutMixPlus, are model-agnostic and operate by exchanging feature values between samples with the same label. This augmentation strategy applies equally well to both discrete and continuous features, and we have successfully implemented it across various tabular diffusion models, including TabSyn and TabDDPM. We acknowledge that diffusion models are primarily designed for continuous data, and different models adopt different strategies for handling discrete features. For example, TabSyn uses a VAE to encode mixed-type features into a continuous latent space, where diffusion is applied. These techniques enable continuous diffusion models to effectively represent and generate mixed-type tabular data. We will clarify this point in the revised manuscript to avoid potential misunderstandings. [1] Karras T, Aittala M, Aila T, Laine S. Elucidating the design space of diffusion-based generative models. 
Advances in neural information processing systems. 2022 Dec 6;35:26565-77. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for addressing my comments in the rebuttal. Please find my responses below: - Regarding the performance improvements, I still find the gains from applying augmentations relatively minor. For example, in the Default dataset for TabSyn, the memorization ratio decreases from 20.11% to 16.86%. While the improvement might appear larger when considering relative error, I do not think relative improvement is a common or particularly meaningful metric in this context, as the absolute reduction in memorized examples is only about 4%, not 20%. - Although I acknowledge the authors' clarification that the main contribution of the work is not its theoretical claim, the paper explicitly states it studies memorization in the tabular domain from a theoretical perspective. My original concern remains: the proposition presented is somewhat straightforward, resulting inherently in 100% memorization by design. Thus, I still have reservations regarding the significance and added value of this particular theoretical claim. Given that the use of augmentation to decrease memorization is relatively well-known in diffusion models, I maintain my current score unless the authors can further substantiate the novelty or significance of their contribution and results. --- Reply to Comment 1.1.1: Comment: We thank reviewer dji2 for the thoughtful engagement and follow-up comments. > [Q1] I still find the gains from applying augmentations relatively minor. Ans: We respectfully but strongly disagree with the reviewer's claim that the gains from our proposed augmentations are "relatively minor." In the Default dataset with TabSyn, our method reduces memorization from 20.11% to 16.86%, an absolute drop of 3.25% and a relative reduction of ~16%, which is substantial. 
More importantly, compared to widely used baselines, TabCutMix clearly outperforms Mixup (19.58%, 0.53% absolute drop) and SMOTE (18.72%, 1.39% absolute drop). Our method achieves more than **2x the absolute reduction of SMOTE** and over **6x that of Mixup**, highlighting its clear superiority. Moreover, this is the first work to systematically study and mitigate memorization in tabular diffusion models, a direction that has been largely overlooked. Our experiments are comprehensive, spanning multiple datasets, models, and augmentation baselines, and we also discuss complementary solutions beyond data augmentation in Appendix G. We stand by our position that the proposed methods provide significant empirical improvements and make a substantive, novel contribution to the field. > [Q2] My original concern remains: the proposition presented is somewhat straightforward, resulting inherently in 100% memorization by design. We appreciate the reviewer's continued engagement, but we would like to clarify and push back on the concern regarding the theoretical component. As clearly stated in both our original and revised submissions, the theoretical section is not presented as a novel contribution, but as a **motivational and contextual tool to highlight why memorization arises in tabular diffusion models**, a topic previously unaddressed in the literature. To reflect this, we have trimmed the theory section to half a page in the revised manuscript https://anonymous.4open.science/r/TabCutMix-3F7B/TabCutMix.pdf. The proposition, while simple, is essential for framing the issue. It illustrates how memorization can emerge under idealized conditions and motivates our empirical study. As for why memorization is not 100% in practice, we explicitly discuss the reasons in Lines 190-202 (e.g., score-matching approximations and VAE encoding). We emphasize again: **our main contribution is empirical**, and the theoretical framing reinforces, rather than detracts from, our work. 
**To dismiss this context as insignificant overlooks the novelty and importance of addressing memorization in tabular diffusion for the first time**. > [Q3] I maintain my current score unless the authors can further substantiate the novelty or significance of their contribution and results. We respectfully disagree with the reviewer’s statement regarding the lack of novelty or significance in our contribution. As clearly outlined in the ICML reviewer guidelines [1], “originality need not mean wholly novel methods”—it can also involve “a new way of framing tasks.” Our work is the **first to identify, analyze, and address memorization** in tabular diffusion models, introducing a **new and overlooked problem** within this domain. On the methodological side, we propose TabCutMix and TabCutMixPlus, which are specifically designed to reduce memorization and mitigate out-of-distribution (OOD) generation risks in tabular data. These methods go beyond generic augmentations by tailoring the mixing process to tabular settings with mixed feature types and label preservation. We also respectfully push back on the comment that “augmentation to decrease memorization is relatively well-known in diffusion models.” This **does not imply that any augmentation strategy is unoriginal or insignificant**. Our augmentations are novel in design, purpose-built for tabular diffusion, and empirically effective, with consistent improvements over strong baselines. We believe the problem we introduce, the perspective we bring, and the solutions we propose represent both a novel framing and a meaningful contribution to the field. [1] https://icml.cc/Conferences/2025/ReviewerInstructions We kindly ask the reviewer to reconsider and update their score if our response has addressed the stated concerns and clarified the novelty and significance of our contributions.
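To summarize the technical point under dispute in this thread, here is a standard derivation along the lines of Gu et al. (2023); the notation is ours, and the variance-exploding perturbation kernel is an assumption made for simplicity (the paper may use a different parameterization). Treating the training set as an empirical (delta) distribution makes the noised marginal a Gaussian mixture centred on the training points, with an exact score in closed form:

```latex
p_0(x) = \frac{1}{N}\sum_{i=1}^{N}\delta(x - x_i)
\;\Longrightarrow\;
p_t(x) = \frac{1}{N}\sum_{i=1}^{N}\mathcal{N}\!\bigl(x;\, x_i,\, \sigma_t^{2} I\bigr),

\nabla_x \log p_t(x) = \sum_{i=1}^{N} w_i(x)\,\frac{x_i - x}{\sigma_t^{2}},
\qquad
w_i(x) = \frac{\exp\!\bigl(-\lVert x - x_i\rVert^{2}/2\sigma_t^{2}\bigr)}
              {\sum_{j=1}^{N}\exp\!\bigl(-\lVert x - x_j\rVert^{2}/2\sigma_t^{2}\bigr)}.
```

As $\sigma_t \to 0$, the softmax weights $w_i$ concentrate on the nearest training point, so reverse sampling with this exact score reproduces a training sample. This is the reviewer's "100% memorization by design"; the authors attribute the gap to 100% observed in practice to imperfect score matching and VAE decoding.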
Summary: The paper explores the issue of data memorization in tabular diffusion models, highlighting its potential privacy risks and negative impact on generalization. To explain why memorization arises in a tabular diffusion setting, the authors present a theoretical analysis that connects denoising score matching to the tendency of replicating training data. They propose two augmentation-based strategies—TabCutMix (TCM) and TabCutMixPlus (TCMP)—inspired by the CutMix approach, which are designed to mitigate memorization while preserving data utility. Claims And Evidence: **Memorization occurs in tabular diffusion models.** - The authors conduct empirical evaluations showing significant memorization across multiple diffusion-based tabular generators and datasets. They also support these findings with a theoretical rationale. **TabCutMix and TabCutMixPlus effectively reduce memorization without significantly harming synthetic data quality.** - The paper includes experiments demonstrating that TCM and TCMP offer an improved trade-off between memorization reduction and data utility. Methods And Evaluation Criteria: - The proposed techniques (TCM/TCMP) adapt the CutMix concept from image processing to tabular data by selectively swapping or mixing feature segments between samples. - TabCutMixPlus further clusters correlated features before swapping to preserve coherent relationships within the data. Theoretical Claims: The paper provides a reasonable theoretical explanation for why memorization arises in tabular diffusion models, grounded in the denoising score-matching framework. No immediate issues are found with their theoretical arguments. Experimental Designs Or Analyses: - The authors run extensive experiments demonstrating that TCM and TCMP consistently yield better trade-offs between memorization reduction and overall data utility. 
- The experiments cover multiple tabular datasets and also analyze aspects such as out-of-distribution (OOD) issues arising from the proposed augmentations. Supplementary Material: I did review the supplementary material, following references from the main text. Relation To Broader Scientific Literature: I think studying memorization in tabular data might be beneficial for other domains as well. In particular, the proposed techniques might be extendable to other tasks. Essential References Not Discussed: None identified. Other Strengths And Weaknesses: **Strengths** - The problem of memorization is well-motivated and clearly articulated. - Simple yet effective augmentation strategies are proposed. **Weaknesses** - The techniques may generate out-of-distribution samples, which can pose challenges in certain practical scenarios. - Currently, TCM/TCMP focus on classification tasks and may not generalize as readily to regression settings, which limits the scope of the paper. Other Comments Or Suggestions: - It would be helpful for the authors to evaluate OOD ratios for Mixup and SMOTE as well, providing a more direct baseline comparison alongside TCM/TCMP (Section D.4.6). Questions For Authors: - See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
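Based on the description above (selectively swapping feature segments between samples with the same label), a minimal sketch of the TabCutMix idea might look as follows. This is our illustrative reconstruction, not the authors' code; the function name, the uniform draw of the mixing fraction, and the toy data are all assumptions.

```python
import numpy as np

def tab_cut_mix(x1, x2, rng, lam=None):
    """Return a copy of x1 with a random subset of feature columns
    replaced by the corresponding values from x2 (a same-class sample).
    lam is the fraction of columns taken from x2, drawn uniformly if
    not given -- mirroring CutMix's random patch size."""
    d = len(x1)
    if lam is None:
        lam = rng.uniform()
    k = int(round(lam * d))
    swap = rng.choice(d, size=k, replace=False)
    out = x1.copy()
    out[swap] = x2[swap]
    return out

rng = np.random.default_rng(0)
x1 = np.array([1.0, 2.0, 3.0, 4.0])  # toy same-class samples
x2 = np.array([9.0, 8.0, 7.0, 6.0])
mixed = tab_cut_mix(x1, x2, rng)
print(mixed)  # every coordinate comes from either x1 or x2
```

TabCutMixPlus, per the review's description, would swap whole clusters of correlated features at once rather than independent columns, to keep intra-group relationships coherent and reduce the OOD risk noted under Weaknesses.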
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. All responses and corresponding revisions have been incorporated into the updated manuscript, available at: https://anonymous.4open.science/r/TabCutMix-3F7B/TabCutMix.pdf. [**Q1**] I think studying memorization in tabular data might be beneficial for other domains as well. **Ans**: We agree that our study on memorization in tabular data can have broader implications for other domains. We have added this discussion to the Future Work section (Appendix G) in the revised paper. [**Q2**] The techniques may generate out-of-distribution samples, which can pose challenges in certain practical scenarios. **Ans**: While data augmentation may lead to out-of-distribution (OOD) samples, our method TabCutMixPlus is designed to mitigate this risk through structure-aware augmentation at the feature-group level. As shown in Table 5 (Appendix D.4.6), TabCutMixPlus consistently yields lower OOD detection ratios (e.g., 25.44% vs. 39.47% on Default, and 0.70% vs. 1.58% on Shoppers). We now highlight the OOD detection experiments more clearly in the main text of the revised version. [**Q3**] Currently, TCM/TCMP focus on classification tasks and may not generalize as readily to regression settings, which limits the scope of the paper. **Ans**: TabCutMix and TabCutMixPlus rely on exchanging features between samples of the same class, which inherently limits their applicability to classification settings. As a result, extending our methods to regression tasks remains a challenge and is beyond the scope of this paper. We have acknowledged this limitation in the Limitation Discussion (Appendix F) of the revised version. [**Q4**] It would be helpful for the authors to evaluate OOD ratios for Mixup and SMOTE as well, providing a more direct baseline comparison alongside TCM/TCMP (Section D.4.6, Table 4). **Ans**: We thank the reviewer for the suggestion. 
As requested, we have added OOD detection results for Mixup and SMOTE (now included in Table 5). The “Ratio%” column represents the proportion of samples detected as out-of-distribution. While Mixup shows minimal OOD risk, it performs poorly in mitigating memorization compared to our methods. In contrast, TabCutMixPlus achieves a favorable balance—substantially reducing memorization while maintaining low OOD ratios (e.g., 0.36% on Adult vs. 2.06% for TabCutMix and 0.75% for SMOTE), making it a more effective and reliable augmentation strategy for tabular diffusion models.
Summary: The paper examines memorization in diffusion models for tabular data, providing theoretical insights that demonstrate the optimal score function under the empirical distribution and show that generated data can replicate training samples. To mitigate memorization, the study introduces TabCutMix and TabCutMixPlus, which swap feature segments between training samples to reduce overfitting while preserving data quality. Experimental results confirm that these techniques effectively reduce memorization while maintaining generation quality. Claims And Evidence: Yes, both the theoretical and empirical results are justified in the work. Methods And Evaluation Criteria: Yes, the authors incorporate a variety of metrics to assess different aspects of the model. For instance, they introduce a memorization score specifically designed for tabular data, along with $\alpha$-precision and $\beta$-recall to evaluate data fidelity. All the chosen metrics are well-justified. I would suggest including (or at least referencing) important metrics such as $\alpha$-precision and MLE in the main body of the paper rather than leaving them to the appendix. Theoretical Claims: I didn't verify the proof in detail but reviewed the approach used to derive the theoretical results. Experimental Designs Or Analyses: I think the overall experimental designs make sense, and the empirical results are sufficient to support the paper's claims. Supplementary Material: Yes, I checked the algorithm part (Section C) and the metric part (Sections D.4.2-D.4.3 and D.6). Relation To Broader Scientific Literature: Investigating the memorization phenomenon in diffusion models and approaches to address it is an important topic. While most other work focuses on image data, this work considers tabular data to broaden the scope of such studies, and the proposed techniques are potentially transferable to other data formats. 
Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strengths** * The experiments are quite comprehensive both in terms of the different scenarios considered and the evaluation metrics used. * The proposed methods are both intuitive and well discussed. **Weaknesses** * The result in Proposition 3.1 is not new; it has been shown in, e.g., [1]. * The paper's organization could be improved. Table 1 occupies too much space, pushing important details (for example, the introduction of the evaluation metrics) to the Appendix. Also, Figure 4 appears after Figure 5, etc. Other Comments Or Suggestions: N/A Questions For Authors: I'm a bit confused about the motivation for using MLE (machine learning efficiency evaluation) as a metric for data fidelity. Can the authors elaborate more here? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. All responses and corresponding revisions have been incorporated into the updated manuscript, available at: https://anonymous.4open.science/r/TabCutMix-3F7B/TabCutMix.pdf. [**Q1**] I would suggest including (or at least referencing) important metrics such as precision and MLE in the main body of the paper rather than leaving them to the appendix. **Ans**: We have added more description of the evaluation metrics, with references, to the main text. [**Q2**] The result in Proposition 3.1 is not new; it has been shown in, e.g., [1]. **Ans**: We have moved Proposition 3.1 to Appendix A and explicitly cited the relevant prior works. Our goal, however, is not to claim novelty in the theoretical derivation itself, but rather to contextualize and apply these results specifically to tabular diffusion models. Our main contribution lies in the empirical investigation of memorization phenomena within tabular diffusion models, a previously unexplored area. Moreover, our proposed novel methods, TabCutMix and TabCutMixPlus, effectively mitigate memorization issues specific to tabular data. We have clarified this distinction explicitly in the revised manuscript to better highlight our empirical findings and methodological contributions. [**Q3**] The paper's organization could be improved. Table 1 occupies too much space, pushing important details (for example, the introduction of the evaluation metrics) to the Appendix. Also, Figure 4 appears after Figure 5, etc. **Ans**: We have reorganized the paper to improve clarity and readability. Specifically, we have moved the Magic dataset results from Table 1 to Table 8 (Appendix E.6). We have added details of the evaluation metrics, with references, in the main text. [**Q4**] I'm a bit confused about the motivation for using MLE (machine learning efficiency evaluation) as a metric for data fidelity. 
**Ans**: MLE evaluates data fidelity by measuring how well a model trained solely on synthetic data performs on real data (e.g., via AUC on a downstream task), which directly reflects the utility of the synthetic data in practical applications. While traditional fidelity metrics like shape score and trend score assess how closely the synthetic data's distribution matches that of the real data, MLE provides an **end-to-end evaluation that incorporates task-specific performance**, offering a complementary perspective on whether the generated data preserves meaningful patterns for model training.
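To make the MLE protocol concrete, here is a minimal, dependency-free sketch (our own illustration, not the authors' implementation): a toy one-feature nearest-centroid "model" is fit only on synthetic data and then scored by AUC on held-out real data. The names `rank_auc` and `mle_auc` are hypothetical.

```python
import statistics

def rank_auc(labels, scores):
    """AUC via the Mann-Whitney U statistic (no external dependencies)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mle_auc(synth_x, synth_y, real_x, real_y):
    """Fit a 1-D nearest-centroid classifier on synthetic data ONLY,
    then score it on real data -- the MLE protocol in miniature."""
    c0 = statistics.mean(x for x, y in zip(synth_x, synth_y) if y == 0)
    c1 = statistics.mean(x for x, y in zip(synth_x, synth_y) if y == 1)
    # score: closer to the class-1 centroid => higher score
    scores = [abs(x - c0) - abs(x - c1) for x in real_x]
    return rank_auc(real_y, scores)
```

Synthetic data that mirrors the real class structure yields a high AUC, while synthetic data that scrambles it does not, which is exactly the end-to-end signal MLE is meant to capture.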
A Mixed-Curvature based Pre-training Paradigm for Multi-Task Vehicle Routing Solver
Accept (poster)
Summary: In the context of neural solvers for vehicle routing problems (VRP), the paper introduces a methodology to learn embeddings in non-Euclidean space, where the embedding space is divided into several subspaces with adaptively learned curvatures. Claims And Evidence: The authors claim that their proposition can be integrated into various existing solvers and improve their performance. This claim is empirically demonstrated with three backbone methods (i.e., POMO-MTL, MVMoE, RouteFinder) on various problems (e.g., random and real instances) in different settings (e.g., in-distribution, zero-shot, few-shot test). I would have appreciated an ablation study (e.g., for the mix-up technique) and/or sensitivity analysis (e.g., number of subspaces) for the proposed method to have stronger confidence in the results. Methods And Evaluation Criteria: The proposed method defines a more expressive neural architecture where embeddings are divided into several subspaces with different curvatures. The architecture seems natural, although the need for the mix-up technique (equation 13) seems a bit strange to me, since it amounts to doing a linear combination of two embeddings in two different spaces. The use of this technique is only justified empirically (i.e., it leads to better performance). The explanation in the first sentence of page 5, col. 2 is not very convincing to me. Why is the transition non-smooth? Theoretical Claims: There is no theoretical claim in this paper. Experimental Designs Or Analyses: The empirical validation uses standard evaluation criteria (e.g., gap) and is conducted according to different previously-proposed protocols. The analysis followed by the authors is standard when evaluating neural solvers. It would have been nice to report computational times and the sizes of the models. Supplementary Material: I have quickly checked the supplementary material.
Relation To Broader Scientific Literature: The paper summarizes recent research in solving VRP and in deep learning in non-Euclidean space. The work in this paper is at the intersection of those areas. As far as I know, the relevant work seems to be discussed. Essential References Not Discussed: I'm not aware of a missing reference. Other Strengths And Weaknesses: The idea of learning embeddings in non-Euclidean spaces with different curvatures for solving VRP is novel. It seems that it may introduce more parameters and the computation time increases. This discussion is missing in the current paper. Other Comments Or Suggestions: The paper should be proofread, e.g., - the meaning of acronyms should ideally be recalled. - there are many typos on page 3. - on page 4, the cardinal of V is actually n+1. - the bold values seem to be off in Table 6 in the appendix. - It would be nice if there were some comments on the figures showing the curvatures in the appendix. Questions For Authors: 1. Why is the transition between different curvature spaces non-smooth? 2. How did you choose the hyperparameters specific to your proposed method? 3. How does the number of parameters change when using your proposed architecture compared to a backbone architecture? 4. What is the additional computational cost required by your proposed method? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate the reviewer gmJz's insightful questions and constructive feedback. #### **[Acronyms]** Thank you for the suggestion. We will ensure all acronyms are properly defined upon first use and provide a reference table in the final version. For example, "VRPs" stands for Vehicle Routing Problems, "LKH" for Lin-Kernighan-Helsgaun and "HGS" for Hybrid Genetic Search, etc. #### **[Typos]** We appreciate the attention to typos. We identified and will correct errors such as "manioflds" (line 123), "hypersphereical" (line 124), "$n$ nodes" (line 185). The cardinal of $V$ on page 4 is indeed $n+1$ and we will correct it. Besides, in Table 6 (Appendix.3), we notice that for the row indexed by "R102", two columns are mis-highlighted at the same time. We have checked all tables and will make sure all bold values are correctly used. #### **[Comments on curvature figure]** In Figure 4 (Appendix.1), we show node curvatures on 10 OOD tasks. It shows that almost every node in the dataset has either negative or positive curvature, and the average curvature suggests that each task contains geometric patterns. For Figures 5-10 (Appendix.3), we provided corresponding comments in the "curvature analysis" part (Appendix.1), where shallow layers and deeper layers present different curvatures. We will move this analysis directly into these figure captions in the final version. #### **[Why is transition non-smooth]** The transition between different curvature spaces is non-smooth primarily due to the incompatibility of geometric structures among different subspaces. Specifically, hyperbolic and spherical spaces have fundamentally different distance metrics: the geodesic distance in hyperbolic space grows exponentially as $|\kappa|$ ($\kappa$ here denotes the negative curvature of hyperbolic space) increases, while the distance in spherical space is bounded. This discrepancy may introduce unexpected disturbances in feature transformation.
In Figures 5-10 (Appendix.3), we visualize the curvature information of different subspaces layer by layer. From the results in these figures, the shallow layers tend to process features in hyperbolic spaces while deeper layers choose to process features via spherical spaces. The transitions between hyperbolic space and spherical space usually happen around layer 2 and layer 3. The inferior features caused by a non-smooth transition will severely affect deeper layers' learning ability. To mitigate this, we apply techniques similar to Mix-Up[1] where the previous layer's features are infused into deeper layers. This can guide deeper layers to gradually move into new curvature spaces. #### **[Settings of hyperparameters]** Table 5 (Appendix.2) lists our detailed settings of hyperparameters and we strictly follow prior works[2,3]: 5,000 training epochs, batch size of 128, and 20,000 instances per epoch from 6 VRPs. Fine-tuning uses 10,000 instances per epoch for 10 epochs with the same batch size. We employ the Adam optimizer with learning rate 1e-4, decayed by 10 after 4,500 epochs. The feature space (128 dimensions) is split into 8 subspaces of 16 dimensions, initialized at 0. The numbers of attention heads and encoder layers are 8 and 6, respectively. #### **[Number of parameters in our method]** For the embedder, we insert two mixed-curvature modules for the depot and customer nodes, respectively. For each layer of the encoder, we insert one mixed-curvature module for processing features from the previous layer. The comparisons between baselines and mixed-curvature versions are listed as follows:

| **Models** | **Number of parameters** |
|------------|--------------------------|
| POMO-MTL | 1,254,656 |
| Mixed-POMO-MTL | 1,386,810 |
| MVMOE-Light | 3,698,944 |
| Mixed-MVMOE-Light | 3,831,116 |
| MVMOE | 3,682,176 |
| Mixed-MVMOE | 3,814,348 |

From the table, compared with the original backbones, the number of parameters is increased by 3.57\%~10.56\%.
We will emphasize these changes in the final version. #### **[Computational cost]** To compare the computational cost with baseline models like POMO-MTL and MVMOE, we list the running time **[here](https://anonymous.4open.science/r/15699-8B57/README.md)**. From the results in the table, the running time of our model increases only slightly compared to the backbones. Taking n=100 as an example, Mixed-POMO-MTL, Mixed-MVMOE-L and Mixed-MVMOE introduce 10.72\%, 7.68\%, 7.56\% extra costs over the backbones, respectively. These show that mixed-curvature modules bring moderate costs, which would not cause a serious inference burden. #### **[Ablation]** We conduct ablation studies on MVMOE and the results are shown **[here](https://anonymous.4open.science/r/15699-8B57/README.md)**. We can see that Mix-Up can improve performance. [1]mixup: Beyond empirical risk minimization. ICLR, 2018. [2]Pomo: Policy optimization with multiple optima for reinforcement learning. NIPS, 2020. [3]Mvmoe: Multi-task vehicle routing solver with mixture-of-experts. ICML, 2024.
Summary: This article considers modeling the VRP tasks in multi-task NCO in Riemannian space instead of the original Euclidean one. This article modifies the embedding layer based on this idea, and the experiment demonstrated the effectiveness of the proposed method. ## update after rebuttal I think the topic of this article is interesting and super novel, and I have no problem with the possible admission decision. However, since the experiment added by the author does not fully demonstrate the significant effect of non-Euclidean transformation, I am not motivated enough to improve my score. Claims And Evidence: To my understanding, discussing the relation of the curvature shown in Figure 1 with the Euclidean space should have at least three dimensions of $x \in X$. Unfortunately, based on the current version of this article, I am not sure whether the $X$ space represents coordinates (2-dimensional) or coordinates with node features (such as time window or capacity). I suspect your intention should be the latter. I suggest you pay attention to this point and add clear illustrations and definitions. Methods And Evaluation Criteria: I am not familiar with the basic definitions of Riemannian manifolds and tangent spaces, and I am curious about the specific formulas for the $Exp$ operator and the $Log$ operator in this work. Theoretical Claims: This paper does not contain theoretical claims. Experimental Designs Or Analyses: What do you mean by ``we use an instance augmentation method for solution decoding``? 8-augmentation or something else? Please state clearly. Supplementary Material: I reviewed the supplementary material for Figure 5 to Figure 10. Relation To Broader Scientific Literature: I think it is valuable to discuss the distribution of the entire node feature (including constraints and coordinates) for multi-task NCO, and this article first discusses this part. Essential References Not Discussed: N/A Other Strengths And Weaknesses: **Strength:** 1.
The general representation of multi-task NCO features can be improved, and this paper notices such a drawback. **Weakness:** 1. Based on my background, I have limited knowledge of Riemannian space so I am confused about the analysis presented in this article. Based on the current version, I cannot intuitively understand the necessity of mapping features to Riemannian space. Figure 1 attempts to demonstrate the necessity, but this is only a statistical value. In my opinion, it can only indicate that these points are relatively clustered or scattered, and this has no strong relationship with whether they belong to a Riemannian space or not. Furthermore, the author did not provide any intuitive explanation. After using the method proposed in this article, why does the embedding method proposed in this article achieve better results? I am looking forward to the author's complete explanation of these motivational doubts. Other Comments Or Suggestions: Figure 2 is not a vector diagram; please replace it with a vector diagram. Questions For Authors: 1. Can you please provide me with an intuitive reason why the embedding method proposed in this article can achieve better results? 2. What is the detailed rule for subspace partition when D cannot be divided by C? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to express our sincere gratitude to the reviewer qtXz for the detailed and valuable suggestions and comments. #### **[Dimension of $X$]** Since graphs are discrete, the Ollivier-Ricci metric is used for measuring curvature based on edge weights, which does not depend on the dimension of the inputs. This is different from curvature on 3-dimensional smooth manifolds. Our $X$ only consists of two dimensions (coordinates) for calculating curvatures. Since some nodes don't have features like time windows and backhauls, these features will separately appear in the encoder. We will make this clear in the final version. #### **[Instance augmentation method]** We follow POMO-MTL[1] and MVMOE[2], applying greedy rollout with 8× instance augmentation for fair comparisons, where the best solutions for each instance are obtained by solving multiple (8×) equivalent instances. Those equivalent instances are acquired by rotating or flipping the original instances[7]. We will clarify this in the final version. #### **[Vector diagram]** We have created a new vector diagram (**[here](https://anonymous.4open.science/r/VD-5D0E/README.md)**) that will replace the original Figure 2 in the final version. #### **[Clarification for motivations]** Following [1,2], the settings of VRPs are based on Euclidean space. However, the underlying data structures are graphs and they produce certain kinds of complex structures (e.g., node clustering, scattering) which can be approximated by tree-like or cycle-like patterns[4,5,6]. As pointed out in [4,6], to capture these nuances and variations in the data, Riemannian manifolds provide us with a more flexible and effective framework. Though graphs are discrete, and thus usually different from Riemannian manifolds that are smooth and continuous, we can use some discrete analogs (e.g., the Ollivier-Ricci curvature used for the visualizations in Figure 1) to get some intuition.
#### Recall the definitions in Equations 15 and 16 (Appendix.1): the node curvature is calculated based on the edge weights of neighbour nodes. So, given a graph $G$ and two nodes $u$ and $v$, a positive Ollivier-Ricci curvature $\kappa(u,v)$ indicates that $u$ and $v$ share many common neighbours. In other words, if random walks $A$ and $B$ start from $u$ and $v$ respectively, then $A$ and $B$ are more likely to meet each other in a few steps. In this case, paths on $G$ tend to collapse together. Similarly, a negative curvature indicates that paths on $G$ tend to diverge and spread out. Such behaviours are quite similar to those of geodesics on Riemannian manifolds, where initially parallel geodesics will converge and diverge on spherical and hyperbolic spaces, respectively. From these, the Ollivier-Ricci curvature can be treated as a discrete analogy of curvature on a continuous domain, and its outputs strongly reflect the existence of non-uniform structures in the data, which motivates us to leverage Riemannian manifold spaces for creating a feature space that preserves the underlying node distances and relative positions more faithfully. #### From Table 3 (Appendix.3), the calculated distortion rates (defined in Equation 9) of different models show that Riemannian manifolds do a better job of preserving the original distances among nodes in feature space compared with other baselines, which is more desirable since neural solvers for VRPs heavily rely on the quality of the produced hidden features to select the next node. #### To summarize, graphs used in VRPs often contain intricate relations that can't directly be viewed as flat structures. Though graphs are discrete, we can utilize neural networks to first embed a graph into continuous embeddings and apply curved manifolds to fit the underlying geometric structures, which gives the model a much stronger ability to detect and capture fine-grained information from the inputs.
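The random-walk intuition above can be checked numerically. Below is a small stdlib-only sketch of Ollivier-Ricci curvature for unweighted graphs (our own illustration, not the paper's Equations 15-16): $\mu_x$ is uniform on the neighbours of $x$, and since both distributions are then uniform with (assumed) equal support size, $W_1$ reduces to a minimum-cost matching, found here by brute force.

```python
import itertools
from collections import deque

def bfs_dist(adj, src):
    """Unweighted shortest-path distances from src."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def ollivier_ricci(adj, u, v):
    """kappa(u, v) = 1 - W1(mu_u, mu_v) / d(u, v), with mu_x uniform on the
    neighbours of x. Assumes deg(u) == deg(v), so W1 is a min-cost matching."""
    nu, nv = list(adj[u]), list(adj[v])
    assert len(nu) == len(nv), "sketch only handles equal degrees"
    dist = {x: bfs_dist(adj, x) for x in nu}
    w1 = min(sum(dist[a][b] for a, b in zip(nu, perm))
             for perm in itertools.permutations(nv)) / len(nu)
    return 1.0 - w1 / bfs_dist(adj, u)[v]
```

On a triangle (clustered neighbourhoods) an edge gets curvature +0.5, while an interior edge of a 3-regular tree (spreading paths) gets negative curvature, matching the collapse-vs-diverge picture described above.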
#### **[How to set $D$ and $C$ in indivisible case]** In our case, $D=128$ and $C=8$ so each subspace has 16 dimensions. When $D$ is not divisible by $C$, we need to manually design the dimension of each subspace. For instance, when $D=127$ and $C=8$, the first 7 subspaces can have 16 dimensions and last one has 15 dimensions (other choices are also allowed as long as the summation of subspaces' dimensions is equal to $D$). We follow POMO-MTL[1] and MVMOE[2] where $D$ is always divisible by $C$. But this can be explored in future (e.g., using neural architecture search[3] to decide dimensions of subspaces). [1]Multi-task learning for routing problem with cross-problem zero-shot generalization. KDD, 2024. [2]Mvmoe: Multi-task vehicle routing solver with mixture-of-experts. ICML, 2024. [3]Neural architecture search: A survey. JMLR, 2019. [4]Learning mixed-curvature representations in product spaces. ICLR, 2018. [5]Hyperbolic graph neural networks. NIPS, 2019. [6]Constant curvature graph convolutional networks. ICML, 2020. [7]Pomo: Policy optimization with multiple optima for reinforcement learning. NIPS, 2020. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. Based on my current understanding, the motivation of this article is to use knowledge from non Euclidean space to learn better representations of certain types of complex structures in two-dimensional coordinates. I acknowledge the novelty of this motivation. However, I still have two main concerns that have not been resolved: 1. Can you use some experiments to prove that the complex structure is better represented by the Mixed-MVMoE? It seems that images 5-9 attempt to illustrate this point, but I cannot understand the author's conclusion ``By contrast, as the layer index increases, more subspaces shift closer to spherical geometry``. 2. In the experiment, does the Mixed-MVMoE proposed in this article increase the number of parameters compared to MVMOE? 
If so, how much has it increased, and is this increase in parameters related to the improvement in effectiveness? > Second Round: Thank you for your reply. My concern 1 has been resolved. However, I think the current experimental results are not sufficient to completely eliminate the doubt that the effectiveness of the proposed method is largely due to the increase in parameters. I will keep my score. --- Reply to Comment 1.1.1: Comment: #### **[Second Round]** Thank you for your second-round feedback. Could you please clarify your concerns about what is lacking? Since the deadline hasn't passed yet, we are happy to provide more details if needed. --- #### **[Complex structures]** The experiment that demonstrates the ability of our model to keep complex structures is in Table 3 (Appendix.1). In detail, we use the distortion rate defined in Eqs. 8 and 9 to measure the difference between the distance of two nodes in feature space and that in the original input graph. For the sake of convenience, we list the definition of the distortion rate and the results in Table 3 here:

$$ D_{\text{avg}} = \frac{1}{N} \sum_{a \ne b} \left| \frac{d_{U_1}(f(a), f(b))}{d_{U_2}(a, b)} - 1 \right|, $$

#### where $U_1,U_2$ denote the feature space and the input graph metric space, respectively. For the distortion rate, the lower the better.

| Model | Distortion |
|------------------|------------|
| POMO-MTL | 2477.725 |
| MVMoE-L | 2923.151 |
| MVMoE | 2083.015 |
| Mixed-POMO-MTL | 1678.605 |
| Mixed-MVMoE-L | 1981.076 |
| Mixed-MVMoE | 1274.142 |

#### We extract the features of each node from the encoder's last layer and calculate $D_{avg}$ (the reason for choosing the encoder's last layer is that features from this part directly affect decision making, which is vital for generating better solutions). From the results in the above table, we can observe that backbones augmented with mixed-curvature modules achieve much lower distortion rates compared with the original backbones.
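As a sanity check on what $D_{\text{avg}}$ measures, here is a minimal sketch (our illustration only; plain Euclidean distances are assumed on both sides, whereas the actual feature space uses manifold distances):

```python
import math

def avg_distortion(inputs, feats):
    """D_avg: mean over ordered pairs a != b of |d(f(a), f(b)) / d(a, b) - 1|."""
    n = len(inputs)
    pairs = [(a, b) for a in range(n) for b in range(n) if a != b]
    return sum(abs(math.dist(feats[a], feats[b]) / math.dist(inputs[a], inputs[b]) - 1)
               for a, b in pairs) / len(pairs)
```

An embedding that preserves every pairwise distance scores 0, while one that uniformly doubles all distances scores 1; lower values thus mean the feature space keeps the input geometry more faithfully.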
This demonstrates that introducing mixed-curvature modules can actually help models learn high-quality representations that preserve complex structures more faithfully. #### **[Images 5-9]** Figures 5-9 (Appendix.3) are mainly used for illustrating the curvatures learned during the training stage. Each layer is decomposed into 8 subspaces. From the presented colors, we can observe that the preferences among different layers differ: shallow layers tend to acquire negative curvatures and deeper layers prefer positive curvatures. These figures show how curvatures evolve from layer to layer. #### **[Number of parameters]** Indeed, the number of parameters in Mixed-MVMOE is increased with respect to MVMOE. Specifically, we insert mixed-curvature modules for the depot and customer nodes in the embedder, and we insert one mixed-curvature module for processing features from the previous layer in each encoder block. For the decoder, we keep the original architecture unchanged. The detailed changes in the number of parameters for each backbone are listed below:

| **Models** | **Number of parameters** |
|------------|--------------------------|
| POMO-MTL | 1,254,656 |
| Mixed-POMO-MTL | 1,386,810 |
| MVMOE-Light | 3,698,944 |
| Mixed-MVMOE-Light | 3,831,116 |
| MVMOE | 3,682,176 |
| Mixed-MVMOE | 3,814,348 |

We can see that compared with the original backbones, the number of parameters is increased by 3.57%~10.56%. Besides, we list the running time for the backbones and their mixed-curvature augmented versions (**[here](https://anonymous.4open.science/r/run_time-2157/README.md)**). From the results in those two tables, we can observe that the running time is increased on both node scales (n=50, 100). However, the increase in time is moderate compared to the backbones. For instance, when n=100, Mixed-POMO-MTL, Mixed-MVMOE-L and Mixed-MVMOE introduce 10.72%, 7.68%, 7.56% extra computational costs to the backbones, respectively.
This demonstrates that the increased parameters won't bring heavy burdens for models. #### **[Effectiveness of increased parameters]** In order to further demonstrate that the added mixed-curvature modules indeed bring the improved performance, we conduct some extra ablation studies. Due to time and space limitations, we conducted ablation studies using POMO-MTL with a node size of 50. Specifically, Euc-POMO-MTL was implemented by replacing the mixed-curvature modules in Mixed-POMO-MTL with their Euclidean counterparts (i.e., regular linear layers), ensuring both models have the same number of parameters (approximately 1.39 million). Average results across the 16 tasks are presented in the table below.

| N=50 | Num of params | AVG Gap |
| -------------- | ----- | ----- |
| Mixed-POMO-MTL | (1.39 M) | 4.505% |
| Euc-POMO-MTL | (1.39 M) | 4.566% |
| POMO-MTL | (1.25 M) | 4.536% |

#### As shown in the table, Mixed-POMO-MTL achieves superior overall performance compared to Euc-POMO-MTL. At the same time, the original POMO-MTL also outperforms Euc-POMO-MTL while using fewer parameters—the original POMO-MTL has 1.25 million parameters, approximately 0.14 million fewer than Euc-POMO-MTL. These findings show that merely increasing the number of parameters can actually lead to inferior results, further validating the effectiveness of the mixed-curvature modules. We will add detailed results for the 16 tasks with similar ablation studies on MVMOE in the final version.
Summary: This paper presents a pre-training framework for multi-task vehicle routing solvers. The main difference between this framework and existing literature is the integration of geometric structures. Specifically, this framework utilizes the curvature of the routes and encodes the geometric features in mixed-curvature spaces to thoroughly learn and leverage the data representations of the problem. Experiments are conducted by comparing with previous works which didn't account for the geometric information of the input. Results demonstrate the effectiveness of the proposed paradigm. Claims And Evidence: Yes Methods And Evaluation Criteria: For the effectiveness, it's okay. However, the authors skip the computational efficiency, which is also an important evaluation criterion to mention. Theoretical Claims: yes Experimental Designs Or Analyses: Yes. Massive experiments and benchmark comparisons are conducted to validate the effectiveness of the proposed framework. Supplementary Material: Yes, the supplementary materials mainly involve two parts. The first part is a small extension of the curvature calculation discussed in the manuscript. The second part is more detailed experimental configurations. Relation To Broader Scientific Literature: The key contributions lie in the introduction of the curvature analysis from differential geometry and so on. The curvature analysis accurately captures the geometrical features of the problem input. Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: This paper is clearly written, and the introduction of the curvature analysis is significant since it captures the geometrical features overlooked by previous works. The features in this paper are more reasonable both intuitively and theoretically, reflecting that the authors are skillful in geometry analysis. Besides, the linear transformation in the mixed-curvature space perfectly suits the structure of the neural network.
The experiment results demonstrate the effectiveness of the proposed framework. Weakness: As the authors mentioned, the proposed framework struggles with computational efficiency. Besides, the authors devote too much attention to the Preliminaries while the Methodology section lacks detail, making this part a little confusing. Other Comments Or Suggestions: The geometrical analysis is an important part and foundation of this paper. It analyses the curvature of the input information. It would be better if there were figures to illustrate the original input and corresponding curvature, as well as the linear transformation process. Questions For Authors: The first question: in Figure 2, the features of subspace 1,2,3 will pass through a Log() transformation, but it seems that subspace C doesn't. Why? The second question: two key statements are frequently mentioned all across the paper: mixed-curvature space and (non-)Euclidean space. Can I understand it like this: mixed-curvature space is the concatenation of Subspace C; non-Euclidean space is when curvature \kappa is not equal to zero? What's the relationship between the curvature calculation and the space transformations Exp() and Log()? Should the Input Instance pass through an Exp() first to become Subspace C? Given an Input Instance, what do Subspaces 1,2,3 look like? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer dd5d for the valuable comments and suggestions. We provide point-by-point responses below to address the concerns. #### **[Figures to show curvatures of inputs]** Figure 1 visualizes the curvature information on 6 training tasks. We use 1,000 VRP instances (size 50) to record the frequency of node curvatures via Ollivier-Ricci curvature[1] (briefly introduced in Appendix.1). Histograms in Figure 1 show that each task consists of nodes with either positive or negative curvatures, with a tendency towards negative curvature. Similar conclusions also hold for another 10 OOD tasks and we report their histograms in Figure 4 (Appendix.1). These all validate our motivation in introducing mixed-curvature space into the pre-training stage. #### **[Clarification for operations in Figure 2]** Each curvature subspace needs the exponential mapping $Exp(\cdot)$ and logarithmic mapping $Log(\cdot)$ for feature transformation. The subspace $C$ also needs $Log(\cdot)$ but it was omitted for brevity. We have created a complete one **[here](https://anonymous.4open.science/r/VD-5D0E/README.md)** and will update it in the final version. #### **[Mixed-curvature VS non-Euclidean space]** Euclidean space is flat and has zero curvature. Conversely, non-Euclidean space has non-zero curvature. The mixed-curvature space consists of multiple subspaces with possibly different curvatures. In our case, the original 128-dimensional feature space is split into 8 subspaces (16 dimensions) and each is assigned a learnable curvature. #### **[What do subspaces look like]** In Appendix.3, we visualize the curvature information of subspaces in the embedder and encoder layers under different settings. For POMO-MTL based architectures, we visualize both scales (n=50, 100) in Figure 5 and Figure 6 (Appendix.3), respectively. Similarly, we show results for MVMOE-Light and MVMOE in Figures 7 and 8 (Appendix.3) and Figures 9 and 10 (Appendix.3), respectively.
From the results, one can observe a common phenomenon: the shallow layers favor low-curvature/hyperbolic space while deeper layers favor high-curvature/spherical space, illustrating how features evolve through layers. #### **[Computation efficiency]** We provide the running time of baselines and models with mixed-curvature modules **[here](https://anonymous.4open.science/r/15699-8B57/README.md)**. As we can see, the time increase is moderate and it doesn't cause significant burdens. Moreover, the recent work[2] can avoid mapping operations by fully exploiting geometric properties of certain manifolds, and we can explore it to further reduce the computation in the future. #### **[More details for methodology]** We acknowledge the concern regarding the imbalance between the Preliminary and Methodology sections. We agree that the methodology section requires greater detail and clarity. In the final version, we will add the following details and make it more informative: #### **1.How to process inputs** Following the prior works[3], we first project raw inputs into high-dimensional Euclidean space and then transform them into mixed-curvature space via $Exp^{\kappa}(\cdot)$. #### **2.How curvatures affect Exp($\cdot$) and Log($\cdot$)** The curvature is vital for $Exp(\cdot)$ and $Log(\cdot)$. Given a curvature $\kappa$: #### **2.1 Hyperbolic Model ($\kappa < 0$)** - ##### **Exponential Map**: $\exp_p(v) = p \oplus_{\kappa} \left(\tanh\left(\sqrt{-\kappa} \frac{\lambda_p \|v\|}{2} \right) \frac{v}{\|v\|} \right),\quad p\in \mathcal{M}, v \in T_{p}\mathcal{M}$ - ##### **Logarithmic Map**: $\log_p(q) = \frac{2}{\lambda_p \sqrt{-\kappa}} \tanh^{-1}(\sqrt{-\kappa} \|p \oplus_{\kappa} (-q)\|) \frac{p \oplus_{\kappa} (-q)}{\|p \oplus_{\kappa} (-q)\|},\quad p,q \in \mathcal{M}$ where $p$ is a point on the manifold $\mathcal{M}$, $v$ denotes a point in the tangent space $T_{p}\mathcal{M}$ and $q$ denotes a point on $\mathcal{M}$. $\lambda_p$ here is the hyperbolic metric: $\frac{2}{1 - \kappa \|p\|^2}$.
#### **2.2 Spherical Model ($\kappa > 0$)** - ##### **Exponential Map**: $\exp_p(v) = \cos(\sqrt{\kappa} \|v\|) p + \sin(\sqrt{\kappa} \|v\|) \frac{v}{\|v\|}$ - ##### **Logarithmic Map**: $\log_p(q) = d_{\kappa}(p, q) \frac{q - \cos(d_{\kappa}(p, q)) p}{\|q - \cos(d_{\kappa}(p, q)) p\|},\quad d_{\kappa}(p,q)=\frac{1}{\sqrt{\kappa}} \cos^{-1}(\kappa \cdot \langle p, q \rangle)$ #### **2.3 Effects of $\kappa$** For the hyperbolic model, if $|\kappa|$ increases, the metric $\lambda_p$ will decrease, thus the distance between two points will become larger and points on the manifold acquire a stronger tendency to spread out. For the spherical model, enlarging $|\kappa|$ will lead to shrinkage of the distance, which means points on the sphere become closer together. [1]Ricci curvature of Markov chains on metric spaces. JFA, 2009. [2]Hypformer: Exploring efficient transformer fully in hyperbolic space. KDD, 2024. [3]Hyperbolic vision transformers: Combining improvements in metric learning. CVPR, 2022.
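For a concrete feel of the hyperbolic pair, here is a minimal sketch of the maps at the base point $p = 0$ (where $\lambda_0 = 2$ and the Möbius addition drops out), written in the standard $\kappa$-scaled form; this is our own illustration under those simplifying assumptions, not the authors' code.

```python
import math

def exp0(v, kappa):
    """Exponential map at the origin of a hyperbolic model with curvature kappa < 0:
    exp_0(v) = tanh(sqrt(-kappa) * ||v||) * v / (sqrt(-kappa) * ||v||). Assumes v != 0."""
    s = math.sqrt(-kappa)
    r = math.sqrt(sum(x * x for x in v))
    return [math.tanh(s * r) / (s * r) * x for x in v]

def log0(q, kappa):
    """Logarithmic map at the origin, the inverse of exp0. Assumes q != 0."""
    s = math.sqrt(-kappa)
    r = math.sqrt(sum(x * x for x in q))
    return [math.atanh(s * r) / (s * r) * x for x in q]
```

The round trip `log0(exp0(v))` recovers `v`, and `exp0` always lands strictly inside the ball of radius $1/\sqrt{-\kappa}$, which is exactly the tangent-space/manifold back-and-forth the mixed-curvature modules rely on.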
Polybasic Speculative Decoding Through a Theoretical Perspective
Accept (poster)
Summary: The paper introduces the polybasic speculative decoding framework, aimed at accelerating the inference of large language models (LLMs) through multi-model collaboration. Its core contributions include: 1) Establishing a theoretical framework, deriving the optimal inference time formula for multi-model systems (Lemma 3.1), and proposing the model insertion efficiency condition (Theorem 3.2), which quantifies the trade-offs between model capability, acceptance length, and computational cost. 2) Designing a multi-level verification algorithm, which employs a phased strategy where lightweight models generate candidate tokens, intermediate models quickly filter them, and the target model periodically verifies them (Algorithm 1), thereby reducing the target model's invocations and enhancing throughput. 3) Experimental validation shows a speedup ratio of 3×–4.43× on models such as Vicuna-7B, LLaMA2/3, and Qwen2, with mathematical reasoning tasks achieving up to 4.43 times acceleration, and the average acceptance length increasing to 9–10 tokens, while maintaining output distribution consistency with the original model. Additionally, theoretical proofs demonstrate that speculative sampling can reduce the variance of acceptance lengths (Theorem 3.3), and the framework's generality is showcased through task-adaptive designs (e.g., extensions of self-speculative methods). This work provides theoretical guidance and scalable practical solutions for efficient LLM inference. Claims And Evidence: **Claim** The paper introduces a theoretical framework that derives an optimal inference time formula for multi-model speculative decoding systems (Lemma 3.1) and establishes conditions for model insertion efficiency (Theorem 3.2). **Evidence** The authors provide rigorous mathematical proofs for Lemma 3.1 and Theorem 3.2, which quantify the relationship between model forward-pass costs, acceptance lengths, and computational overhead. 
The proofs are detailed and logically structured, supporting the claim that adding models can improve speedups under specific conditions. **Claim:** Speculative sampling reduces variance in token acceptance lengths, leading to more stable and predictable performance (Theorem 3.3). **Evidence:** The authors present a formal proof for Theorem 3.3, which mathematically demonstrates the relationship between acceptance probability and variance in multi-model settings. Additionally, empirical results in Section 4.3 validate this claim by comparing acceptance-length variance between speculative and greedy sampling, showing significantly lower variance for speculative sampling. Methods And Evaluation Criteria: 1. Proposed Methods: **Polybasic Speculative Decoding Framework**: The introduction of a polybasic speculative decoding framework is innovative and addresses limitations in existing dualistic draft-verify systems. By incorporating multiple models, the approach aims to increase parallelism and acceptance length, which directly tackles the computational bottleneck in LLM inference. Theoretical Foundations: The paper provides a strong theoretical basis for the framework, including a fundamental theorem on optimal inference time and insights into the stability of acceptance lengths. These theoretical contributions help guide practical implementation and offer a deeper understanding of the system's behavior. Algorithmic Design: The staged verification process and the detailed algorithm (Algorithm 1) provide clear guidelines for implementing the polybasic system. This makes the method more accessible and reproducible. 2. Evaluation Criteria: **Diverse Tasks and Models:** The evaluation spans multiple tasks (MT-bench, translation, summarization, QA, mathematical reasoning, RAG) and model families (Vicuna-7B, LLaMA2-Chat 7B, LLaMA3-8B, Qwen2-7B). This diversity ensures that the method's effectiveness is tested across different scenarios and model architectures. 
**Performance Metrics:** The use of speedup ratio and average acceptance length as metrics is appropriate for assessing the efficiency and effectiveness of the speculative decoding approach. These metrics directly relate to the goals of reducing inference time while maintaining output quality. Comparison to Baselines: The paper compares the proposed method to existing approaches like EAGLE and vanilla speculative decoding, providing a clear benchmark for evaluating improvements. Theoretical Claims: The theoretical claims presented in this paper are both novel and rigorously supported, offering a significant advancement in the field of speculative decoding for large language models (LLMs). The authors have successfully proven the correctness of their key theoretical contributions, including the fundamental theorem on optimal inference time (Theorem 3.2) and the stability analysis of acceptance lengths under speculative sampling (Theorem 3.3). These proofs are mathematically sound and provide a solid foundation for the proposed polybasic speculative decoding framework. The derivation of the optimal inference time formula in Theorem 3.2 is particularly impressive, as it elegantly captures the trade-off between adding additional models and the associated computational costs. This result not only provides clear guidance for model insertion but also establishes a principled approach to system design that can be generalized across different architectures and tasks. Furthermore, the proof of Theorem 3.3 highlights the benefits of speculative sampling in reducing variance in acceptance lengths, which is critical for ensuring stable and predictable performance in multi-model systems. Experimental Designs Or Analyses: The experimental designs and analyses presented in this paper are robust, well-structured, and effectively validate the proposed polybasic speculative decoding framework. Below is a positive review of the key aspects: **1. 
Comprehensive Evaluation Across Models and Tasks:** The authors conduct experiments on multiple widely-used LLMs (e.g., Vicuna-7B, LLaMA2-Chat 7B, LLaMA3-8B, Qwen2-7B) across diverse tasks such as MT-bench, translation, summarization, QA, mathematical reasoning, and RAG. This broad evaluation demonstrates the generalizability and adaptability of the polybasic approach, achieving impressive speedups (ranging from 3.31× to 4.43×) while preserving output fidelity. The choice of models and tasks ensures that the method's effectiveness is tested under varied conditions, reinforcing the reliability of the results. **2. Clear Metrics for Performance Assessment:** The use of two primary metrics—walltime speedup ratio and average acceptance length—is appropriate and aligns with the goals of speculative decoding. These metrics directly measure the efficiency gains and stability of the system, providing a clear and interpretable evaluation framework. The reported acceptance lengths (9.1–10+ tokens) are significantly higher than dualistic baselines, supporting the theoretical claim that multi-tiered speculation improves acceptance efficiency. Supplementary Material: I reviewed the supplementary material provided by the authors, which primarily consists of their implementation code. The code serves as a practical complement to the theoretical framework and experimental results presented in the paper. It includes detailed implementations of the polybasic speculative decoding framework, showcasing how the multi-model system is structured and executed. Relation To Broader Scientific Literature: The key contributions of this paper are highly significant when viewed in the context of the broader scientific literature on speculative decoding and large language model (LLM) optimization.
The introduction of a polybasic speculative decoding framework builds upon foundational ideas from prior works, such as blockwise parallel decoding and hierarchical speculative methods like TRIFORCE, while addressing critical gaps in scalability and theoretical grounding. Unlike earlier dualistic draft-verify paradigms, which often relied on empirical heuristics, this work provides a rigorous theoretical framework that aligns with recent trends toward formalizing speculative decoding principles. Furthermore, the demonstrated speedups (up to 4.43×) across diverse tasks extend the findings of prior studies like EAGLE and Medusa, showcasing superior performance through multi-model coordination. Essential References Not Discussed: The paper provides a comprehensive discussion of related works, effectively situating its contributions within the broader context of speculative decoding and large language model (LLM) optimization. Key advancements in dualistic draft-verify frameworks, hierarchical speculative methods, and multi-level drafting are thoroughly reviewed. Even if there exist tangential works not cited, they do not appear essential to understanding or evaluating the key contributions of this paper. Other Strengths And Weaknesses: Strengths: 1. The paper introduces a novel polybasic speculative decoding framework that extends beyond traditional dualistic draft-verify paradigms. By incorporating multiple models in a staged verification process, the approach significantly increases parallelism and acceptance lengths, achieving speedups of up to 4.43× across various tasks and models. 2. The framework is underpinned by rigorous theoretical analysis, including a fundamental theorem on optimal inference time (Theorem 3.2) and insights into the stability of acceptance lengths (Theorem 3.3). These contributions provide clear guidance for system design and optimization. 3.
Extensive experiments on diverse tasks (e.g., MT-bench, translation, summarization) and model families (e.g., Vicuna-7B, LLaMA2-Chat 7B) demonstrate consistent performance improvements while preserving output fidelity. The results highlight the method's adaptability and robustness. 4. The paper provides clear algorithmic guidelines (e.g., Algorithm 1) and discusses practical considerations like model selection and speculation length tuning, making the approach accessible and reproducible. Weaknesses: 1. While the theoretical framework supports the inclusion of four or more models, the authors acknowledge practical difficulties in finding suitable off-the-shelf models that meet the theoretical requirements. However, this limitation is relatively minor and can be addressed through future exploration of complementary techniques like advanced quantization or pruning. 2. The paper notes slightly lower acceleration in tasks requiring long-context generation (e.g., summarization, RAG). This limitation is understandable given the inherent challenges of managing KV caches in such scenarios. Nevertheless, the authors appropriately highlight this as an area for future improvement, suggesting potential solutions like caching strategies. 3. Although the paper focuses on LLMs, it could briefly discuss how the polybasic framework might extend to other domains, such as vision or multimodal models. However, this is a minor point and does not detract from the paper's primary focus on language models. Other Comments Or Suggestions: 1. While Theorem 3.2 provides a condition for model insertion, a more intuitive explanation or visual aid (e.g., a flowchart) could help readers better grasp when and how to add new models effectively. 2. The discussion on four-model systems is insightful but could benefit from specific examples of off-the-shelf models that partially meet the requirements, along with potential hybrid approaches (e.g., combining quantization and pruning). 3. 
Include a brief comparison to very recent speculative decoding methods (e.g., Hydra, Falcon) to highlight how the polybasic framework compares in terms of flexibility and performance. Questions For Authors: 1. In Theorem 3.2, you provide conditions under which adding a new model improves inference time. Could you elaborate on how sensitive these conditions are to variations in acceptance length and forward-pass cost? For example, could small deviations in empirical measurements (e.g., due to noisy data) lead to incorrect conclusions about whether a model should be added? 2. While you acknowledge challenges in implementing four-model systems, could you provide more specific insights or preliminary guidelines for overcoming these limitations? For instance, are there particular quantization techniques or pruning strategies that show promise for balancing computational overhead with acceptance length improvements? 3. Your stability analysis (Theorem 3.3) focuses on speculative sampling’s ability to reduce variance in acceptance lengths. How does this stability hold up in long-context tasks like summarization or RAG, where KV cache management becomes critical? Are there adjustments to the speculative sampling parameters that could further mitigate variance in such scenarios? 4. Recent works like Hydra (Ankner et al., 2024) and Falcon (Gao et al., 2024) explore advanced speculative decoding techniques. How does your polybasic framework compare to these methods in terms of flexibility, scalability, and performance across different model architectures? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for these insightful questions that help clarify the practical implications of our theoretical framework. **Regarding Theorem 3.2's sensitivity:** We recommend measuring the relevant parameter values under identical experimental conditions. While measurement errors may occur, our theoretical framework is robust against minor measurement inaccuracies because when comparing inference time changes after adding models, the errors are relative. This relative nature of measurement ensures that our framework's predictions remain reliable even with some experimental noise. **On implementing multi-model systems:** The primary challenge in extending to multi-model systems is KV cache management. This can be addressed using techniques similar to those in MagicDec, such as StreamingLLM technology, which preserves only initial tokens (Attention Sinks) and KV caches within a sliding window, significantly reducing memory requirements. Our theoretical framework guides model selection and is orthogonal to other speculative methods in its approach. We plan to enhance our system in future work. **Stability in long-context scenarios:** Theorem 3.3's variance analysis strengthens our theoretical framework by reducing errors from acceptance token length variations during model selection, maintaining stability across all tasks. For long-context tasks, effective KV cache management is clearly essential, and our framework provides a stable foundation upon which these optimizations can be built. **Comparison with recent works:** Thank you for your suggestion - we have expanded our experimental comparison to include Hydra, which further validates the effectiveness of our approach across different speculative decoding implementations. 
($c$: speed ratio, $μ$: average acceptance length)

| | Model | $c$ | $μ$ |
|-|-|-|-|
| Our | Vicuna-7B | **3.48$\times$** | **9.88** |
| Hydra | Vicuna-7B | 2.30$\times$ | 4.42 |

In conclusion, we appreciate your recognition of our work and your thorough review. We hope these responses address your questions and highlight the contributions of our theoretical framework to the understanding and optimization of speculative sampling systems.
Summary: This paper explores using a chain of draft models rather than a single draft model during speculative decoding, such that the first draft model generates tokens autoregressively, and each subsequent draft model verifies the tokens generated. When an intermediate draft model rejects a token, the first draft model generates more tokens. The paper provides theoretical analysis to find the condition under which an additional draft model could lead to speedup, then implements a chain of 2 draft models (EAGLE as the first draft and a 4-bit quantized version of the model as a second draft) and shows significant speedups over EAGLE only. ## Update after Rebuttal I have read all the reviews and rebuttals. I appreciate the authors' response to my review. I would like to keep my score, i.e., while I am hesitant to accept the paper due to its lack of coherence and limited contribution, I won't fight against it if it's accepted. However, I highly recommend re-writing parts of the paper to ensure its coherence and to add results with different types of small drafters, not just EAGLE, to ensure that the theoretical framework of the paper (which the authors claim is their main contribution) is generic. Claims And Evidence: - The claim of obtaining higher speedups using chained drafters compared to a single drafter is backed up by the results - However, the claim of providing a theoretical foundation for chained drafters is disputable. The authors do provide a theorem on the condition of when adding a drafter would lead to speedup, but they do not use it in their experiments.
Methods And Evaluation Criteria: - The experiments were made on 4 models of similar size (7B) from different generations and families (Llama2, Llama3, Vicuna, Qwen) - I would have preferred if different sizes (e.g., 1B, 13B, 70B) were evaluated - 6 different language generation tasks were evaluated Theoretical Claims: - Equation 2: The mathematical formulation is a bit oversimplified: we have decoding time and prefill time. And the first draft model, i.e., $ T_{n} $, decodes tokens, while other models perform prefill of K tokens - Line 157: Please cite the source of the equation. I believe the mathematical formulation shown in equations 1 to 3 of the [MagicDec](https://arxiv.org/abs/2408.11049) paper is better - Some symbols and equations were not clear to me: - Line 169: I don't understand what $i - new$ or $new - (i+1)$ means if $new$ is supposed to be between $i$ and $i+1$ - Line 174 (right column): What is $\alpha$? - Line 191 (right column): What is $S$? Experimental Designs Or Analyses: - Although results show cascaded drafters lead to bigger speedups, the connection with the theoretical proof is not clear. Hence, it is not clear what the contribution of this paper is compared to other papers that propose cascaded drafters - Line 328 (Right column): "As anticipated, speculative sampling yields smaller variance" My understanding from Theorem 3.3 was that lower variance comes from adding multiple draft models, not from speculative sampling. Supplementary Material: No Relation To Broader Scientific Literature: - The Related Work section categorizes speculative decoding techniques from unique perspectives - The paper cited similar work on "cascade or multi-level drafting (Chen et al., 2023b; Sun et al., 2024)". The authors claim that the difference between the paper and such other papers is that they provide a theoretical foundation. However, the theorems deduced in the paper were not used in the experiments to obtain any improvement over similar work.
Essential References Not Discussed: - The paper cited other work that did cascaded drafters (Chen et al., 2023b; Sun et al., 2024) but did not really show the differences or contributions compared to them Other Strengths And Weaknesses: - Strengths: - The paper is written with relative clarity and was relatively easy to follow - The speedups presented in the results are strong - Weaknesses: - There is a gap between the theoretical foundation and Experiments sections. The Experiments or Results section does not leverage any of the theoretical analysis done. e.g., there was no measurement of $ T_{new} $ or $ L_{new} $ to verify if it satisfies the condition of Theorem 3.2 - My suggestion would be to provide a couple of experiments, one experiment where the additional drafter model satisfies Theorem 3.2, and another experiment where the additional drafter doesn't. In both experiments, measure their corresponding $ T_{new} $ and $ L_{new} $, and show how their values correlate with the final speedup to prove Theorem 3.2 - Also, I suggest looking at [MagicDec](https://arxiv.org/abs/2408.11049): look at its mathematical formulations, and its experimental analysis that is directly connected to the mathematical formulation I am leaning towards rejecting the paper because the paper claims its contribution compared to other papers on cascaded drafts is the theoretical foundation, but the theoretical foundation in the paper is totally decoupled from the results section. Also, there were issues I described above in the equations, usage of symbols without definitions, disputable claims, and ad hoc use of natural language in a main part of the Algorithm pseudocode.
Other Comments Or Suggestions: - Equations in Page 4 are not numbered - Algorithm 1 Line 30: I would have preferred if there was an explanation or pseudocode rather than just writing "(continue accumulating tokens)" - Line 282: There is a typo in "in SectionSection 3.2" - Page 7: "As Figure 3 Table 1 shows," should be "As Figure 3 and Table 1 show," Questions For Authors: I have written questions in different boxes above Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **1. On the Connection Between Theory and Experiments** We appreciate your concern about the gap between our theoretical foundation and experiments. To address this, we've conducted specific experiments that directly validate our theoretical framework:

| $T_i$ | $L_{i-new}$ | $T_{new}$ | $L_{new}$ | $T_{i+1}$ | $L_i$ | Speedup |
|-|-|-|-|-|-|-|
| 22ms | 3.83 | 17.61ms | 3.77 | 4ms | 4.34 | 2.61×→1.08× |
| 22ms | 6.26 | 7ms | 4.67 | 4ms | 4.34 | 2.61×→3.48× |

For Case 1:
- $i$th: vicuna-7b, $new$: vicuna-1b, $(i+1)$th: EAGLE
- $T_{new}/T_i$ = 0.80
- $L_{new} \times (1/L_i - 1/L_{i-new})$ = -0.117
- Since 0.80 > -0.117, Theorem 3.2 predicts performance decrease

For Case 2:
- $i$th: vicuna-7b, $new$: W4A4 vicuna-7b, $(i+1)$th: EAGLE
- $T_{new}/T_i$ = 0.318
- $L_{new} \times (1/L_i - 1/L_{i-new})$ = 0.330
- Since 0.318 < 0.330, Theorem 3.2 predicts performance improvement

These results confirm our theoretical framework's ability to guide model selection decisions for speculative decoding systems. **2. Expanded Model Experiments** Following your recommendation, we have extended our experiments to include Vicuna-13B and Llama-2-Chat-70B models, both showing significant acceleration effects.

| | Model | $c$ | $μ$ |
|-|-|-|-|
| Our | V13B | **2.69$\times$** | **8.62** |
| | L70B | **2.92$\times$** | **7.48** |
| EAGLE | V13B | 2.30$\times$ | 4.42 |
| | L70B | 2.46$\times$ | 4.08 |

**3. Theoretical Formulations and Symbol Definition Issues** We appreciate your detailed feedback on our theoretical formulations. We will revise the identified errors and provide clearer explanations for the mathematical notations and symbols used throughout the paper. **Equation 2:** In Equation $T = \sum_{i=1}^n F_i \cdot T_i$, we combine prefill and decoding costs into a single term $T_i$ to simplify **model selection for speculative decoding**. This focuses on comparing efficiency when adding models rather than precisely calculating absolute inference times. Our experimental results (like Vicuna-7B's $4.43\times$ speedup) validate this approach.
While separating costs ($T_i = T_i^{\text{prefill}} + T_i^{\text{decode}}$) could add precision, it wouldn't change our core theoretical conclusions. We'll clarify this rationale in the final manuscript. **Line 157 and MagicDec:** Line 157 presents a binary form of Equation (2). We appreciate the reference to MagicDec - our theoretical approach indeed aligns with theirs, though we chose our formulation for conciseness and clearer derivations. For equivalence: - **MagicDec**: Speedup = $\Omega \cdot \frac{T_T}{\gamma T_D + T_V}$ - **Ours (dual-model)**: Speedup = $\frac{L_1 T_T}{T_T + T_D}$ By mapping $\Omega \leftrightarrow L_1$ and $\gamma T_D + T_V \leftrightarrow T_T + T_D$, the formulas are mathematically equivalent. **Some symbols and equations were not clear** **Regarding line 169** (the positioning of "new" between i and i+1): As mentioned in our paper's second contribution, our theoretical framework aims to discover conditions under which adding auxiliary models can improve inference speed in speculative sampling systems. Intuitively, we should insert a model with stronger inference capabilities between the draft model and target model to bring the composite draft model's capabilities closer to the target model. **line 174** (the symbol $α$): The paper mentions that acceptance probability $p = 1 - α$, where α represents the rejection probability. **line 191** (the symbol $S$): $S$ is simply an intermediate summation value used for notational convenience, which we didn't explicitly define given space constraints. You also noted that equations on page 4 are unnumbered - we chose not to number equations that are merely part of proofs and not essential to the paper's central theoretical framework. **Theorem 3.3** We realize that our use of "$n$-model" may have caused confusion. In this context, "$n$" refers to "a truncated geometric distribution of maximum $n$ trials" (line 189). 
We adopted this notation to maintain consistency with [SpecDec](https://arxiv.org/pdf/2211.17192). In our revised manuscript, we will add a clarifying statement at this location to prevent any misunderstanding. **4. Comparison with Other Cascade Methods** We did not experimentally compare our approach with other cascade methods because the current state-of-the-art in speculative sampling is [EAGLE](https://sites.google.com/view/eagle-llm), which significantly outperforms these cascade methods in terms of acceleration. Therefore, we focused our comparison on EAGLE. **5. Other Issues** Regarding Algorithm 1, the note "(continue accumulating tokens)" was intended to indicate continuation of the Draft execution in line 7. We will remove the else branch to ensure the correctness of the pseudocode. Thank you for your thorough review. We hope our responses clarify your concerns and demonstrate our paper's contributions. We respectfully request an improved score. If anything remains unclear due to the word limit, please communicate with us with any further questions. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. Here are some comments: I suggest adding the first table in this rebuttal, which validates the theorem, to the main body of the paper. > We did not experimentally compare our approach with other cascade methods because the current state-of-the-art in speculative sampling is EAGLE, which significantly outperforms these cascade methods in terms of acceleration. Therefore, we focused our comparison on EAGLE. Still, if the main claim of the paper is a theoretical foundation for cascaded speculative decoding, why not compare with other types of drafters and verify the theorem for them? It would prove that the theoretical framework is generic. > Regarding Algorithm 1, the note "(continue accumulating tokens)" was intended to indicate continuation of the Draft execution in line 7. It's not about removing the else part.
My concern is that this is a paper submitted to a top-tier conference, so readers would expect a comprehensive algorithm description. I am increasing the score to Weak Reject. In my humble opinion, if the paper is accepted, the writing needs to be revisited to ensure coherence between the different sections of the paper, especially between the theoretical part and experiments (e.g., by adding the aforementioned Table). --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the constructive feedback and valuable suggestions. In response to your points, we would like to address the following: 1. Regarding the validation of our theoretical framework's generality, we fully agree with your perspective. We have reproduced [Cascade Speculative Drafting](https://arxiv.org/pdf/2312.11462) and conducted comprehensive tests on various model scales including flan-t5-xxl, base, and small. The experimental results strongly confirm the universal applicability and correctness of our theoretical framework. We will incorporate these experimental results in the revised manuscript, including adding the table that supports our theorem to the main body of the paper as you suggested.

| $T_i$ | $L_{i-new}$ | $T_{new}$ | $L_{new}$ | $T_{i+1}$ | $L_i$ | Speedup |
|-|-|-|-|-|-|-|
| 47.52ms | 3.50 | 19.16ms | 3.02 | 12.42ms | 2.28 | 3.19×→3.88× |

- $i$th: FLAN-T5-XXL, $new$: FLAN-T5-base, $(i+1)$th: FLAN-T5-small
- $T_{new}/T_i$ = 0.403
- $L_{new} \times (1/L_i - 1/L_{i-new})$ = 0.461
- Since 0.403 < 0.461, Theorem 3.2 predicts performance improvement

2. Concerning the description of Algorithm 1, due to the character limit in our first response, our explanation was indeed not sufficiently clear. To clarify: the **else** branch in line 29 was intended to represent the case where **cnt<μ** from line 18.
In practice, we don't actually need a dedicated else branch, as after the if statement concludes, the program naturally continues executing the loop from line 5, which is precisely what we meant by "continue accumulating tokens." In the revised version, we will remove the redundant else branch and convert it into a comment for clarity, making the algorithm description more complete and accurate. 3. We commit to thoroughly revising the paper in the final version, not only correcting grammatical errors but also strengthening the coherence between different sections as you suggested, particularly the connection between the theoretical framework and experimental results. We will add the mentioned table and comparative experiments with other drafters to make our paper more rigorous and comprehensive. We sincerely hope you will reconsider our response. Given the innovation of our theoretical framework and the superior experimental results, we respectfully request that you consider improving your evaluation of our paper. We believe that the revised manuscript will better meet the standards expected of a top-tier conference publication.
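To make the connection between Theorem 3.2 and the measurements in these tables explicit, the decision rule applied throughout the rebuttal can be expressed as a small helper. This is a sketch under our own naming (`insertion_improves` is not from the paper); the numeric values are copied from the rebuttal tables, and the inequality direction follows the rebuttal's reading of Theorem 3.2:

```python
def insertion_improves(T_i, T_new, L_i, L_i_new, L_new):
    """Theorem 3.2 insertion test as applied in the rebuttal:
    inserting the new model helps iff T_new / T_i < L_new * (1/L_i - 1/L_{i-new})."""
    return T_new / T_i < L_new * (1.0 / L_i - 1.0 / L_i_new)

# Case 1: vicuna-1b between vicuna-7b and EAGLE -> predicted slowdown
print(insertion_improves(T_i=22, T_new=17.61, L_i=4.34, L_i_new=3.83, L_new=3.77))    # False
# Case 2: W4A4 vicuna-7b between vicuna-7b and EAGLE -> predicted speedup
print(insertion_improves(T_i=22, T_new=7, L_i=4.34, L_i_new=6.26, L_new=4.67))        # True
# FLAN-T5 cascade from the follow-up comment -> predicted speedup
print(insertion_improves(T_i=47.52, T_new=19.16, L_i=2.28, L_i_new=3.50, L_new=3.02)) # True
```

Matching these predictions against the observed speedup changes (2.61×→1.08×, 2.61×→3.48×, and 3.19×→3.88×) is exactly the kind of theory-to-experiment validation the reviewer requested.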
Summary: The submission describes a new theoretical framework to improve the speed of LLM decoding in order to reduce latency. ## update after rebuttal I think this rebuttal was very useful and reaffirmed me in my slightly improved score of 3. I still share concerns with Reviewer KeQh in the areas of coherence and clarity on the choice of drafting models. Claims And Evidence: Initially I found the theory sound and the experimental results showing improved speed very good. But the more I reflected on the content, the more issues I found. The following might be subjective: * Why is accuracy not important at all? Or does the proposed method (and the comparison baseline of EAGLE) show no drop in accuracy at all? At least a single experiment to prove the impact on accuracy is standard for any speedup method. * There are inconsistencies that didn't help to understand the overall impact. The method is described as using 3 models and possibly extending to more, but using 3 models is already pretty tricky (Section 3.3 mentions at least one of the 3 models has to be trained separately). Then Section 4.4 discusses using 4 models and I've no clue how that helps any further. I find that discussion off-topic. Isn't the focus on improving impractical latency of inference for a single LLM? It must be doubted that multiple draft models that require sequential execution can help to improve latency. * Reference of the baseline technique is confusing: is the mix of EAGLE(-1) and EAGLE-2 intentional, or is "EAGLE" a typo for a consistent "EAGLE-2"? Methods And Evaluation Criteria: ML speedup techniques are usually lossy, especially when applied to decoding (e.g. pruning). Choosing the right operating point is mandatory, and that requires selecting from tradeoff curves. The presentation fails to describe the impact on accuracy, or to state explicitly that there is none, if that is the case. Theoretical Claims: Sound to me. However, there's some clarity missing on why to use more than the 3 proposed models, as mentioned above.
Experimental Designs Or Analyses: Limited as the impact on accuracy is missing. Supplementary Material: Didn't look at it in detail. Contains only code. Relation To Broader Scientific Literature: None. Essential References Not Discussed: Only EAGLE is being cited, but Table 1 contains reference to EAGLE-2 instead. Other Strengths And Weaknesses: None Other Comments Or Suggestions: I think the topic is of big interest, but speed improvements need to be put into perspective of impact to accuracy and the presentation lacks clarity that clearly requires improvements. Questions For Authors: Please work on clarity. I don't want to repeat the points here. See Section "Claims And Evidence". Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **1. Response on Accuracy Evaluation** We appreciate your highlighting the importance of accuracy evaluation. This is indeed a critical point that deserves attention. Our method inherently preserves output distribution through the verification mechanism in Algorithm 1. The VERIFY procedure ensures draft tokens are only accepted when they align with the target model's distribution. When verification fails, we fall back to sampling directly from the target model, guaranteeing identical outputs to standard autoregressive generation. This preservation-by-design is why speculative decoding methods typically focus on speedup metrics rather than accuracy. We followed standard performance metrics in this field to maintain consistency with prior work. Thank you for this valuable suggestion. **2. Response to Reviewer Concerns on Model Configuration** Thank you for highlighting these important questions about our paper's consistency and practical focus. We appreciate the opportunity to clarify based on our contributions and motivations: **Theoretical Framework**: Our core contribution is establishing a theoretical framework for speculative sampling that enables the construction of multi-model speculative systems. When models satisfy appropriate conditions, we can extend from binary to multi-model configurations. **Three-Model Implementation**: The three-model system represents a concrete instantiation of this framework, chosen because it achieves an optimal balance between theoretical complexity and practical feasibility. Naturally, this led us to explore extensions to more models, along with their inherent limitations. **Focus on Single LLM Inference**: We fully agree with your point about optimizing inference latency for a single LLM. Speculative sampling works by having draft models generate multiple candidate tokens at once, which the target LLM then verifies in a single forward pass. 
This reduces the number of target model executions, thereby optimizing overall inference latency. Our discussion of four-model systems aims precisely at further improving LLM inference speed, aligning with your suggestion. **Training Considerations**: Our approach has minimal training requirements. We can utilize quantized versions of existing models or off-the-shelf smaller models, making our method orthogonal to and compatible with most existing speculative sampling approaches. **Practical Value**: The main value of our work lies in providing a theoretical framework that enables researchers to systematically optimize multi-model speculative decoding systems, with demonstrated practical improvements in reducing target LLM inference latency. **3. Response to Terminology Inconsistency** Thank you for highlighting the inconsistency in our reference to baseline techniques. We apologize for the confusion this has caused. We acknowledge the error in our notation. Throughout the paper, we intended to reference EAGLE-2 consistently as our baseline, as it represents an improvement over EAGLE-1. In the revised version, we will: 1. Standardize all references to EAGLE-2 throughout the manuscript 2. Add proper citations to both EAGLE-1 and EAGLE-2 3. Include a brief explanation clarifying that EAGLE-2 is an advancement of EAGLE-1, which is why we selected it as our baseline for comparison Thank you for your thoughtful review of our paper. We genuinely appreciate both your critical observations and the opportunity to address them. Based on the clarifications provided, we respectfully request that you reconsider your evaluation of our paper. We remain available to address any additional questions or concerns you might have.
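The lossless guarantee described in the response rests on the standard speculative-sampling accept/reject rule: a drafted token is accepted with probability min(1, p/q) and, on rejection, a token is resampled from the normalized residual of the target distribution. A minimal sketch with toy distributions (illustrative only, not the authors' implementation):

```python
import random

def verify_token(p, q, rng=random.random):
    """Accept the drafted token with probability min(1, p/q), where `p` is the
    target model's probability for the token and `q` the draft model's."""
    return rng() < min(1.0, p / q)

def residual_distribution(p_dist, q_dist):
    """On rejection, resample from the normalized positive part of p - q;
    this is what keeps the overall output distribution identical to the
    target model's (the 'preservation-by-design' argument)."""
    residual = [max(pi - qi, 0.0) for pi, qi in zip(p_dist, q_dist)]
    z = sum(residual)
    return [r / z for r in residual]

# Toy 3-token vocabulary: residual mass concentrates where the target
# assigns more probability than the draft.
p_dist = [0.6, 0.3, 0.1]
q_dist = [0.2, 0.5, 0.3]
print(residual_distribution(p_dist, q_dist))  # [1.0, 0.0, 0.0]
```

Because every emitted token either passes this test or is drawn from the residual, the output distribution matches standard autoregressive decoding, which is why speedup rather than accuracy is the metric reported in this literature.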
Summary: This paper proposes a novel polybasic speculative decoding framework. Specifically, the authors prove a fundamental theorem that characterizes the optimal inference time for multi-model speculative decoding systems. Through the theoretical investigation of multi-model token generation, the authors propose a three-model system implementation. Experiments demonstrate that the proposed approach yields speedup ratios ranging from 3.31 to 4.01 while preserving the original output distribution. ## update after rebuttal The authors' response has addressed my concerns. Thus, I will keep my original positive score. Claims And Evidence: Yes. Methods And Evaluation Criteria: **Method** 1. (Strengths) The proposed polybasic speculative decoding framework, which investigates multi-model token generation systems with theoretical and empirical foundations, is novel and interesting. 2. (Strengths) The authors prove a fundamental theorem that characterizes the optimal inference time for multi-model speculative decoding systems, shedding light on how to extend beyond the dualistic approach to a more general polybasic paradigm. 3. (Weaknesses) The authors propose a general polybasic speculative decoding framework but only implement a three-model speculative decoding system. Although the authors discuss the limitations of the four-model system design, it would be more convincing if they could evaluate the effectiveness of their method in a four-model system. **Evaluation Criteria** 1. (Strengths) Experiments demonstrate that the proposed approach yields speedup ratios ranging from 3.31 to 4.01 while preserving the original output distribution. 2. (Strengths) The authors conduct thorough experiments on four different target models, including Vicuna-7B, LLaMA2-Chat 7B, LLaMA3-8B, and qwen2-7B-Instruct. 3. (Weaknesses) It would be more convincing if the authors could conduct experiments on larger models, such as models with 70B parameters. 
Theoretical Claims: Yes, the theoretical claims are correct. Experimental Designs Or Analyses: 1. (Strengths) Experiments demonstrate that the proposed approach yields speedup ratios ranging from 3.31 to 4.01 while preserving the original output distribution. 2. (Strengths) The authors conduct thorough experiments on four different target models, including Vicuna-7B, LLaMA2-Chat 7B, LLaMA3-8B, and qwen2-7B-Instruct. 3. (Weaknesses) It would be more convincing if the authors could conduct experiments on larger models, such as models with 70B parameters. Supplementary Material: Not applicable. The authors do not provide appendices. Relation To Broader Scientific Literature: 1. The proposed polybasic speculative decoding framework, which investigates multi-model token generation systems with theoretical and empirical foundations, is novel and interesting. 2. The authors prove a fundamental theorem that characterizes the optimal inference time for multi-model speculative decoding systems, shedding light on how to extend beyond the dualistic approach to a more general polybasic paradigm. Essential References Not Discussed: No. Other Strengths And Weaknesses: Please see the above comments. Other Comments Or Suggestions: No Questions For Authors: Please see the above comments. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank you for your thoughtful feedback and the positive evaluation of our work. **1. Experiments on Larger Models (13B, 70B)** Following your suggestion, we have conducted additional experiments with Vicuna-13B and LLaMA-2-chat-70B. ($c$: speed ratio, $\mu$: average acceptance length)

| Method | Model | $c$ | $\mu$ |
|:-:|:-:|:-:|:-:|
| Ours | Vicuna-13B | **2.69$\times$** | **8.62** |
| Ours | LLaMA-70B | **2.92$\times$** | **7.48** |
| EAGLE | Vicuna-13B | 2.30$\times$ | 4.42 |
| EAGLE | LLaMA-70B | 2.46$\times$ | 4.08 |

We initially focused on smaller models to ensure all experiments could be run on a single GPU, facilitating direct comparison with existing methods. This is consistent with the evaluation approach in most speculative decoding literature. However, we agree that demonstrating scalability to larger models strengthens our contributions. **2. Four-Model System Evaluation** Concerning the four-model system evaluation, we have implemented a prototype following our theoretical guidance. Using LLaMA-2-7B as the target model with three draft models of decreasing capacities (4-bit quantized LLaMA-2-7B, 4-bit quantized LLaMA-2-1B, and EAGLE), we observed the following results:

| System | Model | $c$ | $\mu$ |
|:-:|:-:|:-:|:-:|
| 4-model | LLaMA-2-7B | **3.75$\times$** | 9.80 |
| 3-model | LLaMA-2-7B | 3.66$\times$ | **9.84** |

We note that diminishing returns become apparent as more models are added. This is because finding models that satisfy our Theorem 3.2 becomes increasingly challenging. Our future work will focus on quantization, sparsification, and KV cache optimization to reduce model inference time. We appreciate your suggestion, which has further advanced our work. We thank the reviewer for these valuable suggestions and have incorporated these additional experiments and discussions in the final version of our paper. --- Rebuttal Comment 1.1: Comment: Thanks for the authors’ response and clarifications. 
However, I still have two minor questions: 1) **Why is the comparison limited to Eagle, without including Eagle-2?** Given that Eagle-2 is a more recent and competitive baseline, it would provide a more comprehensive evaluation and better highlight the strengths and weaknesses of the proposed method. 2) **Why does the four-model system result in a lower acceptance rate?** Could the authors elaborate on the underlying reason for this degradation? Additionally, I would appreciate a clearer explanation of where the acceleration gains come from in the four-model setup, especially considering the trade-off with acceptance rate. --- Reply to Comment 1.1.1: Comment: Thank you for your prompt response and questions. **1. Regarding the comparison with Eagle-2:** Indeed, our baseline is Eagle-2 rather than Eagle-1, and there was a labeling error in the table in our rebuttal. We reproduced Eagle-2 and our proposed method under identical experimental settings and environments to ensure consistency and fairness in the comparison. We appreciate you pointing this out and will correct this representation in the final version. **2. Regarding the acceptance rate in the four-model system:** We note your observation about the apparent decrease in acceptance rate for the four-model system. In fact, this decrease falls within the range of statistical error and does not represent a significant performance degradation. Based on our analysis, the system's average acceptance length primarily depends on the draft model closest to the target model, as its inference capabilities largely determine the overall acceptance length. In the expansion from three to four models, we kept this key model (4-bit quantized LLaMA-2-7B) unchanged, so the acceptance rate should theoretically remain relatively stable. **3. 
Regarding the source of acceleration benefits:** Regardless of whether we use three or four models, the fundamental reason for acceleration is that the gain from higher acceptance length outweighs the latency introduced by additional models, which is a direct manifestation of our theoretical framework. To elaborate further, any model we add to the system needs to satisfy two key conditions: it should have inference capabilities close to the next-level model to ensure high acceptance rates, and it must have fast inference speed (achievable through techniques such as quantization, KV cache optimization, etc.). Only when both conditions are met can the introduction of a new model create a net benefit in the overall inference chain, thereby enhancing the system's efficiency. We hope these explanations clarify your questions. Thank you again for your recognition of our work.
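The net-benefit condition stated in this reply can be made concrete with a back-of-envelope latency model. The cycle structure and all latency numbers below are assumptions for illustration, not measurements from the paper:

```python
def speculative_speedup(mu, t_target, t_drafts):
    """Rough model: one speculative cycle runs every draft model once plus a
    single target verification pass and yields `mu` accepted tokens on
    average; plain autoregressive decoding would instead spend `mu` target
    passes on those tokens."""
    cycle_time = sum(t_drafts) + t_target
    return mu * t_target / cycle_time

# Adding a draft model is a net win only when the acceptance-length gain
# outweighs its added latency (hypothetical numbers):
base = speculative_speedup(mu=4.0, t_target=1.0, t_drafts=[0.25])
extended = speculative_speedup(mu=8.0, t_target=1.0, t_drafts=[0.25, 0.05])
print(base, extended)  # the extra fast draft model pays off in this setting
```

This mirrors the two conditions named above: a new model must raise the effective acceptance length (high `mu`) while contributing little to `cycle_time` (fast inference, e.g. via quantization).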
SHE: Streaming-media Hashing Retrieval
Accept (poster)
Summary: Existing CMH methods often implicitly assume that all modalities are prepared before processing. However, in practical applications (such as multi-modal medical diagnosis), it is very challenging to collect paired multi-modal data simultaneously. Specifically, modalities are collected chronologically, forming streaming-media data (SMA). To handle this, all previous CMH methods require retraining on data from all modalities, which inevitably limits the scalability and flexibility of the model. To address this issue, this paper proposes a novel CMH paradigm named Streaming-media Hashing rEtrieval (SHE) that enables parallel training of each modality. Specifically, SHE proposes a knowledge library mining module (KLM) that extracts a prototype knowledge library for each modality, thereby revealing the common distribution of each modality. Then, SHE proposes a knowledge library transfer module (KLT) that uses the historical knowledge library to update and align new knowledge to ensure semantic consistency. Finally, to enhance the intra-class semantic relevance and inter-class semantic differences, SHE develops a discriminative hash learning module (DHL). In general, this paper is significantly innovative and has important value in practical applications. Claims And Evidence: Yes, the claim that SHE can train all modalities in parallel and improve the flexibility and scalability of cross-modal retrieval is supported by extensive experimental evidence on four datasets. The improvement in retrieval performance is shown in the experimental results. Methods And Evaluation Criteria: Yes, the proposed SHE is suitable for solving the cross-modal retrieval problem under streaming-media data. The selected baseline datasets and evaluation metrics such as the Mean Average Precision (MAP) score are suitable for evaluating the superiority of the proposed SHE. 
This paper adopts a reasonable experimental setting and compares SHE with state-of-the-art methods, which helps verify its effectiveness. Theoretical Claims: Yes, the theoretical claim that SHE can mine a knowledge library for each modality as a medium for maintaining semantic consistency is well-founded. This is because the knowledge library can capture the common distribution information in streaming-media data, the knowledge libraries of different modalities are aligned through the Knowledge Library Transfer (KLT) module, and using the knowledge library as a medium can ensure the consistency of cross-modal semantics. Experimental Designs Or Analyses: Yes, the experimental setting is reasonable. Datasets containing five modalities are used for the experiments, and SHE is compared with other state-of-the-art (SOTA) methods to fully demonstrate its superiority under streaming-media data. Additionally, the ablation experiments demonstrate the effectiveness of each component of SHE. Supplementary Material: Yes, the supplementary materials have been reviewed and include pseudo code of the algorithm, dataset introduction, parameter analysis, the impact of bit length on performance, more PR curve results, the impact of media learning sequence on performance, and discussion of limitations. This supplementary information helps to fully understand the proposed SHE framework. Relation To Broader Scientific Literature: This paper positions the proposed Streaming-Media Hashing rEtrieval (SHE) within the field of cross-modal retrieval (CMR), comparing SHE with numerous cross-modal hashing (CMH) methods and three real-valued CMR methods. The study underscores the limitations of existing CMH methods in handling streaming-media data and introduces SHE as a novel hashing framework to address these challenges. 
Essential References Not Discussed: Key references are cited in this paper, but the authors are recommended to cite and discuss more cross-modal retrieval methods that specifically handle streaming-media data to enrich the research context of the current work. Other Strengths And Weaknesses: Strengths: 1. This paper introduces a Streaming-media Hash rEtrieval (SHE) framework specifically designed for streaming-media applications, offering a unique and innovative research perspective while demonstrating its practical value in the field of cross-modal retrieval. 2. The experimental results show that SHE achieves superior retrieval performance across multiple benchmark datasets. Additionally, ablation studies confirm the contributions of each module to the overall model performance. 3. This paper demonstrates a high degree of clarity in its writing style and organizational structure, with well-articulated and explicit experimental motivations. Weaknesses: 1. Some loss function formulations are relatively complex, which may hinder the understanding of the core optimization objectives. The authors should provide more detailed explanations of key loss terms to enhance readability. 2. The authors do not explain why only the nearest prototype is considered when mining the knowledge library. Other Comments Or Suggestions: 1. The authors are recommended to provide more detailed explanations of key loss terms to enhance readability and help readers better grasp the design motivations. 2. The authors are recommended to provide more discussion and comparison with more cross-modal retrieval methods that specifically handle streaming-media data. Questions For Authors: 1. This paper mines a knowledge library from each modality by KLM, and then aligns the knowledge libraries of different modalities through the KLT module to achieve semantic alignment. Can the author explain the benefits of doing so? 2. 
When mining the knowledge library, this paper only considers the nearest prototype with the same category. What are the benefits of doing so and can it be compared with other methods? 3. Apart from retrieval efficiency, could the authors provide other advantages of SHE compared to MARS? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Our responses are listed below. **R-Weakness-1**: To enhance readability and understanding of our proposed SHE, we provide a detailed explanation of some key losses, which can be found at **R-Weakness-3** in [Reviewer TSiu](https://openreview.net/forum?id=JqLKV0L5hM&noteId=KyOMFKYOSw). **R-Weakness-2**: When mining the knowledge library, if all class prototypes are considered simultaneously, a sample may be influenced by multiple prototypes, leading to an unstable optimization objective. Thus, we only consider the nearest prototype. Besides, to verify the effectiveness of the above behavior, we conduct experiments on the XMedia and XMediaNet datasets with 128 bits by comparing the consideration of only the nearest prototype versus considering all prototypes. Specifically, we replace the KLM loss with the KLM* loss when mining the knowledge library, and the KLM* loss can be formulated as: $L_{klm^*} = -\frac{1}{n_m} \sum_{i=1}^{n_m} \log \left( \frac{1}{K} \sum_{k=1}^{K} s\left(x_m^i | k\right) \right).$ The results are shown as follows:

|Loss|XMedia|XMediaNet|
|:-:|:-:|:-:|
|$L_{klm^*}$|73.3|46.9|
|$L_{klm}$|75.0|51.0|

The results show that only considering the nearest prototype with the same category when mining the knowledge library can enjoy a clear optimization objective and a better performance. **R-Suggestion-1**: We provide detailed explanations of key loss terms in **R-Weakness-3** of [Reviewer TSiu](https://openreview.net/forum?id=JqLKV0L5hM&noteId=KyOMFKYOSw). **R-Suggestion-2**: We investigate the latest studies and found that there are no other CMH works to handle cross-modal retrieval (CMR) in streaming-media scenarios (SMC). Here, we discuss some real-valued representation-based CMR methods, including SDML[1] and DRCL[2], which are specifically designed to accomplish CMR in SMC. 
1) These methods are real-valued based representations, which involve high computational complexity and memory overhead, making it difficult to meet the fast retrieval demands of large-scale datasets. 2) They deploy either a randomly initialized common space or an identical transformation weight matrix to guide semantic alignment, which overlooks the discrepancies among modalities. Since the randomly initialized common space or learned transformation weight matrix may retain modality-specific features that are not very relevant to the new modality, directly applying it may lead to semantic discrepancies. By contrast, our SHE transfers the baseline knowledge library to the new modality, thereby obtaining more semantically consistent representations. **R-Question-1**: The KLM module can mine a knowledge library to preserve the semantic information of streaming data, which can serve as a medium to maintain semantic consistency while reducing the computational burden of maintaining historical data. The KLT module can transfer semantic information extracted from new modalities to the benchmark knowledge library. This allows our SHE to process new media without retraining the entire historical media data, thereby reducing computational complexity while ensuring semantic consistency between streaming modal data. **R-Question-2**: We provide the explanation of why only considering the nearest prototype when mining the knowledge library, and perform a comparison with other methods. Please see **`R-Weakness-2’** for details. **R-Question-3**: To improve the efficiency of handling streaming-media data, MARS leverages the shared label parsing module to achieve semantic alignment without interaction across modalities. However, the label parsing module may retain modality-specific features that are less relevant to the new modality, and directly using it may lead to semantic bias. 
By contrast, our SHE proposes a KLT module to transfer the baseline knowledge library to the new modality, thereby obtaining more semantically consistent representations. To further demonstrate the advantages of SHE, we integrate the idea of using a knowledge library as a medium to maintain semantic consistency into MARS, and found it beneficial to do so. For detailed experiments and analysis, see **R-Question-2** in [Reviewer UdWL](https://openreview.net/forum?id=JqLKV0L5hM&noteId=G4Nh3s9sQe). **Reference** [1] Hu P, Zhen L, Peng D, et al. Scalable deep multimodal learning for cross-modal retrieval[C]//Proceedings of the 42nd international ACM SIGIR conference on research and development in information retrieval. 2019: 635-644. [2] Pu R, Qin Y, Peng D, et al. Deep Reversible Consistency Learning for Cross-modal Retrieval[J]. IEEE Transactions on Multimedia, 2025.
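The contrast drawn in R-Weakness-2 between $L_{klm}$ (nearest same-class prototype) and $L_{klm^*}$ (average over all $K$ prototypes) can be illustrated numerically. The Gaussian-style similarity below is a hypothetical stand-in for the paper's $s(x|k)$, used for illustration only:

```python
import math

def sim(x, p):
    """Hypothetical similarity: exp(-squared distance), so the nearest
    prototype is also the most similar one."""
    return math.exp(-sum((a - b) ** 2 for a, b in zip(x, p)))

def klm_nearest(x, prototypes):
    """Per-sample L_klm: -log similarity to the nearest prototype only."""
    return -math.log(max(sim(x, p) for p in prototypes))

def klm_all(x, prototypes):
    """Per-sample L_klm*: -log of the average similarity over all prototypes."""
    return -math.log(sum(sim(x, p) for p in prototypes) / len(prototypes))

# A sample sitting exactly on one prototype: the nearest-prototype loss is
# already zero, while averaging in a distant prototype inflates the loss,
# blurring the optimization target.
x = [0.0, 0.0]
protos = [[0.0, 0.0], [3.0, 3.0]]
print(klm_nearest(x, protos), klm_all(x, protos))
```

This toy case mirrors the rebuttal's argument: with the averaged objective, a well-placed sample is still pulled by far-away prototypes, whereas the nearest-prototype objective gives a single, stable target.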
Summary: This paper proposes a novel CMH paradigm, specifically designed for cross-modal retrieval in streaming-media scenarios. The proposed SHE framework comprises three key modules: the Knowledge Library Mining (KLM) module, the Knowledge Library Transfer (KLT) module, and the Discriminative Hash Learning (DHL) module. Specifically, KLM constructs a knowledge library for every modality by extracting essential prototype knowledge, thereby serving as a semantic consistency maintenance medium. KLT aims to adaptively align the knowledge library extracted from the newly arrived modality with the historical knowledge library, thereby capturing the semantic consistency of multiple modalities. DHL enhances the quality of hash codes by maximizing intra-class semantic relevance while amplifying inter-class semantic disparity. Extensive experimental evaluations demonstrate that the proposed method effectively handles cross-modal retrieval in streaming-media scenarios, offering a flexible and scalable solution for real-world applications. Claims And Evidence: SHE effectively handles cross-modal retrieval in streaming-media scenarios, which is supported by extensive experimental evaluations on four benchmark datasets. Besides, SHE can train each modality in parallel, which is supported by the reasonable module design. Although each modality is learned in parallel, semantic alignment can be achieved by using knowledge libraries as the bridge. Methods And Evaluation Criteria: The proposed SHE framework effectively addresses the challenges of cross-modal retrieval, particularly in streaming-media scenarios. The selected benchmark datasets are well-suited for evaluating the effectiveness of SHE, as two of them contain five modalities and are unpaired, accurately simulating the characteristics of streaming-media data. 
Additionally, the evaluation metrics employed in this study are widely recognized in the field of cross-modal retrieval and are highly appropriate for assessing the performance of SHE and the baseline methods. Theoretical Claims: SHE can handle unpaired multimodal data, which is well-grounded. SHE adopts a learning framework without modality interaction and utilizes class prototypes as an abstract knowledge library to preserve the semantic information of streaming-media data. This enables effective semantic representation learning even for unpaired multimodal data. Experimental Designs Or Analyses: The experimental designs are sound, with suitable benchmark datasets, appropriate evaluation metrics, well-defined experimental objectives, and reasonable experimental analysis. Extensive experiments and analyses are conducted to comprehensively assess the effectiveness of SHE, such as investigations into the impact of media learning sequences, similarity boundaries, and the scale of the knowledge library. Supplementary Material: I have reviewed the supplementary material, which provides the SHE training process, a more detailed dataset introduction, and more experimental results. These contents can further support the work of this paper. Relation To Broader Scientific Literature: This paper proposes a novel Cross-Modal Hashing (CMH) framework, termed Streaming-media Hashing rEtrieval (SHE), which is highly relevant to the domain of cross-modal retrieval. To be specific, this paper aims to analyze the limitations and shortcomings of existing research in streaming-media scenarios and propose corresponding solutions, thereby highlighting the advantages of SHE. Essential References Not Discussed: Further discussion on cross-modal hashing methods, particularly techniques addressing cross-modal retrieval in streaming-media scenarios, would be beneficial. Other Strengths And Weaknesses: Strengths: 1) Sound in originality. 
This paper proposes a novel CMH framework named SHE specifically designed for streaming-media data, which is a highly valuable application scenario in cross-modal retrieval. 2) Reasonable technical route. To enable parallel training of different modalities while maintaining semantic consistency, SHE employs the KLM module to construct a dedicated knowledge library for each modality, preserving semantic information from streaming-media. Subsequently, the KLT module aligns the newly extracted knowledge library with the baseline knowledge library, ensuring cross-modal semantic coherence. 3) Reasonable and comprehensive experimental arrangements. Extensive experiments are conducted on four datasets to verify the effectiveness and superiority of this method. Weaknesses: 1) This paper designates the knowledge library extracted from the first modality as the benchmark which may introduce a degree of arbitrariness. In other words, the study has not yet evaluated the quality of the knowledge libraries extracted from different modalities, which could impact the overall effectiveness of the proposed method. 2) Some sections could benefit from clearer writing and organization. 3) From Fig. 3, the media learning sequence has a significant impact on performance, but no possible solution is provided. Other Comments Or Suggestions: 1) Further discussion on cross-modal hashing methods, particularly techniques addressing cross-modal retrieval in streaming-media scenarios, would be beneficial. 2) Some sections could be written and structured more clearly. Questions For Authors: 1) This paper designates the knowledge library extracted from the first modality as the benchmark which may introduce a degree of arbitrariness. This paper would benefit from comparing and evaluating the quality of knowledge libraries. 2) Would integrating the proposed idea of using a knowledge library as a medium to maintain semantic consistency into existing methods be beneficial? 
3) Except the regularization approach applied to the knowledge library in this paper, could the authors provide a comparative analysis with other existing regularization techniques? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for your constructive review. Our responses are listed below. **R-Weakness-1**: To evaluate the quality of knowledge mined from different modalities, we design a score, which can be found at **R-Questions** in [Reviewer cJMB](https://openreview.net/forum?id=JqLKV0L5hM&noteId=i9vSJNoRoW). **R-Weakness-2**: To make readers understand the work more clearly, we reorganize the writing, such as the design of losses. Specifically, we explain Knowledge Library Regularization (KLR) loss and Knowledge Library Transfer (KLT) loss in detail, which can be found at **R-Weakness-3** in [Reviewer TSiu](https://openreview.net/forum?id=JqLKV0L5hM&noteId=KyOMFKYOSw). **R-Weakness-3**: It is a constructive suggestion. To solve this, we first design a score to evaluate the quality of knowledge mined from different modalities, which can be found in **R-Questions** in [Reviewer cJMB](https://openreview.net/forum?id=JqLKV0L5hM&noteId=i9vSJNoRoW). Then, due to the word limit, we provide a simple solution here. If we observe that the score of the current modality is too low, we can wait for subsequent modalities to arrive. Once a modality with sufficient quality is detected, it can be selected as the initial modality. According to your suggestion, we will explore how to alleviate negative effects caused by low-quality first modalities in the future. **R-Suggestion-1**: We investigate the latest studies and found that there are no other CMH works to handle cross-modal retrieval (CMR) in streaming-media scenarios (SMC). Here, we discuss some real-valued representation-based CMR methods, including SDML[1] and DRCL[2], which are specifically designed to accomplish CMR in SMC. 
2) They deploy either a randomly initialized common space or an identical transformation weight matrix to guide semantic alignment, which overlooks the discrepancies among modalities. Since the randomly initialized common space or learned transformation weight matrix may retain modality-specific features that are not very relevant to the new modality, directly applying it may lead to semantic discrepancies. By contrast, our SHE transfers the baseline knowledge library to the new modality, thereby obtaining more semantically consistent representations. **R-Suggestion-2**: We reorganize the writing so that readers can understand the work more clearly, such as the design of losses. The details can be found at **R-Weakness-3** in [Reviewer TSiu](https://openreview.net/forum?id=JqLKV0L5hM&noteId=KyOMFKYOSw). **R-Question-1**: We have designed a score to evaluate the quality of knowledge libraries. Please see **R-Questions** in [Reviewer cJMB](https://openreview.net/forum?id=JqLKV0L5hM&noteId=i9vSJNoRoW). **R-Question-2**: As you suggested, we conduct experiments on the XMedia and XMediaNet datasets and found that it is effective to incorporate the above idea into existing methods (i.e., MARS). Specifically, we replace the original label parsing module in MARS with the knowledge library to guide representation learning and add the Knowledge Library Transfer (KLT) loss to maintain semantic consistency. The new objective of MARS can be formulated as: $L=L_{ori}+L_{klt},$ where $L_{ori}$ is the original objective of MARS and $L_{klt}$ is the KLT loss. The results are shown as follows:

|Objective|XMedia|XMediaNet|
|:-:|:-:|:-:|
|$L_{ori}$|72.0|46.1|
|$L$|73.8|51.7|

The results show that the idea of using knowledge libraries as a medium to maintain semantic consistency can be effectively integrated into existing methods. **R-Question-3**: In fact, the KLR loss is not our main innovation, so we only provide a simple comparative analysis. 
As you suggested, we provide another regularization technique, i.e., orthogonal regularization (OR). Specifically, OR aims to impose the constraint that inter-class prototypes are mutually orthogonal, which can be formulated as:

$L_{or} = \frac{1}{CK} \sum_{c=1}^{C} \sum_{k=1}^{K}\sum_{c'=1,c'\ne c}^{C} \sum_{k'=1}^{K} \left | \Gamma (p^{c,k}_m,p^{c',k'}_m) \right |,$

where $\Gamma$ is the cosine similarity function. We replace the Knowledge Library Regularization (KLR) loss with the OR loss and conduct experiments on the XMedia and XMediaNet datasets with 128 bits. The results are shown as follows:

|Regularization|XMedia|XMediaNet|
|:-:|:-:|:-:|
|No|73.8|33.3|
|OR|73.9|40.0|
|KLR|75.0|51.0|

The results show that OR is effective but inferior to the regularization technique used in our paper. In the future, we will further explore better regularization techniques to contribute to our proposed SHE.

**Reference**

[1] Scalable deep multimodal learning for cross-modal retrieval, SIGIR, 2019: 635-644.

[2] Deep Reversible Consistency Learning for Cross-modal Retrieval, TMM, 2025.
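For concreteness, the orthogonal regularization (OR) loss compared in this rebuttal can be sketched in a few lines of NumPy. This is an illustrative reconstruction under assumed shapes (a prototype tensor `P` of shape C x K x d), not the authors' implementation, and it averages over inter-class pairs rather than using the paper's exact 1/(CK) normalization.

```python
import numpy as np

def or_loss(P):
    """OR sketch: mean absolute cosine similarity between prototypes of
    *different* classes. P has shape (C, K, d): C classes, K prototypes each."""
    C, K, d = P.shape
    flat = P.reshape(C * K, d)
    flat = flat / np.linalg.norm(flat, axis=1, keepdims=True)  # unit-norm rows
    sim = flat @ flat.T                                        # pairwise cosine similarities
    cls = np.repeat(np.arange(C), K)                           # class id of each row
    inter = cls[:, None] != cls[None, :]                       # mask: cross-class pairs only
    return np.abs(sim[inter]).mean()

rng = np.random.default_rng(0)
loss = or_loss(rng.normal(size=(4, 3, 16)))          # random prototypes: loss in (0, 1]
ortho = or_loss(np.eye(12).reshape(4, 3, 12))        # perfectly orthogonal prototypes: loss 0
```

Driving this quantity to zero pushes prototypes of different classes toward mutual orthogonality, which is the constraint the rebuttal describes.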
Summary: This work addresses a less studied but practically valuable problem in cross-modal retrieval, i.e., streaming-media hashing retrieval. The paper points out that the key challenge is preserving cross-modal interactions. Through knowledge library mining on existing modalities and knowledge transfer only to subsequently arriving modalities, the proposed training framework requires no re-training on the historical data but successfully establishes cross-modal knowledge alignment, hence reducing training complexity. Experiments on various datasets have validated the effectiveness of the proposed method.

## update after rebuttal

The rebuttal from the authors solved my concerns regarding clarity. Therefore, I keep my score as 4 towards acceptance. Claims And Evidence: Yes, claims regarding the effectiveness of each part of the paper, including the three modules and all the loss objectives, are well supported by clear experimental evidence, including benchmarks with various numbers of modalities and a comprehensive ablation study on all evaluated datasets. Methods And Evaluation Criteria: The proposed method is established with solid analysis of the characteristics of streaming-media data, while the used metrics (mAP, P-R curves) are widely used evaluation protocols in cross-modal hashing that can also effectively evaluate the studied problem. The selected datasets are widely used for cross-modal hashing with multiple modalities, hence suitable for the evaluation of the proposed method. Theoretical Claims: The paper does not contain proofs/theoretical claims. Experimental Designs Or Analyses: The paper's method is verified comprehensively and analyzed thoroughly to establish its validity. Supplementary Material: Yes. The supplementary material provides a detailed description of the datasets/algorithm, and more experimental results for different parameter choices. Meanwhile, the impact and limitations of the work are also presented in the supplementary material.
Relation To Broader Scientific Literature: The work can be applied to cross-modal hashing with increasing data. Particularly, this paper contributes to the problem of increasing modalities and can also potentially improve the similarity learning of current multi-modal hashing methods. Essential References Not Discussed: Key references are well-discussed. Other Strengths And Weaknesses: Strengths: 1. The paper addresses the novel and critical problem of streaming-media hashing retrieval with clear analysis of its potential challenges. 2. The method is clearly motivated and designed based on the characteristics of streaming-media data. 3. Experiments compare with state-of-the-art methods and display significant advantages for retrieval with increasing modalities. Ablation study and other analyses further emphasize the validity and effectiveness of the method. Results with different media sequence verifies the general effectiveness with different initial modalities. Weaknesses: 1. The authors may include descriptions of the training procedure for the compared methods. Other Comments Or Suggestions: 1. Typos: a) Line16: “in practice applications”. b) Line 39 (the right column) “such as such as”. 2. It's better to add some explanations about the Eq. 8 loss objective and the similarity boundary. Questions For Authors: The model’s performance can vary upon the choice of the initial modality, which is sometimes infeasible to be simply replaced by another in real applications. What kind of improvements are significant to alleviate such effects caused by low-quality first modalities? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We appreciate your valuable feedback. Our responses are listed below. **R-Weaknesses**: To make the experiment more rigorous, a detailed description of the training procedure for the compared methods is provided. Specifically, there are two cases in the training. 1) For the Wikipedia and NUS-WIDE datasets, all baselines directly conduct training without any preprocessing or modification, as the data in these datasets consists of image-text pairs. 2) For the XMedia and XMediaNet datasets, all baselines except for MARS adopt label-based repeat sampling to construct pseudo-instance pairs, and then conduct training on every two modalities, as the data in these datasets consists of independent instances (unpaired and inconsistent in number). MARS directly conducts training without any preprocessing or modification, as it can process each modality independently. **R-Suggestion-1**: We carefully check our paper and ensure that similar mistakes are not made. **R-Suggestion-2**: To enhance the understanding of the loss of Eq. 8 (i.e., Knowledge Library Transfer loss, KLT), we add some explanations to further elaborate on it. The KLT loss is formulated as: $L_{klt} = -\frac{1}{CK} \sum_{c=1}^{C} \sum_{k=1}^{K} \log \left [ min(1,max(0,\Gamma(p_1^{c,k},p_m^{c,k}) - \sigma +1)) \right ].$ KLT aims to transfer the semantic information in the newly mined knowledge to the benchmark knowledge library, thereby achieving cross-modal semantic consistency. As is shown in the above equation, we encourage the prototypes mined from the newly coming modality to align with the prototypes with the same semantics in the benchmark knowledge library by maximizing their similarity. Notably, we focus on maintaining a certain level of semantic similarity between them by setting a similarity boundary rather than forcing them to be identical. 
That is, the above learning objective is $\Gamma(p_1^{c,k},p_m^{c,k}) \ge \sigma$, which relaxes the alignment requirement and can further improve the representation ability of multimodal data.

**R-Questions**: It is a constructive suggestion. To alleviate negative effects caused by low-quality first modalities, we first design a score to evaluate the quality of knowledge mined from different modalities. For the $m$-th modality, the score could be formulated as follows:

$S_{m}=\frac{1}{2n_m} \sum_{i=1}^{n_m} \max_{k=1, \ldots, K} s\left(x_m^i | k\right) +\frac{1}{2CK} \sum_{c=1}^{C} \sum_{k=1}^{K} \frac{ \sum_{k'=1,k' \ne k }^{K}e^{\Gamma(p^{c,k}_m,p^{c, k'}_m)}}{\sum_{c'=1}^{C}\sum_{k'=1}^{K}e^{\Gamma(p^{c,k}_m,p^{c',k'}_m)}-e},$

where $s(x_m^i | k)=\frac{\sum_{c=1}^{C} y_m^{i,c}\cdot \Gamma(b_m^i,p^{c,k}_m)+1}{2}$ and $\Gamma(\cdot,\cdot)$ is the cosine similarity function. In the above equation, the first term reflects the consistency between the mined knowledge and sample representations, while the second term reflects the distinction among inter-class knowledge. The higher the score is, the more the mined knowledge reflects the semantic information in the samples and the more discriminable it is; that is, the higher the quality of the modality. To verify the effectiveness of the proposed score, we conduct experiments on the XMedia dataset with 128 bits. The results are shown as follows:

|Indicator|Image|Text|Audio|3D|Video|
|:-:|:-:|:-:|:-:|:-:|:-:|
|Score|0.763|0.763|0.560|0.747|0.762|
|mAP|75.0|74.1|24.9|66.3|73.6|

From the results, it can be observed that the proposed score can accurately reflect the quality of the modalities. Due to the word limit, a simple strategy is provided here. In real applications, if we observe that the score of the current modality is too low, we can wait for subsequent modalities to arrive.
Once a modality with sufficient quality is detected, it can be selected as the initial modality. According to your suggestion, we will explore how to alleviate negative effects caused by low-quality first modalities in the future.
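The modality-quality score described in this rebuttal can be sketched as follows. This is an illustrative NumPy reconstruction under assumed inputs (`B` for sample codes, `Y` for one-hot labels, `P` for the C x K prototype library, all hypothetical placeholders), not the authors' code; the first term measures sample-prototype consistency and the second term measures intra- vs. inter-class prototype distinction.

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def modality_score(B, Y, P):
    """Quality-score sketch. B: n x d sample codes, Y: n x C one-hot labels,
    P: C x K x d prototypes mined from one modality. Returns a value in [0, 1]."""
    n, C, K = B.shape[0], P.shape[0], P.shape[1]
    # Term 1: for each sample, best agreement with a prototype of its own class
    t1 = 0.0
    for i in range(n):
        t1 += max((sum(Y[i, c] * cos(B[i], P[c, k]) for c in range(C)) + 1) / 2
                  for k in range(K))
    t1 /= n
    # Term 2: intra-class similarity mass relative to all prototype pairs
    t2 = 0.0
    for c in range(C):
        for k in range(K):
            intra = sum(np.exp(cos(P[c, k], P[c, kk])) for kk in range(K) if kk != k)
            total = sum(np.exp(cos(P[c, k], P[cc, kk]))
                        for cc in range(C) for kk in range(K)) - np.e  # drop self term
            t2 += intra / total
    t2 /= C * K
    return 0.5 * t1 + 0.5 * t2

rng = np.random.default_rng(0)
score = modality_score(rng.normal(size=(6, 8)),
                       np.eye(3)[np.arange(6) % 3],
                       rng.normal(size=(3, 2, 8)))
```

A low score would then trigger the fallback strategy above: wait for a later, higher-quality modality before fixing the initial knowledge library.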
Summary: While most previous research in CMH has assumed that all modalities are prepared before processing, the authors propose a novel CMH paradigm named Streaming-media Hashing rEtrieval (SHE) that enables parallel training of each modality for streaming-media data, where the data is collected chronologically. The paradigm includes a knowledge library mining module and a knowledge library transfer module to jointly extract an implicit knowledge library from the new incoming data and align the commonality distribution of the new knowledge with that of the historical knowledge library, and also a discriminative hashing learning module to enhance intra-class semantic relevance and inter-class semantic disparity. This paradigm is tested over 4 common benchmark datasets for streaming-media retrieval, and the results show its superiority over 14 state-of-the-art methods. Claims And Evidence: Yes, the claims in the submission are well supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense for the problem or application at hand. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes, the experimental designs and analyses are valid. Supplementary Material: In the appendix, the authors provide the pseudo code of SHE, detailed information about the 4 benchmarks, sensitivity analysis of two hyperparameters, the impact of different bit lengths, the impact of media learning sequence on XMedia and XMediaNet, and limitations. Relation To Broader Scientific Literature: The proposed SHE aims to solve the problem of training streaming-media data, which isn’t prepared before processing. SHE introduces an evolving knowledge library to avoid training redundancy and also improve training efficiency. Essential References Not Discussed: It is suggested to list the references about the previous CMH methods in the introduction.
Other Strengths And Weaknesses: Strengths

- The paper is well-constructed and easy to read.
- The experiments are conducted over 4 datasets and against 14 state-of-the-art methods.

Weaknesses

- In section 3.2, the authors use the notation KLR loss before explaining what it is. It’s better to address it with its full name and notify the readers this will be explained later.
- In the introduction, the authors describe many previous CMH methods with all sorts of shortcomings but fail to list the actual methods. Also, the authors only point out emergency medical aid as a practical application scenario, which seems insufficient.
- The loss function formulation in this paper lacks detailed explanation. It’s better to explain the loss function design more specifically.
- The authors mention that the proposed method enables parallel training of each modality, making training more efficient and requiring less storage. But this is not shown in the experiments.

Other Comments Or Suggestions: Typos

- Line 39 right: two ‘such as’s
- Line 99 right: ‘knowledge library mining module (KLM) module’ module is included in the abbreviation
- Line 205 right: ‘where $p_m^c$ denotes K the normalized and binary prototype for the c-th class’ unnecessary ‘K’

Questions For Authors: Please refer to the weakness part. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Our responses are listed below. **R-Weakness-1**: We have revised this issue, providing the relevant explanation when KLR first appears. **R-Weakness-2**: In the introduction section, we list CMH methods (Such as UCCH, DHaPH, DCH-SCR[1], SCH, and CICH[2]) to illustrate the shortcomings of existing CMH research. Here, we briefly discuss some recent CMH works below: 1) UCCH aims to implicitly capture the semantic relevance between modalities by mining the co-occurrence relationship between multimodal data (such as image-text pairs) and their initial distribution characteristics. 2) DCH-SCR[1] deploys a ranking alignment mechanism to capture the close semantic relationship between modalities, which preserves the semantic similarity between tags and feature levels. 3) CICH[2] designs a prototypical semantic similarity coordination module to globally rebuild partially-observed cross-modal similarities under an asymmetric learning scheme. Notably, these CMH methods implicitly assume that all modalities are prepared before processing and adopt joint learning to achieve cross-modal semantic alignment. In practical application scenarios, it is challenging to collect data of all modalities simultaneously, such as emergency medical aid, multi-modal search engines, and financial data analysis. More commonly, data from all modalities is collected asynchronously. To improve the practical applicability of CMH, this paper proposes a novel method termed SHE to enhance the flexibility of processing asynchronously collected multimodal data. **R-Weakness-3**: To enhance the readability and understanding of our SHE method, we explain some loss functions in detail. 
1) For the Knowledge Library Regularization (KLR) loss, which is formulated as:

$L_{klr} = -\frac{1}{CK} \sum_{c=1}^{C} \sum_{k=1}^{K} \log \frac{ \sum_{k'=1,k' \ne k }^{K}e^{\Gamma(p^{c,k}_m,p^{c, k'}_m)}}{\sum_{c'=1}^{C}\sum_{k'=1}^{K}e^{\Gamma(p^{c,k}_m,p^{c',k'}_m)}-e}.$

KLR aims to avoid a trivial solution where all prototypes converge to a single point. As the above equation shows, we first treat intra-class prototypes as positive pairs and inter-class prototypes as negative pairs. Then, we maximize the similarity among positive pairs and minimize the similarity among negative pairs, thereby enhancing the distinctiveness between inter-class prototypes while ensuring semantic consistency between intra-class prototypes.

2) For the Knowledge Library Transfer (KLT) loss, which is formulated as:

$L_{klt} = -\frac{1}{CK} \sum_{c=1}^{C} \sum_{k=1}^{K} \log \left [ \min(1,\max(0,\Gamma(p_1^{c,k},p_m^{c,k}) - \sigma +1)) \right ].$

KLT aims to transfer the semantic information in the newly mined knowledge to the benchmark knowledge library, thereby achieving cross-modal semantic consistency. As shown in the above equation, we encourage the prototypes mined from the newly coming modality to align with the prototypes with the same semantics in the benchmark knowledge library by maximizing their similarity. Notably, we focus on maintaining a certain level of semantic similarity between them by setting a similarity boundary rather than forcing them to be identical. That is, the above learning objective is $\Gamma(p_1^{c,k},p_m^{c,k}) \ge \sigma$, which relaxes the alignment requirement and can further improve the representation ability of multimodal data.

**R-Weakness-4**: Parallel training means that the proposed SHE can independently process each modality. When facing datasets with five modalities, SHE can directly perform training without any preprocessing, resulting in $5$ training processes and a storage overhead of $5$ sub-networks.
By contrast, all baselines except MARS are limited to joint training on paired data from two modalities. When these methods face datasets with five modalities, we first construct pseudo-instance pairs through label-based repeat sampling and then conduct training on every two modalities. As a result, they require $5\times 4/2 = 10$ training processes and incur a storage overhead of $5 \times 4 = 20$ sub-networks. Obviously, parallel training can enhance training efficiency and reduce the required storage space. Besides, to further show the advantages of parallel training, we compare the training time (TT) and storage overhead (SO) of SHE and SCH on the XMedia dataset as follows:

|Method|TT|SO|
|:-:|:-:|:-:|
|SCH|2148 s|2315.72 MB|
|SHE|240 s|910.66 MB|

The results show that our SHE can enhance training efficiency and reduce the required storage space by parallel training.

**R-Suggestions**: We carefully check our paper and ensure that similar mistakes are not made.

**Reference**

[1] Liu X, Zeng H, Shi Y, et al. Deep cross-modal hashing based on semantic consistent ranking[J]. IEEE Transactions on Multimedia, 2023, 25: 9530-9542.

[2] Luo H, Zhang Z, Nie L. Contrastive incomplete cross-modal hashing[J]. IEEE Transactions on Knowledge and Data Engineering, 2024, 36: 5823-5834.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed response. I have also read the comments from other reviewers. Most of my concerns have been adequately addressed. As a result, I would like to keep my score as '3', leading to acceptance.
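The KLR and KLT losses explained in this rebuttal can be sketched in NumPy. This is a minimal illustrative reconstruction under assumed shapes (C x K x d prototype tensors), not the authors' implementation; a small epsilon is added inside the log to guard against log(0).

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def klr_loss(P):
    """KLR sketch: contrast intra-class prototype pairs (numerator) against
    all prototype pairs (denominator, self term e removed). P: C x K x d."""
    C, K, _ = P.shape
    loss = 0.0
    for c in range(C):
        for k in range(K):
            intra = sum(np.exp(cosine(P[c, k], P[c, kk])) for kk in range(K) if kk != k)
            denom = sum(np.exp(cosine(P[c, k], P[cc, kk]))
                        for cc in range(C) for kk in range(K)) - np.e
            loss -= np.log(intra / denom)
    return loss / (C * K)

def klt_loss(P1, Pm, sigma=0.8):
    """KLT sketch: each prototype of the new modality m should reach cosine
    similarity >= sigma with its counterpart in the benchmark library P1."""
    C, K, _ = P1.shape
    loss = 0.0
    for c in range(C):
        for k in range(K):
            s = min(1.0, max(0.0, cosine(P1[c, k], Pm[c, k]) - sigma + 1))
            loss -= np.log(s + 1e-12)  # epsilon guards log(0)
    return loss / (C * K)

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 2, 16))
klr = klr_loss(P)          # positive for any non-degenerate prototype set
klt_zero = klt_loss(P, P)  # identical libraries already satisfy the boundary
```

The clipping in `klt_loss` implements the similarity boundary: once a pair's cosine similarity reaches `sigma`, its loss contribution is zero, so alignment is encouraged but identity is not forced.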
SKIM: Any-bit Quantization Pushing The Limits of Post-Training Quantization
Accept (poster)
Summary: This paper introduces a mixed-precision weight-only quantization method for large language models (LLMs). The authors propose a greedy algorithm to allocate bitwidths across weight channels, followed by K-means clustering based on the assigned bitwidths. To enhance performance, the authors further incorporate a trainable scaling vector to refine the non-differentiable clustering process. Experiments on LLaMA models demonstrate the effectiveness of the proposed approach. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Implementation details and additional experiments. Relation To Broader Scientific Literature: The paper proposes a mixed-precision quantization method for LLMs. However, it lacks a thorough discussion and experimental comparisons with existing methods, which would help contextualize the contributions within the broader literature. Essential References Not Discussed: Please see my detailed comments in the following. Other Strengths And Weaknesses: Strengths: The proposed method is simple yet intuitive. The authors introduce a preliminary unified perspective on layer-wise and sensitivity-based quantization, offering valuable insights that enhance the reader’s understanding. Weaknesses: The paper lacks a thorough discussion and experimental comparisons with several existing weight quantization methods, which would help situate the proposed approach within the broader context of related work. Please see my detailed comments. Other Comments Or Suggestions: Please see my detailed comments. Questions For Authors: 1. In Section 4.1, the authors state that the effectiveness of different objectives ranks from best to worst as S-full, L-full, S-diag, and L-diag. Could the authors clarify whether S-diag performs better than L-full? Additionally, it would be helpful to explain how this ranking is determined. 2.
In Section 4.2, the authors propose assigning different bitwidths to different weight channels, which effectively reduces quantization errors. However, during inference, the quantized weights must be dequantized to FP16 for computation. Could the authors clarify whether dequantization in mixed-precision settings is slower than in uniform-precision settings? If so, to what extent? 3. The proposed method uses mixed-precision quantization to improve the accuracy of the quantized models. Several works [1][2] have also explored mixed-precision quantization. It would be beneficial for the authors to include a more detailed discussion of these related approaches, highlighting the differences. 4. The experimental comparisons are insufficient, as several state-of-the-art weight-only quantization methods are not included, such as [3][4][5], among others. 5. The settings of the compared quantization methods are not clearly specified. For example, do the results reported for OmniQuant correspond to group-wise or channel-wise quantization? 6. The experimental results on LLaMA, LLaMA2, and OPT are outdated. It would be beneficial for the authors to include additional results on LLaMA3 [6]. 7. The experimental results based solely on MMLU are insufficient. The authors should include additional commonly used datasets, such as LAMBADA [7], ARCEasy (ArcE) [8], and PIQA [9], which are frequently used in the literature to evaluate the performance of quantized models. Reference: [1] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models. arXiv 2024. [2] CLAQ: Pushing the Limits of Low-Bit Post-Training Quantization for LLMs. arXiv 2024. [3] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. MLSys 2024. [4] QuIP: 2-Bit Quantization of Large Language Models With Guarantees. NeurIPS 2023. [5] QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks. ICML 2024. [6] The Llama 3 Herd of Models. arXiv 2024. 
[7] The LAMBADA dataset: Word prediction requiring a broad discourse context. ACL 2016. [8] A systematic classification of knowledge, reasoning, and context within the ARC dataset. ACL 2018. [9] PIQA: Reasoning about Physical Commonsense in Natural Language. AAAI 2020. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: 1. Clarification on Objective Rankings: The ranking of objectives (S-full > L-full > S-diag > L-diag) is elaborated in Section 4.1, where we demonstrate that L-full outperforms S-diag. This is because S-diag simplifies computation via a diagonal assumption, introducing a certain degree of bias compared to the full form. Practically, Section 5.4.1 demonstrates that L-full outperforms S-diag in perplexity, validating our analysis. 2. Dequantization Efficiency in Mixed-Precision Settings: In mixed-precision implementations, we group channels with identical bitwidths and track their original indices to leverage GPU parallelism. Although dequantization introduces minor overhead compared to uniform-precision methods, this overhead is smaller than the computational cost of sparse tensor operations (e.g., in SqueezeLLM) and is justified by significant accuracy improvements. Our method performs similarly to SqueezeLLM latency-wise and faster than it throughput-wise. 3. Differentiation from Prior Mixed-Precision Work: Existing methods like SqueezeLLM, SliM-LLM, and CLAQ mix only a limited set of predefined precisions. For instance, SqueezeLLM combines INT3/4 with FP16, while SliM-LLM and CLAQ mix INT X with INT X+k and also include INT X-k for compensation. In contrast, our framework adaptively assigns arbitrary bit widths (e.g., mixing 1~4bit) across channels without relying on fixed thresholds or compensation rules. This flexibility broadens the scope of discussion and enables finer-grained error minimization. 4. Expanded Baseline Comparisons: We have added comparisons with QuIP#, a state-of-the-art baseline, which outperforms AWQ and QuIP. 
We choose the pure-PTQ version of QuIP# (QuIP# with fine-tuning consumes greater computational resources; for example, it requires 50 GPU hours to quantize LLaMA-70B on an 8 GPU node), and the results are included in the following table:

| LLaMA2-7B | 4-bit | | | 3.x-bit | | | 3-bit | | | 2.x-bit | | |
| ---------- | :----: | -------- | -------- | :----: | -------- | -------- | :----: | -------- | -------- | :----: | --------- | --------- |
| | Bit | Wiki | C4 | Bit | Wiki | C4 | Bit | Wiki | C4 | Bit | Wiki | C4 |
| FP16 | - | 5.47 | 6.97 | - | 5.47 | 6.97 | - | 5.47 | 6.97 | - | 5.47 | 6.97 |
| SqueezeLLM | 4 | 5.62 | 7.12 | 3.24 | 5.96 | 7.51 | 3 | 6.18 | 7.72 | 2.23 | - | - |
| OmniQuant | 4 | 5.74 | 7.35 | 3.25 | 6.03 | 7.75 | 3 | 6.58 | 8.65 | 2.25 | 11.06 | 15.02 |
| QuIP# | 4 | 5.66 | 7.17 | - | - | - | 3 | 6.19 | 7.85 | 2 | 12.30 | 14.80 |
| **SKIM** | 4 | **5.60** | **7.11** | 3.2 | **5.91** | **7.48** | 3 | **6.09** | **7.66** | 2.25 | **10.10** | **12.42** |

Note that QuIP# officially supports only integer-bit quantization (e.g., INT2); thus, we compare its INT2 results with our 2.x-bit performance.

5. OmniQuant Implementation Details: Under non-integer bitwidths (e.g., 3.x bits), OmniQuant is evaluated under its group-wise quantization setting (e.g., 128 elements per group), as specified in their official implementation. For integer bits (e.g., 3/4-bit), per-channel quantization is used, same as our method. Previous works commonly adopt this setting.

6. LLaMA3 Benchmarking: We focus on LLaMA-1/2 and OPT models to align with established baselines (SqueezeLLM, OmniQuant, and baselines you mentioned), as LLaMA3 results are not yet widely reported in quantization literature. This ensures apples-to-apples comparisons. We will be happy to include LLaMA3 in future work once baselines adopt it.

7. Expanded Evaluation Metrics: Following suggestions, we have added results on ARC-E, ARC-C, and PIQA.
Below are the detailed results; our method maintains its consistent superiority.

| Method | Precision | PIQA | ARC-C | ARC-E | MMLU | Avg |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| **LLaMA2-7B** | FP16 | 77.0% | 42.1% | 64.5% | 45.9% | 57.4% |
| **INT3 Quantization** | | | | | | |
| SKIM | INT3 | **76.1%** | **40.6%** | **61.3%** | **42.3%** | **55.1%** |
| SqueezeLLM | INT3 | 75.6% | 38.8% | 61.1% | 41.3% | 54.2% |
| **INT4 Quantization** | | | | | | |
| SKIM | INT4 | 76.9% | **42.5%** | **64.6%** | **45.4%** | **57.3%** |
| SqueezeLLM | INT4 | **77.0%** | 41.8% | 63.8% | 45.1% | 56.9% |

---

Rebuttal Comment 1.1: Comment: The authors have adequately addressed my concerns, so I have decided to raise my score.
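The channel-grouping trick for mixed-precision dequantization mentioned in point 2 (group channels sharing a bitwidth, dequantize each group in one batched lookup, then scatter back by original index) can be sketched as follows. This is a NumPy illustration with hypothetical shapes and codebooks, not the authors' CUDA kernels.

```python
import numpy as np

def dequantize_mixed(indices, codebooks, bits):
    """Dequantize channels with heterogeneous bitwidths.
    indices:   per-channel arrays of codebook ids (length n_col each)
    codebooks: per-channel centroid tables (2**b entries for a b-bit channel)
    bits:      bitwidth assigned to each channel (np.array)"""
    n_ch, n_col = len(indices), len(indices[0])
    out = np.empty((n_ch, n_col), dtype=np.float32)
    for b in np.unique(bits):                  # one batched pass per bitwidth
        rows = np.where(bits == b)[0]          # original channel indices
        idx = np.stack([indices[r] for r in rows])        # (|rows|, n_col)
        cbs = np.stack([codebooks[r] for r in rows])      # (|rows|, 2**b)
        out[rows] = np.take_along_axis(cbs, idx, axis=1)  # parallel lookup, scatter back
    return out

# Toy example: 3 channels at 2, 3, and 2 bits (hypothetical values).
bits = np.array([2, 3, 2])
codebooks = [np.arange(4, dtype=np.float32),
             np.arange(8, dtype=np.float32) * 0.1,
             -np.arange(4, dtype=np.float32)]
indices = [np.array([0, 3, 1]), np.array([7, 0, 2]), np.array([2, 2, 0])]
W = dequantize_mixed(indices, codebooks, bits)
```

Because each pass handles all channels of one bitwidth at once, the extra cost over a uniform-precision kernel is only the grouping and scatter, which matches the rebuttal's claim of minor overhead.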
Summary: The paper introduces SKIM (Scaled K-means clustering wIth Mixed precision), a novel post-training quantization method for Large Language Models (LLMs) that supports any-bit quantization. The key contributions include a greedy algorithm for optimal bit allocation across weight channels, addressing the significant disparity in data distribution across channels, and a trainable scaling vector for non-differentiable K-means clustering, which regularizes column-wise differences and complements the mixed-precision method. Claims And Evidence: The mixed-precision technique with greedy bit allocation improves quantization performance by adaptively allocating bits per channel. This is demonstrated through experiments showing reduced quantization errors and improved perplexity across different bit levels. The trainable scaling vector effectively regularizes data across columns and improves quantization accuracy. Experimental results confirm that incorporating the scaling vector leads to consistent drops in perplexity. Methods And Evaluation Criteria: The proposed methods are appropriate for the problem of post-training quantization of LLMs. The greedy algorithm for bit allocation addresses the observed disparity in quantization errors across channels, making sense for resource optimization. And the trainable scaling vector provides a solution for regularizing non-differentiable K-means clustering, which is a common challenge in quantization methods. Theoretical Claims: The authors provide a detailed analysis of layer-wise and sensitivity-based quantization objectives, showing how they can be transformed into unified structures. The derivation of the mixed-precision problem as a bit-constrained sum minimization is logically sound, though a more in-depth examination of the dynamic programming algorithm's limitations would strengthen this section. 
Experimental Designs Or Analyses: The authors evaluate on multiple LLMs (LLaMA, LLaMA2, OPT) to demonstrate generalizability, and they compare against relevant baselines (SqueezeLLM, OmniQuant) under similar conditions. Supplementary Material: The experimental results in the appendix are relatively complete, including full algorithm details, memory usage analysis with varying bit levels etc. Relation To Broader Scientific Literature: Builds on previous work in non-uniform quantization methods, particularly K-means clustering approaches. Extends the research on post-training quantization for LLMs, addressing limitations of existing methods that often experience significant performance drops at lower precision levels. Contributes to the growing body of work on efficient inference methods for LLMs, focusing on memory and computational efficiency. Essential References Not Discussed: The paper could benefit from discussing more recent works on adaptive quantization and hardware-aware optimizations for LLM inference. Other Strengths And Weaknesses: The article is clearly structured and written. The methods are well presented. If possible, more illustrations or ablation studies would be helpful. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: 1. More Information about the Greedy Algorithm: Through empirical validation, we have demonstrated the effectiveness and efficiency of the greedy algorithm: - Time Efficiency: The cpp/jit implementation of the dynamic programming algorithm remains slow. For a single block in LLaMA-7B, it requires several minutes of computation. Given that LLaMA-7B has 32 layers with 7 blocks per layer, this algorithm struggles to scale to large models. In contrast, the greedy algorithm, even when implemented in Python, only requires approximately 3 seconds per block, ensuring computational efficiency. - Practical Effectiveness: To guarantee a near-optimal solution, we employ two different initialization points for the greedy algorithm and select the superior solution. Empirical validation on the first layer of LLaMA-7B shows that the average error between the suboptimal solution from the greedy algorithm and the true optimal solution is below 2%, confirming its capability to deliver sufficiently high-quality results. 2. Additional experiments: We further validated our method by comparing it with more recent works (e.g., adding QuIP# as a new baseline) and extended benchmarking on tasks like PiQA and ARC. 
These results further validate the effectiveness of our method, and detailed results are shown below:

| Method | Precision | PIQA | ARC-C | ARC-E | MMLU | Avg |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| **LLaMA2-7B** | FP16 | 77.0% | 42.1% | 64.5% | 45.9% | 57.4% |
| **INT3 Quantization** | | | | | | |
| SKIM | INT3 | **76.1%** | **40.6%** | **61.3%** | **42.3%** | **55.1%** |
| SqueezeLLM | INT3 | 75.6% | 38.8% | 61.1% | 41.3% | 54.2% |
| **INT4 Quantization** | | | | | | |
| SKIM | INT4 | 76.9% | **42.5%** | **64.6%** | **45.4%** | **57.3%** |
| SqueezeLLM | INT4 | **77.0%** | 41.8% | 63.8% | 45.1% | 56.9% |

| LLaMA2-7B | 4-bit | | | 3.x-bit | | | 3-bit | | | 2.x-bit | | |
| ---------- | :-----: | -------- | -------- | :-------: | -------- | -------- | :----: | -------- | -------- | :-------: | --------- | --------- |
| | Bit | Wiki | C4 | Bit | Wiki | C4 | Bit | Wiki | C4 | Bit | Wiki | C4 |
| FP16 | - | 5.47 | 6.97 | - | 5.47 | 6.97 | - | 5.47 | 6.97 | - | 5.47 | 6.97 |
| SqueezeLLM | 4 | 5.62 | 7.12 | 3.24 | 5.96 | 7.51 | 3 | 6.18 | 7.72 | - | - | - |
| OmniQuant | 4 | 5.74 | 7.35 | 3.25 | 6.03 | 7.75 | 3 | 6.58 | 8.65 | 2.25 | 11.06 | 15.02 |
| QuIP# | 4 | 5.66 | 7.17 | - | - | - | 3 | 6.19 | 7.85 | 2 | 12.30 | 14.80 |
| **SKIM** | 4 | **5.60** | **7.11** | 3.2 | **5.91** | **7.48** | 3 | **6.09** | **7.66** | 2.25 | **10.10** | **12.42** |
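The greedy bit-allocation procedure discussed in point 1 can be sketched as follows. This is an illustrative reconstruction under assumed inputs (a per-channel error table `err[c][b]`, hypothetical values), not the released implementation; with a max-heap the greedy step is cheap, and the result matches the optimum whenever the per-bit error reductions are diminishing.

```python
import heapq

def allocate_bits(err, total_bits, b_min=1, b_max=4):
    """Greedy bit allocation sketch. err[c][b] is the quantization error of
    channel c at b bits; spend the bit budget where it reduces error most."""
    n = len(err)
    bits = [b_min] * n
    budget = total_bits - b_min * n
    assert budget >= 0
    # max-heap (negated gains) of the error reduction from granting one more bit
    heap = [(-(err[c][b_min] - err[c][b_min + 1]), c) for c in range(n)]
    heapq.heapify(heap)
    while budget > 0 and heap:
        _, c = heapq.heappop(heap)
        if bits[c] >= b_max:
            continue
        bits[c] += 1
        budget -= 1
        if bits[c] < b_max:  # re-insert with the channel's next marginal gain
            heapq.heappush(heap, (-(err[c][bits[c]] - err[c][bits[c] + 1]), c))
    return bits

# Toy example: 3 channels, budget of 8 bits total (hypothetical error table).
err = [{1: 10.0, 2: 4.0, 3: 1.5, 4: 0.5},
       {1: 3.0, 2: 1.8, 3: 1.2, 4: 1.0},
       {1: 8.0, 2: 3.0, 3: 1.0, 4: 0.3}]
bits = allocate_bits(err, total_bits=8)
```

Because every channel can land on any bitwidth in `[b_min, b_max]`, the average precision can take non-integer values such as 3.2 bits, which is the "any-bit" behavior the rebuttal describes.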
Summary: The work describes a post-training quantization technique that allows for non-integer size quantization of model parameters.

## update after rebuttal

The rebuttal process helped to address my concerns and I thank the authors for their supporting answer(s). As a result I increased my score to 3, in hope the discussed additions will be added to the final submission. Claims And Evidence: The claim that this beats baselines is wrong, as it only beats the presented baselines. A recently presented technique (NeurIPS 2024) named ShiftAddLLM has been shown to give better perplexity / quantization ratios for 2 and 3 bits (for reference: Table 3 lists results using LLaMA models). The work of ShiftAddLLM may not officially be listed as a PTQ technique, although it clearly is. But I view the missing reference and the exclusion of a better-performing approach published earlier as a major defect of a submission. Methods And Evaluation Criteria: An important, better baseline was omitted from the comparison. Theoretical Claims: The theoretical claims were all sound to me. Experimental Designs Or Analyses: The improvements to presented baselines for >3 bits are minor. Supplementary Material: None submitted. Relation To Broader Scientific Literature: Non-integer size quantization of large sets of integers/floats may help other research areas apart from ML as well. I didn't check, but the idea may have been presented elsewhere already. Essential References Not Discussed: ShiftAddLLM: https://arxiv.org/abs/2406.05981 with code at https://github.com/GATECH-EIC/ShiftAddLLM Other Strengths And Weaknesses: None Other Comments Or Suggestions: None Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: 1. Comparison with ShiftAddLLM: Thank you for the suggestion. We have now included a comparison with ShiftAddLLM. Overall, our method is competitive with ShiftAddLLM. In fact, ShiftAddLLM uses a smaller group size (128) rather than channel-wise quantization when quantizing LLaMA, so its precision should be classified as non-integer level. ShiftAddLLM does not provide detailed explanations about the precision of the scaling factors or the additional overhead, so we compare against its performance under the common practice of group size 128. We report the perplexity of quantized LLaMA-7B on WikiText-2 in the table below: our method performs better at 3-bit, while being comparable at 2-bit. Moreover, by slightly increasing the bit level using the any-bit feature, significant improvements can be achieved. Additionally, beyond perplexity (PPL), our method achieves state-of-the-art performance on several widely adopted benchmarks (see the table under answer 2), with some benchmarks even approaching FP16 performance. These empirical results demonstrate that our method achieves comparable or superior performance to ShiftAddLLM. Furthermore, we want to emphasize the superior quantization efficiency of our method. While ShiftAddLLM requires approximately 20 GPU-hours to quantize LLaMA2-7B, our approach completes the same process in under one hour. This dramatic difference in computational overhead becomes even more significant when scaling to larger models, making our method more practical for real-world applications.

| Method | Bit | Group Size | PPL (WikiText-2) |
|----------------|-----------|-------------|------------------|
| FP16 (LLaMA) | 16 | - | 5.68 |
| ShiftAddLLM | 3.0 | 128 | 6.04 |
| SKIM (Ours) | 3.25 | Channel-wise | **6.02** |
| ShiftAddLLM | 2.0 | 128 | 7.98 |
| SKIM (Ours) | 2.25 | Channel-wise | 8.44 |
| SKIM (Ours) | 2.4 | Channel-wise | **7.76** |

2.
Improvements for >3 Bits: In fact, the improvements shown in perplexity are nonlinear: smaller improvements at lower PPL can still lead to notable performance gains. For example, when quantizing LLaMA2-7B, our compressed model demonstrates significant improvements on PIQA, ARC, and MMLU, as detailed in the table below. Meanwhile, the smaller PPL improvement at 3.x-bit is because, for consistency, we compressed the model to 3.2 bits rather than the higher bit levels (3.25 or above) used by some other methods. For 4-bit, we set the maximum available bit to 4 to ensure quantization efficiency, effectively disabling mixed precision. If we were to use higher bit levels for 3.x-bit (e.g., 3.25 or 3.3 bits), PPL could be reduced by an additional 50%-100% (from 6.13 to 6.07 to 6.02), making the improvement more pronounced. Similarly, for 4-bit, if we relaxed the restriction, the gap to FP16 could be reduced from 0.13 to 0.11. This 0.02 reduction is considerable at the INT4 level, which is already quite close to the original FP16 result.

| Method | Precision | PIQA | ARC-C | ARC-E | MMLU | Avg |
| --------------------- | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| **LLaMA2-7B** | FP16 | 77.0% | 42.1% | 64.5% | 45.9% | 57.4% |
| **INT3 Quantization** | | | | | | |
| SKIM | INT3 | **76.1%** | **40.6%** | **61.3%** | **42.3%** | **55.1%** |
| SqueezeLLM | INT3 | 75.6% | 38.8% | 61.1% | 41.3% | 54.2% |
| **INT4 Quantization** | | | | | | |
| SKIM | INT4 | 76.9% | **42.5%** | **64.6%** | **45.4%** | **57.3%** |
| SqueezeLLM | INT4 | **77.0%** | 41.8% | 63.8% | 45.1% | 56.9% |

3. Clarification on Any-Bit: Previous methods do include some works that support any-bit via mixed precision, but most fail to demonstrate "continuity" in bit usage. For example, SqueezeLLM allows increasing the precision of certain elements, but if an extra available bit is added to INT3, its performance would be worse than INT4 rather than matching it.
In contrast, our any-bit method, based on allocation, can fully utilize the given bit budget, making it unique and practical.
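The any-bit claim above rests on simple fractional-bit accounting: a non-integer budget such as 3.25 bits is met by assigning a matching fraction of channels to the next-higher integer precision. The sketch below (with a hypothetical helper `mixed_precision_split`, illustrating the accounting only, not SKIM's actual allocation algorithm) shows how an average bit width hits a fractional target exactly:

```python
import math

def mixed_precision_split(num_channels, target_bits):
    """Split channels between floor(target_bits) and floor(target_bits)+1 bit
    levels so that the average bit width equals target_bits.
    Hypothetical illustration of fractional-bit accounting, not SKIM itself."""
    lo = math.floor(target_bits)
    hi = lo + 1
    n_hi = round(num_channels * (target_bits - lo))  # channels at the higher precision
    n_lo = num_channels - n_hi
    avg = (n_lo * lo + n_hi * hi) / num_channels
    return n_lo, n_hi, avg

# e.g. a 3.25-bit budget over 4096 channels: 3072 stay at INT3, 1024 move to INT4
n_lo, n_hi, avg = mixed_precision_split(4096, 3.25)
```

The "continuity" point then amounts to the allocation actually exploiting those extra higher-precision slots, so performance interpolates between the two integer levels rather than collapsing to the lower one.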
Inference-Time Alignment of Diffusion Models with Direct Noise Optimization
Accept (poster)
Summary: The paper proposes a novel approach called Direct Noise Optimization (DNO) for aligning diffusion models with continuous reward functions at inference time. DNO optimizes the injected noise during the sampling process to maximize the reward function, without requiring any fine-tuning of the model parameters. The key contributions are: Out-of-Distribution Reward Hacking: The paper identifies out-of-distribution reward hacking as a critical issue in DNO. It proposes a probability regularization technique to ensure the generated samples remain within the support of the pretrained distribution. Non-Differentiable Reward Functions: The paper extends DNO to handle non-differentiable reward functions by developing a hybrid gradient approximation strategy. Experimental Results: Extensive experiments demonstrate that DNO can achieve state-of-the-art reward scores for various image reward functions, within a reasonable time budget for generation. DNO is shown to outperform tuning-based methods in terms of reward scores while requiring significantly fewer computing resources. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. The authors provide theoretical analysis, extensive experiments, and visual examples to back up their claims about the effectiveness and efficiency of Direct Noise Optimization (DNO) for aligning diffusion models at inference time. The authors present a comprehensive theoretical study of DNO, including a theorem that demonstrates the improvement of the distribution after each gradient step. They also propose variants of DNO to handle non-differentiable reward functions and address the out-of-distribution (OOD) reward hacking problem. The theoretical foundation is solid, with proofs and justifications provided in the appendices. The authors conduct extensive experiments on several important reward functions, including brightness, darkness, aesthetic score, HPS-v2 score, and PickScore.
They compare DNO with existing alignment methods like LGD, SPIN, DDPO, and AlignProp, showing that DNO can achieve state-of-the-art reward scores within a reasonable time budget. The results are presented in tables and figures, demonstrating the superiority of DNO in various settings. The authors explore three methods for optimizing non-differentiable reward functions, including the proposed Hybrid-2 method. They provide experimental results showing that Hybrid-2 is significantly faster and more effective than other methods like ZO-SGD and Hybrid-1. The experiments on JPEG Compressibility and Aesthetic Score reward functions validate the effectiveness of their approach. To prevent OOD reward hacking, the authors introduce a novel probability regularization technique. They provide visual examples and quantitative metrics (like CLIP Score and ITM score) to show that this technique effectively keeps the generated samples within the support of the pretrained distribution. The regularization term is shown to stabilize the optimization process and maintain sample quality. The authors argue that DNO is efficient and practical, requiring significantly fewer computing resources than tuning-based methods. They provide details about the memory usage and time budget for experiments, showing that DNO can run on a single consumer-level GPU with memory usage of less than 15GB. This is supported by the implementation details and experimental settings described in the paper. Methods And Evaluation Criteria: The proposed methods and evaluation criteria in the paper make good sense for the problem of aligning diffusion models with reward functions at inference-time. Here's why: This approach is well-suited for inference-time alignment as it doesn't require modifying the pretrained model parameters. 
By optimizing the injected noise during the sampling process, DNO can effectively align the generated samples with the target reward function while maintaining the pretrained distribution's support. The introduction of probability regularization to prevent out-of-distribution (OOD) reward hacking addresses a critical issue in alignment methods. This technique ensures that optimized samples remain within the support of the pretrained distribution, which is essential for maintaining sample quality and relevance. For non-differentiable reward functions, the proposed hybrid gradient methods (especially Hybrid-2) provide an efficient solution. These approaches combine estimated gradients of the reward function with true gradients of the noise-to-sample mapping, offering a practical way to handle real-world scenarios where reward functions may not be differentiable. The authors use established benchmark datasets and reward functions relevant to diffusion model alignment, such as aesthetic score, HPS-v2 score, and PickScore. These are appropriate and widely recognized metrics in the field of generative models and alignment research. The paper compares DNO against several existing alignment methods (LGD, SPIN, DDPO, AlignProp), providing a comprehensive evaluation of its performance relative to state-of-the-art approaches. Visual examples and optimization trajectories are provided to qualitatively assess the effectiveness of DNO and its variants. This complements the quantitative results and helps in understanding the behavior of the proposed methods. Theoretical Claims: I checked the proof for Theorem 2.1, which is the main theoretical claim in the paper. This theorem demonstrates that under the assumption of L-smoothness for the composite mapping r ◦ Mθ, the expected reward improves after each gradient step in the Direct Noise Optimization (DNO) process. The proof follows these key steps: 1. 
It leverages the Descent Lemma from optimization theory, which is a classical result for smooth functions. 2. It applies this lemma to the specific context of the noise optimization problem. 3. It substitutes the gradient step definition into the inequality from the Descent Lemma. 4. It arrives at the final inequality showing that the expected reward improves with each gradient step. I didn't find any issues with this proof. The assumptions are clearly stated, and the logical flow from premises to conclusion appears sound. The application of the Descent Lemma is appropriate, and the algebraic manipulations seem correct. Experimental Designs Or Analyses: I checked the soundness and validity of several key experimental designs and analyses in the paper. Supplementary Material: I reviewed several parts of the supplementary material that were crucial for understanding the technical details and experimental setups. Relation To Broader Scientific Literature: The key contributions of this paper are closely related to several areas of prior research in diffusion models, reinforcement learning, and optimization. Previous approaches to aligning diffusion models with reward functions have primarily focused on fine-tuning the model parameters through reinforcement learning (RL) or direct fine-tuning; notable examples include DDPO. DNO represents a different approach to inference-time alignment by directly optimizing the noise vectors rather than modifying the model parameters or the sampling dynamics. Essential References Not Discussed: No Other Strengths And Weaknesses: While DNO is presented as a novel framework, some components build directly on existing ideas from noise optimization in diffusion models. The paper could benefit from a more explicit discussion of how it advances beyond these prior works. The practical impact might be limited by the computational requirements of the optimization process, though the authors argue that the time costs are reasonable.
For some applications, the additional optimization time might still be prohibitive. The paper is difficult to understand and should be reorganized before being published. Other Comments Or Suggestions: Ensure consistent capitalization in figure captions (e.g., "Figure 1. ODE vs. SDE for optimization" should be "Figure 1. ODE vs. SDE for Optimization"). Ensure all acronyms are defined upon first use (e.g., OOD, ODE). Questions For Authors: The experiments in the paper focus primarily on image generation tasks. Could the authors discuss whether DNO can be applied to other types of diffusion models (e.g., text, audio) and what modifications might be necessary? Are there fundamental limitations to DNO that would prevent its application in these domains? The authors argue that DNO offers a favorable trade-off between inference time and reward, especially compared to fine-tuning methods. Could the authors provide more detailed comparisons of computational resources required for DNO versus fine-tuning methods, particularly for larger models like SDXL? How does the optimization time scale with model size? Code Of Conduct: Affirmed. Overall Recommendation: 2
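For reference, the Descent Lemma step checked in the Theoretical Claims section above can be written out explicitly. This is the standard statement for a gradient ascent step $z' = z + \eta \nabla f(z)$ on an $L$-smooth function $f = r \circ M_\theta$ (notation assumed here, following the review's description rather than the paper's exact symbols):

```latex
f(z') \;\ge\; f(z) + \langle \nabla f(z),\, z' - z \rangle - \frac{L}{2}\|z' - z\|^2
       \;=\; f(z) + \left(\eta - \frac{L\eta^2}{2}\right)\|\nabla f(z)\|^2 ,
```

so any step size $\eta < 2/L$ guarantees the reward does not decrease, and taking the expectation over $z$ gives the per-step improvement of the expected reward described in the proof sketch.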
Rebuttal 1: Rebuttal: Thank you for your time in reviewing our work. Here, please allow us to provide specific responses to your major comments. ### 1. Comparing to prior works Please allow us to emphasize and reiterate our main contributions here, which are distinct from all prior works on noise optimization. * **A more comprehensive formulation for noise optimization with theoretical understanding:** Firstly, we reveal a fundamental insight: every stochastic element in the diffusion sampling process can be harnessed and optimized. In contrast, previous works have focused solely on optimizing the initial noise. As another major contribution, we elucidate the underlying mechanism of optimizing noise vectors, showing that it can be viewed as sampling from a provably better distribution. * **Identification and mitigation of the OOD reward-hacking problem in noise optimization:** For example, in our experiments on brightness and darkness enhancement, we show that without proper regularization, noise optimization cannot be applied effectively. Our work highlights the standard Gaussian distribution as a crucial prior for noise vectors, enabling successful regularization for such applications. * **Extension to non-differentiable reward functions:** This innovation significantly broadens the applicability of DNO to a wider range of scenarios, as requiring reward functions to be differentiable can be a highly restrictive condition in real-world applications. With these three important contributions developed in this work, we believe it lays a stronger foundation for more applications of noise optimization in the future. ### 2. Optimization time We would like to point out that the optimization time in DNO is actually controllable, which provides significant flexibility for different applications. 
For time-sensitive applications, fewer optimization steps can be used, making the additional time less prohibitive, although the sample improvement may also be limited in such cases. For less time-sensitive applications, more optimization steps can be employed to achieve better results. More importantly, as also discussed in our second response to Reviewer F5TP, directly combining DNO with fine-tuning-based methods makes the time cost required by DNO more acceptable. ### 3. More modality For audio, we believe the answer is yes. For text, however, we think the technique of DNO is not directly applicable. This is because, in discrete diffusion, it is not possible to compute the gradient from the sample back to the latent noise due to the combinatorial nature of the problem. The reasons we chose to experiment with image diffusion models are mainly threefold: 1. There are many excellent open-sourced image diffusion models available for us to experiment with. 2. There are numerous existing baselines for the image alignment problem, which allow for meaningful comparisons. In contrast, there are very few implementations addressing the alignment problem for audio. 3. Most importantly, there are many powerful, open-sourced reward models for images, which are trained on high-quality human-ranked datasets. Such resources are currently lacking for other modalities like audio. In the future, once the community for audio diffusion achieves the same maturity as the image diffusion community, we believe DNO will continue to demonstrate its value in this domain. ### 4. Scaling behavior of computational resources Thank you for bringing this important question to our attention. This is indeed a crucial aspect to discuss! Due to the limited tokens in the rebuttal response, we can only provide a quick answer here and leave the evidence for the revised manuscript or the later discussion period of the rebuttal.
Here, we will elaborate on how memory usage scales with fine-tuning methods compared to our DNO method, as memory usage is typically the dominant factor in determining the number of GPUs required for these tasks. In summary, our conclusions can be drawn as follows: 1. Assuming the memory usage of direct sampling as 1 unit, fine-tuning methods would require at least 10 times more units to get started, while our DNO method requires approximately 1.5 times more units. Generally, memory usage scales linearly as the model size grows. 2. One gradient step in DNO roughly takes 2.5–2.7 times longer than direct sampling. Generally, the time cost of direct sampling scales sublinearly or linearly as the model size grows, depending on the level of parallelism in the architecture. Thank you again for raising this excellent discussion. We believe this analysis will make our manuscript more informative, and we will include these insights and the rigorous measurements in the revised version. ### 5. Presentation Could you please specify which parts you feel need the most improvement or reorganization? Your detailed suggestions would be greatly appreciated. Thank you for your valuable feedback!
Summary: This paper conducts a comprehensive investigation of optimizing the noise in the sampling process of diffusion models for alignment. The main contributions are: 1) giving a rigorous definition of noise optimization and extending it to SDE sampling, 2) explaining and quantifying the root of OOD issues in noise optimization and providing a regularization, and 3) extending noise optimization to non-differentiable rewards by estimating their gradients. Generally, this paper delves into some crucial technical issues of noise optimization, a new branch of diffusion alignment, and proposes basic solutions. Claims And Evidence: All of the contributions mentioned above are clearly supported by either theoretical or empirical results. Additionally, for 2), I expect a more direct demonstration of the connection between the low probability of z and the OOD example, e.g., what are the levels of the M1 and M2 values for diffusion models with hacked reward alignments? Methods And Evaluation Criteria: Yes. This paper discusses an underexplored branch of diffusion alignment, optimizing the noise, which makes sense for a deeper understanding of diffusion models. Theoretical Claims: The correctness of Theorem 2.1, the results about SDE optimization advantages, and Lemma 3.1 were all checked. To the best of my knowledge, I do not see any issues. Experimental Designs Or Analyses: The experiments are extensive and persuasive. I checked the improvement from introducing regularization and the comparison to existing alignment methods. As an underexplored method, DNO yields considerable performance. Issues: I wonder whether DNO could be combined with fine-tuning-based methods for even more superior performance. Supplementary Material: I reviewed Appendix B for examples and Appendix D for the theoretical results.
Relation To Broader Scientific Literature: This paper delves into some crucial technical issues of noise optimization, an underexplored branch of diffusion alignment, and proposes basic solutions. It provides a fundamental baseline for this field and directs necessary attention to this branch, which I believe is significant. Essential References Not Discussed: One existing (or maybe concurrent) work should be noticed: Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps (https://arxiv.org/abs/2501.09732). I am curious about the difference between this paper and the mentioned work. Other Strengths And Weaknesses: Weaknesses: [1] The difference between the steps of ODE optimization and those of SDE optimization should be elaborated. For example, does optimizing one step of SDE sampling optimize all TxD noise? Since this should be a long serial optimization, how are all the gradients computed? Other Comments Or Suggestions: I think this paper studies a significant problem. If my concerns are addressed, I would be pleased to raise my scores. Questions For Authors: [1] Can DNO be combined with other tuning-based methods? I believe this could be extremely important for potential real-world applications, since training cost may not be a real concern for industry deployment of diffusion models. [2] Could you elaborate on the optimization process of SDE sampling as advised above? How is the optimization conducted? [3] Please explain the relationship to the work Inference-Time Scaling for Diffusion Models beyond Scaling Denoising Steps (https://arxiv.org/abs/2501.09732). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time in reviewing our work. Here, please allow us to provide specific responses to your major comments.

### 1. "What is the level of M1 and M2 value for diffusion models with hacked reward alignments?"

In this work, we did provide a more direct illustration of this point. In lines 262–263, we define the metric $P(z)$ using the values of $M_1$ and $M_2$. Then, in Figures 2 and 13–15, we show that the value of $P(z)$ diminishes to zero when reward-hacking occurs.

### 2. "Can DNO be combined with other tuning-based methods?"

Yes, and we believe the most natural way to combine DNO with tuning-based methods is to directly apply DNO to fine-tuned models. In this way, our proposed DNO can continue to improve samples generated by aligned models at test time. To validate this point, we conducted a quick experiment. Following the setting in Section 5.2, we directly applied DNO to the model fine-tuned by DDPO. We observed that DNO can indeed continue to improve the sample quality to achieve higher rewards, and can reach the level of reward achieved by running DNO for 5 minutes on the base model, but with only a 1-minute budget.

| Method | DDPO | DNO (5 min) | DDPO + DNO (1 min) |
|----------------------|----------|-------------|--------------------|
| Aesthetic | 7.180 | 8.587 | 8.761 |
| HPS | 0.287 | 0.324 | 0.319 |

In this sense, applying DNO with fine-tuning-based methods can be viewed as a way to accelerate the DNO algorithm, making it more practical, especially for time-sensitive applications.

### 3. "Could you elaborate on the optimization process of SDE sampling as advised above?"

You might be overthinking the complexity here. Using ODEs and SDEs for DNO has almost the same time cost and memory usage. Gradient backpropagation is conducted over the entire sampling process for both ODEs and SDEs using automatic differentiation.
To clarify how our implementation works, here is a core snippet of our code:

```python
import torch
from torch.utils import checkpoint

# `args`, `sampler`, `unet`, `prompt_embeds`, and `reward_function` are
# defined elsewhere in the full implementation.

# SDE sampling logic. To change it into ODE, we only need to modify the
# sampler and the dimension of the noise vectors accordingly.
noise_vectors = torch.randn(args.num_steps + 1, 4, 64, 64, device=args.device)
noise_vectors.requires_grad_(True)
sampler.initialize(noise_vectors, "sde")

# Sampling process
while not sampler.is_finished():
    model_kwargs = sampler.prepare_model_kwargs(prompt_embeds=prompt_embeds)
    model_output = checkpoint.checkpoint(unet, **model_kwargs)
    sampler.step(model_output)

# Gradient computation
sample = sampler.get_last_sample()
loss = -reward_function(sample)
loss.backward()
```

### 4. Relationship to the work [arXiv:2501.09732]

Thank you for pointing out this work to us! We also noticed this work after the ICML submission deadline. For simplicity, we will refer to this work as ITS below. From our perspective, ITS represents a different approach to handling test-time scaling for diffusion models using combinatorial search techniques, while we focus on continuous search techniques using gradient-based optimization. To illustrate the difference:

* If the reward model is not continuous or differentiable, the method proposed by ITS is more useful, as it does not require continuity or differentiability by design.
* On the other hand, when the reward model is continuous and smooth, our proposed DNO is more favorable because it leverages more information for optimization, resulting in much faster convergence compared to ITS. This is also evident if we compare Figure 9 of the ITS work with Figure 3 of our work, where we show that using the gradient of the Aesthetic reward leads to significantly faster optimization.

As illustrated above, we acknowledge that this is indeed a very important concurrent work, and we will include a discussion of it in the revised manuscript. --- Rebuttal Comment 1.1: Comment: Thank you.
I believe your method is significant. I have raised my score. --- Reply to Comment 1.1.1: Comment: Dear Reviewer F5TP, Thank you for acknowledging our rebuttal response and raising the score. We assure you that the discussed points will be reflected in the revised version. Best regards, Authors
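The gradient-based noise optimization discussed in the rebuttal above can be reproduced end-to-end on a toy problem. Everything below is a hypothetical stand-in (a fixed tanh map in place of the diffusion sampler, a quadratic score in place of the learned reward); it only shows the mechanics of ascending a reward through the noise-to-sample mapping while the "model" stays frozen:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
target = np.ones(4)

def sample(z):                 # toy stand-in for the noise-to-sample map M_theta
    return np.tanh(W @ z)

def reward(x):                 # toy smooth reward: closeness to a fixed target
    return -float(np.sum((x - target) ** 2))

def reward_grad_wrt_noise(z):  # analytic gradient of reward(sample(z)) w.r.t. z
    x = sample(z)
    dr_dx = -2.0 * (x - target)
    jac = (1.0 - x ** 2)[:, None] * W   # Jacobian of tanh(W @ z)
    return jac.T @ dr_dx

z = rng.standard_normal(4)     # the injected noise is the only thing optimized
r0 = reward(sample(z))
for _ in range(1000):          # plain gradient ascent on the noise
    z = z + 0.02 * reward_grad_wrt_noise(z)
r1 = reward(sample(z))
```

With the sampler held fixed, only the noise moves, which mirrors the tuning-free character of DNO; the paper's probability regularization would additionally penalize `z` for drifting away from a typical standard-Gaussian sample.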
Summary: This paper investigates the alignment problem of diffusion models during inference and proposes a tuning-free, prompt-agnostic method named Direct Noise Optimization (DNO). The authors theoretically investigate the properties of DNO and propose variants of DNO, aiming to solve problems of out-of-distribution reward hacking and optimization of non-differentiable reward functions. Experiments demonstrate that DNO achieves state-of-the-art performance on several reward functions. Claims And Evidence: Most claims in the paper are well-supported by theoretical analysis and empirical results. Methods And Evaluation Criteria: The paper primarily uses reward scores (e.g., Aesthetic Score, HPS Score, PickScore) and Out-of-Distribution (OOD) indicators (e.g., CLIP Score, ITM Score). These metrics are appropriate for assessing alignment performance. Theoretical Claims: The key assumption in Theorem 2.1 that the noise-to-sample mapping is smooth is not entirely convincing, and it is unclear how the cited Figure 4 in (Tang et al., 2024a) directly supports the claim. A more rigorous justification is needed. Experimental Designs Or Analyses: A potential limitation is the use of a simple animal prompt dataset, which may restrict the evaluation’s generality. Testing on more diverse prompts (e.g., scenes, abstract concepts) could better assess DNO’s robustness. Supplementary Material: Yes, I reviewed the supplementary material, which includes the provided code. Relation To Broader Scientific Literature: The proposed Direct Noise Optimization (DNO) method contributes to the growing body of work on aligning diffusion models with specific tasks or reward functions. Previous methods like DDPO, DPOK, AlignProp, and DRaFT have explored various approaches to this problem, including reinforcement learning and direct fine-tuning. DNO belongs to the category of inference-time alignment methods, which also includes LGD. 
Essential References Not Discussed: I noticed that there are some works on inference-time initial noise optimization, such as [1] and [2]. These works seem to be closely related to the topic of this paper. However, they are not cited or discussed in the current version. --- [1] Eyring L, Karthik S, Roth K, et al. ReNO: Enhancing One-step Text-to-Image Models through Reward-based Noise Optimization. NeurIPS, 2024. [2] Qi Z, Bai L, Xiong H, et al. Not All Noises Are Created Equally: Diffusion Noise Selection and Optimization. arXiv, 2024. Other Strengths And Weaknesses:

**Strengths**

- DNO is a test-time optimization method that distinguishes itself from conventional approaches like reinforcement learning and direct fine-tuning of diffusion models.
- The proposed method addresses OOD reward-hacking and non-differentiable rewards.
- The paper provides detailed mathematical derivations and theory.

---

**Weaknesses**

- It is unclear whether the proposed method might negatively impact the original model's capabilities, such as diversity in generation.
- It lacks a comparison with other inference-time initial noise optimization methods.
- The paper tests on simple animal prompts and does not evaluate DNO's performance on commonly used t2i or complex prompts. Additionally, the paper claims that DNO is prompt-agnostic, but human preference metrics like HPS and PickScore actually require prompt consideration.

Other Comments Or Suggestions: No further comments or suggestions. Questions For Authors: Why does DNO achieve significantly better performance with very few diffusion steps (e.g., 10 or 15 steps) compared to the standard setting of 50 steps (Table 2)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your time in reviewing our work. Here, please allow us to provide specific responses to your major comments.

### 1. On the smoothness assumption.

Conceptually, it is quite straightforward to argue that the reward function is smooth with respect to pixel changes, because small changes to the image pixels would not result in large differences in the reward function's score. To provide a more concrete answer, we conducted a quantitative analysis: We first sampled a noise vector $x_1 \sim \mathcal{N}(0, I)$ and then generated a second noise vector $x_2$ in the neighborhood of $x_1$, i.e., $x_2 \sim \mathcal{N}(\sqrt{0.9} x_1, \sqrt{0.1} I)$. Using the Aesthetic Score as the reward function $r(\cdot)$, we computed the following quantities: $A = E_{x_1, x_2} \frac{||r(M_\theta(x_1)) - r(M_\theta(x_2))||}{||x_1 - x_2||}$ and $B = E_{x_1, x_2} \frac{||\nabla r(M_\theta(x_1)) - \nabla r(M_\theta(x_2))||}{||x_1 - x_2||}.$ Using 100 samples with random prompts from Section 5.1, we estimated these values to be $A=0.19$ and $B=6.93$, respectively. These results empirically demonstrate that the composite mapping $r \circ M_\theta$ is indeed smooth. We will include this justification in a more formal way in the revised manuscript.

### 2. Results on more diverse prompts.

We quickly conducted some additional quantitative experiments using DNO with SD v1.5, setting $\lambda = 0.1$ as in Section 5.2. We tested 1000 randomly selected prompts from the Pick-a-Pic test dataset (https://huggingface.co/datasets/yuvalkirstain/pickapic_v1). The table below reports the average performance of DNO. As shown, DNO still performs well on complex prompts. This is not a surprising result, because by design, DNO optimizes noise vectors specific to each prompt, ensuring robust performance across diverse scenarios.
| | SD v1.5 | DNO (1 min) | DNO (3 min) | DNO (5 min) |
|---------------|--------|-------------|-------------|-------------|
| Aesthetic | 5.769 | 6.013 | 6.993 | 8.305 |
| HPS | 0.270 | 0.279 | 0.291 | 0.326 |
| PickScore | 21.20 | 21.85 | 23.61 | 24.89 |

### 3. Discussion on the two related works.

Thanks for mentioning these two works to us! After carefully reading them, we agree that they are indeed closely related to our work, and we will add them to our revised manuscript accordingly. While both works also focus on noise optimization for diffusion models, there are several distinctions between them and our work:

- **[1]** ReNO considers a similar reward-based gradient optimization for noise optimization. However, it only considers one-step distilled models, rather than the full-step diffusion models explored in our work. This makes its approach a simplified version of our proposed DNO method. Using one-step distilled models can result in faster optimization, but this inevitably sacrifices sample quality. Moreover, for one-step distilled models, there is only one noise to optimize, so they do not need to distinguish between ODE and SDE samplers. Finally, their work lacks a deeper analysis of several critical aspects covered in our paper, such as the OOD reward-hacking problem, convergence issues, and extensions to non-differentiability scenarios.
- **[2]** This work considers a fundamentally different setting compared to our work and ReNO [1]. Their approach aims at constructing a "reward function" using only the sampling trajectory information, and then applies an optimization idea similar to DNO. Since they do not use any external reward function to directly improve the noise, it is natural to see (as demonstrated in Table 1 of their work) that their improvements are relatively marginal.

### 4. Impact on diversity.

This is indeed an important question!
Unfortunately, it is difficult to rigorously discuss diversity since it heavily depends on the reward function's optimization landscape, which is typically unknown in practice. For instance, if the reward function is strongly concave with a single global maximum, running DNO to convergence would collapse the distribution into a Dirac delta, eliminating diversity entirely. Conversely, if the reward function is constant, DNO would leave diversity unaffected. Thanks for highlighting this point; we will include this discussion in our revised manuscript.

### 5. Explanation of Table 2.

This occurs because we fix the time budget to 1 minute; thus, running DNO with smaller $T$ allows more optimization steps. Although using smaller $T$ can improve performance on these benchmarks, in our main experiments (Table 1), we set $T=50$ to maintain consistency with existing baselines.
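The Monte Carlo smoothness estimate in point 1 of the rebuttal above can be sketched numerically. This is a minimal illustrative stand-in, not the authors' code: `reward` below is a hypothetical smooth surrogate for the true pipeline $r(M_\theta(\cdot))$ (decode the noise with the diffusion model, then apply the Aesthetic Score), with an analytic gradient for simplicity.

```python
import numpy as np

def reward(x):
    # Hypothetical smooth stand-in for the composite map r(M_theta(x));
    # the real pipeline decodes x with the diffusion model and scores it.
    return -0.5 * float(x @ x)

def reward_grad(x):
    # Analytic gradient of the stand-in reward above.
    return -x

def estimate_smoothness(dim=16, n_samples=100, seed=0):
    """Monte Carlo estimates of
    A = E ||r(x1) - r(x2)|| / ||x1 - x2||          (Lipschitz-type constant)
    B = E ||grad r(x1) - grad r(x2)|| / ||x1 - x2||  (gradient smoothness),
    with x1 ~ N(0, I) and x2 a nearby perturbation of x1."""
    rng = np.random.default_rng(seed)
    a_sum = b_sum = 0.0
    for _ in range(n_samples):
        x1 = rng.standard_normal(dim)
        x2 = np.sqrt(0.9) * x1 + np.sqrt(0.1) * rng.standard_normal(dim)
        dist = np.linalg.norm(x1 - x2)
        a_sum += abs(reward(x1) - reward(x2)) / dist
        b_sum += np.linalg.norm(reward_grad(x1) - reward_grad(x2)) / dist
    return a_sum / n_samples, b_sum / n_samples

A, B = estimate_smoothness()
print(f"A ~= {A:.3f}, B ~= {B:.3f}")
```

Swapping `reward` for the actual decode-and-score pipeline (with autograd for the gradient) recovers the quantities reported in the rebuttal.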
Summary: This paper introduces a novel approach for optimizing diffusion input noise based on a specified reward function. The key advancements over prior work include: - A new regularization technique that ensures the optimized noise remains within the distribution of the diffusion model. - A method for handling non-differentiable reward functions. - Optimization of a sequence of noise variables throughout the diffusion SDE sampling process, rather than just a single initial noise in ODE sampling. These improvements enhance performance and yield strong empirical results using SDv1.5 as the teacher model, all without requiring any network optimization. Claims And Evidence: The claims are substantiated with supporting evidence. Methods And Evaluation Criteria: The evaluation is generally well-reasoned. However, Section 5.1 may require additional work to further validate the effectiveness of the proposed regularization. Specifically, the current adversarial setting—optimizing noise to increase darkness with the prompt "white [animal]"—does not fully reflect real-world use cases. Are there more practical scenarios where this regularization would be particularly crucial? Theoretical Claims: I briefly reviewed the equations, and they appear to be correct. Experimental Designs Or Analyses: Yes, but some baselines and comparisons are missing. For instance, the performance of ODE-based and SDE-based noise optimization should be directly compared. Additionally, it would be valuable to explore simpler regularization techniques, such as KL regularization between the optimized noise and a standard Gaussian distribution, as a baseline. Supplementary Material: Yes, all. Relation To Broader Scientific Literature: This paper addresses the problem of noise optimization for improved image generation. 
Optimizing noise presents a promising avenue for further performance enhancement beyond weight optimization in pretrained networks and holds significant potential in the emerging trend of test-time scaling. Essential References Not Discussed: it covers the related works well. Other Strengths And Weaknesses: A few additional strengths of this work include: - The use of SDE and a more flexible optimization target, which significantly outperforms prior approaches and achieves performance comparable to more expensive weight tuning. - The ability to handle non-differentiable rewards, broadening the applicability of the method. Other Comments Or Suggestions: n/a Questions For Authors: n/a Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your time in reviewing our work. Here, please allow us to provide specific responses to your major comments.

Regarding your comment: "Optimizing noise to increase darkness with the prompt 'white [animal]'—does not fully reflect real-world use cases. Are there more practical scenarios where this regularization would be particularly crucial?"

Firstly, we would like to clarify that although optimizing blackness/whiteness was chosen to better reflect an adversarial setting, these optimizations also represent realistic applications. There is a genuine need to generate images with highly dark or highly light backgrounds, which cannot be achieved by base models through either prompting or best-of-k selection. This application is actually inspired by a well-known trick for diffusion models called offset-noise (see the CrossLabs blog: https://www.crosslabs.org/blog/diffusion-with-offset-noise). This is why we believe these are also very important applications.

From an academic perspective, the main goal of this section is to demonstrate that OOD reward hacking is more likely to occur in a strong adversarial setting, and we propose a solution to mitigate it. Interestingly, we have found that our proposed technique has been adopted in a more practical setting in a recent work (https://arxiv.org/pdf/2412.03876). In this work, the reward model evaluates whether the content is safe while the prompts are malicious. This resembles the adversarial setting in our brightness and darkness example, and they demonstrate that our method can mitigate the reward-hacking phenomenon to some extent. Looking ahead, as more reward models and applications emerge, we believe our proposed techniques will find even more important and crucial applications.
A General Framework for Inference-time Scaling and Steering of Diffusion Models
Accept (poster)
Summary: This study proposes FK steering, a scalable inference method for both image and text diffusion models. FK steering consists of a proposal generator, a potential function, and an intermediate reward. In each denoising interval, the diffusion model generates multiple proposals, and the potential function uses one of three functions—difference, max, or sum—to reweight the particles for resampling in the next denoising step. The intermediate reward employed by the potential function is chosen from among expected x_0, many sample reward, or learned reward. FK steering is a training-free method that outperforms fine-tuning methods like DPO. It even surpasses SDXL (with 2.6B parameters) when using SDv2.1 (with 0.8B parameters), and its superiority is evaluated on text-to-image tasks using metrics such as GenEval, image reward, and HPS. Additionally, the method demonstrates effectiveness in text diffusion models, proving that it is a general framework. Claims And Evidence: Overall, the experiments are well-executed and provide strong support for most of the claims. However, a few questions remain, and additional evidence would benefit the paper. It would be beneficial to strengthen experiments in the text-to-image domain with up-to-date open-weight models. Can FK steering make a Stable Diffusion 3 medium (2B) [A] model outperform the large (8B) [B] model? Or if it can surpass advanced models like Flux-dev [C]. Can FK steering make a timestep-distilled model like Flux-Schnell [D] outperform Flux-dev? In addition, experiments on ImageNet in Appendix A would be more convincing if they included additional metrics like FID and IS. Given that classifier guidance by Nichol & Dhariwal (2021) significantly improves these metrics, it is unclear if FK steering can achieve similar improvements. 
[A] Stability AI, https://huggingface.co/stabilityai/stable-diffusion-3.5-medium [B] Stability AI, https://huggingface.co/stabilityai/stable-diffusion-3.5-large [C] Black Forest Labs, https://huggingface.co/black-forest-labs/FLUX.1-dev [D] Black Forest Labs, https://huggingface.co/black-forest-labs/FLUX.1-schnell Methods And Evaluation Criteria: The proposed methods make sense for the inference time scaling. The paper clearly explains its components—a proposal generator, potential function, and intermediate rewards—along with all the possible choices for each, which are well-motivated. The evaluation criteria are also appropriate; for text-to-image tasks, metrics like GenEval and HPS, which are commonly used in the field, are employed. However, I have less expertise in text diffusion model experiments, so insights from other experts would be valuable. Theoretical Claims: I reviewed the equations and theoretical claims presented in the main manuscript. The notations are meticulously defined, and the derivations appear to be correct. Experimental Designs Or Analyses: Overall, the experiments effectively demonstrate the benefits of FK steering. However, I am curious as to why the GenEval performance does not match between Table 1 and Table 3. Additionally, for text-to-image tasks, it would be helpful to include more qualitative comparisons using multiple seeds. Supplementary Material: I focused on reviewing Supplementary Sections A and C. I've already provided comments on Section A; here are my remarks on Section C. The method controls diversity with lambda, but in diffusion models, the most effective hyperparameter for controlling diversity is the scale of classifier-free guidance. I'm curious whether reducing the classifier-free guidance scale while using a higher lambda could maintain diversity while improving human preference. 
Additionally, I'm interested in knowing whether the proposed method offers an orthogonal contribution to classifier-free guidance, or if it only proves effective at a specific classifier-free guidance scale. Relation To Broader Scientific Literature: Previous work aimed at improving diffusion model quality, text alignment, and human preference has largely relied on fine-tuning methods or inference steering techniques that employ gradient guidance from reward models. In contrast, this paper is novel and effective in that it is training-free and bypasses the need for gradient guidance. Essential References Not Discussed: No critical issues were found. Other Strengths And Weaknesses: As mentioned previously, the key strength of this paper is that it achieves effective results without relying on finetuning or gradient guidance. It explains each option and validates their effectiveness through ablation studies. However, the evaluation could be improved by incorporating experiments on more up-to-date models and enhancing the metrics in the ImageNet evaluations. Other Comments Or Suggestions: It would be beneficial to include a table or plot comparing the increase in inference time and memory cost. Additionally, showing whether performance continues to improve when increasing the number of particles beyond k=8 would help demonstrate the scalability of the proposed method. Questions For Authors: In the early denoising steps, the expected x_0 will be blurry as the diffusion models predict the “expectation” at each denoising step. So, the reward model may not be able to provide a reliable signal at that point. How does the reward model play a meaningful role in the earlier denoising steps? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for highlighting the strength of our results. We also deeply appreciate your comments about the high-quality execution of our paper, the rigor/strength of our evaluations, and the clarity of our exposition.

_Can FK steering make a Stable Diffusion 3 medium (2B) [A] model outperform the large (8B) [B] model?_

**SD3 2B + FK steering indeed outperforms SD3 8B.** Stable Diffusion 3 2B with FK steering using just $k=3$ particles achieves a higher prompt fidelity score than the 8B parameter model.

| Model | Param. Count | GenEval score |
|-|-|-|
| SD3 medium | 2B | 0.72 |
| SD3 medium + FK(k=3) | 2B | **0.761** |
| SD3 large | 8B | 0.74 |

_Can FK steering make a timestep-distilled model like Flux-Schnell outperform Flux-dev?_

**Distillation + FK steering**. Improving timestep distillation is an interesting future direction and a natural extension of FK steering.

_Given that classifier guidance by Nichol & Dhariwal (2021) significantly improves these metrics, it is unclear if FK steering can achieve similar improvements._

**Combining gradients with FK steering**. Our ImageNet experiments show that FK steering can improve on classifier guidance. We have also added ImageReward gradient-guidance results:

| Model | GenEval | IR | HPS | Sampling Time |
|-|-|-|-|-|
| SDv1.5 | 0.44 | 0.187 | 0.245 | 2.4s |
| SDv1.5 + IR guidance | 0.450 | 0.668 | 0.245 | 20s |
| SD v1.5 + FK (k = 4) | **0.54** | **0.898** | **0.263** | 8.1s |
| SD v1.5 + IR guidance + FK (k = 4) | **0.56** | **1.290** | **0.268** | 55s |

Here, FK steering **both outperforms and improves gradient guidance**.

_I am curious as to why the GenEval performance does not match between Table 1 and Table 3. Additionally, for text-to-image tasks, it would be helpful to include more qualitative comparisons using multiple seeds._

- **Average vs best particle performance**. Table 1 has results from the best particle out of $k$ while Table 3 has results averaged over all $k$ particles.
We do this to highlight that FK steering improves sample quality and prompt alignment metrics for all the particles. We will make this distinction clearer in the text and captions for the tables in the revised draft.

_I'm interested in knowing whether the proposed method offers an orthogonal contribution to classifier-free guidance, or if it only proves effective at a specific classifier-free guidance scale._

- Thanks for this question. We ran FK steering with $k=4$ at a lower guidance scale in the table below. However, we note that higher guidance scales lead to better performance.

| Model | Scale | GenEval | HPS | IR |
|-|-|-|-|-|
| SDv1.5 + FK | 4 | 0.52 | 0.795 | 0.256 |
| SDv1.5 + FK | 7.5 | 0.54 | 0.898 | 0.263 |
| SDv2.1 + FK | 4 | 0.59 | 0.901 | 0.262 |
| SDv2.1 + FK | 7.5 | 0.62 | 1.006 | 0.268 |
| SDXL + FK | 4 | 0.62 | 1.264 | 0.299 |
| SDXL + FK | 7.5 | 0.64 | 1.298 | 0.302 |

_The evaluation could be improved by incorporating experiments on more up-to-date models and enhancing the metrics in the ImageNet evaluations._

- **ImageNet**. We will add these numbers in the final paper revision.
- **Up-to-date models**. We have included additional SD3 results above.

_It would be beneficial to include a table or plot comparing the increase in inference time and memory cost._

- **Memory and sampling time increase**. In table 6, we show the increase in sampling time from FK steering with $k=4$ particles. We include sample times for a single GPU run as well as parallel generation with 2 GPUs. We will update our draft to contain FLOP/memory details.

_Showing whether performance continues to improve when increasing the number of particles beyond k=8 would help demonstrate the scalability of the proposed method._

- **Beyond k=8**. In figure 4, we include the GenEval and ImageReward scores for increasing values of $k$, from 2 to 16. The figure indicates that increasing particle count improves both scores.
For our text experiments, we will include additional results in the revised version of the paper. _In the early denoising steps, the expected x_0 will be blurry as the diffusion models predict the “expectation” at each denoising step...How does the reward model play a meaningful role in the earlier denoising steps?_ Thank you for this question. In figure 5 in the appendix, we plot the correlation of the reward $r_\phi(x_t) = r(x_0 = E_\theta[x_0 | x_t])$ with the reward at the terminal step $r(x_0)$. For ImageReward, we see a high correlation early in the generation. We demonstrate that learning the rewards will improve this correlation even further and can lead to improved performance. For instance, for toxicity we see a low correlation, prompting us to learn the intermediate rewards, which improves the generation's attribute accuracy as shown in table 4, see SSD-LM with learned $r_\phi$. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' rebuttal, which addresses my questions. I also read the reviews from the other reviewers and noted that there are concerns regarding the novelty of the work. However, I believe the strong empirical performance demonstrated in the paper outweighs these concerns. I will maintain my current score. --- Reply to Comment 1.1.1: Comment: Thank you for reading our response and highlighting the approach’s strong results.
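For readers following the mechanics discussed in this thread, a minimal toy sketch of one resampling step of the kind FK steering performs (not the paper's implementation): $k$ particles are scored by a potential derived from intermediate rewards, then multinomially resampled so that the particle count is preserved. The tilting strength `lam` and the simple exponential potential are illustrative assumptions.

```python
import numpy as np

def fk_resample_step(particles, potentials, rng):
    """One SMC-style resampling step: keep k particles, drawn with
    probability proportional to their potentials (unlike selecting a
    single winner)."""
    weights = potentials / potentials.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

rng = np.random.default_rng(0)
k, d = 4, 2
particles = rng.standard_normal((k, d))    # partially denoised states x_t
rewards = np.array([0.1, 0.2, 3.0, 0.15])  # intermediate rewards per particle
lam = 1.0                                  # assumed reward-tilting strength
potentials = np.exp(lam * rewards)         # illustrative exponential potential
resampled = fk_resample_step(particles, potentials, rng)
print(resampled.shape)  # (4, 2): the particle count k is preserved
```

The step above highlights the SMC property emphasized in the rebuttal: resampling retains $k$ particles at every step, in contrast to methods that commit to a single proposal.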
Summary: This paper proposes Feynman-Kac steering for diffusion models using pretrained reward functions like ImageReward. **Update after rebuttal** My main concern was that earlier works proposing FK for diffusion were not clearly acknowledged and discussed. During the rebuttal phase the authors proposed specific revisions to remedy this. Therefore I have increased my score. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I believe there was only one proof, in Appendix E, which is a direct application of a lemma of Chopin et al. It appears correct to me. Experimental Designs Or Analyses: Did not check. Supplementary Material: Did not check except Appendix E. Relation To Broader Scientific Literature: I have some concerns about the novelty claims in this paper in relation to prior work. It is certainly not the first to propose FK for steering diffusion models. I feel that it is too strong to say things like "we propose FK steering" as on L17 of abstract. To me, the main contribution of the work is in making a clear and experimentally-demonstrated connection to modern text-to-image settings and the idea of using pretrained reward functions. "Conditional sampling within generative diffusion models" (https://arxiv.org/abs/2409.09650v1). An earlier work discussing FK steering of diffusion models. I think this one should be cited and discussed. This paper does cite and compare to the paper "Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding" (SVDD) (https://arxiv.org/abs/2408.08252). However, on L868 this paper says that SVDD only selects a single sample, but that is only the case for the choice alpha=0, whereas for alpha > 0 SVDD seems very similar (if not identical?) to FK (please clarify if this is incorrect). Some other concurrent related works (just an FYI): "Composition and Control with Distilled Energy Diffusion Models and Sequential Monte Carlo" (https://arxiv.org/abs/2502.12786).
Also uses FK steering but requires an energy-based model (EBM) (which they distill from a diffusion model) to build reward functions that depend on the density p_t (for example, temperature control and composition). It might be nice to highlight how pretrained reward functions as you suggest help avoid the need for EBMs. "Debiasing Guidance for Discrete Diffusion with Sequential Monte Carlo" (https://arxiv.org/abs/2502.06079). Similar approach for discrete temperature control. Essential References Not Discussed: "Conditional sampling within generative diffusion models" (https://arxiv.org/abs/2409.09650v1) (see Relation To Broader Scientific Literature) "Nested Diffusion Processes for Anytime Image Generation" (https://arxiv.org/abs/2305.19066) -- a similar idea to the "many-sample" approach mentioned on L246 was discussed in Section 5 of this reference. Other Strengths And Weaknesses: The writing is clear and the experiments are nice. Although I feel that the technical contribution is somewhat limited (see comments on prior work and missing references) I do appreciate the clarity of the exposition compared to earlier papers which are harder to digest. Using pretrained rewards is a good idea, and the experiments showcase clear practical applications of the method. However, I feel that earlier works proposing FK steering of diffusion models should be more clearly acknowledged. Other Comments Or Suggestions: N/A Questions For Authors: Please see Relation To Broader Scientific Literature and Essential References Not Discussed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for highlighting the quality of our writing and experiments, and the practical applications of our approach.

_Concerns regarding novelty claims in this paper._

Fields such as statistics and posterior inference [Naesseth et al 2019] have used Feynman-Kac interacting particle systems for rare-event sampling and for posterior inference with state-space models. We evaluate using FK-IPS to sample from target distributions defined by arbitrary rewards and diffusion models. FK-IPS requires two things: (1) potential functions used to up-weight sample paths that yield high rewards (see eq 2), and (2) a method for sampling from the FK distributions. In this work, we provide:

- **New potentials**.
  1. We identify a simple condition in equation 3 that potentials must satisfy to consistently estimate the target distribution, and show that various new choices of potentials are possible.
  2. The traditionally used DIFFERENCE potential minimizes the variance of the potentials (see thm 10.1 in [Chopin et al., 2020]), not necessarily generating high-reward samples.
     - For bounded rewards (e.g. ImageReward), using the DIFFERENCE potential can paradoxically eliminate particles that achieve the maximum reward early in generation.
     - In tables 3 & 5, we show that a potential we develop that satisfies the product condition yields higher-reward samples than the difference potential.
  3. We show that diffusion models offer many choices of intermediate rewards with different trade-offs between compute and knowledge of the terminal sample's reward. We propose two novel choices:
     - *Learned from data*. Using data and the noising process, we train intermediate rewards using regression. This objective generalizes learning a classifier trained on noisy states in Dhariwal et al., 2023 to arbitrary reward functions.
     - *Many-sample intermediate reward*.
Using samples from $p_\theta(x_0 \mid x_t)$, we define intermediate rewards of the form $\log \frac{1}{N} \sum_{i=1}^N \exp(r(x_0^i))$.

- **Sampling**. We show the effectiveness of the base model as the proposal generator, which is faster than techniques like gradient guidance and expands the use of SMC to discrete-state diffusion models and non-differentiable rewards.

With this framework, we show **that inference-time steering of diffusion models is remarkably effective.** FK steering:

- **Outperforms fine-tuning**. With only $2$ particles, FK steering outperforms the DPO and DDPO fine-tuning approaches on prompt fidelity metrics for all the text-to-image models considered.
- **Outperforms gradient-guidance**. FK steering outperforms gradient guidance and is **significantly faster** (8.1s vs 20s for SDv1.5).
- **Gradient-free control of discrete models**: FK steering can be gradient-free and enables plug-and-play control of discrete-space diffusion models.
- **Overcomes parameter counts**. FK steering enables smaller models (0.8B) to outperform much larger models (2.6B) with less total compute.

_This paper says that SVDD only selects a single sample, but that is only the case for the choice alpha=0, whereas for alpha > 0 SVDD seems very similar (if not identical?) to FK_

**Comparison with SVDD**. SVDD selects a **single proposal** by sampling from a categorical distribution over the proposals $x_t^i$, with probability proportional to $\exp(r(x_t^i)/ \alpha)$ (see line 4 in algorithm 1). SVDD is a specific instantiation of nested importance sampling (algorithm 5 in Naesseth et al. 2019). FK steering uses SMC, which selects $k$ proposals instead of one. However, as noted in lines 198-204, it can use other particle-based samplers.

**Prior works**.

1. **Nested diffusion sampling**. Elata et al 2023 uses an inner denoising loop to sample a single $x_0$ for each state $x_t$, rather than using $E_{\theta}[x_0 | x_t]$, in the transition kernel.
This approach samples from the diffusion many times, but does not retain multiple samples for each step $x_t$.
2. **Conditional Sampling within Generative Diffusion Models**. We discuss the relevant method from this review paper, Wu et al 2023, which proposes a particle-based sampler for conditional sampling with continuous diffusion models with $\log p(y | x)$ as the reward. In contrast, we steer with arbitrary rewards and present various choices of potentials.

Thank you for highlighting these references; we will include them in the revised draft. The other related works cited were posted after the ICML submission deadline. Due to limited space, we will add a discussion of these works in either the final draft or in the discussion period.

### References

[Wu et al 2023] Wu, Luhuan, et al. "Practical and asymptotically exact conditional sampling in diffusion models." (2023)

[Chopin et al., 2020] Chopin, Nicolas, et al. "An Introduction to Sequential Monte Carlo" (2020).

[Naesseth et al. 2019] Naesseth, Christian, et al. "Elements of Sequential Monte Carlo." (2019).

---

Rebuttal Comment 1.1: Comment: I want to clarify that I appreciate the contributions of the paper -- my concerns are about the writing seeming to imply being the first to propose applying FK to diffusion (for example, L17 of abstract, L37). To me, the main contributions of the work are in making a clear connection to modern text-to-image settings, proposing new reward functions, and providing strong experimental support — all of which I appreciate. I just feel that earlier works proposing FK for diffusion should be more clearly acknowledged — I would consider raising my score if the authors could propose some concrete changes to do so. UPDATE: (I had posted this as an Official Comment but realized that is not visible to the authors). I appreciate the revisions and have increased my score.

---

Reply to Comment 1.1.1: Comment: Thank you for engaging with our response.
We propose the following concrete revisions:

1. L17
   - *Original*: "In this work, we propose Feynman-Kac (FK) steering, a framework for inference-time steering diffusion models with reward functions."
   - *Revised*: "We apply Feynman-Kac (FK) interacting particle systems to the inference-time steering of diffusion models with arbitrary reward functions, which we refer to as _FK steering_."
2. L37
   - *Original*: "We introduce Feynman-Kac (FK) steering, a framework for inference-time steering of diffusion models."
   - *Revised*: "Feynman-Kac measures provide a flexible framework for conditional sampling. FK measures have been used with interacting particle systems (Trippe et al., 2022, Wu et al., 2023a, 2024, Zhao et al., 2024) and divide and conquer approaches (Janati et al., Zhao et al., 2024) to enhance conditional sampling with diffusion models. In this work, we show that FK-IPS methods can provide a general framework for steering diffusion-based generative models with arbitrary rewards, which we refer to as FK steering."
3. L158
   - *Original*: "FK steering builds on top of recent work such as TDS (Wu et al., 2023a) and others (Trippe et al., 2022; Cardoso et al., 2023; Dou & Song, 2024) that propose particle-based methods for sampling from unnormalized distributions. In appendix F.2, we show how TDS (Wu et al., 2023a) and SVDD (Li et al., 2024) are examples of FK-IPS (Moral, 2004). Our experiments demonstrate that expanding the choice of potentials, rewards, and samplers provides several improvements, such as higher-reward samples."
   - *Revised*: "FK steering builds on top of recent works that sample from Feynman-Kac path distributions for conditional sampling with diffusion models, either using particle-based sampling (Trippe et al., 2022, Wu et al., 2023a, Cardoso et al., 2023, Dou & Song, 2024, **Zhao et al., 2024**) or gradient-based sampling (**Chung et al., 2022, Janati et al., 2024**).
In appendix F.2, we show how TDS (Wu et al., 2023a) and SVDD (Li et al., 2024) are examples of FK-IPS (Moral, 2004). Our experiments demonstrate the effectiveness of these methods for new settings, and the value of expanding the choice of potentials, rewards, and samplers."

We are happy to iterate on these revisions to address any remaining concerns.
Summary: This paper proposes a general framework for inference-time scaling and steering of diffusion models using Feynman-Kac (FK) particle resampling. It claims to unify various existing steering methods by casting them into a single FK-IPS (Feynman-Kac Interacting Particle System) framework. The framework is validated empirically with relatively small numbers of particles. Claims And Evidence: The main claim—unifying inference-time steering methods into a single FK-based framework— might lack sufficient novelty. Specifically, the DIFFERENCE potential is essentially the common weighting strategy previously utilized by inference-time steering approaches such as SMC; the other two variants are also not too impressive. Even Figure 1 which shows the high-level principles does not appear to be significantly different as compared to current inference-time diffusion model guidance/alignment papers. Methods And Evaluation Criteria: Evaluation selection is appropriate. Theoretical Claims: NA Experimental Designs Or Analyses: My biggest concern is in the authors' selection of baselines. For example, in section 4.1, where text-to-image models are aligned with prompt alignment and aesthetic quality, for inference-time technique, this work compares against best-of-n (BoN). This work chooses DPO and DDPO to represent finetuning-based methods. However, these baselines are not representative of the current literature on this topic, especially in their convergence speeds. In terms of fine-tuning based approaches, direct back-propagation-based methods such as DRAFT [1] and ELEGANT [2] are very effective in aligning diffusion models. And I am certain that tasks like prompt alignment and aesthetic quality permit employing these methods because the reward functions are differentiable. On the other hand, BoN is also a pretty naive strategy. Why does this work not compare with more recent SMC-based methods? 
If the rationale is that these methods are all encapsulated by the 'general form' raised in this work, it may not be too convincing. [1] https://arxiv.org/pdf/2309.17400 [2] https://arxiv.org/pdf/2402.15194 Supplementary Material: NA Relation To Broader Scientific Literature: This work situates itself within recent efforts for inference-time steering of generative models, claiming to unify existing methods into a single theoretical framework Essential References Not Discussed: See suggested baseline selections above Other Strengths And Weaknesses: 1. I appreciate the writing flow of this work. It's easy to follow. Other Comments Or Suggestions: In page 5, the authors discuss variants of intermediate reward functions. However, the results do not appear to be new. Many papers have proposed using (1) DDIM or (2) trained value functions to serve as intermediate rewards. Questions For Authors: 1. Why did you choose simplistic inference-time methods (Best-of-N) and fine-tuning methods (DPO, DDPO) instead of more recent methods that are more representative? Please justify or include these additional comparisons. 2. Could you elaborate on the concrete novelty provided by the MAX and SUM potentials? How are these variants practically advantageous over the standard DIFFERENCE potential? 3. What exactly distinguishes your intermediate reward strategy from those already extensively explored in the literature using DDIM and trained value functions? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your feedback on our paper, and for noting the clarity of our writing. We discuss your concerns below.

_The main claim—unifying inference-time steering methods into a single FK-based framework—might lack sufficient novelty_

- We show that Feynman-Kac interacting particle systems, a well-known tool for rare-event sampling, are an effective and flexible framework for sampling from reward-tilted diffusion models, where the reward can be any arbitrary scalar-valued function and the diffusion model can be continuous- or discrete-state.
- In section F of the paper, we do show that prior works such as TDS and SVDD are specific instances of FK-IPS. However, we generalize beyond these choices and show the benefit of this generalization. We provide a detailed list of contributions below [[also see response to r3 below](https://openreview.net/forum?id=Jp988ELppQ&noteId=bBpOWtgFwj)].
- Notably, these new choices for inference-time steering yield significant improvements.
- **New choices yield higher-reward samples**. The product-of-potentials condition we identify in equation 3 unlocks novel choices (of potentials, samplers, and rewards).
  - *Potential choices*. For example, tables 3 & 5 show that new potential choices outperform traditional ones, such as the difference potential.
  - *Intermediate rewards*. Furthermore, we show that diffusion models enable many choices for estimating intermediate rewards. In tables 4 & 5, we show that these new choices improve performance.

_Why did you choose simplistic methods Best-of-N, DPO, DDPO?_

- **Choice of DPO, DDPO**. We select DPO and DDPO as benchmarks as they provide public checkpoints and code, as opposed to DRAFT or ELEGANT. DPO is used in prior work, such as [Esser et al. 2024]. We are happy to add DRAFT or ELEGANT as benchmarks in the revised draft.
- **FK steering can boost fine-tuning**.
In Table 1, DPO does improve performance over the base model, but combining it with FK steering is even more performant. This indicates that fine-tuning and inference scaling are two complementary axes, and **any improvements provided by DRAFT or ELEGANT could be further increased by using FK steering.**

- **Comparison to classifier guidance**. We have also added results comparing FK steering to gradient guidance. FK steering outperforms DDPO, DPO, best-of-n, and gradient guidance.

| Model | GenEval | IR | HPS | Time |
|-|-|-|-|-|
| SDv1.5 | 0.44 | 0.187 | 0.245 | 2.4s |
| SDv1.5 + IR guidance | 0.450 | 0.668 | 0.245 | 20s |
| SD v1.5 + FK (k = 4) | **0.54** | **0.898** | **0.263** | 8.1s |

- **Best-of-N is simple yet effective**. One exciting result in our paper is that best-of-n can even beat fine-tuning approaches (that require significant training compute) despite requiring no training.

_Could you elaborate on the novelty provided by the MAX and SUM potentials?_

- **Theoretical justification.** Based on equation 3, many choices of potential provide a consistent approximation of the target distribution. However, when steering with arbitrary rewards, the DIFFERENCE potential does not necessarily yield high-reward samples.
- **Bounded rewards.** Several choices of rewards are bounded. If a particle achieves maximum reward during sampling, the difference score will have to be negative and paradoxically will downweight this particle even though it has achieved a high reward. ImageReward is an example of such a bounded reward function.
- **Empirical benefits.** In tables 3 and 5, the MAX potential outperforms the DIFFERENCE and SUM potentials across model classes as well as different numbers of particles.

_What distinguishes your intermediate rewards?_

Prior works such as SVDD and TDS use the intermediate rewards $\log E_{p_\theta(x_0 \mid x_t)} \exp(r(x_0))$, either by approximating with the denoised expectation or by learning from model samples.
In this work, we show that many choices of intermediate rewards can be used to consistently approximate the target distribution.

1. Unlike prior works that use intermediate rewards defined via model expectations, we show that the noise process and real data can be used to learn the intermediate rewards via a regression objective.
2. Many-sample intermediate rewards: To the best of our knowledge, using samples from $p_\theta(x_0 \mid x_t)$ to define intermediate rewards $\log \sum_{i=1}^N \frac{1}{N} \exp(r(x_0^i))$ is a novel contribution of our work and was not proposed in prior works. The many-sample intermediate reward is a consistent estimator of $\log E_{p_\theta(x_0 \mid x_t)}\exp(r(x_0))$.

### References
[Esser et al. 2024] Esser, Patrick, et al. "Scaling rectified flow transformers for high-resolution image synthesis." *Forty-first International Conference on Machine Learning*. 2024.
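As a concrete illustration of point 2 above, here is a minimal numerical sketch of the many-sample intermediate reward (function name is ours, not the paper's; it assumes the rewards $r(x_0^i)$ for samples $x_0^i \sim p_\theta(x_0 \mid x_t)$ have already been computed), using the log-sum-exp trick for numerical stability:

```python
import math

def many_sample_intermediate_reward(rewards):
    """Estimate log E_{p(x0|xt)}[exp(r(x0))] from N sampled rewards.

    Computes log((1/N) * sum_i exp(rewards[i])) via the log-sum-exp
    trick so that large reward values do not overflow.
    """
    n = len(rewards)
    m = max(rewards)
    return m + math.log(sum(math.exp(r - m) for r in rewards)) - math.log(n)
```

With N = 1 this reduces to the single-sample (denoised) estimate $r(x_0^1)$; as N grows it converges to the exact log-expectation, consistent with the estimator described above.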
Summary: The paper presents Feynman-Kac (FK) steering, a general particle-based framework for inference-time steering of diffusion models to generate outputs aligned with user-defined reward functions without requiring additional training or fine-tuning. FK steering generates multiple parallel sample trajectories from diffusion models and iteratively resamples these particles based on their scores computed using the potential function, which measures the property of interest to the user. Empirically, the method demonstrates substantial improvements in prompt fidelity and sample quality for both text-to-image and text diffusion models, outperforming fine-tuned baselines.

Claims And Evidence: The paper claims that FK steering outperforms best-of-n sampling and gradient guidance, but there are some issues that need further clarification.
(1) Missing gradient guidance baseline in image experiments: While the paper includes gradient guidance comparisons in text diffusion tasks, it does not evaluate gradient guidance in image generation tasks, which use continuous-state diffusion models. Since gradient guidance is known to perform well in continuous diffusion settings, a direct comparison in image experiments would provide a fairer assessment of FK steering's advantages.
(2) FK steering involves additional computation per particle due to resampling steps, making a direct comparison to best-of-n with the same number of particles k somewhat unfair. Since FK steering does not significantly outperform best-of-n in Tables 1 and 2 (especially Table 1), this raises questions about its computational efficiency and whether its performance gains justify the added cost. Including more balanced comparisons that account for computational overhead would strengthen the evaluation.

Methods And Evaluation Criteria: Selecting the best sample from multiple trajectories can reduce the diversity of generated outputs.
Since diversity is an important factor in many generative tasks, incorporating it as an additional evaluation metric would provide a more comprehensive assessment of FK steering's effectiveness and potential trade-offs.

Theoretical Claims: The paper's theoretical claims, particularly the consistency of FK steering's particle-based approximation to the target distribution, are well-grounded in sequential Monte Carlo theory and FK interacting particle systems.

Experimental Designs Or Analyses: As mentioned before, the lack of gradient guidance baselines in image tasks, unfair compute comparison with best-of-n, missing diversity analysis, and limited resampling strategy evaluation weaken the rigor of the results. Addressing these would improve fairness and clarity.

Supplementary Material: Yes, I reviewed the supplementary material, focusing on the experimental details and theoretical proofs.

Relation To Broader Scientific Literature: The paper integrates ideas from diffusion model guidance, sequential Monte Carlo, rare-event sampling, and reinforcement learning-based fine-tuning, offering a unified inference-time steering framework. By demonstrating that FK steering can match or outperform fine-tuning approaches with lower computational cost, it contributes to the growing trend of efficient and controllable generative modeling. However, a direct comparison to recent adaptive guidance techniques and a more thorough analysis of computational efficiency trade-offs would further contextualize its impact within the broader literature.

Essential References Not Discussed: For gradient guidance, the paper should cite "Diffusion Posterior Sampling for General Noisy Inverse Problems, Hyungjin Chung, Jeongsol Kim, Michael T. McCann, Marc L. Klasky, and Jong Chul Ye", which originally proposed the concept of training-free gradient guidance.
Other Strengths And Weaknesses: The proposed framework is easy to implement, plug-and-play, and gradient-free, making it highly practical for real-world applications. Additionally, the paper is well-written and easy to follow, with clear explanations that effectively communicate the method and its contributions.

Other Comments Or Suggestions: NA

Questions For Authors: Please refer to the comments above. I am open to increasing the rating if all concerns are adequately addressed.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful comments, and for highlighting that FK steering is a practical, effective approach for efficient, controllable generation with diffusion models. We also appreciate your assessment of our paper's clarity and strong theoretical grounding. We address each of your concerns below. Let us know if there is any additional information we can include that would be helpful for your review.

_Missing gradient guidance baseline in image experiments._

1. **Requested gradient guidance results**.
- FK steering (with 4 particles, gradient-free) **outperforms gradient guidance** on all metrics and **is significantly faster** (8.1 vs 20 seconds for SDv1.5)!
- FK steering combined with ImageReward gradient guidance performs even better but requires a significant increase in sampling time.

### Gradient-guidance Results

| Model | GenEval | IR | HPS | Sampling Time |
|-|-|-|-|-|
| SDv1.5 | 0.44 | 0.187 | 0.245 | 2.4s |
| SDv1.5 + IR guidance | 0.450 | 0.668 | 0.245 | 20s |
| SD v1.5 + FK (k = 4) | **0.54** | **0.898** | **0.263** | 8.1s |
| SD v1.5 + IR guidance + FK (k = 4) | **0.56** | **1.290** | **0.268** | 55s |

Thank you for encouraging us to run these experiments. These results strengthen our paper and we will include them in our final draft.

2. **Additional advantages over gradient guidance**. Gradient-free steering has the following additional benefits:
- **Enables steering with non-differentiable rewards**. Gradient-free steering methods enable the use of non-differentiable rewards such as perplexity from an autoregressive model or a trigram model; see table 2 in the paper for examples. Additionally, gradient-free steering can make use of closed-source reward models hosted via APIs, or non-differentiable constraints.
- **Enables steering discrete models**. Gradient-free steering of masked diffusion models can enable attribute control (tables 2 and 4).
More recently, LLaDA [Nie et al 2025] showed that discrete-state text diffusion models are competitive with auto-regressive models, making the development of gradient-free steering even more pertinent.

_FK steering involves additional computation per particle due to resampling steps...this raises questions about its computational efficiency and whether its performance gains justify the added cost._

1. **Computational efficiency**. This is an excellent question. In the text-to-image generation experiments, using interval resampling (calling the reward model every 20 of 100 total steps) results in minimal compute overhead. We include timing results in the appendix (table 6) which show that FK steering results in a **minimal increase in sampling time compared to best-of-n**. For our most performant model, SDXL, the increase in sampling time is only 3%; see the table below. We also note that as model size increases, the gap between FK and best-of-n reduces further.

| Model | Params | BoN (*k = 4*) | FK (*k = 4*) |
|-|-|-|-|
| SDXL | 2.6B | 42.3s | 43.5s |

2. **Improved results justify computational overhead**. In our controllable text generation experiments (see table 4) with the masked diffusion language model, FK steering with **k=4** achieves attribute accuracy of **22.0%**, significantly outperforming the **best-of-8** attribute accuracy of **3.7%**. Best-of-8 takes 19.3 seconds versus 20.7 seconds for FK steering (k=4) on a single A100 GPU. We have added the timing results to the revised draft.

_Diversity is an important factor...incorporating it as an additional evaluation metric would provide a more comprehensive assessment of FK steering's effectiveness and potential trade-offs._

- **Diversity Results**. Currently, we include CLIP-diversity scores in table 7, provide 4- and 8-particle run samples, and highlight different components of the algorithm that can affect diversity. In the revised draft, we will include global diversity metrics as well.
_...a direct comparison to recent adaptive guidance techniques and a more thorough analysis of computational efficiency trade-offs would further contextualize its impact within the broader literature._

- **Adaptive guidance techniques**. Other forms of conditioning, beyond the classifier and classifier-free guidance approaches we evaluate, could be used for the proposal generator in FK steering. We are very interested in exploring building on top of other recent methods in future work.
- **Computational efficiency trade-offs**. In table 6, we include the sampling time of FK steering compared to best-of-n. We discuss additional timing results above.

_For gradient guidance, the paper should cite "Diffusion Posterior Sampling for General Noisy Inverse Problems"_

Thank you. We will update our related works section to include this work in our final draft.

### References
Nie et al. "Large Language Diffusion Models." arXiv preprint arXiv:2502.09992 (2025).
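To make the resampling step discussed in this thread concrete, here is a hedged sketch (our own naming, not the paper's implementation) of one FK-style particle update: each particle's log-domain potential score is converted to a normalized weight, and the same number of particles is redrawn with replacement in proportion to those weights.

```python
import math
import random

def resample_particles(particles, potentials, rng=random):
    """Redraw len(particles) particles with replacement, with probability
    proportional to exp(potential). Hypothetical sketch of one FK-style
    resampling step; `potentials` holds log-domain scores."""
    mx = max(potentials)
    weights = [math.exp(p - mx) for p in potentials]  # stabilized exponentiation
    total = sum(weights)
    probs = [w / total for w in weights]
    return [particles[_sample_index(probs, rng)] for _ in particles]

def _sample_index(probs, rng):
    """Draw one index from a categorical distribution via inverse CDF."""
    u, cum = rng.random(), 0.0
    for i, p in enumerate(probs):
        cum += p
        if u <= cum:
            return i
    return len(probs) - 1
```

In interval resampling, this step would only be invoked every fixed number of denoising steps (e.g. every 20 of 100), which is what keeps the reward-model overhead small relative to best-of-n.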
Optimization Proxies using Limited Labeled Data and Training Time -- A Semi-Supervised Bayesian Neural Network Approach
Accept (poster)
Summary: This paper proposes and evaluates a semi-supervised approach to training Bayesian Neural Network (BNN) optimization proxies for predicting solutions to constrained optimization problems. Specifically, the paper proposes augmenting training on labeled data (from solving optimization problem instances) with training on unlabeled data about constraint satisfaction in a "sandwich" fashion. The paper is concerned with settings in which both labeled data (for training and validation) and training time for the model are constrained. The authors argue that the use of BNNs allows both an increase in sample efficiency compared to (non-Bayesian) DNNs as well as principled predictive uncertainty quantification, for which they develop a new approach based on Bernstein concentration bounds. The paper evaluates the proposed approach on optimal electric power flow optimization benchmarks, demonstrating improved performance in the small-sample regime compared to a number of baseline methods.

### Update after rebuttal

Overall, given the authors' responses I feel slightly more positive about the paper, but not enough to raise my score to a full accept. For that I would still like to see a more detailed ablation study on the training setup, as well as a comparison with more compute-intensive baselines (even though they may not be feasible from an applied perspective).

Claims And Evidence: The claims appear well-supported overall. However, the empirical evaluation could be more comprehensive and include some other (e.g. non-power-flow) example applications as well as more detailed ablation studies of some of the hyperparameters (e.g. number of unlabeled examples, learning rates, etc.)

Methods And Evaluation Criteria: * The optimal power flow problems are fine benchmark problems, but given that the proposed method is a generic one I would have expected to see some problems from other applications to better understand how the approach performs in other domains.
* The evaluation criteria generally make sense to me - as someone with limited familiarity with the specific literature in this area. However, I am surprised to only see the performance compared on single instances without providing variances across the randomness in the results (e.g. w.r.t. the testing instances / data generation / splitting and w.r.t. the weight initialization and other randomness in the methods themselves). This leaves open the possibility that the provided results are somewhat cherry-picked -- I am not suggesting that they are, rather that it's not clear to me from the paper, and the paper could do better at proactively assuaging this concern.
* The evaluation excluded a number of baseline methods that would be interesting to have for comparison, such as self-supervised constrained optimization methods and Graph Neural Network-based large models. The authors rightfully argue that these methods may not be feasible in the setting targeted by the paper due to excessive computational requirements -- however, it would be very helpful from a scientific perspective to see the performance penalty incurred by using fewer samples with the proposed approach to understand the broader tradeoffs between the different methods.
* Related to the above, the paper also mentions Ensemble DNNs as an alternative approach, but does not discuss them in detail due to "very high computational requirements". It's not necessarily clear to me that this will always be an issue. For instance, in settings where label generation is extremely costly but DNN training time is not a major concern, ensemble DNNs could be a good alternative. Given that a claimed primary contribution is the uncertainty quantification, having the evaluation against ensemble DNNs as at least one baseline would be highly valuable.
Theoretical Claims: N/A -- there are no novel theoretical claims (the authors apply a "theoretical" version of Bernstein's inequality using the Mean Predictive Variance, but are basing this on a hypothesis that they only check empirically in a limited number of examples).

Experimental Designs Or Analyses: Not in detail; the experimental setup seems reasonable (modulo the comments on evaluation criteria above). One thing I'd like to see more of are ablations of some of the training settings. BNN learning can be rather finicky, and so it would be important to understand how sensitive the setup is to the number of training samples and unlabeled samples and hyperparameters such as learning rates etc.

Supplementary Material: Yes, I reviewed the full supplementary material.

Relation To Broader Scientific Literature:
* The main novel angle of the paper is the focus on and analysis of the small-sample regime in which labeled data and training time is limited. Previous works have often focused on settings where that data and time was less of a constraint, so the paper provides an interesting additional perspective.
* Both the use of BNNs as optimization proxies and the semi-supervised data augmentation using feasibility information appear novel (though I have very limited familiarity with the literature).

Essential References Not Discussed: I am not familiar enough with the specific literature to assess this properly.

Other Strengths And Weaknesses:
* The paper is generally well written. I appreciated that it does a decent job introducing the problem setting and some of the background material, allowing me as someone less intimately familiar with the literature to follow the presentation without too much trouble in most places.
* The focus on limited data and training time and comparing performance under these constraints is a nice way of looking at the practicality of the approach (though I would like to understand the gap to an ideal baseline, see comments above)

Other Comments Or Suggestions: N/A

Questions For Authors:
* You mention that "the ‘Max Ineq.’ growth could be easily suppressed by incorporating bound repair layers, as used in DNN models in ML4OPF". Why not do this? This would be a very useful ablation to understand the effect of a bound repair layer on the performance, especially in light of the fact that you observe that "Sandwich BNN SvP shows slightly higher scaling inequality gaps".
* It appears that Sandwich learning is a lot more helpful for case57 - why? Is this because constraints are more challenging to satisfy in this problem? It would be helpful if you could provide this kind of intuition more broadly in the discussion of your results.
* How to select the number of unlabeled examples and schedule the learning steps such that the model doesn't over-fit to the constraints? Is there a principled way to do this? Can you provide any ablations on this?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Methods and evaluation criteria**

We agree with the reviewer that BNN training can be sensitive to various random choices in the training. We have run a new batch of experiments for the 118-bus problem with increased dataset size. The variance in the results from different learning experiments can be seen here. The time budget used here is 900s. Results are slightly different from what is reported in the paper due to us using a different machine. Despite these changes, we see that the results are quite robust.

#### **Comparison of Max, Avg, and Min values for different $N$ for 15 Min Training on `case118`**

| N | Type | %Gap | Max Eq | Max Ineq |
|------|------|-------|--------|---------|
| 512 | Max | 1.597 | 0.115 | 0.019 |
| 512 | Avg | 1.531 | 0.104 | 0.013 |
| 512 | Min | 1.506 | 0.090 | 0.002 |
| 1024 | Max | 1.530 | 0.094 | 0.017 |
| 1024 | Avg | 1.510 | 0.089 | 0.014 |
| 1024 | Min | 1.473 | 0.084 | 0.009 |
| 2048 | Max | 1.473 | 0.083 | 0.010 |
| 2048 | Avg | 1.456 | 0.079 | 0.009 |
| 2048 | Min | 1.441 | 0.078 | 0.009 |

More experiment results can be found here: [Link](https://drive.google.com/file/d/1sm9eEiYutFk89ZqKwJNI4pf8HQueUJmy/view?usp=share_link)

**Comparison with ensemble DNN and GNN**

We agree that ensemble methods can potentially help in quantifying uncertainty. Our comparison, for the OPF benchmark, is currently with state-of-the-art scalable neural-network-based methods, given our limited training time and data availability. At present we are not aware of any existing work on ensemble methods for the case of OPF. If individual ML methods are trained sequentially without our framework, scalability might be an issue. However, developing ensemble methods for OPF and comparing them with our BNN model will be a useful avenue of future work.

**Questions**

**1.** We did not do this for our experiments as we see that the inequality violations are already low, especially in comparison to the equality violations.
Moreover, our mean inequality gap results are similar to those of DNNs with repair obtained by ML4OPF tools. Due to the limited time for the rebuttal, we did not set up those bound repair experiments with BNN. Also, note that the criterion for SvP selection is only minimization of the maximum equality gap. We also note that one can include inequality constraints in the SvP criterion to improve further. We can include it in our final draft.

**2.** The effectiveness of the sandwich method for case57 is likely due to the limited time budget of 600s that we have used for all the experiments in the paper. The sandwich BNN for case57 has ample time to converge, but larger cases are stopped early. Even under this constraint, we see that the Sandwich BNN still outperforms other methods even for larger cases. Further, case57 has fewer active constraints, and thus intuitively we can say that unsupervised data is able to enforce feasibility much better in sandwiched learning.

We conduct a robustness study on `case118` by running five trials with training times of 10 and 15 minutes, using 512, 1024, and 2048 supervised training samples, while keeping 2048 unsupervised samples constant across all models and trials. The results clearly demonstrate that increasing training time and the number of supervised samples consistently reduces \%Gap, Max Eq., and Max Ineq. Additionally, Mean Eq. remains an order of magnitude lower, and Mean Ineq. stays at the order of $10^{-5}$. For instance, the Sandwich BNN SvP model achieves a 1.50\% Gap, 0.094 Max Eq., and 0.014 Max Ineq. with 512 supervised samples and a 10-minute training time, averaged over five trials. In contrast, with 2048 supervised samples and 15 minutes of training, the same model improves to a 1.46\% Gap, 0.080 Max Eq., and 0.011 Max Ineq.
This implies that increasing training time by five minutes while quadrupling the supervised samples results in a 2.66\% reduction in optimality error (\%Gap), a 17.5\% reduction in equality constraint feasibility error (Max Eq.), and more than a 27\% reduction in inequality constraint feasibility error (Max Ineq.).

**3.** We find that early stopping the unsupervised round is necessary to prevent it from overfitting. The intuition we follow is that we do not want to fit either supervised or unsupervised models too well, to avoid biasing our surrogate towards either feasibility or optimality. Further, we observe that too many unlabeled examples will require too much compute to extract any meaningful information; thus we kept them constant at 2048. We fix each round of unsupervised training to 150 sec. and supervised training time to 100 sec. for 512 samples. While we do not fine-tune this choice in our experiments, cross-validation on different relative training sizes might be a way to understand the dependence on hyperparameters, which include the number of unlabeled examples and the division of time between supervised and unsupervised training.

---

Rebuttal Comment 1.1: Comment: Thanks for the clarifications and additional results. The ablations in terms of the number of samples and training time are helpful. When talking about robustness I was also concerned with things like learning rates etc. I don't think these ablations are crucial, but it would be helpful to understand how sensitive the results are to changes to the training setup.

> The optimal power flow problems are fine benchmark problems, but given that the proposed method is a generic one I would have expected to see some problems from other applications to better understand how the approach performs in other domains.

This concern of mine still stands - The method is presented as a generic method, but evaluation is only done on OPF problems.
Having results on at least one other problem domain would make me more confident in the general applicability of the results.

> The evaluation excluded a number of baseline methods that would be interesting to have for comparison, such as self-supervised constrained optimization methods and Graph Neural Network-based large models. The authors rightfully argue that these methods may not be feasible in the setting targeted by the paper due to excessive computational requirements -- however, it would be very helpful from a scientific perspective to see the performance penalty incurred by using fewer samples with the proposed approach to understand the broader tradeoffs between the different methods.

These kinds of comparisons would still be helpful, at least to have a ballpark understanding of how well the proposed approach works compared to other (possibly infeasible) benchmarks.

---

Reply to Comment 1.1.1: Comment: The referee is right in pointing out that our method is demonstrated for only OPF, although the method itself is presented for a general problem. To address this, we have modified our code to deal with general constrained optimization problems and have run benchmarks on the following optimization problem, also used by Donti et al. [1] as a non-convex benchmark:

$\min_{y \in \mathbb{R}^{n}} \quad \frac12 y^{T} Q y + p^{T}\sin(y), \quad \text{s.t.} \quad Ay = x, \quad Gy \leq h.$

We study the performance of the sandwich BNN method and the standard supervised BNN with only 512 supervised samples on this problem. For the sandwich method, we use 8x unsupervised samples in the unsupervised layers. Preliminary results are given below for two datasets generated from the above problem, with 20 and 70 variables respectively. The data generation procedure is as given in [1].
| Model | (nV, nEq, nInEq) | Gap% | Max Eq violation | Max InEq violation | $T_{max}$ |
|------------------|--------------|-------|------------------|--------------------|----------|
| BNN (supervised) | (20,10,20) | 2.32 | 0.5092 | 0.262 | 400 |
| BNN (sandwich) | (20,10,20) | 0.71 | 0.140 | 0.001 | 400 |
| BNN (supervised) | (70,20,50) | 6.48 | 1.711 | 0.253 | 800 |
| BNN (sandwich) | (70,20,50) | 4.84 | 1.862 | 0.213 | 800 |

nV, nEq and nInEq are respectively the number of variables, equality constraints and inequality constraints. Results above are evaluated on 100 testing instances. These show the advantages of using the sandwiched approach for training a BNN model in the low-data/time-constrained regime for this problem. A more comprehensive version of this experiment can be added to the camera-ready version of the manuscript.

[1] Donti, Priya L., David Rolnick, and J. Zico Kolter. "DC3: A learning method for optimization with hard constraints." ICLR (2021).
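For readers who want to reproduce the reported feasibility metrics on this benchmark, here is a minimal sketch (helper names are ours, plain-Python matrices) of how the objective $\frac12 y^T Q y + p^T \sin(y)$ and the maximum equality/inequality constraint violations can be evaluated for a predicted $y$:

```python
import math

def matvec(M, v):
    """Matrix-vector product for nested-list matrices."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def objective(Q, p, y):
    """0.5 * y^T Q y + p^T sin(y), the non-convex benchmark objective."""
    Qy = matvec(Q, y)
    return 0.5 * sum(yi * qi for yi, qi in zip(y, Qy)) + sum(
        pi * math.sin(yi) for pi, yi in zip(p, y))

def max_eq_violation(A, x, y):
    """Max Eq violation: largest |(Ay - x)_i|."""
    return max(abs(r - xi) for r, xi in zip(matvec(A, y), x))

def max_ineq_violation(G, h, y):
    """Max InEq violation: largest positive part of (Gy - h)_i."""
    return max(max(r - hi, 0.0) for r, hi in zip(matvec(G, y), h))
```

Averaging these quantities over the 100 testing instances would then give the table entries above (up to the exact normalization used in the paper, which we do not know).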
Summary: The paper introduces a Bayesian Neural Network (BNN) framework as a surrogate model for constrained optimization, leveraging its intrinsic uncertainty quantification for robust predictions. It employs a semi-supervised training strategy that alternates between supervised learning on limited labeled data and unsupervised learning that enforces constraint feasibility via data augmentation. Experiments show that this approach significantly reduces constraint violations and prediction errors, outperforming traditional deep neural network methods in low-data, low-compute settings. Additionally, by applying Bernstein's inequality, the model derives tight probabilistic confidence bounds, offering practical error estimates without extensive validation data.

Claims And Evidence: The experimental results in the submission strongly support its core claims within the context of AC optimal power flow problems, demonstrating improvements in constraint satisfaction and reduced prediction errors under low-data and low-compute conditions. However, the generality of the approach beyond ACOPF is less clearly supported, as most evidence is specific to this application domain. In addition, the claim that a simple multiplier (2×MPV) reliably bounds the total variance in error is based on a hypothesis validated only in these experiments and may require further theoretical backing and broader empirical evaluation. Overall, while the claims are convincing for the studied case, their applicability to other constrained optimization problems remains somewhat uncertain.

Methods And Evaluation Criteria: Yes, the methods and evaluation criteria are well-aligned with the application. The paper's use of semi-supervised Bayesian neural networks addresses the challenge of limited labeled data and short training times, which are key constraints in real-world power grid optimization.
Evaluation metrics such as the optimality gap, equality/inequality constraint violations, and probabilistic confidence bounds directly assess performance in areas critical to ACOPF problems. Moreover, using established benchmark datasets ensures that the evaluation is robust and comparable to baselines.

Theoretical Claims: I reviewed the derivations related to the probabilistic confidence bounds, which rely on established concentration inequalities like Hoeffding's and Bernstein's inequalities. The proofs themselves are adaptations of well-known results rather than new theoretical contributions, and they primarily serve to justify the use of the 2×MPV heuristic as a bound on the total variance in error. I did not encounter any significant issues in the correctness of these derivations, but it's important to note that the paper does not present any substantial new theory. Overall, the theoretical claims are based on solid, previously established results, with the novel contribution being in their application within the BNN-based optimization proxy framework.

Experimental Designs Or Analyses: I examined the experimental designs used for evaluating the ACOPF problems across different system sizes, where the authors compare metrics such as optimality gap, equality/inequality violations, and error bounds among various models. The overall design is sound for the application at hand, with a reasonable selection of benchmark datasets and a clear focus on both supervised and semi-supervised BNN approaches. However, one issue is that the analyses do not include any confidence intervals or error bars for the reported metrics, which makes it harder to assess the statistical significance and variability of the improvements. This omission limits our ability to fully evaluate the robustness of the experimental findings.

Supplementary Material: Yes, I reviewed the supplementary material.
I examined the additional experiments, including the extended results for larger test cases, as well as the supplementary figures and tables that further illustrate the performance trends. I also reviewed the supplementary sections that detail the existing theory behind the concentration bounds, relying on established results like Hoeffding's and Bernstein's inequalities. These sections reinforce the empirical findings and support the theoretical framework without introducing any significant new theory.

Relation To Broader Scientific Literature: The paper's key contributions are well-rooted in existing literature on constrained optimization and semi-supervised learning. Notably, the authors propose a sandwich training framework that alternates between a supervised phase on limited labeled data and an unsupervised phase that enforces constraint feasibility via data augmentation. Although demonstrated with Bayesian Neural Networks (BNNs) to leverage uncertainty quantification and tight error bounds via Bernstein's inequality, the sandwich training framework itself is not limited to BNNs; it has broader potential for application across various neural architectures. This approach builds on prior methods—such as penalty methods and projection techniques for constrained optimization, as well as semi-supervised learning strategies like pseudo-labeling—and extends these ideas to effectively address real-world problems with scarce labeled data and strict operational constraints.

Essential References Not Discussed: Yes, there are related works that could further contextualize the paper's key contributions, particularly in the areas of surrogate modeling with limited data and constraint handling. For example:

- ***Prior Fitted Neural Network as Surrogate:*** Müller, Samuel, Matthias Feurer, Noah Hollmann, and Frank Hutter's "PFNs4BO: In-context learning for Bayesian optimization" (ICML 2023) addresses the challenge of limited data by leveraging a prior-fitting approach.
This work provides an alternative perspective on building surrogate models that effectively operate in data-scarce regimes, which is closely related to the paper’s focus on constrained optimization proxies. - ***Boundary Exploration for Bayesian Optimization with Unknown Physical Constraints:*** Tian, Yunsheng, Ane Zuniga, Xinwei Zhang, Johannes P. Dürholt, Payel Das, Jie Chen, Wojciech Matusik, and Mina Konakovic Lukovic’s work “Boundary Exploration for Bayesian Optimization With Unknown Physical Constraints” (ICML 2024) employs ensemble methods to explore constraint boundaries. This study offers insights into handling unknown physical constraints through an ensemble approach, complementing the paper’s sandwich training framework that enforces feasibility via semi-supervised learning. Including and discussing these works would enhance the understanding of how the paper’s contributions fit within the broader landscape of surrogate-based optimization and constrained learning methods. Other Strengths And Weaknesses: Strengths: - The paper introduces a novel sandwich training framework that alternates between supervised and unsupervised phases to enforce constraint feasibility. Importantly, this framework is not limited to Bayesian Neural Networks (BNNs), suggesting broader applicability to various neural architectures in surrogate modeling and optimization tasks. Weaknesses: - The focus on surrogate modeling involves a largely passive approach to identifying feasibility, actively optimizing only within a pre-identified feasible region rather than integrating feasibility and optimality exploration simultaneously. This approach is reminiscent of recent work such as CONFIG by Xu et al. (Xu, Wenjie, Yuning Jiang, Bratislav Svetozarevic, and Colin Jones. “Constrained efficient global optimization of expensive black-box functions.” In International Conference on Machine Learning, pp. 38485-38498. PMLR, 2023), which more actively integrates constraint exploration. 
Additionally, the paper does not provide any regret guarantees, a limitation also noted in similar treatments like CONFIG. Other Comments Or Suggestions: Merged with the section below. Questions For Authors: 1. While the sandwich training framework is demonstrated with BNNs, have you considered or experimented with applying it to other neural architectures, and what challenges do you anticipate in such extensions? 2. The experiments currently lack confidence intervals or statistical significance tests. Could you provide additional details on the variability of your results and any measures taken to assess statistical robustness? 3. Given that similar approaches like CONFIG (Xu et al., 2023) offer integrated exploration and regret guarantees, do you see a pathway for incorporating regret guarantees into your framework, and how might that impact performance? Code Of Conduct: Affirmed. Overall Recommendation: 3
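For reference, the concentration inequalities mentioned in this review, in their standard textbook forms (the paper's exact constants and application may differ):

```latex
% Hoeffding: independent $X_i \in [a_i, b_i]$, $S_n = \sum_{i=1}^n X_i$
\Pr\big(\lvert S_n - \mathbb{E}[S_n]\rvert \ge t\big)
  \le 2\exp\!\left(\frac{-2t^2}{\sum_{i=1}^n (b_i - a_i)^2}\right)

% Bernstein: independent, mean-zero $X_i$ with $\lvert X_i\rvert \le M$ a.s.
\Pr\big(\lvert \textstyle\sum_i X_i\rvert \ge t\big)
  \le 2\exp\!\left(\frac{-t^2/2}{\sum_i \mathbb{E}[X_i^2] + Mt/3}\right)
```

Bernstein's bound tightens Hoeffding's when the summand variances are small relative to the range, which is why variance-based quantities such as the 2×MPV heuristic pair naturally with it.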
Rebuttal 1: Rebuttal: **Claims And Evidence: The 2×MPV Heuristic and Its Broader Validation**

We appreciate this insightful comment. We believe that the 2×MPV heuristic is sufficient to capture the total variance in the error, as substantiated by the studies shown in Fig 4. This heuristic will require similar empirical validation before it can be used for other datasets. While the law of total variance argument in Eq.(5) makes the first step in understanding the theory behind this, a full theoretical understanding of why one term dominates at selected training data sizes will be the object of future work.

**Experimental Designs Or Analyses: Lack of Confidence Intervals or Statistical Significance Tests**

We acknowledge the importance of assessing the robustness of our experimental results. We have run a new batch of experiments for the 118-bus problem with increased dataset size. The variance in the results from different learning experiments can be seen here. The time budget used here is 900s or 15 mins. Results (including the base case) are slightly different from what is reported in the paper due to the use of a different machine. Despite these changes, we see that the results are quite robust.

#### **Comparison of Max, Avg, and Min values for different $N$ for 15 Min Training on `case118`**

| N | Type | %Gap | Max Eq | Max Ineq |
|------|------|----------|---------|----------|
| 512 | Max | 1.59 | 0.115 | 0.018 |
| 512 | Avg | 1.53 | 0.104 | 0.012 |
| 512 | Min | 1.50 | 0.089 | 0.002 |
| 1024 | Max | 1.53 | 0.093 | 0.017 |
| 1024 | Avg | 1.50 | 0.089 | 0.014 |
| 1024 | Min | 1.47 | 0.084 | 0.009 |
| 2048 | Max | 1.47 | 0.082 | 0.010 |
| 2048 | Avg | 1.45 | 0.079 | 0.009 |
| 2048 | Min | 1.44 | 0.078 | 0.009 |

Updated results with error bars can be found here: [Link](https://drive.google.com/file/d/1sm9eEiYutFk89ZqKwJNI4pf8HQueUJmy/view?usp=share_link)

**Essential References Not Discussed:** We appreciate the reviewers' suggested works.
While related, we note that our focus is not directly on "Bayesian optimization" but on building a "Bayesian surrogate" for the optimization problem. In particular, we do not focus on selecting additional training data via acquisition functions/active learning. However, techniques mentioned in (paper 1) such as PFNs can potentially help reduce the training time of Bayesian inference used within our framework (as shown in *Transformers Can Do Bayesian Inference*, ICLR 2022). However, PFNs have so far been applied only to relatively small BNNs due to the high computational cost of the attention mechanism. We are actively exploring ways to scale PFNs, such as using linear attention or alternative architectures like CNNs. Unlike boundary exploration for identifying unknown constraints (paper 2), the constraints are explicitly defined in our work. It will be of interest to integrate our approach with boundary exploration, potentially for feasibility verification or active learning. We will include a discussion of these works to better position our contributions.

**Weaknesses:** We agree that actively exploring feasibility and optimality is an important direction for future research. Our problem is not black-box optimization but rather surrogate modeling. It is challenging to model optimality in a BNN with unsupervised data, as the target distribution of optimality is not known a priori without supervised data. Only the feasibility target distribution can be determined, which corresponds to a delta distribution at zero. Therefore, the simultaneous exploration of both optimality and feasibility, including the right mix of training data with the sandwich approach, will be our future work.

**Questions**

1. As shown in the original draft, we did sandwich learning on a DNN and showed that under low supervised data, BNN outperforms DNN considerably. Also, the comparative results with other DNN methods indicate that sandwiching remains effective.

2.
As shown in the (min-max) table above, our results are quite robust to changes in the training set and time budget. We report the max and min values of all the performance metrics, along with additional results, in an updated pdf file here: [Link](https://drive.google.com/file/d/1sm9eEiYutFk89ZqKwJNI4pf8HQueUJmy/view?usp=share_link)

3. The current work is not focused on active learning or intelligent sampling of training data. Incorporating regret guarantees is an interesting direction for future work. One possible direction we see is to use a sandwich model with very low supervised data, which can provide a computationally cheap baseline, then use that as a surrogate in an active search of training samples while developing regret guarantees.
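As a concrete, heavily simplified illustration of the alternating "sandwich" scheme discussed in this thread (short supervised and unsupervised feasibility phases with a fixed step budget rather than training each phase to convergence), here is a hedged toy sketch on a one-parameter model. The two loss gradients and step counts are invented for illustration and are not the paper's actual objectives:

```python
def sandwich_train(w, lr=0.1, cycles=50, steps_per_phase=1):
    """Toy sketch of 'sandwich' training: alternate a supervised phase
    (fit labeled data) and an unsupervised phase (drive a feasibility
    gap toward 0), using a small step budget per phase instead of
    training either phase to convergence."""
    supervised_loss_grad = lambda w: 2.0 * (w - 1.0)  # toy labeled objective, optimum at 1.0
    feasibility_grad = lambda w: 2.0 * (w - 0.8)      # toy constraint residual, feasible at 0.8
    for _ in range(cycles):
        for _ in range(steps_per_phase):              # supervised phase
            w -= lr * supervised_loss_grad(w)
        for _ in range(steps_per_phase):              # unsupervised (feasibility) phase
            w -= lr * feasibility_grad(w)
    return w

w_final = sandwich_train(0.0)
```

The final parameter settles between the labeled optimum (1.0) and the feasibility target (0.8), mirroring the rebuttal's remark that spending too long in the unsupervised phase improves feasibility metrics at the expense of the %Gap.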
Summary: The paper proposes a new Bayesian Neural Network (BNN) for solving non-linear constrained optimization problems that can be computationally expensive. The Bayesian Neural Network serves as a proxy for solving the original problem and is computationally more efficient. The downside of similar existing approaches is that they can perform poorly if labeled data and training times are limited. To address this deficiency, the paper's BNN leverages unsupervised learning that enforces constraint feasibility. Specifically, the model alternates between phases of supervised and unsupervised learning. Finally, it samples weights for the neural network from the posterior distribution to generate a posterior prediction matrix and selects the weight that best satisfies the equality constraints. The paper then demonstrates the effectiveness of their approach compared to deep neural network alternatives against standard benchmarks. Claims And Evidence: The work is mainly empirical, and the claims are supported by the numerics and seem sound. Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense for the problem and application. The paper utilizes unsupervised learning to address limited data and a Bayesian approach to search for solutions that are more feasible. While I am less familiar with the evaluation criteria, it seems to be a standard benchmark for this type of work. Theoretical Claims: The paper makes few theoretical claims, mainly those related to Bernstein and Hoeffding bounds which seem correct. Experimental Designs Or Analyses: I primarily evaluated the experimental design of the paper which seems to use standard benchmarks from the open-source OPFDataset from Torch Geometric. The results seem to make sense and seem promising. The only issue I see is the robustness of the results, as they only test the approach on a single sample size and training-time setting.
It would be interesting to see how robust their results are as you vary sample size and training time, especially since the paper claims their approaches are computationally efficient. Supplementary Material: I briefly looked at the code in the supplementary material, but did not test the code myself. Relation To Broader Scientific Literature: The key contributions of the paper build on the literature of end-to-end constrained optimization learning which is well outlined in the survey paper [1]. Specifically, the paper focuses on machine learning predicting solutions to constrained optimization problems. Within the space, they focus on the key challenges such as limited data and expensive compute during training. ___ [1] Kotary, J., Fioretto, F., van Hentenryck, P., and Wilder, B. End-to-end constrained optimization learning: A survey. In 30th International Joint Conference on Artificial Intelligence, IJCAI 2021, pp. 4475–4482. International Joint Conferences on Artificial Intelligence, 2021. Essential References Not Discussed: I am less familiar with this area and thus do not have any additional reference suggestions. Other Strengths And Weaknesses: Strengths 1) The proposed approach produces compelling numerical results in low-data settings, which for the most part make intuitive sense or are discussed by the authors. 2) The choice to combine Bayesian neural networks and unsupervised learning makes intuitive sense as both are suitable for low-data settings. Being able to sample multiple predictions is also an interesting feature as it allows users to cheaply tune the level of feasibility of the solutions which may be useful in practice. 3) The paper generally feels well written and well organized. 
Weaknesses
1) The paper only focuses on low-data and low-computation settings, so for readers less familiar with the literature it's hard to contextualize how the method performs compared to more computationally expensive approaches or approaches that do better with more data. They also do not scale the amount of data, so it's hard to understand if their approach also performs well when the amount of data increases or how the improvement scales as data increases.
2) The improved performance of the approach is not uniform over all the experiments, specifically the ones found in the appendix. The paper makes some claims on why (mainly related to training time), so it would be useful to numerically verify the claims.
3) There are some minor clarity issues related to notation. For example, it was hard to understand how exactly $\mathcal{D}_f$ was constructed based on the notation, and the notation $p_{W}^{m}$ doesn't seem to be explicitly defined in the main body.
Other Comments Or Suggestions:
1. Typo on pg 2: "10 minutes of training tim on a single CPU core"
2. In Eq (3) you give the constraint $g$ an index $I$. It may be more clear if you included it in the formulation of Eq (1).
3. Missing a word: "Sandwich DNN in these tables a DNN with the same network architecture as the BNN, trained under the same time constraints."
Questions For Authors:
1) I don’t understand how $\mathcal{D}_{f}$ is constructed since the notation $\mathcal{D}_f = \left( (\mathbf{x}_j, \mathcal{F}(\cdot, x)) \right)^{M}_{j=1}$ is confusing. Are you trying to say you just sample some arbitrary feasible solution? Why is that computationally inexpensive? Can more detail be provided?
2) What is $p_{W}^{m}$? It should maybe be formally defined.
3) When you alternate between unsupervised and supervised learning, is there benefit in learning until parameters converge?
4) To clarify, is the output of the model $f_{W^*}(\mathbf{x}^t)$?
5) Does performance improve with more unsupervised data?
6) How does performance scale as you increase data?
7) Are there any hyperparameters that need to be tuned for the BNN? How does the tuning affect performance?
8) How do you choose the variance parameters for the prior and likelihood? How do these parameters affect the performance of your model?
Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Weaknesses**

**1. Scalability**

We agree that exploring scalability is important. To address this concern, we have added additional experimental results showing that while our method excels in low-data regimes, it also remains competitive as the amount of available data increases. These experiments are performed on the case118 system due to the limited rebuttal window. But similar experiments can be performed and added to the paper for the camera-ready version.

**Avg. error values on `case118` with different numbers of supervised samples for Sandwich BNN SvP**

| N | Train time | %Gap | Max Eq | Max Ineq |
|-|-|-|-|-|
| 512 | 15 min | 1.5305 | 0.1041 | 0.01285 |
| 1024 | 15 min | 1.5097 | 0.0894 | 0.01442 |
| 2048 | 15 min | 1.4691 | 0.0800 | 0.01170 |

For instance, the Sandwich BNN SvP model achieves a 1.50\% Gap, 0.094 Max Eq., and 0.014 Max Ineq. with 512 supervised samples and a *10-min* training time (avg. of 5 trials). Meanwhile, with 2048 supervised samples & *15-min* of training, it improves to a 1.46\% Gap, 0.080 Max Eq., and 0.011 Max Ineq. Detailed results can be found at: [Link](https://shorturl.at/nFTE9)

**2. Non-Uniform Performance**

We acknowledge the non-uniform performance in certain experiments. We find that this is due to variations in convergence rates that arise under the tight training-time constraint (10 mins) that we have imposed on the system. For the 118-bus system trained on 1024 and 2048 samples, increasing the training time from 10 to 15 minutes improves performance considerably. See the table above and [Link](https://shorturl.at/nFTE9)

**3. Notation & Definitions**

We will revise the notation. Here $ \mathcal{D}_{f} $ is constructed from the unlabeled dataset $\mathcal{D}^u$. As a valid solution must fully satisfy all constraints, it has a true feasibility gap $\mathcal{F}$ of zero (eq. 2 in paper).
Thus, we transform $\mathcal{D}^u$ into a labeled feasibility dataset $\mathcal{D}^f$ where each input $\mathbf{x}_j$ has a corresponding feasibility label of zero, i.e., $ \mathcal{D}^f = ( (\mathbf{x}_j, \mathcal{F}(\cdot, \mathbf{x}_j) = 0))$. As generating input samples is computationally cheap (just random variables within bounds), creating this feasibility dataset $\mathcal{D}^f$ is cheap. Note that during training, the weights $w$ & the corresponding output $f_w(x_j)$ are tuned to bring the computed feasibility gap $F(f_w(x_j),x_j)$ close to its true value of $0$. Also, $ p^m_W $ is the posterior distribution of the BNN weights after $m$ training cycles, which include both supervised & unsupervised learning steps. This serves as the final posterior distribution used for making predictions.

**Questions**

1&2. See the response under Weakness 3.

3. Our experiments suggest that early stopping is essential while alternating between supervised & unsupervised stages. In the limited-data setting, waiting for each training phase to converge before switching to the next phase can be detrimental. For instance, spending a lot of time in the unsupervised layer can improve feasibility metrics but can adversely affect the %Gap (cost).

4. Yes. $f_W(\cdot)$ represents the function that the NN computes when the weights are set to $W$, and $f_W(x^t)$ is the output of this function for input $x^t$.

5. Our method has an imposed time constraint and also early stopping to prevent overfitting. Under these conditions, we do not find that increasing unsupervised samples has a significant effect on the training outcomes. Learning with unsupervised data is expensive, hence significantly increasing unsupervised data under short training-time constraints can only enhance the performance so much. This aligns with other self-supervised DNN methods that require significant compute (Park & Hentenryck (AAAI 2023)).
We performed a preliminary scaling study on case118 by increasing the unsupervised samples to $2^{12}$ and $2^{13}$ while keeping the number of supervised samples at $512$. We only see marginal improvements with $T_{max} = 600s$.

### **Sandwich BNN SvP**

| M (UnSup) | %Gap | Max Eq | Max Ineq |
|-|-|-|-|
| $2^{12}$ | 1.5471 | 0.077811 | 0.009252 |
| $2^{13}$ | 1.54431 | 0.07689 | 0.008866 |

We believe that these results can be improved by changing the training constraints to better exploit more unsupervised data. We leave that study for future work.

6. See the reply to Weakness 1.

7. We use a Gaussian prior with 0 mean & 0.01 variance. We tried 1.0, 0.1, and 0.01 as the prior variance and found 0.01 to be most suitable, although the quality of the solution was not drastically different (all results were of the same order of magnitude). We choose a very low variance parameter for the likelihood ($10^{-6}$) because we know that the ground-truth solution of the feasibility layer is a delta distribution at zero (as discussed for $\mathcal{D}^f$). So, no tuning was performed on the likelihood parameters. Hyperparameter details are provided with the supplementary material in config.json format and in code.

8. See the response to point 7.
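The prediction-time selection step discussed in this thread (sample several weight vectors from the posterior $p^m_W$, then keep the sample whose output best satisfies the equality constraints) can be sketched roughly as follows. `predict` and `feasibility_gap` are hypothetical stand-ins for illustration, not the paper's API:

```python
def select_via_posterior(weight_samples, predict, feasibility_gap, x):
    """Evaluate each posterior weight sample on input x and return the
    sample whose prediction has the smallest feasibility gap (SvP-style)."""
    best_w, best_gap = None, float("inf")
    for w in weight_samples:
        gap = feasibility_gap(predict(w, x), x)
        if gap < best_gap:
            best_w, best_gap = w, gap
    return best_w, best_gap

# Toy check: "weights" are scalars, the prediction is the identity, and the
# feasibility gap is the distance to a single equality constraint y = 0.5.
samples = [0.1, 0.45, 0.9]
w_star, gap = select_via_posterior(
    samples,
    predict=lambda w, x: w,
    feasibility_gap=lambda y, x: abs(y - 0.5),
    x=None,
)
```

The linear scan over posterior samples is cheap relative to solving the original ACOPF problem, which is what makes this selection step practical at inference time.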
Sort Before You Prune: Improved Worst-Case Guarantees of the DiskANN Family of Graphs
Accept (poster)
Summary: This paper aims to strengthen the theoretical analysis of graph-based ANN algorithms. In DiskANN indexing, candidate vertices are considered in ascending order of their distance to the out-vertex $v$, a detail not leveraged in prior theoretical analysis works. By combining this sorted-distance implementation detail with the observation that most graph ANN indices are built over the Euclidean metric, rather than an arbitrary one, the work improves the distance approximation factor that such a graph index is guaranteed to satisfy. The work then analyzes beam search rather than greedy top-1 search and shows an analogous result. Claims And Evidence: The claims are well-supported. Methods And Evaluation Criteria: The experiments make sense. Theoretical Claims: The proofs for the two main results both appear correct to me. Experimental Designs Or Analyses: The experiments make sense. Supplementary Material: Yes, reviewed proofs. Relation To Broader Scientific Literature: This work improves the theoretical analysis of graph-based ANN algorithms. It is very much related to the Indyk & Xu paper it references extensively, and also focuses on a variant of the DiskANN indexing procedure. Essential References Not Discussed: The essential references, to the best of my knowledge, are discussed. Other Strengths And Weaknesses: This paper is very well written and easy to follow, and its extension of upper bound analysis to beam search is significant, because beam search is always used in practice in graph ANN algorithms. One weakness is that the work gives a distance approximation bound, which is difficult for typical ANN users to interpret. A probabilistic analysis of finding the exact top-1 neighbor would be more relevant, although difficult. The degree and step bounds are also still quite weak and often lead to trivial (brute-force level) upper bounds. Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
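Since the extension to beam search is the point this review highlights, a minimal sketch of the greedy beam-search loop used by graph ANN indices may help. This is a simplified generic version, not DiskANN's exact implementation:

```python
def beam_search(graph, dist, start, query, L):
    """Greedy beam search on a directed graph: repeatedly expand the
    closest unexpanded node in the beam, add its out-neighbors, and
    truncate the beam to the L closest visited nodes."""
    beam = [(dist(start, query), start)]
    expanded = set()
    while True:
        cand = next(((d, v) for d, v in beam if v not in expanded), None)
        if cand is None:              # every beam node expanded: converged
            return beam
        v = cand[1]
        expanded.add(v)
        in_beam = {u for _, u in beam}
        for u in graph[v]:            # add unseen out-neighbors to the beam
            if u not in in_beam:
                beam.append((dist(u, query), u))
                in_beam.add(u)
        beam.sort()
        beam = beam[:L]               # keep only the L closest candidates

# Toy 1-D example: points 0..4 on a line, chained bidirectionally.
pts = [0.0, 1.0, 2.0, 3.0, 4.0]
graph = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
result = beam_search(graph, lambda v, q: abs(pts[v] - q), start=0, query=3.9, L=2)
```

With $L = 1$ this reduces to the greedy top-1 search analyzed in prior work; the paper's contribution is bounding the quality of all $k$ returned candidates when $L > 1$.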
Rebuttal 1: Rebuttal: We thank the reviewer for the kind words and suggestions. As the reviewer noted, extending the existing analysis to the practically motivated beam search regime was a key motivation for this work. Additionally, we adopted a distance-based approximation scheme, leveraging our novel factor-revealing SDP to establish improved approximation guarantees for the Euclidean metric. While we agree that a probabilistic analysis of graph-based data structures would be valuable, it is important to note that DiskANN is deterministic and always returns the same set of nearest neighbors for a given query. Therefore, conducting a probabilistic analysis of exact top-1 nearest neighbor retrieval would require a new theoretical framework with a well-motivated assumption on the query distribution—an interesting yet challenging direction for future research.
Summary: This paper introduces a novel strategy for constructing graph structures to enhance Approximate Nearest Neighbor Search (ANNS) on high-dimensional vectors. This approach ensures a superior approximation ratio for L2 distance metrics compared to existing methods.

## **Update After Rebuttal**

The rebuttal has carefully addressed these comments. Since I have no more concerns now, I have raised my score. Claims And Evidence: Regarding the theoretical results, the claims are generally okay for me. However, the evaluations have certain issues (please refer to the following comments). Methods And Evaluation Criteria: The proposed sorted α-reachable graphs can achieve faster search performance than the existing method DiskANN. However, in the experimental studies, this paper only conducts evaluations on 3 relatively small-scale datasets, and only reports search-efficiency results. Quite a few critical metrics are ignored, so the proposed solution may not be practical enough under the general evaluation criteria of ANNS benchmarks. Theoretical Claims: The theoretical results look okay and may offer insights to existing research. However, the proposed approximation ratio appears to be limited to the L2 distance. Experimental Designs Or Analyses: In terms of the experimental designs, I have the following comments:
1. **Dataset Scalability**: The datasets used in the experiments are relatively small. The authors should evaluate their method on larger-scale datasets containing over 100 million or even 1 billion vectors to demonstrate scalability and robustness. In particular, the competitor DiskANN is designed as an on-disk index, which aims to handle billion-scale vectors.
2. **Evaluation Scope**: The experimental study focuses only on the application scenarios of in-memory indexes. Beyond the new graph structure, DiskANN also contributes to ANNS over large-scale data that needs to be stored on disk.
This application scope should be emphasized and validated in the experiments.
3. **Baseline Selection**: If the authors claim that their contribution lies in the novel graph structure, they should compare their method with other existing graph-based ANNS indexes such as NSG [PVLDB2019] and NSSG [TPAMI22].
[PVLDB2019] Cong Fu, Chao Xiang, Changxu Wang, et al. Fast Approximate Nearest Neighbor Search With The Navigating Spreading-out Graph. PVLDB, 12(5): 461-474, 2019.
[TPAMI22] Cong Fu, Changxu Wang, Deng Cai. High Dimensional Similarity Search With Satellite System Graph: Efficiency, Scalability, and Unindexed Query Compatibility. IEEE Trans. Pattern Anal. Mach. Intell. 44(8): 4139-4150 (2022)
===================== The rebuttal has carefully addressed these comments, so I have no concerns on these issues now. Supplementary Material: I read the main results in the appendix. Relation To Broader Scientific Literature: This paper proposes a new strategy for ANNS indexing with a better approximation ratio. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: 1. How does varying k (e.g., increasing from 20 to 100) affect search performance? 2. For the OpenAI dataset (dimension = 1536), the average node degree decreases from 56 to 53, while for SIFT1M (dimension = 128), it decreases from 49 to 44 (a similar trend is observed in the Wiki dataset). Does this imply that the performance of the proposed strategy diminishes as the dimensionality of the vectors increases? Code Of Conduct: Affirmed. Overall Recommendation: 4 Ethical Review Concerns: N/A
Rebuttal 1: Rebuttal: We appreciate the reviewer’s detailed feedback and would like to clarify the key contributions of our work. Rather than proposing a new algorithm that outperforms DiskANN, our primary contribution is theoretical, providing a deeper understanding of the DiskANN algorithm in two key ways. First, we establish an improved approximation guarantee for DiskANN by leveraging two critical insights: (a) explicitly analyzing the $\ell_2$ setting, which is widely used in practice and has favorable mathematical properties, and (b) examining the _sorting_ step in the prune procedure. Using these insights, we introduce a novel factor-revealing SDP to demonstrate an improved approximation factor of $\frac{\alpha}{(\alpha - 1)}$, compared to the previously known bound of $\frac{\alpha + 1}{\alpha - 1}$ from Indyk and Xu (NeurIPS 2023). Moreover, we show that omitting either sorting or the $\ell_2$ setting results in a best possible approximation ratio of $\frac{\alpha + 1}{\alpha - 1}$, matching Indyk-Xu’s original result. This suggests that DiskANN may perform better under the $\ell_2$ metric than other distance metrics. Second, we introduce techniques that establish the first-ever theoretical bounds for retrieving $k > 1$ candidates in a beam search setting using a graph-based ANNS algorithm. Despite its practical significance in scalable retrieval, this regime remains underexplored, with prior work (NeurIPS 2023, ICML 2020, SoCG 2018, SODA 1993) exclusively focusing on the $k = 1$ case. We also believe our factor-revealing LP/SDP framework will be valuable for further research in this setting. In short, our core contribution lies in the theoretical understanding of the sorted prune step in DiskANN, and to support our theoretical findings, we conducted experiments evaluating the impact of sorting during graph construction. We did not compare DiskANN with other ANNS algorithms, as this has been extensively studied in prior work. 
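For concreteness, the gap between the two guarantees mentioned above can be quantified directly:

```latex
\frac{\alpha+1}{\alpha-1} \;-\; \frac{\alpha}{\alpha-1} \;=\; \frac{1}{\alpha-1},
\qquad \text{e.g., for } \alpha = 2:\quad
\frac{\alpha+1}{\alpha-1} = 3
\quad\text{vs.}\quad
\frac{\alpha}{\alpha-1} = 2 .
```

So for the commonly used setting $\alpha = 2$, the sorted-prune analysis tightens the worst-case distance approximation from a factor of 3 to a factor of 2.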
We thank the reviewer for suggesting additional experiments. Based on the suggestions, we include plots and tables for 100M-scale datasets, key metrics such as build times, and RAM usage and disk I/O reads per query (for SSD-based indices). Please note that all dropbox links are anonymized. This comprehensive empirical study will be included in the final version. Link to DiskIO plots: https://www.dropbox.com/scl/fo/c44n8ij92zpt19qdedw1o/AJ-GeHiB6-38q1eRqJ7XYSw?rlkey=yfvqwr98038o4xuvq9dj1ghcy&st=ws3cd67g&dl=0 Link to RAM usage and Build Times: https://www.dropbox.com/scl/fo/v3cdv5iv76aryqcrh639x/AJgfthBn3IFAVaUGdWGzHwU?rlkey=ghddj7evgo7ldcrl1whc6st7m&st=nsjbz81n&dl=0 Link to Large-Scale dataset plots: https://www.dropbox.com/scl/fo/mp2cq55hz6bbd7rswojyu/AHJF6oHGCVUt5xz6Ah-OPao?rlkey=rt7dvticzt2jlsvq3pkf4nncf&st=j4saygfe&dl=0 **Q1 (varying k)**: On two large scale datasets, we have included plots of Recall@k vs. QPS for different values of k to analyze the effect of sorting during pruning. The search behavior remains consistent with our observations for k= 100. Link to varied-k plots: https://www.dropbox.com/scl/fo/m6f1pdq01npiy0dlv2ueq/AJa2AL5Ij3J1S8hhkvVhax8?rlkey=6f83rhwgkafme6s6v6r4ke3v6&st=um503h2h&dl=0 **Q2 (degree reduction and varying dimension)**: We appreciate the reviewer’s insightful question. While the degree does decrease for graphs built using sorted pruning on real-world datasets (as shown in Table 2), there is no clear correlation between this decrease and data dimensionality. Consider a simple one-dimensional example: let $p, a, b, c$ be four points on a line with distances $d(p,a)=1$, $d(a,b)=0.5$, and $d(b,c)=1.5$. The sorted algorithm would connect p to both $a$ and $c$ to maintain $\alpha$-reachability, whereas a more efficient algorithm could instead connect $p$ to $b$ while still preserving $\alpha$-reachability. This example illustrates that degree change is not tied to dimensionality. 
Additionally, a lower degree does not necessarily translate to improved performance, as latency depends on both the average degree and the graph’s diameter. A reduced degree may increase the graph’s diameter, making the impact on final latency difficult to predict. To further investigate the effect of data dimension on the performance of sorted vs unsorted index construction, we conducted an additional experiment on the OpenAI dataset. We used a random Johnson-Lindenstrauss matrix to project the 1536-dimensional OpenAI embeddings into a 352-dimensional space and measured the degree change due to sorting. With sorting, the average degree was 54.15; without sorting, it increased to 57.3—a 5.8% rise. This aligns closely with the 5.6% increase observed in the original dataset (Table 2), suggesting that the degree change is a property of the dataset rather than a direct function of dimensionality. So it might be more a function of the inherent structure of the dataset as opposed to the dimension itself. --- Rebuttal Comment 1.1: Comment: In the rebuttal, the authors have carefully addressed my previous comments, particularly regarding the experimental design and analysis. After reviewing the updated results and discussion, I have no further concerns at this time. Therefore, I will raise my score based on the rebuttal.
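The one-dimensional example in the rebuttal above can be checked mechanically. Below is a rough sketch of the sorted prune rule (scan candidates in order of increasing distance from $p$; discard a candidate $c$ if some already-kept neighbor $a$ satisfies $\alpha \cdot d(a,c) \le d(p,c)$). The value $\alpha = 2$ is an assumption chosen so the example's numbers work out exactly; the rebuttal does not fix $\alpha$:

```python
def sorted_prune(p, candidates, d, alpha):
    """Sorted robust prune: scan candidates by increasing distance from p,
    keeping c unless an already-kept neighbor a has alpha*d(a, c) <= d(p, c)."""
    kept = []
    for c in sorted(candidates, key=lambda x: d(p, x)):
        if all(alpha * d(a, c) > d(p, c) for a in kept):
            kept.append(c)
    return kept

# Points on a line: p=0, a=1, b=1.5, c=3, so d(p,a)=1, d(a,b)=0.5, d(b,c)=1.5.
d = lambda x, y: abs(x - y)
p, a, b, c = 0.0, 1.0, 1.5, 3.0
kept = sorted_prune(p, [a, b, c], d, alpha=2.0)
# A single edge (p, b) would also make a and c alpha-reachable:
covers_via_b = all(2.0 * d(b, x) <= d(p, x) for x in (a, c))
```

The sorted rule keeps both $a$ and $c$ (degree 2), while the single edge $(p, b)$ already preserves $\alpha$-reachability (degree 1), matching the rebuttal's point that degree changes stem from dataset geometry rather than dimensionality.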
Summary: This paper conducts a theoretical analysis of graph-based approximate nearest neighbor search algorithms. In particular, the authors propose a new theoretical framework called the $\alpha$-reachable graph and, using this framework, provide the first worst-case complexity analysis of beam search in situations where $k > 1$. ## update after rebuttal After the rebuttal, I decided to keep my original score, WA. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: None. Relation To Broader Scientific Literature: This paper advances the theoretical development of graph-based algorithms in approximate nearest neighbor search. In particular, it is a direct extension of [Indyk & Xu 2024]. Essential References Not Discussed: None. Other Strengths And Weaknesses: ## Strengths The greatest strength of this paper lies in its extension of [Indyk & Xu, 2024] to analyze the worst-case complexity of beam search for $k > 1$. Introducing the $\alpha$-reachable concept is intuitive, and the problem setup is straightforward without conceptual difficulties. Additionally, Problem 1 in Sec 3.1 is formulated clearly, and the restructuring of the problem in this way is intriguing. The proofs are elementary to follow. Moreover, the insight that a dual problem can be considered when transitioning from a general metric $D$ to the Euclidean distance is well-motivated and interesting. ## Weaknesses There are no major concerns, but if anything, the experiments in Sec 4 focus on illustrating the concept of "sorting" rather than supporting the paper's primary claim regarding worst-case complexity. It would be interesting to see a more direct evaluation of how the proposed bounds relate to general high-dimensional vector data. 
Other Comments Or Suggestions: In Sec A.4 of the supplementary material, the statement "hence we can reformulate 6 as 1" appears to be a typo and should likely be "hence we can reformulate objective 6 as problem 1." Similarly, there is inconsistency throughout the paper regarding whether to write "1" or "problem 1." I recommend unifying this notation. If only a number is used, it would be more precise to enclose it in parentheses as "(1)." Questions For Authors: I have no particular questions. Ethical Review Concerns: None. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and suggestions. We will address all typos and wording mistakes in the final version. We want to emphasize one point: in addition to giving the first analysis for beam search, which is widely used in practice, we also improve the current state-of-the-art approximation guarantees for the DiskANN algorithm when the underlying metric is Euclidean. This suggests that DiskANN may perform better under the $\ell_2$ metric compared to other distance metrics. Regarding experiments for quantifying the solution quality, we include a link to plots demonstrating average approximation ratio across queries versus QPS for two large 100M-scale datasets: Microsoft SPACEV and BIGANN (https://big-ann-benchmarks.com/neurips23.html) for different values of $k$. The average approximation ratio is the mean, over all queries, of the maximum, over all items $i$ in the final beam of size $k$, of the ratio of the distance of the $i$th candidate computed by the algorithm to the distance of the $i$th closest true nearest neighbor. (Anonymized Dropbox link) Approximation Ratio Plots: https://www.dropbox.com/scl/fo/y76d8hn51pkl097g6yaa0/AGyI89kxLaI8ouvgG8FNjRM?rlkey=xahf5obtefjqz35jj1yig6p3r&st=5h6r0brf&dl=0 For convenience, we include the raw data for the BIGANN approximation ratio plot below. Note that for an approximation ratio of 1.008, the sorted instance supports a QPS of around 58000 (row 7) while the unsorted instance supports a QPS of only 25000 (row 11). 
| Row | Unsorted Instance QPS | Sorted Instance QPS | Unsorted Approximation Ratio | Sorted Approximation Ratio | |-------|-----------------------|---------------------|-----------------------------|---------------------------| | 1 | 84464.5 | 93165.4 | 1.02061 | 1.01429 | | 2 | 78923.5 | 85955.7 | 1.01918 | 1.01336 | | 3 | 76587.9 | 79858.2 | 1.01795 | 1.01257 | | 4 | 71633.8 | 76934.9 | 1.01704 | 1.01140 | | 5 | 67506.9 | 67532.4 | 1.01612 | 1.01064 | | 6 | 59775.7 | 61504.0 | 1.01384 | 1.00933 | | 7 | 53310.9 | 58497.7 | 1.01267 | 1.00846 | | 8 | 51330.2 | 54442.5 | 1.01221 | 1.00796 | | 9 | 40633.4 | 42155.7 | 1.01029 | 1.00675 | | 10 | 32898.3 | 32861.8 | 1.00907 | 1.00579 | | 11 | 27124.3 | 28039.5 | 1.00812 | 1.00524 | | 12 | 22319.7 | 23551.2 | 1.00726 | 1.00451 |
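The average-approximation-ratio metric defined in this rebuttal can be computed as follows; a minimal sketch (the function name and array layout are our assumptions):

```python
import numpy as np

def avg_approx_ratio(beam_dists, true_dists):
    # Mean over queries of the max over beam positions i of
    # dist(i-th returned candidate) / dist(i-th true nearest neighbor).
    # beam_dists, true_dists: shape (num_queries, k), each row sorted
    # in ascending order of distance.
    ratios = np.max(np.asarray(beam_dists) / np.asarray(true_dists), axis=1)
    return float(ratios.mean())
```

A ratio of 1.0 means the beam exactly recovered the k nearest neighbors for every query; the table above reports this quantity against QPS.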
Summary: This paper addresses the problem of Approximate Nearest Neighbor Search (ANNS), which is crucial for various applications dealing with large datasets in high-dimensional spaces. Graph-based data structures, particularly those in the DiskANN family, have shown strong empirical performance. However, theoretical understanding of their worst-case behavior has been limited. This work builds upon the concept of alpha-reachable graphs introduced by [Indyk & Xu, 2024] for DiskANN, which provided the first provable worst-case guarantees. The authors identify three key open questions from that prior work: 1. Can stronger guarantees be derived for the Euclidean metric, commonly used in practice? 2. Does the sorting step before pruning in the DiskANN algorithm contribute to the theoretical guarantees? 3. Can provable guarantees be established for beam search, the practical algorithm used to retrieve k > 1 nearest neighbors? This paper aims to answer these questions, offering improved theoretical analysis for DiskANN, especially in Euclidean spaces, and providing the first worst-case guarantees for beam search in graph-based ANNS methods for retrieving multiple neighbors. Claims And Evidence: The authors introduce sorted α-reachable graphs, and use this notion to obtain a stronger approximation factor of α/(α−1) for the DiskANN algorithm on Euclidean metrics. The authors present the first worst-case theoretical analysis for the popular beam-search algorithm, which is used in practice to search these graphs for k > 1 candidate nearest neighbors. Methods And Evaluation Criteria: The proposed algorithm is similar to the DiskANN graph, except that the proposed method sorts the candidate neighbors before pruning the graph, while the baseline method does not. The authors then compare the graph degrees, recall, and QPS on modern hardware. Theoretical Claims: The authors define the sorted alpha-reachable graph as follows. 
Given a dataset P, a distance metric D, and α > 1, a directed graph G is said to be a sorted α-reachable graph if for any pair of points v, a ∈ P, either the edge (v, a) exists in G or there exists a point t ∈ P such that: 1. (v, t) is an edge in G; 2. D(t, a) ≤ D(v, a)/α (α-reachability); 3. D(v, t) ≤ D(v, a) (sorting property). The authors argue that when performing beam search on a sorted α-reachable graph, the following worst-case bounds hold and are tight: D(b_j, q) ≤ α/(α − 1) · D(a_j, q) for the L2 metric, and D(b_j, q) ≤ (α + 1)/(α − 1) · D(a_j, q) for a general metric, where a_j is the j'th nearest neighbor of q. Experimental Designs Or Analyses: The authors provided benchmarks on three datasets: OpenAI, SIFT-1M, and Wikipedia, with the baseline DiskANN implementation from ParlayANN. The benchmark and analysis compared the graph degrees, recall, and QPS on modern hardware. It could be interesting to see if the method scales to larger datasets such as those provided by big-ann (https://big-ann-benchmarks.com/neurips23.html), or generalizes to other small datasets provided by ann-benchmarks (https://ann-benchmarks.com/index.html). Supplementary Material: The proof details in the supplementary material are solid. Relation To Broader Scientific Literature: - Essential References Not Discussed: - Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Questions For Authors: Can the authors provide more benchmarks such as big-ann (https://big-ann-benchmarks.com/neurips23.html) or ann-benchmarks (https://ann-benchmarks.com/index.html)? Code Of Conduct: Affirmed. Overall Recommendation: 4
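The sorted α-reachability property quoted in this review can be checked mechanically on a small graph; a minimal sketch (the representation and naming are our assumptions):

```python
def is_sorted_alpha_reachable(points, edges, D, alpha):
    # Returns True iff for every ordered pair (v, a) with no edge (v, a),
    # some intermediate t satisfies both the alpha-reachability and
    # sorting properties from the definition above.
    for v in points:
        for a in points:
            if v == a or (v, a) in edges:
                continue
            ok = any(
                (v, t) in edges
                and D(t, a) <= D(v, a) / alpha   # alpha-reachability
                and D(v, t) <= D(v, a)           # sorting property
                for t in points if t != v
            )
            if not ok:
                return False
    return True
```

A complete graph trivially satisfies the property; removing a long edge can break it when no intermediate point is close enough to the far endpoint.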
Rebuttal 1: Rebuttal: We thank the reviewer for their kind words and thoughtful suggestions. As suggested by the reviewer, we have included a link to plots on two large 100M-scale datasets from big-ann-benchmark, confirming the importance of sorting irrespective of dataset size. We will include all of these plots in the final version of our submission. For convenience, we also include a table capturing one such plot for the 100M-scale Microsoft SPACEV dataset, comparing QPS to 100@100Recall. See that for this dataset, the sorted index outperforms the unsorted index, delivering 23% higher QPS for a recall of ~0.93. (Anonymized Dropbox link) Large Scale Experiment Plots: https://www.dropbox.com/scl/fo/mp2cq55hz6bbd7rswojyu/AHJF6oHGCVUt5xz6Ah-OPao?rlkey=rt7dvticzt2jlsvq3pkf4nncf&st=7z86irq4&dl=0 | Row | Unsorted QPS | Unsorted 100@100 Recall | Sorted QPS | Sorted 100@100 Recall | |-------|---------------------|-------------------------------|-----------------------|----------------------------------| | 1 | 98,408 | 0.9044 | 108,715 | 0.9132 | | 2 | 88,007 | 0.9101 | 95,070 | 0.9184 | | 3 | 80,488 | 0.9153 | 88,984 | 0.9228 | | 4 | 76,305 | 0.9197 | 84,397 | 0.9268 | | 5 | 71,054 | 0.9235 | 78,176 | 0.9301 | | 6 | 63,347 | 0.9302 | 68,483 | 0.9358 | | 7 | 57,392 | 0.9357 | 61,069 | 0.9405 | | 8 | 54,370 | 0.9381 | 58,041 | 0.9425 | | 9 | 42,049 | 0.9479 | 44,921 | 0.9505 | | 10 | 33,312 | 0.9541 | 35,845 | 0.9560 | | 11 | 27,583 | 0.9587 | 29,182 | 0.9602 | | 12 | 22,502 | 0.9621 | 23,804 | 0.9634 |
Compositional Scene Understanding through Inverse Generative Modeling
Accept (poster)
Summary: This work demonstrates how generative vision models can be composed to enable robust compositional scene understanding. The proposed solution enables different scene understanding tasks involving discrete or continuous factors, such as localization and multi-attribute classification. The method can also incorporate pretrained diffusion models to enable zero-shot classification. Claims And Evidence: 1. The proposed method can accurately infer concepts. - The prediction of continuous factors is demonstrated well by the results in Tab. 1 and Figs. 4&5, where the method achieves impressive performance. - The prediction of discrete factors is demonstrated well in Tab. 5 and Fig. 6. 2. The proposed method can accurately infer the number of concepts. - This is supported well by Fig. 3. 3. The proposed method can incorporate pretrained diffusion models. - This is supported well by Fig. 7 and Tab. 3. Methods And Evaluation Criteria: - The proposed optimization scheme for determining $K$ and the set of $c_k$s is sound and explained well. - The evaluation is sound and uses established standard metrics. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: The experimental design for object localization using CLEVR and multi-attribute classification using CelebA is sound and uses established datasets. - For the experiments on CelebA, I am wondering why the authors opt to use all male faces as the OOD set, which effectively removes the female/male attribute from classification. The more standard subdivision of the dataset in ID/OOD regions for compositional tasks would be to ensure that the ID set contains all values of all factors, but not all combinations, e.g., all women are only shown smiling, all men are only shown with a neutral face in the ID set. See, e.g., [1,2]. - What is the dataset used in Sec. 4.3 and Fig. 7/Tab. 3? Is this a dataset curated by the authors? 
A comparison on CelebA with pretrained models in zero-shot mode could have been equally insightful, or the authors could have opted for an established multi-label classification dataset. [1] Schott et al., 2022, Visual Representation Learning Does Not Generalize Strongly Within the Same Domain [2] Wiedemer et al., 2023, Compositional Generalization from First Principles Supplementary Material: I have read through all supplementary materials and considered them for my review. Relation To Broader Scientific Literature: The basic compositional model follows prior work in the literature, as also noted by the authors. To the best of my knowledge, the inversion of this process is novel, as is the application to multi-target classification and localization. Essential References Not Discussed: No. Other Strengths And Weaknesses: The paper is overall clear, well-written and easy to follow. The method is an interesting and novel extension of prior works, and experiments are straightforward and demonstrate well the efficacy of the method. Other Comments Or Suggestions: - small typo L198: ignore --> ignoring Questions For Authors: - How feasible would it be to extend this method to more "involved" types of compositions, such as part-whole relationships or hierarchical compositions? Could this method (in principle, given unlimited compute) be extended to infer, e.g., a complete scene-graph of an input? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and comments! Please see our response below about your concerns. **1. For the experiments on CelebA, I am wondering why the authors opt to use all male faces as the OOD set, which effectively removes the female/male attribute from classification. The more standard subdivision of the dataset in ID/OOD regions for compositional tasks would be to ensure that the ID set contains all values of all factors, but not all combinations, e.g., all women are only shown smiling, all men are only shown with a neutral face in the ID set. See, e.g., [1,2].** Thank you for your insightful question and the suggested references! Our primary goal is to evaluate the generalization ability of our compositional inference framework under challenging out-of-distribution (OOD) conditions by ensuring that the test data is substantially different from the training data. For example, in object discovery, the model is trained on Clevr while tested on a different dataset ClevrTex. Following the same spirit, we decided to train our model on CelebA female faces while testing it on CelebA male faces, as there are significant facial characteristic differences between the two groups. However, we agree that subdivision of the dataset in [1,2], where all attribute values are present in the training set but not all combinations, is also helpful for evaluating compositional generalization, and we will update the paper to discuss this evaluation procedure. Unfortunately, this type of combinational shift is not available to gather in a controlled manner in real-world datasets, leading us to use the CelebA dataset. [1] Schott et al., 2022, Visual Representation Learning Does Not Generalize Strongly Within the Same Domain [2] Wiedemer et al., 2023, Compositional Generalization from First Principles **2. What is the dataset used in Sec. 4.3 and Fig. 7/Tab. 3? Is this a dataset curated by the authors? 
A comparison on CelebA with pretrained models in zero-shot mode could have been equally insightful, or the authors could have opted for an established multi-label classification dataset.** Thank you for the insightful question! Yes, the dataset used in Sec. 4.3, Fig. 7, and Table 3 was curated by the authors and consists of 20 images containing a cat and a dog, 22 images containing a cat and a rabbit, and 28 images containing a dog and a rabbit. Our primary focus in these experiments was to evaluate multi-object perception performance (local factor discovery) using compositional pretrained models compared against Diffusion Classifier. We agree that testing these models on CelebA to discover global attribute factors could provide additional insights and will do so in the final version of the paper. **3. Small typo L198: ignore --> ignoring** Thank you for pointing out the typo! We will fix it in the next version. **4. How feasible would it be to extend this method to more "involved" types of compositions, such as part-whole relationships or hierarchical compositions? Could this method (in principle, given unlimited compute) be extended to infer, e.g., a complete scene-graph of an input?** Thank you for the insightful question! We believe that our method would in principle be able to scale to more “involved” types of compositions, such as inferring a complete scene-graph. In the setting of a scene graph, each of the composed factors in our systems would correspond to an “edge” or relation between two objects. We then enumerate through different edges in a graph until we get a graph whose composed set of edges leads to a good reconstruction of the scene. We will update the paper and discuss this in the future work.
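The composition of per-concept generative models discussed in this rebuttal follows the composable-diffusion form of Liu et al. (2022), cited elsewhere in these reviews: the composed denoiser output is the unconditional prediction plus the sum of per-concept conditional deltas, corresponding to a product of per-concept conditionals. A minimal sketch (all names here are our assumptions, not the authors' code):

```python
import numpy as np

def composed_eps(eps_uncond, eps_conds):
    # Product-of-experts composition of K conditional models:
    # eps_uncond + sum_k (eps_cond_k - eps_uncond).
    return eps_uncond + sum(e - eps_uncond for e in eps_conds)

# toy example: two "concept" denoisers pulling in different directions
eps_u = np.zeros(4)
out = composed_eps(eps_u, [np.ones(4), -0.5 * np.ones(4)])
```

In the scene-graph extension sketched above, each element of `eps_conds` would correspond to one inferred edge/relation.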
Summary: The paper introduces an innovative inverse generative modeling framework designed to perform compositional scene understanding. Its primary innovation is to interpret visual scenes as compositions of smaller generative models, enabling effective generalization to complex or unseen scenes. Specifically, the authors propose training a generative model through diffusion processes and then using this model inversely, inferring conditional parameters (eg, object categories, locations, facial attributes) from images. Major contributions include the introduction of compositional modeling into generative inference, achieving improvement in generalization capability, and a demonstration of the method's broad applicability across various scene understanding tasks, including object discovery and zero-shot perception. Claims And Evidence: The claims made in the paper, primarily that compositional inverse generative modeling improves generalization on various visual tasks, are well supported by clear experimental evidence. The experiments across diverse tasks (ie, object detection on CLEVR and CLEVRTex datasets, facial attribute classification on CelebA, and zero-shot object recognition using pretrained Stable Diffusion) demonstrate improved performance over multiple baselines. Methods And Evaluation Criteria: The proposed method and evaluation criteria are appropriate for the addressed problems. Using established benchmarks (CLEVR, CLEVRTex, CelebA) and pretrained diffusion models (Stable Diffusion) provides relevant metrics, such as perception rate, estimation errors, and classification accuracy, all of which clearly measure method effectiveness and generalization. Theoretical Claims: No explicit theoretical proofs were presented. Experimental Designs Or Analyses: The experimental design and analyses presented in the paper are sound and robust. The authors differentiate between in-distribution and out-of-distribution scenarios, clearly showcasing generalization. 
The ablation studies, such as testing the effects of the multiple-initialization strategy, further strengthen the validity and thoroughness of their experiments. Supplementary Material: This supp material provides helpful insights into method implementation and additional robustness checks, significantly complementing the main text. Relation To Broader Scientific Literature: This paper effectively builds on prior work in generative modeling (diffusion models, generative classifiers) and compositional visual understanding. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: 1. The novelty of applying compositionality explicitly within inverse generative modeling. 2. Significant and demonstrable improvements over strong baseline models, especially in generalization scenarios. 3. Effective demonstration of zero-shot capabilities using pretrained generative models, indicating high practical relevance. Weaknesses: 1. The computational complexity involved when enumerating concept combinations might limit scalability for tasks involving numerous discrete concepts. Other Comments Or Suggestions: Minor suggestions include: 1. Clarify computational overhead or potential methods to reduce complexity in real-world deployments. 2. Consider discussing the limitations of compositional approximations explicitly in the paper. Questions For Authors: 1. How scalable is the compositional inverse generative modeling approach when dealing with a very large number of visual concepts or categories? Would an alternative optimization strategy reduce complexity without sacrificing accuracy? 2. Could you further elaborate on how this compositional modeling approach would handle dynamic or interactive scenes (eg, videos)? Have you considered any temporal extensions or adaptations? Ethical Review Concerns: n/a Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback and comments! Please see our response below about your concerns. **1. How scalable is the compositional inverse generative modeling approach when dealing with a very large number of visual concepts or categories? Would an alternative optimization strategy reduce complexity without sacrificing accuracy? Clarify computational overhead or potential methods to reduce complexity in real-world deployments.** Thank you for the insightful question! Overall, we agree that our approach is substantially slower than existing feedforward methods, but we believe that the additional computational cost is justified by the ability of our approach to generalize well to unseen complex scenes. When dealing with a large number of visual concepts and categories, our naive algorithm of enumeration has exponential growth of the search space ($M^K$) for discrete values. Beyond the early stopping strategy discussed in our submission, several additional approaches can potentially significantly mitigate this computational bottleneck. One approach could be to use heuristic search algorithms on discrete values – for instance, we can run beam search with a beam width of K over each attribute sequentially, which can reduce the time complexity from $M^K$ to $O(M*K)$, making inference more efficient for large discrete spaces. Alternatively, continuous relaxation of discrete variables with Gumbel-Softmax/Concrete relaxation could allow the use of gradient-based search and thus avoid enumerating all configurations. Finally, since our approach allows parallel processing across the $M^K$ configurations, inference time can be drastically reduced given sufficient computational resources, potentially approaching the time required for a single configuration evaluation. We will discuss these possible extensions in future work. **2. 
Consider discussing the limitations of compositional approximations explicitly in the paper.** Thank you for the valuable suggestion! We will add a limitations section in the next version of our paper and discuss this. Our compositional generative modeling approach assumes object concept independence given the input image, enabling combinatorial generalization beyond the training distribution. However, one possible limitation of this full independence approximation is that it ignores the interaction between objects, which are crucial in many real-world scenarios. As a remedy, we could potentially learn additional models that model interactions between object components which can also be composed to represent more complex scenes. We will explicitly discuss this limitation and potential remedies in the limitations section. **3. Could you further elaborate on how this compositional modeling approach would handle dynamic or interactive scenes (eg, videos)? Have you considered any temporal extensions or adaptations?** Thank you for the insightful question! Extending our approach to dynamic or interactive scenes is an exciting direction for future work that we are also currently exploring. In the dynamic setting, we can change the reconstruction objective to not be reconstructing a given image but instead to reconstruct the entire video of the target interaction. We can condition each composed generative model on different aspects of the interactions such as the identity of an individual objects or agent in the environment. The inverse generative modeling procedure can then be used to persistently discover objects in the scene, even if they are occluded at different points in time, or to infer the individual behaviors of each individual agent in the interactive environment. We will update the paper and discuss this in future work.
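The sequential beam-search strategy mentioned in point 1 of this rebuttal (beam width K over attributes, scoring only a small number of partial configurations per attribute instead of all $M^K$ full ones) could look like the following; a minimal sketch, with the scoring interface as our assumption:

```python
def beam_search_attrs(score, num_attrs, num_values, beam_width):
    # Greedy beam search over attribute assignments. `score` maps a
    # partial assignment (tuple of chosen values) to a number; higher
    # is better. Explores num_values * beam_width candidates per
    # attribute instead of num_values ** num_attrs full configurations.
    beam = [()]
    for _ in range(num_attrs):
        candidates = [p + (v,) for p in beam for v in range(num_values)]
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]
    return beam[0]
```

Here `score` stands in for whatever partial-configuration objective (e.g. a reconstruction-based likelihood under the partially specified condition) the method would use.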
Summary: This paper casts scene understanding as an inverse generative modeling problem, where attributes of the scene are extracted by seeking parameters of the generative model that best fit a given image. To facilitate generalizability, the paper proposes to model visual scenes compositionally - by modeling subcomponents of the scene with smaller models and composing them in the form of energy-based functions. The paper proposes different optimization methods to search for discrete and continuous attribute variables, and experimental results show that the proposed formulation can more accurately find attributes of images compared to related methods, while being able to generalize to novel scenes that have different content distributions than the training images. Claims And Evidence: The claim that compositional inverse generative modeling enables more effective scene understanding is supported empirically by local factor perception (object counting, position estimation on the CLEVR and ClevrTex datasets), global factor perception (facial attribute prediction on the CelebA dataset), and zero-shot multi-object perception, on which the proposed method shows better results than related previous methods. But the datasets used in the experiments mainly contain simple concepts (the CLEVR dataset with a limited number of objects, or human faces with a fixed set of attributes); as also mentioned in the limitations section of the paper, it is not practical to scale the proposed method (in its current state) to more complex, realistic real-world scenes. Methods And Evaluation Criteria: method: 1. The compositional generative modeling formulation of the proposed method makes intuitive sense. However, in order to escape local optima, successful inference of continuous concepts relies on multiple random trials. 
The number of random trials may vary depending on the complexity of the concept, which may not guarantee that the method will succeed on problems that are significantly different from the tested problems in the paper without further tuning this hyperparameter. datasets: 1. The tested datasets contain simple concepts (a limited number of objects in CLEVR, a limited set of attributes in faces), so although the performance evaluation positively supports the proposed idea, the method in its current state still cannot be extended to more complex real-world scenes. metrics: 1. As mentioned in the limitations of the paper, the searching process of the proposed method is long due to its exhaustive-search nature, so when comparing with related methods, in addition to the perception accuracy, it would be better to add an inference-time comparison as well. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: Yes, I checked all the subsections of the experiment section. No obvious issues. Supplementary Material: Yes, I checked the detailed experimental setup, ablations, and more visualization results of the proposed method in the supplementary material. Relation To Broader Scientific Literature: The proposed method approximates the conditional probability distribution of the scene under different concepts with a product of probabilities under individual concepts; this is related to [1]. It further uses the denoising function in a diffusion model to estimate the gradient of the energy function, which is inspired by [2]. [1] Du, Yilun, and Leslie Kaelbling. "Compositional generative modeling: A single model is not all you need." arXiv preprint arXiv:2402.01103 (2024). [2] Liu, Nan, et al. "Compositional visual generation with composable diffusion models." European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022. Essential References Not Discussed: I haven't found missing essential references. Other Strengths And Weaknesses: Strengths: 1. 
The paper is well written and easy to follow. 2. The description of the experimental setups is comprehensive. Other Comments Or Suggestions: please see questions Questions For Authors: 1. When inferencing a continuous concept, how to decide how many random trials are needed in order to escape a local optima solution? 2. For the tested perception tasks, how is the inference time of the proposed method compared to the related baselines? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and comments! Please see our response below about your concerns. **1. The tested datasets contain simple concepts (limited number of objects in CLEVR, limited set of attributes in face), so although performance evaluation positively supports the proposed idea, the method in its current state still cannot be extended to more complex real world scenes.** Thank you for your insightful question! Our model is evaluated on widely adopted datasets (CLEVR and CelebA) that are commonly used for object discovery and multi-label classification. While these datasets are relatively simple, they serve as strong benchmarks for measuring effectiveness and generalization, as also acknowledged by reviewers 6fxB, Tv9z and s7kA. Nonetheless, we agree that testing our model on more complex scenes would be interesting. To take a first step towards that end, we conducted additional evaluations on ClevrTex [1], which features diverse object colors, textures, shapes, and complex backgrounds. As shown in the table below, our model outperforms all baselines in object discovery on ClevrTex, demonstrating its potential scalability to more complex scenarios.

| | In-distribution (ClevrTex 3-5 objects) | | Out-of-distribution (ClevrTex 6-8 objects) | |
|:---:|:---:|:---:|:---:|:---:|
| | Perception Rate | Estimation Error | Perception Rate | Estimation Error |
| **ResNet-50** | 3.9 | 0.00196 | 1.8 | 0.00198 |
| **Slot Attention** | 41.9 | 0.00152 | 35.2 | 0.00161 |
| **Generative Classifier** | 69.6 | 0.00098 | 52.9 | 0.00135 |
| **Ours** | 85.2 | 0.00051 | 72.4 | 0.00078 |

[1] Karazija, et al., 2021. ClevrTex: A texture-rich benchmark for unsupervised multi-object segmentation. **2. When inferencing a continuous concept, how to decide how many random trials are needed in order to escape a local optima solution?** Thank you for raising this important point! 
We acknowledge that escaping local optima in continuous concept inference can be influenced by the number of random trials, as demonstrated by the ablation study (Table IV) of our submission. To determine the optimal number of trials, a principled approach is to monitor the reconstruction error of the input image: if increasing the number of trials no longer leads to significant reconstruction improvements, further trials become unnecessary. In practice, our experiments show that a moderate number of trials (5 or 10) is sufficient for continuous inference tasks. **3. For the tested perception tasks, how is the inference time of the proposed method compared to the related baselines?** Thank you for suggesting the evaluation metric! We run our model and baselines on an Nvidia H100 and report inference times in the table below, where “N/A” indicates that the approach is not applicable to the task. Our results show that generative approaches, including our approach and Diffusion Classifier, require more time than discriminative approaches like ResNet-50 and Slot Attention, while our approach consumes more time than Diffusion Classifier. Specifically, our approach takes longer than Diffusion Classifier due to the need to evaluate a composition of multiple diffusion models. However, this increased inference time comes with significantly improved generalization performance. Additionally, the inference-time bottleneck could be further mitigated through parallel processing, which our algorithm supports, or by using heuristic search algorithms such as beam search to speed up optimization, which we leave for future work. We appreciate your suggestion and will include the inference-time metric in the next version of our paper and discuss how improving inference speed is an interesting direction for future work. 
| | Local Factor (Clevr) | Global Factor (CelebA) | Zero-Shot (Pretrained)| |:---:|:---:|:---:|:---:| | ResNet-50 | 0.011s | 0.023s | N/A | | Slot Attention | 0.009s | N/A | N/A | | Generative Classifier | 102.874s | 28.485s | 99.436s | | Ours | 146.854s | 29.095s | 179.957s |
Summary: This paper proposed an inverse generative modeling approach to understand attributes of a single image. This approach trains conditional generative models based on different conditions to construct a multi-conditional generative model through the composition of EBMs. The compositional modeling can generalize the generation ability to out-of-distribution images to generate combinations of conditions that have not been seen during the training phase. After the compositional generative model is trained, the inverse generative modeling algorithm is used to search for condition parameters that maximize the log-likelihood from a single input image as the inference results. For conditions with continuous attribute values, this paper proposed a gradient-based heuristic algorithm that can effectively search for optimal conditions in continuous space. The experiments show that the proposed method outperforms the comparative methods in tasks such as scene object number inference and object localization. Claims And Evidence: The claims made in this paper are clear, which are supported by the experiments. Methods And Evaluation Criteria: The datasets and evaluation criteria of this paper make sense for the problem. Theoretical Claims: I checked the correctness of Equations 1-9. There is no proof in this paper. Experimental Designs Or Analyses: The experimental design and analysis are reasonable and effective. Supplementary Material: I have reviewed all supplementary materials. Relation To Broader Scientific Literature: The main contribution of this paper is to design a novel inverse generative modeling algorithm, which infers attributes of a single image by searching optimal conditional parameters on a trained compositional generative model. The training of compositional generative models is based on several previous approaches. 
Essential References Not Discussed: The introduction to related works is generally complete, but I am not sure whether any important recent works have been missed. Other Strengths And Weaknesses: Strengths The inverse generative modeling algorithm proposed in this paper does not require additional model training for image attribute inference. This gives the method potential for direct application to existing compositional generative models. The experiments on multiple image understanding tasks also show the effectiveness and versatility of the approach. Weaknesses The compositional generative models are trained on given conditions and images, so the proposed approach requires annotations of the corresponding image attributes. Slot Attention, however, is trained with a reconstruction loss, which does not require such annotations. Therefore, one concern is whether the model comparison is fair. In addition, I recommend that the authors compare with more advanced object-centric representation methods. For example, DINOSAUR [1], which is an extension of Slot Attention that uses representations from more powerful pre-trained visual models. One concern is the efficiency of the inverse generative modeling algorithm. If the number of conditions or the number of selectable discrete values of each condition is large, the size of the search space M^K will grow exponentially. It would be valuable to conduct comparative experiments on the inference efficiency of the models. [1] Seitzer, Maximilian, et al. "Bridging the gap to real-world object-centric learning.", ICLR 2023 Other Comments Or Suggestions: In Line 8 of Algorithm 3, after calculating the gradient, should the original c_r^k be updated? In the references, some formats differ from others, e.g., the paper link in Lines 440-441. Questions For Authors: In Table 3, why does the Diffusion Classifier Variant perform worse than the original Diffusion Classifier?
Could the authors explain how to implement object discovery based on Slot Attention? Although Appendix A.5 mentions that the supervised version of Slot Attention is used in the experiments, I wonder if it is possible to infer the number of objects and locate the objects only through the mask of each object extracted by Slot Attention. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and comments! Please see our response below about your concerns. **1. One concern is the fairness of comparison with unsupervised Slot Attention. I recommend to compare with more advanced DINOSAUR.** Thank you for the insightful suggestion! We would like to clarify that the Slot Attention baseline we compare to in the paper has incorporated supervision on each head to the object coordinates in the scene (discussed in Appendix A.5), and thus has the same information as our approach. Furthermore, following the reviewer’s suggestion, we compared our model with DINOSAUR also with supervision on Clevr. As shown in the table below, DINOSAUR works better than Slot Attention, while our model outperforms DINOSAUR.

| | In-distribution Perception Rate (3-5 objects) | In-distribution Estimation Error | Out-of-distribution Perception Rate (6-8 objects) | Out-of-distribution Estimation Error |
|:---:|:---:|:---:|:---:|:---:|
| Slot Attention | 80.4 | 0.00087 | 53.3 | 0.00130 |
| DINOSAUR | 82.5 | 0.00084 | 59.0 | 0.00120 |
| Ours | 94.7 | 0.00014 | 85.3 | 0.00035 |

**2. One concern is the efficiency of the algorithm. For discrete values, the size of the search space $M^K$ will increase exponentially.** Thank you for the insightful question! Overall, we agree that our approach is substantially slower than existing feedforward methods, but we believe that the additional computational cost is justified by the ability of our method to generalize well to unseen complex scenes. It’s true that our naive algorithm has exponential growth of the search space ($M^K$) for discrete values. Beyond the early stopping strategy discussed in our paper, several additional approaches can potentially significantly mitigate this computational bottleneck.
One approach could be to use heuristic search algorithms on discrete values – for instance we can run beam search with a beam width of K over each attribute sequentially, which can reduce time complexity to $O(M*K)$, making inference more efficient for large discrete spaces. Alternatively, continuous relaxation of discrete variables with Gumbel-Softmax/Concrete relaxation could allow the use of gradient-based search and thus avoid enumerating all configurations. Finally, since our approach allows parallel processing across the $M^K$ configurations, inference time can be drastically reduced given sufficient computational resources, potentially approaching the time required for a single configuration evaluation. We appreciate your suggestion, and will include comparative experiments of the overall time complexity in our paper and discuss ways to speed up inference. **3. In Algorithm 3, should we need to update the original $c_r^k$?** Yes, we should update $c_r^k$ with the gradient $\Delta c_r^k$ in line 8. We will add an explicit update step ($c_r^k \leftarrow c_r^k - \lambda \Delta c_r^k$) to Algorithm 3 in the next version. **4. In the reference, some formats are different from others, e.g., Lines 440-441.** Thank you for pointing out the reference format issue! We will ensure reference format consistency in the next version. **5. In Table 3, why Diffusion Classifier Variant performs worse than the original Diffusion Classifier?** Diffusion Classifier is originally designed for single-label classification tasks and thus performs more poorly on the multi-object tasks we consider. To apply Diffusion Classifier on our setting, we directly condition the generative model using compound prompts (e.g., “a photo of a dog and a cat”), which we refer to as the Diffusion Classifier baseline in our experiments.
We also further explore a single object variant of Diffusion Classifier for this compound setting, where we condition the generative model on individual concepts (e.g., “a photo of a dog”) and select the two classes with the highest score. Both methods are detailed in A.5. Since the Diffusion Classifier Variant uses individual concepts to fit multi-concept images, it has worse performance than the Diffusion Classifier baseline. **6. How to implement object discovery with Slot Attention? Is it possible to infer object number and locate objects only through masks extracted by Slot Attention.** Thank you for the insightful question! To enable Slot Attention to predict object locations with supervision, we decode slot representations into object coordinates and enforce a coordinate prediction MSE loss by comparing the decoded object coordinates with ground truth, where we use the Hungarian Algorithm to align decoded coordinates and ground truth. To infer object number, during training, we can enforce empty slots to predict a fixed coordinate [1,1] (the normalized rightmost corner of the image), while object slots predict the actual object coordinates. At inference time, if the inferred coordinates are greater than a threshold close to [1,1], we determine these slots as empty, while the rest are considered to contain objects. This approach allows Slot Attention to infer both object number and their locations. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response. The authors claim that while the proposed approach is slower, it offers stronger generalization to unseen scenes, which appears to be a trade-off. But I think this statement holds only if the computational cost remains within a reasonable range (not much higher than that of other methods). So far I do not see experimental results of computational efficiency that can substantiate the claim. The application of the approach will be limited if it exhibits exponential growth in complexity.
So I would like to maintain my score. --- Reply to Comment 1.1.1: Comment: Thank you for engaging in this discussion period! We truly appreciate your time and constructive comments! To evaluate inference efficiency, we conducted a comparison of runtime performance between our method and baseline models on an NVIDIA H100 GPU. The table below shows the inference time for the discrete concept inference on CelebA considering 4 attributes. As shown, the inference time of our approach is comparable to the baseline model Diffusion Classifier. To further enable our model to work on a large number of concept settings (i.e., larger $K$), we developed a continuous approximation of our approach that allows gradient-based optimization, thereby avoiding the exponential ($M^K$) inference cost. We provide a link for the algorithm here (https://imgur.com/a/gpULhuR). Specifically, to infer binary labels with gradient-based optimization, we relax the learnable binary labels to continuous parameters in the range (0, 1). These continuous parameters are optimized using gradient descent and clamped to (0, 1) at each step to remain valid. After optimization, we decide a label is 0 if the corresponding optimized relaxed parameter is smaller than 0.5, otherwise the label is 1. As shown in the table below, the continuous gradient-based approach reduces inference time significantly, making it scale linearly with the number of concepts ($O(K)$).

| Models | OOD Accuracy (CelebA 4 features) | Inference time (CelebA 4 features) |
| ----------- | ----------- | ----------- |
| Diffusion Classifier | 0.51 | 28.49s |
| Ours | 0.60 | 29.10s |
| Ours (continuous approx) | 0.55 | 22.15s |

Additionally, we also provide the runtime of our approach and baselines on the zero-shot object perception task. Similar to the previous setting, we have developed a continuous approximation for the zero-shot object perception task to improve inference efficiency.
The algorithm can be found in (https://imgur.com/a/SzAn0Jx). Specifically, for each concept (e.g., “a photo of a cat”), we assign a learnable weight to its corresponding noise prediction in the compositional model. These weights are then optimized via gradient descent. After optimization, we select the top two concepts with the highest optimized weights as the predicted objects in the scene. As shown in the table below, this continuous relaxation leads to a significant reduction in inference time, with the time complexity scaling linearly with the number of candidate concepts ($O(K)$).

| Models | Accuracy (Zero-Shot) | Inference Time (Zero-Shot) |
| ----------- | ----------- | ----------- |
| Diffusion Classifier | 0.64 | 99.44s |
| Ours | 0.80 | 179.96s |
| Ours (continuous approx) | 0.68 | 101.05s |
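The continuous relaxation described above — one learnable weight per concept, optimized by gradient descent, with the top-2 weights read off as the prediction — can be sketched in a few lines. This is an illustration only, not the authors' code: `noise_preds` stands in for the per-concept noise predictions of the compositional model, and the squared-error gradient is written analytically instead of using autograd.

```python
import numpy as np

def infer_top2_concepts(noise_preds, target_noise, lr=0.005, steps=2000):
    """Gradient-descent sketch of the continuous relaxation: learn one
    weight per concept so that the weighted sum of per-concept noise
    predictions matches the observed noise, then pick the top-2 weights."""
    K = noise_preds.shape[0]
    w = np.full(K, 0.5)                       # one learnable weight per concept
    for _ in range(steps):
        residual = noise_preds.T @ w - target_noise
        w -= lr * 2.0 * noise_preds @ residual  # gradient of the squared error
    return np.argsort(w)[-2:]                 # indices of the two largest weights

# Toy check: the target noise is the sum of concepts 0 and 2.
rng = np.random.default_rng(0)
preds = rng.standard_normal((4, 16))          # 4 candidate concepts, 16-dim noise
target = preds[0] + preds[2]
print(sorted(int(i) for i in infer_top2_concepts(preds, target)))  # [0, 2]
```

In the toy check the least-squares solution is exactly (1, 0, 1, 0), so gradient descent recovers concepts 0 and 2 as the two highest weights.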
Summary: This paper presents a computational framework for mining the structural properties of natural scene images by recasting the problem as an inverse generative modeling task. Specifically, the authors propose a generic inverse generative modeling paradigm that integrates compositionality into a diffusion-based generative model, enabling robust generalization beyond the training distribution. During training, the model undergoes optimization by minimizing the average denoising error across discrete concepts. At inference, the framework performs a constrained search within a predefined range of concept cardinalities to identify the configuration that minimizes the average denoising error. For continuous concepts, the authors employ multiple randomized initialization points, iteratively discarding low log-likelihood candidates until convergence to a single optimal solution. Furthermore, they leverage stochastic gradient descent (SGD) to iteratively refine concept representations, enhancing computational efficiency. Empirical evaluations substantiate the efficacy of the proposed methodology, demonstrating substantial performance improvements across three scene understanding tasks—local factor perception, global factor perception, and zero-shot multi-object perception—on the CLEVR, CelebA, and a custom small-animal dataset. These results validate the proposed approach’s effectiveness and establish new state-of-the-art performance benchmarks in the domain of compositional scene understanding. Claims And Evidence: - In general, the claims of this paper are supported by clear and convincing evidence; however, two of the experiments are too trivial to fully validate the proposed method's claims. Methods And Evaluation Criteria: - Evaluating the proposed methods on CelebA and the custom small-animal dataset may not be convincing, as the images contain only a limited number of attributes (typically 2–3). 
Theoretical Claims: - This paper does not present many theoretical claims. Furthermore, the proposed method is simply an extension of an existing approach Experimental Designs Or Analyses: - As mentioned above, the experiments on CelebA and the custom small-animal dataset are too trivial. For CelebA, it may be more appropriate to use the dataset from [1], which contains more attributes and is generated by StyleGAN. For the custom small-animal dataset, providing statistical characteristics such as distribution and ensuring a more consistent and diverse set of categories would make the evaluation more convincing. [1] SeqDeepFake: Detecting and Recovering Sequential DeepFake Manipulation Supplementary Material: - I review every part of the supplementary material. Relation To Broader Scientific Literature: - This method provides a framework for mining the structural properties of natural scene images. Unlike existing approaches that rely on text-to-image models, the proposed method leverages generative models with flexible conditioning, allowing it to be applied to a broader range of visual understanding tasks. This opens up a new direction for scene understanding. Essential References Not Discussed: - In general, this paper adequately references related approaches. Other Strengths And Weaknesses: - The implementation of the approach is methodical and straightforward, enhancing its practical applicability. - Comprehensive implementation details significantly improve the reproducibility of the research. - However, certain aspects lack sufficient discussion, such as the relationship between image captioning models like BLIP-2, which undermines the novelty and contribution of the work. Other Comments Or Suggestions: - In line 146, it would be clearer to use the full name, 'Energy-Based Model,' instead of the abbreviation when mentioning it for the first time. Questions For Authors: - 1. 
How does the proposed method perform on tasks involving a greater number of attributes, as mentioned above? - 2. The authors should clarify the advantages of their approach compared to existing image captioning models. - 3. In the zero-shot tasks, why does the diffusion classifier variant perform worse than the original version? Can the authors explain this phenomenon and provide a theoretical rationale for its relationship with the proposed method? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and comments! Please see our response below about your concerns. **1. CelebA images contain only a limited number of attributes (typically 2–3). How does the proposed method perform on tasks involving a greater number of attributes?** Thank you for the insightful question! To demonstrate the effectiveness of our method on a greater number of attributes, we conducted an experiment on CelebA using 4 attributes (Black Hair, Eyeglasses, Smiling, and Wearing Hat), given our time and computational constraints. As shown in the table below, our model consistently outperforms baseline models in both in-distribution and out-of-distribution accuracy. These results demonstrate the potential of our approach to handle more than just 2–3 attributes effectively.

| | In-distribution Accuracy (Female only) | Out-of-distribution Accuracy (Male only) |
|:---:|:---:|:---:|
| ResNet-50 Classifier | 0.76 | 0.57 |
| Generative Classifier | 0.78 | 0.51 |
| Ours | 0.78 | 0.60 |

**2. For CelebA, it may be more appropriate to use the dataset from SeqDeepFake.** Thank you for suggesting this interesting paper and dataset! After reviewing SeqDeepFake, we don’t think that this dataset is directly applicable to our binary multi-label classification task. Our method requires binary labels for a fixed set of attributes, whereas SeqDeepFake provides edited attribute sequences with varying attribute annotations across images. Nevertheless, we find that the image captioning task in SeqDeepFake is very related and interesting, and would like to explore whether our model can detect edited attribute sequences in future work. We will discuss the relevance of SeqDeepFake in the related work section in the next version of our paper. **3.
For the custom small-animal dataset, providing statistical characteristics such as distribution and ensuring a more consistent and diverse set of categories would make the evaluation more convincing.** Thank you for your helpful suggestion! To clarify, our custom small-animal dataset consists of 20 images containing a cat and a dog, 22 images containing a cat and a rabbit, and 28 images containing a dog and a rabbit. We will include this dataset distribution description in the next version to enhance transparency and improve the evaluation's clarity. **4. Lack sufficient discussion on the relationship between image captioning models like BLIP-2. The authors should clarify the advantages of their approach compared to existing image captioning models.** Thank you for highlighting this connection to related work! By using pretrained text-to-image generative models (e.g., Stable Diffusion), our model can tackle image captioning tasks like BLIP-2. However, our approach is applicable to a broader range of scene understanding tasks other than image captioning. For example, by conditioning on object coordinates, our approach can perform object discovery tasks and even enable generalization to more complex scenes (many more objects) than seen at training. This flexibility and generalizability distinguishes our approach from traditional image captioning models. We appreciate your suggestion and will include a more detailed discussion in the next version. **5. In line 146, it would be clearer to use the full name, 'Energy-Based Model,' instead of the abbreviation when mentioning it for the first time.** Thank you for pointing out the abbreviation issue! We will fix it in the next version. **6. In the zero-shot tasks, why does the diffusion classifier variant perform worse than the original version?** Diffusion Classifier is originally designed for single-label classification tasks and thus performs more poorly on the multi-object tasks we consider. 
To apply Diffusion Classifier to our setting, we directly condition the generative model using compound prompts (e.g., “a photo of a dog and a cat”), as detailed in A.5, which we refer to as the Diffusion Classifier baseline in our experiments. We also further explore a single object variant of Diffusion Classifier for this compound setting, where we condition the generative model on individual concepts (e.g., “a photo of a dog”) and select the two classes with the highest score. Both methods are detailed in A.5. Since the Diffusion Classifier Variant uses individual concepts to fit multi-concept images, it has worse performance than the Diffusion Classifier baseline. --- Rebuttal Comment 1.1: Comment: Thank you for your response. Several of my concerns have been addressed. However, in the zero-shot setting, instead of implementing specific methods directly into the approaches, a common practice for handling multi-concept or multi-condition scenarios is to use prompt weighting or tools like the Compel package. I'm curious whether adopting such methods would lead to worse performance. As mentioned above, I will maintain my current score. --- Reply to Comment 1.1.1: Comment: Thank you for engaging in the discussion and for your thoughtful question! As suggested by the reviewer, we have added an additional comparison to see whether prompt weighting could solve the zero-shot perception task, using the Compel package. Specifically, we applied prompt weighting to the compound prompts as follows:

1. “a photo of a cat++, a dog, and a rabbit”
2. “a photo of a cat, a dog++, and a rabbit”
3. “a photo of a cat, a dog, and a rabbit++”

The idea is that if the image contains specific concepts (e.g., a cat and a dog), then the prompts “a photo of a cat++, a dog, and a rabbit” and “a photo of a cat, a dog++, and a rabbit” are expected to result in lower denoising error (higher likelihood) than the prompt “a photo of a cat, a dog, and a rabbit++”.
To determine which two objects are present in the scene, we choose the two prompt-weighted compound prompts that have the lowest denoising error. We report the zero-shot perception accuracy in the table below. It can be seen that using Compel results in worse performance compared to our proposed approach. This suggests that simple prompt weighting may not be sufficient for effective multi-concept inference in this setting.

| Models | Accuracy |
| ----------- | ----------- |
| Diffusion Classifier | 63.8 |
| Compel (per reviewer's suggestion) | 35.2 |
| Ours | 80.3 |
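The denoising-error ranking that underlies this comparison (and Diffusion Classifier-style inference in general) can be sketched with a mock noise predictor. This is an illustration only: `predict_noise` is a stand-in for the conditional diffusion model, and the forward noising process is deliberately simplified.

```python
import numpy as np

def rank_conditions(predict_noise, x0, conditions, n_samples=8, seed=0):
    """Score each candidate condition by its average denoising error
    (lower error ~ higher likelihood) and return conditions best-first."""
    rng = np.random.default_rng(seed)
    scores = {}
    for c in conditions:
        errs = []
        for _ in range(n_samples):
            t = rng.uniform(0.1, 0.9)                        # random noise level
            eps = rng.standard_normal(x0.shape)              # injected noise
            x_t = np.sqrt(1.0 - t) * x0 + np.sqrt(t) * eps   # simplified forward process
            errs.append(np.mean((predict_noise(c, x_t, t) - eps) ** 2))
        scores[c] = float(np.mean(errs))
    return sorted(conditions, key=scores.get)

# Mock predictor: a perfect denoiser for "cat", uninformative otherwise.
x0 = np.full(8, 0.5)
def mock(c, x_t, t):
    true_eps = (x_t - np.sqrt(1.0 - t) * x0) / np.sqrt(t)
    return true_eps if c == "cat" else np.zeros_like(x_t)

print(rank_conditions(mock, x0, ["dog", "cat", "rabbit"])[0])  # cat
```

The mock condition that best explains the injected noise scores exactly zero error, so it is ranked first, which is the same decision rule applied to the candidate prompts above.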
Reconstructing Cell Lineage Trees from Phenotypic Features with Metric Learning
Accept (poster)
Summary: This work addresses an important problem: reconstructing cell lineage trees, which provide insight into how diverse cell types arise from a single progenitor. More to the point, the authors set an additional challenge, which is to use transcriptomic data to construct cell lineage trees; this sets them apart from the classical lineage inference methods that rely on genetic barcoding, CRISPR-based lineage tracing, or phylogenetic distance methods. In other words, they attempt to answer the question of whether high-content molecular phenotypes give enough information for tree reconstruction, and in doing so they define a novel method that can be used to build cell lineage trees using such data. The crux of the proposed method is to rely on the traditional framework of distance-based lineage reconstruction, which is a known NP-hard problem for which many approximation algorithms exist, such as the neighbor-joining algorithm, which they also use in their work. The novelty, however, consists in shifting the problem to a metric-learning one, whereby the idea is to learn an embedding function that maps data points to a space where distances approximate an additive metric. By doing so, the intuition is that the task of tree construction will be facilitated, if quartet constraints -- which are crucial for accurate lineage inference -- are well learned or provided. In an ideal scenario where all tree quartets are known, the approach should work near-perfectly. However, in more realistic settings where many quartets must be inferred, errors can propagate, and the final tree may only be an approximation of the true lineage structure. In summary, the metric learning problem relies on the definition of two loss terms that strive to enforce the four-point condition (or quartet constraint).
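The four-point condition that the two loss terms target is easy to state concretely: for any four points under an additive (tree) metric, the two largest of the three pairwise-sum combinations must be equal. A minimal checker, assuming only a plain pairwise-distance lookup (not CellTreeQM's code):

```python
def satisfies_four_point(d, a, b, c, e, tol=1e-9):
    """Check the four-point (quartet) condition for points a, b, c, e,
    where d(x, y) returns a pairwise distance: of the three sums
    d(a,b)+d(c,e), d(a,c)+d(b,e), d(a,e)+d(b,c), the two largest must tie."""
    sums = sorted([d(a, b) + d(c, e), d(a, c) + d(b, e), d(a, e) + d(b, c)])
    return abs(sums[2] - sums[1]) <= tol

# Distances induced by the additive quartet tree ((a,b),(c,e))
# with unit branch lengths: within-pair distance 2, cross distance 3.
tree_d = {frozenset(p): v for p, v in {
    ("a", "b"): 2, ("c", "e"): 2,
    ("a", "c"): 3, ("a", "e"): 3, ("b", "c"): 3, ("b", "e"): 3}.items()}
lookup = lambda x, y: tree_d[frozenset((x, y))]
print(satisfies_four_point(lookup, "a", "b", "c", "e"))  # True
```

Perturbing any one distance (e.g., setting d(a, c) to 2.5) breaks the tie between the two largest sums, and the check fails — which is exactly the violation the learned embedding is trained to avoid.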
In addition, the authors propose a regularization term to prevent the learned embedding from drifting too far away from the original data, and a gating mechanism (a sort of feature selection mechanism) that emphasizes only lineage-relevant features of the high-dimensional transcriptomic data they use. Experiments are mainly devoted to verifying the benefits of the proposed embedding method, rather than comparing the overall approach to the state of the art. They then compare to traditional metric learning methods inspired by contrastive learning losses. Since CellTreeQM also uses a metric-learning approach, comparing against standard loss functions provides a fair assessment of whether their tree-based additivity loss truly improves tree reconstruction. ## POST REBUTTAL Thank you for the rebuttal and the discussions. I maintain my positive score for this paper. Claims And Evidence: I think that most claims are partially supported by evidence in this work. * Claim 1: inferring or reconstructing the cell lineage tree from features measured on the individual cells is an important challenge that they address in this work. This is the essence of the proposed method, which uses transcriptomic data. * Claim 2: the open question the authors address is whether high-content molecular phenotypes give enough information for tree reconstruction. The experiments provide a positive answer to this question. * Claim 3: CellTreeQM efficiently reconstructs lineage structures under weak supervision and limited data, providing a scalable framework for uncovering cell lineage relationships. This is supported by the experiments in section 7. Methods And Evaluation Criteria: Let me separate methods from evaluation. On the methodological side, the proposed idea to project high-dimensional data into a latent representation that preserves an appropriate distance relationship between data points is appropriate, relevant and novel, to the best of my knowledge.
The proposed methodology is clearly exposed, except for one detail. The overall narrative of section 4.1 is to perform a joint optimization of tree construction and metric learning, as defined in the last optimization expression at the end of the section. However, the proposed method is *not* a joint optimization: first, the authors propose a method to learn an embedding, and subsequently, this embedding is used within the distance-based lineage reconstruction formulation and fed to the well-known neighbor-joining algorithm. The authors take steps to ensure that the learned distances conform to a tree metric, which helps mitigate the issue of separating the embedding and tree reconstruction steps. However, it still does not fully jointly optimize the tree and the embedding function together. Instead, it shapes the embedding space to be more tree-like before applying a separate tree reconstruction algorithm. This is a reasonable approximation, but not a true joint optimization approach, which would require incorporating the actual tree inference step directly into the learning process. On the experimental side, the authors do a good job in evaluating the proposed embedding against that learned through traditional contrastive losses. Although the selected metric learning baselines make sense for their approach, a more complete experimental evaluation could have included comparisons to state-of-the-art trajectory inference methods, which would help show how well CellTreeQM recovers lineage structure compared to methods that infer differentiation trajectories. Since the paper already discusses trajectory inference in Appendix C, a quantitative comparison would strengthen their claims. Moreover, instead of just using the neighbor-joining algorithm, they could have tested alternative distance-based tree reconstruction methods, which could have revealed whether their learned embeddings work well across different methods.
Finally, if a dataset existed that had both phenotypic features and genetic barcodes, they could compare their phenotype-based tree to a gold-standard lineage tree inferred from CRISPR mutations, which could show whether their learned embeddings truly capture lineage relationships or if they primarily reflect phenotypic similarity. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, I checked the experimental design and analyses, and they are appropriate in my opinion. As mentioned above, the authors mainly focus on metric learning, and compare their proposed learned embeddings to those learned with alternative contrastive losses, showing that the key idea of enforcing the quartet constraints is indeed important. They do so in a variety of settings, for both synthetic and real data. Supplementary Material: Yes, most of them, but I paid particular attention to Appendix A and Appendix C. Relation To Broader Scientific Literature: This is a multidisciplinary project, lying at the intersection of machine learning and biology. I think the authors did a very good job in the main paper, and the interested reader can find much more in the appendix, especially Appendix A (which is a short introduction to the context of their study) and Appendix C. Essential References Not Discussed: N/A Other Strengths And Weaknesses: This is a good work; the only comments I outlined above are: * Method: pay attention to the claim of a joint optimization problem, versus breaking the problem in two stages.
* Experiments: while it is valid to compare the proposed method to metric learning baselines, I think this work would be much more valuable to the biology community if the experimental section were extended to cover 1) alternative tree construction algorithms (not only neighbor-joining), and 2) more end-to-end comparisons, although the latter is hindered by the probable absence of relevant datasets that would allow comparing the phenotype-based tree to a gold-standard lineage tree inferred from CRISPR mutations. Other Comments Or Suggestions: N/A Questions For Authors: * Can you please provide clarifications about the remark on a joint optimization problem vs. breaking the problem into first learning an appropriate embedding and then applying a well-known tree construction algorithm? Did I misinterpret your approach? If not, can you maybe add some comments about the sub-optimality of the resulting lineage reconstruction when compared to a truly joint optimization approach? * Would it be possible to add experiments that use an alternative tree construction algorithm than neighbor-joining? You may just keep everything you've learned through the embedding, and apply another technique to further confirm the versatility of the learned embeddings. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Concern 1: Joint optimization claim ### Response We thank the reviewer for recognizing the importance of the cell lineage reconstruction problem and for raising this valuable point about our optimization formulation. We agree that Section 4.1 could be clearer in distinguishing between joint and staged optimization strategies, and we appreciate the opportunity to clarify our intent. As the reviewer correctly notes, CellTreeQM adopts a two-stage framework: 1. Learn an embedding space using quartet-based constraints that promotes tree-like geometry; 2. Apply a standard phylogenetic algorithm (e.g., Neighbor-Joining) to reconstruct the tree from the learned pairwise distances. Although staged in practice, we emphasize that **most of the tree inference is effectively encoded in the embedding step**. Once the embedding space approximates tree-additive distances, standard reconstruction methods reliably recover the topology (as shown in Appendix A.2, line 742). Figure 4 further illustrates that both Train RF and Test RF steadily decrease during training, indicating that the learned representation is progressively converging toward a tree-consistent structure. This is conceptually aligned with heuristic quartet-based methods (e.g., Quartet Puzzling), which also tackle the NP-hard tree reconstruction problem by optimizing over informative subsets of constraints. To better reflect our approach, we updated the manuscript to clarify that we revise the objective function $$\min_{f, T} \|D(f(x)) - D_T\|_2^2 + \lambda \Omega(f),$$ into the quartet-based formulation $$\min_{f}\sum_{q \in \mathcal{Q}}\mathcal L( D\bigl(f(x_q)\bigr), D_{T_0}\bigl(x_q\bigr))+\lambda \Omega(f),$$ where $\mathcal{Q}$ is the set of known quartets and $D_{T_0}$ is the pairwise distance for the known quartets. When $T$ is known, this objective reduces to a supervised metric learning problem.
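A quartet-level loss of the kind the formulation above sums over can be sketched as a hinge on the four-point condition. This is our illustrative variant, not CellTreeQM's exact loss: for a quartet with known split ((a,b),(c,e)), the within-pair distance sum should fall below both cross sums by a margin, and the two cross sums should tie.

```python
def quartet_loss(xa, xb, xc, xe, margin=1.0):
    """Hinge-style quartet loss for the known split ((a,b),(c,e)):
    push the within-pair sum below both cross sums by a margin,
    and pull the two cross sums together (four-point condition)."""
    dist = lambda u, v: sum((ui - vi) ** 2 for ui, vi in zip(u, v)) ** 0.5
    s_in = dist(xa, xb) + dist(xc, xe)   # within-pair sum
    s1 = dist(xa, xc) + dist(xb, xe)     # cross sum 1
    s2 = dist(xa, xe) + dist(xb, xc)     # cross sum 2
    return (max(0.0, s_in - s1 + margin)
            + max(0.0, s_in - s2 + margin)
            + abs(s1 - s2))

# 1-D embeddings already consistent with the split ((a,b),(c,e)):
print(round(quartet_loss((0.0,), (0.1,), (10.0,), (10.1,)), 6))  # 0.0
```

Embeddings that violate the split (e.g., pairing a with c instead of b) incur a large positive loss, which is the signal that drives the embedding toward tree-additive geometry.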
However, as we highlight in the manuscript, full lineage trees are often unavailable in biological datasets. Instead, researchers may have access to partial lineage information—such as clade-level groupings—while finer substructures remain uncertain. In these settings, we found quartet constraints offer a flexible and localized way to encode supervision and capture tree-additive properties. ## Concern 2: comparison between trajectory inference and cell lineage reconstruction ### Response As discussed in our response to Reviewer ekW4’s Concern 1, trajectory inference and lineage reconstruction address fundamentally different problems. To further illustrate this distinction, we conducted additional experiments using the C. elegans Small dataset. Using Monocle3, we constructed a principal graph from unsupervised clusters and colored cells by lineage annotation ([figure](https://anonymous.4open.science/api/repo/celltreeqm-rebuttal-7D65/file/monocle.png?v=b00073b0)). The resulting graph shows that many terminal lineages are incorrectly represented as internal nodes, highlighting a mismatch with the known lineage structure. Because Monocle3 cannot incorporate known labels into graph construction, we also computed lineage centroids and built a minimum spanning tree (MST) to model their relationships ([HotSpot](https://anonymous.4open.science/api/repo/celltreeqm-rebuttal-7D65/file/mst.png?v=431c273e), [ASCII](https://anonymous.4open.science/api/repo/celltreeqm-rebuttal-7D65/file/mst_ascii.png?v=f4902aed)). Again, several terminal lineages are placed as internal nodes. The HotSpot view shows only the densely connected dots, while the ASCII view shows the full MST. These results confirm that standard trajectory inference methods do not accurately reflect true lineage trees and are also not directly comparable to lineage reconstruction.
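The centroid-MST construction used in this comparison can be reproduced with a short Prim's-algorithm sketch over pairwise centroid distances. This is an illustration only; the toy `centroids` below stand in for the per-lineage expression means computed from the data.

```python
def centroid_mst(centroids):
    """Prim's algorithm: return the minimum-spanning-tree edges over the
    complete graph whose weights are Euclidean distances between centroids."""
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
    n = len(centroids)
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        # cheapest edge from the current tree to any remaining node
        i, j = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(centroids[e[0]], centroids[e[1]]))
        in_tree.add(j)
        edges.append((i, j))
    return edges

# Three toy centroids on a line: the MST chains them 0-1-2.
print(centroid_mst([(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]))  # [(0, 1), (1, 2)]
```

Note that, as the rebuttal observes, an MST over centroids makes every centroid a tree node, so terminal lineages can end up as internal nodes — unlike a phylogeny, where observed taxa sit only at the leaves.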
## Concern 3: Limited tree reconstruction methods ### Response We appreciate the suggestion to evaluate additional reconstruction algorithms beyond Neighbor-Joining (NJ). We assessed four alternatives—UPGMA, FastME, Ward, and single linkage—on the C. elegans Small dataset ([figure](https://anonymous.4open.science/api/repo/celltreeqm-rebuttal-7D65/file/reconstruct_method.png?v=d604180e)). Across all methods, CellTreeQM consistently achieves lower RF distances on both train and test sets, demonstrating strong learning and generalization ability. The radar plot shows relative RF improvement across all five reconstruction algorithms compared with the Triplet and Quadruplet baselines. Interestingly, Ward’s method slightly outperforms NJ, despite NJ’s theoretical alignment with additive trees. This suggests our embeddings may better support variance-based clustering in certain settings. We plan to investigate further in future work. ## Concern 4: Lack of gold-standard lineage tree comparison ### Response We thank the reviewer for this important point. Please see our response to vktx Concern 3. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and the clarifications. I keep my positive score.
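For context, the Robinson-Foulds (RF) distance reported in these comparisons counts the clades that differ between two trees; a minimal rooted-tree sketch, using nested tuples as an illustrative encoding (not the authors' implementation):

```python
def clades(tree):
    """Non-trivial clades (frozensets of leaf names) of a nested-tuple tree."""
    found = set()

    def walk(node):
        if not isinstance(node, tuple):          # leaf
            return frozenset([node])
        leaves = frozenset().union(*(walk(child) for child in node))
        found.add(leaves)
        return leaves

    root = walk(tree)
    found.discard(root)                          # the full leaf set is trivial
    return found

def rf_distance(t1, t2):
    """Robinson-Foulds distance: clades present in one tree but not the other."""
    return len(clades(t1) ^ clades(t2))
```

Lower RF means the reconstructed tree shares more clades with the reference lineage; the unrooted bipartition version used in practice follows the same symmetric-difference idea.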
Summary: This paper introduces **CellTreeQM**, a novel deep learning-based method for reconstructing cell lineage trees using single-cell RNA sequencing data. The method leverages a **Transformer architecture** combined with **metric learning** to optimize the geometry of an embedding space, enabling accurate reconstruction of lineage trees even with limited supervision or noisy data. Unlike traditional approaches relying on CRISPR barcoding or heuristic distance-based methods, CellTreeQM explicitly formulates lineage reconstruction as a metric learning problem and incorporates **triplet and quadruplet loss functions** to enforce tree-like relationships in the embedding space. The authors also establish a benchmark for lineage reconstruction and demonstrate that CellTreeQM outperforms existing methods in terms of accuracy, robustness, and scalability. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. The Four-Point Condition ensures that a distance matrix can be realized as a tree, but whether it is applicable to the current task is unclear, as there has been no previous exploration in this regard. Experimental Designs Or Analyses: 1. This paper's experimental section only discusses comparisons with Quadruplet and Triplet, but in reality, there are many state-of-the-art (SOTA) cell lineage methods that have not been compared. 2. Many trajectory inference methods for constructing cell lineage from single-cell expression data ultimately yield tree-shaped results. Could we perform a simple comparison with these methods, e.g., Monocle3, Slingshot? 3. The Transformer encoder module has not been ablated, and while Transformers are well-known for their effectiveness on large datasets, methods like VAE may be more effective on the current small dataset. Supplementary Material: Yes. Relation To Broader Scientific Literature: 1.
**Advancements in Cell Lineage Reconstruction**: The paper builds upon existing methods, which use CRISPR barcoding for lineage tracing, by introducing a deep learning-based approach that leverages **metric learning** and **Transformer architectures** to improve accuracy and robustness in lineage reconstruction. 2. **Integration with Metric Learning**: It extends the application of **metric learning** techniques, commonly used in classification and clustering tasks, to the domain of cell lineage reconstruction, enhancing the ability to capture tree-like relationships in high-dimensional data. Essential References Not Discussed: No. Other Strengths And Weaknesses: 1. Why wasn't the effect of feature gates compared in the weakly supervised scenario? Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ## Concern 1: Lack of comparisons to SOTA methods > There are many state-of-the-art (SOTA) cell lineage methods that have not been compared. ### Response Thank you for raising the concern regarding comparisons to state-of-the-art (SOTA) cell lineage methods. Below, we clarify why existing methods are not directly comparable to our approach. **Phenotype-Based vs. Genotype-Based Cell Lineage Methods** In the second paragraph of the Introduction and Section 3, we review computational methods for cell lineage reconstruction, which primarily rely on genotype information. In contrast, we focus on phenotype-based lineage reconstruction—a fundamentally different and much more challenging problem setting. To our knowledge, and as noted by Reviewers RR5y and FYFt, CellTreeQM is the first work to formulate and solve this specific task. We argue, based on both observed empirical data and biological process considerations, that the standard full feature space (i.e., the transcriptome) does not directly contain cell lineage information. Current time-trajectory algorithms are not focused on recovering historical cell lineage relationships. Rather, these algorithms try to find a piecewise one-dimensional parameterization of cells in the transcriptome feature space as a model of “phenotype dynamics”, solely based on trivial metrics (such as l1 and l2 distance); i.e., how the phenotypes of the cells might dynamically evolve through cell system processes without a reference to underlying cell lineage processes. In fact, phenotype dynamics might occur without any cell division. **Baseline Selection** We chose triplet and quadruplet contrastive losses as baselines because these are widely used in related contrastive learning tasks. As we show, CellTreeQM’s tailored assumption about the structure of cell lineage data leads to superior performance compared to these generic approaches.
**Clarifying the Role of Trajectory Inference** By “SOTA,” the reviewer may also be referring to trajectory inference methods. (Other methods that operate over sequence data are not applicable to the gene expression data studied in our paper.) While trajectory inference and lineage reconstruction are closely related, they address distinct goals. We provide a detailed comparison in Appendix C.1 (moved from the main text due to space constraints). Indeed, Reviewers RR5y and FYFt also noted the conceptual differences between these two tasks. - Methodological Differences Trajectory inference is typically an **unsupervised** pipeline comprising (1) data preprocessing, (2) dimensionality reduction, (3) cell clustering, and (4) learning a graph representation of cluster relationships. Such methods usually yield average progression trends rather than a cell-by-cell hierarchical lineage. - Mismatch in Assumptions Trajectory inference assumes the cells dynamically change within the full feature space, and the goal is to trace these dynamics. The dynamics can be decoupled from cell lineage history. If a dataset contained cells at every stage of cell development, the algorithms could approximate the cell lineage histories, assuming the local changes in phenotypes are due to cell lineage development. However, as noted in various examples in the paper, even under this setting, the full feature space cell distributions typically do not reflect cell lineage history, requiring additional learning as we propose in our paper. ## Concern 2: No ablation of the Transformer encoder and comparison to VAE ### Response We thank the reviewer for considering the model architecture of CellTreeQM. We would like to clarify that the motivation and empirical support for using a Transformer encoder are provided in Section 5.4. Specifically, we compared fully connected (FC) networks and Transformer encoders on both the C. elegans Small and Large datasets, and found that Transformers consistently outperformed FCs, as shown in Table 5. While we acknowledge the relevance of concurrent works such as TreeVAE, which focus on generative modeling with hierarchical latent variables, our goal differs. CellTreeQM does not aim to model the full generative process. Rather, we assume that lineage-related signals lie in a subspace of the observed features. Instead of relying on reconstruction losses typical in VAE-based models, we introduce the Deviation Loss, $\Omega$, to prevent the learned embedding from drifting too far from the original data. ## Concern 3: No results of feature gating for the weakly supervised scenario ### Response We apologize for the confusion. Due to space constraints, we omitted the results of feature gating in the weakly supervised setting from the main text. However, these results are included in Tables 10–14 of the appendix. While feature gating provides only marginal gains on real datasets, these results demonstrate that the gating mechanism can still contribute positively, particularly in settings where the signal-to-noise ratio is low.
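The Deviation Loss $\Omega$ mentioned under Concern 2 admits a simple distance-preservation reading (keep embedded pairwise distances close to input-space distances); the following is a hypothetical sketch of that reading, not the paper's exact definition:

```python
import math
from itertools import combinations

def deviation_loss(x, z):
    """Penalize drift of embedded pairwise distances from the input-space
    distances, one plausible form of a deviation regularizer.

    x: original feature vectors; z: their embeddings f(x).
    """
    pairs = list(combinations(range(len(x)), 2))
    total = sum((math.dist(z[i], z[j]) - math.dist(x[i], x[j])) ** 2
                for i, j in pairs)
    return total / len(pairs)
```

Unlike a VAE reconstruction loss, this term never requires decoding back to the feature space; it only constrains the geometry of the embedding.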
Summary: The authors propose an algorithm CellTreeQM to reconstruct lineage relationships from phenotype data (unlike genotype data, which has been the main focus in the area of lineage reconstruction). The main new idea in the paper is the use of a loss function based on the four-point condition, which ensures that the embedding eventually resembles an unrooted tree. At the same time, this four-point loss may create a distortion, pulling the model away from fitting the original data; so the authors have an additional distortion loss term as well. This careful design of the loss function is in my opinion the main new idea in the paper. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: While the authors discuss relevant theory background and how the tree reconstruction problem relates to existing theory, they do not appear to claim any significant new theoretical results (no new theorems/proofs). Experimental Designs Or Analyses: The experiments on the C. elegans dataset are adequately explained in the supplement. Supplementary Material: Yes, I have read the supplement, though I have not verified the calculations therein (but they appear correct). Nit: Figure 10: "Brownie"->"Brownian" Relation To Broader Scientific Literature: Adequate (but also see comment below re metric embeddings) Essential References Not Discussed: While the references on tree and random forest embedding are discussed in the literature cited, it would be appropriate to discuss the connection between them directly, as I can imagine a randomized tree embedding (Bartal et al. could be cited) being used instead of just a tree embedding. The output would be a probability distribution over trees as opposed to a single tree. Other Strengths And Weaknesses: The paper uses reasonably deep theory background to solve an interesting problem -- lineage reconstruction. However, the authors have used just one dataset C.elegans for their experiments and do not prove any new theory results either.
This makes me wonder if the contribution crosses the threshold for acceptance to ICML. Hence my rating. Other Comments Or Suggestions: See question about using random forests instead of trees above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
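The four-point condition referenced in this review is easy to check numerically: a metric is tree-like only if, for every four leaves, the two largest of the three pairwise-sum combinations are equal. A minimal sketch:

```python
def satisfies_four_point(d, i, j, k, l, tol=1e-9):
    """Four-point condition for leaves i, j, k, l of distance matrix d:
    of the three pairwise sums, the two largest must be (approximately) equal.
    """
    sums = sorted([d[i][j] + d[k][l],
                   d[i][k] + d[j][l],
                   d[i][l] + d[j][k]])
    return sums[2] - sums[1] <= tol
```

The smallest of the three sums also identifies the quartet topology (which pair of leaves sits on the same side of the internal edge), which is what the quartet loss exploits.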
Rebuttal 1: Rebuttal: ## Concern 1: No theoretical results > they do not appear to claim any significant new theoretical results (no new theorems/proofs). ### Response While we do not present new theorems or proofs, our contribution lies in empirically demonstrating that the four-point condition serves as a strong prior for learning a generalizable embedding function—one that can correctly predict unseen quartet topologies. We believe we demonstrate two new ideas: (1) with respect to learning tree-graphs, jointly learning the whole tree might be limited both algorithmically and by available supervised knowledge, but decomposition into quartet sub-trees allows utilization of partial information; (2) rather than using the standard contrastive learning paradigm, explicitly utilizing a tree-graph metric theorem significantly improves the learning results. Following this comment, we have added additional test data (vktx Concern 3) and also additional phylogeny methods (FYFt Concern 3) as well as other single cell trajectory methods (FYFt Concern 2) for comparison. We note that actual empirical data with validated annotated lineages are extremely rare, but we understand that a greater degree of testing would be desirable. ## Concern 2: Missing literature > While the references on tree and random forest embedding are discussed in the literature cited, it would be appropriate to discuss the connection between them directly, as I can imagine a randomized tree embedding (Bartal et al. could be cited) being used instead of just a tree embedding. The output would be a probability distribution over trees as opposed to a single tree. ### Response Thank you for this pointer to background literature that we were not aware of. Indeed, an observed empirical distance matrix could be considered a graph metric and bounded in expectation by a distribution over tree graphs.
This distribution might be mapped to a distribution of possible cell-lineage tree graphs, in a framework similar to Bayesian phylogenetic algorithms. In our case, the assumption is that the original feature space and the resulting empirical metric do not reflect the lineage tree-graph, even as, say, a dominating metric. However, it might be possible to have a construction that computes a graph metric from the learned latent space and then associates it with a distribution over a tree metric ensemble. One might have to develop additional criteria for the tree metric, such as “concentration”, to incorporate into a learning scheme. While the full exposition as above is out of the scope of our current work, we have added the following references to our background literature: - Yair Bartal. Probabilistic approximations of metric spaces and its algorithmic applications. In Proceedings of the 37th IEEE Symposium on Foundations of Computer Science (FOCS), pages 184–193, 1996. - Yair Bartal. On approximating arbitrary metrics by tree metrics. In Proceedings of the 30th ACM Symposium on Theory of Computing (STOC), pages 161–168, 1998. ## Concern 3: Limited dataset > However, the authors have used just one dataset C.elegans for their experiments and do not prove any new theory results either. This makes me wonder if the contribution crosses the threshold for acceptance to ICML. Hence my rating. ### Response We thank the reviewer for this important point. In response, we have significantly expanded our experiments to include two additional reference datasets (including a CRISPR-based lineage tracing dataset from Yang et al., 2022), four more phylogenetic methods, and two trajectory inference methods. For the CRISPR dataset, we used cell lineages 3435_NT_T1 (151 cells) and 3435_NT_T6 (91 cells), both derived from mESC clone 1D5 with non-targeting sgRNA. Ground truth trees were reconstructed using Cassiopeia.
Due to ambiguity in the tree structure, we sampled binary trees from the full lineage. We conducted weakly supervised learning with a high-level partition prior at levels 1, 2, and 3, with three repetitions. The detailed results are shown in [3435_NT_T1 BarPlot](https://anonymous.4open.science/r/celltreeqm-rebuttal-7D65/3435_NT_T1_barplot.png), [3435_NT_T1 Table](https://anonymous.4open.science/r/celltreeqm-rebuttal-7D65/3435_NT_T1_table.png), [3435_NT_T6 Barplot](https://anonymous.4open.science/r/celltreeqm-rebuttal-7D65/3435_NT_T6_barplot.png), [3435_NT_T6 Table](https://anonymous.4open.science/r/celltreeqm-rebuttal-7D65/3435_NT_T6_table.png). As in the C. elegans datasets, we show that significant learning is achieved and that CellTreeQM obtains better results than the baseline approaches, especially in generalizing to unknown quartets. We note that due to the limited time for this response, we have not been able to tune the model for these datasets, and we expect better performance might be achieved with additional work. Please see FYFt Concern 2 for single cell trajectory methods and FYFt Concern 3 for phylogeny methods. --- Rebuttal Comment 1.1: Comment: Upgraded my score in light of more experiments.
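The binary-tree sampling step mentioned in the response above (resolving ambiguous multifurcations before evaluation) can be sketched as repeated random pairing of a node's children; an illustrative stand-in, not the exact procedure used:

```python
import random

def resolve_polytomy(children, rng=random):
    """Randomly resolve a multifurcating node into a binary subtree by
    repeatedly joining two randomly chosen children into a new internal node.
    """
    nodes = list(children)
    while len(nodes) > 1:
        i, j = sorted(rng.sample(range(len(nodes)), 2))
        pair = (nodes[i], nodes[j])
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)]
        nodes.append(pair)
    return nodes[0]
```

Averaging RF over several such random resolutions is one way to score against a partially resolved reference tree.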
Summary: This paper poses the reconstruction of lineage trees from phenotypic data as a metric learning problem, and devises a contrastive loss function to learn a metric given partial information about the topology of the lineage tree. The authors test this algorithm on ground truth data from C. elegans, as well as simulated data of branching random processes, and find that it performs quite well relative to reasonable baselines. Claims And Evidence: I think the most important claim of this paper is that the presented problem is indeed an important one, and on this I wholeheartedly agree. The cited literature is thorough and wonderfully bridges the biological motivation with the computational problem at hand. Phenotypic information is widely available, yet it is important to have methods such as these that bridge those with more targeted approaches and prior knowledge about cell lineage. I present this in the claims section rather than the relation to broader literature because it is a key source of originality in this paper to identify and tackle a problem that has not been addressed by several machine learning papers already. There are many more specific claims made in the paper, and for the most part, these are well supported by the experiments presented. The supervised and semi-supervised settings are particularly convincing. I think the one overstated claim in this paper is that this method is useful in unsupervised settings. While the authors do remark that this is preliminary, it is still mentioned in the abstract as a contribution of this paper. However, the unsupervised setting is not thoroughly evaluated, and little attempt is made to compare with the large number of hierarchical clustering approaches that are widely used in single cell analysis. Nonetheless, I do not think the unsupervised setting is the main contribution of this paper.
Methods And Evaluation Criteria: The combination of the branching diffusion process simulation with ground truth lineage information in flatworms is nice to see. I furthermore appreciate the choice of quantitative evaluations on these two data sets. These are well suited to the problem at hand. Theoretical Claims: No novel theoretical claims are made. Experimental Designs Or Analyses: The experimental designs are sound. Supplementary Material: The supplementary material presents a much more thorough and clear description of the model and particulars of the data set. These materials make an already clear manuscript even clearer. Relation To Broader Scientific Literature: The construction of lineage trees is an important problem in developmental biology, yet it is rare to have experimentally validated trees down to the cellular level. Thus, there is a real need for papers that can augment limited experimental data. I particularly appreciate the starting point of this manuscript, which recognizes that phenotypic information does not correlate well in certain cases with lineage. Thus, this paper targets an important problem in biology with novel methods and ideas within machine learning, pioneering an important path forward for both fields. Essential References Not Discussed: The literature is very thorough already. I might suggest papers to further augment the case that phenotypic information alone is insufficient to reconstruct the lineage, as in the purely unsupervised case. https://www.nature.com/articles/s41586-021-04237-0 presents one example. Other Strengths And Weaknesses: The principal strength of this work is in its ideation of a novel research direction for a real problem in developmental biology, presented by the available methods and the perhaps surprising dissimilarity between phenotype and lineage in single-cell data. The authors apply interesting machine learning ideas in novel ways to address this problem.
I cannot identify many weaknesses in this paper. The ability of this method to work in purely unsupervised settings is somewhat untested, and in the settings for which other methods exist, comparisons are not made. Other Comments Or Suggestions: line 340: “known quartetes” -> “known quartets” line 316: "we training an model" -> "we train a model" Questions For Authors: I'm curious about the application of this method to settings in which we know the cell type lineage tree, but a particular dataset may have millions of cells. This is an example of partial information about the cellular tree. Would this method still be applicable? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: ## Concern 1: Overstated claim about unsupervised learning > Although acknowledged as preliminary, unsupervised performance is mentioned in the abstract. ### Response We appreciate the comment concerning the limited investigation of the unsupervised setting. We demonstrate the unsupervised setting with the Brownian motion dataset, where we are capable of filtering out independent noise during training, as demonstrated by the dashed lines in Figure 5. We agree that the investigation under the unsupervised setting is still preliminary, so we have reworded the abstract and other parts of the manuscript describing our unsupervised results. **In Abstract:** “we systematically explore supervised, weakly supervised, and unsupervised…” → “we systematically explore weakly supervised training settings at different levels of information…” **Section 7.4:** From: “Although the problem is more difficult—and in practice, purely unsupervised lineage inference from phenotypes alone can be error-prone—we find that CellTreeQM can still learn a representation that partially respects tree-metric properties. See details in §F.5.” To: “Here, we used a data-driven estimate of quartet order in the latent space. CellTreeQM can still learn a representation that partially respects tree-metric properties in the limited setting of the simulation data, but performance on real data was not significant, suggesting better strategies are needed. See details in §F.5.” **Discussion section:** From: “Empirical results in supervised, weakly supervised, and unsupervised settings show that CellTreeQM considerably improves…” To: “Empirical results in supervised and weakly supervised settings show that CellTreeQM considerably improves…” ## Concern 2: Comparing to clustering methods > Not thoroughly evaluated or compared to clustering methods widely used in single-cell analysis.
### Response As noted by the reviewer, we did not compare to the various unsupervised clustering methods widely used in the single-cell literature. One key reason is that these clustering methods, as currently used, are applied to whole transcriptomes. Carefully examined biological processes, such as in C. elegans, show that phenotypic types are not the same as cell-lineage clades (also, we appreciate all the reviewers mentioning this fundamental concept in their comments). In fact, this disjunction is the central problem in cell lineage estimation. While we have not systematically studied clustering methods, we have now added examples of applying cell trajectory methods; we note that these methods work in the original feature space and attempt to create a piecewise one-dimensional (i.e., tree-graph) parameterization of the full phenotypic state, rather than trying to extract lineage-specific information. Please see our response to ekW4 Concern 1 and FYFt Concern 2. ## Concern 3: Missing literature > I might suggest papers to further augment the case that phenotypic information alone is insufficient to reconstruct the lineage, as in the purely unsupervised case. https://www.nature.com/articles/s41586-021-04237-0 presents one example. ### Response We thank the reviewer for this paper; we have added this reference and revised the Key Challenges section to better reflect this comment. ## Concern 4: Scalability > I'm curious about the application of this method to settings in which we know the cell type lineage tree, but a particular dataset may have millions of cells. This is an example of partial information about the cellular tree. Would this method still be applicable? ### Response Thank you for this interesting question. We believe there are several approaches and barriers. First, as discussed above, typical cell types are not monophyletic; therefore, a cell-type lineage tree is likely to be a mixture of cell lineage trees.
This is shown in [this figure](https://anonymous.4open.science/api/repo/celltreeqm-rebuttal-7D65/file/cell_type-vs-cell_lineage.png?v=50c079c2). Nevertheless, we could use its hierarchy as prior knowledge to designate known quartets. Ideally, the latent space with an additive tree metric should be universal and therefore learnable from subsets of the millions of cells, perhaps using a strategy of random mini-batches. The key barrier will be the application of phylogeny algorithms. Even the heuristic algorithms are O(n^2)–O(n^3), which may be practically infeasible for millions of cells. Currently, large-scale phylogenies, such as viral isolate phylogenies, are built using a mixture of inferred “backbone” trees and insertion-based approaches. We hope to continue our studies to include extreme scale-up problems.
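The random mini-batch strategy suggested above for scaling can be sketched as sampling quartets within a batch of cell indices; a hypothetical illustration of the idea, not the authors' training loop:

```python
import random

def sample_quartets(n_cells, batch_size, n_quartets, rng=random):
    """Draw a mini-batch of cell indices, then quartets within the batch.

    Each training step touches only batch_size cells, so its cost is
    independent of n_cells; the O(n^2)-O(n^3) phylogeny reconstruction
    is deferred to a single final pass over the learned distances.
    """
    batch = rng.sample(range(n_cells), batch_size)
    quartets = [tuple(rng.sample(batch, 4)) for _ in range(n_quartets)]
    return batch, quartets
```

Restricting quartets to pairs of known clades (the partition prior) would be a straightforward refinement of this sampler.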
MedRAX: Medical Reasoning Agent for Chest X-ray
Accept (poster)
Summary: This paper introduces an AI agent for multimodal, multi-task chest X-ray (CXR) interpretation and analysis called MedRAX. This method leverages an LLM-controlled reasoning and acting (ReAct) loop, dynamically reasoning through multi-step queries and selecting pretrained, task-specific tools to complete each step. The authors also introduce a new benchmark of 2,500 medical queries, on which MedRAX outperforms existing generalist and domain-specific multimodal language models on a wide variety of tasks. ## Update after rebuttal I thank the authors for their detailed rebuttal, particularly the inclusion of quantitative results on existing benchmarks. While I appreciate the additional descriptions of methodology, this still does not meet the standard of reproducibility. I understand that the open-source implementation will facilitate reproducibility in practice, but it is critical for a scientific paper to communicate the methodology clearly so that reviewers and readers can understand and evaluate its validity. As much as I appreciate the ambition and forward-thinking nature of this paper, I feel that the technical description of methodology remains a major shortcoming -- I am left wondering what is happening "under the hood" at each stage beyond the high-level descriptions provided. I will maintain my original recommendation of Weak Reject. Claims And Evidence: If the goal is to claim superiority over existing state-of-the-art, then experimental validation appears sound but could be strengthened. The justification for creating a new benchmark was not entirely made clear – what specifically is lacking in existing medical (or CXR-specific) reasoning and analysis benchmarks? Further, for a model with such diverse capabilities, it is surprising to see performance reported on two benchmark datasets against just four baseline models.
I imagine this is due to the limited number of models capable of all tasks, but analysis could be performed on subsets of tasks or benchmarks could be chosen to facilitate comparison with more existing methods. Methods And Evaluation Criteria: The benchmark datasets used in this paper are appropriate for this unique and diverse multi-task setting. The proposed method is uniquely tailored to handling the suite of CXR analysis tasks considered. Theoretical Claims: N/A Experimental Designs Or Analyses: Experimental design appears sound, but it is difficult to evaluate such complex multi-task evaluation without more granular methods description or access to source code. Key methodological and implementation details range from completely missing to insufficiently described. Supplementary Material: No supplementary material was provided. Relation To Broader Scientific Literature: This is a unique and ambitious paper that may merit acceptance based on novelty alone. I imagine that this will be among the first of many future works to consider task-agnostic “agentic” approaches to healthcare data analysis, and I commend the authors for their foresight and quick experimentation. Essential References Not Discussed: This is a nascent space, and the authors sufficiently capture the few relevant prior studies. Perhaps AgentClinic [1] is worth mentioning, even though fully text-based, as an example of “agentic” AI for clinical use cases. While M4CXR [2] does not adopt an agentic approach, it is a relevant example of a multimodal foundation model for CXR analysis capable of a wide variety of tasks like MedRAX. [1] Schmidgall, Samuel, et al. "AgentClinic: a multimodal agent benchmark to evaluate AI in simulated clinical environments." arXiv preprint arXiv:2405.07960 (2024). [2] Park, Jonggwon, et al. "M4CXR: Exploring Multi-task Potentials of Multi-modal Large Language Models for Chest X-ray Interpretation." arXiv preprint arXiv:2408.16213 (2024). 
Other Strengths And Weaknesses: *Strengths*: - Highly unique and novel idea that is likely to garner significant attention and even be a cornerstone for future work. It is a forward-thinking approach to CXR interpretation - The paper is well-written and clearly organized with simple, effective visuals. *Weaknesses*: - This reads more like a tech report than a scientific paper – there is virtually no description of the core methodology. Nearly every line in Algorithm 1 requires concrete definition and detailed explanation. Certain key components of this should appear in the main text, but most implementation details should minimally appear in the Supplement (currently there is none!). - While I appreciate the work that went into curating a new benchmark, the justification for this needs to be clarified – what specifically is lacking about current benchmarks? - Similarly, while I am aware that only a select few models are even capable of all tasks considered in this setting, evaluation can be expanded to more benchmarks and more baseline methods. E.g., using established baselines could facilitate direct comparison with existing state-of-the-art on individual tasks or subsets of tasks. Other Comments Or Suggestions: This is a difficult paper to evaluate as a reviewer. I cannot in good conscience recommend acceptance for a paper that leaves so many core methodological details undescribed. However, if these details are clarified in the rebuttal (and, ideally, if evaluation is strengthened), then this is a very straightforward acceptance of what I could imagine becoming an influential paper. Minor comments: - Line 157 on RHS: Change “Llava-Med” -> “LLava-Med” - I would clarify the metric used in each table caption, even if it is mentioned elsewhere in the text. Questions For Authors: Algorithm 1 poses many questions about the method that go unanswered: 1. What does “Observe()” mean? 2. What is “Reason()”? Define this and explain how it is implemented. 3. 
What is “RequiresUserInput()”? How is this determined? 4. What is “GenerateUserPrompt()”? 5. How is it determined whether the agent can generate a response? 6. How is tool selection performed? Major questions: 7. What specifically is lacking about existing benchmarks that required the creation of a new one? 8. Can the authors provide additional evaluation on individual tasks or subsets of tasks in order to facilitate comparison with more relevant baselines? E.g., might it be possible to form comparisons on select tasks with M4CXR [2], MedVersa [3], or even Google’s MedPalm M [4] if using the same evaluation benchmarks as them? For a method with such diverse capabilities, the extent of evaluation feels underwhelming. 9. How does MedRAX perform compared to “specialized” models on individual tasks? This could be included as a gray row in each table to provide context for a reasonable upper bound on task performance. Minor questions: 10. What does it mean that “all questions underwent a quality check”? Was this performed by anyone with medical expertise? 11. This is mentioned in the Discussion: “Our initial observations suggest the importance of balanced tool utilization, where neither complete reliance on tools nor their complete absence produced optimal results.” This sounds fascinating but cannot be evaluated since no evidence was provided for this – what observations? Can the authors provide some quantitative (or other) analysis of this in the results? 12. Were any handcrafted prompts used in evaluation? What were they and how were they decided? [3] Zhou, Hong-Yu, et al. "A generalist learner for multifaceted medical image interpretation." arXiv preprint arXiv:2405.07988 (2024). [4] Tu, Tao, et al. "Towards generalist biomedical AI." Nejm Ai 1.3 (2024): AIoa2300138. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the insightful comments. We have worked hard to answer their concerns and think the suggestions have helped improve the clarity of our work. **Key Methodology.** The full details of Algorithm 1 are provided in response to `Reviewer KTkS`. Additionally, an anonymous GitHub repo is prepared at https://github.com/syaro1383/medrax **Prior Work.** We have added M4CXR and AgentClinic to the prior work section of the revised manuscript. **Need for ChestAgentBench.** The justification for making a new benchmark is provided in response to `Reviewer DGwp`. **Further Evals.** Thank you for suggesting an expanded evaluation with more benchmarks. We've conducted additional benchmarking to compare MedRAX with state-of-the-art models on two benchmarks. Model performances were obtained from Park et al. (2024). 1. MIMIC-CXR Radiology Report Generation This benchmark evaluates single-image chest X-ray radiology report generation on the MIMIC-CXR test set, which includes 3,858 images. It assesses the clinical accuracy of generated reports by analyzing the presence or absence of 14 observations of medical conditions using CheXbert.
- mF1-14: Micro-averaged F1 score for all 14 CheXbert observation labels - mF1-5: Micro-averaged F1 score for 5 key findings (cardiomegaly, edema, consolidation, atelectasis, pleural effusion) - MF1-14: Macro-averaged F1 score for all 14 labels - MF1-5: Macro-averaged F1 score for 5 key findings **Table A: Single-image performance on MIMIC-CXR test set** | Model | mF1-14 | mF1-5 | MF1-14 | MF1-5 | |-------|--------|-------|--------|-------| | LLM-CXR† | 36.0 | - | 21.1 | - | | RaDialog | - | - | 39.4 | - | | METransformer† | - | - | - | - | | DCL† | - | - | - | - | | PromptMRG | - | - | 38.1 | - | | LM-RRG | - | - | - | - | | Med-PaLM M 84B | 53.6 | 57.9 | 39.8 | 51.6 | | CheXagent* | 39.3 | 41.2 | 24.7 | 34.5 | | MAIRA-1 | 55.7 | 56.0 | 38.6 | 47.7 | | LLaVA-Rad | 57.3 | 57.4 | 39.5 | 47.7 | | M4CXR | 60.6 | 61.8 | 40.0 | 49.5 | | MedRAX | 79.1 | 64.9 | 34.2 | 48.2 | 2. SLAKE VQA Benchmark The SLAKE benchmark evaluates medical visual question answering using 114 chest X-ray test samples with close-ended questions in English. These questions typically focus on the presence or absence of abnormalities, anatomical identifications, and medical condition assessments. - Accuracy: Percentage of exact matches between model predictions and ground truth answers - Recall: Proportion of ground truth words present in the generated responses **Table B: Medical VQA performance** | Model | Accuracy | Recall | |-------|----------|--------| | RaDialog | 0.0 | 45.6 | | RadFM | 68.4 | 69.7 | | CheXagent | 71.1 | 73.2 | | M4CXR | 85.1 | 86.0 | | MedRAX | 90.35 | 91.23 | **Other Comments.** We have corrected "Llava-Med" to "LLaVA-Med" and added metric descriptions to all table captions. **Questions.** 1-9. Discussed above. 10. Following the initial generation of questions by GPT-4o based on Eurorad cases, we performed an automated quality verification, also utilizing GPT-4o. 
Specifically, the automated check evaluated each generated question and answer set for structural consistency (e.g., six-choice question with one correct answer), explicit grounding in the provided clinical and radiological context, and clear verifiability of the correct answer from the original Eurorad source case material. Any questions failing these criteria were automatically identified and excluded. We have provided more details on this verification procedure in the revised manuscript. 11. During development, we observed that heavily favouring only tool use could lead to rigid or incorrect outputs if a tool failed or misinterpreted, while discouraging tool use missed opportunities for leveraging specialized analysis. Finding a balance, where the agent reasons first but uses tools judiciously to complement and tune its reasoning, appeared to yield better results empirically. A formal quantitative analysis exploring this balance was outside the scope of this paper's experiments but is noted as valuable future work. 12. Yes, the prompt for MedRAX is as follows: > "Answer this question correctly using the chain of thought reasoning and carefully evaluating choices. Solve using your own vision and reasoning and then use tools to complement your reasoning." This prompt encourages the agent to combine its own reasoning ability and external tools to complement or refine initial assessments. The prompt worked better empirically during development. A formal quantitative comparison of prompting strategies is planned for future work.
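The micro- vs macro-averaged F1 distinction behind Table A's mF1/MF1 columns can be illustrated with a small sketch. The two labels and their counts below are made up for illustration only, not taken from the benchmark:

```python
# Illustrative sketch of micro- vs macro-averaged F1 (mF1 vs MF1 in Table A).
# The label names and TP/FP/FN counts are hypothetical, for illustration only.

def f1(tp, fp, fn):
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def micro_macro_f1(per_label_counts):
    """per_label_counts: dict mapping label -> (tp, fp, fn)."""
    # Macro: average the per-label F1 scores, each label weighted equally.
    macro = sum(f1(*c) for c in per_label_counts.values()) / len(per_label_counts)
    # Micro: pool TP/FP/FN across all labels, then compute a single F1,
    # so frequent labels dominate -- which is why mF1 and MF1 can diverge.
    tp = sum(c[0] for c in per_label_counts.values())
    fp = sum(c[1] for c in per_label_counts.values())
    fn = sum(c[2] for c in per_label_counts.values())
    return f1(tp, fp, fn), macro

counts = {  # hypothetical CheXbert-style labels
    "Pleural Effusion": (90, 10, 10),  # common finding, well predicted
    "Fracture": (1, 4, 5),             # rare finding, poorly predicted
}
micro, macro = micro_macro_f1(counts)
```

This mirrors the pattern visible in Table A, where a model can score high on micro-averaged F1 (dominated by common findings) while its macro-averaged F1 lags, because rare labels drag down the equal-weight average.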
Summary: The paper introduces MedRAX, an AI-driven agent for the interpretation of chest X-rays (CXRs). MedRAX integrates various specialized state-of-the-art chest X-ray analysis tools and multimodal large language models into a single unified framework. Unlike existing solutions that often operate independently, MedRAX dynamically utilizes these specialized components to handle complex medical queries without additional training. The authors propose a novel evaluation framework, ChestAgentBench, featuring 2,500 expert-validated questions across seven essential CXR interpretation categories. In comparison experiments, MedRAX significantly outperformed other general-purpose and specialized biomedical models across all assessed tasks, including detection, classification, localization, comparison, relationship understanding, characterization, and diagnosis. Overall, the study demonstrates that integrating structured reasoning with multimodal specialized tools enhances both accuracy and interpretability in medical imaging tasks, presenting MedRAX as a practical step towards clinical deployment of automated CXR interpretation systems. Claims And Evidence: The paper clearly presents the MedRAX framework and thoroughly supports its claims with experimental results, demonstrating substantial advantages over existing methods. The key ideas—structured tool-based reasoning, modular tool integration, and comprehensive benchmarking—are explicitly defined and clearly validated by experimental outcomes. Methods And Evaluation Criteria: The methods consist of integrating various specialized medical AI tools (such as visual question answering, segmentation, grounding, report generation, disease classification, and chest X-ray generation) into a modular, structured reasoning framework known as MedRAX. The authors utilize a Reasoning and Acting (ReAct) loop that iteratively breaks down complex medical queries into sequential analytical steps. 
This systematic approach aligns well with practical clinical workflows in radiology, where interpretation often involves multiple interdependent steps and reasoning based on various specialized analyses. Theoretical Claims: The paper provided does not include theoretical proofs or formal algorithmic claims that require rigorous mathematical validation. Experimental Designs Or Analyses: The experimental evaluation uses straightforward accuracy metrics (percentage correct answers), which is suitable given the benchmark’s multiple-choice format. The comparisons with existing state-of-the-art general-purpose and biomedical models are clearly defined and implemented using official code and recommended configurations. The evaluation procedure (including handling retries for invalid responses and using regex-based response parsing) appears clearly described and methodologically sound, avoiding ambiguity in how outcomes are measured or interpreted. I have a minor concern for the benchmark creation: ChestAgentBench and CheXbench: - The authors describe the ChestAgentBench as containing 2,500 expert-curated questions across seven essential clinical competencies for chest X-ray interpretation. These categories (Detection, Classification, Localization, Comparison, Relationship, Diagnosis, Characterization) comprehensively represent clinically relevant tasks. I understand this is a comprehensive question pool, but how representative are these questions? - Questions were generated from expert-curated clinical cases (from Eurorad) using GPT-4o, with a clear methodology ensuring that answers are grounded explicitly in the original case descriptions. Any further verification by human experts for benchmark purposes? Supplementary Material: I have reviewed the demo in the GitHub link. 
Relation To Broader Scientific Literature: The paper positions itself clearly within broader scientific literature related to artificial intelligence (AI) in medical imaging, specifically chest X-ray (CXR) interpretation, and contributes by building upon several established ideas and addressing previously identified limitations. Essential References Not Discussed: n/a Other Strengths And Weaknesses: see above Other Comments Or Suggestions: see above Questions For Authors: see above Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the insightful feedback. **ChestAgentBench Representativeness.** We appreciate the reviewer's question on benchmark representativeness. We designed ChestAgentBench to be both representative and capable of assessing advanced reasoning, addressing limitations in prior benchmarks: * **Addresses Evaluation Gaps in Complex Reasoning:** Existing benchmarks often focus on simpler, single-step VQA or isolated tasks, which are insufficient for evaluating sophisticated AI agents designed for clinical practice. These simpler tasks do not capture the multi-step diagnostic reasoning, evidence integration, and tool use inherent in real-world radiology workflows. ChestAgentBench was explicitly created to fill this critical evaluation gap by presenting complex challenges that require agents to demonstrate deeper, sequential reasoning and integrated analytical capabilities, truly assessing their readiness for clinical application support. * **Ensures Broad Clinical Scope and Realistic Data Distribution:** The benchmark's representativeness is supported by its diverse foundation and statistically verified distributions. Derived from 675 real clinical cases, it includes scenarios from various hospital settings and covers a wide spectrum of common and important chest X-ray findings. 
**Distribution by Department:** * Emergency Room (ER): 19.7% * Intensive Care Unit (ICU): 4.9% *(Note: The remaining cases originate from other hospital settings, including general wards.)* **Frequency of Common Chest X-ray Findings:** * Mass: 26.3% * Effusion: 24.6% * Pleural Effusion: 21.5% * Consolidation: 21.3% * Nodule: 17.2% * Calcification: 10.7% * Pneumothorax: 7.6% * Lymphadenopathy: 7.6% * Pneumonia: 7.2% * Emphysema: 7.1% * Interstitial findings: 6.9% * Bronchiectasis: 5.9% * Atelectasis: 4.9% * Fibrosis: 4.1% * Edema: 3.9% * Cavitation: 3.9% * Fracture: 3.0% * Tuberculosis: 2.6% * Metastasis: 2.6% * Cardiomegaly: 1.5% This broad distribution across 53 anatomical areas, varied patient demographics, and numerous pathologies ensures agents are tested on a realistic range of clinical challenges. * **Grounded in Expert Cases and Assesses Integrated Clinical Competencies:** The benchmark's questions are directly derived from and verifiable against detailed findings and expert discussions within 675 authentic, expert-curated clinical cases, ensuring clinical validity and grounding in real-world knowledge. Furthermore, ChestAgentBench systematically evaluates the integration of seven core clinical competencies (such as detection, localization, comparison, relationship analysis, and diagnosis) through complex question types. This design forces agents to demonstrate multifaceted reasoning akin to a clinician synthesizing diverse information, rather than just performing isolated tasks, thereby providing a comprehensive assessment of their true diagnostic reasoning abilities grounded in expert practice. **Benchmark Quality Check.** We thank the reviewer for highlighting the need for clarification on our quality check procedure. Following the initial generation of questions by GPT-4o based on Eurorad cases, we performed an automated quality verification, also utilizing GPT-4o. 
Specifically, the automated check evaluated each generated question and answer set for structural consistency (e.g., six-choice question with one correct answer), explicit grounding in the provided clinical and radiological context, and clear verifiability of the correct answer from the original Eurorad source case material. Any questions failing these criteria were automatically identified and excluded. We have provided more details on this verification procedure in the revised manuscript.
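A minimal sketch of the structural-consistency portion of this quality check (six-choice format with exactly one correct answer) is shown below. The dictionary field names are assumptions for illustration, not the authors' actual schema, and the GPT-4o-based grounding and verifiability checks are not modeled here:

```python
# Hedged sketch of the structural-consistency part of the automated
# quality check: six-choice question with exactly one correct answer.
# Field names ("choices", "answer") are assumed, not the authors' schema.

def passes_structural_check(item):
    choices = item.get("choices", [])
    answer = item.get("answer")
    if len(choices) != 6:            # must be a six-choice question
        return False
    if choices.count(answer) != 1:   # exactly one matching correct answer
        return False
    return True

good = {"choices": ["A", "B", "C", "D", "E", "F"], "answer": "C"}
bad = {"choices": ["A", "B", "C"], "answer": "C"}  # too few choices; excluded
```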
Summary: This paper proposes MedRAX, a modular AI agent that integrates specialized chest X-ray (CXR) analysis tools with large language models to perform complex, multi-step medical reasoning. It introduces ChestAgentBench, a large benchmark of 2,500 expert-curated CXR reasoning tasks, to evaluate its performance. Experiments show MedRAX outperforms both general-purpose and specialized models in CXR interpretation, offering improved accuracy and transparency. Claims And Evidence: Overall, the technical claims are plausible and generally supported by the experiments. However, I would like to see the following: 1. Statistical significance or confidence intervals around the reported accuracy differences. 2. The failure cases of MedRAX (cases where it fails and the potential reason). Also, for the failure cases of baselines like LLaVA-Med, does MedRAX provide correct predictions? The authors should discuss why MedRAX is correct in those cases. Methods And Evaluation Criteria: 1. The authors’ chosen method—combining a large language model (LLM) “agent” (GPT-4o in their reference implementation) with specialized CXR analysis tools in a ReAct loop—makes sense for the complex, multi-step nature of real radiological queries. The evaluation criteria are primarily classification accuracy (six-choice questions in ChestAgentBench, plus standard VQA accuracy on CheXbench), which is a straightforward metric for correctness. 2. While the choice of multiple-choice questions (instead of free-form answers) is somewhat simplifying, it does allow for a consistent, reproducible metric. It would be helpful if the paper described the approach for verifying the “best” single correct answer in each question—especially when complex real-world findings can have nuanced interpretations. In general, the methods and metrics are appropriate and sensible for the problem.
Theoretical Claims: NA Experimental Designs Or Analyses: I would also encourage an experiment explicitly probing spurious correlations within the pretrained models that MedRAX uses. For instance, it is well documented that pneumothorax classifiers may erroneously learn to rely on chest tubes as a proxy signal, leading to biased predictions. A practical approach would be: 1. Identify Influential Concepts. Use a post-hoc interpretability technique—either a concept-bottleneck model or saliency/heatmap analysis—to pinpoint the concepts or image regions that most strongly influence the model’s predictions. 2. Provide These Influential Concepts to GPT-4o. Along with the model’s prediction, feed the extracted concepts or saliency annotations into GPT-4o to ask whether those features are causally relevant or potentially spurious. 3. Assess Biases and Explore Corrective Measures. If GPT-4o flags a concept (e.g., a chest tube) as non-causal or suspicious, that insight can guide further investigation, such as retraining or debiasing strategies to mitigate reliance on that feature. Such an experiment could not only uncover hidden biases in the classification tools but also enhance the interpretability of MedRAX’s decision-making process, strengthening confidence in its clinical applicability. Supplementary Material: There is no supplementary material. Relation To Broader Scientific Literature: The authors situate their work in the broader context of: - LLM-based agent architectures (ReAct, tool orchestration, etc.). - Medical VQA and radiology-centric models (CheXagent, LLaVA-Med, MMedAgent, etc.). - General multimodal LLM frameworks (GPT-4o with vision, Segment Anything approaches for image segmentation). They do a good job contrasting MedRAX with prior domain-specific solutions like CheXagent (which is specialized but not agent-based) and broad frameworks like GPT-4o (strong reasoning but lacking targeted medical tools).
I would also rather add "RaDialog: A Large Vision-Language Model for Radiology Report Generation and Conversational Assistance" to the related work. Essential References Not Discussed: See Relation To Broader Scientific Literature Other Strengths And Weaknesses: 1. Lack of prospective or real-world clinical validation beyond curated datasets. 2. No statistical significance or error bars reported. 3. Ablation studies would be helpful: for instance, measuring how MedRAX performs if certain tools are disabled (segmentation vs. classification vs. report-generation, etc.). 4. Does the final response also go through the LLM (GPT-4o)? If so, there is a potential risk of hallucination. Do the authors have any ideas to reduce it? 5. Certain details are not clear. For example, I believe the interaction with the LLM needs a specific prompt to generate thoughts (the Reason(state, M) function in Algorithm 1) and actions. However, there is no mention of these prompts in the paper. 6. Do the authors provide the CXRs to the LLM? If so, this method will be expensive, so I would like to see a cost breakdown of the method's LLM usage. 7. Is Algorithm 1 an API? If so, a detailed description of the API is needed. 8. Sending CXRs and other details, like reports, to GPT-4o can be problematic, as these data may be used for training and stored on external servers. The authors should follow these guidelines (https://physionet.org/news/post/gpt-responsible-use) to make sure that patient data is not shared with commercial LLMs like GPT-4o. Details are therefore necessary on how they send the data through the LLM. Other Comments Or Suggestions: Please answer my points in Other Strengths And Weaknesses and I will raise the score. Questions For Authors: See Other Strengths And Weaknesses. Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Privacy and Security'] Ethical Review Concerns: See #8 in Other Strengths And Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We want to thank the reviewer for their thorough and thoughtful comments. **Statistical Significance.** We appreciate the reviewer's point about statistical measures. All experiments were run deterministically (LLMs with temperature 0 and deterministic tools), thus eliminating variability in model outputs for identical inputs. Additionally, since MedRAX integrates pre-trained models without any further training or fine-tuning, there is no randomness from training procedures. This approach ensures full reproducibility and makes traditional confidence intervals less applicable in this context. **Failure Cases.** For MedRAX failure cases, we have observed instances where the LLM becomes overconfident in its own reasoning and neglects to effectively utilize available tools, leading to incorrect conclusions that could have been corrected with proper tool integration. Regarding baseline failures, models like LLaVA-MED often struggle due to limited training data, resulting in poor generalization to diverse clinical scenarios. MedRAX mitigates this limitation by combining specialized tools with a general-purpose LLM that has broader knowledge and reasoning capabilities. We have included specific examples demonstrating these failure patterns in the revised manuscript. **Experiment Probing Spurious Correlations.** We appreciate the reviewer's thoughtful suggestion on investigating spurious correlations within the pretrained models used by MedRAX. The concern about biased predictions is addressed through diverse tools that provide cross-validation capabilities. Specifically, MedRAX integrates grounding models that highlight disease regions, allowing the LLM to validate classification predictions against visual evidence from multiple sources. This multi-tool approach helps mitigate reliance on any single potentially biased component. We agree that the reviewer's suggested approach represents a valuable direction for our future work. 
**Related Work.** We have added "RaDialog: A Large Vision-Language Model for Radiology Report Generation and Conversational Assistance" to the related work section. **Strengths and Weaknesses.** 1. Thank you for raising this important question. MedRAX integrates into clinical workflows as an interactive radiologist copilot. Clinicians can ask questions about CXRs (e.g., for disease classification, localization, VQA) via the user-friendly interface to assist in diagnosis. Its flexible deployment options (local or cloud) and modular design address practical IT and privacy barriers to adoption. 2. Discussed above. 3. Table 1 serves as a partial ablation, demonstrating the performance differential between standalone VQA tools (e.g., CheXagent), general-purpose models (GPT-4o), and our integrated MedRAX framework. However, we acknowledge that a more granular ablation study selectively disabling specific tools would provide deeper insights into which tools are most critical for different categories. We will include this analysis in the revised manuscript. 4. The final response is processed through GPT-4o. Our framework is designed to minimize hallucinations by providing comprehensive context from multiple specialized tools, creating an information-rich environment for the LLM. This multi-tool approach helps ground the model's responses in concrete findings from validated medical AI systems rather than relying solely on the LLM's internal knowledge. 5. MedRAX utilizes LangChain to manage the reasoning process dynamically. The Reason(state,M) function relies on: (1) an initial system prompt defining the agent's role, available tools with descriptions, and required output structure, and (2) the continuously updated conversation history (memory M) containing past thoughts, actions, and tool results. The agent framework routes execution based on the LLM's output: structured tool calls trigger tool execution, while a final response concludes the process. 
We have included the agent system prompt and tool descriptions in the revised manuscript. 6. MedRAX is flexible: while our evaluation used GPT-4o vision for performance, it can operate without vision LLMs by relying on specialized visual tools. The benchmark evaluation cost using GPT-4o vision involved 2,500 questions with on average 1.85 images per question (~512x512px, ~255 tokens per image) at GPT-4o's input rate of $3.75/1M tokens. 7. A breakdown of core methodology is provided in response to `Reviewer KTkS`. 8. Regarding patient data privacy, MedRAX's modular architecture supports locally deployed LLMs to prevent sensitive medical data transmission to external servers. Acknowledging reviewer guidelines, it also supports Azure OpenAI service with appropriate opt-out configurations. This flexibility allows institutions to implement MedRAX compliantly via on-premises deployment or properly configured cloud services, meeting their specific privacy requirements. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I would like to thank the authors especially for the breakdown of core methodology. However, for reducing hallucination, I don't find the answer to be satisfactory. The authors must take some action on this in future steps. I can provide some papers which can help the authors in this direction: 1. Semantic Consistency-Based Uncertainty Quantification for Factuality in Radiology Report Generation. Wang et al. 2. Direct Preference Optimization for Suppressing Hallucinated Prior Exams in Radiology Report Generation. Banerjee et al. Include this limitation of hallucination in detail in the discussion. Also, regarding privacy, if the authors use the GPT-4o API directly, please remove that and follow the MIMIC-CXR guidelines' recommendations (use an Azure endpoint or Vertex from Google) in the final code. Otherwise, I am pretty satisfied with the rebuttal, and I have also read the responses to the other reviewers. I am raising my score.
--- Reply to Comment 1.1.1: Comment: Thank you for your thorough review and thoughtful suggestions that have helped improve our manuscript. We appreciate your concerns regarding hallucination reduction and privacy considerations. We definitely intend to implement your recommended approaches in future work to reduce hallucinations, as this represents a key milestone for deploying MedRAX in clinical settings. The papers you've suggested provide valuable directions for this effort. Regarding privacy concerns, we will add support for Azure OpenAI endpoints with appropriate privacy configurations in our final codebase, ensuring compliance with MIMIC-CXR guidelines for patient data protection. These improvements, along with the detailed ablation studies and failure case analyses, will significantly enhance both the technical rigor and clinical applicability of our work.
Summary: The key contributions of this work are: - MedRAX: An AI framework integrating multiple chest X-ray (CXR) analysis tools without extra training, dynamically orchestrating components for complex medical queries. - ChestAgentBench: An evaluation framework featuring 2,500 queries across seven categories, built from 675 expert-curated clinical cases, to assess multi-step reasoning in CXR interpretation. - Performance: MedRAX significantly surpasses general-purpose and biomedical-specific models in complex reasoning tasks, offering transparent workflows. - Interface: A user-friendly interface enabling flexible deployment from local to cloud solutions, addressing healthcare privacy needs. This paper emphasizes that structured orchestration of medical AI tools, combining large-scale reasoning and domain-specific knowledge, outperforms purely end-to-end models. **update after rebuttal** I’m satisfied with the authors’ response to provide more details about the algorithm and dataset. In my initial review, I had some concerns about the fit of this paper for ICML, as I felt the topic might not attract broad interest. However, after reading the other reviews, I’ve reconsidered my stance. Therefore, I am increasing my score from 2 to 3. Claims And Evidence: This work proposes an agent-based system for chest X-ray interpretation, integrating domain-specific models as tools with general-purpose LLMs for reasoning. Unlike purely end-to-end approaches, this method leverages existing specialized tools, albeit at the cost of increased inference time. The paper persuasively argues that this combined strategy outperforms E2E models, as structured reasoning tailored to the relatively closed-domain nature of chest X-ray interpretation effectively introduces beneficial inductive biases. Experimental results support this claim. 
Methods And Evaluation Criteria: This work focuses on integrating several existing tools; however, it lacks technical details on the implementation of core modules in Algorithm 1, such as Reason, RequiresUserInput, and SelectTool, among others. Providing a more in-depth explanation of these components would strengthen the paper by clarifying how these modules function and contribute to the overall system. Theoretical Claims: N/A Experimental Designs Or Analyses: To compare existing works, the authors have utilized two established benchmarks and introduced ChestAgentBench. The design of ChestAgentBench is generally logical and reasonable, and the statistical overview of the benchmark dataset is appropriate. However, providing a more detailed distribution of findings across different body regions would enhance the dataset's transparency. For example, it would be beneficial to present the frequency of the most common chest X-ray findings, such as pleural effusion, cardiomegaly, pulmonary nodules/masses, and others, within the dataset. This would offer a clearer understanding of its composition. Additionally, the category levels in Figure 4d should be standardized, as the current categorization appears somewhat inconsistent. A more informative approach would be to stratify the dataset as follows: - Statistics by department: ER, ICU, and general ward - Findings in chest X-ray: Pleural effusion, cardiomegaly, tuberculosis, nodule/mass, pneumothorax, consolidation, etc. This refinement would provide a more structured and comprehensive breakdown, improving the interpretability of the dataset. Supplementary Material: I’ve reviewed the anonymous project page, which provides a helpful overview of the project. 
Relation To Broader Scientific Literature: This work is highly relevant to foundation models in the medical domain, as it explores an alternative approach to integrating sophisticated existing models with reasoning modules, rather than relying solely on an end-to-end foundation model. Essential References Not Discussed: I think the Related Work section does a great job of discussing the limitations of previous studies and clearly highlighting how this work differs from them. Other Strengths And Weaknesses: The presentation of this work is good, making it easy to follow, and I enjoyed reading the manuscript. This paper serves as a strong application of machine learning in healthcare, making it a good fit for the Machine Learning for Healthcare track (or similar tracks, if they exist). However, for the regular track at ICML, it is unclear whether it would attract significant interest from the broader audience. Since it somewhat lacks novel or particularly compelling machine learning techniques or methodologies that would appeal to the ML research community, the paper's positioning may not align with ICML’s focus. Other Comments Or Suggestions: N/A Questions For Authors: #1. Could you elaborate on why all methods, including MedRAX, perform poorly on image-text reasoning questions? Is it due to the inherent difficulty of this specific task, or because the image datasets originate from external institutions not included during training—essentially representing an external validation scenario common in clinical studies? #2. The authors argue that MedRAX could be integrated into existing clinical workflows. However, in my understanding, the practical utility and deployment potential of the proposed agent remain unclear. Could you specify how MedRAX could be integrated into daily clinical practice? #3.
Given the low resolution of Figure 2, it's difficult to be certain, but there appears to be an abnormal pattern in both the left and right lungs, possibly indicating bilateral effusions. Therefore, it may be preferable to select a different image for visualization. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful feedback and appreciate their recognition of the importance of this problem in the healthcare domain. **Algorithm 1 Core Methodology.** Algorithm 1 outlines the iterative reasoning process of the MedRAX agent in the ReAct loop, as follows: 1. **`Observe(Q, I, M)` / State Preparation:** This step initializes the reasoning cycle by gathering the necessary context. It aggregates the user's query (`Q`), any associated input images (`I`), and the entire history maintained in the agent's memory (`M`), which includes previous interactions, tool outputs, and reasoning steps. This consolidated state is fed into the LLM. 2. **`Reason(state, M)` -> `thoughts` (LLM Reasoning):** The LLM analyzes the current state (query, images, memory) and available tools (`T`) to generate internal 'thoughts'. These thoughts form a plan deciding the next action: generate a final response, ask the user for input, or select one or more tools. 3. **`RequiresUserInput(thoughts)` (Implicit Decision):** This condition is implicitly evaluated by the LLM during the `Reason` step. If the LLM's 'thoughts' indicate ambiguity or insufficient information that cannot be resolved by tools, it will opt to generate a clarifying question for the user instead of proceeding with a tool call or final answer. 4. **`GenerateUserPrompt(thoughts, M)` (Implicit Action):** If `RequiresUserInput` is effectively true, the LLM's generated output *is* the prompt for the user. It formulates a natural language question based on its 'thoughts' and the context in memory (`M`) to elicit the needed information. 5. **`SelectTool(thoughts, T, M)` -> `tool(s)` (LLM Tool Selection):** When the LLM's 'thoughts' indicate a need for specific capabilities (e.g., classification, segmentation), it selects the most appropriate tool or *multiple tools* from the available set (`T`). 
This selection is based on the LLM matching its reasoning to the predefined descriptions and capabilities of each tool. For each selected tool, the LLM also formulates the necessary input arguments in a structured format (e.g., JSON). 6. **`Execute(tool(s))` -> `result(s)` (Tool Execution):** The agent executes the selected tool function(s). This involves invoking each tool with the structured arguments prepared by the LLM in the `SelectTool` step. The output(s) of these execution(s) are captured as `result(s)`. 7. **`M ← M ∪ (thoughts, tool(s), result(s))` (Memory Update):** Following successful tool execution(s), the agent's memory (`M`) is updated. This crucial step logs the LLM's 'thoughts' that led to the tool call(s), the identity of the `tool(s)` used, and the `result(s)` obtained. This ensures that outputs from tools become part of the context for subsequent reasoning cycles. 8. **`CanGenerateResponse(thoughts)` / `GenerateResponse(thoughts, M)` (Response Generation):** Following `Reason`, this checks if the LLM's 'thoughts' plan a tool call. If not (`CanGenerateResponse` effectively true), the LLM synthesizes the final natural language answer (`GenerateResponse`), drawing upon its concluding 'thoughts' and information in memory (`M`). If 'thoughts' indicate a tool is needed, this step is bypassed, and the agent proceeds to `SelectTool` and `Execute`. **ChestAgentBench Statistics.** Thank you for the thoughtful comment. The distribution of common findings and department origins of cases are provided in response to `Reviewer DGwp`. We have revised Figure 3d to incorporate these detailed statistics. **ICML Relevance.** We submitted MedRAX under the "Application-Driven Machine Learning" track highlighting areas like healthcare. 
Our work directly addresses significant challenges within this critical domain, and we chose ICML because we believe high-quality, impactful healthcare AI research, while perhaps historically under-represented, is important for the ML community. **CheXbench Image-Text Reasoning.** This benchmark assesses fine-grained visual reasoning, requiring models to differentiate between options with subtle but critical distinctions (e.g., 'left' vs. 'right' pleural effusion). We observed poor performance across all evaluated models on this task. This suggests the challenge lies in the inherent difficulty of fine-grained radiological interpretation rather than the benchmark dataset itself (derived from the widely-used OpenI). **Clinical Relevance.** Thank you for raising this important question. MedRAX integrates into clinical workflows as an interactive radiologist copilot. Clinicians can ask questions about CXRs (e.g., for disease classification, localization, VQA) via the user-friendly interface to assist in diagnosis. Its flexible deployment options (local or cloud) and modular design address practical IT and privacy barriers to adoption. **Figure 2.** We have replaced Figure 2 with a clearer case showing distinct unilateral findings.
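To make the Observe → Reason → Act control flow of Algorithm 1 concrete, here is a minimal, self-contained sketch. The `classify_cxr` tool and the rule-based `fake_llm_reason` policy are hypothetical stand-ins, not the MedRAX implementation; a real deployment would replace the policy with an LLM call and the tool with an actual model.

```python
# Toy sketch of the ReAct loop in Algorithm 1. All names below are
# illustrative stand-ins, not the actual MedRAX code.

def classify_cxr(image):
    # Stand-in for a CXR classification tool.
    return {"finding": "effusion" if "effusion" in image else "normal"}

TOOLS = {"classify_cxr": classify_cxr}  # T: available tools

def fake_llm_reason(state, memory):
    """Toy policy: call the classifier once, then answer from its result."""
    for thoughts, tool_name, result in memory:
        if tool_name == "classify_cxr":
            return {"action": "respond",
                    "answer": "Finding: " + result["finding"]}
    return {"action": "tool", "tool": "classify_cxr",
            "args": {"image": state["image"]}}

def react_loop(query, image, max_steps=5):
    memory = []  # M: list of (thoughts, tool, result)
    for _ in range(max_steps):
        state = {"query": query, "image": image}      # Observe(Q, I, M)
        thoughts = fake_llm_reason(state, memory)     # Reason(state, M)
        if thoughts["action"] == "respond":           # GenerateResponse
            return thoughts["answer"], memory
        tool = TOOLS[thoughts["tool"]]                # SelectTool
        result = tool(**thoughts["args"])             # Execute
        memory.append((thoughts, thoughts["tool"], result))  # update M
    return "max steps reached", memory
```

The key point of the loop is step 7 of the rebuttal: tool outputs are appended to `memory`, so the next `Reason` call sees them as context.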
Continual Generalized Category Discovery: Learning and Forgetting from a Bayesian Perspective
Accept (poster)
Summary: This paper introduces Variational Bayes Continual Generalized Category Discovery (VB-CGCD), a Bayesian framework designed to address key challenges in Continual Generalized Category Discovery (C-GCD), including label bias, pseudo-label errors, and the learning-forgetting tradeoff. VB-CGCD comprises four key components: (1) self-supervised offline fine-tuning, (2) self-corrective re-labeling, (3) variational Bayesian distribution estimation with a nearest-class-mean classifier, and (4) covariance-driven early stopping. Experimental results demonstrate that VB-CGCD outperforms existing methods across standard and newly proposed benchmarks with minimal labeled data. Claims And Evidence: 1. The problem formulation in Eq. (1) requires that the total loss on the previous t-1 tasks, with parameters learned after task t-1, is not increased after learning task t. How is this guarantee ensured? Does this optimization objective always have a valid solution? It seems that VB-CGCD cannot fulfill this constraint: as shown in Figure 2 (a), the accuracy of previous tasks decreases after learning new tasks. 2. The paper states: “To mitigate feature drift, we perform fine-tuning exclusively during the offline stages and keep the backbone network parameters frozen throughout all sessions, including offline and online classification learning.” However, freezing the backbone may hinder the model’s plasticity, particularly when significant distribution shifts occur between sessions. Further discussion on the trade-off between stability and adaptability would strengthen the paper. 3. One of the core ideas, i.e., modeling each class as a multivariate normal distribution (MVN), has been explored in previous continual learning works [1-4]. [1] FeCAM: Exploiting the heterogeneity of class distributions in exemplar-free continual learning. NeurIPS 2023 [2] Learning semi-supervised Gaussian mixture models for generalized category discovery. 
ICCV 2023 [3] Steering prototype with prompt-tuning for rehearsal-free continual learning. WACV 2024 [4] Happy: A Debiased Learning Framework for Continual Generalized Category Discovery, NeurIPS 2024 Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper presents the theoretical upper bound (Table 9) of model accuracy at the final session, but does not provide formal proofs. This upper bound might come from experiments. Experimental Designs Or Analyses: Variational inference-based methods are generally computationally expensive. However, the paper does not compare the computational cost of VB-CGCD with other methods, focusing only on parameter sizes. A computational efficiency analysis would enhance the evaluation. Supplementary Material: I briefly reviewed the supplementary section. Relation To Broader Scientific Literature: The key contributions of this paper aim to address the challenge of learning from both known and novel categories in an evolving data stream. By providing a practical solution, it aligns with broader efforts in the field to advance continual generalized category discovery. Essential References Not Discussed: Several continual learning works [5-9] that leverage variational inference should be discussed in the related work section for a more comprehensive comparison. [5] Variational continual learning, ICLR 2018 [6] Variational auto-regressive Gaussian processes for continual learning, ICML 2021 [7] Generalized variational continual learning, ICLR 2021 [8] Continual learning via sequential function-space variational inference, ICML 2022 [9] Continual variational autoencoder learning via online cooperative memorization, ECCV 2022 Other Strengths And Weaknesses: Other strengths: The improvement over previous methods is substantial. Other Comments Or Suggestions: 1. It is recommended to include punctuation in each numbered equation. 2. Please ensure the correct quotation marks are used throughout the paper. Questions For Authors: 1. 
When the number of new categories is unknown, how does VB-CGCD estimate it effectively? 2. Can you further explain how the class merging is conducted? Specifically, how are “the latent variables of old classes and the pseudo-labeled distribution merged into a unified MVN-NCM classifier”? 3. According to the ablation study results in Table 11, early stopping and re-labeling seem to hinder the learning of new classes. Does this suggest that the proposed method prioritizes stability (performance on old classes) over plasticity (performance on new classes) to achieve a tradeoff? Code Of Conduct: Affirmed. Overall Recommendation: 3
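For readers unfamiliar with the MVN-NCM classification this review discusses, here is a toy sketch of a covariance-aware nearest-class-mean rule. It uses diagonal covariances and empirical estimates only; the paper's variational full-covariance estimation is not reproduced here, and all names are illustrative.

```python
import math

# Toy covariance-aware NCM: each class is a Gaussian (mean, diagonal
# variance); a sample goes to the class with the smallest Mahalanobis
# distance. Not the authors' code -- a simplified illustration.

def fit_class(features):
    """Estimate a diagonal Gaussian (mean, variance) from a list of vectors."""
    n, d = len(features), len(features[0])
    mean = [sum(x[j] for x in features) / n for j in range(d)]
    var = [sum((x[j] - mean[j]) ** 2 for x in features) / n + 1e-6
           for j in range(d)]
    return mean, var

def mahalanobis(x, mean, var):
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

def ncm_predict(x, prototypes):
    """prototypes: {label: (mean, var)}. Adding a new class is just inserting
    another entry, which is what makes the classifier dynamically extensible."""
    return min(prototypes, key=lambda c: mahalanobis(x, *prototypes[c]))
```

Because each class's distribution is stored independently, "merging" old and new classes amounts to taking the union of two such dictionaries.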
Rebuttal 1: Rebuttal: ## Claims And Evidence: >1. The constrained optimization equation formally represents the objective of "learning new classes without forgetting old ones." We observed that maximizing overall accuracy inevitably leads to a decline in the classification performance of previous classes, as illustrated in P2 of Figure 2. Therefore, considering the interplay between old and new classes, we believe that this issue is a multi-objective optimization problem. Accordingly, we relax the original constraint by imposing a covariance constraint, which approximates a balance between preventing forgetting and effectively learning new classes. >2. With advancements in self-supervised pre-training, the effectiveness of self-supervised fine-tuning in small-scale scenarios has diminished. On the other hand, due to the presence of classification errors, employing supervised fine-tuning during online sessions can introduce a significant number of misclassifications, which in turn leads to a decline in overall performance. Meanwhile, we did not entirely freeze the backbone; instead, we performed fine-tuning with labeled data during the offline phase, which was performed using verified ground-truth labels. >3. Most methods use the MVN as a sampling distribution for data replay or to assist clustering, rather than directly as a classification prototype. In contrast, VB-CGCD approximates the MVN through variational inference, and we believe that this strategy is the key to our improved performance. ## Experimental Designs Or Analyses: We acknowledge that Variational Inference can be computationally intensive. However, our VI process does not need to reach full convergence; only a few steps (1000 epochs in our experiments) are sufficient to distinguish different categories. 
Additionally, by not requiring fine-tuning of the backbone network during training, we only need to extract features once for each sample, performing a single forward pass through the backbone, which significantly reduces training time. Consequently, compared to other methods that necessitate frequent fine-tuning of the backbone network, our training time is substantially reduced. The running time during training on one RTX-8000 is shown as follows: |Datasets|OfflineFine-Tuning (mins)|OfflineSession (mins)|OnlineSession (mins)|Overall (mins)| |-|-|-|-|-| |C100|67|3.03|1.51|97| |TINY|112|2.27|1.58|145| |IN100|140|3.36|1.22|169| |CUB|12.09|2.01|1.12|39| Compared to HAPPY (over 25 hours of training) and PromptCCD (over 40 hours of training), VB-CGCD demonstrates significantly superior training efficiency. After training, VB-CGCD achieves inference computational efficiency comparable to other prototype-based approaches, normally in milliseconds. ## Q&A >1. As a dynamically extensible incremental learning classifier, VB-CGCD can be integrated with advanced clustering algorithms, including those capable of dynamically estimating the number of categories. Since VB-CGCD does not require a preset number of categories, it can be combined with many off-the-shelf methods for estimating unknown category counts. We employed the classical silhouette score to implement a method for estimating the number of unknown categories and integrated it into VB-CGCD. For experimental results, please refer to Response 2 of Reviewer 1. >2. VB-CGCD models each category as an independent probability distribution and maintains a collection of all category distributions (i.e., a matrix of latent variables) in memory. After estimating the pseudo-label distribution, we incorporate it into the distribution collection (prototype matrix). During re-labeling, the distances between all samples and this collection are computed to assign labels accordingly. 
Subsequently, the pseudo-distribution is removed from the collection, and a new distribution is re-estimated. The merge operation is purely an array manipulation, which accounts for VB-CGCD’s inherent scalability and its ability to handle an unknown number of categories. >3. Since the primary goal of continual learning is to enhance overall learning performance, we agree that preventing overfitting to new classes is necessary to mitigate the forgetting of previously acquired knowledge. In fact, many continual learning methods, whether by adding regularization constraints or employing data replay, aim to some extent to slow down the learning of new classes to prevent forgetting old ones. Without imposing any constraints and allowing unrestricted learning of new classes, the model's classification accuracy on new classes may continue to improve, but the accuracy on old classes could drop drastically, as illustrated in phase P3 of Figure 2. Therefore, to maximize overall accuracy, continual learning must strike a balance between stability (preventing forgetting) and plasticity (learning new categories). This trade-off inherently implies that optimizing one aspect will inevitably come at the expense of the other. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttal, which addresses some of my concerns. However, I still believe that the optimization objective in Eq. (1) may not be well-aligned with the proposed method. Introducing a relaxation to the constraint could be beneficial, yet this aspect appears to be missing in the original paper. In addition, while the authors provide an explanation regarding the effect of self-supervised pre-training, they do not directly address my concern about freezing the model after the offline stage, which may limit the model’s plasticity. Lastly, the absence of discussion on relevant works [1–9] weakens the paper’s comprehensiveness. Therefore, I would prefer to maintain my original recommendation at this stage. 
--- Reply to Comment 1.1.1: Comment: ## 1. Eq(1) Eq(1) is a general optimization objective for class incremental learning, which originates from the classical work on continual learning GEM [1]. In our discussion of the P2 phase in Figure 2, we argue that this objective can only achieve a suboptimal outcome (akin to the P1 phase). **Eq (1) is not directly related to our work and was not the focus of our optimization**; it was included merely as a general definition in the class incremental learning problem formulation. *We will avoid this ambiguity in the revised version.* >[1] Gradient Episodic Memory for Continual Learning. NeurIPS 2017. ## 2. Model plasticity We employ the pretrained model as a frozen backbone network for feature extraction, which encodes raw images into features. **The model's plasticity is principally derived from the Bayesian Neural Network (BNN)-based prototype classifier constructed atop this backbone.** (1) This BNN classifier parametrically models each category's distribution as **learnable multivariate Gaussian distributions**. The strong approximation capacity of multivariate Gaussians endows the classifier with plasticity for incremental learning. (2) The per-class independent distribution fitting mechanism enables **dynamic expansion**, allowing the model to adaptively accommodate arbitrary numbers of novel categories across domains. Notably, baselines like HAPPY and PromptCCD also adopt backbone freezing with only final-block tuning. **Their plasticity primarily stems from MLP classification heads**, analogous to our BNN's functional role. Compared to an MLP classification head, BNN offers more flexible dynamic expansion and enhanced adaptability. Besides, many SOTA CIL approaches freeze the backbone while solely training classifiers atop it [1]. This design philosophy preserves the strong generalization capabilities of pretrained models. 
Representative methods, including **FeCAM, L2P [2], and DualPrompt [3], strictly maintain the frozen pretrained backbone without any fine-tuning**. >[1] Class-Incremental Learning: A Survey. TPAMI, 2024. >[2] Learning to Prompt for Continual Learning. CVPR 2022. >[3] DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning, ECCV 2022. To demonstrate the plasticity of our model, we continued learning on IN100 after having completed incremental learning on C100 with the pretrained backbone and BNN classifier. The model sequentially learns IN100 categories while retaining C100 knowledge, thereby validating its adaptability. |S0|S1|||S2|||S3|||S4|||S5||| |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-| |All|All|Old|New|All|Old|New|All|Old|New|All|Old|New|All|Old|New| |82.24|81.42|81.94|76.3|81.04|81.31|78.0|80.0|80.82|70.2|79.55|79.82|76.0|78.6|79.31|68.6| The result shows that VB-CGCD achieved a final accuracy of 78.6, demonstrating its ability to effectively classify both C100 and IN100, which underscores its strong cross-domain adaptability. ## 3. Missing Related Works GPC [2] employs a GMM for clustering and estimates unknown category numbers through split-merge mechanisms, focusing primarily on clustering rather than classification. FeCAM [1] utilizes Gaussian prototypes obtained via statistical methods, making them non-learnable and sensitive to data scale variations. In contrast, VB-CGCD employs a learnable BNN to establish Gaussian prototypes, offering greater generality and effectively handling covariance differences between classes. CPP [3] uses class means as prototypes (prompts) and integrates a Transformer as the classifier, whereas VB-CGCD employs Gaussian distributions along with a non-parametric distance function, enhancing robustness and interpretability. HAPPY [4] utilizes Gaussian prototypes as a replay mechanism to mitigate forgetting, while VB-CGCD is replay-free and directly utilizes prototypes for classification. 
VCL [5], GVCL [7], VAR-GP [6], and S-FSVI [8] are methods that approach continual learning as a sequence of tasks, utilizing variational inference to regularize parameter updates via the KL divergence. These techniques bridge Bayesian inference with continual learning, providing insights into the trade-off between plasticity and stability. These methods employ variational inference for regularization in learning likelihood probabilities, while VB-CGCD leverages variational inference to directly learn generative models of data distributions, enabling classification through distance functions. OCM [9] uses variational autoencoders to learn data distributions, serving as samplers for replay mechanisms to mitigate forgetting, whereas VB-CGCD operates without the need for replay. *The revised version will include these in the related work, clarifying technical distinctions and providing a comprehensive discussion.* We hope that our responses have addressed your concerns and would greatly appreciate it if you could consider raising our score. Also, let us know if there are any more concerns.
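The merge-and-relabel cycle the authors describe (estimate provisional prototypes from pseudo-labels, take the union with the stored old-class prototypes, then re-label every sample against the unified collection) can be illustrated with a toy sketch. It uses class means and Euclidean distance only; the paper's variational Gaussian prototypes are simplified away, and all names are stand-ins.

```python
# Toy sketch of the merge-and-relabel step: means + Euclidean distance
# stand in for the paper's variational Gaussian prototypes.

def class_mean(xs):
    d = len(xs[0])
    return [sum(x[j] for x in xs) / len(xs) for j in range(d)]

def relabel(samples, pseudo_labels, old_protos):
    # 1) estimate a provisional prototype per pseudo-label
    groups = {}
    for x, y in zip(samples, pseudo_labels):
        groups.setdefault(y, []).append(x)
    new_protos = {y: class_mean(xs) for y, xs in groups.items()}
    # 2) merge: the unified classifier is just the union of both collections
    unified = {**old_protos, **new_protos}
    # 3) re-label every sample against the unified collection
    def nearest(x):
        return min(unified, key=lambda c: sum((a - b) ** 2
                                              for a, b in zip(x, unified[c])))
    return [nearest(x) for x in samples]
```

Step 2 is the "purely an array manipulation" the rebuttal mentions: no retraining is needed to extend the classifier, which is why the number of categories need not be fixed in advance.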
Summary: This paper addresses Continual Generalized Category Discovery (C-GCD), or say, iGCD, a task where a model must incrementally learn new classes from unlabeled data streams while preserving knowledge of previously learned classes, a challenge exacerbated by mixed-class data streams and catastrophic forgetting. C-GCD is a task that involves incremental learning (IL), semi-supervised learning, and class discovery. In this work, the authors propose Variational Bayes C-GCD (VB-CGCD), a Bayesian framework that uses variational inference to model class distributions, align covariances between old and new classes, and mitigate forgetting through a covariance-aware nearest-class-mean (NCM) classifier and an early stopping mechanism. The proposed method, VB-CGCD, is evaluated on well-established GCD benchmarks: CIFAR-100, TinyImageNet, ImageNet-100, and CUB-200. The results show improvements over the previous state of the art. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes, I reviewed the full supp. Relation To Broader Scientific Literature: C-GCD methods can be used to discover novel classes/concepts, which can benefit a spectrum of related fields, such as tissue discovery and protein discovery. Essential References Not Discussed: There is a significant lack of discussion on related work in this study. First, the coverage of the Novel Class Discovery (NCD) and Generalized Category Discovery (GCD) literature is very limited, with only two works from each category being discussed. However, a broader spectrum of NCD techniques is highly relevant to this work, as they serve as the foundation for both GCD and Continual GCD (C-GCD). A more comprehensive discussion is needed to highlight the technical and methodological similarities between this study and existing NCD and GCD approaches. 
Second, there is a notable omission of discussions and comparisons regarding Class-incremental NCD, which forms the basis of C-GCD. Foundational works in this domain [1, 2, 3] are neither reviewed nor compared, despite their relevance. Addressing these gaps would strengthen the contextualization of this work within the broader research landscape. [1] Joseph K J, Paul S, Aggarwal G, et al. Novel class discovery without forgetting[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 570-586. [2] Roy S, Liu M, Zhong Z, et al. Class-incremental novel class discovery[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 317-333. [3] Liu M, Roy S, Zhong Z, et al. Large-scale pre-trained models are surprisingly strong in incremental novel class discovery[C]//International Conference on Pattern Recognition. Cham: Springer Nature Switzerland, 2024: 126-142. Other Strengths And Weaknesses: Strengths: 1) the paper is well-written and well-presented. It is easy to read. 2) the proposed Bayesian C-GCD framework is novel, interesting, and highly generalizable. 3) the improvements on the evaluated benchmarks look promising and significant. 4) the ablation study is sufficient. Weaknesses: 1) the benchmarks used for evaluation are significantly limited. The proposed method is based on pre-trained DINO. Note that CIFAR, ImageNet, and CUB are all used in the representation learning of DINO. Although DINO does not use labels to supervise the model, **nearly none of the "novel" classes the model meets at each incremental session are really novel** because these concepts, or even the visual content itself, are seen by the encoder during its pre-training. Therefore, I cannot hold high confidence in the effectiveness of the proposed method under the current evaluation. Evaluation on truly novel benchmarks is needed. 2) Insufficient discussion and comparison with related work. iNCD methods should be adapted and compared. 
3) dependence on Initial Clustering Quality: The performance of VB-CGCD relies heavily on the quality of the initial clustering used to generate pseudo-labels. Poor clustering results, especially in high-dimensional spaces, could lead to inaccurate pseudo-labels, which may propagate errors and degrade the model's performance over time. Other Comments Or Suggestions: NA Questions For Authors: Q1: Sensitivity to Hyperparameters: The method involves several hyperparameters, such as the regularization term and the early stopping threshold. The paper does not provide a detailed sensitivity analysis, leaving it unclear how robust the method is to variations in these hyperparameters across different datasets or tasks. Q2: How would the method behave in other modalities (non-image data)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ## Weakness: >1. We acknowledge that DINO-ViT-B16 was pre-trained on the ImageNet dataset, though its training was based on self-supervised learning using unlabeled data, and it was not trained on CIFAR or CUB. Since all our baselines including HAPPY (SOTA) utilize DINO for feature extraction, VB-CGCD also uses DINO to ensure a fair comparison with other methods. Additionally, to further assess the generalizability of our approach, we have conducted experiments on Stanford Cars, FGVC-Aircraft, CORE50, and Food-101 datasets. |Datasets|S0|S1|||S2|||S3|||S4|||S5||| |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-| ||All|All|Old|New|All|Old|New|All|Old|New|All|Old|New|All|Old|New| |StanfordCars|72.55|69.41|71.06|45.38|67.08|67.75|56.46|65.45|66.18|53.51|62.35|63.77|37.68|60.92|61.22|55.39| |FGVCAircraft|82.32|76.23|79.22|55.47|74.95|75.28|72.34|70.15|71.91|54.51|65.13|67.84|43.67|60.66|62.64|58.03| |Core50|99.92|96.50|99.92|79.57|91.11|95.95|61.40|88.91|91.11|73.29|84.93|88.80|54.12|82.28|84.84|58.85| |Food101|84.78|81.67|81.80|81.04|79.51|79.87|77.32|78.25|77.85|81.04|75.41|76.30|68.2|73.30|73.76|69.12| Compared to other baselines, VB-CGCD outperforms them by a decent margin in terms of average overall accuracy, as shown below: |Methods|StanfordCar|FGVCAircraft| |-|-|-| |MetaGCD|54.67|47.16| |HAPPY|62.79|53.10| |VB-CGCD|65.04|69.42| > 2. Thank you for your valuable feedback. We will include a discussion on FRoST (Class-iNCD) in the related work section. FRoST employs a Gaussian distribution as the sampling distribution for its replay mechanism, incorporating samples from previously learned classes during training to mitigate forgetting. This represents a typical replay-based approach. In contrast, VB-CGCD does not utilize a replay mechanism but instead relies directly on prototypes for classification. 
The experimental comparison between FRoST and VB-CGCD is as follows: |Datasets|S0|S1|||S2|||S3|||S4|||S5||| |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-| ||All|All|Old|New|All|Old|New|All|Old|New|All|Old|New|All|Old|New| |C100|FRoST|90.36|76.87|79.58|63.30|65.31|68.88|43.90|58.01|61.09|36.50|49.27|50.90|36.20|48.03|48.17|46.80| |TINY|FRoST|85.86|75.15|78.56|58.10|65.64|67.83|52.50|51.32|54.31|30.40|48.22|52.14|16.90|40.15|42.73|16.90| |IN100|FRoST|96.20|87.50|92.96|60.20|79.63|83.37|57.20|76.78|77.00|75.20|66.18|68.65|46.40|63.82|66.40|40.60| |CUB|FRoST|90.26|77.03|83.95|43.53|50.77|53.46|34.33|46.42|49.31|26.09|39.40|41.47|23.08|34.55|35.12|29.45| In terms of final overall accuracy, VB-CGCD outperforms FRoST by an average of 30.76%, demonstrating its superior performance. >3. In unsupervised scenarios, most methods are influenced by the performance of the clustering algorithm. Our strategy is to minimize errors introduced by the clustering algorithm from the classifier's perspective to enhance performance. To ensure fair comparisons with other methods, we have employed the fundamental k-means algorithm. Of course, our approach can also be integrated with more advanced clustering algorithms to further improve performance. ## Q&A >1. Across all datasets and tasks, we employed a consistent set of hyperparameters, including learning rate and training epochs. Specifically, the fine-tuning coefficient λ was uniformly set to 0.001. Regarding the early stopping strategy, we posit that equalizing the average covariance between new and old classes effectively mitigates class bias. Therefore, we monitor the covariance of both new and old classes during training and halt training when they are equal, i.e., when R=0. We have conducted a sensitivity analysis on the R. Please refer to Response 4 of Reviewer 2. >2. Since our method performs density estimation solely on features, it can be broadly applied to various modalities. 
We conducted experiments on audio and text classification tasks. For audio we evaluated our method on esc50, speechcommands, and audiomnist datasets. For text, we evaluated stackoverflow, and clinc. |Datasets|S0|S1|||S2|||S3|||S4|||S5||| |-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-|-| ||All|All|Old|New|All|Old|New|All|Old|New|All|Old|New|All|Old|New| |ESC50|72.98|75.10|72.03|96.66|78.05|74.68|100.0|78.68|78.05|82.92|80.33|78.05|100.0|78.0|78.93|70.45| |Speechcommands|85.25|80.43|82.37|72.11|78.27|79.13|72.67|71.75|76.92|45.38|70.59|70.54|71.01|69.18|69.52|66.44| |AUDIO-MNIST|96.22|96.40|96.22|97.26|96.94|96.40|100.0|97.05|96.74|98.85|96.94|96.71|98.68|96.4|96.78|93.75| |Clinc|98.66|94.22|98.22|74.22|92.25|94.0|81.77|90.83|91.87|83.55|90.71|90.66|91.11|88.66|90.32|73.77| |Stackoverflow|89.0|86.83|87.2|85.0|86.28|86.66|84.0|84.75|85.57|79.0|84.88|84.25|90.0|84.8|84.77|85.0| The experimental results indicate that VB-CGCD demonstrates good performance in these tasks as well, further validating its generalizability and robustness to other modalities.
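The clustering integration discussed in these rebuttals (k-means-style assignment over frozen features, plus the classical silhouette score for choosing among candidate numbers of categories) can be sketched in plain Python. This is a toy illustration under our own assumptions, not the authors' implementation; `best_centers` simply keeps whichever candidate center set yields the highest mean silhouette.

```python
# Toy silhouette-based model selection over candidate clusterings
# (e.g. k-means runs with different K). Illustrative only.

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def silhouette(points, labels):
    """Mean silhouette score of a labeling (singleton clusters score 0)."""
    scores = []
    for i, (p, l) in enumerate(zip(points, labels)):
        same = [dist(p, q) for j, (q, m) in enumerate(zip(points, labels))
                if m == l and j != i]
        if not same:                 # singleton cluster
            scores.append(0.0)
            continue
        a = sum(same) / len(same)    # mean intra-cluster distance
        b = min(sum(dist(p, q) for q, m in zip(points, labels) if m == o)
                / labels.count(o)
                for o in set(labels) if o != l)  # nearest other cluster
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

def assign(points, centers):
    return [min(range(len(centers)), key=lambda k: dist(p, centers[k]))
            for p in points]

def best_centers(points, candidates):
    """Keep the candidate center set whose assignment maximizes silhouette."""
    return max(candidates,
               key=lambda cs: silhouette(points, assign(points, cs)))
```

In practice one would plug a library clustering routine into `assign`; the point here is only that estimating the unknown category count reduces to scoring candidate partitions.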
Summary: This paper studies the task of Continual Generalized Category Discovery (CGCD). It analyzes C-GCD’s forgetting dynamics through a Bayesian lens, revealing that covariance misalignment between old and new classes drives performance degradation. To solve these issues, this paper proposes Variational Bayes C-GCD (VB-CGCD) integrates variational inference with covariance-aware nearest-class-mean classification. VB-CGCD adaptively aligns class distributions while suppressing pseudo-label noise via stochastic variational updates. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: This paper derives the method from the perspective of variational bayes, which is clear. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: This paper makes novel contributions of bayes variational inference for continual generalized discovery. Essential References Not Discussed: Please discuss the recent work of C-GCD Happy [R1] in the Related Work. References: [R1]. Happy: A Debiased Learning Framework for Continual Generalized Category Discovery. NeurIPS 2024. Other Strengths And Weaknesses: Strengths: 1. This paper is well-motivated and easy to follow. 2. The writing and the diagram are clear. I appreciate Figure 1. 3. The proposed variational Bayes-based method is novel to the community of category discovery. 4. The performance gains are remarkable. Weakness: 1. Discussion of related work Happy [R1] is missing in the Related Work. 2. Why are some results of Happy in Table1 different from the reported results in the original paper [R1]? 3. Please include a brief introduction of the method (i.e., spirit, pipeline, etc) in the caption of figure 1. 4. The experiments of hyperparameters should be added, including $\epsilon$ in early stopping. 5. Could the authors provide some experiments that support the claim that self-supervised loss during continual learning introduces feature drift? 
References: [R1]. Happy: A Debiased Learning Framework for Continual Generalized Category Discovery. NeurIPS 2024. Other Comments Or Suggestions: No. Questions For Authors: See Other Strengths And Weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 4
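For intuition on the covariance-driven early stopping queried in point 4 above, here is a toy sketch under one possible reading: define R as the gap between the average covariance norms of new and old classes, and stop once R falls to a tolerance ε. The direction of R and the function names are our assumptions, not the authors' code.

```python
# Hypothetical sketch of covariance-driven early stopping: halt when the
# average covariance norm of new classes matches that of old classes.

def mean(xs):
    return sum(xs) / len(xs)

def r_gap(old_cov_norms, new_cov_norms):
    """R: average covariance norm of new classes minus that of old classes
    (assumed direction)."""
    return mean(new_cov_norms) - mean(old_cov_norms)

def train_with_early_stop(old_cov_norms, new_norms_per_epoch, eps=0.0):
    """Return the first epoch at which R <= eps; otherwise train to the end."""
    for epoch, new_norms in enumerate(new_norms_per_epoch):
        if r_gap(old_cov_norms, new_norms) <= eps:
            return epoch
    return len(new_norms_per_epoch)
```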
Rebuttal 1: Rebuttal: >1. We appreciate the reviewer’s valuable feedback. We have regarded HAPPY as an important SOTA but inadvertently missed its discussion in the related work section. We will include the discussion of HAPPY in the revised version, with a particular emphasis on its technique of using Gaussian distributions for prototype sampling, to provide a more comprehensive overview of the relevant work. >2. We reproduced the experimental results of HAPPY and found that our reproduced results outperform those reported in the original paper. We reported the reproduced results in our paper. >3. We will update the overall process in the caption of Figure 1 as follows: "(a) Offline Stage: Labeled data is used to fine-tune the backbone network. After fine-tuning, features are extracted, and variational inference is performed to obtain and store the class distribution. (b) Online Stage: Unlabeled data is first processed by the feature network to extract features, which are then clustered to assign pseudo-labels. These pseudo-labels are combined with a first round of variational inference to obtain an approximate distribution, then combined with the stored class distribution from the previous session for re-labeling." >4. Across all datasets, we employed the same hyperparameter configuration, including conventional parameters such as epochs and learning rate, as detailed in Appendix C.3. Regarding the fine-tuning hyperparameter λ, we found that the contribution of self-supervision to performance is almost negligible. This is primarily because, compared to self-supervised learning on extremely large-scale pretraining datasets, the fine-tuning dataset has a limited impact on performance, with the cross-entropy loss playing a more crucial role. Therefore, we set λ to a very small value of 0.001. 
As for the early stopping hyperparameter, as discussed in Figure 2, we argue that ensuring equal average covariance between new and old classes can effectively mitigate class bias. Consequently, our early stopping strategy dictates training to be stopped once the covariances of new and old classes are equal (i.e., when R equals 0). We conducted experiments on the CIFAR 100 dataset, focusing on the parameter R, adjusting its value to achieve different trade-offs between model plasticity and stability. #### Comparisons in overall accuracy as R varies ||S1|S2|S3|S4|S5| |----|----|----|----|----|----| |R=3|85\.55|82\.2|79\.38|77\.74|75\.61| |R=2|86\.85|83\.3|80\.2|78\.41|76\.13| |R=1|88\.4|85\.9|83\.41|82\.57|81\.11| |R=0|88\.4|86\.0|83\.61|82\.72|81\.23| |R=\-1|87\.03|83\.75|81\.31|80\.18|77\.91| |R=\-2|87\.01|83\.71|81\.23|79\.94|77\.71| #### Comparisons in the accuracy of old classes as R varies ||S1|S2|S3|S4|S5| |----|----|----|----|----|----| |R=3|91\.62|85\.31|81\.98|79\.17|77\.44| |R=2|91\.32|86\.63|83\.15|80\.01|78\.11| |R=1|90\.18|87\.2|84\.64|82\.81|81\.92| |R=0|89\.46|87\.06|84\.75|82\.85|81\.84| |R=\-1|87\.34|84\.2|81\.91|79\.57|77\.91| |R=\-2|87\.30|84\.10|81\.82|79\.31|77\.56| #### Comparisons in the accuracy of new classes as R varies ||S1|S2|S3|S4|S5| |----|-|-|---|-|-| |R=3|55\.2|63\.5|61\.2|66\.3|59\.1| |R=2|64\.5|63\.3|59\.5|65\.6|58\.3| |R=1|79\.5|78\.1|74\.8|80\.7|73\.8| |R=0|83\.1|79\.6|75\.6|81\.7|75\.7| |R=\-1|85\.5|81\.1|77\.1|85\.1|77\.9| |R=\-2|85\.6|81\.4|77\.1|85\.0|79\.0| The experimental results indicate that the parameter R serves to balance the accuracy between new and old classes, with the overall accuracy being highest when R equals 0. We will include these details in the updated version of the paper. >5. In continual learning, new data is introduced at each session. 
If the backbone network is continuously fine-tuned, the features extracted from the same input using different versions of the network (e.g., h₁ and h₂) will inevitably differ, i.e., h₁(x) ≠ h₂(x). We designed an experiment on CIFAR-100, where we continuously perform self-supervised fine-tuning using data from each session. Subsequently, we evaluate feature drift by conducting supervised learning. |SSL\-finetuned|overall|S0|S1|S2|S3|S4|S5| |--|-|--|---|--|--|--|--| |S0|85\.37|85\.3|83\.0|84\.9|85\.2|90\.1|84\.0| |S1|84\.55|84\.95|81\.9|83\.4|84\.8|86\.7|84\.0| |S2|83\.84|83\.94|82\.0|82\.28|82\.4|88\.0|83\.6| |S3|85\.02|85\.34|81\.8|84\.1|84\.6|89\.1|84\.0| |S4|84\.45|84\.76|82\.5|83\.5|83\.5|88\.1|83\.2| |S5|85\.01|84\.99|81\.9|84\.6|85\.6|88\.6|84\.4| The experimental results indicate that as fine-tuning progresses, the accuracy of each session, as well as the overall accuracy, fluctuates within a 2% range, demonstrating the presence of feature drift. Moreover, this drift does not have a consistently positive impact on accuracy and does not lead to significant performance improvements. Since our classification relies on storing the distribution of old classes, any drift may result in inaccurate classifications. Therefore, we chose to avoid continuously fine-tuning the pre-trained network.
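The feature-drift measurement described above (h₁(x) ≠ h₂(x) once the backbone is fine-tuned further) can be made concrete with a minimal sketch. Everything here is a hypothetical stand-in — the linear "backbones", the probe set, and the `feature_drift` helper are illustrative only and not the authors' implementation:

```python
import numpy as np

def feature_drift(h1, h2, probes):
    """Mean L2 distance between features of the same probe inputs under two
    backbone versions h1 and h2 (illustrative stand-ins, not the paper's code)."""
    f1, f2 = h1(probes), h2(probes)
    return float(np.mean(np.linalg.norm(f1 - f2, axis=1)))

# Toy stand-ins for a backbone before/after one more round of fine-tuning:
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 8))
h_before = lambda X: X @ W
h_after = lambda X: X @ (W + 0.05 * rng.normal(size=W.shape))

probes = rng.normal(size=(32, 16))
assert feature_drift(h_before, h_before, probes) == 0.0  # identical networks: no drift
assert feature_drift(h_before, h_after, probes) > 0.0    # fine-tuning shifts features
```

Any nonzero drift means the stored class distributions computed under the old backbone no longer describe the new feature space, which is the stated reason for freezing the pre-trained network.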
Summary: This manuscript investigates the problem of Continual Generalized Category Discovery, addressing the challenges of mixed new and old categories and high uncertainty in unlabeled data under a continual learning setting. The authors propose a new variational Bayesian framework that utilizes offline fine-tuning and self-correcting re-labeling to mitigate label bias and label noise. By employing covariance-aware early stopping, the framework balances new-category plasticity and old-category stability. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: No Essential References Not Discussed: No Other Strengths And Weaknesses: **Strengths**: 1. The paper is well written, with strong readability and logical clarity. 2. The newly designed framework demonstrates impressive performance, achieving state-of-the-art results. **Weaknesses:** 1. The experimental design is not sufficiently comprehensive. The ablation study only covers the removal of entire modules but does not analyze specific design components within those modules. For instance, the paper mentions using Mahalanobis similarity to account for covariance structures and mitigate the curse of dimensionality. However, the benefit of introducing Mahalanobis similarity into the proposed framework has not been clearly demonstrated. 2. Although the paper states that existing methods can effectively estimate the number of new categories when it is unknown, there are no experiments or analyses for scenarios in which the number of new categories in each session is indeed unknown. Such a setting would be crucial to validate the effectiveness of the proposed framework in real-world conditions. 3. The framework reduces interference by distinguishing between new and old categories, offering an improvement over purely distance-based methods. 
However, the paper lacks a more in-depth comparison with open-set recognition and GCD approaches that also differentiate known from unknown classes (e.g., Extreme Value Theory in open-set recognition and DPN [1] in GCD). I would be happy to increase my score if the authors can resolve all concerns. [1] Generalized Category Discovery with Decoupled Prototypical Network Other Comments Or Suggestions: None Questions For Authors: Please see weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: >1. We conducted an ablation study on the benefit of introducing Mahalanobis distance, and the results obtained by replacing the Mahalanobis with the Euclidean distance are as follows: Datasets|Methods|All\(S0\)|All\(S5\)|Mf|Md --|---------|---|---|----|--- C100|w/oMahalanobis|90\.52|76\.06|10\.1|74\.96 Tiny|w/oMahalanobis|87\.64|70\.33|9\.7|67\.5 IN100|w/oMahalanobis|93\.6|85\.06|8\.04|86\.56 CUB|w/oMahalanobis|83\.78|55\.79|0\.82|28\.15 Compared to VB-CGCD with Mahalanobis distance, the overall final accuracy decreased by approximately 5.56 on average. This is because Euclidean distance, being a special case of Mahalanobis, is susceptible to collapse in high-dimensional spaces. By incorporating covariance information, Mahalanobis distance effectively mitigates this issue, thereby achieving a more optimal classification boundary. We appreciate your valuable feedback and will incorporate this aspect into the revised version of our paper. > 2. Since our proposed approach primarily serves as an incremental learning classifier and is decoupled from the clustering module, it allows integration with any clustering method (including those that estimate the number of categories). As mentioned in the paper, there are many off-the-shelf methods [1][2] available for estimating the number of categories. Nonetheless, we employed the classic silhouette score to implement a method for estimating the number of unknown classes, which we integrated into VB-CGCD. 
The results are as follows: |Datasets|S0|S1|||S2|||S3|||S4|||S5||| |--|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----|----| ||All|All|Old|New|All|Old|New|All|Old|New|All|Old|New|All|Old|New| |C100|91\.68|82\.48|87\.2|58\.9|79\.08|82\.31|59\.7|76\.25|78\.90|57\.7|75\.24|76\.07|68\.6|73\.13|74\.94|56\.8| |Tiny|88\.32|85\.55|86\.82|79\.2|82\.41|84\.01|72\.8|79\.87|81\.41|69\.1|77\.20|78\.82|64\.2|74\.77|76\.34|60\.6| |IN100|94\.32|91\.93|93\.12|86\.0|89\.45|90\.43|83\.6|88\.6|88\.28|90\.8|86\.11|87\.62|74\.0|84\.56|85\.08|79\.8| |CUB|85\.72|79\.98|83\.68|60\.91|65\.98|67\.42|57\.41|63\.91|65\.94|49\.47|57\.54|58\.24|51\.76|54\.40|57\.54|25\.61| Due to errors in estimating the number of unknown classes, the clustering error is exacerbated, ultimately leading to an average overall accuracy reduction of approximately 5.66. Nonetheless, this demonstrates that our method scales to scenarios with an unknown number of classes, and our performance still outperforms the SOTA method, HAPPY, which also uses the silhouette score. |Methods|All|Old|New| |- |--- |--- |--- | |HAPPY|68.80|72.40|45.74| |VB-CGCD|77.23|79.88|60.34| [1] DeepDPM: Deep Clustering with an Unknown Number of Clusters. Meitar Ronen, Shahaf E. Finder, Oren Freifeld. CVPR 2022. [2] PromptCCD: Learning Gaussian Mixture Prompt Pool for Continual Category Discovery. Fernando Julio Cendra, Bingchen Zhao, Kai Han. ECCV 2024. > 3. Our approach distinguishes between known and unknown classes by approximately estimating the unknown distribution, thereby effectively reducing the interference of known classes on the learning of new ones—a strategy that is similar to certain open-set methods. However, there are two key differences between GCD and CGCD: 1) GCD does not need to address the forgetting problem inherent in continual learning; 2) In the unsupervised learning phase of CGCD, supervised data is inaccessible, making it unsuitable for semi-supervised approaches. 
For example, during training, DPN relies on using L_{known} as a part of the loss function, which renders it inapplicable in CGCD scenarios. Moreover, methodologically, DPN uses the mean as its prototype; however, in high-dimensional spaces, this approach is prone to feature collapse. In contrast, utilizing a Gaussian distribution is more robust. We will further emphasize this distinction in the related work section. Additionally, we evaluated VB-CGCD on the same datasets which were evaluated in DPN ‘s paper, and the results are shown below. | | banking | | | stackoverflow | | | clinc | | | |-------|--------|-------------|---------------------|----------------------|---------------------|---------------------------|---------------------|---------------------|--------------------| | | All | Known | Novel | All | Known | Novel | All | Known | Novel | | DPN | 72\.96 | 80\.93 | 48\.60 | 84\.23 | 85\.29 | 81\.07 | 89\.06 | 92\.97 | 77\.54 | | VB\-CGCD | 75\.55 | 82\.88 | 53\.15 | 84\.2 | 83\.73 | 85\.6 | 90\.97 | 95\.16 | 78\.19 | The experimental results indicate that VB-CGCD achieves performance comparable to DPN. However, it is important to note that VB-CGCD does not need access to any labeled data during the unsupervised learning phase, which is a distinct difference from DPN.
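As a minimal illustration of why the covariance-aware Mahalanobis distance discussed in this rebuttal can beat the Euclidean distance (its identity-covariance special case), here is a hypothetical sketch — the function names and the two-Gaussian toy setup are illustrative, not the VB-CGCD implementation:

```python
import numpy as np

def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance of x to a class Gaussian (mean, cov)."""
    diff = x - mean
    return float(diff @ np.linalg.inv(cov) @ diff)

def classify(x, class_stats, metric="mahalanobis"):
    """Assign x to the class with the smallest distance.

    class_stats: dict label -> (mean, cov). With metric="euclidean" the
    covariance is replaced by the identity, i.e. the special case above.
    """
    dists = {}
    for label, (mean, cov) in class_stats.items():
        if metric == "euclidean":
            cov = np.eye(len(mean))
        dists[label] = mahalanobis_sq(x, mean, cov)
    return min(dists, key=dists.get)

# Class A is strongly elongated along the first axis; class B is tight.
stats = {"A": (np.array([0.0, 0.0]), np.diag([25.0, 0.01])),
         "B": (np.array([6.0, 0.0]), np.diag([0.01, 0.01]))}
x = np.array([4.0, 0.0])
assert classify(x, stats, metric="euclidean") == "B"    # mean distance misleads
assert classify(x, stats, metric="mahalanobis") == "A"  # covariance corrects it
```

The point `x` lies well inside class A's elongated spread but is nearer to class B's mean; only the covariance-aware distance recovers the right label, mirroring the ablation gap reported above.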
Adaptive Estimation and Learning under Temporal Distribution Shift
Accept (poster)
Summary: This paper focuses on estimation and learning on time-series data in the presence of temporal distribution shifts. The authors propose a wavelet soft-thresholding estimator that optimally estimates the ground truth sequence under unknown shifts and provide theoretical error bounds for their method. Their approach generalizes existing research by linking the sequence’s non-stationarity to sparsity in the wavelet domain. The paper also applies this estimator to binary classification under distribution shifts and establishes its connection to total-variation denoising. The authors conduct experiments on synthetic data to validate their proposed method. Claims And Evidence: The authors made several claims in their paper. However, some claims are not well supported: - The authors claimed that using higher-order wavelets can achieve better performance. However, their theoretical analysis and algorithm do not provide a principled method for choosing the optimal wavelet transform. - The authors claimed that their algorithm achieves better computational efficiency in the binary classification setting compared to existing works. However, no empirical evidence is provided to support it. - The authors claimed the superior performance of their proposed wavelet-denoising based algorithms in estimating the ground-truth, compared to prior works. However, they lack real-world validation for it. Methods And Evaluation Criteria: - The authors propose a wavelet-denoising algorithm (Algorithm 1) for the time-series estimation problem. However, it is not clear how to apply this algorithm to the binary classification problem in Theorem 9, given that Algorithm 1 only takes Y as input rather than both X and Y. More explanation is needed. - The algorithm does not provide a principled method for choosing the optimal wavelet transform. While Haar wavelets work well in many cases, higher-order wavelets may be needed for complex trends, but their selection remains ad hoc. 
- The algorithm uses fixed soft-thresholding for wavelet denoising, which may not always be optimal. - The algorithm assumes that the ground truth sequence has sparse wavelet coefficients. However, it's hard to verify this assumption in practical applications. - The temporal shift setting the algorithm considers is limited. Specifically, it does not continuously update its model as new data arrives (online learning). Theoretical Claims: - The theoretical analysis for the binary classification setting (Section 3) provides no new insights compared to existing work in the domain adaptation literature. In particular, the fact that the error is bounded by the distance between the joint data distributions of the training and testing datasets is widely known. - While the paper claims that higher-order wavelets (e.g., Daubechies wavelets) can improve estimation, it does not provide a theoretical comparison between different wavelet families. - The analysis is limited to a specific type of temporal shift, as the authors only consider scenarios where the training and testing data are modeled as a linear combination of two base distributions. Experimental Designs Or Analyses: - The authors primarily validate their method on synthetic datasets (e.g., Random and Doppler signals) rather than real-world time series data such as financial market data, climate trends, or network traffic patterns. This limits the generalizability of their findings to practical applications where temporal distribution shifts may be more complex and unpredictable. - The study focuses on classical estimation and statistical learning techniques, omitting comparisons with modern deep learning approaches that incorporate adaptive architectures for handling temporal distribution shifts (e.g., transformer models or recurrent neural networks). - The authors leverage higher-order wavelets (i.e., Daubechies-8) but do not thoroughly explore how different wavelet families affect estimation performance. 
An ablation study needs to be conducted to explore this issue. Moreover, there is no discussion on when Haar wavelets suffice versus when higher-order wavelets provide advantages, leaving an open question about optimal wavelet selection. - The impact of hyperparameter selection requires further discussion. Specifically, the authors rely on a fixed soft-thresholding approach for wavelet denoising but do not investigate how adaptive tuning could enhance performance. - An experiment with the binary classification setting is highly recommended to demonstrate the utility of the proposed method. Supplementary Material: I reviewed the supplementary material and found that the authors did not include code or data for reproducibility. Relation To Broader Scientific Literature: This paper falls within the fields of domain adaptation and time-series estimation, tackling challenges related to temporal distribution shifts in machine learning. However, its technical contributions and advantages over existing literature remain unclear. Please refer to other sections for further details. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Please see the above sections. Other Comments Or Suggestions: Please see the above sections. Questions For Authors: Please see the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Experiments on real dataset** Please see the response to Reviewer c9dT under the same comment title. **How to do optimal wavelet selection.** This is a well-known model selection problem in wavelet denoising, not a drawback unique to our work. We acknowledge this in Section 7. Practitioners typically analyze data trends—if they follow a piecewise polynomial of degree $k$, a wavelet of order $k+1$ is chosen, as it effectively models such structures. However, higher-order wavelets introduce numerical instabilities and variance, though the latter can be mitigated with more data. Selecting a wavelet basis is akin to choosing a kernel in Gaussian Processes—application-specific and guided by practical intuition. A more comprehensive exposition can be obtained from [5]. **Theoretical justification for using higher order wavelets** Please see the comment to Reviewer 4jCD under the same comment title. **Computational efficiency in comparison to prior work (Mazzetto and Upfal, 2023)** Applying the algorithm from (Mazzetto and Upfal, 2023) requires $O(\log n)$ calls to an ERM oracle, while our method requires exactly one call, as detailed in Section 4. This is the reason for the improved computational efficiency. **How to use Algorithm 1 for binary classification?** Please refer to the statement of Theorem 9. We compute a refined estimate of a model's loss at the most recent timestamp. For this, the loss sequence $\ell(f(x_n),y_n),\ldots,\ell(f(x_1),y_1)$ is given as input to Algorithm 1. ERM is performed using the returned loss estimate. **Algorithm uses fixed soft-thresholding for wavelet denoising, which may not always be optimal** The guarantee in Theorem 2 is obtained via universal soft-thresholding based denoising. The bound is known to be minimax optimal in light of (Mazzetto and Upfal, 2023). 
**The algorithm assumes that the ground truth sequence has sparse wavelet coefficients but it's hard to verify this assumption** We would like to point out that such an assumption is not made. Instead, the bounds naturally adapt to the sparsity of the wavelet coefficients, and hence to the degree of non-stationarity of the groundtruth, without any prior knowledge. **The temporal shift setting the algorithm considers is limited and not applicable to online learning...** We clarify that our setup is offline: given all data $y_n, \dots, y_1$, the goal is to estimate the ground truth at $t=1$. For online settings, Algorithm 1 can be used iteratively to update estimates over time. The multi-resolution nature of wavelets enables efficient updates with $O(\log n)$ complexity per round. Unlike typical online methods with cumulative regret guarantees, our approach provides stronger per-round estimation error guarantees. **Study omits comparisons with transformers and RNN architectures** Our method operates in a data-scarce regime. For example, to estimate the groundtruth at timestamp 50 we have just 50 data points. Advanced models like transformers and RNNs excel with abundant data. Hence, we compare against methods suited for low-data settings that are also known to have robust theoretical guarantees on estimation error. **Results on adaptive tuning schemes for thresholding** As per the reviewer’s suggestion, we conducted experiments on other well-studied thresholding schemes, namely SUREShrink [1] and an energy-based thresholding heuristic [3]. The MSE results are reported below for the Doppler signal and different wavelet types. The name in brackets indicates the thresholding scheme. Similar observations hold true for the Random ground-truth signal. 
| Noise Level | Haar (sure) | Haar (energy) | Haar (soft) | |-------|------------|--------------|-------------| | 0.2 | 0.0376 ± 0.0023 | 0.0210 ± 0.0010 | 0.053 ± 0.0017 | | 0.3 | 0.0400 ± 0.0031 | 0.0265 ± 0.0015 | 0.056 ± 0.0018 | | 0.5 | 0.0490 ± 0.0044 | 0.0456 ± 0.0024 | 0.065 ± 0.0018 | | 0.7 | 0.0653 ± 0.0051 | 0.0726 ± 0.0032 | 0.072 ± 0.0031 | | 1.0 | 0.0938 ± 0.0055 | 0.1265 ± 0.0054 | 0.088 ± 0.0035 | | Noise Level | DB8 (sure) | DB8 (energy) | DB8 (soft) | |-------|------------|--------------|-------------| | 0.2 | 0.0190 ± 0.0013 | 0.0195 ± 0.0007 | 0.0204 ± 0.0007 | | 0.3 | 0.0257 ± 0.0017 | 0.0274 ± 0.0010 | 0.0265 ± 0.0010 | | 0.5 | 0.0464 ± 0.0029 | 0.0527 ± 0.0015 | 0.0444 ± 0.0016 | | 0.7 | 0.0760 ± 0.0054 | 0.0904 ± 0.0023 | 0.070 ± 0.0021 | | 1.0 | 0.1406 ± 0.0090 | 0.1697 ± 0.0035 | 0.129 ± 0.0058 | This experiment shows that some thresholding schemes may outperform universal soft thresholding empirically, but no single method is best overall. While SUREShrink is known to achieve minimax MSE optimality in non-parametric regression, its extension to point-wise bounds remains unclear. We will include these insights in the manuscript. **References** [1] https://www.jstor.org/stable/2291512 [3] https://digital-library.theiet.org/doi/full/10.1049/iet-smt.2016.0168 [5] https://www.sciencedirect.com/book/9780123743701/a-wavelet-tour-of-signal-processing
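As a rough sketch of the universal soft-thresholding pipeline discussed in this thread, here is a pure-NumPy Haar version. The threshold constant follows the rebuttal's $\tau = \sigma\sqrt{\log n}$ (classical references often use $\sigma\sqrt{2\log n}$), and all function names are illustrative rather than the paper's actual Algorithm 1:

```python
import numpy as np

def haar_forward(y):
    """Orthonormal Haar wavelet transform (len(y) must be a power of two)."""
    coeffs, a = [], np.asarray(y, dtype=float)
    while len(a) > 1:
        avg = (a[0::2] + a[1::2]) / np.sqrt(2)  # approximation coefficients
        det = (a[0::2] - a[1::2]) / np.sqrt(2)  # detail coefficients
        coeffs.append(det)
        a = avg
    coeffs.append(a)  # coarsest average last
    return coeffs

def haar_inverse(coeffs):
    """Invert haar_forward exactly."""
    a = coeffs[-1]
    for det in reversed(coeffs[:-1]):
        out = np.empty(2 * len(det))
        out[0::2] = (a + det) / np.sqrt(2)
        out[1::2] = (a - det) / np.sqrt(2)
        a = out
    return a

def soft_threshold(x, tau):
    """sign(x) * max(|x| - tau, 0), the universal soft-thresholding rule."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def denoise(y, sigma):
    """Shrink detail coefficients at tau = sigma * sqrt(log n), keep the coarsest average."""
    tau = sigma * np.sqrt(np.log(len(y)))
    coeffs = haar_forward(y)
    shrunk = [soft_threshold(c, tau) for c in coeffs[:-1]] + [coeffs[-1]]
    return haar_inverse(shrunk)
```

To mimic Algorithm 1's role of estimating the ground truth at the most recent timestamp, one would read off the corresponding entry of the denoised sequence; with `sigma = 0` the pipeline reduces to the identity, which is a handy sanity check.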
Summary: This paper investigates the problem of learning under temporal distribution shift, where the task is to estimate the ground truth related to the last observation under minimal stationarity assumptions. They prove new bounds on existing versions of a soft-thresholding algorithm for the problem, and translate these findings to general classification and learning problems under temporal distribution shift. The key contribution of the work is that their bounds depend on the sum of the wavelet coefficients, which allow for fast rates when the sparsity level is high. Claims And Evidence: The claims are supported by clear and convincing evidence. I would appreciate more insight into the proof techniques for each of the Lemmas, but the results themselves are convincing. Methods And Evaluation Criteria: Synthetic datasets are provided, and the paper’s approach is shown to outperform the state of the art in a number of cases. The paper could benefit from a real data experiment or experimental application. Theoretical Claims: I did not check the correctness of proofs, and the lack of proof ideas/techniques makes it hard to verify the accuracy of proofs in the main body. The results themselves are convincing and not completely surprising, leading me to believe that the theoretical claims are not overstated. Experimental Designs Or Analyses: The experimental methodology seems sound. Supplementary Material: No Relation To Broader Scientific Literature: The paper considers an important and foundational problem in statistics and machine learning, and relates its findings to previous related works on the Total Variation Denoising problem tackled by Van de Geer and more immediately related works such as Mazzetto and Upfal 2023. The paper builds upon these previous works and expands the literature on the problem by proving new upper bounds on the risk of estimation under temporal distribution shift, and highlight the adaptive nature of such wavelet based methods. 
The main contribution of the paper as per my understanding is the precise quantification of the role of sparsity in the accuracy of these wavelet-based methods. It does not, as far as I know, propose an entirely new methodology or algorithm, so its contribution is mainly a useful theoretical one to better understand existing algorithms in the field. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: I think the paper is mostly clear, with a few minor drawbacks related to clarity: - Algorithm 1, what is H? not defined - Thm 2 “syetm” - Lemma 4: not clear whether this is an already known result, this appears very foundational, please cite if it came from somewhere else or state that this is a previously unknown result and an original contribution. - Def 6: I do not understand the last line, could you explain that more clearly? WIth the T1=Ts if T = Tr, etc. - Paragraph before section 4 is not clear to me, with a few spelling/grammar mistakes. - Theorem 9 typo “defined obtain” - First paragraph of section 5, “has been not uncovered” typo/unclear Other Comments Or Suggestions: N/A Questions For Authors: 1) Below the definition of the total variation class, you define the alternate sequence penalising the sum of squared differences, but the example you give is not the sum of squared differences right? Is this a typo? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. Please see the responses below. **Experiments on real dataset** As an application of our proposed methods, we conduct a model selection experiment using real-world data. We evaluate our method on data from the Dubai Land Department (https://www.dubaipulse.gov.ae/data/dld-transactions/dld_transactions-open), following the setup identical to that of [2]. The dataset includes apartment sales from January 2008 to December 2023 (192 months). Each month is treated as a time period, where the goal is to predict final prices based on apartment features. Data is randomly split into 20\% test, with two train-validation splits: (a) 79\%-1\% and (b) 75\%-5\%. For each month $t$, we train Random Forest and XGBoost models using a window of past data where we consider window sizes $w \in [1,4,16,64,256]$, yielding 10 models per month. Validation MSEs from past and current months are used to refine the current month’s estimate of MSE via Algorithm 1 or ARW from [2]. The refined validation scores are used to select the best model for final MSE evaluation on test data. We report the average MSE of this model selection scheme over 192 months and 5 independent runs, comparing our method to ARW [2]. In the table below, HAAR and DB8 are versions of Algorithm 1 with the corresponding wavelet basis given as input. Case 1: 79\%-1\% Train-Validation split for each month -- | Method | MSE ± Std. Error | |---------|------------------| | **ARW (from [2])** | $0.079 \pm 0.0005$ | | **HAAR (ours)** | $0.0722 \pm 0.0006$ | | **DB8 (ours)** | $0.0762 \pm 0.0002$ | Case 2: 75\%-5\% Train-Validation split for each month -- | Method | MSE ± Std. Error | |---------|------------------| | **ARW (from [2])** | $0.0719 \pm 0.0005$ | | **HAAR (ours)** | $0.0736 \pm 0.0011$ | | **DB8 (ours)** | $0.0768 \pm 0.0008$ | We see that wavelet based methods shine especially when the validation data is scarce. 
This allows us to include more data for training while still obtaining high-quality estimates of validation scores. Such a property can be especially helpful in data-scarce regimes. Unlike the synthetic data experiments, here we find that Haar wavelets perform better than DB8. This can be attributed to the following facts: i) the noise in the observations departs from the iid sub-Gaussian assumption; and ii) the high degree of non-stationarity in the pricing data, as indicated by Fig.6(b) in [2], causes the underlying trends to have a low-degree piecewise-polynomial structure. This makes the groundtruth irregular (or less smooth), which can be suitably handled by lower-order Haar wavelets, which are themselves less smooth and more abrupt (see Fig. 2 in Appendix). **Lemma 4: please cite if it came from somewhere else or state that this is a previously unknown result** We have mentioned in the proof of Lemma 4 that it is adapted from (Achille and Soatto, 2018). For clarity, we will add the phrase “(adapted from Achille and Soatto, 2018)” in the Lemma statement itself. **Def 6: I do not understand the last line, could you explain that more clearly? With the T1=Ts if T = Tr, etc** Thanks for pointing this out! There is a typo and the sentence should be “Here T represents training data T≡T_r (or testing data T≡T_s) and T1 ≡ T_s (or T1 ≡ T_r).” We will revise the manuscript accordingly. We will also add the following explanation into the manuscript. “Specifically, under the Training Distribution Shift Scenario, the training data T≡T_r consists of (1) samples from the testing distribution T_1≡T_s, and (2) samples from a dissimilar distribution T_2. When more and more dissimilarly-distributed samples from T_2 are added into the training distribution, this leads to a higher dissimilarity ratio $\beta$. 
Under the Testing Distribution Shift Scenario, the testing data T≡T_s consists of (1) samples from the training distribution T_1≡T_r, and (2) samples from a dissimilar distribution T_2. When more and more dissimilarly-distributed samples from T_2 are added into the testing distribution, this leads to a higher dissimilarity ratio $\beta$.” **Typos** Thanks for pointing out the typos. We will fix them. $H$ in Algorithm 1 should be $W$, the wavelet transform matrix. The example below the definition of the TV class must be the sum of squared differences, as the reviewer pointed out. This is a typo that will be corrected. **References** [1] Adapting to Unknown Smoothness via Wavelet Shrinkage, David L. Donoho and Iain M. Johnstone, Journal of the American Statistical Association, 1995 [2] Model Assessment and Selection under Temporal Distribution Shift, Elise Han, Chengpiao Huang, and Kaizheng Wang, ICML 2024 --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I found the real data experiment useful and a good addition to the paper. I will maintain my score.
Summary: The paper studies the problem of temporal distribution shift, by formulating the problem as parameter estimation in a univariate non-stationary sequence with sub-Gaussian noise. The authors propose a wavelet denoising approach for this estimation problem, and theoretically upper bound its error rate. The authors then show how to use this algorithm as a subroutine for learning using ERM. Finally, the authors show that their method outperforms a sliding window baseline from prior work on synthetic data. ## update after rebuttal I thank the authors for their rebuttal. Most of my concerns have been addressed, except for the limitations of the univariate setting. I will keep my score. Claims And Evidence: 1. The authors empirically show that higher order wavelets (DB8) outperform the Haar wavelets. However, there does not seem to be theoretical justification for why this happens. Having a toy example to demonstrate this would also be helpful. 2. The authors study a very simple system with a univariate time series. How would the theorems and algorithm be extended to the multivariate setting? 3. The authors hint (at the end of Section 4) that it is possible to solve the ERM problem using a differentiable surrogate loss, but this is not expanded on further methodologically or empirically, and backpropagating through Algorithm 1 may be non-trivial. Methods And Evaluation Criteria: 1. The authors have only empirically evaluated the estimation setting (Section 2) in their experiments, but not the learning setting (Section 4). The authors should evaluate their method on the learning setting as well, particularly when the hypothesis class is infinite and gradient-based methods are required (as the authors discuss at the end of Section 4). 2. The authors have only tested on synthetic data with fairly simple ground truth signals. It would be interesting to test the learning setting on real time-series such as those in the Wild-Time dataset. 
Theoretical Claims: I did not check the proofs of the theorems. Experimental Designs Or Analyses: Please see "Methods And Evaluation Criteria" above. Supplementary Material: I reviewed the related works (Appendix A) and the additional empirical results (Appendix E). Relation To Broader Scientific Literature: The authors study a variant of the problem proposed in Hanneke and Yang (2019) for learning in the nonstationary sequential setting. Their primary baseline is the work by Mazzetto and Upfal (2023), which proposes a sliding window algorithm. The authors propose an algorithm based on wavelet transforms which has a long history in time series analysis. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: Typos: "syetm" on L147, "gven" on L254 Questions For Authors: 1. Could the authors give some intuition on how they are able to achieve Lemma 1 and Theorem 2 without any assumptions on $\theta_i$, e.g. bounds on $|\theta_i|$ or $|\theta_{i+1} - \theta_i|$? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. Please see the responses below.

**Why DB8 can outperform Haar wavelets in the synthetic experiments**
The main reason why higher order wavelets can outperform Haar is that $(k+1)$-th order wavelets provide an ideal basis for sparsely compressing the information in a groundtruth signal that has a piecewise polynomial trend with degree up to $k$. A sparse set of wavelet coefficients of the groundtruth signal leads to a low value for the bound in Lemma 1. We attempt to capture this intuition empirically as demonstrated in Fig. 4 in the Appendix and pointed the readers to that figure in the main text itself (L165, col 1).

**Theoretical justification for using higher order wavelets**
A wavelet basis of order $k+1$ is known to be an ideal basis for sparsely representing piecewise polynomials of degree (up to) $k$, or more generally groundtruths with low $k$-th order total variation. The $k$-th order total variation measures the variation incurred in the $k$-th order (discrete) derivatives of a groundtruth. It is known in the literature that low $k$-th order TV for a groundtruth is equivalent to sparse (measured in the sense of the L1 norm) wavelet coefficients in a $(k+1)$-th order wavelet basis. For a precise quantification see Theorem 1 in [4].

**Extension to the multivariate setting**
We acknowledge that we study the estimation in a univariate setting. Extension of our point-wise estimation proposal to multivariate time series that takes into account inter-task correlations is an interesting direction to explore as future work.

**Experiments on real dataset**
Please see the response to Reviewer c9dT under the same comment title.

**Intuition behind achieving Lemma 1 and Theorem 2 without assuming prior bounds on $|\theta_i|$ or $|\theta_{i+1} - \theta_i|$**
We first address informally why no prior bound on $|\theta_i|$ is required.
The wavelet soft-thresholding estimator applies a soft-threshold at the level of $\tau := \sigma \sqrt{\log n}$. Suppose $\alpha_i$ is a wavelet coefficient of the groundtruth data. Consider the case when $|\alpha_i| > \tau$. Then the shrinkage caused by soft-thresholding only introduces a bias of at most $\tau$. Similarly, when $|\alpha_i| \le \tau$, the bias is capped at $|\alpha_i|$ itself. This intuition can be extended to noisy wavelet coefficients up to constants using concentration arguments.

Even though we do not assume any prior knowledge on $|\theta_{i+1} - \theta_i|$, note that the bound in Theorem 2 naturally gets worse when there is a lot of intra-sequence total variation. We remark that in the bound, $\bar{\theta}_{1:t} = (\theta_1 + \ldots + \theta_t)/t$. Hence higher intra-sequence total variation leads to larger values for the bound. The highlight here is that the bound (which is minimax optimal as per the results of Mazzetto and Upfal 2023) in Theorem 2 is obtained with no such prior knowledge on the intra-sequence variation. The reason for attaining such an adaptive bound is primarily algebraic, as highlighted in the proof of Theorem 2.

**References**
[4] Minimax Estimation via Wavelet Shrinkage, David L. Donoho and Iain M. Johnstone, Annals of Statistics, 1998
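The soft-thresholding pipeline discussed in this rebuttal (transform, threshold at $\tau = \sigma\sqrt{\log n}$, reconstruct) can be sketched in pure Python with the Haar system. This is a minimal illustration only; the function names and toy signal are not from the paper:

```python
import math

def haar_dwt(x):
    """Orthonormal Haar transform of a length-2^m list.
    Output order: [coarse average, coarse details, ..., finest details]."""
    coeffs, a = [], list(x)
    while len(a) > 1:
        avg = [(a[2*i] + a[2*i+1]) / math.sqrt(2) for i in range(len(a) // 2)]
        det = [(a[2*i] - a[2*i+1]) / math.sqrt(2) for i in range(len(a) // 2)]
        coeffs = det + coeffs  # finer details go toward the back
        a = avg
    return a + coeffs

def haar_idwt(c):
    """Inverse of haar_dwt."""
    a, rest = c[:1], c[1:]
    while rest:
        det, rest = rest[:len(a)], rest[len(a):]
        a = [v for av, d in zip(a, det)
             for v in ((av + d) / math.sqrt(2), (av - d) / math.sqrt(2))]
    return a

def soft(x, lam):
    """Soft-thresholding operator: sign(x) * max(|x| - lam, 0)."""
    return math.copysign(max(abs(x) - lam, 0.0), x)

def wavelet_denoise(y, sigma):
    """Threshold all detail coefficients at sigma * sqrt(log n), the level
    discussed above, keeping the coarse average untouched."""
    lam = sigma * math.sqrt(math.log(len(y)))
    c = haar_dwt(y)
    return haar_idwt(c[:1] + [soft(v, lam) for v in c[1:]])
```

On a piecewise-constant ground truth, most detail coefficients of the clean signal are exactly zero, so thresholding removes mostly noise; this sparsity is what drives the bound in Lemma 1.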
Summary: Given noisy observations of independent but non-identical random variables, the authors consider the problem of estimating the most recent ground truth. A key insight of the paper is that although the ground truth sequence may be non-stationary in the time domain, its wavelet transform reveals a sparse structure. Leveraging this sparsity, the authors propose a wavelet soft-thresholding (denoising) algorithm that automatically adapts to the level of temporal shift. In addition to providing pointwise error bounds for this estimator, the paper extends the ideas in two important directions:
- It analyzes the effect of temporal distribution shift on the performance of machine learning models via upper and lower bounds on the loss function.
- It applies the estimation method to binary classification under distribution shift, developing a computationally efficient ERM-based algorithm that uses a single call to an ERM oracle yet achieves near-optimal excess risk bounds.

Finally, the paper draws connections between its estimation error guarantees and the classical problem of total-variation (TV) denoising, which shows that any algorithm achieving similar pointwise error guarantees is minimax optimal for TV-denoising.

Claims And Evidence: Yes, I found the claims to be well supported.

Methods And Evaluation Criteria: I found the methodology to be well formulated. Specifically, the authors use the following set of ideas:
- Wavelet Representation: Although the ground truth $\theta_1, \dots, \theta_n$ may be nonstationary in the time domain, its wavelet transform reveals a sparse structure; only a few key wavelet coefficients carry most of the information.
- Wavelet Denoising Algorithm: The proposed algorithm (Algorithm 1) proceeds as follows:
  1. Wavelet Transform: Compute the empirical wavelet coefficients: $\tilde{\beta} = W y$ where $y = [y_n, \dots, y_1]^T$ and $W$ is the wavelet transform matrix.
  2. Soft-thresholding: Apply soft-thresholding to the coefficients: $\hat{\beta} = T_\lambda(\tilde{\beta}), \quad \text{with} \quad T_\lambda(x) = \operatorname{sign}(x)\max\{|x| - \lambda, 0\}.$
  3. Reconstruction: Obtain the denoised signal by the inverse wavelet transform: $\hat{\theta} = W^T \hat{\beta}.$

  The final estimate $\hat{\theta}_1$ is the last coordinate of $\hat{\theta}$.

Theoretical Claims: I did not check the proofs in detail, but the results are believable given prior classic work.
- Pointwise Error Bound: For the Haar wavelet system, the paper shows that with high probability, $|\hat{\theta}_1 - \theta_1| \leq \kappa \cdot U(r^*)$, where $U(r^*)$ reflects the local bias-variance trade-off of the data, and $\kappa$ is a constant (depending on logarithmic factors).
- Adaptivity via Sparsity: The error bound is directly linked to the sparsity of the wavelet coefficients. Using higher order wavelet systems (e.g., Daubechies DB8) can capture more complex trends, potentially leading to even sharper error bounds.
- Excess Risk for Classification: The approach is extended to binary classification under temporal distribution shift. By employing wavelet-based loss estimates, the authors design an empirical risk minimization (ERM) procedure that achieves near-optimal excess risk bounds with only one ERM call.
- Optimality for Total Variation Denoising: The paper proves that any algorithm satisfying a pointwise error bound like that of their wavelet estimator is minimax optimal for the TV-denoising problem (up to logarithmic factors).

Experimental Designs Or Analyses: The algorithm performs well on the simulations. Although the baselines are better on a certain class of ground-truth signals, the proposed approach looks reasonably robust (I suspect the performance drop is due to the ground truth signal not being smooth enough).

Supplementary Material: I didn't check it in detail.
Relation To Broader Scientific Literature: - I think this paper makes a significant contribution to the theory of learning under distribution shifts by leveraging classical techniques from Donoho. - The error guarantees are pointwise, which is stronger than typical cumulative error metrics in online learning. - The algorithm is also computationally feasible as it can leverage fast Wavelet transforms. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: - The rate in Theorem 11 has $n^{\frac{1}{3}}$ for the square loss and $n^{\frac{2}{3}}$ for the absolute error. Have these been interchanged? - The paragraph below corollary 8 needs to be rewritten. There are many missing details and typos, such as "When $\beta$, the lower bound reduces to the conclusion that the loss is larger than 0" Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for correctly recognizing and appreciating our contributions. **Rates in Theorem 11** The displayed rates are indeed the optimal rates for squared and absolute losses. **Paragraph below corollary 8** We will fix the typos and rewrite it for better clarity.
Summary: In this paper, the authors study the problem of estimation and learning under temporal distribution shift. In simple words (of the author), they consider an estimation task where $n$ independently drawn observations are given by $y_n, \ldots, y_1$ with $\mathbb{E}[y_i] = \theta_i$. The goal is to construct an estimator for $\theta_i$. The authors do not assume that the observations are identically distributed.

Claims And Evidence: The authors claim that the wavelet-denoising-based algorithm using the Haar transform achieves the optimal point-wise estimation error guarantees; this has been proved for subGaussian noise, which is a favourable case. It has not been demonstrated for other distributions. The authors demonstrate the effect of temporally shifted distributions in ML settings. However, this is done under assumptions of an infinite number of samples from the training data for perfect model training, and perfect ML training is considered, which largely dilutes the problem in the ML setting.

Methods And Evaluation Criteria: Experimental results are most lacking in this work. Only synthetic datasets have been used. Demonstration on real-world data sets that have temporally shifted data is useful. Baselines are not provided and all results are based on ablations.

Theoretical Claims: Some of the proofs were checked and the following are the issues: 1. Lemma 1 holds for a ground truth that has a linear representation using wavelet coefficients. What happens to the upper bound when such a linear representation does not exist? More specifically, it is essential that the authors list the set of assumptions prior to analysis. Several constants in Lemma 1 are not defined. Further, $H$ in the algorithm table is not defined. 2. In Theorem 2, what is $\bar\theta$? 3. Lemma 4 and its proof are not clear. The proof is adapted from (Achille and Soatto, 2018) and several notations are not defined.

Experimental Designs Or Analyses: NA.

Supplementary Material: Some parts of the proof were reviewed.
Relation To Broader Scientific Literature: The problem presented by the authors is very interesting as it takes an alternate route to analysing temporally shifted data. One way to model such data would be to assume stationarity and fit a random process. However, this work does not assume stationarity or identical distributions. The modelling is achieved using wavelet transforms, which is a more general formulation. The stationarity of the sequence in turn affects the sparsity of the wavelet coefficients.

Essential References Not Discussed: NA

Other Strengths And Weaknesses: The problem addressed is interesting and important connections between the statistical literature and the machine learning formulation have been made. However, several simplifying assumptions hinder the true potential of the work. The major weakness of the work is the experimental results, as detailed above.

Other Comments Or Suggestions: Some additional comments are as follows: 1. Here, the data is not modeled as a random process. Since it is a closely related technique, would it be appropriate to introduce literature in this direction? 2. In Lemma 1, the subGaussian assumption usually helps achieve faster convergence. What happens under other probabilistic assumptions on $\epsilon$? 3. Lemma 1 holds for a ground truth that has a linear representation using wavelet coefficients. What happens to the upper bound when such a linear representation does not exist? More specifically, it is essential that the authors list the set of assumptions prior to analysis. 4. Several constants in Lemma 1 are not defined. Further, $H$ in the algorithm table is not defined. 5. In Theorem 2, what is $\bar\theta$? 6. Lemma 4 and its proof are not clear. The proof is adapted from (Achille and Soatto, 2018) and several notations are not defined. 7. This work attempts to connect the statistical formulation to that of machine learning. However, the main essence of such an extension arises in how the dataset challenges are handled.
However, assumptions of an infinite number of samples from the training data for perfect model training, and of perfect ML training, are considered, which largely dilutes the problem in the ML setting.

Questions For Authors: Same as above.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. Please see the responses inline.

**Experiments on real dataset**
Please see the response to Reviewer c9dT under the same comment title.

**Experiments are based on ablation**
The ablation on noise level was conducted primarily to test the validity of the algorithm for various signal-to-noise levels.

**Existence of Linear Representation of Ground Truth**
Wavelet bases are universal bases for $\mathbb{R}^n$. This means that they are of full rank and any ground-truth sequence in $\mathbb{R}^n$ can be written as a linear combination of the basis vectors. So the existence of such a linear combination is not an assumption, but a structural property that always holds true.

**Extending to other noise distributions**
In this work we primarily focused on subgaussian noise, as the main motivation for our methods is to get high quality estimates of the groundtruth without any prior knowledge on how it evolves. This is common in many studies from non-parametric regression (see, e.g., [6] for a comprehensive exposition). That said, extending to other noise distributions that are heavy tailed is a possible future direction of research.

**Missing Constants in Lemma 1**
We believe that we have defined all constants pertaining to Lemma 1 in its statement.

**$H$ in Algorithm 1**
Thanks for pointing out this typo; instead of $H$, it should be the wavelet transform matrix $W$ that is given as input to the algorithm.

**Meaning of $\bar \theta_{1:t}$**
It is the average of the last $t$ ground truth values: $\bar \theta_{1:t} := (\theta_1+\ldots+\theta_t)/t$. We will update the manuscript to reflect this.

**Assumptions for estimation**
The only assumption we make is that the noise in the observations is i.i.d. sub-gaussian, as stated in Lemma 1. No further assumptions are made.
**Connection to random process**
Random processes such as Gaussian processes impose prior structural assumptions (which are often hard to verify) on how the ground-truth evolves over time. However, no such structural assumptions are placed while deriving our bounds in Lemma 1 and Theorem 2. Instead, the bound naturally adapts to the degree of smoothness in the ground-truth. We will mention this in the context of random processes. However, viewing our problem setup through the lens of GPs holds the potential to unlock new properties that are simultaneously connected to estimation quality and uncertainty quantification. This is indeed a good direction to explore in the future.

**The proof is adapted from (Achille and Soatto, 2018)**
In the paper, we have acknowledged in Appendix C that we adapt the proof from (Achille and Soatto, 2018). Note that (Achille and Soatto, 2018) did not make a distinction between the training distribution $D_{Tr}$ and the predictive distribution $p_{\theta}( y | \mathbf{x} )$. As a result, it is important to include our full proof so that avid readers are able to understand and recreate all of the mathematical derivations. The proof will also serve as a reference for researchers who want to improve our performance bounds in Theorem 7.

**Infinite number of training data and perfect ML model training in Section 3**
Our goal with Section 3 was just to act as a compelling motivation for Section 4. In particular, to formally show how distribution shift between training and test can degrade model performance. Though we deal with population-level losses, extension to finite samples can be realized with standard concentration inequality arguments. Our focus was just to show formally that if the test distribution is a mixture of training and some other contaminant distribution, then the model performance degrades as the contamination fraction increases.
This sets the stage that performance degradation is inevitable even in an ideal case of perfect model training, and that it is important to develop algorithms whose performance degrades gracefully under distribution shift in practical use-cases. Note that to develop our algorithms in Section 4, we already considered realistic scenarios with limited training data and imperfect machine learning training processes.

**References**
[6] Adaptive Piecewise Polynomial Estimation via Trend Filtering, Ryan Tibshirani, Annals of Statistics 2014
Accelerating Spectral Clustering under Fairness Constraints
Accept (poster)
Summary: - This paper proposes a computationally efficient algorithm for the fair spectral clustering problem. - The key of the proposed algorithm is the use of DC (Difference of Convex functions), which leads to the ADMM framework. - The authors claim that the proposed method is empirically faster than two existing algorithms, o-FSC and s-FSC. - The main distinction is that the proposed algorithm is based on gradient-based optimization, whereas the two existing algorithms rely on eigendecomposition routines. ## update after rebuttal - Thank you for the clarifications. I will maintain my rating. Claims And Evidence: - Overall, the claims are supported by the experiments. - The gap between the theoretical and empirical complexities of s-FSC and the proposed algorithm could be discussed more thoroughly (see **Questions For Authors** below for details). Methods And Evaluation Criteria: - Several experimental setups (e.g., metrics) are aligned with those used in existing methods. - The FacebookNet dataset was used in both Kleindessner et al. (2019) and Wang et al. (2023); however, this paper does not consider it. Theoretical Claims: - Theoretical convergence of the proposed algorithm - While ADMM convergence is well-known for convex problems, as the authors mentioned, does the proposed ADMM with the fairness constraint also guarantee theoretical convergence? - Question about the proof of Proposition 3.2: - In Eq. (24), $\inf_{V}$ is applied to $-\langle V, MH \rangle$, where it appears that $H$ is not a variable with respect to the infimum operator. However, in Eq. (25), $\inf_{H, V}$ is applied to $-\langle V, MH \rangle$, and in Eq. (26), $\sup_{H}$ is applied to $\langle MV, H \rangle$. - How can the duality (or equivalence) be proven? Or, are these results well-established? 
Experimental Designs Or Analyses:
- Questions about the comparison between s-FSC and the proposed method:
  - Figure 2: For $k = 2$, neither s-FSC nor the proposed method appears to achieve high balance (0.2 for LastFMNet and over 0.5 for the 4area dataset), which is far from perfect fairness (balance = 1). Can these results be considered fair?
- Additional experiments/metrics would better demonstrate the practical effectiveness of the proposed algorithm:
  - Baselines: While the objective of the proposed algorithm aligns closely with those of o-FSC and s-FSC, given that the paper considers spectral clustering for group fairness, it would be beneficial to compare it with other existing algorithms for group fairness. Examples are:
    - Bera et al. (2019) https://proceedings.neurips.cc/paper_files/paper/2019/file/fc192b0c0d270dbf41870a63a8c76c2f-Paper.pdf,
    - Backurs et al. (2019) https://proceedings.mlr.press/v97/backurs19a/backurs19a.pdf
  - Metrics:
    - The average balance metric used is a relaxed version of the commonly adopted balance metric in the fair clustering literature, which is defined as the minimum of Eq. (3) over the cluster indices $l \in [k]$. Computing the minimum balance would strengthen the experiment section of this paper.

Supplementary Material: - No explicit supplementary materials were attached.

Relation To Broader Scientific Literature: - The faster optimization can improve the practical applicability of the proposed algorithm in real-world settings.

Essential References Not Discussed: - Fairness in constrained spectral clustering (https://doi.org/10.1016/j.neucom.2025.129815) - This work was very recently accepted. However, I believe the authors should at least conceptually compare their approach with this study.
Other Strengths And Weaknesses: - N/A

Other Comments Or Suggestions: - N/A

Questions For Authors:
- Complexity of s-FSC and the proposed algorithm: While Sections 2.2.2 and 3.2 claim that both s-FSC and the proposed algorithm theoretically have a complexity of $O(n^2)$, the experimental results indicate that the proposed algorithm significantly reduces computation time on real-world datasets. Can the authors explain why?

Ethical Review Concerns: - N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: > Q1. FacebookNet dataset

To address the reviewer's concern, we ran additional experiments on FacebookNet.

**Table ztMM-1:** Clustering results on FacebookNet

|#Clusters|Method|Time (s)|Balance|SC Cost|
|---|---|---|---|---|
|2|o-FSC|0.798|1.00|0.126|
|2|s-FSC|0.193|1.00|0.126|
|2|Ours|0.055|1.00|0.133|
|25|o-FSC|9.908|0.84|14.114|
|25|s-FSC|6.422|0.84|14.114|
|25|Ours|0.131|0.84|14.128|
|50|o-FSC|12.243|0.58|37.084|
|50|s-FSC|11.558|0.58|37.084|
|50|Ours|0.215|0.58|37.100|

**These results further demonstrate the computational advantage of our method**, while achieving similar clustering quality to exact algorithms. We tested on a new large-scale dataset in *Reply to Q4 of Reviewer jPJw*.

> Q2. ADMM convergence.

Please refer to *Reply to Q1 of Reviewer 1h8o*.

> Q3. Proof of Proposition 3.2.

Our proof relies on the identification of $\phi(MH)$ as $\sup_{V} \langle V, MH \rangle - \phi^\star(V)$, which holds true as $H \mapsto \phi(MH)$ is convex. The infimum over $H$ is performed over the whole function $g(H) - \phi(MH)$, explaining how we coupled the infimums over the different variables to go from (24) to (25). We will add parentheses to better highlight that the problem we are tackling is $\inf_{H} \left[ g(H) - \left( \sup_{V} \langle V, MH \rangle - \phi^\star(V) \right) \right]$. The equivalence between solving $\inf_H g(H) + \phi(MH)$ and $\inf_V \phi^\star(V) - g^\star(MV)$ is precisely the goal of Proposition 3.2. It was established in [Tonin et al. 2023] for the case of abstract linear spaces and non-self-adjoint operators; the proof here follows a similar structure.

> Q4. Balance interpretation.

As noted in [1], Problem (6) doesn't guarantee perfect balance in general. Enforcing it may degrade the spectral objective ($\mathrm{Tr}(H^\top M H)$), so a trade-off is intrinsic to Fair SC. Our key contribution is a much faster algorithm for solving the *existing* Fair SC formulation. Fig. 2 shows our method achieves similar balance to the exact eigendecomposition, but does so substantially faster. Future work can explore regularizations to further improve balance.

> Q5. Additional recent baselines and minimum balance.

Our method follows the established line of work on fair SC, which enforces fairness via linear constraints in the embedding [1,2].
- [Bera et al.] propose LP-based $k$-(means, median, center) clustering. Using their formulation in our work would ignore the RatioCut objective, resulting in suboptimal solutions w.r.t. the spectral objective.
- [Backurs et al.] use fairlets for prototype-based clustering. However, extending the fairlet analysis, which relies on the $k$-median and $k$-center cost of the fairlet decomposition, to the spectral setting is not trivial. SC involves a spectral embedding step followed by clustering in the embedding space, where reassigning points within a fairlet can significantly alter the spectral embedding and hence potentially violate fairness, making it non-trivial to incorporate their analysis in the fair SC case.

The primary contribution of our present work is to design a significantly faster method for the already established Fair SC problem defined in [1], rather than designing alternative fair clustering problems. Therefore, these baselines are not directly comparable w.r.t. problem scope and spectral solution quality. We agree that they offer valuable alternative perspectives on fairness in the more general clustering literature and we will discuss them in our Related Works section.

For balance, we follow [1,2] in reporting average balance, but also include minimum balance for Tab. 2 with $k=25$ per the reviewer's suggestion.

**Table ztMM-2:** Min balance

|Dataset|Time (s-FSC)|Time (Ours)|Min. Balance (s-FSC)|Min. Balance (Ours)|
|---|---|---|---|---|
|LastFM|19.08|**4.59**|0.0027|0.0029|
|Thyroid|30.49|**7.38**|0.0011|0.0012|
|Census|136.60|**15.78**|0.0001|0.0001|
|4area|166.92|**25.85**|0.1582|0.1517|

> Q6. Theoretical complexity.

Complexities of the methods are:
- o-FairSC: Computes the null space ($O(nh^2)$) and an eigendecomposition ($O((n-h)^3)$).
- s-FairSC: Each eigensolver iteration costs $O(n^2+nh^2+nk^2)$, with the constant depending on the Laplacian spectrum.
- Ours: Dominated by matrix multiplications (applying $M$ to an $n \times k$ matrix), with $O(n^2 k)$.

Since $k \ll n$, and modern libraries optimize this operation, we gain substantial practical speedups. The improvement is therefore in the efficiency of the core operations. In fact, while for an $n \times n$ matrix both matrix multiplication and matrix eigendecomposition have the same $n^3$ complexity, the former is much more efficient in practice. In the final version, we will add a separate paragraph on computational complexity detailing the above points.

[1] Kleindessner et al. Guarantees for Spectral Clustering with Fairness Constraints
[2] Wang et al. Scalable Spectral Clustering with Group Fairness Constraints
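For concreteness, the average and minimum balance reported in these tables can be computed as in the following minimal sketch. It assumes the standard per-cluster balance definition from the fair clustering literature (minimum over pairs of sensitive groups of the within-cluster group-count ratio); the function names are illustrative, not from the paper:

```python
from collections import Counter

def cluster_balances(labels, groups, num_groups):
    """Balance of each cluster: min over group pairs (s, s') of
    |group s in cluster| / |group s' in cluster|, which equals
    min(group counts) / max(group counts); 0 if a group is absent."""
    out = {}
    for c in set(labels):
        counts = Counter(g for lab, g in zip(labels, groups) if lab == c)
        sizes = [counts.get(s, 0) for s in range(num_groups)]
        out[c] = min(sizes) / max(sizes) if min(sizes) > 0 else 0.0
    return out

def avg_and_min_balance(labels, groups, num_groups):
    """Average balance (reported following [1,2]) and minimum balance
    (as in Table ztMM-2 above)."""
    b = list(cluster_balances(labels, groups, num_groups).values())
    return sum(b) / len(b), min(b)
```

For example, a cluster with two members from each of two groups has balance 1.0, while a cluster with a 1:3 group split has balance 1/3; the minimum balance over clusters is then 1/3 even though the average is 2/3.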
Summary: This work addresses the issue of fairness in spectral clustering by proposing a new efficient method for fair spectral clustering (Fair SC). The authors introduce a novel algorithm that casts the Fair SC problem within the difference of convex functions (DC) framework and employs an alternating direction method of multipliers (ADMM) type of algorithm adapted to DC problems. The key contributions include a new variable augmentation strategy and the use of gradient-based algorithms to solve the subproblems efficiently, avoiding the computationally expensive eigen-decomposition required by previous methods. The paper demonstrates the effectiveness of the proposed method through numerical experiments on both synthetic and real-world datasets, showing significant speedups in computation time over prior art, especially as the problem size grows. ## Update after rebuttal Thanks for your explanation, I will maintain my rating. Claims And Evidence: The claims made in the paper are supported by clear and convincing evidence. The authors provide a detailed derivation of the proposed algorithm and demonstrate its effectiveness through extensive experiments. The main claims are: 1. The algorithm achieves higher computational efficiency compared to existing methods (o-FSC and s-FSC). 2. The method maintains the fairness constraints while achieving comparable clustering quality. The evidence provided includes: 1. Theoretical analysis of the DC framework and ADMM algorithm. 2. Numerical experiments on synthetic datasets (m-SBM, RandLaplace) and real-world datasets (LastFMNet, Thyroid, Census, 4area). 3. Comparison of runtime and balance metrics with existing methods. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the problem at hand. The authors use standard benchmarks and metrics to evaluate the performance of their algorithm. The evaluation criteria include: runtime, balance, clustering cost. 
The methods are well-suited for the problem, as they address the computational inefficiency of previous methods while maintaining fairness constraints.

Theoretical Claims: When the authors switch from Problem (6) (the traditional fair spectral clustering optimization problem) to their proposed optimization problem, they make two changes: 1. using $M^2$ instead of $M$; 2. enforcing fairness on $MH$ instead of $H$. It would be better to mathematically prove the feasibility of these two changes rather than relying on empirical results.

Experimental Designs Or Analyses: The experimental designs and analyses are valid. The experiments include:
- Comparison of runtime and balance metrics with existing methods (o-FSC and s-FSC).
- Sensitivity analysis of the ADMM penalty parameter α.
- Scalability analysis on datasets of varying sizes and cluster numbers.

However, it would be better to include more benchmark datasets and baseline methods, like [FEN24](https://www.mdpi.com/2073-8994/17/1/12), [ZHA24](https://www.sciencedirect.com/science/article/pii/S0925231224009810?casa_token=T6cg70BHqNAAAAAA:Ow9fzmBXvJE6p3Y5vmFyUPCA-35q3KdbX2BwaSI5b1kHxBGhlxRtZp_CmyxOrokXzyvdN9oO), and other state-of-the-art methods for fair spectral clustering.

Supplementary Material: This work did not provide supplementary material.

Relation To Broader Scientific Literature: The key contributions of the paper are well-related to the broader scientific literature. The authors reference and build upon previous work in fair machine learning, spectral clustering, and optimization methods.

Essential References Not Discussed: There are a few essential references that could be included for a more comprehensive understanding of the context. For example, several new fair spectral clustering (also gradient-based) methods have been proposed in recent years. Could you review more related work and explain the differences and the novelty of this work?

Other Strengths And Weaknesses: **Strength:** 1.
The proposed method significantly improves the computational efficiency of Fair SC. 2. The algorithm is shown to be effective on both synthetic and real-world datasets.

**Weakness:** 1. There exist some other fair spectral clustering methods, like [FEN24](https://www.mdpi.com/2073-8994/17/1/12) and [ZHA24](https://www.sciencedirect.com/science/article/pii/S0925231224009810?casa_token=T6cg70BHqNAAAAAA:Ow9fzmBXvJE6p3Y5vmFyUPCA-35q3KdbX2BwaSI5b1kHxBGhlxRtZp_CmyxOrokXzyvdN9oO). [FEN24](https://www.mdpi.com/2073-8994/17/1/12) claims that they reach the same time complexity as this work. Could you add more baseline methods to this work for comparison, to illustrate the strength of the proposed method? 2. When the authors switch from Problem (6) (the traditional fair spectral clustering optimization problem) to their proposed optimization problem, they make two changes: - Use $M^2$ instead of $M$, - Enforce fairness on $MH$ instead of $H$. Could you add some theoretical analysis to explain the feasibility of these two changes? 3. This work could review more recent related work and discuss their limitations. 4. The spectral clustering performance and fairness look worse than those of other methods.

Other Comments Or Suggestions: Some suggestions about writing: 1. For tables, the methods with better numerical results could be highlighted. 2. Since this work is related to clustering, some figures of clustering results would be necessary to illustrate the effectiveness of the proposed method in maintaining performance and fairness. 3. Provide more details on the implementation of the proposed algorithm. 4. The authors could further add a convergence rate analysis.

Questions For Authors: Please see weakness.

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1: Rebuttal: > Q1. Comparative analysis with recent works [FEN24, ZHA24]

We first provide discussions, followed by experiments.
- Our work designs a much *faster* method for the *existing* Fair SC problem defined in [3]. [ZHA24] focuses on improving balance and is orthogonal to our algorithmic contribution: it learns an induced fairer graph where Fair SC is applied. Future work could combine our method on top of [ZHA24] by applying our faster algorithm to their learned graph.
- [FEN24] does not adopt [3]. They instead introduce fairness by changing the SC objective with a different fairness regularization term; they do not employ the group-fairness constraint $F^\top H=0$. They apply coordinate descent to this new objective, where they relax the discrete clustering indicator constraint _without_ orthogonality constraints $H^\top H=I$. Therefore, [FEN24] solves a different problem than (6): their solution is far from the exact one of (6), leading to higher spectral cost and potential training instability.

We ran additional experiments comparing with FFSC [FEN24], following the setup in Tab. 2. **Our method is 7–12x faster and achieves significantly better spectral clustering cost and fairness constraint satisfaction.** FFSC's modified objective shows a different balance-clustering trade-off with higher balance but worse clustering and group fairness. Note the two different measures of fairness, where balance promotes the same number of individuals in each cluster (Eq. (3)), and group fairness ensures proportional representation (Eq. (4)).
**Table iKL4-1:** Comparison with FFSC [FEN24] |Dataset|Method|Time (s) (↓)|Spectral Clustering Cost (↓)|Balance (↑)|Group Fairness $\|\|F^\top H\|\|^2$ (↓)|$\|\|H^\top H - I\|\|^2$ (↓)| |---|---|---|---|---|---|---| |LastFM|FFSC|33.74|1067.906|**0.2701**|4.032|3.91E+00| |LastFM|Ours|**4.59**|**1.086**|0.0093|**0.000014**|**1.36E-11**| |Thyroid|FFSC|56.11|3080.848|**0.0170**|15.79|1.02E+01| |Thyroid|Ours|**7.38**|**0.353**|0.0030|**0.000001**|**2.18E-11**| |Census|FFSC|193.92|146669.885|**0.0051**|9.47|2.14E+00| |Census|Ours|**15.78**|**130.973**|0.0004|**0.000012**|**4.12E-10**| |4area|FFSC|237.35|285174.392|**0.6403**|58.30|2.66E+00| |4area|Ours|**25.85**|**242.000**|0.3823|**0.000001**|**4.22E-10**| We also ran on new datasets in *Q1 to Reviewer ztMM* and *Q4 to Reviewer jPJw*. > Q2. Feasibility of changes. We address the two changes: - Casting the Fair SC problem directly as DC using $M$ requires computing $M^{1/2}$, with complexity akin to the original problem. Standard SC seeks the top eigenvectors of $M$, which are identical to those of $M^2$. Thus, without fairness constraints, optimizing with $M^2$ yields the same solution. Extensive empirical evidence confirms that even with fairness constraints, this substitution preserves clustering quality. - Enforcing fairness via $MH$ instead of $H$ enables efficient dualization of the ADMM subproblem. While in general $F^\top (MH)=0$ is not strictly equivalent to $F^\top H=0$, it enforces fairness on the affinity-weighted embeddings $MH$, which we observed to promote group balance in practice. One intuition: in the simpler SC case, where $H$ contains the top eigenvectors of $M$ so that $MH = H \Lambda$ with $\Lambda$ the diagonal matrix of (nonzero) top eigenvalues, the constraint $F^\top (MH)=0$ coincides with $F^\top H = 0$. As shown in Tab. 3, our method achieves comparable balance and clustering cost to the exact algorithm on multiple real-world problems.
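The eigenvector argument above, that $M$ and $M^2$ share their top eigenvectors when $M$ is symmetric PSD, can be checked numerically; the sketch below is purely illustrative and not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
# Random symmetric PSD matrix, standing in for the kernel/affinity matrix M.
A = rng.standard_normal((8, 8))
M = A @ A.T

k = 3
# np.linalg.eigh returns eigenvalues in ascending order, so the top-k
# eigenvectors are the last k columns.
_, V1 = np.linalg.eigh(M)
_, V2 = np.linalg.eigh(M @ M)
H, H2 = V1[:, -k:], V2[:, -k:]

# Compare the spanned subspaces via their orthogonal projectors: equal
# subspaces give equal projectors, irrespective of sign or column order.
print(np.allclose(H @ H.T, H2 @ H2.T))  # True
```

Squaring is monotone on nonnegative eigenvalues, so the top-$k$ eigenspace is preserved, which is why the projectors coincide.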
While we agree that a stronger theoretical link would be ideal, establishing it is challenging and remains an open problem. Extensive empirical validation supports the feasibility of this choice, making the method a strong candidate for efficient Fair SC. > Q3. Additional visualizations and implementation details We consider 2D datasets from [FEN24]: 'Elliptical', 'DS-577'. Figures are at https://imgur.com/a/GiBnUIs Clustering labels are shown by color; sensitive groups by shape. Legend: “C-i, G-j” for Cluster-$i$, Group-$j$. These plots show that our method produces assignments comparable to exact algorithms (o-FSC, s-FSC). Critically, we achieve this with reduced computation, as shown in the main paper. We emphasize that our main contribution lies in accelerating Fair SC, not in improving fairness metrics over existing Fair SC. Following reviewer suggestions, we will revise tables to bold the best results. App. B provides implementation details, including hyperparameters, optimization settings, and the $\alpha$-update rule. We will also release our code. > Q4. Convergence rate. For ADMM convergence, see *Q1 to Reviewer 1h8o*. Regarding rates: deriving rates for ADMM in general nonconvex settings is challenging and typically requires assumptions that are difficult to verify for our (10). While a rate analysis is promising future work, our method shows significantly higher computational efficiency, as detailed in *Q4 to Reviewer ztMM*. [3] Kleindessner et al. Guarantees for Spectral Clustering with Fairness Constraints
Summary: This paper studies the problem of fair spectral clustering. The authors propose an ADMM-like algorithm for optimization with theoretical guarantees, and the experimental results show effectiveness in improving fairness. Claims And Evidence: Overall the claims are justified by evidence. However, the performance of s-FSC and the proposed method seems mostly comparable on all the datasets, as shown in Tab. 3. The only difference is in computational time, but both methods take at most a few minutes. This raises questions about the advantages or superiority of the proposed method. Methods And Evaluation Criteria: The methods and criteria for comparison are limited. For example, only one recent work is considered as an alternative baseline, and only balance is considered as the fairness metric. This makes it hard to evaluate the significance of this work. Furthermore, most experiments are conducted on low-dimensional data, while experiments on high-dimensional and large-scale datasets are needed to validate the effectiveness. Theoretical Claims: I have checked the proofs and theoretical results. However, no insight is provided for the two propositions, and it is therefore hard to assess the theoretical contributions. Experimental Designs Or Analyses: The experimental designs look sound. Supplementary Material: I have reviewed the supplementary results and proofs. Relation To Broader Scientific Literature: This paper can contribute to fair unsupervised learning. However, I am not sure the discussion amounts to a significant contribution, as the setup is limited to unsupervised clustering. Essential References Not Discussed: I don't see any essential references missing. Other Strengths And Weaknesses: N/A. Other Comments Or Suggestions: Fig. 2 is a bit uninformative. I am not sure what conclusions I should draw from it. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: > Q1. Our contribution within the broader literature We note that our primary contribution is to design a significantly faster method for the *existing* well-established Fair SC problem as defined in [1], rather than improving the balance of Fair SC. We respectfully disagree that the performance of our method and s-FSC is "mostly comparable". Our method achieving similar balance to the exact algorithm validates the quality of the solution we find. Our method consistently improves computational efficiency over the SOTA s-FSC, as shown in Tab. 2,3 and Fig. 2,3. The improvement is particularly evident as the sample size $n$ and number of clusters $k$ grow, as shown in Tab. 1 and Fig. 3. Crucially, these speed-ups stem from the fact that our algorithm replaces the eigendecomposition of s-FSC with significantly more efficient DC steps, resulting in a more scalable approach for Fair SC on real-world datasets. We also emphasize that our work need not be limited to "unsupervised clustering". Clustering is unsupervised by definition and is the main focus of this work, as stated in the Introduction. Our proposed DC framework exhibits favorable properties for applicability to broader settings; e.g., follow-up work could integrate constraints on the clustering solution (e.g., must-link/cannot-link constraints) directly into the optimization problem, as long as they preserve the DC structure. > Q2. Additional insights for the theoretical results The main contribution of this paper is designing much faster methods for spectral clustering with fairness constraints. This is done by reformulating the fair SC problem as a difference-of-convex (DC) functions problem (Eq. (10)). Using this new formulation, an ADMM-type algorithm can be used to efficiently perform fair SC, because there is no need to compute the eigendecomposition of the Laplacian matrix. In short, Proposition 3.2 derives the dual of the DC formulation.
This is important because existing work either relied on expensive eigendecomposition or did not consider the fairness constraints. Proposition 3.3 provides the exact closed-form expressions for all the terms involved in the dual problem so that gradient-based techniques can be applied. > Q3. Additional metrics and baselines To address the reviewer's concern, we have conducted additional comparisons between our method with the very recent FFSC method [FEN24] to enrich the comparative analysis. We also report the group-fair constraint metric (Definition 2.2). We report the results in *Table iKL4-1* in the rebuttal. We observe that **our algorithm is 7-12x faster than FFSC and results in significantly better spectral clustering cost and fairness constraint satisfaction**. FFSC modifies the objective with additional regularization, so it solves a different problem than ours, resulting in a different balance-clustering trade-off with higher balance but substantially worse clustering quality and group fairness metric. Note that these are two different measures of fairness, where balance promotes the same number of group individuals in each cluster (Eq. (3)), and the latter is related to the proportion of each group in all clusters being the same as in the general population (Eq. (4)). Additional details are given in the *Response to Q1 of Reviewer iKL4*. We additionally ran our method on the FacebookNet dataset in *Q1 to Reviewer ztMM* and also report the minimum balance metric in *Q5 to Reviewer ztMM*. > Q4. Large-scale experiments Our current experiments include datasets of size up to ~35,000 samples, in line with the scale of datasets commonly used in the fair SC literature, e.g., [2,FEN24]. To address the reviewer's concern, we additionally tested on the Diabetes dataset ($n=253,680,k=2$). Our method results in a fair clustering in 123.36s, whereas the s-FSC (previous SOTA for [1]) did not even converge after 24 hours. 
Our method achieves fairness constraint satisfaction $||F^\top H||^2=0.000064$ (closer to 0 is better), 0.78 balance, 1.99 SC cost, and orthogonality $||H^\top H-I||^2=6.49 \times 10^{-9}$ (closer to 0 is better). This experiment shows that **our method makes it possible to scale Fair SC to problems that could not even be solved in reasonable time** with previous algorithms. > Q5. Fig. 2 intuition. Fig. 2 compares runtime and balance between our method and the exact SOTA s-FSC baseline. The left plots show that **our method consistently outperforms s-FSC** in terms of runtime, with even larger efficiency gains as $k$ increases. The right plots compare the balance achieved by both methods, showing that **our method does not decrease fairness** compared to the exact algorithm. We will clarify this in the caption. [1] Kleindessner et al. Guarantees for Spectral Clustering with Fairness Constraints. ICML 2019 [2] Wang et al. Scalable spectral clustering with group fairness constraints. AISTATS 2023 [FEN24] Feng et al. Fair Spectral Clustering Based on Coordinate Descent. Symmetry 2024
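As a side note on the balance metric discussed throughout this exchange, here is a minimal illustrative sketch assuming the standard Chierichetti-style definition (per-cluster minimum ratio between sensitive-group counts, averaged over clusters); the paper's exact Eq. (3) is not reproduced here:

```python
import numpy as np

def average_balance(groups, labels):
    """Average per-cluster balance: within each cluster, the minimum ratio
    between the counts of any two sensitive groups (0 if a group is absent),
    averaged over clusters. A perfectly balanced clustering scores 1."""
    scores = []
    for c in np.unique(labels):
        counts = np.array([np.sum((labels == c) & (groups == g))
                           for g in np.unique(groups)])
        scores.append(counts.min() / counts.max())
    return float(np.mean(scores))

# Two clusters over two groups: cluster 0 is perfectly balanced (2 vs 2),
# cluster 1 is skewed 3:1, so the average balance is (1 + 1/3) / 2 = 2/3.
groups = np.array([0, 1, 0, 1, 0, 0, 0, 1])
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(round(average_balance(groups, labels), 4))  # 0.6667
```

This also illustrates the rebuttal's point that a few small, skewed clusters can drag the average down even when most clusters are well balanced.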
Summary: This work considers fair spectral clustering, demonstrating a reformulation of the fairness constraints which allows for a considerable improvement in the runtime of existing fair algorithms. Specifically, by reformulating the trace maximization problem often used for FSC into one of multiple subproblems in the difference of convex functions setting, state of the art methods for the unfair problem are able to be employed with fairness considerations. Algorithmic ideas are sketched theoretically and further validated with experiments on real and synthetic datasets. Claims And Evidence: Claims are well supported by theoretical analysis and experimental validations. My only concerns are with respect to the convergence analysis and problem assumptions (see questions). Methods And Evaluation Criteria: Yes. Theoretical Claims: I skimmed the proofs deferred to the appendix and did not note any major issues. I do feel the paper lacks a more thorough theoretical treatment of the convergence guarantees (discussed in Section 3.2). Moreover some of the problem assumptions are not immediately clear to me (see my questions). Experimental Designs Or Analyses: The experimental design appears valid / standard for the problem at hand. Supplementary Material: I reviewed the additional experiments in the appendix which align with those of the main text. Relation To Broader Scientific Literature: The authors effectively contrast their results against prior work in spectral clustering (with and without fairness constraints). As a non-expert on this problem, the paper caught me up to speed well. Essential References Not Discussed: n/a Other Strengths And Weaknesses: Strengths: The paper is well written and provides a nice introduction to spectral clustering + the methods leveraged to ensure added fairness constraints are met. Moreover, the results (experimental in particular) are compelling to demonstrate a significant computational improvement over the prior state of the art. 
Weaknesses: The noted assumption that M is full rank seems potentially weak in practice. For example, the authors note that this is violated when datasets have duplicates--a very common issue when collecting data. Correct me if I'm wrong on this! Other Comments Or Suggestions: n/a Questions For Authors: Can you explain why the balance is worse for your algorithm for small values of k? Is the assumption that full rank assumption on M standard? In what contexts might this not hold / how much does it weaken the presented results? Can you expand on the convergence guarantee of this algorithm? Can this difference of convex function reframing be used to capture other notions of fairness (ie. not just the notion of balance)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback and address the remaining concerns below. > Q1. Expand on the convergence analysis. Our convergence analysis is based on the theory for ADMM applied to nonconvex problems from [1], which analyzes the convergence of ADMM for structured nonconvex problems of the form presented in our Equation (11) (a composite function minimization with linear constraints). Applying Proposition 3 from [1] to our specific ADMM structure, we can assert the following: provided the sequence of dual variables ($P^{(i)}$) generated by the algorithm converges (i.e., $\lim_{i\to \infty} P^{(i)} = P^\star$ for some $P^\star$), and under mild assumptions (Assumptions 7, 8, 9 in [1], which correspond to our problem having closed sets, local solutions being found for subproblems, and regularity assumptions), any limit point ($H^\star, Y^\star$) of the primal sequence ($H^{(i)}, Y^{(i)}$) satisfies first-order conditions. We will revise the "Convergence analysis" paragraph on page 6 to state the convergence guarantee, and its conditional nature on dual convergence based on [1], more formally in a separate Proposition, given the technical challenges in this nonconvex setting. > Q2. Balance interpretation as $k$ changes. Fig. 2 shows that balance typically decreases as the number of clusters $k$ increases. This trend is not unique to our algorithm: it is observed in both our method (orange line) and the exact s-FSC algorithm (blue line). As $k$ increases, the expected size of each cluster diminishes. Smaller clusters can lead to a greater number of highly unbalanced groups. The reported balance metric is the average of (3), i.e., the minimum ratio of group ratios in each single cluster [2]. Consequently, even if only a few clusters exhibit poor balance (due to small cluster size), this can significantly lower the balance score. Similar trends and reasoning can be found in [2, Sec. 6.1].
From a theoretical angle, [2, Theorem 1] notes that, if $k$ increases for fixed $n$, the fairness recovery of fair SC (in the SBM setting) becomes looser. We also note that the primary contribution of our present work is to design a significantly faster method for the _existing_ Fair SC problem, rather than improving the balance of Fair SC. > Q3. Rank of $M$. - This is a technical assumption needed to compute the gradient of $g^\star$ from Proposition 3.3. - Regarding duplicates in the data. While in regression it may be meaningful to have duplicated data as the same input can result in a different output (e.g., a different measurement), in unsupervised learning the duplicates just lead to a simple re-weighting of the empirical risk. Therefore, one can simply filter the duplicates in a pre-processing step. - This assumption can be satisfied using infinite-dimensional kernels $\kappa$. Specifically, this is always the case with universal kernels (e.g., the Gaussian kernel) [4]. - In case the given data leads to $M$ not being full rank, the problem can be regularized with $\mathcal{M} := M+(1+\omega)I$ for small $\omega>0$. This is a well-studied regularization technique that is widely used, e.g., in kernel methods [3]. > Q4. DC applicability to other settings. The reviewer raises an interesting point regarding the applicability of our framework to more general setting/notions. For example, it might be possible to extend it to constrained spectral clustering. By considering a modified indicator function $h(\cdot)$, our framework can accommodate other constraints/fairness metrics whenever the DC formulation is maintained. The ADMM algorithm (Algorithm 1) can then be applied with appropriate modifications to the subproblem w.r.t. $Y$. For instance, linear constraints have been used in the literature to encode must-link and cannot-link constraints within spectral clustering [5]. 
We will add a remark in the final version of the paper to reflect this potential broader application of our method. --- [1] Magnússon et al. "On the convergence of alternating direction lagrangian methods for nonconvex structured optimization problems." IEEE Transactions on Control of Network Systems (2015). [2] Chierichetti et al. Fair Clustering Through Fairlets. NeurIPS 2017. [3] Christopher M Bishop and Nasser M Nasrabadi. Pattern recognition and machine learning. Springer, 2006 [4] Ingo Steinwart and Andreas Christmann. Support vector machines. Springer Science & Business Media, 2008. [5] Kawale et al. "Constrained spectral clustering using l1 regularization." International Conference on Data Mining (2013).
FlowAR: Scale-wise Autoregressive Image Generation Meets Flow Matching
Accept (poster)
Summary: This paper attempts to address two limitations in the VAR work: (1) the complex and rigid scale design and (2) the dependency between the generator and the tokenizer. To address these issues, the paper makes two simplifications: (1) each scale is simply double the previous one; (2) the coarse-scale tokens are obtained by directly downsampling the finest-scale tokens. These two simplifications make the VAR framework more concise and general, allowing it to be combined with existing continuous tokenizers (VAE) while achieving better generation results despite the simplified design. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes. I checked the class-conditional image generation results (ImageNet) and ablation studies, including results for 256 and 512. All benchmarks and metrics follow the VAR paper, and I found no issues in these respects. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: Yes. I checked the class-conditional image generation results (ImageNet) and ablation studies, including results for 256 and 512 resolutions. These experimental designs and evaluation metrics follow the VAR paper, and I see no issues. Supplementary Material: No supplementary material was provided with this paper. Relation To Broader Scientific Literature: NA Essential References Not Discussed: The idea of combining flow matching with multi-scale latents has been widely applied in previous works [1,2]. Additionally, the paradigm of leveraging previous scales to better generate new scales has already been proposed [3]. However, this paper lacks the corresponding citations. * [1] f-dm: A multi-stage diffusion model via progressive signal transformation, ICLR 2023 * [2] Pyramidal Flow Matching for Efficient Video Generative Modeling, ICLR 2025 * [3] Diffusion forcing: Next-token prediction meets full-sequence diffusion, NeurIPS 2024 Other Strengths And Weaknesses: ## Strengths: 1.
This paper simplifies the complex scale design in VAR, making the VAR framework compatible with any continuous tokenizer (VAE), simplifying the framework to make it more general while also improving the performance. 2. Using autoregression to generate semantics and using them as conditions for diffusion (flow matching) is a reasonable idea. 3. The paper's writing and figures are clear and easy to understand. ## Weaknesses: 1. The authors lack detailed analysis of the proposed scale design. For example, why did the paper simply double the scale rather than triple it? Why were only 5 scales designed? How would the results be affected with more or fewer scales? Providing analysis and results for these design choices would be helpful. 2. The method is essentially a combination of Autoregressive and Diffusion approaches, where Autoregressive integrates information from multiple scales to generate semantics, which then serve as conditions for Diffusion (FlowMatching) generation. Compared to the original VAR, this method might lead to slower generation speed. However, the paper lacks experiments comparing generation speeds and related analysis. 3. In Table 1, FlowAR's performance advantage mainly comes from FID and IS, but the improvements in Precision and Recall metrics are not substantial. Could the authors provide more analysis and explanation for this experimental phenomenon? 4. The direct comparison of different tokenizers in Table 3 may not be entirely fair, as different tokenizers have different parameter counts. It would be helpful if the authors added parameter count comparisons for the Tokenizers (VAEs). 5. The "Any VAE" in Figure 2 might be ambiguous; it's suggested that the authors change it to "Any continuous tokenizer (VAE)". Other Comments Or Suggestions: NA Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
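The scale design questioned in Weakness 1 (each scale double the previous, with coarse tokens obtained by downsampling the finest-scale latents) can be sketched as follows; average pooling is used purely for illustration, since the paper's exact downsampling operator is not given in this review:

```python
import numpy as np

def latent_pyramid(z, num_scales=5):
    """Build coarse-to-fine latents by repeatedly average-pooling the
    finest-scale latent map z of shape (C, H, W) by a factor of 2, so each
    scale is double the previous one (e.g. 1x1, 2x2, 4x4, 8x8, 16x16)."""
    scales = [z]
    for _ in range(num_scales - 1):
        c, h, w = scales[-1].shape
        pooled = scales[-1].reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
        scales.append(pooled)
    return scales[::-1]  # coarse first, matching next-scale prediction order

z = np.random.randn(16, 16, 16)        # finest-scale continuous VAE latent
pyramid = latent_pyramid(z)
print([p.shape[1:] for p in pyramid])  # [(1, 1), (2, 2), (4, 4), (8, 8), (16, 16)]
```

Tripling instead of doubling, or using fewer scales, amounts to changing the pooling factor or `num_scales` here, which is the design space the reviewer asks to be ablated.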
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and address the concerns below. > Essential References Thank you for the suggestions. FlowAR focuses on ***next-scale*** prediction through ***autoregressive modeling***, whereas the referenced works explore diffusion-based approaches with multi-stage architectures [1], multi-scale designs [2], or independent noise levels combined with causal next-token prediction [3]. We will cite and discuss all related works accordingly. > W1: scale design analysis We have ablated the scale design in Table 9 of the Appendix (page 11 in paper). For quick reference, we provide the results below. As shown, even with just three scales ({1, 4, 16}), FlowAR achieves an FID of 6.10—comparable to VAR, which uses a much more complex scale configuration ({1, 2, 3, 4, 5, 6, 8, 10, 13, 16}) and obtains an FID of 5.81. |method |scales | FID | |:--:|:--:|:--:| | VAR| {1, 2, 3, 4, 5, 6, 8, 10, 13, 16} |5.81 | | VAR| {1, 2, 4, 8, 16}| N/A | | FlowAR| {1, 2, 4, 8, 16} |3.61| | FlowAR| {1, 4, 8, 16} |4.88 | | FlowAR| {1, 4, 16} |6.10| > W2: speed comparison with VAR In the table below, we compare the generation speed and performance of FlowAR with VAR and other widely used diffusion- and flow-matching-based models, including MAR, DiT, and SiT. |model|params|inference time (sec/image)|FID| |:---:|:---:|:---:|:---:| |DiT-XL|675M|1.7|2.26| |SiT-XL|675M|1.7|2.06| |MAR-B|208M|1.25|2.31| |VAR-d30|2B|0.07|1.97| |FlowAR-L|589M|0.12|1.90| |FlowAR-H|1.9B|0.24|1.65| As shown, although FlowAR involves additional denoising steps due to flow matching and is slightly slower than VAR, FlowAR-L remains over 10× faster than other diffusion- and flow-matching-based models such as MAR, DiT, and SiT while also achieving superior image generation quality. > W3: Precision and Recall metrics in Table 1 Precision and Recall metrics often saturate and offer limited additional insight beyond FID. 
Therefore, following prior works such as MAR, VAR, DiT, and SiT, we primarily focus on optimizing and reporting FID, which remains the most sensitive and informative metric for assessing image generation quality in our evaluations. > W4: Update Table 3 with tokenizer’s parameters Thank you for the suggestion. In the updated table below, we include the tokenizer parameters to provide a more complete and transparent comparison. |tokenizer|tokenizer params |generator |FID| |:---:|:---:|:---:|:---:| | multi-scale residual VQGAN|109.0M |VAR |5.81 | |DC-AE | 323.4M | FlowAR|4.22 | |SD-VAE |83.7M | FlowAR|3.94 | | MAR-VAE|66.5M | FlowAR|3.61 | > W5: Change “Any VAE” to “Any continuous tokenizer (VAE)” Thank you for the suggestion. We will update the text to “any continuous tokenizer (VAE)” accordingly. --- Rebuttal Comment 1.1: Comment: Thanks for the rebuttals. My questions are well answered. I would keep my original rating.
Summary: In this work, the authors propose FlowAR, which generates images sequentially at different scales. FlowAR first generates conditional vectors of different resolutions using an AR model and then relies on a flow matching model to generate a clean image of the corresponding resolution. Unlike VAR, FlowAR is more flexible and does not rely on a curated VAE to acquire the latent space. On the standard ImageNet generation benchmark, FlowAR achieves competitive performance compared to SoTA baselines. Claims And Evidence: Yes, claims are supported by convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods are properly evaluated. Theoretical Claims: N/A Experimental Designs Or Analyses: Yes, experimental designs are sound. Supplementary Material: Yes, I checked all the supplementary materials. Relation To Broader Scientific Literature: The paper is related to diffusion/flow matching models as well as autoregressive models for image generation. Essential References Not Discussed: N/A Other Strengths And Weaknesses: See "Questions For Authors". Other Comments Or Suggestions: See "Questions For Authors". Questions For Authors: 1. What is the inference speed of FlowAR compared with baselines like VAR? How much overhead comes from the flow-matching module? 2. Can the authors give a more detailed description of the AR and flow-matching Transformers? For example, are they based on the implementation of VAR? 3. Do the authors have an ablation study on how to balance the sizes of the AR and flow-matching models? How are their sizes determined? 4. Are there visualizations of samples generated at different scales in FlowAR, which could help better demonstrate how FlowAR learns to generate images? 5. In Table 3, FlowAR with MAR-VAE achieves the best performance. However, MAR used the VAE from LDM, which is less powerful than SD-VAE. Do the authors have any hypotheses about why this is the case? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and address the concerns below. > Q1: speed comparison with VAR While FlowAR requires additional denoising steps due to the use of flow matching, it is only slightly slower than VAR, and FlowAR-L still achieves a significant 10× speedup over other diffusion- and flow-matching-based models such as MAR, DiT, and SiT. |model|params|inference time (sec/image)|FID| |:---:|:---:|:---:|:---:| |DiT-XL|675M|1.7|2.26| |SiT-XL|675M|1.7|2.06| |MAR-B|208M|1.25|2.31| |VAR-d30|2B|0.07|1.97| |FlowAR-L|589M|0.12|1.90| |FlowAR-H|1.9B|0.24|1.65| > Q2: more detailed description about the AR and flow-matching Transformer Our autoregressive and flow matching modules are built upon the VAR and SiT codebases, respectively. As stated in the paper, we will ***fully open-source*** the training and inference code, along with model checkpoints, to enable the community to reproduce our results and examine all implementation details. > Q3: balance sizes of AR and flow-matching Thank you for the question. We ablate the effect of AR and flow matching model sizes in the table below. Increasing the size of the flow matching module initially improves performance, but further scaling eventually leads to a performance drop. |model|AR params|Flow matching params|inference time (sec/image)|FID| |:---:|:---:|:---:|:---:|:---:| |FlowAR-L|504M|70M|0.03 |2.32| |FlowAR-L|411M|143M| 0.07|2.01| |FlowAR-L(default setting)|309M|280M| 0.12 |1.90| |FlowAR-L|152M|420M| 0.19|1.98| > Q4: visualization of samples While our primary focus is on the image quality at the final scale, we provide intermediate-scale visualizations at [anonymous link](https://anonymous.4open.science/r/Visualization-1A5C/README.md) and will add more intermediate-scale visualizations in the final version. 
> Q5: VAE from MAR To analyze the impact of different underlying VAEs, we report their reconstruction FID (rFID) in the table below: |tokenizer|rFID| |:--:|:--:| |MAR-VAE|0.53| |SD-VAE|0.82| As shown, MAR-VAE achieves a better rFID, indicating better reconstruction quality, which correlates with improved generation FID (gFID) in both our experiments and those reported in the MAR paper. To further validate the effect of MAR-VAE versus SD-VAE, we also compare FlowAR with another generator, UViT, using both tokenizers. MAR-VAE consistently delivers better generation performance across different generator architectures. |model|MAR-VAE|SD-VAE|gFID| |:--:|:--:|:--:|:--:| |UViT|√||3.24| |UViT||√|3.52| |FlowAR-S|√||3.61| |FlowAR-S||√|3.94|
Summary: This paper proposes a multi-scale approach for image generation by combining autoregressive modeling with flow matching at each scale. Instead of using a VQVAE-based tokenization as in the VAR approach, the method uses continuous latents from a VAE, which are downsampled to get tokens at different scales. An autoregressive transformer takes as input the condition and multiscale representations encoding semantics at different scales. Conditioned on this, the velocity vectors are predicted. Experiments are performed on the ImageNet dataset, where the approach outperforms prior work. Claims And Evidence: +The work claims to address the problem of multiscale autoregressive image generation. To this end, limitations of prior work are identified and a new framework is proposed for autoregressive image generation. + The method section is clearly written and provides substantial argument in support of the design choices, for example, a VAE encoder for multiscale semantics instead of a VQ-VAE based approach. +Experimental results on the ImageNet dataset show the effectiveness of the approach. Methods And Evaluation Criteria: +To address the problem of image generation within the framework of autoregressive modeling, the idea of using a VAE over a VQVAE with a streamlined upsampling procedure compared to prior work is good. +The evaluation is based on comparison of the visual fidelity of the generated images and the parameter overhead with respect to the baseline. + Adequate ablations are performed. Theoretical Claims: The work does not make any new theoretical claims. The efficiency of continuous representations from a VAE compared to the codebook look-up of the VQ-VAE is well established. Experimental Designs Or Analyses: +The experimental setup is in line with prior work, with comparisons with respect to the FID and IS scores. +Ablations are performed to validate the design choices.
Supplementary Material: The supplemental material provides additional ablations, training details, and an impact statement. Relation To Broader Scientific Literature: The work is related to multiscale autoregressive image generation with a transformer backbone. The motivation is the success of transformers and autoregressive modeling in NLP, which has gained traction for modeling images. Essential References Not Discussed: Literature on multiscale image generation is not new, and the work should discuss it in the related work. For example: [a] PixelCNN models with auxiliary variables for natural image modeling. ICML 2017 [b] Generating high fidelity images with subscale pixel networks and multidimensional upscaling. ICLR, 2019. [c] MaCow: Masked convolutional generative flow. NeurIPS, 2019 [d] Pixelpyramids: Exact inference models from lossless image pyramids. ICCV 2021 Other Strengths And Weaknesses: +The work is incremental but provides a good solution to advance the field of autoregressive image generation using transformers. +The paper is very well written and addresses the limitations of prior work with new modeling and formulation. - The related work section does not do justice to prior work on multiscale generation; even though prior works are not based on transformers, they are still relevant and should be discussed. Other Comments Or Suggestions: No Questions For Authors: 1. How does the method scale with resolution? Are there any limitations on the resolutions, e.g., square images, in the VAE-based framework considered? 2. Is the same number of timesteps used for flow matching across image scales? 3. How well does the approach work for long-text conditioning? 4. How are the positional encodings handled with the VAE-based approach and transformers? 5. What level of controllability can be achieved with these models at different scales? For complex images and conditions, how does the model ensure faithful generation across scales?
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and address the concerns below.

> Essential References

We thank the reviewer for the suggestion. These works represent early efforts in applying autoregressive modeling to image generation in pixel space. We will cite and discuss them in the related work section.

> Q1: scale with resolution

In our design, each subsequent scale is simply double the previous one, providing a simple and intuitive hierarchical structure. Additionally, our method supports non-square resolutions, though this may result in a slight drop in performance.

|scales|FID|
|:---|:---:|
|$\{1\times 1, 2\times 2, 4\times 4, 8\times 8, 16\times 16\}$ (default setting)|1.90|
|$\{1\times 1, 2\times 4, 4\times 8, 8\times 16, 16\times 16\}$|2.21|
|$\{1\times 1, 4\times 2, 8\times 4, 16\times 8, 16\times 16\}$|2.16|

> Q2: time steps across scales

Thank you for the question. By default, we use ***the same number of time steps*** (25 steps) for all scales. We ablate the effect of using different denoising time steps across the scales (1×1, 2×2, 4×4, 8×8, 16×16) in the following table.

|steps at 1×1|steps at 2×2|steps at 4×4|steps at 8×8|steps at 16×16|inference time (sec/image)|FID|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|25|25|25|25|25|0.12|1.90|
|50|50|50|50|50|0.24|1.89|
|15|15|15|15|15|0.08|2.08|
|10|10|10|10|10|0.06|2.48|
|15|15|20|25|25|0.12|1.96|
|25|25|20|15|15|0.08|1.94|

As shown, increasing the time steps from 25 to 50 across all scales results in only a marginal FID improvement of 0.01, while doubling the inference time. Reducing the steps to 15 or 10 significantly speeds up inference but comes at the cost of degraded image quality. We also evaluate varying the number of steps across scales. Using fewer steps for smaller scales leads to a slight FID degradation of 0.06 without affecting inference speed. Conversely, assigning fewer steps to larger scales also slightly degrades performance but yields faster inference.
Overall, these results suggest that a uniform 25-step setting offers a better trade-off between generation quality and efficiency. > Q3: long-text conditioning Since FlowAR is an autoregressive model, we can either concatenate the long text condition in front of the image sequence—similar to other autoregressive models [A]—or incorporate cross-attention as done in diffusion models [B]. [A] Autoregressive Model Beats Diffusion: Llama for Scalable Image Generation [B] High-Resolution Image Synthesis with Latent Diffusion Models > Q4: positional encodings The positional embeddings are handled in the same way as prior methods [C, D, E, F] (e.g., absolute position embeddings are added at the input layer). As stated in the paper, the detailed implementation (for both training and inference) will be fully open-source, allowing the community to reproduce and examine all details. [C] Visual Autoregressive Modeling: Scalable Image Generation via Next-Scale Prediction [D] Autoregressive Image Generation without Vector Quantization [E] Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models and Time-Dependent Layer Normalization [F] Scalable Diffusion Models with Transformers > Q5: level of controllability We thank the reviewer for the insightful question. In this work, our goal is to develop a general and broadly applicable framework by treating all scales uniformly—for example, using the same number of denoising steps across scales—on the standard ImageNet benchmark, following the next-scale prediction framework introduced by VAR. While datasets such as LAION or COCO may benefit from dataset-specific scale designs, our focus is on establishing a simple and general scale design that can generalize across settings without requiring dataset-specific tuning.
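As a side illustration of the scale design discussed in this thread (continuous VAE latents downsampled into a coarse-to-fine sequence, with each scale double the previous), here is a minimal numpy sketch. It assumes simple average pooling and power-of-two scale sizes; the helper names are ours for illustration, not FlowAR's actual code.

```python
import numpy as np

def downsample(latent, size):
    """Average-pool a (C, H, W) latent to (C, size, size).

    Assumes H and W are divisible by `size`, which holds for the
    power-of-two scale schedule 1, 2, 4, 8, 16 discussed above.
    """
    c, h, w = latent.shape
    f = h // size
    # Group each size x size output cell's f x f input patch, then average.
    return latent.reshape(c, size, f, size, f).mean(axis=(2, 4))

def latent_pyramid(latent, scales=(1, 2, 4, 8, 16)):
    """Coarse-to-fine sequence of downsampled continuous latents."""
    return [downsample(latent, s) for s in scales]

latent = np.random.randn(4, 16, 16)  # e.g. a 4-channel VAE latent
pyramid = latent_pyramid(latent)
print([p.shape for p in pyramid])
# → [(4, 1, 1), (4, 2, 2), (4, 4, 4), (4, 8, 8), (4, 16, 16)]
```

Because the pooling is a plain average, the coarsest 1×1 scale is exactly the per-channel mean of the latent, and the finest scale is the latent itself.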
Summary: The paper explores image generation using a next-scale prediction objective, in which image latents (from a VAE) are progressively super-resolved from a 1x1 scale to 2x2, 4x4, and so on. To that end, the authors present a method in which an autoregressive model predicts a conditioning for a per-scale flow-matching model. Unlike models like VAR that rely on a specifically trained residual multi-scale tokenizer, FlowAR works on down-sampled latents from arbitrary VAEs, which makes it broadly applicable. On class-conditional ImageNet generation, FlowAR outperforms VAR and other baselines, and achieves SotA performance for next-scale prediction models. ## update after rebuttal The authors addressed most of my concerns and I raised my rating accordingly. Claims And Evidence: Overall the claims are sensible; yet there are a few (FLOPS-controlled) baselines that would be important to ablate to properly verify the contribution of the method:
1. What's the importance of the autoregressive model in the method? How would a baseline perform in which there is no AR model, but all scales can attend to each other and all scales are diffused at once like Matryoshka Diffusion Models [Gu et al., 2023]? Of course, in this case the input is not the fully predicted previous scales, but the noised version of all the scales at once.
2. Similar to before, how then would a FLOPS-controlled single-scale flow model perform? Is there a need for multi-scale prediction?
3. I appreciate the per-token prediction ablation in L413, but how would the performance change if the semantics s were not predicted in a next-scale manner, but fully autoregressively? I.e., autoregressive on an inter-scale and an intra-scale level (raster-scan).
4. Like pyramidal flow [Jin et al., 2024], what if there were no split between AR and flow models, but instead you predict the flow between the upsampled previous scale and the next scale directly?
5.
What is the performance and runtime trade-off when trading off AR model and flow model layers? MAR shows that the diffusion layers can be rather small, but in FlowAR they are somewhat large. Methods And Evaluation Criteria: The submission demonstrates the proposed method in a class-conditional generation framework with ImageNet-1k. This setting is very commonly used, and under that umbrella the evals make sense. That said, class-conditional ImageNet generation is a benchmark that is somewhat over-optimized, and the common generation metrics (gFID against train-set statistics) favor and measure overfitting to the train set. Indeed, FlowAR surpasses the train-val FID of 1.78, as reported by VAR. At that point, I would argue that the benchmark becomes nearly meaningless, and more scalable and less data-starved benchmarks are required. Commonly, fully autoregressive models can quickly overfit with just a few passes over the same data, and FlowAR trains for 400 epochs. This might differ with hybrid AR+Flow models, but it is also not clear whether any data augmentation was used. I would also like to note here that it is unclear to me if the Fig 1 results should be compared like that. As far as I can tell, VAR was trained for a shorter number of epochs. How did the authors ensure these two settings are comparable? Theoretical Claims: The submission does not make any theoretical claims. All claims are empirical in nature. Experimental Designs Or Analyses: The paper gives enough implementation details to make a reasonable attempt at reimplementing it, but not enough to be fully confident of the exact training procedure and settings. It is also not very detailed in the ablation settings. Supplementary Material: I reviewed all parts of the supplementary. It consists mostly of training details, visualizations, and an ablation of scale sequence construction and scale configuration.
Both ablations are topics I wondered about when reading the main paper, and I appreciate their inclusion. Relation To Broader Scientific Literature: This submission is timely, given the community's recent interest in alternative generation schemes (other than diffusion and raster-scan next-token prediction) that explore hierarchical or multi-scale approaches. The work is very related to VAR and classical "progressive super-resolution" works (e.g. cascaded diffusion, Imagen), as well as recent literature that explores next-token prediction with continuous latents (e.g. GIVT, MAR, Fluid). Essential References Not Discussed: There are a few important missing references. "Next-scale" training and prediction has been a long-standing research area, going back to at least early explorations with GANs [1,2]. It has also been studied from the perspective of cascaded super-resolution with diffusion models, e.g. [3,4], as well as quite recently and relevant to FlowAR in videos with flow models [5]. VAR has also had relevant follow-up work, e.g. [6]. More related to the autoregression + continuous targets direction, [7,8] are also relevant works, with [7] building upon the cited MAR work. I would also note [9] as a quite relevant work that performs diffusion on multiple scales at once, instead of in an autoregressive manner.
[1] Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks, Denton et al., 2015
[2] Progressive Growing of GANs for Improved Quality, Stability, and Variation, Karras et al., 2017
[3] Cascaded Diffusion Models for High Fidelity Image Generation, Ho et al., 2021
[4] Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding, Saharia et al., 2022
[5] Pyramidal Flow Matching for Efficient Video Generative Modeling, Jin et al., 2024
[6] Infinity: Scaling Bitwise AutoRegressive Modeling for High-Resolution Image Synthesis, Han et al., 2024
[7] Fluid: Scaling Autoregressive Text-to-image Generative Models with Continuous Tokens, Fan et al., 2024
[8] DART: Denoising Autoregressive Transformer for Scalable Text-to-Image Generation, Gu et al., 2024
[9] Matryoshka Diffusion Models, Gu et al., 2023
Other Strengths And Weaknesses: The paper presents strong results on IN1K class-conditional generation. The method is overall straightforward, and greatly simplifies some of the limitations of VAR's need for a specially trained multi-scale tokenizer. Indeed, it can be run with a wide variety of standard VAE models. That said, I feel that there are a few important baselines missing, and I have doubts about the validity of the IN1K class-conditional generation benchmark when pushing models to such high performance levels. Train-set gFID to me is not a solid metric, and it looks like VAR significantly outperforms FlowAR in terms of IS. The setting and the rank reversal between metrics make me have some doubts about the more fine-grained side of results, but overall the method is conceptually simple and seems to work reasonably well. Other Comments Or Suggestions: Generally the manuscript is well written and easy to read. I did not spot many obvious typos. I would suggest showing more visuals, and if possible, showing visuals of intermediate-stage reconstructions (even if they may not be fully valid). Questions For Authors: On L288: "it provides indirect semantic injection, potentially weakening the effectiveness of semantic guidance." This claim is not very clear to me. Could the authors expand upon the reasoning for this? In which sense is the "semantic injection" indirect? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback.

> C1: importance of AR

Unlike Matryoshka Diffusion Models (MDM), which use a nested UNet to jointly denoise all scales, FlowAR conditions on previously generated scales. Below, we include a baseline, “diffuse all noisy scales” (without AR), which faces two issues: (1) ***slower inference***, as denoising is applied to all scales, resulting in extremely long sequences; and (2) ***worse FID***, due to the absence of guidance from previously generated scales.

|model|params|inference time (sec/image)|FID|
|:---:|:---:|:---:|:---:|
|MDM|434M|3.16|3.51|
|diffuse all noisy scales|600M|2.80|1.98|
|FlowAR-L|589M|0.12|1.90|

> C2: single-scale flow model

Below, we compare with SiT-XL, a pure single-scale flow model, and “FlowAR-L w/ single-scale.” Our multi-scale design outperforms SiT-XL while requiring 14% of the FLOPs and achieving a 13.9× speedup. While slightly slower than the single-scale variant, the multi-scale design achieves a 1.78 FID improvement.

|model|params|FLOPs|inference time (sec/image)|FID|
|:---:|:---:|:----:|:----:|:----:|
|SiT-XL|675M|58.3T|1.67|2.06|
|FlowAR-L w/ single-scale|589M|6.2T|0.09|3.68|
|FlowAR-L w/ multi-scale|589M|8.2T|0.12|1.90|

FLOPs are computed across the entire generation process.

> C3: token-wise prediction of semantics

This token-wise design results in slower inference and worse performance:

|model|inference time (sec/image)|FID|
|:---:|:---:|:----:|
|token-wise AR|0.58|3.02|
|scale-wise AR (ours)|0.12|1.90|

> C4: no split between AR and flow models

Below, we experimented with directly predicting the flow between the upsampled previous scale and the next scale.

|model|inference time (sec/image)|FID|
|:---:|:---:|:----:|
|no split|0.45|2.45|
|with split (ours)|0.12|1.90|

> C5: performance vs. runtime

Unlike MAR, which models each token individually, FlowAR models the probability over entire scales.
It also reduces inference time by using only 5 AR steps (with 25 denoising steps each), compared to MAR’s 256 AR steps (with 100 steps each). As shown below, FlowAR-H is 5× faster than MAR-B while achieving a 0.66 FID improvement. |model|params|inference time(sec/image) |FID| |:---:|:---:|:----:|:----:| |MAR-B|208M|1.25|2.31| |FlowAR-H|1.9B|0.24|1.65| Due to response length limitation, please also refer to our response to Reviewer 6Cuw’s Q3 for the ablation on AR and flow model sizes. > ImageNet train-set gFID We thank the reviewer for raising this point. FID is a widely used metric in prior works (e.g., MAR, DiT, VAR, SiT, StyleGAN), and our setup follows these standards for fair comparison. > Training settings of VAR and FlowAR For ***data augmentation***, we follow VAR’s setup with RandomCrop and horizontal flipping. As noted in the paper, we will ***fully open-source*** our code and checkpoints for reproducibility and transparency. Below, we clarify the training settings of VAR and FlowAR. While diffusion and flow models typically require long training (e.g., MAR: 800 epochs, DiT/SiT: 1400), FlowAR achieves strong performance with just 400 epochs. Compared to VAR-d30/d24 (350 epochs), FlowAR-H trains for a similar duration but uses nearly half the GPU hours, thanks to its simpler scale design and shorter sequences. The table below summarizes A100 training hours and FID scores. Compared to VAR-d20/12, our 250-epoch version of FlowAR achieves better performance while incurring only half the training cost. 
|model|epochs|params|training hours (A100)|FID|
|:--:|:--:|:--:|:--:|:--:|
|VAR-d30|350|2B|9657|1.97|
|FlowAR-H|350|1.9B|4667|1.70|
|FlowAR-H|400|1.9B|5334|1.65|
|VAR-d12|250|132M|788|5.81|
|FlowAR-S|250|170M|401|4.12|
|FlowAR-S|400|170M|642|3.61|
|VAR-d20|250|600M|3012|2.95|
|FlowAR-L|250|589M|1460|2.15|
|FlowAR-L|400|589M|2335|1.90|

> Essential References

We will cite and discuss all mentioned works in the related work section and compare with [3, 8, 9] in the experiments. Our work is most closely related to VAR and its follow-up [6], which autoregressively predict the next scale, unlike other methods that, while leveraging multi-scale information, do not adopt this formulation.

> IS vs. FID

We observe an IS–FID trade-off when tuning the CFG scale. While we focus on optimizing FID, a slight adjustment still allows both FID and IS to outperform VAR by a notable margin.

|model|cfg|FID|IS|
|:---:|:---:|:----:|:----:|
|VAR|2.4|1.97|323.1|
|FlowAR-H|2.4|1.65|296.5|
|FlowAR-H|3.0|1.75|357.3|

> Visualization

Due to response length limitations, please also refer to our response to Reviewer 6Cuw’s Q4 for the visualization of intermediate-stage generated samples.

> L288 claim

Our Spatial-adaLN injects scale- and position-wise semantics for fine-grained conditioning, unlike the simpler concat approach (L281), which provides only sequence-level, indirect guidance.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their detailed rebuttal and clarifying my questions. Most of my concerns are addressed, except for C5. My question there was more about the trade-off between AR model size and flow model size. For example, in Appendix B the authors list FlowAR-H's model size as being split into a 30-layer AR model and an 18-layer flow model. What's the impact of assigning more or less capacity to the AR / flow model?
--- Reply to Comment 1.1.1: Comment: As mentioned in the previous rebuttal, due to response length limitations, we ablate the AR and flow matching model sizes in the response to ***Reviewer 6Cuw’s Q3***. For your convenience, we provide the table below. As shown, increasing the size of the flow matching module initially improves performance, but further scaling eventually leads to a performance drop.

|model|AR params|Flow matching params|inference time (sec/image)|FID|
|:---:|:---:|:---:|:---:|:---:|
|FlowAR-L|504M|70M|0.03|2.32|
|FlowAR-L|411M|143M|0.07|2.01|
|FlowAR-L (default setting)|309M|280M|0.12|1.90|
|FlowAR-L|152M|420M|0.19|1.98|
|FlowAR-H|1620M|321M|0.13|1.74|
|FlowAR-H (default setting)|1280M|633M|0.24|1.65|
|FlowAR-H|855M|1048M|0.35|1.67|
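For context on the flow-matching component discussed throughout this thread, here is a minimal numpy sketch of the standard linear-interpolation (rectified-flow-style) training target at a single scale. This is the generic formulation, not FlowAR's exact implementation; all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_pair(x1, rng):
    """Sample a training pair for linear-interpolation flow matching.

    x1 is the clean latent at one scale; the velocity network is
    trained to predict the constant velocity v = x1 - x0 at point x_t.
    """
    x0 = rng.standard_normal(x1.shape)  # noise sample
    t = rng.uniform()                   # random time in [0, 1]
    x_t = (1 - t) * x0 + t * x1         # interpolant between noise and data
    v = x1 - x0                         # velocity regression target
    return t, x_t, v

x1 = rng.standard_normal((4, 8, 8))     # clean latent at, e.g., the 8x8 scale
t, x_t, v = flow_matching_pair(x1, rng)

# One Euler step with the true velocity over the remaining time (1 - t)
# recovers the clean latent exactly, since the velocity is constant:
x_end = x_t + (1 - t) * v
print(np.allclose(x_end, x1))  # True
```

This identity (x_t + (1 - t)(x1 - x0) = x1) is why a learned velocity field can be integrated from noise at t = 0 to a sample at t = 1 with an ODE solver at inference time.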
Disparate Conditional Prediction in Multiclass Classifiers
Accept (poster)
Summary: This paper extends the measure of Disparate Conditional Prediction (DCP), which is based on equalized odds, to multi-class classifiers. The authors provide optimization methods for scenarios with and without access to confusion matrices and demonstrate the algorithms' efficacy using decision trees and NN models on three datasets. Claims And Evidence: DCP is extended to multi-class scenarios, and I think the claims are well evidenced in the paper. Methods And Evaluation Criteria: The proposed DCP measure looks good, and some standard datasets are applied. Theoretical Claims: I read through the theoretical analyses but was unable to verify their correctness. Experimental Designs Or Analyses: The results comparisons are among different strategies or with the true DCP. It would be better to simply declare this the best choice if there are no feasible baselines. Supplementary Material: The supplementary material primarily includes code for the experimental section. Relation To Broader Scientific Literature: I am unsure whether this topic holds significant importance in fairness research today. Maybe more fairness audit research should be discussed to better position this work. Essential References Not Discussed: This work is based on Sabato & Yom-Tov (2020) and I think the most related works are included in the paper. Other Strengths And Weaknesses: Strengths: The proposed DCP for multi-class scenarios is well presented, and I believe some readers in the fairness community will find this work interesting. Although I may not fully grasp every detail, I find the paper to be of good quality. Weaknesses: 1. Some figures are presented without further explanation. For example, it is unclear what we are expected to see or understand from Figure 1. The authors should not just simply refer to it (line 146). 2. Some notations are not easy to understand. For example, above Eq. 2 we see \eta’s definition, while it takes two input variables in Eq.
4. I found this part confusing when reading. Other Comments Or Suggestions: 1. In Eq. 4, both w and \pi are not dependent on the classifier according to their definitions in section 3, but they are said to be determined by C below Eq. 3. 2. Eq. 2 sums group by group; will this lead to a trivial solution? Questions For Authors: 1. Equalized odds emphasizes true and false positive rates, which are well-interpreted in binary classification. It would be helpful if the authors could provide some examples illustrating the significance of equalized odds in multi-class scenarios. 2. I understand this work currently focuses on auditing a trained classifier. But I am curious whether you have considered integrating the proposed DCP into an in-processing fairness method? What is the challenge of doing so? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your helpful review and comments. Please find below our response to your comments and questions. **Comment**: The results comparisons are among different strategies or with the true DCP. It would be better to simply declare this is the best choice if there are not any feasible baselines. **Reply**: We have included different strategies for ablation purposes, as we believe it strengthens the justification of our use of the chosen strategies. **Comment**: Maybe more fairness audit research should be discussed to better position this work. **Reply**: Our work falls within the realm of fairness auditing using only aggregate statistics, without assuming access to the classifier. This is in contrast with many other works, which require access to the classifier and the ability to query it. The challenge of auditing fairness using limited information has received significant attention in recent years, as evident, for example, in [1] & [2]. Our work is unique, in that it is the first, to our knowledge, to address the multiclass setting. The state of fairness audit research is discussed in the first three paragraphs of the Introduction, and in the second paragraph of the Related Work section. We will expand the discussion on recent works on fairness auditing with limited information in the final version of the paper. [1] Pinzón, Carlos, Catuscia Palamidessi, Pablo Piantanida, and Frank Valencia. "On the incompatibility of accuracy and equal opportunity." Machine Learning 113, no. 5 (2024): 2405-2434. [2] Jialu Wang, Yang Liu, and Caleb Levy. Fair classification with group-dependent label noise. In Proc. of the 2021 ACM Conference on Fairness, Accountability, and Transparency, FAccT ’21, page 526–536, New York, NY, USA, 2021b. Association for Computing Machinery. **Comment**: What we are expected to see or understand through Figure 1. 
**Reply**: Figure 1 provides a visualization of the function $\eta$, demonstrating that it is piecewise concave. We will refer to the figure when discussing the function's properties and explain the visualization. **Comment**: Above Eq. 2 we see $\eta$’s definition, while it takes two input variables in Eq. 4. **Reply**: Above and in Eq. 2, we use $\eta_a^y$, which is a scalar, and $\eta_a$, which is a vector of scalars. $\eta$ without superscripts or subscripts is a function that takes two arguments. We will clarify this in the final version. **Numbered Comment 1**: In Eq. 4, both w and $\pi$ are not dependent on the classifier according to their definitions in section 3, but they are said determined by C below Eq. 3. **Reply**: Thank you for the comment. Indeed this is a typo, since only $M_A$ is determined by C. We will fix this in the final paper. **Numbered Comment 2**: Eq. 2 executes addition group by group, then will it lead to a trivial solution? **Reply**: As our analysis below Eq (2) shows, the solution is not trivial, because of the constraint that the values of $\eta_a^y$ must be consistent with the distribution D. **Question 1**: Equalized odds emphasizes true and false positive rates, which are well-interpreted in binary classification. It would be helpful if the authors could provide some examples illustrating the significance of equalized odds in multi-class scenarios. **Reply**: In multiclass scenarios, the multiclass equalized odds criterion measures any differences in conditional prediction probabilities between sub-populations. This includes not only the difference in the rate of correct predictions as in the binary case, but also the types of prediction mistakes. For instance, if a patient's heart attack is misdiagnosed as an anxiety attack (which may mean the patient is denied care), this is significantly different than being misdiagnosed as a stroke (which may lead to delayed care). 
If some sub-populations incur more of a certain type of misdiagnosis error, this could indicate bias in diagnosis, as well as lead to undesired differences in treatment. This is one example of the importance of the multiclass equalized odds criterion. We will add a discussion with additional examples to the final version of the paper. **Question 2**: I understand this work currently focuses on auditing a trained classifier. But I am curious have you considered integrating the proposed DCP into a in-processing fairness method? **Reply**: We agree that this is an important question, and we intend to study it in future work.
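To make the multiclass equalized-odds discussion above concrete, here is an illustrative numpy sketch of a DCP-style quantity: the weighted fraction of (sub-population, label) cells whose conditional prediction distribution is farther than a tolerance (in total variation) from a common baseline. The fixed weighted-mean baseline, the tolerance, and all names are our simplifications for illustration; the paper instead optimizes over baselines via LPs.

```python
import numpy as np

def dcp_style_fraction(cond, weights, eps=0.05):
    """Illustrative DCP-style score (not the paper's LP-based bound).

    cond[a, y]   : conditional distribution P(Y_hat | A=a, Y=y),
                   shape (num_groups, num_labels, num_labels).
    weights[a, y]: population mass of sub-population (a, y).
    Returns the weighted fraction of the population whose conditional
    distribution is farther than eps (total variation) from a
    weighted-mean baseline per true label y.
    """
    w = weights / weights.sum()
    # Baseline per label y: population-weighted mean of the group distributions.
    baseline = np.einsum('ay,ayk->yk', w, cond) / w.sum(axis=0)[:, None]
    tv = 0.5 * np.abs(cond - baseline[None]).sum(axis=-1)  # TV per (a, y)
    return float(w[tv > eps].sum())

# Two groups, two labels: group 0 predicts label 0 far more often when y=0,
# while both groups behave identically when y=1.
cond = np.array([[[0.9, 0.1], [0.2, 0.8]],
                 [[0.6, 0.4], [0.2, 0.8]]])
weights = np.array([[0.3, 0.2], [0.3, 0.2]])
print(dcp_style_fraction(cond, weights))  # → 0.6
```

Here 60% of the population (both groups' y=0 cells) sits in cells whose conditional prediction distribution deviates from the baseline, which is the kind of interpretable "fraction of the population affected" reading discussed in the rebuttal.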
Summary: The authors propose a multiclass extension of the framework developed by Sabato & Yom-Tov (which covers only the binary case) that allows computing bounds on fairness metrics using only population-level quantities. DCP is determined by the confusion matrix, the sub-population proportions, and a function of conditional prediction rates, such that there is an efficient way of computing it; the authors then derive a multiclass version of DCP. The authors then compute an analytical lower bound on the DCP, and a procedure to upper bound the multiclass version of the DCP. This procedure is a loop of LPs which can be solved efficiently. Finally, the authors present a way to compute a bound on the DCP without the need for all the conditional proportions (the per-class matrix); in particular, there is no access to the classifier, just to its population-level frequencies. Weakness - The writing of the paper is confusing. The derivation of the multiclass DCP should be a theorem. Also, the metric DCP lacks motivation. Strengths - The authors propose an efficient algorithm to compute bounds on DCP. - The algorithms can be computed efficiently. Claims And Evidence: Yes, the experiments and proofs are correct and relevant. Methods And Evaluation Criteria: They do, but the DCP is not properly motivated. Theoretical Claims: Refer to summary Experimental Designs Or Analyses: Refer to summary Supplementary Material: I did not check the supplementary material Relation To Broader Scientific Literature: I am somewhat familiar with the fairness literature but not with the main work the authors cite (Sabato and Yom-Tov). Essential References Not Discussed: Refer to summary Other Strengths And Weaknesses: Refer to summary Other Comments Or Suggestions: ... Questions For Authors: No questions for the authors. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your helpful review and comments. Please find below our response to your comments and questions. **Comment**: The derivation of the multiclass DCP should be a theorem. **Reply**: Thank you for the suggestion. We will add a formal theorem statement for the derivation of DCP. **Comment**: the metric DCP lacks motivation. **Reply**: The main motivation of the DCP, as discussed in the Introduction, is that it has a consistent interpretable meaning as a fraction of the population, regardless of the number of protected attribute values, the number of classes, or the degree of class imbalance. Thus, DCP is useful for interpretably auditing and comparing classifiers. We will add a discussion of the disadvantages of other previously suggested measures, to highlight the issues that are overcome by the use of DCP. These include the following: - Measures based on differences discount differences that are small in value, although they could be meaningful in terms of fairness. This is especially problematic when there is class imbalance. For instance, if the probability of predicting a certain label is 0.1 in one sub-population and 0.2 in another, this would be identified as significantly more unfair than if the probabilities are 0.001 and 0.002, respectively. However, when this label is very rare to begin with (such as in cancer diagnosis), these differences could indicate meaningful unfairness. Previous measures that try to correct this, such as ratio-based measures, suffer from other issues, such as lack of boundedness and difficulty achieving normalization that retains the meaning of the measure. The DCP measure does not have these issues, since it always identifies the fraction of the population that is affected by the disparate classification probabilities, which is inherently normalized and meaningful, regardless of class imbalance. - Most previous works only consider cases with two sub-populations. 
Standard extensions to multiple sub-populations maximize over pairwise comparisons between sub-populations. However, this disproportionally penalizes classifiers in which a single sub-population is treated unfairly, in comparison to classifiers in which several sub-populations are treated unfairly, since both types of classifiers would be deemed to have the same amount of unfairness. The DCP measure uses an optimal common baseline and sums over all sub-population differences from the baseline, thus properly differentiating between classifiers of significantly different fairness levels. - The lack of quantifiable interpretability of the measures proposed in the literature implies that they are not guaranteed to have a consistent interpretation of unfairness across all possible classifiers and confusion matrices. While we have listed above several specific issues, trying to solve these issues by fixing the measure in an ad-hoc manner leads to other issues, and so forth. The quantifiable interpretability of DCP ensures that the value of the measure is meaningful in all possible scenarios, and that this meaning is consistent, so that one can also compare the value of the measure for different classifiers and obtain meaningful conclusions.
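The class-imbalance point raised earlier in this rebuttal (0.1 vs 0.2 for a common label versus 0.001 vs 0.002 for a rare one) can be checked numerically. This tiny sketch (ours, purely illustrative) contrasts a difference-based and a ratio-based reading of the same disparity:

```python
# Difference-based vs ratio-based disparity for the two cases from the
# rebuttal: a common label (0.1 vs 0.2) and a rare label (0.001 vs 0.002).
pairs = {"common label": (0.1, 0.2), "rare label": (0.001, 0.002)}

for name, (p, q) in pairs.items():
    diff = abs(p - q)               # difference-based measure
    ratio = max(p, q) / min(p, q)   # ratio-based measure (unbounded)
    print(f"{name}: diff={diff:.3f}, ratio={ratio:.1f}")

# The difference collapses from 0.100 to 0.001 while the ratio stays 2.0,
# even though in both cases one group receives the prediction twice as often.
```

Neither raw number captures how many people are affected; the DCP's population-fraction reading is what makes the measure's value comparable across such regimes.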
Summary: This paper provides methods to bound the unfairness of multiclass classifiers. In particular, they extend the Disparate Conditional Prediction (DCP) metric from prior work to the multiclass setting. The DCP measure for a classifier quantifies the fraction of the population for whom the classifier's prediction distribution (conditional on the group and true outcome) differs from that of an equalized-odds-fair classifier. The paper shows how this metric can be generalized to the multiclass setting and puts forward methods to bound the DCP of any classifier with and without access to the group-conditional confusion matrices. They demonstrate the efficacy of this approach over multiple datasets, with the proposed evaluation approach achieving the sharpest bounds on the DCP of decision trees and kNNs trained on the datasets. Claims And Evidence: The main claim of the paper is that DCP is a useful metric for assessing the fairness of multiclass classifiers and can be evaluated effectively using the proposed methodology. Both claims seem well-justified. For the former claim (on the usefulness of DCP), the introduction does a good job of motivating the necessity of a DCP-based method for multiclass classifier audits. However, there seems to be heavy reliance on prior works for this motivation (e.g., Sabato & Yom-Tov for DCP and Wang et al. for limitations of other metrics in the multiclass setting). I would encourage the authors to expand the discussion/examples on the limitations of prior methods for multiclass classifier audits and the usefulness of DCP to make the paper more self-contained. I am also curious if any of the limitations of the other metrics in the multiclass setting can be demonstrated within the paper itself. For example, in the setting where one has access to the group confusion matrices, ${M_a}$, one can trivially quantify unfairness as $\max_{a_1, a_2} \| M_{a_1} - M_{a_2}\|_F$, where $\|\cdot\|_F$ denotes the Frobenius norm.
It's unclear whether the main issue with this (and other prior ways of quantifying fairness in multiclass settings) is the interpretability of the metric or whether they fail to capture some crucial aspects of unfairness. Methods And Evaluation Criteria: The methods used to estimate DCP in the multiclass setting are well-described and look correct. For the empirical analysis, most of the datasets used also make sense. However, I don't completely understand the purpose of the experiments on UK election patterns at the end of Section 7 and on US education data in the appendix. Specifically, it's unclear what the DCP criterion would imply in this case. The paper claims that a "high DCP value of such a classifier would indicate a possibly large difference between voting pattern changes across regions". However, it seems to me that any such inference can only be made when the classifier is fairly accurate, and it's unclear if that's true in this case. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: The experimental design seems sound. However, Table 1, Table 2, and Figure 3 are quite difficult to read and interpret in their current form. I would encourage adding more detailed captions and labels as well as increasing the font size of all tables and plots in the main body. Supplementary Material: The supplementary material contains the code for the experiments. Relation To Broader Scientific Literature: The paper adds to the literature on fairness audit methods. Specifically, it focuses on how to perform accurate fairness audits for multiclass classifiers. To do so, it employs the DCP measure from the prior work of Sabato & Yom-Tov (2020) and generalizes it to the multiclass setting. To my knowledge, prior work on fairness hasn't focused much on interpretable extensions to multiclass settings.
And the trivial extensions from binary to multiclass settings that first come to mind are of the type that I stated in the earlier section (using matrix norms), which may be easier to compute but not necessarily easy to interpret. In that sense, I like the formulation proposed in the paper and can see it being broadly relevant for audits of other unfair human/automated decision processes as well. Essential References Not Discussed: No Other Strengths And Weaknesses: I found the Greedy initialization technique interesting. Table 1 shows that in some cases (e.g., when #labels=3) Greedy initialization alone leads to significant improvement in the upper bound, and running LM leads to a small improvement beyond that. In other cases (e.g., when #labels=8, 9), there is a larger gap between Greedy and Greedy+LM. I am curious if this is related to the number of labels or if it's some artifact of how the greedy initialization is done. Other Comments Or Suggestions: No other comments Questions For Authors: 1. What are the limitations of the other metrics in the multiclass setting, and can these limitations be clearly demonstrated within the paper itself (e.g., using the datasets used for empirical analysis)? 2. What's the purpose of the experiments on UK election patterns and US education data, and what does the DCP quantification mean in these settings? 3. Regarding the gap between the upper bounds from the Greedy and Greedy+LM methods, what's the reason for the change in this gap across #labels? It seems this gap is smaller when #labels is small, but I'm not sure if that trend will hold across datasets. Code Of Conduct: Affirmed. Overall Recommendation: 3
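The trivial matrix-norm metric suggested in this review can be sketched as follows. This is a minimal, hypothetical illustration with made-up confusion matrices; the function name and numbers are not from the paper under review:

```python
import numpy as np
from itertools import combinations

def max_pairwise_frobenius(confusion_matrices):
    """Naive multiclass unfairness score: the largest Frobenius distance
    between any two group-conditional confusion matrices."""
    return max(
        np.linalg.norm(confusion_matrices[a] - confusion_matrices[b], ord="fro")
        for a, b in combinations(confusion_matrices, 2)
    )

# Toy example: two groups, three labels, row-normalized confusion matrices.
M = {
    "group_a": np.array([[0.9, 0.05, 0.05],
                         [0.1, 0.8,  0.1 ],
                         [0.1, 0.1,  0.8 ]]),
    "group_b": np.array([[0.7, 0.2,  0.1 ],
                         [0.2, 0.6,  0.2 ],
                         [0.1, 0.2,  0.7 ]]),
}
score = max_pairwise_frobenius(M)
```

As the review notes, such a norm-based score is easy to compute, but it lacks the population-level interpretation that DCP provides (what fraction of the population is affected by disparate treatment).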
Rebuttal 1: Rebuttal: Thank you for your helpful review and comments. Please find below our response to your comments and questions. **Question 1**: What are the limitations of the other metrics in multiclass settings? **Reply**: Standard unfairness measures (see the Related Work section) have several disadvantages that are overcome by DCP. While these were discussed in the work that suggested binary DCP, we will add a relevant discussion to the paper, to highlight these issues and make the paper self-contained. The disadvantages include the following: - Measures based on differences discount differences that are small in value, although they could be meaningful in terms of fairness. This is especially problematic when there is class imbalance. For instance, if the probability of predicting a certain label is 0.1 in one sub-population and 0.2 in another, this would be identified as significantly more unfair than if the probabilities are 0.001 and 0.002, respectively. However, when this label is very rare to begin with (such as in cancer diagnosis), these differences could indicate meaningful unfairness. Previous measures that try to correct this, such as ratio-based measures, suffer from other issues, such as a lack of boundedness and difficulty achieving normalization that retains the meaning of the measure. The DCP measure does not have these issues, since it always identifies the fraction of the population that is affected by the disparate classification probabilities, which is inherently normalized and meaningful, regardless of class imbalance. - Most previous works only consider cases with two sub-populations. Standard extensions to multiple sub-populations maximize over pairwise comparisons between sub-populations. 
However, this disproportionally penalizes classifiers in which a single sub-population is treated unfairly, in comparison to classifiers in which several sub-populations are treated unfairly, since both types of classifiers would be deemed to have the same amount of unfairness. The DCP measure uses an optimal common baseline and sums over all sub-population differences from the baseline, thus properly differentiating between classifiers of significantly different fairness levels. - The lack of quantifiable interpretability of the measures proposed in the literature implies that they are not guaranteed to have a consistent interpretation of unfairness across all possible classifiers and confusion matrices. While we have listed above several specific issues, trying to solve these issues by fixing the measure in an ad-hoc manner leads to other issues, and so forth. The quantifiable interpretability of DCP ensures that the value of the measure is meaningful in all possible scenarios, and that this meaning is consistent, so that one can also compare the value of the measure for different classifiers and obtain meaningful conclusions. **Question 2**: What's the purpose of the experiments on UK election patterns and US education data and what does the DCP quantification mean in these settings? **Reply**: The experiments are intended to demonstrate how our approach can be used in diverse settings. Specifically, as discussed on page 8 of the paper, our analysis of the UK election dataset shows how political scientists could identify regional variability in voting patterns using the DCP measure. In the education dataset, our goal was to test whether educational attainment progressed more (or less) in some states compared to others. In this case, consider the scenario where authorities do not publish their achievements (or lack thereof) in disadvantaged areas, and only publish aggregate information. The DCP measure can help uncover such cases. 
**Question 3**: Regarding the gap between the upper bounds from Greedy and Greedy+LM methods, what's the reason for the change in this gap across #labels? **Reply**: This is indeed an interesting question. We conjecture that the greedy approach is less tight when there are more labels, because it relies on iterative solutions for a binary version of the problem. When the number of labels is larger, there are exponentially more possible combinations, and so we believe this leads to the greedy initialization being more likely to follow a sub-optimal iterative path. **Comment**: [On election patterns] it seems to me that any such inference can only be made when the classifier is fairly accurate. **Reply**: In the election experiment, the classifier predicts by assuming no changes in voting patterns between consecutive elections. Thus, we do not expect the classifier to always be accurate. On the contrary, its inaccuracy and unfairness are the objects of study, as they reveal changing voting patterns, and a variability of changes between regions, as measured by the DCP. **Comment**: Difficulty of interpreting Tables and a Figure. **Reply**: Thank you for the comment, we will improve the readability of these in the final version of the paper.
Summary: This work extends disparate conditional prediction, which measures the deviation from equalized odds, to multiclass classification problems. As the confusion matrices are not available, the authors derived the lower bound and the upper bound of the DCP of a multiclass classifier. The upper bounds are obtained using a local minimization procedure. To overcome the non-smoothness, the functions are split into locally linear parts, and the solutions are obtained by a combination of sequential solutions of standard linear programming problems. Claims And Evidence: The claims made in the submission are clearly supported by theoretical analysis on the lower bound and the upper bound. Methods And Evaluation Criteria: The proposed methods seem reasonable. Theoretical Claims: There is no proof for the theoretical claims. Experimental Designs Or Analyses: In the experiments, only decision tree and nearest neighbor classifiers are examined. However, researchers would expect more complicated models, such as SVMs and neural networks. Supplementary Material: The supplementary material is a mix of Python and MATLAB code. It is a bit odd that the MATLAB code is used for processing the data while Python scripts are used for the simulation and optimization. But in general, they look good, with detailed comments. Relation To Broader Scientific Literature: The contribution of this paper is related to auditing the fairness of multiclass classifiers. Essential References Not Discussed: No. Other Strengths And Weaknesses: - The core contribution of this paper builds upon the criteria established by Sabato & Yom-Tov (2020), which raises questions about its originality and novel contributions. - The approach to determining lower and upper bounds demonstrates a sound optimization approach to addressing the problem. - The experiments are only evaluated on simple tree-based and nearest neighbor models. Other Comments Or Suggestions: The paper is clearly written.
I did not find any typo. Questions For Authors: - What does the nuisance distribution mean in Section 4? And what is its insight on real world data? - What is the convergence rate of the proposed Algorithm 1? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your helpful review and comments. Please find below our responses to your comments and questions. **Comment**: There is no proof for the theoretical claims. **Reply**: All theoretical claims are proved, either in the body of the paper or in the appendix. If there are any theoretical claims for which you believe proofs are missing, we would be grateful if you could point them out to us. **Comment**: In the experiments, only decision trees and nearest neighbor classifiers are examined. However, researchers would expect more complicated models, such as SVM and neural networks. **Reply**: The types of classifiers were chosen to be suitable for the low-dimensional categorical datasets on which we report. Since our methods only use the prediction rates of each label in the output classifier, the complexity of the models does not have any effect on the usage of our methods. In addition, if a classifier is 100% accurate in its predictions, then the equalized odds criterion holds trivially, in which case the DCP would be zero. We appreciate the suggestion to extend our experiments - we will add experiments in which the classifiers are generated by SVMs and neural networks to the final version of the paper. **Comment**: originality and novel contributions **Reply**: While we build on a previous work, which defined binary DCP, the definitions and methods in that previous work are not applicable to multiclass classifiers, since they heavily rely on the binary nature of the problem. The calculation of the extension of DCP to the multiclass setting is challenging, and all of our methods for bounding the DCP are completely novel. As we discuss in the paper, the computational problems that arise in the multiclass case are not trivial and cannot be solved efficiently by out-of-the-box solvers. For instance, we use a combination of sequential linear programming with a function split to handle the non-smoothness.
In addition, we handle instabilities encountered by the LP solvers when they deal with extremely high derivatives imposed by the $\eta$ function. Thus, our contributions are novel and original. **Question**: What does the nuisance distribution mean in Section 4? And what is its insight on real-world data? **Reply**: The nuisance distribution represents the deviation of the classifier's conditional probabilities on each sub-population from the baseline conditional probabilities. This is a modeling construct that was used to derive the DCP measure in the previous work that we build upon. In this paper, we extend this to the multiclass case. In real-world data, a nuisance distribution with a higher mixture parameter ($\eta_a^y$) indicates that a larger part of the population is classified with conditional probabilities that deviate from the baseline conditional probabilities, thus indicating that the classifier is less fair. **Question**: What is the convergence rate of the proposed Algorithm 1? **Reply**: Our problem contains the non-linear $\eta$ functions, which are approximated by linear functions. For a general non-linear programming problem, sequential linear programming (SLP) methods converge linearly if all functions are smooth. That is because they do not use Hessian information, and their whole purpose is to handle the constraints efficiently (determining which constraints are active and which are not). If one uses Hessian information, we get sequential quadratic programming (SQP) and the convergence is asymptotically quadratic once the active set of constraints is identified, similar to Newton’s method [1]. For Algorithm 1 to converge linearly, we first need to have smooth constraints, and for that we use the split of $\eta$ presented in Equation (16) and detailed in Appendix D. Furthermore, we also limit the size of the steps to be of size at most $\tau$, as detailed in Appendix D.
This approach is called a Trust Region method, and is commonly used with sequential quadratic or linear programming methods. The method was analyzed in [2] for a case that is similar to ours, where the objective of the constrained problem is linear, like in Eq. (9) and Eq. (15). It is shown that the convergence of the SLP method is linear, and the rate depends on the radius of the trust region method ($\tau$, in our case). We will add this discussion to the final version of the paper, including plots demonstrating the convergence rate for different choices of $\tau$ in our problems. [1] Nocedal J, Wright SJ, editors. Numerical optimization. New York, NY: Springer New York; 1999. [2] Kiessling D, Zanelli A, Nurkanović A, Gillis J, Diehl M, Zeilinger M, Pipeleers G, Swevers J. A feasible sequential linear programming algorithm with application to time-optimal path planning problems. In 2022 IEEE 61st Conference on Decision and Control (CDC) 2022 Dec 6 (pp. 1196-1203). IEEE.
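As a toy illustration of the SLP-with-trust-region scheme described in this reply (our own generic example with a single smooth constraint, not the paper's Algorithm 1 or its $\eta$-split constraints), each iteration linearizes the nonlinear constraint around the current point and solves a box-bounded LP for the step:

```python
import numpy as np
from scipy.optimize import linprog

# Toy problem: minimize x + y subject to x^2 + y^2 <= 1.
# The optimum is (-sqrt(2)/2, -sqrt(2)/2) with objective value -sqrt(2).
def g(x):          # nonlinear constraint, g(x) <= 0
    return x @ x - 1.0

def grad_g(x):
    return 2.0 * x

x = np.array([0.5, 0.5])   # feasible starting point
tau = 0.05                 # trust-region radius

for _ in range(100):
    # Linearize:  g(x) + grad_g(x) @ delta <= 0, with |delta_i| <= tau.
    res = linprog(
        c=[1.0, 1.0],                       # linear objective in the step
        A_ub=[grad_g(x)], b_ub=[-g(x)],     # linearized constraint
        bounds=[(-tau, tau), (-tau, tau)],  # trust region
        method="highs",
    )
    x = x + res.x

# The iterates settle within O(tau) of the true optimum.
```

The trust-region bound plays the role of $\tau$ in the reply: it keeps each linearized step small enough that the linearization stays a reasonable local model of the constraint.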
Efficient Graph Continual Learning via Lightweight Graph Neural Tangent Kernels-based Dataset Distillation
Accept (poster)
Summary: This paper proposed a graph dataset distillation method via Graph Neural Tangent Kernels (GNTK) for efficient graph continual learning. The main idea is using the Bernoulli sampling method to approximate the graph Laplacian, which is required for computing the gradients. By carefully setting the probability of the Bernoulli distribution, this paper claimed that it could trade off computational efficiency and approximation error. Further, this paper proposed a data selection and a supervised fine-tuning method to achieve the authors' goal. The experimental results seem to support its claims. Overall, some parts of this paper are unclear, and I am confused by them, at least in this version. Claims And Evidence: The key claims in this paper involve two aspects: 1) Low-rank optimization; and 2) The data selection and supervised fine-tuning (SFT). 1.1. The motivation of Bernoulli random variables. > Suppose we have an undirected graph and its adjacency matrix $A$, and we have the corresponding graph Laplacian $L:=D-A\in\mathbb{R}^{N\times N}$ where $D$ is the degree matrix [1]. The first question is why we need to use the Bernoulli distribution. According to the authors' claims, we just need $r$ ranks, so why not directly choose the largest $r$ eigenvalues? > In terms of efficiency, these two do not have any difference; in terms of approximation error, the largest $r$ eigenvalues lead to a smaller approximation error than any choice of Bernoulli sampling. 1.2. I am not sure why and how low-rank optimization helps efficiency. > According to (2) and (3) in the manuscript, I find that the low-rank graph Laplacian $\tilde{L}$ is still $N\times N$. > To compute $\tilde{L}$, one needs to have the original graph Laplacian $L$; then apply the spectral decomposition $L=U \Lambda U^\top$; next, compute its low-rank approximation $\tilde{L}$; finally, replace $L$ with $\tilde{L}$. I am confused by it.
Why not directly use $L$, since $L$ and $\tilde{L}$ have the same shape? 2. Data selection and SFT. > The definition of $\Theta(G_i)$ in (4) is unclear. In (3), $\Theta(G_1, G_2)$ is a kernel function with two inputs; on Page 6, Line 326, $\Theta$ is an $N\times M$ matrix; but in (4), $\Theta(G_i)$ becomes a value with a single input. > The motivation of (5) is unclear. I admit that my knowledge in the field of SFT may be insufficient. But I failed to understand what (5) was doing. I suggest the authors explain it in more detail. [1] https://en.wikipedia.org/wiki/Laplacian_matrix Methods And Evaluation Criteria: Same as ``Claims and Evidence''. Theoretical Claims: Seem to be correct. Experimental Designs Or Analyses: None. Supplementary Material: No. Relation To Broader Scientific Literature: None. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: 1. Page 5, line 239. Formally stating a theorem or proposition before the proof may be better than directly writing the proof. 2. LaTeX error: Page 18, line 946. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: ## Q1: Why use Bernoulli distribution than largest r eigenvalues? We adopt Bernoulli sampling because it **effectively balances high-frequency and low-frequency components, preserving both local and global graph structures**. In contrast, **directly selecting the largest $r$ eigenvalues mainly captures local variations, while selecting the smallest $r$ eigenvalues emphasizes global structures.** ### **1. Theoretical Justification** Under Lipschitz gradient assumption [1], we assume that the GNN $ f_\theta $ satisfies $$ \|\nabla_\theta f_\theta(A) - \nabla_\theta f_\theta(B)\| \leq L_\nabla \|A - B\|, $$ $L_\nabla$ is the constant. Therefore, when approximating $ L $ with $ \tilde{L} $, the induced error in the gradients $ \| \nabla f_\theta(LX) - \nabla f_\theta(\tilde{L} X) \| $ is controlled by $ \| f_\theta(LX) - f_\theta(\tilde{L} X) \| $. Given eigen-decomposition: $ L = U \Lambda U^\top, \tilde{L} = \tilde{U} \tilde{\Lambda} \tilde{U}^\top, $ where $ \tilde{L} $ is a low-rank approximation, we introduce a selection indicator $ \xi_i $ for each eigenvalue $ \lambda_i $, leading to an error formulation: $$ (L-\tilde{L})X = \sum_{i=1}^{n}(1-\xi_i)\lambda_i u_i (u_i^\top X). $$ To ensure a fair comparison, we analyze the **normalized error bound**: $$ \mathbb{E}_\xi\left[\frac{\|(L-\tilde{L})X\|}{\|LX\|}\right] = \frac{\sum \mathbb{E}[1-\xi_i]\ \lambda_i^2\|u_i^\top X\|^2}{\sum \lambda_i^2\|u_i^\top X\|^2} $$ For Bernoulli, where $ \xi_i \sim \text{Bernoulli}(p) $ with $ p = \frac{r}{n} $, $ \mathbb{E}[1-\xi_i] = 1-p \approx 1-\frac{r}{n}. $ Thus, the normalized error bound is: $ \text{Error}_{\text{Bern}} \approx 1 - \frac{r}{n}. $ This result is independent of the eigenvalue distribution, ensuring robustness across different graphs. In comparison, $$ \frac{\sum_{i=r+1}^{n}\lambda_i^2\|u_i^\top X\|^2}{\sum_{i=1}^{n}\lambda_i^2\|u_i^\top X\|^2} = \text{Error}_{\text{Largest}}. 
$$ $$ \frac{\sum_{i=1}^{n-r}\lambda_i^2\|u_i^\top X\|^2}{\sum_{i=1}^{n}\lambda_i^2\|u_i^\top X\|^2} = \text{Error}_{\text{Smallest}} . $$ **These methods depend on the eigenvalue spectrum; simply dropping local or global signals will lead to significantly larger errors** [2, 3]. ### **2. Experimental Validation** We conducted a comparative analysis of various sampling strategies: |Method|NCI1|NCI109|PROTEINS|molbace|molbbbp|molhiv| |-|-|-|-|-|-|-| |Bernoulli|**66.4**|**65.6**|**75.9**|**76.8**|**68.2**|**69.3**| |Largest|65.4|64.0|74.7|76.1|67.9|69.2| |Smallest|65.1|65.4|75.6|76.7|67.9|69.1| Experimental results demonstrate that **Bernoulli sampling achieves the best performance** in preserving both global and local graph information. [1]LECTURES ON LIPSCHITZ ANALYSIS. [2]Self-supervised graph-level representation learning with local and global structure. PMLR, 2021. [3]From local to global: Spectral-inspired graph neural networks. NeurIPS 2022. --- ## Q2: Explanation of low-rank optimization We acknowledge that our initial complexity analysis contained an error. Below, we provide a **corrected and more detailed explanation**. The efficiency improvement of optimization primarily stems from reducing the computational complexity of matrix operations. Instead of directly computing $N \times N$ dense matrix $ L $, we use a sampled low-rank approximation: $ L \approx U_r \Lambda_r U_r^T, $ where: $ U_r \in \mathbb{R}^{N \times r}, \Lambda_r \in \mathbb{R}^{r \times r}. $ To compute $ \tilde{L} X = U_r \Lambda_r U_r^T X $ efficiently, we decompose into 3 sequential steps: - 1. $ Z = U_r^T X, \quad Z \in \mathbb{R}^{r \times d}, \quad \text{cost: } O(rNd) $ - 2. $ Y = \Lambda_r Z = \Lambda_r (U_r^T X), \quad Y \in \mathbb{R}^{r \times d}, \quad \text{cost: } O(r^2 d) $ - 3. $ \tilde{L} X = U_r Y = U_r (\Lambda_r (U_r^T X)), \quad \text{cost: } O(Nrd) $ Thus, overall computational complexity: $ O(rNd) + O(r^2 d) + O(Nrd) \approx O(Nrd), $ where $ r \ll N $. 
This significantly reduces the complexity compared to the naive approach **$ O(N^2 d) $**. This low-rank approximation approach is conceptually related to **linear attention** in Transformers [4]. [4]Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention. ICML 2020. --- ## Q3: Unclear definition of Θ(Gi) We will correct this **inconsistency in the notation of Θ in the final version of the paper** by carefully distinguishing between the kernel function and the matrix and updating notation table to ensure clarity. --- ## Q4: Definition of equation (5) Equation (5) is a combination of two terms: - Cross-entropy loss: $\frac{1}{M} \sum_{i=1}^{M} - o_i \log(\hat{o}_i)$ measures the difference between true labels $o_i$ and predicted probabilities $\hat{o}_i$. - Regularization: $\eta \|\Theta\|^2$, is a regularization term that adds a penalty proportional to the squared $L_2$ norm of model parameters $\Theta$ to prevent overfitting. --- ## S1&S2: Typo Errors We will carefully proofread and improve the manuscript. Thanks.
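The three-step factored multiplication described in the Q2 reply can be sketched as follows (an illustrative numpy snippet, not the authors' implementation; the toy matrix, rank, and eigenpair selection rule are our own assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, r = 200, 16, 10

# Toy symmetric N x N matrix standing in for the graph Laplacian, plus features X.
A = rng.random((N, N))
L = (A + A.T) / 2
X = rng.random((N, d))

# Rank-r factors U_r (N x r) and Lambda_r (r x r) from an eigendecomposition.
eigvals, U = np.linalg.eigh(L)
idx = np.argsort(-np.abs(eigvals))[:r]      # keep r selected eigenpairs
U_r, Lam_r = U[:, idx], np.diag(eigvals[idx])

# Naive route: materialize the dense N x N approximation first -- O(N^2 d).
dense = (U_r @ Lam_r @ U_r.T) @ X

# Factored route: three cheap products, never forming an N x N matrix.
Z = U_r.T @ X        # O(rNd), shape (r, d)
Y = Lam_r @ Z        # O(r^2 d), shape (r, d)
factored = U_r @ Y   # O(Nrd), shape (N, d)

assert np.allclose(dense, factored)
```

The eigenpairs are selected by magnitude here purely for illustration; the rebuttal's point is that whichever $r$ eigenpairs Bernoulli sampling keeps, the product $\tilde{L}X$ can be evaluated in this factored order at $O(Nrd)$ cost instead of $O(N^2 d)$.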
Summary: This work introduces a novel LIGHTGNTK that contains a low-rank GNTK approximation via Bernoulli sampling and a unified subgraph anchoring strategy for efficient and effective dataset distillation in multi-level tasks. Claims And Evidence: The main claim of the paper is that the low-rank approximation of the Laplacian matrix in GNTK can efficiently capture both structure and feature relationships in gradient-based dataset distillation. This claim is derived from the structure-based Laplacian matrix and the features-based similarity matrix. Methods And Evaluation Criteria: The proposed LIGHTGNTK and three evaluation tasks make sense for the dataset distillation at hand. Theoretical Claims: The paper validates its theoretical claim of high-quality low-rank approximation by using Bernoulli-based sampling with a theoretical guarantee. Experimental Designs Or Analyses: To verify the versatility of the proposed dataset distillation framework on different graph tasks, 7 graph classification datasets, 4 node classification datasets, and 2 link prediction datasets with limited training data are used in this paper to conduct comprehensive experiments. Supplementary Material: The notation table, theoretical analysis on gradient and approximation quality, experiment details including datasets statics and baseline details, and more experimental results are reported. Relation To Broader Scientific Literature: Downstream tasks such as graph continual learning, graph classification, node classification, link prediction, and graph foundation models will benefit. Essential References Not Discussed: N/A Other Strengths And Weaknesses: S1: A novel integration of low-rank GNTK approximation with dataset distillation has been proposed in the paper, enabling efficient graph continual learning. S2: Comprehensive empirical validation across 13 datasets and multi-level tasks (node/edge/graph) have been conducted in the paper, demonstrating broad applicability in the future. 
S3: The theoretical guarantees on approximation quality have been proven, enhancing methodological credibility. Other Comments Or Suggestions: W1: Ablation studies on key components (e.g., sampling probability and layer-specific gradients) are missing. W2: It would be better to provide the computational efficiency of LIGHTGNTK. W3: Why some Graph Representation Learning models are not discussed in this paper? Such as: Tan, S., Li, D., Jiang, R., Zhang, Y. and Okumura, M., 2024, July. Community-invariant graph contrastive learning. In Proceedings of the 41st International Conference on Machine Learning (pp. 47579-47606). You, Y., Chen, T., Sui, Y., Chen, T., Wang, Z. and Shen, Y., 2020. Graph contrastive learning with augmentations. Advances in neural information processing systems, 33, pp.5812-5823. Questions For Authors: Q1. The ablation experiments are suggested to be conducted to evaluate the effectiveness of each proposed component in LIGHTGNTK. Q2. It would be better to report the original training time on the whole dataset, which can validate the efficiency of this approach. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your comments, suggestions, and every effort spent on reviewing our work. Here we attempt to address all your remaining concerns. In the following, we quote your comments and then give our detailed response point-by-point.

---

## W1: Ablation studies on key components are missing.

We acknowledge that our initial submission lacked comprehensive ablation studies. To address this, we have **performed extensive ablation experiments focusing on the sampling probability and the use of layer-specific gradients**.

### 1. Sampling Probability

We investigated dataset distillation quality and computation cost **under different sampling rates** for Bernoulli sampling:

Table 1. Performance Comparison among Different Sampling Rates

| sample rate | NCI1(ACC) | NCI109(ACC) | PROTEINS(ACC) | ogbg-molbace(ROC-AUC) | ogbg-molbbbp(ROC-AUC) | ogbg-molhiv(ROC-AUC) |
|---|---|---|---|---|---|---|
| 0.05 | 65.7 | 65.0 | 74.5 | 74.9 | 67.2 | 68.8 |
| 0.1 | 66.4 | **65.6** | **75.9** | **76.8** | **68.2** | 69.3 |
| 0.2 | **66.5** | 64.6 | 75.0 | 76.6 | 67.6 | **69.4** |
| 0.5 | 65.4 | 65.1 | 74.5 | 76.4 | **68.2** | 69.3 |

Table 2. Time consumption (seconds) among Different Sampling Rates

| sample rate | NCI1 | NCI109 | PROTEINS | ogbg-molbace | ogbg-molbbbp | ogbg-molhiv |
|---|---|---|---|---|---|---|
| 0.05 | 117 | 109 | 21.6 | 43.0 | 57.8 | 621 |
| 0.1 | 144 | 136 | 24.8 | 48.8 | 59.7 | 683 |
| 0.2 | 183 | 171 | 26.1 | 59.1 | 63.3 | 733 |
| 0.5 | 186 | 180 | 28.3 | 62.9 | 65.4 | 816 |

**The results indicate that a sampling probability of 0.1 achieves the best balance between training efficiency and model accuracy.** Lower probabilities lead to performance decay due to inadequate training data, while higher probabilities increase computational cost without significant gains in performance. ### **2.
Layer-Specific Gradients**

We analyzed the effect of employing layer-specific gradients by comparing variants of LIGHTGNTK that use gradients across all layers, only the first layer, and only the last layer.

Table 3. Performance of gradient computation using layer-specific gradients

| dataset | NCI1(ACC) | NCI109(ACC) | PROTEINS(ACC) | ogbg-molbace(ROC-AUC) | ogbg-molbbbp(ROC-AUC) | ogbg-molhiv(ROC-AUC) |
|---|---|---|---|---|---|---|
| First Layer | 62.3 | 60.6 | 70.8 | 72.3 | 64.2 | 63.5 |
| Last Layer | 66.2 | 64.7 | 75.5 | 76.3 | 66.7 | 68.1 |
| All Layers | **66.4** | **65.6** | **75.9** | **76.8** | **68.2** | **69.3** |

The results indicate that using gradients across all layers provides the best performance, while using gradients from only the last layer offers a balance between accuracy and efficiency.

---

## W2: Provide computational efficiency of LIGHTGNTK.

We compared the training time of GNTK and LIGHTGNTK on more datasets. The results, detailed in the table below, indicate that LIGHTGNTK achieves an average 30% reduction in training time.

Table 4. Time consumption (seconds) between GNTK and LIGHTGNTK

| | NCI1 | NCI109 | PROTEINS | DD | ogbg-molhiv | ogbg-molbbbp | ogbg-molbace |
|---|---|---|---|---|---|---|---|
| GNTK | 196 | 184 | 26.3 | 278 | 921 | 72.6 | 54.5 |
| LIGHTGNTK | 144 | 136 | 24.8 | 203 | 683 | 59.7 | 48.8 |

---

## W3: More discussion of Graph Representation Learning models

To comprehensively evaluate the impact of different pretraining methods, we conducted additional experiments **using graph contrastive learning (GraphCL [1], CI-GCL [2]) to pretrain our GNN backbone**, comparing it with the GPT-GNN-based pretraining approach. Table 5.
Performance Comparison under Different Pretrain Methods | pretrain method | NCI1(ACC) | NCI109(ACC) | PROTEINS(ACC) | ogbg-molbace(ROC-AUC) | ogbg-molbbbp(ROC-AUC) | |---|---|---|---|---|---| | GPT-GNN (LIGHTGNTK) | 66.4 | 65.6 | 75.9 | 76.8 | 68.2 | | GraphCL (LIGHTGNTK) | 66.5 | 63.9 | 69.3 | 76.2 | 65.6 | | CI-GCL (LIGHTGNTK) | 66.7 | 64.3 | 73.5 | 76.9 | 67.4 | | GPT-GNN (FULL) | 80.0 | 77.7 | 78.6 | 72.7 | 65.0 | | GraphCL (FULL) | 79.1 | 80.7 | 72.7 | 76.5 | 65.4 | | CI-GCL (FULL) | 79.1 | 81.2 | 76.3 | 77.2 | 65.8 | The experimental results indicate **the generality of our LIGHTGNTK**, which is effective under different GNN backbones pretrain methods, and the stronger the ability of the base model, the better the quality of data selecting. [1] Community-invariant graph contrastive learning. ICML. [2] Graph contrastive learning with augmentations. NeurIPS.
Summary: The paper introduces LIGHTGNTK, a novel framework for efficient Graph Continual Learning (GCL) via dataset distillation. It enables GNNs to adapt to diverse downstream tasks without extensive fine-tuning, overcoming the high computational costs that hinder Large Graph Models (LGMs). Specifically, LIGHTGNTK utilizes a low-rank approximation of the Laplacian matrix to capture structural and feature relationships effectively. Moreover, the proposed unified subgraph anchoring strategy supports graph-, node-, and edge-level tasks. Extensive experiments on multiple graph datasets demonstrate the state-of-the-art performance of LIGHTGNTK. Claims And Evidence: Yes, the claims in this paper are all convincing. Methods And Evaluation Criteria: Yes, evaluation criteria are accurately chosen in this paper. Theoretical Claims: The proofs in the main paper and Appendix D make sense. Experimental Designs Or Analyses: The validity of the experimental designs and analyses is sufficient to demonstrate the effectiveness of the proposed method. Supplementary Material: Appendices A, B, C, and D in the supplementary material were all reviewed. The supplementary material looks fine. Relation To Broader Scientific Literature: The studied problems of graph neural networks and dataset distillation are popular in the machine learning community and can motivate various real-world applications ranging from finance to bioscience. Thus, the paper is related to the broader scientific literature. Essential References Not Discussed: All essential related works are cited in this paper. Other Strengths And Weaknesses: Strengths: - This paper innovatively employs the low-rank approximation of the Laplacian matrix to estimate the gradient similarity between datasets for dataset distillation. - This paper introduces a new anchor strategy to unify different dataset distillation tasks into the same framework.
- The model diagrams are aesthetically pleasing and clearly detailed, contributing to the paper's readability and ease of understanding. Weaknesses: - There is confusion in using symbols $\theta$ and $\Theta$ throughout the text. The authors should standardize symbol usage, particularly noticeable in Sec. (3.2). - The absence of parameter sensitivity analyses renders the experimental section incomplete. - Assumption 1, the validation-test distribution assumption, constitutes a strong prior that lacks sufficient empirical support. The authors are suggested to conduct additional experiments comparing test and validation sets to substantiate this assumption. Other Comments Or Suggestions: Suggestion 1: Include experiments in terms of parameter sensitivity to substantiate the claims made in the paper. This will help in understanding the impact of different components and parameters on the model's performance. Suggestion 2: Address the inconsistencies in symbol usage, particularly concerning $\theta$ and $\Theta$. It is important to ensure correct and consistent notation throughout the paper to avoid confusion for readers. Some discrepancies have been noted, and correcting these will improve the clarity and professionalism of the manuscript. Questions For Authors: Please see suggestions. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your comments, suggestions, and the time spent reviewing our work. Below, we address each of your concerns in detail. --- ## W1: Confusion in using symbols $\theta$ and $\Theta$ throughout the text We sincerely apologize for the inconsistency in the notation of $\theta$ and $\Theta$ in our manuscript. This was an oversight on our part, and we greatly appreciate your feedback on this issue. **To address this, we will carefully distinguish between the kernel function and the matrix in the final version of the paper. Additionally, we will update the notation table to ensure clarity and maintain consistency throughout the manuscript.** --- ## W2: The absence of parameter sensitivity analyses renders the experimental section incomplete We acknowledge that our initial submission lacked comprehensive ablation studies. To address this, we have conducted **extensive experiments analyzing the impact of sampling probability and layer-specific gradients**. ### **1. Sampling Probability Analysis** We evaluated dataset distillation quality and computational cost **under different sampling probabilities** in Bernoulli sampling: #### **Table 1. Performance Comparison Across Different Sampling Rates** | Sample Rate | NCI1 | NCI109 | PROTEINS | ogbg-molbace | ogbg-molbbbp | ogbg-molhiv | |-|--|--|-|-|-|-| | 0.05 | 65.7 | 65.0 | 74.5 | 74.9 | 67.2 | 68.8 | | 0.1 | 66.4 | **65.6** | **75.9** | **76.8** | **68.2** | 69.3 | | 0.2| **66.5** | 64.6 | 75.0 | 76.6 | 67.6 | **69.4** | | 0.5| 65.4 | 65.1 | 74.5 | 76.4 | **68.2** | 69.3 | #### **Table 2. 
Computation Time (seconds) Across Different Sampling Rates** | Sample Rate | NCI1 | NCI109 | PROTEINS | ogbg-molbace | ogbg-molbbbp | ogbg-molhiv | |--|-|-|-|-|-|-| | 0.05 | 117 | 109 | 21.6 | 43.0 | 57.8 | 621 | | 0.1| 144 | 136 | 24.8 | 48.8 | 59.7 | 683 | | 0.2 | 183 | 171 | 26.1 | 59.1 | 63.3 | 733 | | 0.5 | 186 | 180 | 28.3 | 62.9 | 65.4 | 816 | **Our findings indicate that a sampling probability of 0.1 achieves the optimal balance between training efficiency and model accuracy.** A lower sampling rate leads to a decline in performance due to insufficient training data, whereas a higher rate increases computational costs without significant performance gains. ### **2. Layer-Specific Gradient Analysis** We further examined the impact of employing layer-specific gradients by comparing LIGHTGNTK's performance when using gradients across all layers, only the first layer, and only the last layer. #### **Table 3. Performance Comparison of Layer-Specific Gradient Computation** | Dataset | NCI1 | NCI109 | PROTEINS | ogbg-molbace | ogbg-molbbbp | ogbg-molhiv | |-|-|-|-|-|-|-| | First Layer | 62.3 | 60.6 | 70.8 | 72.3 | 64.2 | 63.5 | | Last Layer | 66.2 | 64.7 | 75.5 | 76.3 | 66.7 | 68.1 | | All Layers | **66.4** | **65.6** | **75.9** | **76.8** | **68.2** | **69.3** | **The results demonstrate that utilizing gradients across all layers yields the best performance, while using gradients from only the last layer provides a favorable trade-off between accuracy and efficiency.** --- ## W3: The validation-test distribution assumption constitutes a strong prior that lacks sufficient empirical support We acknowledge the concern regarding **Assumption 1 (validation-test distribution consistency)** and would like to clarify this assumption both theoretically and empirically. ### **1. 
Theoretical Justification** From a **theoretical perspective**, assuming an **identical distribution across training, validation, and test sets** is a widely adopted principle in machine learning, particularly when data splitting is performed **randomly** under the i.i.d. (independent and identically distributed) assumption. This practice aligns with foundational theories established in various works, such as [1][2]. ### **2. Empirical Verification** To further substantiate this assumption, we conducted **quantitative analyses** using **Maximum Mean Discrepancy (MMD)** and **Kullback-Leibler (KL) divergence** on the learned GNN embeddings after training. #### **Table 4. MMD and KL Divergence between Validation and Test Sets** | | NCI1 | NCI109 | PROTEINS | DD | ogbg-molhiv | ogbg-molbbbp | ogbg-molbace | |-|--|-|-|-|-|--|-| | MMD | 0.0023 | 0.0024 | 0.0120 | 0.0056 | 0.0018 | 0.0196 | 0.1171 | | KLD| 0.0660 | 0.0520 | 0.0941 | 0.0019 | 0.0275 | 0.0651 | 0.0301 | Following [3][4], the MMD and KL divergence values are **below the critical thresholds** commonly used to assess distributional shifts. These results provide strong empirical support that the validation and test sets follow **statistically similar distributions**, thereby validating the i.i.d. assumption. [1] Sampling: Design and Analysis. Chapman and Hall/CRC, 2021. [2] Asymptotic Properties of Random Restricted Partitions. Mathematics, 2023. [3] Prompt-based Distribution Alignment for Unsupervised Domain Adaptation. AAAI, 2024. [4] KL Guided Domain Adaptation. ICLR, 2022.
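The MMD/KL comparison reported above can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the authors' pipeline: it uses synthetic Gaussian "embeddings" and a fixed RBF bandwidth rather than the actual GNN embeddings or the thresholds from [3][4].

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased Gaussian-kernel MMD^2 between two sets of embeddings."""
    def k(A, B):
        # pairwise squared Euclidean distances -> RBF kernel
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

def kl_div(p, q, eps=1e-12):
    """KL divergence between two discrete distributions (e.g. histograms)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

rng = np.random.default_rng(0)
val = rng.normal(size=(500, 16))    # stand-in validation embeddings
test = rng.normal(size=(500, 16))   # stand-in test embeddings

mmd2 = mmd_rbf(val, test, gamma=0.1)

# crude KL check: histogram one embedding dimension on shared bins
bins = np.linspace(-4, 4, 21)
p, _ = np.histogram(val[:, 0], bins=bins, density=True)
q, _ = np.histogram(test[:, 0], bins=bins, density=True)
kld = kl_div(p, q)

# both values should be near zero when the sets share a distribution
print(mmd2, kld)
```

In practice one would feed the learned GNN embeddings of the validation and test splits into `mmd_rbf` and compare against a permutation-test threshold rather than a fixed cutoff.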
Summary: This paper introduces a novel dataset distillation method called LIGHTGNTK for graph continual learning, which benefits the efficient and effective fine-tuning of large graph models. Specifically, the proposed LIGHTGNTK utilizes the low-rank approximation of the Laplacian matrix in the Graph Neural Tangent Kernel to efficiently capture both structure and feature relationships, enabling effective gradient-based dataset distillation. Moreover, this paper utilizes a subgraph anchoring strategy to unify graph-, node-, and edge-level tasks under the same dataset distillation framework. Extensive experiments demonstrate the efficiency and effectiveness of LIGHTGNTK across various graph tasks, including graph classification, node classification, and link prediction. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the methods in this paper make sense for the problem. Theoretical Claims: Yes, I have checked the proof in Section 4.2.2. Experimental Designs Or Analyses: Yes, I have checked the experimental designs and analyses in this paper. Supplementary Material: Yes, I have reviewed the supplementary material completely. Relation To Broader Scientific Literature: This paper is related to dataset distillation. Essential References Not Discussed: No missing references. Other Strengths And Weaknesses: - Strengths: 1. This paper is well organized and presented, clearly defining the problem statement. The core narrative and all technical contributions are written clearly and concisely, guiding most readers well to fully understand the contributions. 2. This paper presents a sensible, clearly presented, and interpretable approach to graph dataset distillation. The method section is further supported with proofs and significant empirical findings. 3.
All details are presented for full reproduction in this paper, including optimization, hyperparameters, datasets, and architectural settings. The further details presented in the appendix are a nice addition to support reproducibility. 4. This paper does a good job covering comprehensive experiments to evaluate the proposed method, presenting efficiency and effectiveness compared with state-of-the-art methods on a variety of datasets for different tasks. - Weaknesses: 1. The motivation behind graph dataset distillation is not entirely novel, as the concepts of GNTK have been previously discussed in [1]. 2. The distillation's reliance on similarities and the pairwise GNTK similarity matrix may become computationally intensive and could be further optimized. 3. The Introduction in this paper lacks a high-level insight into the GNTK to explain why it could capture both structural and feature relationships for dataset distillation via gradients, which makes it difficult for readers to intuitively understand the motivation of LIGHTGNTK. [1] Kernel Ridge Regression-Based Graph Dataset Distillation Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your comments, suggestions, and the time spent reviewing our work. Below, we address each of your concerns in detail. --- ## W1: Concepts of GNTK have been previously discussed Thank you for your valuable comment. While the concepts of GNTK have indeed been explored in KIDD [1], our approach in **LIGHTGNTK** differs significantly. **KIDD is specifically designed for graph classification and employs GNTK for kernel ridge regression-based dataset distillation**. However, the kernel regression paradigm in KIDD does not guarantee that training graph samples remain relevant during the inference stage. In contrast, our **LIGHTGNTK framework unifies graph classification, node classification, and link prediction tasks**. We introduce a **Bernoulli-sampled low-rank Laplacian-approximated GNTK** to enhance similarity-based dataset distillation. This novel formulation enables efficient, scalable, and task-adaptive distillation beyond the scope of KIDD. [1] Kernel Ridge Regression-Based Graph Dataset Distillation --- ## W2: GNTK similarity matrix may become computationally intensive Thank you for raising this concern. While constructing the GNTK similarity matrix has a theoretical complexity of $O(N^2)$, we mitigate computational costs by selecting only the most informative samples and reducing redundant information, following the LESS [2] framework to maintain distillation performance. Furthermore, to further optimize computational efficiency, we explore **pooling-based methods** (e.g., mean pooling) to derive a coarse-grained validation set and a smaller similarity matrix. 
The experimental results (on node classification tasks using CiteSeer and Cora) are presented below: ### Mean-Pooling Results: | gpc | CiteSeer | Cora | |-----|-----|-----| | 1 | 38.4 | 44.0 | | 10 | 60.1 | 72.0 | | 50 | 63.1 | 76.8 | ### Pairwise Calculation Results: | gpc | CiteSeer | Cora | |-----|-----|-----| | 1 | 44.7 | 44.6 | | 10 | 62.7 | 76.2 | | 50 | 63.4 | 77.9 | The results indicate that **mean pooling retains a considerable portion of the performance achieved by pairwise computation** while significantly reducing computational costs. Although pairwise similarity remains superior, mean pooling presents a viable alternative in scenarios where computational efficiency is a priority. [2] LESS: Selecting Influential Data for Targeted Instruction Tuning --- ## W3: Why could GNTK capture both structural and feature relationships? Thank you for your insightful question. The ability of **GNTK** to capture both **structural** and **feature** relationships is rooted in its **gradient-based dataset distillation mechanism**. In our **LIGHTGNTK** framework, **structural information** is propagated through the Laplacian matrix $ L $, while **feature information** is encoded via the weight matrix $ W $. Mathematically, in standard (non-graph) dataset distillation, the gradient of the loss function with respect to the weight matrix at layer $ l $ is given by: $$ \frac{\partial\mathcal{L}}{\partial W^{(l)}} = \frac{\partial\mathcal{L}}{\partial Z^{(l)}} \cdot a^{(l-1)T} $$ where $ a^{(l-1)} $ represents the activations from the previous layer, and $ T $ denotes the matrix transpose to ensure correct dimensional alignment. 
However, in the **GNTK-based** formulation, the gradient incorporates graph structural dependencies via the Laplacian matrix $ L $ and is expressed as: $$ \frac{\partial\mathcal{L}}{\partial W^{(l)}} = H^{(l-1)T} L \left( \frac{\partial\mathcal{L}}{\partial Z^{(l)}} \odot \sigma^{\prime} (Z^{(l)}) \right) $$ where: - $ H^{(l-1)} $ is the node representation at layer $ (l-1) $. - $ L $ is the **graph Laplacian**, encoding structural relationships. - $ \sigma^{\prime} (Z^{(l)}) $ represents the element-wise derivative of the activation function. From this formulation, it is evident that **GNTK-based gradients inherently embed structural information through $ L $** while simultaneously encoding feature dependencies via $ H^{(l-1)} $ and the weight parameters. This allows GNTK to effectively distill graph-structured datasets while **preserving essential structural and feature relationships**.
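The layer-wise gradient identity above can be sanity-checked numerically on a toy one-layer graph model $Z = LHW$ with an elementwise nonlinearity. This is an illustrative sketch with random data, a symmetric stand-in for $L$, and a simple $\frac{1}{2}\lVert\sigma(Z)\rVert^2$ objective; it is not the authors' full GNTK pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_out = 6, 4, 3

# symmetric "Laplacian-like" propagation matrix (illustrative, not normalized)
A = rng.random((n, n))
L = (A + A.T) / 2
H = rng.normal(size=(n, d_in))      # node representations from layer l-1
W = rng.normal(size=(d_in, d_out))  # layer-l weights

sigma = np.tanh
dsigma = lambda z: 1 - np.tanh(z) ** 2

def loss(W):
    Z = L @ H @ W          # structural mixing via L, features via H and W
    return 0.5 * np.sum(sigma(Z) ** 2)

# closed-form gradient: H^T L (dLoss/dA ⊙ sigma'(Z)), as in the rebuttal
Z = L @ H @ W
dL_dA = sigma(Z)           # derivative of 0.5*sum(sigma(Z)^2) w.r.t. sigma(Z)
grad = H.T @ L @ (dL_dA * dsigma(Z))

# central finite-difference check, entry by entry
eps = 1e-6
num = np.zeros_like(W)
for i in range(d_in):
    for j in range(d_out):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        num[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)

print(np.max(np.abs(grad - num)))  # max discrepancy; should be tiny
```

The closed form matches the finite-difference gradient because $L$ is symmetric here, so $H^{T}L^{T} = H^{T}L$; with a non-symmetric propagation matrix the transpose would need to be kept explicit.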
Learning Event Completeness for Weakly Supervised Video Anomaly Detection
Accept (poster)
Summary: The main challenge in video anomaly detection is the lack of dense frame-level annotations, leading to incomplete localization in existing WS-VAD methods. To tackle this, the authors introduce Learning Event Completeness for WS-VAD, featuring a dual structure that captures both category-aware and category-agnostic semantics. It uses an anomaly-aware Gaussian mixture to define precise event boundaries and includes a memory bank-based mechanism to enhance text descriptions of anomaly-event categories. The proposed method shows promising results on the XD-Violence and UCF-Crime datasets. ## Update after rebuttal With additional experiments, some of my concerns have been addressed. However, some of my questions remain, specifically regarding the loss function ablations. Therefore, I will keep my score as it is. Claims And Evidence: The proposed approach is somewhat incremental, and the results are promising. However, the problem formulation and motivation behind the approach are somewhat weak and need clearer articulation. The authors attempt to reformulate the problem in the second paragraph on page 1, but the proposed approach—which includes a dual structure capturing category-aware and category-agnostic semantics, an anomaly-aware Gaussian mixture to define precise event boundaries, and a memory bank-based mechanism to enhance text descriptions—requires further clarification relative to some existing approaches. Additionally, comparing this approach with existing work is necessary to clearly address the challenges it aims to solve. This would provide a stronger foundation for the proposed approach and enhance its completeness. The overall structure of the proposed approach presents some issues, as shown in Figure 2. Q represents textual features, while K and V represent video features, indicating that it involves cross-attention rather than self-attention in the Cross-Modal Aware block, which requires correction.
On page 4, in the second column, the authors state "modeling cross-modal interactions through a cross-attention operation." Furthermore, there is no clear formulation regarding the local transformer and global GCN. It remains unclear what the main contribution is beyond merely combining existing approaches to improve performance; the approach needs something novel to distinguish itself. The experimental results support the contributions of the proposed approach, which outperforms existing methods by a large margin across different evaluation settings. The visualization results further demonstrate the advantages of the proposed approach. Methods And Evaluation Criteria: The authors propose an approach to address the challenges in video anomaly detection by introducing Learning Event Completeness techniques. The proposed approach is evaluated in both coarse-grained and fine-grained settings, yielding very promising results, as shown in the quantitative and qualitative analyses. Furthermore, when compared to approaches from 2023 and 2024, the proposed method outperforms them by a large margin. Some of the experimental settings require clarification. For example, in Table 5 ("Explorations of the Model Structure"), the authors evaluate the vision-only anomaly-aware branch and the cross-modal anomaly-aware branch. What are the differences between the last rows and the second-to-last row in terms of experimental settings? If there are different settings, how did the authors evaluate them? Furthermore, the main contribution of the paper is the dual structure that captures both category-aware and category-agnostic semantics. It is essential to clearly present how this approach is evaluated and to articulate the contributions of the proposed method more explicitly. Additionally, in Table 6 ("Ablation Studies of Loss Function"), only two loss functions are presented. However, as defined in Equation 10, there are four loss functions. 
What are the contributions of the other loss functions, and why were they not evaluated? A clear explanation regarding this would be beneficial. Theoretical Claims: The proposed approach makes sense but appears somewhat incremental, primarily consisting of a combination of existing methods. However, the results in both coarse-grained and fine-grained settings are interesting. The approach lacks some theoretical explanations, particularly regarding the problem formulation and motivation for the dual structure that captures both category-aware and category-agnostic semantics. In Section 3, the vision branch includes a local transformer and global GCN, but this needs clearer presentation. Additionally, it is important to explain how the main contributions enhance the learning of discriminative features. While some formulations are provided, they are not entirely clear. If these aspects were supported by a more robust theoretical explanation, it would significantly enhance the paper and clarify the concepts for the reader. Experimental Designs Or Analyses: The experimental design is somewhat satisfactory, as the authors evaluate the approach in different settings, such as coarse-grained and fine-grained. However, there are some issues to address. For instance, in Table 1, only one unsupervised approach is presented. It is unclear whether the proposed approach is evaluated in an unsupervised manner. It appears that the evaluation is conducted with weakly supervised setting under different backbone networks, making the unsupervised row less relevant in this context. In the fine-grained setting, it is essential to clarify the specific evaluation setting used for the proposed approach—whether it is weakly supervised, unsupervised, or something else. This needs to be explicitly stated. Additionally, a major issue in the experimental settings is the evaluation of the proposed modules and the loss function formulation in Tables 5 and 6. 
These aspects need to be clearly presented for better understanding. Supplementary Material: There is no Supplementary Material Relation To Broader Scientific Literature: The authors attempt to address the main challenges of video anomaly detection with a dual structure that captures both category-aware and category-agnostic semantics. The proposed idea is intriguing, and the experimental results are promising, particularly in two main settings: coarse-grained and fine-grained. However, a significant aspect of this paper is how the problem is formulated and the motivation behind it, where there appear to be gaps. As mentioned earlier, the novelty of the approach seems somewhat incremental. While the GMP-based Local Consistency Learning branch is a new contribution, its presentation is unclear, and there is a lack of strong emphasis on its significance. Essential References Not Discussed: The authors have included most of the related works in the experimental comparisons published in 2023 and 2024. The existing approaches are compared across different experimental settings. However, the authors need to discuss some of the related work in the introduction section to strengthen the argument and motivation for the proposed approach. As I mentioned, the formulation of the problem and motivation is weak, which may explain why the authors did not include a discussion of the 2023 and 2024 papers. Other Strengths And Weaknesses: The results of the proposed approach are significant; however, it appears to be a combination of existing methods. There is a lack of clear presentation in some sections, particularly in the proposed approach and the experimental settings. For instance, the formulation of the local transformer and global GCN needs clarification. What are the main contributions of these modules? Additionally, it is important to explain how these components enhance the performance of the proposed approach and the overall contributions of the proposed modules. 
These aspects should be emphasized for better understanding. Other Comments Or Suggestions: The authors need to carefully review the figures and table captions to ensure clarity for the reader. For example, in Figure 2, it should specify "cross-attention" rather than "self-attention," based on the features of Q, K, and V. Questions For Authors: Most of my concerns are outlined above, and the authors need to address them. Specifically, in Section 3 regarding the proposed approach and experimental settings, I have the following questions: 1) The formulation of the local transformer and global GCN needs clarification. What are the main contributions of these modules? How do they affect the proposed approach? It is important to explain how these components enhance the performance of the proposed approach and the overall contributions of the proposed modules. 2) In the loss function formulation, only two loss functions are evaluated and presented in the manuscript. What about the other loss functions? 3) What are the differences between the last rows and the second-to-last row in terms of experimental settings? If there are different settings, how did the authors evaluate them? 4) As shown in Figure 2, there are three branches that are components of the proposed approach. However, in the ablation section, only VOB and CMB are evaluated. What about the other branch? What is the contribution of the proposed branch? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Reviewer zcbm __Q1: The formulation of the local transformer and global GCN needs clarification. What are the main contributions of these modules? How do they affect the proposed approach?__ We appreciate the reviewer's insightful suggestions. The CLIP image encoder is primarily used to extract image features, but it exhibits limited proficiency in **modeling temporal dependencies inherent in videos**. To address this limitation, we integrate a local transformer layer alongside GCN modules to augment video features. We conduct ablation studies on GCN and Transformer modules and report the fine-grained AVG performance on XD-Violence: |GCN|Local Transformer|0.1|0.2|0.3|0.4|0.5|AVG| |-|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |$\checkmark$||46.39|38.45|32.16|26.23|20.92|32.83| ||$\checkmark$|45.21|37.58|30.92|24.94|18.79|31.49| |$\checkmark$|$\checkmark$|**48.78**|**40.94**|**34.28**|**28.02**|**22.68**|**34.94**| In detail, we incorporate one local Transformer layer and 4 GCN layers. We present ablation studies that explore the rationale behind our choice and report the fine-grained AVG performance on XD-Violence: |Local Transformer/GCN|1|2|3|4|5| |:-:|:-:|:-:|:-:|:-:|:-:| |**1**|29.71|31.89|33.77|**34.94**|34.46| |**2**|28.24|30.27|32.03|33.15|32.76| |**3**|26.68|27.91|28.88|30.40|30.21| Also, we report fine-grained AVG results on UCF-Crime: |Local Transformer/GCN|1|2|3|4|5| |:-:|:-:|:-:|:-:|:-:|:-:| |**1**|10.46|11.67|12.94|**13.56**|13.27| |**2**|9.37|10.56|11.37|11.98|11.60| |**3**|8.01|9.15|10.03|10.51|10.46| __Q2: In the loss function formulation, only two loss functions are evaluated and presented in the manuscript. What about the other loss functions?__ Thanks for your comments. We conduct ablation studies to evaluate the utilities of $L_{gmm}$ and $L_{reg}$ in Table 6.
For the $L_{agnostic}$ and $L_{aware}$, since our model is designed to predict coarse-grained and fine-grained anomalies simultaneously, $L_{agnostic}$ and $L_{aware}$ are the basic terms used to train the model. Specifically, $L_{agnostic}$ is utilized to supervise coarse-grained anomaly detection, and $L_{aware}$ is utilized to supervise fine-grained anomaly detection. If either of them is removed, the model cannot detect anomaly events of the corresponding granularity (coarse-grained and fine-grained). **Q3: In Table 5, what are the differences between the last rows and the second-to-last row in terms of experimental settings? If there are different settings, how did the authors evaluate them?** Thanks for your comments. The last row of Table 5 indicates that the proposed algorithm leverages both visual information and textual descriptions of anomaly categories for both coarse-grained and fine-grained detections, while the second-to-last row only uses the visual information of the video. There is no difference in the experimental settings of other parameters. __Q4: As shown in Figure 2, there are three branches that are components of the proposed approach. However, in the ablation section, only VOB and CMB are evaluated. What about the other branch?__ Thanks for your suggestions. In the ablation section, we evaluated the VOB and CMB, and the other branch is the proposed Gaussian Mixture Prior-based Local Consistency Learning mechanism, which is only used to guide the model's training. We explore ablation studies about this module by controlling the loss term $\mathcal{L}_{gmm}$ in Table 6. 
For clarity, we add several experiments and put them in a table for comparisons: |GMP|VOB|CMB|0.1|0.2|0.3|0.4|0.5|AVG| |-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| ||$\checkmark$||37.23|30.96|23.77|18.03|14.33|24.86| |||$\checkmark$|38.61|31.54|24.04|18.67|14.81|25.53| |$\checkmark$|$\checkmark$||42.86|33.82|30.58|24.66|18.92|30.17| |$\checkmark$||$\checkmark$|46.03|38.87|32.18|27.02|22.04|33.23| ||$\checkmark$|$\checkmark$|46.69|39.51|32.17|27.16|22.01|33.51| |$\checkmark$|$\checkmark$|$\checkmark$|**48.78**|**40.94**|**34.28**|**28.02**|**22.68**|**34.94**| __Q5: The authors need to discuss some of the related work in the introduction section to strengthen the motivation for the proposed approach.__ We appreciate the reviewer's insightful suggestions. **We will strengthen the discussion about our motivation in the camera-ready version**. Here, we also re-emphasize it. We observe that existing methods predict incomplete and fragmented event segments, and an example is also provided in Figure 1. The core reason for this phenomenon is the lack of frame-wise annotations in the weakly supervised learning framework, resulting in sparse semantics. For this reason, we propose a novel Gaussian mixture prior-based strategy and memory bank-based prototype mechanism to mitigate this issue. __Q6: The authors need to carefully review the figures and table captions to ensure clarity for the reader.__ Thanks for your suggestions. We will correct these typos and inaccurate expressions.
Summary: This paper is dedicated to improving anomaly detection performance by enhancing the completeness of predicted events. A dual-branch structure is introduced to capture both category-aware and category-agnostic semantics between vision and language. A prototype learning mechanism based on a memory bank is proposed to improve the representation of textual features. Besides, the approach utilizes learnable Gaussian masks to achieve local consistency in predictions. Claims And Evidence: The performance of the proposed method in coarse-grained and fine-grained anomaly detection has been effectively validated on two datasets. However, the mechanism of the introduced dual structure and memory bank-based prototype learning for improving the completeness of event predictions has not been clearly stated. In addition, in Figure 5, more examples are needed to demonstrate the improvement in the completeness of event predictions. Methods And Evaluation Criteria: Based on the provided main results and visual experiments, the proposed method appears to be meaningful in improving the performance of WSVAD. Besides, since the dual structure has already been introduced in VadCLIP, it would not be appropriate to present it as a primary contribution. Theoretical Claims: No Experimental Designs Or Analyses: The performance of the proposed method in coarse-grained and fine-grained anomaly detection has been effectively validated on two datasets. In addition, the ablation experiments utilize the setting where only one module is ablated independently, which is not ideal for verifying the relationships between each contribution. It is recommended to provide ablation experiments that incrementally introduce each module. Moreover, the visualizations in Figure 3 could be enhanced by including additional details, such as the source of the visualized features, particularly indicating whether the visualized normal features include those from normal segments in anomalous videos.
Meanwhile, this visualization demonstrates the discriminative visual representations for different anomaly categories. However, in the event completeness visualization presented in Figure 5, which is a binary classification result, the analysis lacks the explanation of how the representations for different anomaly categories contribute to enhancing event completeness. Supplementary Material: No Relation To Broader Scientific Literature: The dual-branch structure has already been proposed in VadCLIP. The proposed method should emphasize the difference between its dual-branch framework and that in VadCLIP, with experiments showing how these differences contribute to performance improvements. Additionally, as the issue of event completeness is extensively studied in the Weakly Supervised Temporal Action Localization task, citing these works can convincingly introduce event completeness into WSVAD. Essential References Not Discussed: Please refer to the comments on 'Relation to Broad Scientific Literature'. Other Strengths And Weaknesses: Strengths: To the best of our knowledge, this paper is the first to introduce event completeness in predictions, an issue also emphasized in other temporal localization tasks, to WSVAD. The issue addressed has indeed limited the performance of WSVAD but has been overlooked. Based on the visualization experiments, the proposed method learns discriminative features and achieves more accurate and complete predictions. Weaknesses: More comprehensive ablation studies should be conducted to further validate the effectiveness of the proposed modules. The mechanism of the dual structure and memory bank-based prototype learning in enhancing the completeness of event predictions is expected to be clearly explained and validated.
Other Comments Or Suggestions: Memory bank-based prototype learning and GMP-based local consistency learning are introduced alternately in Section 3, though the structure of this section could be improved for better clarity and flow. Additionally, the tense usage should be checked, as both present and past tenses are used inconsistently in some paragraphs, such as in the conclusion. Questions For Authors: Q1: The dual structure has already been proposed in VadCLIP, so it would be preferable not to present it as a primary contribution. Q2: The ablation experiments use the setting where only one module is ablated independently, which is not ideal for verifying the relationships between each contribution. It is recommended to provide ablation experiments that incrementally introduce each module. Q3: Multiple dense anomalous events may exist in an anomalous video, such as some testing videos in XD-Violence, and it is important to evaluate whether the proposed method might incorrectly classify these frequent anomalies as a single anomalous event. Q4: The visualizations in Figure 3 should provide more details, such as the source of the visualized features, particularly clarifying whether the visualized normal features include those from normal segments within anomalous videos. Meanwhile, this visualization demonstrates the discriminative visual representations for different anomaly categories. However, in the event completeness visualization presented in Figure 5, which is a binary classification result, the analysis lacks the explanation of how the visual representations for different anomaly categories contribute to enhancing event completeness. Ethical Review Concerns: No Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Reviewer MBS1

__Q1: The dual structure has already been proposed in VadCLIP, so it would be preferable not to present it as a primary contribution.__

Thanks for your suggestions. We will update this in the camera-ready version.

__Q2: The ablation experiments use the setting where only one module is ablated independently, which is not ideal for verifying the relationships between each contribution. It is recommended to provide ablation experiments that incrementally introduce each module.__

We appreciate the reviewer's insightful suggestion. Here, we provide an extra ablation study on XD-Violence to verify the relationships between the contributions. We observe that the proposed modules bring positive gains.

|VOB|CMB|VAP|PMB|0.1|0.2|0.3|0.4|0.5|AVG|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|$\checkmark$||||33.74|28.57|22.45|17.81|13.40|23.19|
||$\checkmark$|||39.23|32.96|25.77|20.03|16.33|26.86|
|$\checkmark$|$\checkmark$|||42.17|34.66|27.98|22.47|17.24|28.90|
|$\checkmark$|$\checkmark$|$\checkmark$||46.24|38.75|31.97|26.63|20.05|32.73|
|$\checkmark$|$\checkmark$||$\checkmark$|47.67|39.98|32.85|27.59|21.00|33.82|
|$\checkmark$|$\checkmark$|$\checkmark$|$\checkmark$|**48.78**|**40.94**|**34.28**|**28.02**|**22.68**|**34.94**|

__Q3: Multiple dense anomalous events may exist in an anomalous video, such as some testing videos in XD-Violence, and it is important to evaluate whether the proposed method might incorrectly classify these frequent anomalies as a single anomalous event.__

We appreciate the reviewer's insightful suggestion. For fine-grained anomaly detection, the model is asked to predict anomalous events for each category, as reported in Table 3 and Table 4. Here, we divide the original test set (800 videos) into two parts: one part contains videos with only one type of abnormal event, and the other contains videos with multiple types of abnormal events.
The former contains 753 videos, while the latter contains 47 videos and is more challenging. We report fine-grained prediction results on these 47 videos that contain multiple anomaly categories:

|Methods|0.1|0.2|0.3|0.4|0.5|AVG|
|-|:-:|:-:|:-:|:-:|:-:|:-:|
|VadCLIP|35.37|26.36|23.85|6.03|4.46|19.21|
|ReFLIP|39.38|26.96|24.73|14.47|9.89|23.09|
|Ours|**41.68**|**28.32**|**27.47**|**20.20**|**15.37**|**26.61**|

We observe that our proposed method achieves significant improvement, especially when the evaluation criteria are more stringent, namely larger tIoU values.

__Q4: The visualizations in Figure 3 should provide more details, such as the source of the visualized features, particularly clarifying whether the visualized normal features include those from normal segments within anomalous videos. Meanwhile, this visualization demonstrates the discriminative visual representations for different anomaly categories. However, in the event completeness visualization presented in Figure 5, which is a binary classification result, the analysis lacks the explanation of how the visual representations for different anomaly categories contribute to enhancing event completeness.__

We sincerely appreciate the reviewer's insightful comments and constructive suggestions regarding the visualizations in our paper. Below, we address the specific concerns raised about Figure 3 and Figure 5.

For Figure 3, we acknowledge the reviewer's request for clarification on the source of the visualized features. The visualized features are the enhanced features $X_{l}$ learned by our model, derived from features extracted by the CLIP image encoder. Besides, the visualized normal features include those from normal segments within anomalous videos. As shown in Figure 3, these features cluster well, further proving that our learned features are highly discriminative. **We will revise the caption of Figure 3 following these suggestions in the camera-ready version.**
Although Figure 5 highlights binary classification results, the model utilizes the distinctive representations learned for different anomaly categories (depicted in Figure 3) to inform its decisions. This capacity to differentiate among anomaly types indirectly helps delineate the boundaries and attributes of anomalous events. Furthermore, **in the anonymous link https://anonymous.4open.science/r/Visualization_ICML_Rebuttal-B61B/README.md, we also provide some visualization examples to analyze the influence of different anomaly categories on enhancing event completeness.**

__Q5: Some other suggestions about expressions and the section structure.__

Thanks for your valuable suggestion. We will improve the structure of Section 3 and some expressions in this paper in the revised version.
Summary: This paper proposes a new WSVAD framework, LEC-VAD, that utilizes visual and language modalities for category-agnostic and category-aware anomaly detection. The authors employ a Gaussian mixture method to guide the model in predicting more complete anomaly boundaries. Additionally, a memory bank-based prototype learning mechanism is introduced to enhance the text feature representation related to anomalies. LEC-VAD achieves state-of-the-art performance in both coarse-grained and fine-grained results on the XD-Violence and UCF-Crime datasets.

Claims And Evidence: The claims are mostly clear and convincing.

Methods And Evaluation Criteria: The approach is conceptually sound, and the evaluation datasets align with community standards.

Theoretical Claims: N/A.

Experimental Designs Or Analyses: The paper includes comprehensive experiments: comparisons across feature backbones, ablation studies, and hyperparameter analysis. However, all ablation studies focus solely on fine-grained detection results, lacking ablation experiments on coarse-grained performance.

Supplementary Material: No supplementary material is provided.

Relation To Broader Scientific Literature: The key contributions include the Gaussian mixture prior for smoother local supervision and the memory-based prototype mechanism for feature enrichment. However, similar techniques (e.g., Gaussian kernels for temporal modeling [1] and smoother guidance [2], memory-based prototypes for representation enhancement [3,4]) have been explored in the action/anomaly detection literature. The authors need to clarify how these adaptations differ in the context of anomaly detection or provide novel insights specific to VAD.

[1] Gaussian temporal awareness networks for action localization. CVPR 2019.
[2] GlanceVAD: Exploring glance supervision for label-efficient video anomaly detection.
[3] HR-Pro: Point-supervised temporal action localization via hierarchical reliability propagation. AAAI 2024.
[4] Anomaly Detection with Prototype-Guided Discriminative Latent Embeddings.

Essential References Not Discussed: The cited baselines are comprehensive and include recent WS-VAD methods.

Other Strengths And Weaknesses:
Strengths:
1. The paper is well-structured and clearly written.
2. The motivation—detecting more precise anomaly boundaries—is reasonable and addresses a critical limitation in WS-VAD.
3. The proposed method achieves state-of-the-art performance across multiple benchmarks and evaluation metrics.
Weaknesses:
1. Problem setting: The fine-grained detection assumes closed-set anomaly categories are available; this is a common practice in the Temporal Action Localization (TAL) field. However, the reliance on category labels in the Video Anomaly Detection (VAD) domain may limit the generalizability of the trained model, since most abnormal categories are unknown/unpredictable in real-world applications (the key difference between VAD and TAL).
2. Motivation: While the title and motivation emphasize "Event Completeness," the Gaussian mixture mechanism primarily enforces local consistency rather than explicitly addressing boundary completeness. Visualizations of Gaussian-rendered anomaly scores and their impact on boundary precision would strengthen this claim.
3. Method: The proposed Gaussian mixture and memory-based prototype techniques have been used in other video action/anomaly detection methods [1-4]. Their adaptation to VAD lacks domain-specific innovations or insights compared to prior work in action detection.
4. Experiments: As noted earlier, the absence of coarse-grained ablation studies weakens the evaluation.

[1] Gaussian temporal awareness networks for action localization. CVPR 2019.
[2] GlanceVAD: Exploring glance supervision for label-efficient video anomaly detection.
[3] HR-Pro: Point-supervised temporal action localization via hierarchical reliability propagation. AAAI 2024.
[4] Anomaly Detection with Prototype-Guided Discriminative Latent Embeddings.
Overall, the contributions in the current version of the paper are limited. However, if the authors address my concerns, I may consider increasing my rating.

Other Comments Or Suggestions: None.

Questions For Authors: The XD-Violence dataset contains videos with multiple anomaly categories and events, and labels are not available for each anomaly event. Could the authors clarify how mAP is computed in this multi-label scenario?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal:

__Q1: The fine-grained detection assumes closed-set categories. However, the reliance on category labels in VAD may limit the generalizability, since most abnormal categories are unknown/unpredictable.__

Thanks for your suggestion. We agree that an over-reliance on category labels could potentially limit the generalizability. However, our two proposed mechanisms can mitigate the risks: the usage of the CLIP text encoder and the prototype-based memory bank mechanism (PMB). First, CLIP itself can recognize open-set categories. Second, PMB can be understood to some extent as identifying general patterns and characteristics of anomalies, instead of focusing on learning specific categories. To verify the generalization, we report the AUC on **an extra dataset, UBnormal**, where only 7 abnormal categories are visible during training and 12 abnormal categories are used for testing. The results show our model can effectively handle open-set anomaly categories, especially when using PMB.

|Methods|AUC(%)|
|-|:-:|
|RAD|50.30|
|Wu et al. [1]|53.70|
|DMU|59.91|
|RTFML|60.94|
|ReFLIP|61.13|
|**Ours w/o PMB**|60.85|
|**Ours w/ PMB**|**61.65**|

[1] Not only look, but also listen: Learning multimodal violence detection under weak supervision. ECCV

__Q2: The Gaussian mixture mechanism primarily enforces local consistency rather than explicitly addressing completeness. Visualizations of Gaussian-rendered scores would strengthen this claim.__

Thanks for your suggestions.
We define the average tIoU between predictions and GT over all classes' instances as a metric of Event Completeness (EC) and report fine-grained results on XD-Violence:

|Methods|EC(%)|
|-|:-:|
|Ours w/o GMP|12.6|
|Ours w/ GMP|**15.5**|

We offer visual comparisons that illustrate the differences in outcomes when incorporating Gaussian-based scores versus not using them in https://anonymous.4open.science/r/icml_vis-CDD6/README.md. Besides, we also provide visualizations for various categories in https://anonymous.4open.science/r/Visualization_ICML_Rebuttal-B61B/README.md. We find that our model achieves desirable results.

__Q3: The proposed Gaussian mixture and memory-based prototype techniques.__

We appreciate the reviewer's insightful observations. First, both [1] and [2] are used in VAD instead of WS-VAD. Besides, they only model a **unimodal Gaussian model** with complex kernel function designs, whereas our method models the prediction scores of multiple anomaly categories as a **Gaussian Mixture Model** to ensure local consistency of predictions. With a GMM, our model can learn the correlations between multiple anomaly categories, which a unimodal Gaussian model cannot.

For the memory-based prototype technique, [3] uses point annotations of action clips to create a memory bank storing reliable visual prototypes with threshold strategies. However, our weakly supervised scenario cannot access point annotations, so our memory bank-based mechanism is designed to build flexible text representations of anomaly categories. This can also be understood as learning general patterns of abnormal behaviors, as we explained in Q1. Besides, [4] learned prototype-guided discriminative embeddings to separate normal and abnormal data. It adopts a different paradigm and purpose, which is essentially different from our method.

__Q4: The absence of coarse-grained ablation studies weakens the evaluation.__

Thanks for your suggestions.
Due to the page limit, we showed the more challenging fine-grained results in the original paper. Here, we provide coarse-grained results using I3D features on XD-Violence.

[1] Ablation studies of the model structure:

|VOB|COM|AP(%)|
|-|:-:|:-:|
|✅||84.85|
||✅|87.10|
|✅|✅|**88.47**|

[2] Ablation studies of the loss function:

|$L_{reg}$|$L_{gmm}$|AP(%)|
|-|:-:|:-:|
|||85.43|
|✅||86.95|
||✅|87.79|
|✅|✅|**88.47**|

[3] Ablation studies of text enhancement:

|VAP|PMB|AP(%)|
|-|:-:|:-:|
|||82.14|
|✅||86.69|
||✅|86.83|
|✅|✅|**88.47**|

We observe that the coarse-grained results show conclusions consistent with the fine-grained versions, which further reveals the utility of the proposed components.

__Q5: Multi-label evaluation for the XD-Violence dataset.__

We use the same evaluation paradigm and code as prior works like VadCLIP, ReCLIP, etc. Among 800 test videos, 47 samples have multiple labels. As labels for individual anomaly events aren't provided, the current evaluation paradigm computes mAP without considering classes, which may introduce bias. However, since the proportion of such multi-label cases is low and all current methods follow this paradigm, the comparison remains fair. We also evaluated results only on these 47 videos and found that they were far lower than those on all 800 videos, indicating that the multi-label videos do not dominate the overall results.

|Methods|0.1|0.2|0.3|0.4|0.5|AVG|
|-|:-:|:-:|:-:|:-:|:-:|:-:|
|VadCLIP|35.37|26.36|23.85|6.03|4.46|19.21|
|ReFLIP|39.38|26.96|24.73|14.47|9.89|23.09|
|Ours|**41.68**|**28.32**|**27.47**|**20.20**|**15.37**|**26.61**|
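For reference, the tIoU underlying both the thresholds in the tables above and the EC metric discussed earlier can be sketched as follows (an illustrative sketch with hypothetical `(start, end)` intervals, not the evaluation code used in the paper):

```python
def tiou(pred, gt):
    """Temporal IoU between two (start, end) intervals."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def event_completeness(preds, gts):
    """Average best tIoU of each ground-truth event against the
    predictions -- the EC metric described in the reply to Q2."""
    return sum(max(tiou(p, g) for p in preds) for g in gts) / len(gts)
```

A prediction covering only half of an event scores a low tIoU, so gains at larger tIoU thresholds indicate more complete event predictions.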
Summary: This paper explores weakly supervised video anomaly detection, introducing a dual-structure framework that captures both category-aware and category-agnostic semantics through vision-language integration. To enhance anomaly scoring, the authors propose a learnable Gaussian mixture mask that produces smoother scoring patterns. The effectiveness of the proposed approach is validated on standard benchmark datasets, including UCF-Crime and XD-Violence.

Claims And Evidence: The experimental evaluation and ablation study are thorough, effectively verifying the contribution of each component and the influence of hyperparameters in the objective function. However, the paper could be further improved by explicitly demonstrating the impact of the local transformer layer and the GCN module in enhancing video features, which is currently not addressed. Additionally, the rationale behind the choice of K in top-K scores and its impact remain unclear to the reviewer.

Methods And Evaluation Criteria: The inference time of the proposed method is not provided. Additionally, the definitions of the vision-only anomaly-aware branch (VOB) and the cross-modal anomaly-aware branch (CMB) are unclear. Are the authors referring to VOB as the component that outputs F_{video}, or does it serve a different purpose? Please clarify.

Theoretical Claims: This paper primarily focuses on the empirical design of improved architectural structures and learning objectives, without making any theoretical claims. However, Equation 8 could be further justified, particularly regarding the choice to regularize the predicted anomaly scores rather than the ground truth scores, as the latter may appear to be a more suitable option.

Experimental Designs Or Analyses: The C3D result for LEC-VAD is missing in Table 1. Please clarify the reason.

Supplementary Material: N/A

Relation To Broader Scientific Literature: This study could be valuable to the video anomaly detection community.
Additionally, some of the paper's empirical design choices for post-processing may also benefit the action detection community.

Essential References Not Discussed: Please consider including recent publications in this field, particularly those focused on LLM- or VLM-based approaches, along with relevant discussions.

Other Strengths And Weaknesses: Please clarify the concept of Event Completeness in the proposed approach, as it remains unclear. At the very least, the authors should justify how their method addresses this aspect.

Other Comments Or Suggestions: Please provide a detailed comparison between the proposed method and ReFLIP-VAD. Additionally, consider enlarging the text in Figure 3 for improved readability.

Questions For Authors: Please refer to the above comments.

Ethical Review Concerns: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: Reviewer QCZn

__Q1: The impact of the local transformer layer and the GCN module in enhancing video features.__

Thanks for your advice. The CLIP image encoder is primarily used to extract image features. However, it exhibits limited proficiency in modeling the temporal dependencies inherent in videos. To address this limitation, we integrate a local transformer layer alongside GCN modules to augment video features. In detail, we incorporate one local Transformer layer and 4 GCN layers. In the following table, we present ablation studies that explore the rationale behind this choice and report the fine-grained AVG performance on XD-Violence.

|Local Transformer/GCN|1|2|3|4|5|
|:-:|:-:|:-:|:-:|:-:|:-:|
|**1**|29.71|31.89|33.77|**34.94**|34.46|
|**2**|28.24|30.27|32.03|33.15|32.76|
|**3**|26.68|27.91|28.88|30.40|30.21|

Also, we report fine-grained AVG results on UCF-Crime:

|Local Transformer/GCN|1|2|3|4|5|
|:-:|:-:|:-:|:-:|:-:|:-:|
|**1**|10.46|11.67|12.94|**13.56**|13.27|
|**2**|9.37|10.56|11.37|11.98|11.60|
|**3**|8.01|9.15|10.03|10.51|10.46|

__Q2: The rationale behind the choice of K in top-K scores.__

In this paper, we set $K=\max(\lfloor T/16\rfloor, 1)$, as explained in Implementation Details. This decision stems from our utilization of multi-instance learning (MIL) to get video-level outcomes. Previous works using MIL adopt the same setup [1,2], which **adapts to the length of videos**. The denominator is 16 because these two datasets sample 16 consecutive frames into one segment during feature preprocessing.

[1] Two-Stream Networks for Weakly-Supervised Temporal Action Localization with Semantic-Aware Mechanisms, CVPR 2023
[2] Vadclip: Adapting vision-language models for weakly supervised video anomaly detection, AAAI 2024

__Q3: The inference time of the proposed method.__

Thanks for your advice.
We provide the average inference time (in seconds) per video for fine-grained predictions on XD-Violence and UCF-Crime, and compare results with representative methods on a single Nvidia V100 GPU:

|Methods|Inference Time on XD-Violence|Inference Time on UCF-Crime|
|-|:-:|:-:|
|VadCLIP|0.21|0.43|
|ReFLIP|0.38|0.63|
|Ours|0.23|0.46|

We observe that our method outperforms the SOTA ReFLIP in terms of speed. When compared to VadCLIP, it incurs only a marginal increase in processing time, yet achieves substantial improvements in performance.

__Q4: The definitions of VOB and CMB.__

Thanks for your comments. VOB denotes both coarse-grained and fine-grained predictions derived exclusively from visual information, without incorporating category text descriptions. In contrast, CMB combines visual inputs and text descriptions of anomaly classes.

__Q5: Eq. 8 is chosen to regularize the predicted scores rather than ground truth.__

Thanks for your comments. Eq. 8 acts as a regularizer on the prediction scores and should not be used for GT. We aim to constrain the consistency of coarse-grained and fine-grained predictions by using Equation 8.

__Q6: The C3D result for LEC-VAD is missing in Table 1.__

Thanks for your advice. We apologize for the omission of this result in the original paper. To rectify this, we will include the C3D results for our method in Table 1. This result is also presented here:

|Modality|Methods|Features|AP(%)|
|:-:|:-:|:-:|:-:|
|RGB|LEC-VAD|C3D|79.58|

__Q7: Discussions about publications including LLM- or VLM-based approaches.__

Thanks for your advice. Some LLM- or VLM-based methods are used to generate abundant and diverse anomaly class descriptions. Notably, these works introduce **additional data and knowledge** provided by LLMs, and results based on them may be unfair comparisons. This introduction of external expert knowledge is beyond the scope of this article. However, **we will add relevant works and discussions in the related work**.
__Q8: The concept of Event Completeness and how our method addresses this aspect.__

Event completeness can be conceptualized as the proportion of predicted intervals relative to the total actual event intervals. It is reflected when using larger tIoU values. As shown in Table 3 and Table 4, compared with ReFLIP, our method achieves larger **relative improvements** for larger tIoU values.

__Q9: A detailed comparison between the proposed method and ReFLIP-VAD.__

The core idea of ReFLIP-VAD includes: 1) modeling the global and local temporal dependencies; and 2) designing learnable prompt templates to provide interpretable and informative class clues. In contrast, our method focuses on **event completeness**. While our focus is also on generating informative text descriptions, we introduce an innovative memory bank-based mechanism, eschewing the use of prompt templates generated by an additional pre-trained encoder.

__Q10: Consider enlarging the text in Figure 3 for improved readability.__

Thanks for your advice. We will improve this in the camera-ready version.
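As a side note on the K = max(⌊T/16⌋, 1) choice discussed in Q2, the adaptive top-K multi-instance pooling can be sketched as follows (a minimal illustration assuming per-segment anomaly scores; not the authors' released code):

```python
import numpy as np

def topk_mil_score(segment_scores):
    """Video-level anomaly score as the mean of the top-K segment
    scores, with K = max(T // 16, 1) so that K adapts to the video
    length T (each segment covers 16 consecutive frames)."""
    scores = np.asarray(segment_scores, dtype=float)
    k = max(len(scores) // 16, 1)
    return float(np.sort(scores)[-k:].mean())
```

For a 32-segment video, K = 2, so the video-level score averages the two highest segment scores; a 10-segment video falls back to K = 1 (the maximum score).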
Tackling Dimensional Collapse toward Comprehensive Universal Domain Adaptation
Accept (poster)
Summary: Universal Domain Adaptation (UniDA) tackles unsupervised domain adaptation where the target domain may contain classes that differ arbitrarily from the source, except for a common subset. A common approach, partial domain matching (PDM), aligns only the shared classes but often fails when many source classes are missing in the target—performing even worse than a simple baseline trained solely on source data. This work reveals that such failure is due to dimensional collapse in the target representations. To remedy this, the authors propose leveraging both alignment and uniformity techniques from modern self-supervised learning (SSL) on the unlabeled target data, preserving the intrinsic structure of the learned features. Experimental results demonstrate that SSL significantly improves PDM, setting new state-of-the-art performance across various UniDA scenarios with different proportions of shared classes, marking an important advance toward comprehensive UniDA.

Claims And Evidence: The claims and evidence are in general clear and convincing, but why SSL works in addressing dimensional collapse may need more explanation.

Methods And Evaluation Criteria:
1. There are concerns regarding the novelty of using SSL to address the dimensional collapse issue. The authors should further clarify this approach and compare it with previous SSL methods used in domain adaptation.
2. It would be even better if there were a more detailed explanation—such as a theoretical analysis—of why SSL mitigates dimensional collapse. Currently, the paper demonstrates the effectiveness of SSL through experimental results, but the underlying reasons for its success lack a more compelling, supportive explanation.

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses:
1. The experiments need to be further extended.
UniDA is essentially a generalization of basic DA paradigms such as PDA, OSDA, and SDA, and extreme UniDA is an even broader generalization with greater challenges. If the proposed method performs well in the more challenging UniDA settings, it is also necessary to evaluate it on simpler settings (like those inherent in PDA/OSDA) to demonstrate the method's robustness.
2. The ablation study is somewhat simplistic, as it was only conducted on a subtask of the small Office31 dataset, making it difficult to draw comprehensive conclusions. It is necessary to conduct experiments on a wider range of tasks to better demonstrate the method's effectiveness.

Supplementary Material: I have reviewed the supplementary material, which makes sense.

Relation To Broader Scientific Literature: It can be applied to address the dimensional collapse problem in many fields.

Essential References Not Discussed: UniDA is an extension and generalization of previous methods such as PDA, OSDA, and SDA. In the Additional Related Works section, we recommend reviewing these settings as well to ensure the overall completeness of the paper.

Other Strengths And Weaknesses:
+ This paper explores the extreme UniDA scenario, which is more challenging and rigorous compared to previous UDA approaches. This research problem is more reflective of real-world situations and holds significant practical importance.
+ The observed phenomenon of dimensional collapse is very intriguing and has the potential to inspire further research in the field.
- The primary contribution of this paper is addressing the issue of dimensional collapse using SSL methods. Since SSL has been widely applied in domain adaptation, we suggest that the authors dedicate a section to thoroughly discuss this issue, in order to better emphasize the novelty of their approach.

Other Comments Or Suggestions: Please see the above strengths and weaknesses.

Questions For Authors: No additional questions.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:
Rebuttal: We thank the reviewer for the constructive feedback and are glad to hear that our research problem resonates with real-world scenarios and that the observed dimensional collapse is found to be intriguing. The reviewer's main concerns lie in the novelty of using SSL and the lack of experiments on partial domain adaptation (PDA) and open set domain adaptation (OSDA). We address these concerns below.

**Novelty of using SSL to address the dimensional collapse issue.**

> There are concerns regarding the novelty of using SSL to address the dimensional collapse issue. The authors should further clarify this approach and compare it with previous SSL methods used in domain adaptation.

We refer the reviewer to the "Missing References" section of Reviewer tc8f, where we discuss the distinction between our work and the literature on self-supervised learning for domain adaptation.

**Why does SSL mitigate DC?**

> The claims and evidence in general are clear and convincing. But why SSL works in addressing dimension collapse may need more explanation.

Existing insights from the SSL literature offer useful intuitions. Contrastive learning frameworks (e.g., AlignUniform, SimCLR) demonstrate that negative samples and the uniformity loss serve as repulsive forces to the alignment loss, helping to prevent dimensional collapse. On the other hand, non-contrastive methods like SimSiam and Barlow Twins show that using asymmetric architectures and encouraging feature decorrelation can promote diverse representations across dimensions, thereby mitigating collapse. We will incorporate these insights into the latest version of our paper.

**Experiments on PDA and OSDA**

Thank you for the suggestions! We have now included the results for PDA and OSDA below.
* PDA (21/10/0, Office31)

| | A2D | A2W | D2A | D2W | W2A | W2D | Avg |
| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| Source-Only* | 90.4 | 79.3 | 79.3 | 95.9 | 84.3 | 98.1 | 87.8 |
| CMU* | 84.1 | 84.2 | 69.2 | 97.2 | 66.8 | 98.8 | 83.4 |
| DANCE* | 77.1 | 71.2 | 83.7 | 94.6 | 92.6 | 96.8 | 86.0 |
| UAN | 82.3 | 83.1 | 69.8 | 96.6 | 64.2 | 98.4 | 82.4 |
| UAN + SSL | 91.1 | 84.6 | 81.2 | 97.8 | 83.6 | 98.6 | 89.5 (+7.1) |

*: from "LEAD: Learning Decomposition for Source-free Universal Domain Adaptation" CVPR'24

These results indicate that UAN+SSL significantly outperforms previous methods in PDA settings. Interestingly enough, this scenario corresponds to $\pi_s=0.68$, which is close to the extreme UniDA setting, and all prior domain-matching methods underperform the SO approach, reinforcing our observation.

* OSDA (0/10/11, Office31)

| | A2D | A2W | D2A | D2W | W2A | W2D | Avg |
| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| CMU* | 70.5 | 71.6 | 81.2 | 80.2 | 70.8 | 70.8 | 74.2 |
| DANCE* | 74.7 | 82.0 | 82.1 | 68.0 | 82.5 | 52.2 | 73.6 |
| UAN | 67.8 | 68.4 | 78.8 | 75.4 | 66.3 | 68.2 | 70.8 |
| UAN + SSL | 68.1 | 69.2 | 79.3 | 76.1 | 67.4 | 69.3 | 71.6 (+0.8) |

*: from "Subsidiary Prototype Alignment for Universal Domain Adaptation" NeurIPS'22

The results suggest that incorporating SSL does not significantly affect performance. In the setting where $\pi_s=0$, the critical challenge lies in identifying target-private classes rather than domain matching, which may explain why SSL does not substantially influence the performance.

**Ablation study is only conducted on a subtask of the small Office31 dataset**

We have included a more comprehensive ablation study on the entire Office-31 dataset, which shows that alignment alone provides limited benefit, uniformity leads to a slight performance improvement, and combining both yields the best results.
| | A2D | A2W | D2A | D2W | W2A | W2D | Avg |
| -------- | -------- | -------- | -------- | -------- | -------- | -------- | -------- |
| $\mathcal{L}_s$ | 70.1 | 64.5 | 62.5 | 68.4 | 59.7 | 63.1 | 64.7 |
| $\mathcal{L}_s + \mathcal{L}\_{\text{align}}$ | 71.5 | 64.6 | 60.2 | 67.8 | 59.1 | 64.2 | 64.6 (-0.1) |
| $\mathcal{L}_s + \mathcal{L}\_{\text{uniform}}$ | 72.2 | 65.1 | 63.3 | 68.8 | 60.8 | 64.7 | 65.8 (+1.1) |
| $\mathcal{L}_s + \mathcal{L}\_{\text{align}} + \mathcal{L}\_{\text{uniform}}$ | 72.0 | 67.2 | 64.7 | 70.1 | 62.1 | 64.9 | 66.8 (+2.1) |
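For concreteness, the alignment and uniformity objectives ablated above follow the standard hypersphere formulation from the SSL literature; a minimal numpy sketch (illustrative only, assuming the usual hyperparameters alpha = 2 and t = 2, not the paper's implementation):

```python
import numpy as np

def l2_normalize(x):
    """Project features onto the unit hypersphere."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def align_loss(z1, z2, alpha=2):
    """Pulls positive pairs together: mean distance^alpha between two
    augmented views, both already L2-normalized."""
    return float((np.linalg.norm(z1 - z2, axis=1) ** alpha).mean())

def uniform_loss(z, t=2):
    """Log of the mean pairwise Gaussian potential; minimized when the
    features spread uniformly on the hypersphere, acting as the
    repulsive force that counteracts dimensional collapse."""
    sq = ((z[:, None, :] - z[None, :, :]) ** 2).sum(-1)
    iu = np.triu_indices(len(z), k=1)
    return float(np.log(np.exp(-t * sq[iu]).mean()))
```

Minimizing the alignment term alone admits a collapsed solution where all features map to one point; adding the uniformity term rules that solution out.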
Summary: This paper addresses the universal domain adaptation problem, which is useful in real-world applications. The authors attribute the failure of partial domain matching to dimensional collapse and propose to jointly leverage alignment and uniformity techniques to avoid it. Experiments on four datasets and benchmarks highlight its superior performance.

Claims And Evidence: Yes

Methods And Evaluation Criteria: Traditional evaluation criteria [1,2] in universal domain adaptation always include evaluations under partial-set domain adaptation (PDA), open-set domain adaptation (OSDA), and open-partial-set domain adaptation (OPDA) settings. However, the authors only evaluate under the last setting, without PDA and OSDA.
[1] Universal domain adaptation through self supervision, NeurIPS 2020
[2] LEAD: Learning Decomposition for Source-free Universal Domain Adaptation, CVPR 2024

Theoretical Claims: No theoretical claims.

Experimental Designs Or Analyses: The experimental designs miss the important PDA and OSDA setups.

Supplementary Material: Yes, all parts.

Relation To Broader Scientific Literature: The proposed framework could be potentially helpful to other literature.

Essential References Not Discussed: No

Other Strengths And Weaknesses:
Strengths
1. The paper addresses the universal domain adaptation problem, which is a challenging and practical scenario.
2. The paper is well written and easy to follow.
3. The paper provides extensive experiments, showing the effectiveness and versatility of the proposed method.
Major Weaknesses
1. The authors only use CNN backbones. More ablations on a ViT backbone should be added, as it demonstrates strong generalization and adaptation performance compared with CNNs.
2. Lack of theoretical insights in support of the proposed method.
3. Comparison with more recent methods [1,2] should be included.
4. The novelty is limited. Self-supervised learning and the uniformity loss have been widely used in universal domain adaptation.
[1] LEAD: Learning Decomposition for Source-free Universal Domain Adaptation, CVPR 2024 [2] Universal domain adaptation via compressive attention matching, ICCV 2023 Other Comments Or Suggestions: How about replacing SSL with other pretext tasks, such as jigsaw puzzles and rotation? Questions For Authors: What are the performances under the PDA and OSDA setups? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the constructive feedback. We address the raised concerns below. **Novelty and contributions** While SSL has been explored in various DA contexts, we respectfully argue that our contribution lies beyond using SSL for UniDA. Prior SSL for UniDA works typically rely on pretext tasks (e.g., rotation, jigsaw), which are ineffective in mitigating DC and fail under extreme UniDA settings, as shown in our comparison table below (**Comparison with other SSL pretext tasks**). In contrast, we use both contrastive learning and non-contrastive learning methods, which are explicitly designed to preserve representation diversity and combat dimensional collapse (DC) [1]. To the best of our knowledge, we are the first to apply these techniques in UniDA specifically to address DC, making our use of SSL novel in both motivation and application. We also highlight that **identifying the previously overlooked challenge of extreme UniDA is itself a key contribution**, as it draws attention to the limitations of current benchmarks and provides a foundation for systematically addressing the problem through improving representation quality. **Ablations on ViT backbone** We initially excluded this analysis because it is not commonly included in UniDA literature, even in recent works (e.g., LEAD, MLNet), except for [2], which specifically investigates ViT architectures. The following tables present the performance of ViT on both extreme UniDA and general UniDA settings for DomainNet. The results follow a similar trend to those observed with CNNs: improvements are more pronounced in the Extreme UniDA setting compared to the standard UniDA case. 
* Extreme UniDA

| | P2S | P2R | S2P | S2R | R2P | R2S | Avg |
| - | - | - | - | - | - | - | - |
| UAN | 44.69 | 59.18 | 40.96 | 57.44 | 39.70 | 36.07 | 46.34 |
| UAN + SSL | 51.57 | 61.12 | 48.24 | 58.22 | 42.95 | 37.49 | 49.93 (+3.59) |

* General UniDA

| | P2S | P2R | S2P | S2R | R2P | R2S | Avg |
| - | - | - | - | - | - | - | - |
| UAN | 44.38 | 62.58 | 34.11 | 53.31 | 51.34 | 41.62 | 47.89 |
| UAN + SSL | 46.25 | 62.81 | 38.74 | 54.76 | 53.26 | 42.11 | 49.65 (+1.76) |

**Lack of theoretical insights in support of the proposed method**

We agree that understanding why SSL mitigates DC in extreme UniDA is a valuable theoretical pursuit. While seminal SSL methods (e.g., SimCLR, SimSiam, Barlow Twins) were initially driven by intuition and empirical success, later works sought theoretical explanations [3, 4]. We believe that our philosophical insights and empirical results lay a strong foundation for future theoretical study.

**Comparison with more recent methods**

Thank you for highlighting these baselines. We were aware of them but initially excluded them due to differing settings. LEAD uses a source-free setup without target data during training; we now include its results on DomainNet, where it underperforms compared to several baselines under the extreme UniDA setting. [2] uses a ViT backbone, making it incompatible with our CNN-based setup. It was also excluded in the MLNet (AAAI 2024) paper, and its code is unavailable, preventing reproduction within our limited timeframe.

| | P2R | R2P | P2S | S2P | R2S | S2R | Avg |
| - | - | - | - | - | - | - | - |
| CMU | 30.1 | 42.4 | 34.1 | 24.3 | 32.2 | 34.1 | 32.8 |
| UniOT | 38.1 | 29.8 | 30.8 | 29.3 | 29.1 | 38.3 | 32.6 |
| LEAD | 17.3 | 16.5 | 15.4 | 14.8 | 15.8 | 15.3 | 15.9 |

**Comparison with other SSL pretext tasks**

We compare three different approaches to SSL in DA: 1. pretext tasks: Minimize domain gap via auxiliary tasks like jigsaw puzzles or rotation prediction (Bucci et al., Xu et al.). 2.
prototype alignment: Align target samples to source/target prototypes using entropy minimization (DANCE). 3. Contrastive & non-contrastive: The focus of our paper—these methods promote representation diversity to avoid collapse, using contrastive losses (AlignUniform) or asymmetric architectures without negatives (SimSiam).

| | H-score on Office31 |
| - | - |
| Rotation/Location (Xu et al.) | 59.7 |
| Jigsaw Puzzles (Bucci et al.) | 63.4 |
| DANCE | 61.2 |
| AlignUniform | 71.8 |
| SimSiam | 72.3 |

This suggests that contrastive and non-contrastive methods, which explicitly tackle DC, are effective in extreme UniDA—unlike previously explored SSL approaches, which struggle in this setting.

**Results on PDA and OSDA**

We refer the reviewer to the "Results on PDA and OSDA" section of Reviewer sBWy.

[1] “Rethinking The Uniformity Metric in Self-Supervised Learning”, ICLR 2024
[2] “Universal Domain Adaptation via Compressive Attention Matching”, ICCV 2023
[3] “Understanding Dimensional Collapse in Contrastive Self-supervised Learning”, ICLR 2022
[4] “How Does SimSiam Avoid Collapse Without Negative Samples? A Unified Understanding with Self-supervised Contrastive Learning”, ICLR 2022

---

Rebuttal Comment 1.1: Comment: I want to thank the authors for the rebuttal, most of my concerns are addressed and I therefore increase the score to 3.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for carefully considering our rebuttal and for increasing the score. We're glad that our response addressed your concerns.
Summary: The paper investigates the cause of the performance degradation of partial domain matching (PDM) in (extreme) universal domain adaptation (UniDA). Specifically, the paper presents the failure mode of PDM resulting from dimensional collapse (DC) in target representations, in extreme UniDA where the source-private classes are abundant. To address this issue, the paper proposes to incorporate self-supervised learning on unlabeled target data during training. Ablation studies and empirical evaluations are presented. --- ### Post-rebuttal I find the clarification in authors' rebuttal helpful and encourage authors to incorporate it in manuscript. I have increased my score. Claims And Evidence: The primary claim of the paper is that DC results in the failure of PDM in extreme UniDA settings. The evidence comes from the analyses on different applications of SSL loss functions, employing the Wang and Isola (2020) framework. Methods And Evaluation Criteria: The method starts from analyzing SSL loss functions to demonstrate the pitfalls of PDM in UniDA, and followed by incorporating unlabeled target data into training, in addition to source-labeled data. The evaluation criteria depend on the data, and include accuracy, H-score (based on accuracy), and various visualizations. Theoretical Claims: The evidence of the paper primarily comes from empirical evaluations and illustrations. Experimental Designs Or Analyses: The illustrative experiments on the influence of extreme UniDA on PDM, in terms of DC, are based on singular values (consistent with previous approaches), as well as the relation of DC and loss functions (employing particularly the framework of Wang and Isola, 2020). The experiments (Section 5) are designed to showcase the potential benefit of SSL for PDM, and the generalization across different DA settings. Supplementary Material: I went through the supplementary material. 
Relation To Broader Scientific Literature: The paper is related to different settings/instantiations of domain adaptation. Essential References Not Discussed: There are no significant missing references (with the caveat that the setting is sensible, more in the Other Strengths and Weaknesses part). Other Strengths And Weaknesses: I have one particular concern: the utilization of target data (although unlabeled) makes the setting no longer extreme UniDA. On the one hand, if the distinction of settings (e.g., UDA or UniDA) is w.r.t. both the label sets (explicitly) and constraint on the corresponding features (implicitly), then the gap "Extreme UniDA" in the spectrum of UniDA (Fig. 1) is important and meaningful to address. However, the introduction of target data without labels during training shifts the setting to a learning problem instead of a DA problem. The previous approaches compared are DA methods, and therefore, are not sufficient to demonstrate the benefits of the proposed approach. On the other hand, if the distinction of settings is only w.r.t. the labels themselves, then the setting of "Extreme UniDA" becomes an ill-posed problem, since there is no additional assumption/leverage on how target features (not labels) relate to the source domain. According to the paper, the appearance of unlabeled data (from the target domain) is still within Extreme UniDA. Other Comments Or Suggestions: (additional comment on material organization) In the Introduction, the settings and approaches for UniDA and UDA are introduced interchangeably. Considering the fact that the abbreviations are very similar, it might be worth considering rearranging the material to make the content more consistent locally (e.g., no jumping back and forth between UDA and UniDA, now that the object of interest is UniDA). Questions For Authors: Can the authors share further clarifications/discussions w.r.t.
the concern about the shift in setting due to the utilization of target data (even if unlabeled) during training? Code Of Conduct: Affirmed. Overall Recommendation: 3
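For background on the singular-value analysis this review refers to, dimensional collapse is typically diagnosed from the spectrum of the feature covariance matrix: a collapsed representation concentrates nearly all variance in a few directions. A hedged NumPy sketch (function name ours, not the paper's code):

```python
import numpy as np

def covariance_spectrum(features):
    # Singular values (descending) of the centered feature covariance.
    # A collapsed representation has only a few non-negligible values.
    centered = features - features.mean(axis=0, keepdims=True)
    cov = centered.T @ centered / len(features)
    return np.linalg.svd(cov, compute_uv=False)
```

A sharply decaying spectrum (a large gap between the leading and trailing singular values) is the collapse signature such analyses look for.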
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the constructive feedback. The primary concern raised is whether the utilization of unlabeled target data during training is still consistent with the Extreme UniDA setting. We address this concern below: **Concern about the utilization of target data, which makes the setting no longer extreme UniDA.** We appreciate the reviewer’s perspective and acknowledge that different lines of work may interpret or label these setups differently. However, our approach remains squarely within the unsupervised domain adaptation (UDA) paradigm—specifically, universal domain adaptation (UniDA). In this framework, both label sets (label shift) and underlying data distributions (covariate shift) can differ between source and target domains—aligning with the first category the reviewer mentioned—and having access to unlabeled target data during training is a standard assumption. We will clarify in the revised manuscript that having unlabeled target data during training is not an additional assumption outside of domain adaptation; rather, it is intrinsic to UDA and UniDA. Removing all target data during training would place the problem under settings like domain generalization [1], zero-shot transfer [2], or test-time adaptation [3], which focus on generalizing without any prior exposure to the target domain. In contrast, both UDA and UniDA explicitly require access to unlabeled target data, and every baseline PDM method we compare (e.g., UAN, CMU, UniOT) operates under this same premise. Therefore, while we recognize that terminological nuances may vary, our work consistently follows the established UniDA framework. We do not shift toward a purely supervised or semi-supervised learning problem on the target domain; instead, we remain entirely within the standard assumptions of UniDA by incorporating unlabeled target data during training. 
[1] Li et al., “Learning to generalize: Meta-learning for domain generalization”, AAAI 2018 [2] Radford et al., “Learning Transferable Visual Models From Natural Language Supervision”, ICML 2021 [3] Wang et al., “Tent: Fully test-time adaptation by entropy minimization”, ICLR 2021 **Interchangeable usage of UniDA and UDA** Thank you for your helpful suggestion. We agree that the interchangeable use of UniDA and UDA may cause confusion. While our primary focus is on Universal Domain Adaptation (UniDA), we include key references from the Unsupervised Domain Adaptation (UDA) literature to provide necessary context. In the revision, we will clarify this distinction and streamline the discussion of UDA works to maintain focus.
Summary: This work focuses on the problem of extreme Universal Domain Adaptation (UniDA). Firstly, UniDA considers a domain adaptation problem where a model has to be trained with a labeled source domain and an unlabeled target domain, such that the label sets of source and target domains are disjoint (i.e. some classes are source-private, some are shared, and some are target-private). The extreme UniDA problem considers the cases of having a very large number of source-private classes, i.e. many source classes are absent from the target data. This paper analyzes the partial domain matching (PDM) approaches for UniDA and finds that they fail on extreme UniDA due to dimensional collapse (DC). To address DC, they propose to use an existing SSL method that encourages learned representations to be uniformly distributed on the unit hypersphere, in order to preserve their intrinsic structure. Finally, they perform experiments on extreme UniDA and show that existing approaches can be improved using the SSL method. Claims And Evidence: * Fig. 2: It would be good to report common classes accuracy and target-private accuracy separately (apart from the H-score), so we have a clear picture of whether both of these are worse than SO for high $\pi_s$ or if one or the other is worse. This could also give additional insights into which part of the UniDA algorithm needs to be improved for extreme UniDA. * Fig. 4 (b, d): The analysis is interesting and informative, but it uses only older works (2022 and before). It would be stronger if the same is evaluated with newer works/methods (like LEAD, Qu et al. CVPR 2024 or another work from 2023-2025). Methods And Evaluation Criteria: Yes, the paper uses standard evaluation criteria similar to prior UniDA works and develops new analysis experiments that also seem to be valid and useful. Theoretical Claims: Not applicable Experimental Designs Or Analyses: * Overall experimental design seems reasonable and valid. 
And the analysis experiments added to motivate different ideas in Sec. 3 and 4 make the paper very interesting to read. * Sec. 4.2: Is the self-supervised loss the same as the uniformity loss from Sec. 4.1? If yes, it would be good to use the same notation throughout so that it's easier to follow. If not, it would be better to clarify what the self-supervised loss is and how it differs from the uniformity loss. Supplementary Material: I reviewed the provided Appendices A, B, C, and D. Relation To Broader Scientific Literature: * This paper analyzes the failure of existing UniDA methods for the difficult setting of extreme UniDA, which is novel and interesting. * Specifically, they analyze the partial domain matching approaches and find that they fail due to dimensional collapse (DC). * To resolve the DC problem, they re-purpose an existing SSL method (hypersphere-based uniformity loss). * Finally, they show improved results for extreme UniDA by incorporating the SSL method into existing UniDA techniques like UAN, UniOT, and MLNet. * While UniDA is a more complex and niche setting compared to general DA, it is a more practical setting. Further, the analysis experiments in this paper are quite valuable and interesting to motivate future work. Essential References Not Discussed: * SSL-based UniDA work (Kundu et al. 2022) was only mentioned in experimental settings in Appendix D.3. However, this work designed a self-supervised pretext task specifically for Universal DA and should be discussed and compared with the proposed method. This is also currently missing from Table 1 (comparison of existing SSL approaches for UniDA). * Another SSL-based UniDA work [W1] proposes a self-supervised adaptive memory network with consistency regularization, and should be discussed and compared with the proposed method. * A highly relevant SSL-based DA work [W2] was not discussed or cited in the paper.
It explores the use of existing pretext tasks for DA, highlights their limitations, and proposes a new pretext task specifically designed for closed-set DA. It should be discussed in the related work. [W1] Zhu et al., “Self-supervised Universal Domain Adaptation with Adaptive Memory Separation”, ICDM 2021 [W2] Kundu et al., “Concurrent Subsidiary Supervision for Unsupervised Source-Free Domain Adaptation”, ECCV 2022 Other Strengths And Weaknesses: * The writing of this paper is solid and the analysis experiments in Sec. 3 and 4 strongly motivate the proposed approach. * A minor weakness is that the SSL method is not novel and re-purposed from Wang and Isola (2020). However, this work seems to be the first to use it for UniDA. Other Comments Or Suggestions: None Questions For Authors: Please address the concerns listed above. Overall, the paper is well-written and well-motivated with good results. However, I have some concerns regarding essential references not being discussed properly in the paper, apart from other minor concerns in the experiments. Hence, my rating is currently “weak accept” but I am willing to update my rating based on the rebuttal. ## Update after rebuttal I thank the authors for their efforts in the rebuttal. Since my major concerns are resolved, I upgrade my rating to "accept". Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and are glad our analysis strongly motivates the proposed approach. Below, we address the raised concerns. **Missing references** We appreciate the reviewer’s feedback on our discussion of SSL for DA. While prior work primarily employs SSL to minimize domain gap—either through pretext tasks or by aligning low-uncertainty samples with class prototypes [2, 3, 4]—our focus is on preventing dimensional collapse (DC) in extreme UniDA. This different goal leads us to adopt SSL paradigms specifically designed to counteract collapse by enhancing feature diversity. Our methods include both contrastive (AlignUniform) and non-contrastive (SimSiam, Barlow Twins) approaches that promote uniformity and decorrelation to mitigate DC [5], rather than simply using pretext tasks for domain matching. We also present performance comparisons on extreme UniDA in the table below (see "Comparison with other SSL pretext tasks" of reviewer yX6d), showing that pretext-based and prototype-alignment methods fall short of the performance achieved by approaches specifically targeting DC. The above content will be included in Section 4 for a more comprehensive comparison in our latest version. We now discuss how our approach differs from the specific works [1, 3, 4]. Kundu et al. [1] propose "sticker-intervention," a pretext task that improves domain preservation over traditional SSL tasks. We will include it in the related work. Zhu et al. [3] propose a framework that is highly similar to DANCE [2], comprising prototype alignment (referred to as adaptive memory in [3]) and an entropy separation module. As [3]'s code is unavailable, we use DANCE as a proxy given their similar design. DANCE struggles in extreme UniDA (see Table 3–5), making it a reasonable reference point. We will include a discussion of [3] in the related work section. 
SPA [4] introduces an add-on module that performs adaptation at the mid-level layers, as these layers are shown to exhibit lower negative transfer. Their method leverages a Bag-of-Words-inspired pretext task to learn distinct visual word prototypes and promotes prototype alignment via entropy minimization. Their work mainly improves domain matching, which is orthogonal to our work that tackles DC. As their module is designed to be an add-on, it can serve as a complementary component to our method. Since the code is not publicly available, we currently include their method only in the related work discussion and will add it as a baseline once the code is released.

[1] “Concurrent Subsidiary Supervision for Unsupervised Source-Free Domain Adaptation”, ECCV 2022
[2] “Universal Domain Adaptation through Self-Supervision”, NeurIPS 2020
[3] “Self-supervised Universal Domain Adaptation with Adaptive Memory Separation”, ICDM 2021
[4] “Subsidiary Prototype Alignment for Universal Domain Adaptation”, NeurIPS 2022
[5] “Rethinking The Uniformity Metric in Self-Supervised Learning”, ICLR 2024

**Report common-class accuracy and target-private accuracy separately in Figure 2.**

Thank you for the suggestion! Due to character limitations, we were unable to include the full tables here, but we summarize the observed trends below:

* H-score: The H-score decreases as $\pi_s$ increases, as shown in Figure 2. Additionally, UniOT and UAN perform worse than SO under high $\pi_s$.
* Common-class accuracy: This also decreases with increasing $\pi_s$. UniOT performs slightly worse than SO at high $\pi_s$, while UAN shows clearly lower performance than both.
* Target-private accuracy: Accuracy on target-private classes declines as $\pi_s$ increases for all methods. Both UniOT and UAN yield slightly lower scores than SO.
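For reference, the H-score used in these trends is the harmonic mean of common-class accuracy and target-private accuracy, so it is high only when both are high; a minimal sketch (function name ours):

```python
def h_score(acc_common, acc_private):
    # Harmonic mean of common-class and target-private ("unknown") accuracy.
    # Degenerates to 0 if either accuracy is 0, penalizing one-sided trackers.
    if acc_common + acc_private == 0:
        return 0.0
    return 2 * acc_common * acc_private / (acc_common + acc_private)
```

This explains why the H-score drop at high $\pi_s$ mirrors the drops in both per-class accuracies reported above.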
**While the analysis in Figure 4 is interesting and informative, it would be more complete with the inclusion of more recent methods.**

We agree that more recent methods should be included. Since LEAD is a source-free method and does not use a domain-matching loss, we have instead included MLNet (AAAI'24), which employs mutual nearest neighbors (MNN) for domain matching. This approach is conceptually similar to our distance-based baseline, but its mutual filtering enforces a stricter alignment.

* Avg. error rate of importance weight function during training

| | UAN | CMU | Energy | Distance | MNN |
| - | - | - | - | - | - |
| $\pi_s=0.25$ | 0.29 | 0.30 | 0.33 | 0.26 | 0.24 |
| $\pi_s=0.75$ | 0.59 | 0.53 | 0.51 | 0.49 | 0.41 |

The MNN baseline shows a significant improvement. However, it is still far from the threshold (0.15–0.2) required to outperform the Source-Only baseline, as shown in Figure 4 (a).

**Is the self-supervised loss from Sec. 4.2 the same as the uniformity loss from Sec. 4.1?**

The self-supervised loss refers to the alignment + uniformity loss, where alignment encourages class-wise aggregation and uniformity prevents DC. We’ll clarify this in the revision.
CSTrack: Enhancing RGB-X Tracking via Compact Spatiotemporal Features
Accept (poster)
Summary: The article proposes using compact spatiotemporal features for RGB-X tracking. Unlike the commonly used two-stream frameworks, it employs a one-stream structure to reduce computational overhead. Experiments on multiple downstream tasks demonstrate that this paper achieves SOTA performance. ## update after rebuttal The authors' responses have resolved most of my concerns, and I maintain my 'weak accept' rating. Claims And Evidence: The article is well-written with clear descriptions and motivation, and comprehensive experiments. Methods And Evaluation Criteria: The evaluation metrics and RGB-X tracking datasets are consistent with this paper. However, in the ablation study, it is suggested to change the dataset for RGB-T experiments to LasHeR, as this dataset is more challenging and authoritative. Theoretical Claims: The article focuses on empirical research and validates the effectiveness of CSTrack through experimental results. Experimental Designs Or Analyses: The authors conducted sufficient experiments, including experiments on different modalities and ablation studies on the modules, which basically meet the requirements. However, there are issues with unclear descriptions in the experimental design. For example, it is not clear whether a unified tracker is trained for different modalities or if separate trackers are trained for each modality. Additionally, most of the trackers used for comparison are relatively old, and there is a lack of comparison with the latest trackers. Supplementary Material: The authors introduced the dataset in the supplementary materials and supplemented the specific settings of the ablation experiments, which are now relatively comprehensive. However, this paper does not provide an ablation experiment for the entire module, and thus it is not possible to demonstrate the role of the module as a whole within the network.
Relation To Broader Scientific Literature: Previous RGB-X tracking studies typically employed a dual-branch structure and focused on fusion methods. In contrast, this paper uses a one-stream architecture, which is somewhat innovative. Essential References Not Discussed: Some of the latest studies have not been included. For example, TATrack (Temporal adaptive RGBT tracking with modality prompt) and MCTrack (Towards modalities correlation for RGB-T tracking). Additionally, M3PT (Middle fusion and multi-stage, multi-form prompts for robust RGB-T tracking) also attempts to use a single branch to accomplish multimodal tracking tasks, but it differs from the approach presented in this paper. Please compare and discuss these methods. In addition, it is key to highlight the difference from other single-stream multi-modal tracking methods, such as: [1] Unified Single-Stage Transformer Network for Efficient RGB-T Tracking. IJCAI 2024 [2] From Two Stream to One Stream: Efficient RGB-T Tracking via Mutual Prompt Learning and Knowledge Distillation Other Strengths And Weaknesses: Strengths: 1. This paper proposes a novel approach to fuse spatiotemporal features for multimodal tracking while reducing computational overhead. 2. The experimental results demonstrate the effectiveness of the module proposed in this paper. Weaknesses: 1. Some sections could benefit from clearer explanations and visualizations. 2. In Table 6, RGBT234 and VisEvent seem to be insensitive to the ablated components, but the paper does not provide an explanation for this observation. 3. The description of SCM is unclear. In Equations 2 and 3, the cross-attention is computed in parallel, while Figure 1 appears to show a serial computation. Does the order of cross-attention affect the experimental results, or is there an error in Figure 1? Other Comments Or Suggestions: 1.
The one-stream structure, as a major characteristic of this tracker, is not prominently highlighted, and there is a lack of discussion on other one-stream multimodal trackers, such as ViPT, USTrack, and M3PT. As a distinguishing feature, it is recommended to add content in this regard. 2. Although the method presented in this paper has lower FLOPs, the speed appears to be abnormal, with only 35 fps, which is comparable to that of dual-stream trackers. Is there anything wrong with the experiments? Questions For Authors: 1. How does your model differentiate between the inputs of the three modalities? 2. The authors use the HiViT backbone, while most trackers employ the ViT backbone. What are the results if the SCM and TCM are moved to the ViT backbone? 3. The SCM and TCM appear to be transferable to RGB tracking tasks, requiring only modifications to the cross-attention. Have you conducted such experiments? If so, what are the results? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Dear Reviewer f47k,**

Thanks for your time and effort in reviewing our work. Your recognition of our novel method, comprehensive experiments, SOTA performance, and writing quality greatly encourages us. We hope the following responses can address your concerns.

___

### **Q1: Comparison with recent works**

Thank you for mentioning some recent trackers. Here’s how our CSTrack differs:

1. TATrack and MCTrack use two symmetric backbones for separate RGB and X processing. ViPT employs an asymmetric X branch, while USTrack and Yang [1] concatenate RGB and X feature tokens for multimodal feature integration with a one-stream backbone. However, they still need to process RGB-X dual inputs within the backbone. M3PT uses middle fusion but relies heavily on the early dual-branch architecture for performance.
2. In contrast, our CSTrack integrates RGB-X dual inputs into a compact feature space, reducing model complexity and computational costs.

Additionally, the table below compares various trackers, highlighting the effectiveness of our method.

|Tracker|Lasher (SR, PR)|RGBT234 (MSR, MPR)|
|-|-|-|
|TATrack|56.1; 70.2|64.4; 87.2|
|MCTrack|57.1; 71.6|65.6; 87.5|
|ViPT|52.5; 65.1|61.7; 83.5|
|USTrack|-|65.8; 87.4|
|Yang [1]|56.7; 71.4|65.1; 87.3|
|M3PT|56.1; 70.0|63.9; 86.5|
|CSTrack-B|60.8; 75.6|70.9; 94.0|

___

### **Q2: Further explanation of model details**

1. **Basic model setup.** CSTrack is a unified tracker trained jointly across different datasets, enabling testing without modal differentiation. This is mentioned in Section 4.1 and will be included in the method section of the revised version.
2. **SCM description.** We apologize for the error in Equation 3 of our paper. The RGB-related features should be processed as $[q'_r; f^{'t}_r]$, not $[q_r; f^{t}_r]$. Below is the corrected equation, confirming our use of the serial computation method in Figure 1.
$[q'_x; f^{'t}_x] = Norm([q_x; f^{t}_x] + \Phi_{CA}([q_x; f^{t}_x], [q'_r; f^{'t}_r]))$

___

### **Q3: Further explanation of experimental results**

1. **Ablation study of core modules.** The table below presents ablation results for the key modules: the Spatial Compact Module (SCM) and the Temporal Compact Module (TCM). Adding SCM and TCM improves performance, validating our design. We also offer Lasher benchmark results, consistent with the performance rankings observed on RGBT234.

|Setting|DepthTrack (F; Re; PR)|RGBT234 (MSR; MPR)|Lasher (SR; PR)|VisEvent (SR; PR)|
|-|-|-|-|-|
|Baseline|63.6; 64.1; 63.1|68.2; 92.4|57.6; 72.9|62.9; 79.6|
|+ SCM|65.1; 65.3; 64.9|69.6; 93.0|59.8; 74.7|64.1; 81.0|
|+ TCM|65.8; 66.4; 65.2|70.9; 94.0|60.8; 75.6|65.2; 82.4|

2. **SCM analysis.** Table 6 shows that the cross-attention and shared embedding in SCM boost DepthTrack but impact RGBT234 and VisEvent less. We attribute this to higher noise and fluctuations in depth data compared to thermal and event data, as detailed in Figure 3 and our supplementary videos. This suggests these designs are most effective in challenging scenarios. For RGBT234 and VisEvent, the other designs within SCM are sufficient for strong performance, as shown by SCM's gains across all benchmarks (see the table above).
3. **Tracking speed.** In Table 5, our method achieves 35 FPS, surpassing dual-branch trackers at 24 and 18 FPS. This meets expectations and satisfies real-time tracking requirements (exceeding the VOT challenge threshold of 20 FPS).

___

### **Q4: Additional experimental analysis**

1. **Performance with the ViT backbone.** We retrain CSTrack using the ViT backbone. The results below demonstrate that our compact modeling exceeds the performance gains of the HiViT backbone, and our tracker achieves SOTA performance even with the ViT backbone, validating our approach's effectiveness.
|Backbone|Compact Modeling|DepthTrack (F; Re; PR)|Lasher (SR; PR)|VisEvent (SR; PR)|
|-|-|-|-|-|
|HiViT| ✔ |65.8; 66.4; 65.2|60.8; 75.6|65.2; 82.4|
|HiViT| ✘ |63.6; 64.1; 63.1|57.6; 72.9|62.9; 79.6|
|ViT| ✔ |63.9; 64.8; 64.3|58.6; 74.3|64.6; 81.1|

2. **RGB-only tracking test.** Our core compact modeling targets RGB-X dual-stream inputs, making RGB-only tracking not aligned with our objectives. Therefore, we did not conduct this test.
3. **Impact of bidirectional cross-attention order.** We fine-tune our model by swapping the cross-attention order, and the results show minor performance fluctuations. We speculate that the shared patch embedding and the subsequent one-stream backbone can diminish the influence of interaction order.

|Setting|DepthTrack (F; Re; PR)|Lasher (SR; PR)|VisEvent (SR; PR)|
|-|-|-|-|
|CSTrack-B|65.8; 66.4; 65.2|60.8; 75.6|65.2; 82.4|
|+ swap order|65.8; 66.0; 65.4|60.7; 75.8|65.2; 82.4|

___

We hope these responses address your concerns and kindly invite you to reassess your rating. Feel free to reach out to us if you have any further questions.

___

[1] From Two Stream to One Stream: Efficient RGB-T Tracking via Mutual Prompt Learning and Knowledge Distillation, Yang et al., arXiv 2024.
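To make the serial computation order discussed in Q2 and Q4 concrete, here is a hedged NumPy sketch of serial bidirectional cross-attention (single head, no learned projections, simplified normalization); it illustrates only the order of interaction and is not the authors' SCM implementation.

```python
import numpy as np

def cross_attention(queries, context):
    # Single-head scaled dot-product cross-attention, without learned
    # projections, purely to illustrate the data flow.
    d = queries.shape[-1]
    scores = queries @ context.T / np.sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ context

def layer_norm(x, eps=1e-6):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + eps)

def serial_bidirectional(rgb_tokens, x_tokens):
    # Serial order from the corrected equation: the RGB tokens are refined
    # first, then the X-modality tokens attend to the already refined RGB.
    rgb_refined = layer_norm(rgb_tokens + cross_attention(rgb_tokens, x_tokens))
    x_refined = layer_norm(x_tokens + cross_attention(x_tokens, rgb_refined))
    return rgb_refined, x_refined
```

Swapping the two calls inside `serial_bidirectional` corresponds to the "swap order" variant tested in the rebuttal.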
Summary: The paper introduces CSTrack, an RGB‐X tracker that leverages a compact spatiotemporal feature representation to improve tracking performance while reducing computational complexity. Unlike existing methods that typically employ dual-branch architectures to process RGB and X modalities separately, CSTrack integrates both modalities into a single compact feature space using bidirectional cross-attention and learnable modality-specific queries. Moreover, it constructs a refined target distribution heatmap by combining intermediate and final tracking results. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper does not include formal proofs for its theoretical claims. Instead, its contributions are presented through the design of SCM and TCM modules, which are supported by extensive empirical evaluations and ablation studies. Experimental Designs Or Analyses: Yes, the experimental design leverages foundational benchmarks in the relevant domain. Supplementary Material: Yes, I examined the demo provided by the authors and the raw results. Relation To Broader Scientific Literature: Traditionally, many methods—such as ViPT, OneTracker, and SDSTrack—rely on dual-branch architectures to process RGB and auxiliary modalities separately. CSTrack builds on these approaches by unifying the two modalities into a single compact feature space, a strategy that mirrors recent trends in transformer-based models where learnable queries and cross-attention mechanisms have proven effective for capturing complex relationships in visual data. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: 1. Simplifying the dual-stream network in RGB‐X tracking by integrating both modalities into a single compact feature space is a sound idea, as it helps reduce computational complexity and simplifies the model architecture. 2. 
Constructing a refined target distribution heatmap by combining intermediate and final tracking results is an effective practical method for selecting key target features, thereby enhancing tracking robustness. Other Comments Or Suggestions: 1. The method for modality missing simply duplicates the available modality data (e.g., copying RGB to the X channel) without accounting for potential noise interference in the X modality. This approach may inadvertently introduce erroneous signals when the auxiliary modality is noisy. 2. The paper uses a joint training strategy with both RGB‐X and RGB datasets, but lacks a detailed analysis of their distribution differences and potential domain shift issues. Questions For Authors: 1. For the Spatial Compact Module, could you elaborate on how the bidirectional cross-attention and modality-specific queries behave when one modality (e.g., thermal or depth) provides low-quality or noisy data? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Dear Reviewer W7oh,**

We sincerely appreciate your thorough review of our work and are grateful for your recognition of our sound motivation, effective method, and extensive experimental analysis. In response to your concerns, we have provided detailed explanations below:

___

### **Q1: Noise interference in modality-missing tracking**

Thanks for your constructive suggestion on analyzing the impact of noise in the modality-missing tracking task. IPL [1] recently proposed modality-missing tracking benchmarks, along with a baseline method that duplicates the available modality data, which we adopt. As shown in Tables 2 and 4 of the paper, modality-missing settings significantly degrade model performance. We attribute this mainly to the influence of two types of noise:

1. **Absence of the advantage modality.** Figure 4 in the paper introduces the concept of the advantage modality, which primarily conveys object appearance information in challenging scenarios. In the modality-missing task, this advantage modality may be unavailable, leaving the tracker susceptible to noise interference from the non-advantage modality due to its lack of appearance features.

2. **Training and inference bias.** CSTrack is trained with complete RGB-X modalities but is directly applied to modality-missing settings at inference. This difference in input data types introduces noise that significantly affects performance. Despite this, CSTrack exceeds IPL, a tracker tailored for such scenarios. We attribute this primarily to our shared patch embedding, which transforms the initial RGB and X data into a unified feature space, improving compatibility with varied inputs.
Below, we test the impact of the shared embedding to further confirm our analysis:

|Setting|LasHeR-Miss (SR; PR)|RGBT234-Miss (MSR; MPR)|
|-|-|-|
|Unshared Embedding|48.7; 60.6|65.3; 86.5|
|Shared Embedding|50.3; 62.2|68.0; 89.8|

___

### **Q2: Distribution analysis of RGB and RGB-X datasets**

We appreciate your valuable suggestions on the distribution analysis of the various modality datasets. The analysis is as follows:

1. **Distribution differences across modality datasets.** We perform statistical analyses on representative RGB and RGB-D/T/E datasets to obtain the normalization mean values of the training videos, including ImageNet as a reference for the natural image distribution:

|Datasets|ImageNet|LaSOT (RGB-only)|DepthTrack (RGB-D)|LasHeR (RGB-T)|VisEvent (RGB-E)|
|:-|:-|:-|:-|:-|:-|
|RGB Mean|0.485; 0.456; 0.406|0.456; 0.459; 0.426|0.417; 0.414; 0.393|0.500; 0.499; 0.471|0.418; 0.375; 0.317|
|X Mean|-|-|0.574; 0.456; 0.240|0.372; 0.372; 0.368|0.935; 0.904; 0.949|

- RGB modality: LaSOT and DepthTrack, collected in typical natural environments, have mean values similar to ImageNet. However, LasHeR, which often focuses on dark environments, and VisEvent, which collects predominantly yellow RGB images, deviate from these natural image means.
- X modality: as shown in Figure 3 of the paper, X modalities vary greatly in image style, resulting in significant differences in mean values.

2. **Model's discriminative ability across modality datasets.** Using features from the one-stream backbone (see Equation 8 in the paper), we extract the tokens associated with the learnable queries and build a four-class classifier with two fully connected layers to identify the input modality type: RGB-only, RGB-D, RGB-T, and RGB-E. We freeze the existing model and train this classification head for 2 epochs. The accuracy results show the model can effectively distinguish different modality datasets, thanks to the clear distribution differences.
|Datasets|LaSOT (RGB-only)|DepthTrack (RGB-D)|LasHeR (RGB-T)|VisEvent (RGB-E)|
|:-|:-|:-|:-|:-|
|Accuracy|100%|98%|100%|100%|

3. **Advantages of joint training.** Our ablation study, detailed in Table 8 (#3) of the paper, shows that the benefits of joint training, including larger datasets and knowledge sharing across modalities, outweigh the drawbacks of domain shift, resulting in enhanced model performance.

___

### **Q3: Function of bidirectional cross-attention and modality-specific queries**

Our spatial compact module aims to integrate the RGB and X input streams into a unified feature space, facilitating simplified and effective spatial modeling. When a modality has low-quality or noisy data, termed the "non-advantage modality" (see Figure 4), bidirectional cross-attention first emphasizes the advantage modality's representations while minimizing those of the non-advantage modality. Next, the modality-specific queries preserve the global semantic information of each modality, offering additional reference information for the subsequent feature integration.

___

We hope these explanations resolve your concerns. If you have any more questions or need further clarification, feel free to reach out to us.

___

[1] Modality-missing RGBT Tracking: Invertible Prompt Learning and High-quality Benchmarks, Lu et al., in IJCV 2024.
Summary: This paper introduces CSTrack, a novel RGB-X tracker designed to enhance tracking performance by leveraging compact spatiotemporal features. Traditional RGB-X trackers typically process RGB and auxiliary modality (X) inputs separately using dual-branch architectures, which increases computational complexity and limits effective feature fusion. To address this limitation, CSTrack proposes a single-branch compact feature representation that integrates spatial and temporal information more efficiently. This approach incorporates two key modules: the Spatial Compact Module (SCM) and the Temporal Compact Module (TCM). The method is evaluated across several benchmarks, including RGB-D, RGB-T, and RGB-Event tracking datasets, such as DepthTrack, VOT-RGBD2022, LasHeR, RGBT234, and VisEvent. Claims And Evidence: Overall, the paper presents well-supported claims with convincing evidence through extensive experiments and ablation studies. Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper primarily presents a methodological and empirical contribution rather than a theoretical one. Experimental Designs Or Analyses: The experimental design in the paper is well-structured and comprehensive, evaluating CSTrack across multiple datasets and conducting ablation studies to validate its core components. Supplementary Material: Yes, I reviewed the supplementary material, specifically focusing on the Ablation Studies, which evaluate key components of the Spatial Compact Module (SCM) and Temporal Compact Module (TCM), confirming their contributions to performance. Relation To Broader Scientific Literature: RGB-X Tracking and Multimodal Fusion: traditional RGB-X tracking methods typically use dual-branch architectures. Temporal Feature Modeling in Tracking: many existing RGB-X trackers lack effective temporal modeling. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. The paper is well-structured and easy to follow.
Extensive supplementary material provides additional experimental insights, dataset details, and qualitative comparisons. 2. The Spatial Compact Module (SCM) and Temporal Compact Module (TCM) offer a novel approach to multimodal feature integration while reducing computational overhead. 3. The use of compact spatiotemporal representations is an innovative departure from the traditional dual-branch RGB-X tracking architectures. Weaknesses: 1. The proposed method does not achieve state-of-the-art performance on some of the metrics compared to those in the paper 'Exploiting Multimodal Spatial-Temporal Patterns for Video Object Tracking' (AAAI 2025). 2. The reduction in FLOPs compared to dual-branch methods is not significant (only a minor drop from 38G to 36G) in Table 5. 3. It is recommended to compare the efficiency of the proposed method, including parameters and FLOPs, with other methods to better highlight its computational performance. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **Dear Reviewer waVm,**

Thank you for your time and effort in reviewing our work. We appreciate your recognition of our novel, innovative, and efficient method, along with the comprehensive, well-structured, and convincing experiments, as well as the easy-to-follow writing and the extensive supplementary material. We note your concerns about the performance and computational efficiency of our model. In response, we have conducted targeted comparative analyses and additional experiments aimed at addressing your concerns.

___

### **Q1: Performance comparison with STTrack [1]**

We provide a detailed comparison between CSTrack and STTrack across various metrics:

| Model | DepthTrack (F; Re; PR) | VOT-RGBD22 (EAO; Acc; Rob) | LasHeR (SR; PR) | RGBT234 (MSR; MPR) | VisEvent (SR; PR) | Params | Flops |
|-|-|-|-|-|-|-|-|
| STTrack | 63.3; 63.4; 63.2 | 77.6; 82.5; 93.7 | 60.3; 76.0 | 66.7; 89.8 | 61.9; 78.6 | 128M | 91G |
| CSTrack-B | 65.8; 66.4; 65.2 | 77.4; 83.3; 92.9 | 60.8; 75.6 | 70.9; 94.0 | 65.2; 82.4 | 75M | 36G |
| CSTrack-L | 67.1; 67.5; 67.3 | 78.2; 83.8; 93.2 | 61.8; 77.1 | 71.6; 96.0 | 66.1; 82.9 | 254M | 110G |

1. **CSTrack-B, the model presented in our paper, exhibits superior average performance.** As noted, CSTrack-B performs comparably to STTrack on VOT-RGBD22 (EAO ↓ 0.2%; Acc ↑ 0.8%; Rob ↓ 0.8%) and LasHeR (SR ↑ 0.5%; PR ↓ 0.4%). However, CSTrack-B significantly outperforms STTrack on the other benchmarks, achieving improvements such as PR ↑ 2.0% on DepthTrack, MPR ↑ 4.2% on RGBT234, and PR ↑ 3.8% on VisEvent, contributing to superior average performance.

2. **CSTrack targets a more challenging optimization objective.** Unlike our CSTrack, which uses a single model weight to simultaneously handle the RGB-D/T/E tasks, STTrack optimizes separate weights for each task. Additionally, the STTrack repository suggests that it even optimizes individual checkpoints for each benchmark.
While UnTrack [2] indicates that this approach may yield better performance, it is a less practical strategy for real-world applications.

3. **CSTrack-B offers superior computational efficiency.** CSTrack-B offers a substantial advantage in Params (↓ 53M) and Flops (↓ 55G), achieving a better performance-computation balance. Focusing on maximizing performance, we develop CSTrack-L by adopting a larger backbone, which surpasses STTrack across all benchmarks.

___

### **Q2: FLOPs reduction compared to dual-branch methods**

As shown in Table 5 of the paper, our single-branch method reduces FLOPs by only 2G compared to asymmetrical dual-branch methods. The explanation is as follows:

1. Asymmetrical dual-branch methods often incorporate parameter-intensive prompter modules into each layer of the RGB branch to integrate RGB-X features, as seen in the fully connected networks of OneTracker [4]. This yields some FLOPs benefits but leads to increased parameters (↑ 9M) and speed limitations (↓ 11 FPS).

2. Table 5 indicates a notable disadvantage in the tracking performance of this dual-branch method, e.g., MSR ↓ 3.7% on RGBT234. This suggests that the FLOPs scale of this method makes it difficult to effectively integrate RGB-X features.

___

### **Q3: Computational efficiency comparison with existing methods**

Thanks for your constructive suggestion. We analyze the computational efficiency of recent open-source RGB-X methods, yielding the following results:

|Model|DepthTrack (F; Re; PR)|RGBT234 (MSR; MPR)|VisEvent (SR; PR)|Params|Flops|Speed|
|-|-|-|-|-|-|-|
|UnTrack [2]|61.0; 61.0; 61.0|62.5; 84.2|58.9; 75.5|99M|24G|24 FPS|
|SDSTrack [3]|61.4; 60.9; 61.9|62.5; 84.8|59.7; 76.7|102M|108G|22 FPS|
|STTrack [1]|63.3; 63.4; 63.2|66.7; 89.8|61.9; 78.6|128M|91G|27 FPS|
|CSTrack-B|65.8; 66.4; 65.2|70.9; 94.0|65.2; 82.4|75M|36G|33 FPS|

CSTrack-B excels in Params and Speed, with only Flops lagging behind UnTrack.
Similar to the analysis in **Q2**, UnTrack's core modules primarily consist of fully connected networks, which offer Flops advantages but limit Params, Speed, and tracking performance. Overall, our method demonstrates superior tracking performance and computational efficiency.

___

We hope these responses address your concerns and would be grateful if you could reconsider your rating. Should you have any additional feedback or require further clarification, please do not hesitate to let us know.

___

[1] Exploiting Multimodal Spatial-Temporal Patterns for Video Object Tracking, Hu et al., in AAAI 2025.

[2] Single-Model and Any-Modality for Video Object Tracking, Wu et al., in CVPR 2024.

[3] SDSTrack: Self-Distillation Symmetric Adapter Learning for Multi-Modal Visual Object Tracking, Hou et al., in CVPR 2024.

[4] OneTracker: Unifying Visual Object Tracking with Foundation Models and Efficient Tuning, Hong et al., in CVPR 2024.

---

Rebuttal Comment 1.1: Comment: Thank you for your response. Regarding the first question, CSTrack-L outperforms STTrack; however, STTrack serves as a base model with a search resolution of 256 and a template size of 128. Therefore, the comparison may not be entirely fair.

---

Reply to Comment 1.1.1: Comment: **Dear Reviewer waVm,**

Thank you for your timely feedback. Regarding your remaining concern about **Q1 (Performance comparison with STTrack)**, we would like to provide further clarification.

___

### **Clarification 1: CSTrack and STTrack utilize the same resolution for search and template images.**

It is important to note that CSTrack-L is merely an enhanced version of CSTrack-B, achieved solely by employing a larger backbone. Therefore, CSTrack-L, CSTrack-B, and STTrack all utilize a search image resolution of 256 and a template resolution of 128, ensuring fairness concerning input image resolution. (Due to word limits in the comments window, we did not provide a detailed explanation of the experimental setup for CSTrack-L.
We apologize for any confusion this might have caused.)

___

### **Clarification 2: When CSTrack-B adopts the same optimization objective as STTrack, it outperforms STTrack across all datasets.**

As discussed in the initial rebuttal regarding Q1, our CSTrack-B employs an optimization objective based on a single unified model weight to simultaneously handle the RGB-D/T/E tasks. In contrast, STTrack optimizes a separate set of model weights for each modality, which simplifies the complexity of the optimization target. As demonstrated in UnTrack, this strategy often results in superior performance. For a fair comparison, we also evaluated CSTrack-B using modality-specific optimization weights. As shown in the table below, the modality-specific CSTrack-B surpasses STTrack across all metrics while offering significant computational efficiency advantages.

| Model | DepthTrack (F; Re; PR) | VOT-RGBD22 (EAO; Acc; Rob) | LasHeR (SR; PR) | RGBT234 (MSR; MPR) | VisEvent (SR; PR) | Params | Flops |
|-|-|-|-|-|-|-|-|
| STTrack | 63.3; 63.4; 63.2 | 77.6; 82.5; 93.7 | 60.3; 76.0 | 66.7; 89.8 | 61.9; 78.6 | 128M | 91G |
| CSTrack-B (unified) | 65.8; 66.4; 65.2 | 77.4; 83.3; 92.9 | 60.8; 75.6 | 70.9; 94.0 | 65.2; 82.4 | 75M | 36G |
| CSTrack-B (modality-specific) | 65.9; 66.8; 65.5 | 77.8; 83.6; 93.8 | 60.9; 76.4 | 70.9; 94.4 | 65.4; 82.5 | 75M | 36G |

___

We believe these additional clarifications address your concerns and would greatly appreciate it if you could reconsider your rating, as it is highly important to us. Due to the limitations on the number of comments, if there are any other issues you would like us to clarify, please update them in the **'Rebuttal Comment'** window above. We will promptly address them and provide updates in this window.
Our sole purpose in doing this is to sincerely thank you again for your recognition of our method, experiments, and writing in the initial review, and we genuinely hope to address any remaining doubts you might have about our work. Wishing you all the best.
DUNIA: Pixel-Sized Embeddings via Cross-Modal Alignment for Earth Observation Applications
Accept (poster)
Summary: The paper proposes DUNIA (Dense Unsupervised Nature Interpretation Algorithm), a method that generates pixel-level embeddings by aligning forest vertical structure information (obtained from spaceborne full-waveform LiDAR) with satellite imagery, using contrastive learning. The pixel-level nature is what makes DUNIA different from related works, which more typically produce patch-sized embeddings. Due to the contrastive learning approach, the resulting embeddings can be directly used for various EO tasks in a zero-shot fashion. Experiments are conducted, comparing e.g. with the recent AnySat approach, which show that DUNIA (in the fine-tuning setting) yields performance on par with or better than the state of the art on five out of six explored tasks. The embeddings of DUNIA can -- for the first time, to the best of my and the authors' knowledge -- be used to directly generate waveforms representing the forest's vertical structure from pixel inputs.

## updates after rebuttal

I thank the other reviewers and the authors for all their efforts. I have read the other reviews + associated rebuttals + the rebuttal to my review. I think overall that the authors have done a thorough job in addressing concerns (but I of course must leave it to the other reviewers to assess what they think of the responses to their respective reviews), including a significant amount of additional relevant experiments. The original reviews were quite diverse: 1 strong accept, 2 weak reject, 1 weak accept, with my weak accept being kind of in the middle. I note that after the rebuttal, one of the weak reject reviewers (WDGb) has updated to weakly accepting the paper (instead of weakly rejecting), so it seems the majority of us believe the paper should now be accepted. I therefore score the work as "accept" now (before, "weak accept").
Claims And Evidence: Yes, I would say that most of the claims to the best of my understanding are well-backed-up; an example of such a claim that is supported is: * **Claim #1:** "In the fine-tuning setting, we show strong low-shot capabilities with performance near or better than state-of-the-art on five out of six tasks." - **Quality of evidence for claim:** + I think this is backed up well in the experiments section, see in particular Table 2. There we see that DUNIA obtains best results in 4 tasks, is more or less on par (82.2% vs 82.3%) for one task, and obtains quite a bit worse results (than AnySat) on one task. One claim that is not quite as well backed up is: * **Claim #2:** At 2nd column of first page, it says that related work approaches "struggle with more complex output like the full vertical structure of vegetation". A similar statement in Sec. 2.3, where it is claimed that pixel-level alignment is necessary for dense predictions in EO applications. - **Quality of evidence for claim:** + I didn't quite feel that the statement was backed up, e.g. by citing some works / results that show that these related works struggle with this. This could perhaps be alleviated by in the text referring the reader to the experiment section, where e.g. the comparisons to AnySat show some of this claim (?). + Also, I feel like that part about pixel-level alignment being necessary for dense predictions is wrong. Many approaches (including AnySat) is used for dense predictions, despite not having pixel-level alignment. Methods And Evaluation Criteria: Yes, I would say so, e.g. for these reasons (see further positives also under "Experimental Designs or Analyses"): * Good that many tasks (7) are explored, and that the proposed DUNIA obtains strong zero-shot improvements on most tasks, including when comparing with specialized supervised models (and really strong improvements on some -- see Table 1). 
* Continuing on the previous note, also great fine-tuning results (Table 2). Some comments on the negative side of things: * It was not quite clear to me why the standard F1 score was not (also) used as an evaluation criterion, and only the weighted wF1 score. * I could not see any reporting of inference runtime speeds (as in actual "wall-clock" time); it would have been good to have this e.g. in the supplement (D.2.2). I assume it's roughly in the same order of compute-efficiency as e.g. AnySat, but it would be good to compare them. It would be especially interesting to see if the number of neighbors in the kNN part affects this a lot. Theoretical Claims: No. Did not see proofs nor theoretical claims in the paper. Experimental Designs Or Analyses: Yes, I had a look at all experiment designs / analyses in the paper (and where needed checked if some things that appeared missing in the main were done in the supplement). Positives: * The main results in Table 1 and Table 2 seem well-thought-through. A lot of tasks (7) explored, and many relevant methods compared against. DUNIA is best in most cases, and where it is not best, relevant commentary is added (see below). * I found the analysis of why performance is worse on "PASTIS" to be insightful and relevant (see beginning of left column, p7, where the reasoning is that the variability of the phenological cycles of crops cannot be well-captured by a single median composite). * I liked that in Supp. D.3.5 an analysis of the embedding sensitivity to horizontal and vertical structures was included, as it is such a core methodological contribution of the paper (and as shown in the corresponding Table 7, the proposed design of cross-modal alignment with vertical structure data is important). Negatives: * I was missing some ablation or similar into the importance of using both S-1 and S-2 data. In the supplement, for example, there could have been room for trying DUNIA while omitting one of the two.
* While it's good that limitations are made transparent (see latter part of Sec. 5), I find that the comment on the reliance on timeseries data could anyway have been explored a bit, empirically speaking. Because to the best of my understanding, image timeseries were anyway processed in such a way that they in practice could be replaced with a single image (albeit at an expectation of worse results). Thus it would have been interesting to see what happens if one uses a single image instead (and it would still be OK if results get much worse). * Around L291 in the main paper, it is mentioned that fixed-resolution imagery (256x256 pix) is used during inference. I think that it would have been good in the supplement to provide some insight into how results may be affected if using another resolution, as a comparison. Supplementary Material: I skimmed all of it, and put some extra attention on these parts: * D.3.5 (see my commentary on it in previous question's box). * D.2.2 (I was looking for runtime inference speeds but did not find it). * In general looked through all ablations when trying to look for results that I found missing in the main paper (see also previous question's box). Relation To Broader Scientific Literature: To the best of my knowledge, the related work mentioned looks good and covers the necessary literature. In particular, I think lots of relevant related work is covered in the beginning of page 2 (and it's particularly good to mention how those works are various ways to remedy the challenges listed towards the end of the previous page (p.1)), followed also in Sec. 2.1 and 2.2. In addition, an extended set of related work is provided in the supplement. As for the key contributions of this paper, I think Section 2.3 covers it well. In particular, this work builds on earlier works that develop cross-modal (e.g.
AnySat) ML-EO methods, but makes a strong contribution relative to prior works in that DUNIA (for the first time, at least to the best of my knowledge, and based on the authors' claims) due to its pixel-level embeddings can directly approximate forest vertical structure from pixel inputs. Essential References Not Discussed: To the best of my knowledge, no essential references were missed; in particular, no references that made it hard for me to understand this paper were missed. Other Strengths And Weaknesses: Strengths: * I re-iterate this as one of the main strengths of this work: The embeddings of DUNIA can -- for the first time, to the best of my and the authors' knowledge -- be used to directly generate waveforms representing the forest's vertical structure from pixel inputs. * I think in general that the design choices etc. of the method have been carefully thought through. An example of many that illustrate this is in Sec. 3.4, where the reasoning about when and why a certain alignment loss is used makes sense (also, such things are well-ablated in the supplement as well). * It's good that limitations are raised in the discussion section (e.g. acknowledging the reliance on cases where timeseries are available). * The impact statement on p.9 is important and good. It really shows the importance of this line of work. Weaknesses: * Not all parts were quite clear to me, e.g. - In Sec. 3.3, I got most of the explanation about that composite image I from S1 and S2 timeseries. What I did not get was how the model is supposed to perform reasonably well in cases where it does not have composite images but just "simple / plain" images. I.e., if it's trained assuming access to such "privileged information" as composite images, how can it be expected to not get a "performance drop" when that input changes to something less "rich"? **EDIT:** I keep this weakness, even though it was later made clear (in the listing of limitations in Sec.
5) that this is one of the weaknesses of the approach. But I think this should be more clearly stated earlier as well, as is apparent from the fact that I got confused about it. - In the abstract it reads (L33) "... outperform specialized supervised models, even in low-labeled data regimes". This formulation was a bit surprising to me, given that it is my understanding that this is the most expected case, and is thus not surprising at all (since supervised models often require high-labeled data regimes to work). (?) Other Comments Or Suggestions: Some minor things such as typos: * Be consistent in "Earth observation" vs "Earth Observation". Pick one. Both work. * Be consistent in the way Sentinel-1 and -2 are abbreviated, e.g. in Fig. 2 caption it is written as "S-1 & S2" <-- write in one of the two ways all the time. * I suggest "words" in math env to not be italic, e.g. in eq. (2) mse could be written in non-italics (and perhaps capital letters for that is more common). * When referring to figures in the SM (e.g. Fig 5 to 8 right before Sec. 4.2.2), please state that they are in the SM. Questions For Authors: * I could not see statement(s) about future code and/or model availability. Will these be made publicly available? When? * Why was weighted F1 used, instead of the "equal-weighted" F1 score? (Or why not both?) Finally, can you provide some equal-weighted F1 scores to compare with too? * What is the runtime for DUNIA vs AnySat? Does this depend a lot on the number of neighbors in the kNN? * Would the method provide roughly as good results if doing inference on smaller-res images (e.g. 128x128) or higher (512x512), instead of the current default of 256x256? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: First, we wish to thank reviewer **zT7y** for their complete & thorough review of our submission.

1. Unsupported claims

> 1\. Concern (C): The struggle to directly estimate the full vertical structure

Response (A): Reconstructing the full vertical structure (W) requires modeling a complex distribution P(W|Pix). A pixel embedding (Pix) aggregates spectral information over a 2D footprint but lacks the explicit depth-wise cues needed to resolve W directly (e.g. using a decoder head). In contrast, LDMs learn to iteratively denoise samples toward a plausible W, conditioned on aligned pixel embeddings. By this process, LDMs implicitly capture the conditional distribution P(W|Pix) in a more expressive manner. We have revised accordingly.

> 2\. C: Pixel-level alignment necessary for dense predictions

A: We agree with the reviewer that in the fine-tuning case, pixel-level alignment is not necessary for dense prediction tasks, as usually a decoder is trained on the generated patch-sized embeddings. We have revised accordingly.

2. Methods and evaluation criteria

> 1\. C: The use of the weighted F1 score

A: We relied on this metric to remain consistent with the performance scores reported in the literature. The authors of the PF dataset used the weighted F1 score due to high class imbalance. For the PASTIS and CLC+ datasets, overall accuracy (OA) has been reported. For these reasons, we opted for the weighted F1 score. Due to lack of time, we only managed to run partial fine-tuning tests. The results indicate that the micro- and weighted F1 scores are of the same order of magnitude. The macro F1 score is several percentage points lower (e.g., \~7% lower for the PF dataset). However, the model ranking remains the same regardless of the score.

> 2\. C: Inference runtime speeds of DUNIA and AnySat

A: Due to design choices and data requirements, AnySat is orders of magnitude slower than zero-shot DUNIA.
Below are the wall clock times (in seconds) to generate a ~20x20 km area (~4.19M pixels), assuming a retrieval database containing 256K keys/values for DUNIA and 100 NNs. The test excludes data loading times.

|Model|forward pass|retrieval|KNN|Total|
|--|--|--|--|--|
|DUNIA|2.52|0.36|1.34|4.22|
|AnySat|177.37|-|-|177.37|

For a database with 512K K/V, retrieval increased to 0.4 s. For NN = 200, KNN increased to 1.88 s.

3. Experimental Designs Or Analyses

> 1\. C: Effect of importance of using both S-1 and S-2 data

A: In the limited time that we had for the rebuttal, we only managed to test on heights and land cover classification (CLC+). Below are the results:

|Dataset|Metric|S-1 only|S-2 only|S-1 & S-2|
|--|--|--|--|--|
|Heights|rmse|3.8|2.8|1.34|
|CLC+|wF1|90.2|90.0|90.3|

The results show that DUNIA leverages both modalities for heights and, in general, using S-2 yields more accurate results than S-1 for heights. For land cover classification, using either modality leads to similar performance. In the revised version, we will include the remaining datasets.

> 2\. C: Reliance on timeseries data could anyway have been explored a bit

A: We have revised Sections 3 (Approach) and 4.1.1 (first paragraph) to better reflect this point. More specifically, while image composites such as a median composite may be richer than single-date images, they are still less informative than a full time series, as they only provide a median reflectance value over a given period. On the other hand, even this simple form of aggregation preserves significantly more information than a single-date image. Below are the fine-tuning results for two products (Heights and PASTIS) using randomly acquired-in-time S-1 & S-2 imagery.

|Dataset|Metric|Single-date image|Median composite|
|--|--|--|--|
|Heights|rmse|1.9|1.34|
|PASTIS|wF1|42.3|77.0|

> 3\. C: Issue raised regarding the effect of image resolution

A: We agree with the raised concern.
Our tests show that image size has no effect on the quality of the resulting product. Below are performance results for different resolution images. The tests were performed in the zero-shot setting:

|Dataset|Metric|128x128|256x256|512x512|
|--|--|--|--|--|
|Heights|rmse|2.2|2.0|2.1|
|Cover|rmse|11.6|11.7|11.7|
|CLC+|wF1|80.2|80.1|80.2|

4. Weaknesses

> 1\. C: How the model is supposed to perform reasonably well in cases where it does not have composite images

A: Please refer to 3\.2.

> 2\. C: The statement "outperform specialized supervised models, even in low-labeled data regimes" is wrong.

A: Yes, we agree with the raised concern. We have revised accordingly.

5. Questions For Authors

> 1\. C: Statement(s) about future code and/or model availability

A: We thank the reviewer for raising this point. We are currently preparing the code for publication. We will link to the repository in the finalized version.

> Other questions

A: For Qs 2, 3, and 4, please refer to 2\.1, 2\.2, and 3\.3, respectively.

---

Rebuttal Comment 1.1:

Comment: I thank the other reviewers and the authors for all their efforts. I have read the other reviews + associated rebuttals + the rebuttal to my review. I think overall that the authors have done a thorough job in addressing concerns (but I of course must leave it to the other reviewers to assess what they think of the responses to their respective reviews), including a significant amount of additional relevant experiments. The original reviews were quite diverse: 1 strong accept, 2 weak reject, 1 weak accept, with my weak accept being kind of in the middle. I feel especially that one of the weak reject reviewers (WDGb) was open to accepting the paper if the weaknesses were well-addressed, so I feel there is a high possibility that there could become a consensus on accepting the paper. But we will see what the others think, too.

---

Reply to Comment 1.1.1:

Comment: Thank you for your comments.
Please allow us to clarify a few points:

Regarding reviewer **WDGb**, although they have not officially commented on our rebuttal, they raised their score from 2 to 3. We interpret this as a sign that they were generally satisfied by our responses to their concerns.

As you noted, reviewer **hPct** requested several additional experiments, which we performed. Today, reviewer **hPct** acknowledged our rebuttal, although without providing an answer yet. To summarize the responses to reviewer **hPct**'s key requests:

- We compared our model against four specialist canopy height models, added an additional dataset, and demonstrated that we outperform specialists even in the zero-shot setting and using <1% of the labels.
- We included three additional competing models, despite earlier findings that they perform poorly on forest monitoring tasks. One model wasn't included as it is closed source and not publicly accessible.
- We evaluated geographical robustness on the suggested dataset and outperformed all baselines.
- We assessed temporal robustness via ablations and datasets with available labels across the years. Results demonstrate the model's stability in time.

These new experiments reinforced our original claims and did not alter our conclusions.

Finally, a central contribution of our work, as highlighted by all reviewers, is the native integration of LiDAR waveform data at the pixel scale. This enables accurate zero-shot classification and full vertical structure estimation, an important extension to the capabilities of remote sensing foundation models. This extension helps fill a critical gap in forest monitoring and supports broader ecological conservation efforts. We also show that integrating LiDAR is crucial for these tasks: our model outperforms five recent and prominent foundation models on vertical structure estimation and demonstrates strong performance on other land cover and land use tasks as well.
We also emphasize that this is a general framework, not a product, and one that is efficient to pretrain with modest compute requirements. We hope this response clarifies the key points and addresses any remaining concerns. We would be grateful if you would consider this in your final evaluation.
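As a side note on the F1 variants discussed in response 2.1 above, the sketch below (illustrative only, with made-up toy labels; not taken from our evaluation code) shows why, under class imbalance, the weighted F1 stays close to overall accuracy while the macro F1 drops by several points when a minority class is poorly predicted:

```python
from collections import Counter

def per_class_f1(y_true, y_pred, cls):
    """F1 for a single class, treating it as the positive label."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

def f1_scores(y_true, y_pred):
    """Return (macro, weighted) F1: unweighted mean vs. support-weighted mean."""
    classes = sorted(set(y_true))
    support = Counter(y_true)
    f1s = {c: per_class_f1(y_true, y_pred, c) for c in classes}
    macro = sum(f1s.values()) / len(classes)
    weighted = sum(f1s[c] * support[c] for c in classes) / len(y_true)
    return macro, weighted

# Imbalanced toy labels: 8 samples of class 0, 2 of class 1;
# all majority-class predictions correct, one minority-class error.
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 8 + [0, 1]
macro, weighted = f1_scores(y_true, y_pred)
# macro ≈ 0.804 is pulled down by the minority class; weighted ≈ 0.886
```

With scikit-learn, the same numbers are obtained via `f1_score(y_true, y_pred, average='macro')` and `average='weighted'`.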
Summary: The paper presents DUNIA (Dense Unsupervised Nature Interpretation Algorithm), an approach for learning pixel-sized embeddings for Earth observation applications through cross-modal alignment between satellite imagery and LiDAR data. The main contributions include a framework that learns dense pixel-level embeddings by aligning forest vertical structure information from LiDAR waveforms with satellite imagery using contrastive learning. This enables both vertical and horizontal structure understanding at the pixel level. The proposed method can perform zero-shot predictions for multiple forest/land monitoring tasks.

Claims And Evidence: The paper's claim that it "often outperforms specialized supervised models" is partially supported by the evidence presented, though the scope of comparison could be broader. While the results demonstrate superior performance in several cases, the limited set of supervised models used for comparison somewhat weakens the generality of this claim. A more comprehensive benchmarking against recent specialized models would provide stronger support for this assertion. A few examples include Lang et al., 2023; Fayad et al., 2024; Tolan et al., 2024:

* Lang, Nico, Walter Jetz, Konrad Schindler, and Jan Dirk Wegner. "A high-resolution canopy height model of the Earth." Nature Ecology & Evolution 7, no. 11 (2023): 1778-1789.
* Fayad, Ibrahim, Philippe Ciais, Martin Schwartz, Jean-Pierre Wigneron, Nicolas Baghdadi, Aurélien de Truchis, Alexandre d'Aspremont, et al. "Hy-TeC: a hybrid vision transformer model for high-resolution and large-scale mapping of canopy height." Remote Sensing of Environment 302 (2024): 113945.
* Tolan, Jamie, Hung-I Yang, Benjamin Nosarzewski, Guillaume Couairon, Huy V. Vo, John Brandt, Justine Spore, et al. "Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on aerial lidar." Remote Sensing of Environment 300 (2024): 113888.
Similarly, while the paper demonstrates promising low-shot learning capabilities by showing good performance with reduced training data, this aspect of the research could be more thoroughly explored. The analysis would benefit from more extensive comparisons across different data regime sizes to better understand the model's behavior with varying amounts of training data. Additionally, there is limited discussion or analysis explaining why the model performs well in low-data scenarios.

Methods And Evaluation Criteria: The paper's robustness testing could be enhanced, particularly regarding the model's behavior across different years. While the current evaluation demonstrates effectiveness within a specific temporal window, there is limited discussion about how the model performs when applied to data from different years, which is crucial for understanding its long-term applicability in Earth observation tasks. Additionally, concerning the datasets used for land cover classification, while they serve their purpose for the current evaluation, expanding the validation to include more diverse land cover datasets would strengthen the model's claimed generalization capabilities. In addition, I strongly recommend that the authors compare the proposed method with well-established remote sensing foundation models, such as SkySense (CVPR 2024), SatMAE++ (CVPR 2024), and DeCUR (ECCV 2024), using recognized benchmarks such as BigEarthNet, fMoW, and DIOR.

Theoretical Claims: Not Applicable

Experimental Designs Or Analyses: Yes

Supplementary Material: Yes

Relation To Broader Scientific Literature: The paper's contributions advance Earth observation research by building on developments in the field.
On the foundation model front, it expands upon work like Scale-MAE (Reed et al., 2023) and DOFA (Xiong et al., 2024) in handling multi-resolution data, while also advancing the multi-source data integration approaches seen in OmniSat (Astruc et al., 2025) and AnySat (Astruc et al., 2024).

Essential References Not Discussed: The paper overlooks several contributions in the field of tree canopy height mapping. Notably absent is a discussion of recent deep learning-based methods developed by Liu et al. (2023) and Lang et al. (2023), as well as important advances in vision transformer applications by Fayad et al. (2024) and Tolan et al. (2024).

* Liu, Siyu, Martin Brandt, Thomas Nord-Larsen, Jerome Chave, Florian Reiner, Nico Lang, Xiaoye Tong, et al. "The overlooked contribution of trees outside forests to tree cover and woody biomass across Europe." Science Advances 9, no. 37 (2023): eadh4097.
* Lang, Nico, Walter Jetz, Konrad Schindler, and Jan Dirk Wegner. "A high-resolution canopy height model of the Earth." Nature Ecology & Evolution 7, no. 11 (2023): 1778-1789.
* Fayad, Ibrahim, Philippe Ciais, Martin Schwartz, Jean-Pierre Wigneron, Nicolas Baghdadi, Aurélien de Truchis, Alexandre d'Aspremont, et al. "Hy-TeC: a hybrid vision transformer model for high-resolution and large-scale mapping of canopy height." Remote Sensing of Environment 302 (2024): 113945.
* Tolan, Jamie, Hung-I Yang, Benjamin Nosarzewski, Guillaume Couairon, Huy V. Vo, John Brandt, Justine Spore, et al. "Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on aerial lidar." Remote Sensing of Environment 300 (2024): 113888.

Furthermore, while the paper focuses on single-year height estimation, it fails to acknowledge important work on multi-year time series analysis by Dixon et al. (2025), Kacic et al. (2023), and Turubanova et al. (2023).
Including these references would provide a more comprehensive understanding of how the proposed approach advances or differs from existing temporal analysis methods in forest structure mapping.

* Dixon, Dan J., Yunzhe Zhu, and Yufang Jin. "Canopy height estimation from PlanetScope time series with spatio-temporal deep learning." Remote Sensing of Environment 318 (2025): 114518.
* Kacic, Patrick, Frank Thonfeld, Ursula Gessner, and Claudia Kuenzer. "Forest structure characterization in Germany: novel products and analysis based on GEDI, Sentinel-1 and Sentinel-2 data." Remote Sensing 15, no. 8 (2023): 1969.
* Turubanova, Svetlana, Peter Potapov, Matthew C. Hansen, Xinyuan Li, Alexandra Tyukavina, Amy H. Pickens, Andres Hernandez-Serna, et al. "Tree canopy extent and height change in Europe, 2001–2021, quantified using Landsat data archive." Remote Sensing of Environment 298 (2023): 113797.

Other Strengths And Weaknesses:

STRENGTHS
* Novel integration of horizontal and vertical structure understanding at the pixel level
* Creative combination of different contrastive learning approaches for different alignment tasks
* Practical utility: addresses real-world challenges in Earth observation

WEAKNESSES
* Limited evaluation scope: restricted comparison with recent specialized models, limited geographical coverage (mainly French territory), and insufficient robustness testing across different years
* Methodological gaps: limited analysis of low-data regime behavior, insufficient discussion of failure cases, and lack of cross-validation across different environmental conditions
* Insufficient support for generalization claims: needs stronger evidence for outperforming specialized models, limited validation across diverse land cover datasets, and insufficient analysis of performance in different geographical regions

Other Comments Or Suggestions: No

Questions For Authors: Please refer to the above responses

Code Of Conduct: Affirmed.

Overall Recommendation: 2
Rebuttal 1:

Rebuttal: We thank reviewer **hPcT** for the thorough review, comments, and suggestions.

1. Claims and Evidence

> 1\. Concern (C): The paper's claim that it "often outperforms specialized supervised models" is partially supported by the evidence presented

Answer (A): We agree and have included performance results in comparison to Lang, Fayad, Liu, and Tolan (et al.). We have also added an additional dataset for this task (i.e., ALS). The results below clearly show that our model outperforms all current specialist canopy height (CH) models, even in the zero-shot (ZS) setting.

|Height Dataset|Metric|DUNIA (ZS)|Tolan|Liu|Lang|Fayad|
|--|--|--|--|--|--|--|
|ALS|rmse|3.1|9.2|3.9|6.5|3.9|
|GEDI|rmse|2.0|8.5|5.2|5.6|2.8|

> 2\. C: The analysis would benefit from more extensive comparisons across different data regime sizes to better understand the model's behavior

A: We chose the sizes based on empirical findings. We presented the lowest data size below which the products were unusable for a given task. In the revised version, we have added an ablation on lower data regimes. All models showed a degradation in performance. However, the model ranking remained the same as that presented in the original submission. Failure cases include overfitting or convergence towards a worse solution.

> 3\. C: There is limited discussion or analysis explaining why the model performs well in low-data scenarios

A: This is mainly due to the self-supervised training with a contrastive objective. In the main text (line 69), we have included two references that discuss it, and our results corroborate their findings.

2. Methods And Evaluation Criteria

> 1\. C: I strongly recommend that the authors compare the proposed method with well-established remote sensing foundational models

A: We had pre-trained DOFA, SatMAE, and DeCUR. Initially, they were not included due to their performance gaps on vertical-structure-related tasks.
Due to time constraints, we couldn't pretrain SatMAE++, and we couldn't find any implementation of SkySense. Below are the non-linear probing (MLP) results for our model (DUNIA), DOFA, SatMAE, and DeCUR. All models were probed concurrently on the same dataset. The results show that these models severely underperform compared to our model (except on the PF dataset).

|Dataset|Metric|DUNIA|SatMAE|DeCUR|DOFA|
|--|--|--|--|--|--|
|Height|rmse|1.34|10.5|11.0|11.0|
|Cover|rmse|9.8|30.2|28.5|29.2|
|CLC+|wF1|90.3|75.0|75.1|72.0|
|PF|wF1|82.2|79.8|78.9|78.8|

> 2\. C: Benchmarking against recognized benchmarks (e.g., BigEarthNet) to assess robustness across different years and in different geographical regions.

A: We thank the reviewer for this suggestion. First, we would like to point out that BigEarthNet is a scene understanding task (i.e., an image embedding (not pixel) is used to predict a multi-label). As such, patch-based models are expected to perform better than pixel-based ones. The results below show that even though DUNIA's encoder was not pre-trained on the BigEarthNet dataset, and scene understanding is not the intended use case for DUNIA, it still compares favorably with the other FMs.

|Dataset|Metric|DUNIA|CROMA|SatMAE|DeCUR|DOFA|
|--|--|--|--|--|--|--|
|BigEarthNet|mAP|84.9|84.3|82.1|82.8|83.5|

3. Essential References Not Discussed

A: Thank you for the suggestion. We have updated our manuscript accordingly. Please see 1\.1.

4. Weaknesses

> 1\. C: Restricted comparison with recent specialized models

A: Please see 1\.1.

> 2\. C: Limited geographical coverage (mainly French territory)

A: Please see 2\.2.

> 3\. C: Insufficient robustness testing across different years

A: We thank the reviewer for this comment. We have performed two test cases:

1. Use the pre-trained model for year 2020 and fine-tune it (MLP) for 2019 and 2021.
2. Pretrain the model using data from three years: 2019, 2020, and 2021.
For both cases, performance results on canopy height mapping (below) show no significant differences year-to-year in either scenario.

Test case 1: Below is the fine-tuning performance on height estimation:

|Dataset|Metric|2019|2020|2021|
|--|--|--|--|--|
|Height|rmse|1.35|1.34|1.40|

Test case 2: Below is the zero-shot performance on height retrieval:

|Dataset|Metric|2019|2020|2021|
|--|--|--|--|--|
|Height|rmse|2.4|2.0|2.1|

> 4\. C: Limited analysis of low-data regime behavior

A: Please see 1\.2.

> 5\. C: Cross-validation across different environmental conditions

A: We compared DUNIA in the zero-shot setting (ZS) across France's 10 major ecological regions (GRECOs A to J). These regions span 12 of the 18 Köppen-Geiger climate types found across Europe and geographically range from lowland plains and coastal plateaus to mountainous terrain. The standard deviation of the height, cover, and CLC+ metrics across the 10 GRECO regions was respectively 0.28 m, 0.26%, and 0.68%, indicating similar performance across different environmental conditions.
Summary: The paper introduces DUNIA, a new approach to learn pixel-level embeddings of Earth observation images. DUNIA is trained contrastively, aligning satellite images with full-waveform LiDAR data to enable understanding of both "vertical" and "horizontal" structures. Experiments measure the effectiveness of the embeddings in seven environmental monitoring tasks. They find the embeddings enable zero-shot classifiers to perform comparably to or outperform supervised specialists, and strong fine-tuning performance with low amounts of data compared to state-of-the-art.

## Update after rebuttal

The authors have addressed most of my concerns, so I have adjusted my score accordingly. I still think the paper has problems with methodological motivation and clarity, and presentation more broadly, which are why I haven't increased my score further. For example, why are two autoencoders used (encoder-decoder architectures) rather than simply two encoders with the third alignment model? Is there intuition for why the RVQ module is needed/helpful beyond what the authors state as "regularization by enforcing discrete representations" (which isn't clear to me why it's needed for this task)? These are just a few examples of design decisions that I don't think are clearly motivated, and there are many design decisions made in the paper. For decisions the authors are newly making, intuition should be provided. If designs are well-studied and motivated in prior work, that should be clearly stated and cited. I also think the tables are still hard to read. For example, there is inconsistent use of decimal formatting, inconsistent use of entries with values in parentheses, and lots of acronyms/shorthands which I don't think are necessary or could be made more clear.

Claims And Evidence: The claims are supported by clear and convincing evidence.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the task of learning pixel-level EO embeddings.
Encoding vertical information in the pixel-level embeddings through the contrastive procedure is an interesting idea, and the authors validate it on seven tasks spanning four datasets where strong pixel-level information is required for good performance. However, there may be at least one baseline that should be compared against for proper contextualization in the literature.

Theoretical Claims: No theoretical claims were made in this work.

Experimental Designs Or Analyses: The experimental designs and analyses presented in the main text are sound.

Supplementary Material: I skimmed through the entire Appendix. The supplementary material is extensive and provides additional details that support the main text, including more on experimental setups and additional results. The ablation studies contained within are particularly useful for understanding the impact of different design decisions, and a few qualitative examples are presented as well.

Relation To Broader Scientific Literature: The contributions of this paper have implications for how to design strong pixel-level embeddings, which have several applications in remote sensing / EO. The idea to contrast LiDAR data with satellite imagery is new as far as I am aware, and it may inspire future research on self-supervised learning for pixel-level EO data (or self-supervised learning for EO data more broadly).

Essential References Not Discussed: One notable self-supervised method [1] learns a pixel-level encoder for satellite imagery. This should be discussed, and potentially even compared to in the results for proper contextualization in the literature. Additionally, it does not seem like the paper has a "self-supervised learning for EO" related work section, which further takes away from its positioning in the literature and makes it difficult for readers to assess the key differences and contributions from prior work.

[1] Lightweight, Pre-trained Transformers for Remote Sensing Timeseries. Tseng et al. 2023.
Other Strengths And Weaknesses:

Strengths:
1. Introduces a novel technique for generating high-resolution, pixel-sized embeddings of EO data.
2. The method demonstrates broad applicability across a range of environmental monitoring tasks, sometimes outperforming specialists.
3. Provides extensive supplementary material that supports and extends the main findings.

Weaknesses:
1. Little-to-no discussion of self-supervised EO related work.
2. The presentation needs work overall. For example:
   1. There is a lack of intuitive explanations for methodological choices. This makes it difficult for readers to understand why the authors made certain decisions, putting them into question.
   2. Some figures and tables are unclear and difficult to interpret, which could hinder understanding.
   3. Minimal qualitative examples, which makes it hard for readers to gain a sense of how their method performs on specific examples compared to other methods.

Other Comments Or Suggestions:
1. The methodology section would benefit from more intuitive descriptions and examples to aid understanding.
2. Including a small figure to clearly demonstrate the advantages of pixel vs. patch-sized embeddings would be helpful. Perhaps revamping Figure 1 to do this could make sense.
3. Tables 1 and 2 are really hard to read and should be restructured to improve readability.
4. Figure 2 has a ton of details which makes it hard to follow. Are all of these details necessary, or can some be removed in favor of focusing on the important aspects and simplifying the whole figure?

Questions For Authors:
1. Table 1 suggests DUNIA underperforms on PASTIS significantly - why do the authors think that is?
2. Could you provide more intuitive explanations or visual examples that highlight why specific methodological choices were made?

Strong responses to the weaknesses as well as my other comments and questions could result in an improvement in my evaluation of the paper.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We would like to thank reviewer **WDGb** for the thorough review and the helpful comments.

1. Methods And Evaluation Criteria

> 1\. Concern (C): There may be at least one baseline that should be compared against for proper contextualization in the literature.

Answer (A): Thank you for this comment, also raised by **hPct**. We have added four additional baselines for the canopy height mapping task and one additional dataset (canopy height from ALS). Regarding the foundation model baselines (FMs), we have benchmarked against three recently proposed FMs. We invite reviewer **WDGb** to read our responses to reviewer **hPct** regarding the performance results.

2. Essential References Not Discussed

> 1\. C: It does not seem like the paper has a "self-supervised learning for EO" related work section.

A: We agree and thank you for raising this point. Originally, related works on self-supervised models for EO were cited based on their pre-training objectives. In the revised version, we have included a dedicated section for these models. This section now discusses the previously mentioned models, the new models that were included as part of our response to reviewer **hPct**, and **Presto**, the model mentioned by the reviewer.

3. Weaknesses

> 2\.1 C: There is a lack of intuitive explanations for methodological choices

A: Thank you. We would be grateful if the reviewer could guide us towards the passages where we failed to deliver a convincing argument for a given design choice. In the original submission, we discussed the following design choices:

1. Tokenization layer.
2. Encoder and decoders.
3. Losses.
4. The choices for the two AEs.
5. Waveform generation.

We realized that the reliance on image composites (i.e., median reflectance of a time series), instead of a time series as input, was not entirely clear. This has been addressed.
> 2\.2 C: Some figures and tables are unclear and difficult to interpret

A: Please see our responses to your concerns 4\.3 and 4\.4. In particular, we have:

1. Modified Figure 2.
2. Modified Tables 1 and 2.

> 3\. C: Minimal qualitative examples.

A: We agree. In the revised version of our manuscript, we have split Figure 3 into three figures and included additional maps / products. We hope that this change satisfies this concern.

4. Other Comments Or Suggestions

> 1\. C: The methodology section would benefit from more intuitive descriptions

A: Please refer to our response in 3\.2\.1.

> 2\. C: Including a small figure to clearly demonstrate the advantages of pixel-sized vs. patch-sized embeddings

A: Thank you. We have included a new figure in the Appendix of the revised version, which shows the difference between the two variants and the loss of detail induced by patch-sized embedding models.

> 3\. C: Tables 1 and 2 are really hard to read and should be restructured to improve readability

A: We appreciate your suggestion. In the revised version, Tables 1 and 2 are now four tables. Please refer to our response to reviewer **7y1C** (Concern 1.2) on this subject.

> 4\. C: Figure 2 has a ton of details which makes it hard to follow

A: We have simplified Figure 2, only keeping the main blocks that help to understand the methodology. The original Figure 2 has been moved to the Appendix as a reference.

5. Questions For Authors

> 1\. C: Table 1 suggests DUNIA underperforms on PASTIS significantly - why do the authors think that is?

A: Our objective with DUNIA is to ensure accessibility and broad applicability. Thus, several compromises had to be made. One of them was not to rely on time series (TS) data as input, but rather a median composite that still retains some phenological information. This has several advantages:

1. It alleviates the need to store large volumes of data.
2. It makes the model more efficient, as it only processes a single image instead of a full TS.
3. It broadens applicability, especially over areas with persistent cloud cover, where having a TS over a given area would not be possible.

The trade-off is that a simple aggregation function like the median inevitably captures less temporal information than a dedicated TS-aware module. However, in six downstream tasks, we show that TS input is not always necessary to achieve strong performance. We believe that our approach has a good balance between accuracy, efficiency, and accessibility, even if it results in lower performance on datasets like PASTIS, which would strongly benefit from richer temporal information.

> 2\. C: Could you provide more intuitive explanations or visual examples that highlight why specific methodological choices were made?

A: In the revised version, we have added an additional figure to illustrate the advantages of pixel-based versus patch-based embeddings. We have also clarified our reasoning regarding the use of composite imagery vs. TS. If the reviewer feels that further justifications are needed, we will be happy to provide additional clarifications.
Summary:
- The paper introduces DUNIA, a novel framework that generates pixel-level embeddings through cross-modal alignment between Sentinel-1 & 2 imagery and LiDAR waveform data.
- The model incorporates several components, including a multi-modal pre-training model, two autoencoders, dual decoders with neighborhood attention, contrastive losses (Zero-CL and VICReg), and a latent diffusion model for waveform generation.
- Extensive experimental evaluations demonstrate the framework's effectiveness in multiple downstream Earth observation tasks, achieving state-of-the-art results in zero-shot and low-shot learning scenarios.

Claims And Evidence: The paper's claims about the efficiency and effectiveness of the proposed method are well-supported by extensive experiments. The authors provide detailed quantitative results across multiple datasets and tasks, clearly validating the benefits of their pixel-level embeddings and cross-modal alignment strategies.

Methods And Evaluation Criteria: The authors have carefully selected relevant datasets, performance metrics, and models to construct their framework, reinforcing the robustness and applicability of their approach.

Theoretical Claims: N/A

Experimental Designs Or Analyses: The experimental design is valid, and the analysis is discussed in comprehensive detail, reinforcing the credibility of the approach.

Supplementary Material: Yes, I have read the supplementary material whenever additional details were needed, as referenced in the main paper.

Relation To Broader Scientific Literature: The paper not only delivers significant performance enhancements but also rigorously benchmarks against established literature baselines, underscoring its substantial potential to advance satellite-based tasks.

Essential References Not Discussed: The authors provide a comprehensive discussion of related work, supported by an extensive list of references.
Other Strengths And Weaknesses:

Strengths:
- Comprehensive combination of diverse, advanced modeling techniques including multi-modal pre-training, dual decoders, and latent diffusion models.
- The paper is well-written and structured, making it easy to follow the detailed analyses and experimental evaluations.

Other Comments Or Suggestions: N/A

Questions For Authors:
- The methods section could be improved by adding a brief subsection that explains the overall flow of the proposed pipeline in a simplified manner. Currently, it relies on a complex figure and separate explanations of individual modules, leaving the actual flow of the method unclear.
- Tables 2 and 3 could be refined, as they currently appear to present a large amount of information without clear organization.
- It is not clear why the authors selected the FMs AnySat and CROMA for comparison. Is it because these are the only two models that use Sentinel-1 and Sentinel-2 data in their pre-training? If other models also utilize this data, an explanation for choosing only these two would be helpful.

Code Of Conduct: Affirmed.

Overall Recommendation: 5
Rebuttal 1: Rebuttal: First, we thank the reviewer for their feedback and their appreciation of our work. The following are our responses to your comments. 1. Questions For Authors > 1\. Concern (C): The Methods section could be improved by adding a brief subsection that explains the overall flow of the proposed pipeline in a simplified manner. Currently, it relies on a complex figure and separate explanations of individual modules, leaving the actual flow of the method unclear. Answer (A): This concern was also raised by **WDGb**. 1. We simplified the Methods section and revised the first paragraph of Section 3 (Approach) to make it easier for the reader to follow. 2. We have simplified Figure 2, only keeping the main blocks that aid in understanding the methodology. The original Figure 2 has been moved to the Appendix as a reference. > 2\. C: Tables 2 and 3 could be refined, as they currently appear to present a large amount of information without clear organization. A: This comment was also raised by **WDGb**. Due to space limitations, we had to format them as they were so that they could be included in the main text. In the revised version, Table 1 has been split into two tables: the first table now includes only the default configuration for the zero-shot classifier (i.e., sample size S and different KNNs), while the second table includes the modifications to the zero-shot classifier. Table 2 has also been split in a similar fashion. > 3\. C: It is not clear why the authors selected FMs AnySat and CROMA for comparison. Is it because these are the only two models that use Sentinel-1 and Sentinel-2 data in their pre-training? If other models also utilize this data, an explanation for choosing only these two would be helpful.
A: Although data requirements were a factor in selecting the models we compared against, the decision to include these two models was based on the type of EO applications they target, performance compared to other EO foundation models, and also their recency and novelty. However, following the comments of the reviewer **hPct**, we added three more models as baselines namely: DOFA, SatMAE, and DeCUR. We invite reviewer **7y1C** to see our response to reviewer **hPct** for the comparison results. Again, thank you for your time reviewing our paper.
BalancEdit: Dynamically Balancing the Generality-Locality Trade-off in Multi-modal Model Editing
Accept (poster)
Summary: The paper introduces BalancEdit, a method designed to address the challenge of balancing generality and locality in multi-modal model editing. Existing methods often fail to dynamically adjust the influence scope of edits, leading to over-correction or under-correction. The authors introduce the generality-locality trade-off in multi-modal editing and create OKEDIT, a dataset to evaluate this balance. Then, the proposed solution, BalancEdit, integrates an adapter into a vision-language model layer without altering the original weights. This adapter functions as a codebook that stores edits by caching input-error embeddings and updated transformation layers. Experiments show it outperforms baselines (e.g., IKE, MEND, GRACE) across metrics, achieving state-of-the-art results in targeted updates. Claims And Evidence: Strengths: * BalancEdit balances generality and locality better than baselines. * Efficiency of BalancEdit: Comparison of computational costs and data requirements strengthens this claim. * Support for sequential edits. Weaknesses: * **Superiority of OKEDIT dataset**: While OKEDIT is described as more comprehensive than MMEDIT (table 2), details about its construction (e.g., GPT-4-generated rephrasings, diffusion-model-generated images) raise questions about potential biases. No qualitative examples of "harder" locality samples are provided compared with MMEDIT. In addition, the lack of dataset analysis for human-annotated benchmarks weakens its validity as a gold label. * **Dynamic influence radius mechanism**: The radius calculation (Eq. 3) depends on hyperparameter $\alpha$ and distances between positive/negative samples. While ablation studies show α’s impact, the paper does not explain how α is chosen or whether it generalizes across tasks. 
* **Negative sample selection**: Although the authors compare white and black negative anchors, I don't understand why a pure colour is used as a negative sample, rather than a counterfactual example with a very different label. Methods And Evaluation Criteria: * **Layer Selection**: The choice of layer (e.g., specific transformer blocks) is critical but not rigorously justified. Performance might degrade if suboptimal layers are selected. * **Simplistic Positive Samples**: In addition to the negative samples mentioned above, the positive samples are generated by simple textual perturbation. Is it possible to retrieve associated images and use them as positive samples? * The loss function in Eq. 2 is not clear. Theoretical Claims: There seems to be no issue with the theoretical part. Experimental Designs Or Analyses: * Why doesn't time spent fine-tuning count as training time in Table 5? * I am more interested in knowing the extremes of hyperparameter $\alpha$ in the ablation experiments, e.g. 0 or 1. Supplementary Material: The supplementary materials are complete. Relation To Broader Scientific Literature: The paper’s key contributions are closely tied to and advance several strands of prior work in model editing, multi-modal learning, and parameter-efficient adaptation. Essential References Not Discussed: No. Other Strengths And Weaknesses: Please see the methods and evaluation criteria. Other Comments Or Suggestions: Please see above. Questions For Authors: Please see above. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks a lot for the valuable feedback! We would like to clarify some points below. >Why is our locality sample harder? A: Thanks for the question. Our locality sample is harder due to the **semantic similarity** to the editing knowledge. For example: |Aspect|MMEDIT|Our (OKEDIT)| |-|-|-| |**Edit Input**|Q: How many tennis balls are in the picture? A: 0 → 2| Q: What brand is this computer? A: HP → Lenovo| |**Locality Sample**|Q: What sport can you use this for? Image: bicycle A: riding|Q: What brand is this computer? Image: Dell laptop A: Dell| |**Relation to Edit**|Semantically **unrelated**|Semantically **similar**| |**Difficulty**|Low: easy to distinguish|High: requires fine-grained reasoning| We will add more qualitative comparisons in the paper. >How is the validity of the dataset ensured? A: Thanks for raising this important point. We provide the **Category-level Statistics** of OKEDIT in our response to Reviewer 55Ec, showing its generality. We also performed **human verification** on a subset of samples to ensure the correctness and the alignment with the intended edit scope. This helps establish the reliability of OKEDIT. >How is α selected? A: Thanks for the valuable concern. We select α using a small **held-out set** of only 5 unrelated samples. Since different models may have different latent feature distributions, α is treated as **model-dependent**. Once chosen, the same α is fixed per model (e.g., α=0.2 for MiniGPT-4) for all evaluations. >Why is our negative sample better than a generated counterfactual negative sample? A: Thanks for the thoughtful question. We want to clarify that using counterfactual samples presents several limitations: - **No Extra Knowledge Assumption:** We aim to **avoid requiring additional knowledge** beyond the editing input for our goal of being **realistic and efficient**, but counterfactual samples require external knowledge.
- **Intractability of Counterfactuals:** Generating high-quality counterfactuals is costly, while ours uses an **efficient, universal, and reusable** negative anchor. To further justify the effectiveness of our negative anchors, we conducted an empirical comparison with a **random negative sample baseline** (in the second last response to reviewer ZhWy). We still achieve better results. >How is the editing layer chosen? A: Thanks for raising this concern. In our method, we select the editing layer based on prior work (MEND, MMEDIT). To validate the robustness of our method, we conducted an ablation study on another editing layer below: |Method|Acc|T-Gen|I-Gen|Loc|HM|Model| |-|-|-|-|-|-|-| |BalancEdit|100|99.87|76.46|53.14|71.58|MiniGPT4| |diff layer|100|100|64.03|66.39|73.75|| |BalancEdit|100|98.89|65.38|61.18|71.85|BLIP2| |diff layer|100|100|78.43|44.99|66.70|| Even if a different layer is chosen, **we can still achieve good performance**, showing the robustness of our method. > Why is your positive sample better than retrieving associated positive images? A: Thanks for the suggestion. We would like to clarify that retrieval would introduce **additional overhead**, including retrieval pipelines and a sample bank. Retrieval also assumes that **visually similar positive samples exist**, which may not hold for long-tail or rare edits. In contrast, our method uses **textually rephrased questions**, enabling **efficient and consistent construction** of positive samples, keeping our framework lightweight and generalizable across edits. >The loss function in Eq. 2 is not clear. A: Thanks for pointing this out. $L$ is the **language loss**, specifically the **next-token prediction loss** used by the base LLM. This loss ensures that the updated model $f_{\text{new}}$, when processing input $(i, t)$, generates the desired new answer $y_n$. >What does the time in Table 5 mean? A: Thanks for the question.
In Table 5, **“training” refers to the offline pretraining phase**, such as training a metanet (e.g., MEND). However, our method does not require such a phase. The **time spent fine-tuning in our method is counted under "editing time"**, as it is performed **per edit**. The fast edit time shows that our method is **efficient**. >What is the extreme situation for α? A: Thanks for the suggestion. We conducted ablation experiments using extreme values of α on MiniGPT-4 with the OKEDIT dataset: |Method|Acc|T-Gen|I-Gen|Loc|HM| |-|-|-|-|-|-| |BalancEdit|100|99.87|76.46|53.14|71.58| |α=0|100|100|95.19|16.30|36.65| |α=1|100|45.94|24.22|100|41.06| - **When α = 0**, the influence radius is entirely determined by the **negative sample**, leading to **over-generalization and lower locality** (e.g., 16.30 Loc), as the edit is applied too broadly. - **When α = 1**, the radius is determined by the **positive sample** only, leading to **over-localization and poor generalization** (e.g., 24.22 I-Gen), since the edit applies to too narrow a region. These results confirm that α controls the trade-off between generality and locality as intended, which validates our assumption.
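For intuition, the interpolation behind these extremes can be sketched in a few lines. This is a hedged toy illustration, not the paper's code: `d_pos` and `d_neg` are hypothetical latent-space distances from a cached edit key to its positive and negative anchors.

```python
def influence_radius(d_pos: float, d_neg: float, alpha: float) -> float:
    """Convex interpolation between the positive- and negative-anchor distances.

    alpha = 1 -> radius shrinks to d_pos (over-localization, poor generality);
    alpha = 0 -> radius grows to d_neg (over-generalization, poor locality).
    """
    return alpha * d_pos + (1.0 - alpha) * d_neg

# Hypothetical distances: rephrased positives sit closer to the key than
# the black-image negative anchor, so d_pos < d_neg.
d_pos, d_neg = 0.2, 0.8

print(influence_radius(d_pos, d_neg, 1.0))  # 0.2: narrow scope
print(influence_radius(d_pos, d_neg, 0.0))  # 0.8: broad scope
print(influence_radius(d_pos, d_neg, 0.2))  # intermediate, as in the MiniGPT-4 setting
```

Any α strictly between 0 and 1 yields a radius between the two anchor distances, which is exactly the trade-off the ablation above exercises.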
Summary: Large-scale multimodal models suffer from knowledge decay as facts change, and traditional fine-tuning is often impractical due to their size. Instead, direct knowledge editing is preferred, but current methods neglect differences in fact influence, creating a trade-off between generality and locality. To address this, the authors introduce the generality-locality trade-off, propose the OKEDIT dataset for evaluation, and present BalancEdit—a method that dynamically balances these aspects by generating positive and negative samples and editing the model’s latent space via a discrete, localized codebook without altering the underlying weights. Claims And Evidence: The claims are supported by the experiments. Methods And Evaluation Criteria: The proposed method makes sense for the problem. Theoretical Claims: This paper does not contain theoretical claims. Experimental Designs Or Analyses: The experimental design is valid. Supplementary Material: This paper does not have supplementary material. Relation To Broader Scientific Literature: There is a lot of related work; it is recommended that this paper clearly highlight the differences and contributions compared to existing multimodal knowledge editing works. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: Introduces the generality-locality trade-off, a unique perspective in multimodal model editing that explicitly addresses the balance between global consistency and localized accuracy. The creation of the OKEDIT dataset provides a targeted benchmark for evaluating this trade-off, which can facilitate further research in the area. BalancEdit’s mechanism of generating both positive and negative samples to determine the influence scope is a sophisticated approach that enhances the precision of edits.
Weaknesses: The success of the method hinges on the quality and comprehensiveness of the OKEDIT dataset; any limitations or biases in the dataset could affect the evaluation and generalization of the results. Other Comments Or Suggestions: See weaknesses. Questions For Authors: The multimodal large models used in the experiments are somewhat outdated. It is recommended to conduct experiments on the latest multimodal large models, such as llava-onevision, Qwen2-VL, and others. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for providing valuable feedback to improve our paper. We have addressed your hesitations below. > Please highlight the contributions compared to existing works. A: We thank the reviewer for the valuable suggestions. We agree that clearer differentiation is necessary. Below, we restate our key contributions while explicitly contrasting them with existing multimodal knowledge editing works: 1. **We formulate the generality-locality trade-off in multi-modal model editing.** In contrast, prior works such as MMEDIT [1] and GRACE [2] focus either on general-purpose editing or lifelong editing, but **do not explicitly define or evaluate** the trade-off between generality and locality in the multi-modal setting. 2. **We introduce OKEDIT, a new benchmark dataset designed to evaluate both generality and locality.** In contrast, MMEDIT [1] uses randomly chosen pairs as locality samples, which may result in loose alignment and limited coverage. 3. **We propose BalancEdit, an efficient editing framework that dynamically adjusts the influence scope of edits.** In contrast, IKE [3] assumes a wide influence range due to retrieval-based prompting, while GRACE [2] uses fixed-radius memory lookups. We introduce a radius-based mechanism using positive and negative samples to support **dynamic control over edit scoping**. 4. **We design a parameter and data efficient mechanism with little training overhead per edit.** In contrast, MEND [4] and similar meta-learning approaches require pretraining on large edit datasets, which are hard to obtain in the multi-modal domain. BalancEdit **avoids pretraining**, supports multi-edit scenarios. We will revise the Introduction and Related Work sections to make these distinctions clearer in the final version.
[1] Cheng et al., Can We Edit Multimodal Large Language Models?, arXiv 2023 [2] Hartvigsen et al., Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors, NeurIPS 2023 [3] Zheng et al., Can We Edit Factual Knowledge by In-Context Learning?, arXiv 2023 [4] Mitchell et al., Fast Model Editing at Scale (MEND), ICLR 2022 > How are the quality and comprehensiveness of the OKEDIT dataset ensured? A: Thank you for the important observation. While OKEDIT is our main evaluation benchmark, we have taken several steps to ensure the **robustness, diversity, and generalizability** of both the dataset and our method: 1. **Cross-dataset Validation:** We evaluated BalancEdit not only on OKEDIT but also on the **MMEDIT dataset**, and observed consistently strong performance (see Table 3), which supports the **generalizability** of our method across datasets with different construction paradigms. 2. **Grounding in a Verified Source Dataset:** OKEDIT is built upon the **OKVQA dataset**, which has been widely used and shown to exhibit **limited bias**. This provides a reliable and diverse base of real-world image-text pairs for constructing edits. 3. **Diverse Category Coverage:** Our dataset spans **11 broad knowledge categories** (e.g., animals, transportation, brands, people, holidays, etc.), ensuring that our benchmark reflects both **common and long-tail factual knowledge**. This promotes **balanced evaluation** across a wide range of input types. 4. **Harder and More Realistic Locality Samples:** Compared to MMEDIT, OKEDIT constructs **semantically similar locality distractors**, making the evaluation more challenging and reflective of **real-world edit interference scenarios**. 5. **Human Verification and Quality Control:** We conducted **human validation** on a subset of the dataset to ensure correctness of labels and alignment with the intended edit scopes.
Together, these points demonstrate that OKEDIT is a **carefully designed and reliable benchmark**, and that BalancEdit is not overly dependent on a specific dataset structure. We will revise the manuscript to better highlight these points. > How is the generalization of the method on newer models? A: We thank the reviewer for the constructive suggestion. To evaluate the generalization of our method on more recent vision-language backbones, we conducted additional experiments using **Qwen-VL** on the MMEDIT dataset. The results are shown below: |Method|Acc|T-Gen|I-Gen|Loc|HM| |-|-|-|-|-|-| |Base|18.18|17.13|14.14|NA|NA| |GRACE|99.88|30.36|32.18|87.78|39.78| |BalancEdit|100.00|70.96|71.51|41.96|**57.79**| Our method achieves a **higher harmonic mean (HM)** compared to GRACE, indicating a **more balanced trade-off between generality and locality**. These results also validate the **generalizability** of BalancEdit to newer backbone models beyond those originally reported in the paper. We will add more backbones in future work.
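As a side note on the metric, the HM column in these tables is consistent with the three-way harmonic mean of T-Gen, I-Gen, and Loc (Edit Acc does not enter it). A minimal sanity check; the formula here is inferred from the reported numbers, not quoted from the paper's code:

```python
def harmonic_mean(*scores: float) -> float:
    """Harmonic mean of strictly positive percentage scores."""
    return len(scores) / sum(1.0 / s for s in scores)

# Reproduce the Qwen-VL rows from the table above:
print(round(harmonic_mean(70.96, 71.51, 41.96), 2))  # 57.79 (BalancEdit)
print(round(harmonic_mean(30.36, 32.18, 87.78), 2))  # 39.78 (GRACE)
```

The harmonic mean punishes any single low score, which is why it is a natural summary of the generality-locality balance.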
Summary: Existing multi-modal model editing methods struggle to dynamically adjust the influence scope of an edit, balancing generality and locality. To address the issue, this paper proposes a novel model editing method, i.e., BalancEdit, with the following process: * For each image-text pair to be edited, the averaged embedding and the associated transformation and radius are cached. The transformation is learnt by standard fine-tuning and the radius is an interpolation of distances with positive (rephrased text, same image) and negative (same text, black image) samples. * For a new image-text pair, an edited transformation is activated if it falls within the scope of an edited sample; otherwise, the unedited transformation is applied. The proposed method is empirically compared with FT, IKE, MEND, and GRACE on MMEDIT and OKEDIT datasets in both single and sequential editing settings, where BalancEdit enjoys superior accuracy, text generality, harmonic mean, and efficiency. An ablation study is conducted to illustrate the effects of hyper-parameters, distance function, and anchors. Claims And Evidence: The author claims that BalancEdit is parameter-efficient on line 108, while I understand that it requires caching a transformation for each edit, so memory consumption grows linearly with the number of edits. Methods And Evaluation Criteria: I understand the intuition behind the algorithm while I am curious about whether the design is sufficiently justified. * The averaged embedding is chosen as the cached key. I wonder whether there is any reference or experiment to support the choice. Will it be better than the embedding of certain influential tokens discovered by prior work [1]? * The same text with a black image is selected as the negative sample. Is there any empirical study to demonstrate its superiority to a straightforward baseline, i.e., randomly choosing irrelevant text-pair samples?
[1] Locating and Editing Factual Associations in GPT, NeurIPS 2022. Theoretical Claims: This work is mostly empirical. Experimental Designs Or Analyses: See Methods and Evaluation Criteria. Supplementary Material: NA Relation To Broader Scientific Literature: NA Essential References Not Discussed: NA Other Strengths And Weaknesses: * The idea of the work makes sense and is interesting to me. Other Comments Or Suggestions: * No period on line 87. * The Metric paragraph in Sec. 4.1 seems like it should be a separate paragraph. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful feedback; we are glad for the opportunity to clarify some points. >Why is our method parameter-efficient? A: Thanks for the valuable comment! Knowledge editing is a challenging task, and we acknowledge that memory grows linearly with the number of edits due to cached transformations. However, compared to baselines like full fine-tuning or meta-learning (e.g., MEND), BalancEdit is still parameter-efficient per edit, as it **only** modifies a single layer and avoids retraining the full model. Moreover, our transformation can be replaced by a PEFT method (e.g., LoRA), which would further reduce memory usage. We chose full fine-tuning for **universality** and for **clearer and fair comparisons**. >Why use averaged embedding as the key? A: We thank the reviewer for the thoughtful question. We would like to clarify that using the **averaged embedding as a sentence-level representation** is a common and effective strategy (e.g., SimCSE [1], SBERT [2]), and we adopt a similar approach to cache edited knowledge in BalancEdit. We chose this method over influential-token-based strategies (e.g., ROME [3]) for two main reasons: 1. **Modality Gap:** Prior work like ROME is designed for *text-only* models (e.g., GPT), while we focus on **multi-modal** models, where token-level attributions are harder to isolate due to vision-language fusion. 2. **Simplicity and Generalization:** Unlike token-attribution methods, our approach does not require edit-specific or model-specific token selection, making it easier to generalize across diverse edits and architectures. We find that average embedding is robust, simple, and effective for our editing framework. [1] Gao et al., SimCSE, EMNLP 2021 [2] Reimers & Gurevych, Sentence-BERT, EMNLP 2019 [3] Meng et al., Locating and Editing Factual Associations in GPT, NeurIPS 2022 >Why is your negative sample selection better than a randomly chosen negative sample?
A: Thank the reviewer for the insightful question. We conducted an empirical comparison with a **random negative sampling baseline**, where a randomly chosen text-image pair (assumed irrelevant to the edit) is used as the negative anchor. | Method | Edit Acc | T-Generality | I-Generality | Locality | HM | Model | |-----------------------|----------|--------------|--------------|----------|--------|-----------| | **BalancEdit** | 100.00 | 98.89 | 65.38 | 61.18 | **71.85** | BLIP-2 | | Random Negative Sample| 100.00 | 100.00 | 49.12 | 65.08 | 65.61 | BLIP-2 | | **BalancEdit** | 100.00 | 99.87 | 76.46 | 53.14 | **71.58** | MiniGPT-4 | | Random Negative Sample| 100.00 | 99.00 | 66.93 | 45.92 | 64.08 | MiniGPT-4 | **BalancEdit consistently outperforms the random baseline** in harmonic mean (e.g. 71.85 vs. 65.61), showing a better balance between generality and locality. This supports the effectiveness of our **black image-based negative anchor**, which provides a **fact-agnostic, consistent, and efficient way to define a lower bound** in representation space. In contrast, random negative samples: - Depend on external unrelated examples, - Are unstable in quality and relevance, - And require assumptions that may not hold across diverse domains. Our method avoids these issues, making it more **robust and scalable** for real-world editing scenarios. We will include this analysis and clarify the design rationale in the final version. >Typos: A: Thanks for the detailed review. We will revise them accordingly. --- Rebuttal Comment 1.1: Comment: > Parameter-efficient The author points out that the proposed method is more parameter-efficient than fine-tuning and meta-learning. I deem that the tuned parameters in fine-tuning and meta-learning are hyper-parameters (that can be set to a single linear layer), thus they are not inherently more expensive than the proposed method. 
Perhaps a more suitable claim is that the proposed method achieves a better performance under limited budgets (where only a single linear layer is tuned). I am still skeptical of the claim regarding parameter efficiency. When editing thousands of knowledge items, the additional parameters may (significantly) exceed the original parameters of the model. > Average embedding I appreciate the explanation. I understand that the average embeddings of SimCSE and SBERT are tuned as representations of the sentence, in contrast to the zero-shot setting in the proposed method, so they may not fully support the choice. > Random negative I appreciate the author for the additional experiment as I deem it is necessary to show the proposed method at least outperforms a straightforward baseline. --- Reply to Comment 1.1.1: Comment: Thank you for your reply! We would like to clarify it below. >Parameter-efficient A: Thank you for the thoughtful feedback. We agree a more accurate framing is that BalancEdit **achieves strong performance under constrained tuning budgets**, particularly when compared to full fine-tuning or meta-learning-based methods. We appreciate the opportunity to clarify our claim on efficiency as follows. 1. **Scalability via edit merging:** To address cumulative cost when editing **many knowledge points**, we design a **merging mechanism for sequential edits** (Section 3.2 line 200). When edits occur in a coherent region of the latent space, they can be **consolidated into a single transformation**, reducing storage and memory overhead. We will further highlight this mechanism and its benefits in the final version. 2. **No pretraining or shared meta-parameters:** Unlike existing methods such as MEND, BalancEdit avoids **offline meta-learning**, large shared modules, or retraining on auxiliary datasets. Edits are fully modular and isolated, and thus efficient per edit.
We will revise the efficiency claims as suggested and add more discussions on cumulative cost and merging in multi-edit scenarios. >Average embedding A: Thank you for the thoughtful follow-up. We agree that SimCSE and SBERT are tuned for sentence representation. In fact, we were trying to clarify that **mean pooling can be effectively employed to aggregate contextual information across all tokens**, as demonstrated in related work and experiments [1–5], even from decoder-only models such as GPT or LLaMA, without any additional training. Our intention was not to claim that average pooling is the optimal strategy, but rather that it is a **commonly used and practical approximation** of sentence-level semantics, similar in spirit to approaches such as last-token pooling (e.g., GRACE) or influential neuron tracing (e.g., ROME). Compared to token attribution methods, our use of average embeddings is **model-agnostic, efficient, and requires no task-specific intervention**. Moreover, our experiments validate that this strategy is **effective in practice**, achieving strong performance across datasets, and achieving a balance between generality and locality. We will clarify this design choice and add the above references in the final version. [1] Tao et al., LLMs are Also Effective Embedding Models: An In-depth Overview, 2024 [2] Su et al., One Embedder, Any Task: Instruction-finetuned Text Embeddings, ACL 2023 [3] BehnamGhader et al., LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders, 2024 [4] Springer et al., Repetition Improves Language Model Embeddings, 2024 [5] Lee et al., Gecko: Versatile Text Embeddings Distilled from Large Language Models, 2024 > Random negative Experiment A: We truly appreciate your constructive feedback and will make sure to include this comparison in the final version. We sincerely appreciate the time you dedicated to the review. We hope that our response can address the concerns you have.
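To ground the pooling-and-lookup discussion above, here is a minimal NumPy sketch of the inference path as we understand it from Section 3.2. Everything here is illustrative: the toy dimensionality, the Euclidean distance (the paper also ablates distance functions), and all names are assumptions rather than the actual implementation.

```python
import numpy as np

def make_key(hidden_states: np.ndarray) -> np.ndarray:
    """Mean-pool the layer l-1 token hidden states into one key vector."""
    return hidden_states.mean(axis=0)

def lookup(query_key: np.ndarray, codebook: list):
    """Return the cached edit id if the query falls inside some edit's
    influence radius; otherwise None (i.e., use the unedited layer l)."""
    for key, radius, edit_id in codebook:
        if np.linalg.norm(query_key - key) <= radius:
            return edit_id
    return None

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))        # 5 tokens, toy hidden size 8
key = make_key(tokens)                  # key cached at edit time
codebook = [(key, 0.5, "HP->Lenovo")]   # one edit with its influence radius

print(lookup(key, codebook))            # HP->Lenovo: inside the edit's scope
print(lookup(key + 10.0, codebook))     # None: far outside the radius
```

If the lookup returns an edit id, the cached fine-tuned copy of layer l would be applied; otherwise the original layer runs unchanged, which is why unrelated inputs stay unaffected.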
Summary: The paper introduces a method for efficiently updating multi-modal LLMs. Existing model editing struggles with balancing generality and locality. BalancEdit addresses this by using a codebook mechanism that stores discrete edits, dynamically adjusting their influence scope with positive and negative samples. The authors introduce OKEDIT, a dataset designed to evaluate this trade-off. Experiments on MiniGPT-4 and BLIP-2 OPT show that BalancEdit surpasses baseline methods while maintaining efficiency and interpretability. Claims And Evidence: The claims in the submission are partially supported by empirical evidence, including quantitative evaluations, baseline comparisons, and ablation studies. The introduction of the OKEDIT dataset provides a structured way to assess the generality-locality trade-off, and experimental results on MiniGPT-4 and BLIP-2 OPT indicate that BalancEdit performs effectively in terms of accuracy, efficiency, and interpretability. Efficiency claims are supported by editing time comparisons. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for multi-modal model editing, with BalancEdit addressing the generality-locality trade-off and OKEDIT providing a structured benchmark. The chosen metrics—editing success, generality, and locality—effectively assess model edits, and comparisons with baselines support the evaluation. Theoretical Claims: I have checked the equation in the paper. There is no proof in the paper. Experimental Designs Or Analyses: The experimental design includes baseline comparisons, metrics (harmonic mean of generality and locality), and ablation studies to validate BalancEdit. The OKEDIT dataset serves as a relevant benchmark, while efficiency claims are supported by editing success and time analyses. Supplementary Material: NA Relation To Broader Scientific Literature: LLM editing is a promising research direction for cost‑effective revision of large language models. 
It can further remove harmful or biased content that might persist despite standard safety alignment mechanisms. Essential References Not Discussed: NA Other Strengths And Weaknesses: Strengths 1. This paper proposes BalancEdit, which dynamically optimizes the generality-locality trade-off without modifying model weights. 2. The proposed method outperforms baseline methods in accuracy, efficiency, and sequential editing while requiring minimal computational cost. 3. This paper discusses the radius of the influence scope, which is useful for editing-related research. Weaknesses 1. The organization of the paper, especially of the method section, is not clear enough, making it hard to follow. (Detailed examples are in the Questions section.) 2. It is questionable how the quality of the generated images is ensured for generality and locality respectively, since sometimes there may not be a clear boundary between them. The test dataset is also built on generated data, which increases my concern. 3. The reproducibility of the method is another concern. Other Comments Or Suggestions: Please enlarge the font size in Figure 3 to improve readability. Questions For Authors: It is still difficult for me to understand what the transformation (v) is. It is hard to understand Eq. (2). What is L in this equation? What is a specific key k? “By caching embeddings for input errors and the updated knowledge transformation layer that decodes into the desired model outputs”… Could you explain it in detail? Why can the hyperparameter α “adjust the distance” in Eq. (3)? If I understand correctly, α is used to adjust the weight of different distances rather than to “adjust the distance”. How did you ensure the quality of the generated images for generality and locality respectively? Could you provide the statistics of each category (vehicles, people, etc.) and the number of images under generality and locality? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for the valuable feedback. We are glad for the chance to clarify some points. >Quality of the generality and locality samples? A: We appreciate the reviewer’s thoughtful concern. To ensure the semantic quality of the OKEDIT dataset and the meaningful separation between generality and locality, we adopt a carefully designed generation pipeline. The detailed generation process is provided in Appendix A.1. Here we provide a concrete example. The counterfactual knowledge being edited is that ***the computer that looks like an "HP" is named Lenovo.*** Specifically, the original image is an HP-brand computer, the original question is ***'What is the brand of it'***, the original result is ***'HP'***, and the new result is ***'Lenovo'***. - **Image generality sample generation:** 1. **Generality object confirmation:** We ask GPT-4 about the objects and scene that should be in the image. The prompt is shown in Appendix A.1. GPT would answer ***"A HP laptop"***. 2. **Generality sample generation:** Based on the object, we prompt the diffusion model to generate the images. Since we provide the exact object, the model can generate correct images. - **Image locality sample generation:** 1. **Locality answer generation:** We first prompt GPT-4 to generate a distractor, e.g., *"Given (question: What is the brand of it, A: HP, B: Lenovo), what could be another option? Short answer. C: []"*. GPT would generate a distractor, such as ***"Dell"***. 2. **Locality object confirmation:** Similarly, we ask GPT-4 about the objects and scene that should be in the image. GPT would answer ***"A Dell laptop"***. 3. **Locality sample generation:** Based on the locality object, we ask the diffusion model to generate the correct images. In this way, having clear instructions for creating an image helps ensure its quality. >What is the transformation ($v$)? What are $L$ and a specific key $k$? A: We apologize for the confusion about Eq. (2).
We clarify each component of Eq. (2) and will update the final version of our paper accordingly: **Transformation $v$**: The transformation refers to a **specific layer in the LLM**. During an edit, we **fine-tune this transformation layer** to encode the new knowledge. Thus, $v$ is an **updated version of an existing LLM layer**, reused only for inputs within the influence radius of a particular edit. **Loss $L$**: $L$ is the **language loss**, specifically the **next-token prediction loss** used by the base LLM. This loss ensures that the updated model $f_{\text{new}}$, when processing input $(i, t)$, generates the desired new answer $y_n$. **Key $k$**: As shown in Section 3.2, each **$k$** in the codebook is a **representative embedding** for a specific edit. For example, suppose we edit the fact *"What brand is this computer?" from "HP" to "Lenovo"*. Then the key $k$ is the averaged embedding from layer $l-1$ when the input is the ***HP laptop image + the original question***. >“By caching embeddings for input errors and the updated knowledge transformation layer that decodes into the desired model outputs”. Could you explain it in detail? A: We thank the reviewer for pointing this out. We agree that this sentence may have been too condensed, and we are happy to explain it in detail. Similar to Section 3.2, we mean: 1. **Caching embeddings for input edits (keys $k$):** For each edit (i.e., input $(i, t)$ and new answer $y_n$), we extract the **averaged embedding** from layer $l-1$ of the model. This embedding, the **key $k$**, represents the semantic location of the edit in the model’s latent space. It serves as a **reference point** to decide whether future inputs are close enough to activate the edit. 2. **Caching the transformation $v$:** We fine-tune and cache a **copy of the LLM’s layer $l$** so that, when applied to this type of input, it produces the new target output $y_n$. >How does α work? A: Thanks for mentioning this key point.
α adjusts the **influence radius** by **weighting** the distances between the key and the positive/negative samples. It also controls the trade-off between generality and locality. >Statistics of categories A: We thank the reviewer for raising this point. Below, we provide detailed statistics of the knowledge categories in the OKEDIT dataset. As shown in the table, OKEDIT spans a **diverse and representative** set of domains, supporting its utility as a comprehensive and generalized benchmark for multi-modal model editing.

|Knowledge Category|Percentage|
|-|-|
|Plants, Animals|17%|
|Vehicles, Transportation|16%|
|Cooking, Food|15%|
|Sports, Recreation|12%|
|People, Everyday Life|9%|
|Objects, Material, Clothing|8%|
|Brands, Companies, Products|3%|
|Geography, History, Language, Culture|3%|
|Weather, Climate|3%|
|Science, Technology|2%|
|Other|12%|

>Reproduction and typos A: We thank the reviewer for their careful attention. We will fix the typos based on the suggestions and provide the code in the final version.
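To summarize the codebook mechanism described in this rebuttal, here is a minimal sketch of the key/radius logic (an illustration only, not the authors' implementation; `influence_radius` is just one plausible reading of Eq. (3), and all function names are hypothetical):

```python
import numpy as np

def edit_key(embeddings: np.ndarray) -> np.ndarray:
    # Key k: the averaged layer-(l-1) embedding of the edit input
    # (e.g., the HP laptop image plus the original question).
    return embeddings.mean(axis=0)

def influence_radius(key, pos_embs, neg_embs, alpha=0.5):
    # One plausible reading of Eq. (3): interpolate between the farthest
    # positive (generality) and nearest negative (locality) distance,
    # with alpha weighting the two distances.
    d_pos = max(np.linalg.norm(e - key) for e in pos_embs)
    d_neg = min(np.linalg.norm(e - key) for e in neg_embs)
    return alpha * d_pos + (1.0 - alpha) * d_neg

def route(query_emb, key, radius):
    # Inference-time dispatch: inputs inside the influence radius use the
    # cached edited layer v; all other inputs fall through to the base layer.
    if np.linalg.norm(query_emb - key) <= radius:
        return "edited_layer_v"
    return "base_layer"
```

For instance, with a key at the origin, one positive sample at distance 1, one negative sample at distance 3, and alpha = 0.5, the radius is 2, so a query at distance 1.5 is routed to the edited layer while one at distance 2.5 is not.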
A Parameter-Free and Near-Optimal Zeroth-Order Algorithm for Stochastic Convex Optimization
Accept (poster)
Summary: The paper gives a parameter-free zeroth-order algorithm for convex and Lipschitz continuous stochastic functions $f(x) = E[F(x,\xi)]$ over a convex and compact (section 3/4) or convex and closed (section 5) set. It achieves optimal convergence rates (up to a logarithmic factor) without knowledge of the Lipschitz parameter or the diameter of the constraint set $X$. The parameter-free convergence rates are achieved using an AdaGrad style tuning and the current maximum norm distance to the initial iterate $x_0$. The theoretical findings are supported by illustrating experiments. Claims And Evidence: Claims are clear and supported. Methods And Evaluation Criteria: Primarily theoretical work, hence, benchmark datasets are not relevant. Theoretical Claims: To the best of my understanding, theoretical claims are correct. Experimental Designs Or Analyses: The paper contains a small experimental part with numerical experiments illustrating the theoretical findings. To the best of my understanding, the experimental design is sound and valid. Supplementary Material: I reviewed relevant parts of the supplementary material. Relation To Broader Scientific Literature: The contributions of this paper are rather incremental. (for more details, see weaknesses) Essential References Not Discussed: To the best of my knowledge, all relevant literature is discussed. Other Strengths And Weaknesses: Strengths: - This work is very clearly written. Weaknesses: - To the best of my understanding, the results and techniques are rather incremental. Most results follow directly with small modifications (extensions from first- to zeroth-order) of the work by Ivgi et al '23a combined with the zeroth order gradient estimation in Shamir '17. Specifically, the similarities in techniques to the work by Ivgi et al are -to the best of my understanding- very strong. Other Comments Or Suggestions: While the paper is very clear and well-written, I think the contribution is rather incremental. 
Many of the theoretical results share similarities with the results in Ivgi et al and Shamir, up to the point that the proofs follow very similar ideas and structures. I think the paper would profit a lot from clarifying what its technical contribution and new insights are. I am happy to revise my evaluation if the authors can convince me that there is a significant difference between existing work and their contribution. Questions For Authors: See weaknesses and comments. Ethical Review Concerns: none. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We highlight our technical contribution and new insights as follows: 1. To the best of our knowledge, the proposed POEM is the first parameter-free algorithm for zeroth-order stochastic optimization, which is also mentioned by all other reviewers. 2. Our method can achieve the near-optimal convergence with the smoothing parameter $\mu_t={\mathcal O}(\sqrt{d/t})$, which is larger than the counterparts in existing works such as $\mu_t={\mathcal O}(\sqrt{d/T})$ (Duchi et al., 2015) and $\mu_{2t}={\mathcal O}(1/(d^2t^2))$ (Shamir, 2017). This is a new observation which improves the numerical stability in the step of finite difference. 3. For the unbounded domain case, we establish the lower bound (Theorem 5.6) to show that achieving an ideal parameter-free stochastic zeroth-order algorithm is impossible, i.e., the algorithm cannot attain the near-optimal SZO complexity with only logarithmic dependence on problem parameters. Our lower bound construction considers the dependence on $d$ in the complexity (construct the $d$ dimensional function), which is more challenging than the lower bound for the first-order method that depends on the 1-dimensional function (Khaled \& Jin, 2024).
Summary: This paper introduces a novel parameter-free zeroth-order optimization algorithm named POEM for stochastic convex optimization problems. The key idea is to eliminate the need for parameter tuning, including the learning rate and the smoothing parameter. Inspired by DoG (distance over gradients), the proposed method leverages a DoG-style strategy based on finite differences to set the learning rate and uses an adaptive smoothing parameter. The authors prove that POEM achieves near-optimal SZO complexity under a bounded-domain assumption, while a parameter-free algorithm is impossible to achieve in the unbounded setting. Finally, numerical experiments on several datasets demonstrate the superiority of POEM. ## update after rebuttal Most of my concerns have been addressed during the rebuttal. Claims And Evidence: The paper claims that (a) POEM is a parameter-free method, and (b) it achieves near-optimal SZO complexity under convexity, Lipschitz, and bounded-domain assumptions. To support these claims, the authors provide a rigorous theoretical analysis that establishes high-probability convergence guarantees. The experimental results also verify the superiority of POEM. Overall, the claims are well supported by both analysis and experiments. Methods And Evaluation Criteria: POEM is designed to automatically adjust its step size and its smoothing parameter. This design avoids the typical need to manually set these hyperparameters, which is well aligned with the problem of parameter-free optimization. The analysis centers on standard metrics in zeroth-order optimization, such as the function value gap and the SZO complexity. These criteria are consistent with the established literature. Theoretical Claims: The main theoretical claims include the convergence rate guarantees for bounded and unbounded domains. The proofs build on well-established lemmas and techniques. I didn't carefully check all the proofs but the theoretical results seem to be sound.
Experimental Designs Or Analyses: The experimental design is straightforward and relevant. The use of standard benchmarks helps validate the algorithm’s practical performance. However, the scope of the experiments is somewhat limited in terms of dataset diversity and scale, as well as the model scale. More extensive empirical validation could further bolster the claims. Supplementary Material: The supplementary material contains detailed proofs for the main theorems but I didn't carefully check all the proofs. Relation To Broader Scientific Literature: The work is well situated within the current literature and clearly identifies the gap it fills—namely, the lack of parameter-free methods in zeroth-order optimization that achieve near-optimal complexity. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1) To the best of my knowledge, this is the first work to explore parameter-free zeroth-order optimization. 2) Comprehensive theoretical analysis with convergence guarantees matching lower bounds. Weaknesses: 1) The empirical evaluation is limited to a few datasets and could be extended to cover more diverse and large-scale scenarios. Other Comments Or Suggestions: n/a Questions For Authors: 1. Can you elaborate on the challenges and potential modifications required to extend POEM to nonconvex optimization problems? 2. What is the computational cost associated with computing the adaptive step size? The computation of the adaptive step size seems to need a copy of the initial parameter weights. How does this overhead compare with that of standard zeroth-order methods in large-scale applications, and do you think there exists a memory-efficient way for the implementation? 3. Recent research has focused on fine-tuning large language models with zeroth-order optimization algorithms, which are sensitive to the selection of the learning rate ([1], [2]). The proposed method seems to be a promising way for solving this problem.
Do you think POEM is suitable for large-scale applications like training/fine-tuning large-scale models? [1] Fine-Tuning Language Models with Just Forward Passes. [2] Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Q1** The empirical evaluation is limited to a few datasets and could be extended to cover more diverse and large-scale scenarios. **A1** Thank you for your suggestion. We have addressed your comment by including experiments on the higher-dimensional datasets "qsar" ($d=1,024$, $n=1,687$) from the UCI machine learning repository and "gisette" ($d=6,000$, $n=5,000$) from the LIBSVM repository. Please see the link <https://anonymous.4open.science/api/repo/a-5532/file/response-dAwi.pdf?v=78140e95> for the experimental results. We can observe that our POEM also performs better than the baselines. **Q2** Can you elaborate on the challenges and potential modifications required to extend POEM to nonconvex optimization problems? **A2** Thank you for your question. It is still unclear how to extend POEM to nonconvex optimization problems. Convexity plays a crucial role in our theoretical analysis: 1. In Equation (8), we bound the function value gap by using Jensen’s inequality, which relies on the convexity of $f(\cdot)$. 2. In Equation (9) and the proof of Lemma C.3, we use the first-order condition in Lemma A.1, which requires the convexity of the smooth surrogate $f_\mu(\cdot)$, and the convexity of $f_\mu(\cdot)$ comes from the convexity of the objective $f$ (see Lemma 2.8). For the nonconvex problem, it is possible to introduce the online-to-nonconvex conversion [1-3] into our framework to attain the nearly-tight upper bound. However, it does not seem to be a direct extension. We believe this is a good future direction. References [1] Ashok Cutkosky, Harsh Mehta, Francesco Orabona. Optimal stochastic non-smooth non-convex optimization through online-to-non-convex conversion. ICML 2023. [2] Guy Kornowski, Ohad Shamir. An algorithm with optimal dimension-dependence for zero-order nonsmooth nonconvex stochastic optimization. JMLR 2024. [3] Kwangjun Ahn, Gagik Magakyan, Ashok Cutkosky.
General framework for online-to-nonconvex conversion: schedule-free SGD is also effective for nonconvex optimization. arXiv:2411.07061, 2024 **Q3** What is the computational cost associated with computing the adaptive step size? The computation of the adaptive step size seems to need a copy of the initial parameter weights. How does this overhead compare with that of standard zeroth-order methods in large-scale applications, and do you think there exists a memory-efficient way for the implementation? **A3** Thanks for your insightful question. Each iteration of POEM requires a computational cost of ${\mathcal O}(d)$ to compute $||x_t-x_0||$ and $||g_t||$ to determine the step size. Compared with standard zeroth-order methods, POEM needs to additionally maintain the initial point $x_0$, which requires storing an additional $d$ floating point numbers in general. However, the domains of many real applications contain the origin. In such cases, we can simply set $x_0=0$ to avoid the additional memory cost of storing $x_0$. **Q4** Recent research has focused on fine-tuning large language models with zeroth-order optimization algorithms, which are sensitive to the selection of the learning rate ([1], [2]). The proposed method seems to be a promising way for solving this problem. Do you think POEM is suitable for large-scale applications like training/fine-tuning large-scale models? **A4** Thanks for your constructive suggestions. This work focuses on convex optimization, while training/fine-tuning large-scale models typically corresponds to nonconvex optimization. As we mentioned in A2, our theory cannot be directly extended to the nonconvex case. Following your suggestion, we are currently attempting to adapt POEM to the fine-tuning of large-scale models. We have not yet determined whether POEM is suitable for this task.
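To make the per-iteration mechanics discussed above concrete, here is a minimal sketch (an illustration only, not the authors' implementation; it omits iterate averaging and POEM's exact constants, and uses the scale-invariant smoothing schedule $\mu_t = \bar{r}_t\sqrt{d/(t+1)}$ discussed in the rebuttals) of a DoG-style parameter-free loop built on the two-point zeroth-order estimator:

```python
import numpy as np

rng = np.random.default_rng(0)

def two_point_grad(F, x, mu):
    # Two-point zeroth-order estimator with u uniform on the unit sphere:
    # g = (d / (2*mu)) * (F(x + mu*u) - F(x - mu*u)) * u.
    u = rng.standard_normal(x.size)
    u /= np.linalg.norm(u)
    return (x.size / (2.0 * mu)) * (F(x + mu * u) - F(x - mu * u)) * u

def project_ball(x, center, R):
    # Euclidean projection onto the bounded domain {y : ||y - center|| <= R}.
    delta = x - center
    n = np.linalg.norm(delta)
    return x if n <= R else center + (R / n) * delta

def poem_like(F, x0, T, R=5.0, r_eps=1e-6):
    # DoG-style distance adaptation: step size eta_t = rbar_t / sqrt(G_t),
    # where rbar_t is the maximum distance travelled from x0 (seeded by a
    # tiny r_eps) and G_t is the running sum of squared estimator norms.
    x, rbar, G, d = x0.astype(float).copy(), r_eps, 0.0, x0.size
    for t in range(T):
        rbar = max(rbar, float(np.linalg.norm(x - x0)))
        mu = rbar * np.sqrt(d / (t + 1.0))   # adaptive smoothing parameter
        g = two_point_grad(F, x, mu)
        G += float(np.dot(g, g))
        x = project_ball(x - (rbar / np.sqrt(G)) * g, x0, R)
    return x
```

On a toy convex objective such as `F = lambda z: float(np.sum((z - 1.0) ** 2))`, the loop moves toward the minimizer without any tuned step size; the projection onto a ball mirrors the bounded-domain setting in which POEM's guarantees hold.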
Summary: This paper proposes a parameter-free stochastic zeroth-order method that achieves near-optimal rate in the convex setting with a bounded domain. The authors also consider the unbounded domain case and prove that it is impossible to construct an ideal parameter-free algorithm in this setting. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: I do not have the time to check all the proofs. To what I have verified, everything looks fine. Experimental Designs Or Analyses: The experimental design is reasonable. Supplementary Material: I skimmed through parts of the proof. Relation To Broader Scientific Literature: This paper considers parameter-free stochastic optimization, several related works are published in past ICML events. Essential References Not Discussed: The literature is sufficiently covered. Other Strengths And Weaknesses: Strengths: - Clear writing with a nice flow of ideas. - The first parameter-free stochastic zeroth-order method in the convex setting, which is a solid contribution - The impossibility result is interesting which reveals the fundamental necessity of a bounded domain. - The robustness of the parameter setting of POEM is numerically verified Weaknesses: NA Other Comments Or Suggestions: Figure 3 has not been mentioned in the main text. Add a description to it. Questions For Authors: NA Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive feedback and appreciation of our work. **Q1** Figure 3 has not been mentioned in the main text. Add a description to it. **A1** We sincerely thank you for your careful review. We will modify the text in lines 436-438 of the left column to include a clear reference to Figure 3.
Summary: This paper addresses the stochastic optimization problem $$ \min_{x \in \mathcal{X}} \mathbb{E}_{\xi}[F(x, \xi)], $$ where each function $F(\cdot, \xi)$ is convex and $L$-Lipschitz, and $\mathcal{X}$ is a simple convex set in $\mathbb{R}^d$. The authors propose a parameter-free zeroth-order optimization method that achieves $\epsilon$-accuracy (in terms of the objective function) with high probability in $\tilde{O}(\frac{d L^2 D^2}{\epsilon^2})$ stochastic function evaluations, which is nearly optimal up to logarithmic factors. The proposed method builds on DoG (Ivgi et al., 2023), a parameter-free stochastic gradient method with distance adaptation, applied to a randomized smoothing of the objective. The authors also present a version of their algorithm for unbounded domains, assuming a good estimate of $L$ is available. Additionally, they establish a lower bound demonstrating that a fully parameter-free algorithm with complexity scaling as $\tilde{O}(\frac{d L^2 D_0^2}{\epsilon^2})$ is impossible, implying that any efficient parameter-free method must depend on the domain diameter $D$, rather than just the initial distance $D_0$ to the solution. Claims And Evidence: The claims made in the paper are well-supported, with each theorem and lemma accompanied by a corresponding proof. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate for the problem at hand. Theoretical Claims: I reviewed the main theoretical claims and assessed the general proof techniques but did not verify all details rigorously. Experimental Designs Or Analyses: I examined the experiments and found no major issues. Supplementary Material: I reviewed the supplementary material and inspected most of the proofs. Relation To Broader Scientific Literature: This paper integrates two established optimization techniques: 1.
*Randomized smoothing*, which approximates the original objective via a smooth surrogate, enabling efficient stochastic gradient estimation (Duchi et al., 2012; Yousefian et al., 2012; Shamir, 2017; Nesterov & Spokoiny, 2017; Gasnikov et al., 2022; Lin et al., 2022). The specific smoothing method used here (uniform smoothing over the Euclidean ball) has been extensively studied in (Duchi et al., 2012; Shamir, 2017; Gasnikov et al., 2022). 2. The *DoG algorithm* (Ivgi et al., 2023), which enables efficient parameter-free stochastic optimization of convex functions. Much of the analysis in this paper builds on the proofs in (Ivgi et al., 2023), leveraging known properties of randomized smoothing from (Shamir, 2017). Essential References Not Discussed: I have not identified any essential references missing from the discussion. Other Strengths And Weaknesses: **Strengths:** 1. This is the first (to my knowledge) parameter-free algorithm for zeroth-order stochastic optimization. The presentation is concise and generally clear. 1. The lower bound in Theorem 5.6 is a valuable theoretical contribution (though I have not fully verified its correctness). **Weaknesses:** 1. The paper primarily combines existing results from (Ivgi et al., 2023) and (Shamir, 2017). The proposed algorithm is essentially DoG applied to a smoothed objective with a carefully chosen smoothing parameter $\mu$. The complexity bound follows directly from prior DoG results (Ivgi et al., 2023) and established bounds on the second moment of stochastic gradients for the smooth approximation (Shamir, 2017). The main novelty, adapting $\mu_t$ at each iteration rather than fixing it in advance, introduces only minor modifications to the DoG analysis. While this is a useful refinement, it does not introduce a fundamentally new idea. Other Comments Or Suggestions: 1. The claims that the method is "parameter-free" in the Abstract and early sections of the paper require clarification.
The algorithm is truly parameter-free only when the domain is bounded and the complexity bound depends on the domain's diameter. For unbounded problems (or when results depend on the initial distance $D_0$ rather than $D$), the method requires an upper bound on the stochastic gradient norm. 1. There is a mistake in the formula for $\mu_t$ in Algorithm 1, which makes the final bound in Proposition 4.7 not scale-invariant. The correct formula is likely $\mu_t = \bar{r}_t \sqrt{\frac{d}{t + 1}}$. 1. Assumption 2.6 is missing "for all $x, y \in \mathbb{R}^d$". 1. Line 163: The claim that "this is more challenging" is somewhat misleading. In principle, $\mu_t$ can be arbitrarily small, even zero, in which case finite differences reduce to a directional derivative. While choosing a very small $\mu_t$ may degrade numerical stability in practice, this is a separate issue unrelated to the theoretical complexity bounds. 1. Lemma A.4: The reference to Shamir (2017, Lemma 9) seems incorrect. 1. Typos: "differed" (line 169), "soothing" (line 267). Questions For Authors: 1. The paper assumes that the original objective is Lipschitz. What if it is Lipschitz-smooth? Can the proposed method still be applied, and what would its convergence rate be? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Q1** The main novelty, adapting $\mu_t$ at each iteration rather than fixing it in advance, introduces only minor modifications to the DoG analysis. While this is a useful refinement, it does not introduce a fundamentally new idea. **A1** Thank you for your thoughtful review. We highlight our novelty as follows: 1. Our method can achieve the near-optimal convergence with the smoothing parameter $\mu_t=\mathcal{O}(\sqrt{d/t})$, which is larger than the counterparts in existing works such as $\mu\_t=\mathcal{O}(\sqrt{d/T})$ (Shamir, 2017) and $\mu\_{2t}=\mathcal{O}(1/(d^2t^2))$ (Duchi et al., 2015). This is a new observation which improves the numerical stability. 2. For the unbounded domain case, we establish the lower bound (Theorem 5.6) to show that achieving an ideal parameter-free stochastic zeroth-order algorithm is impossible, i.e., the algorithm cannot attain the near-optimal SZO complexity with only logarithmic dependence on problem parameters. Our lower bound construction considers the dependence on $d$ in the complexity (constructing a $d$-dimensional function), which is more challenging than the lower bound for the first-order method that depends on a 1-dimensional function (Khaled & Jin, 2024). **Q2** The algorithm is truly parameter-free only when the domain is bounded and the complexity bound depends on the domain's diameter. **A2** Thank you for your suggestion. We will clarify the claim of "parameter-free" in the revision. For the unbounded problems, we provide the lower bound to show that it is impossible to achieve a near-optimal and parameter-free algorithm. Please see the second point in the last response. **Q3** There is a mistake in the formula for $\mu_t$ in Algorithm 1, which makes the final bound in Proposition 4.7 not scale-invariant. The correct formula is likely $\mu\_t=\bar{r}\_t\sqrt{\frac{d}{t+1}}$. **A3** Thank you for your careful review.
You're right that the smoothing parameter should be $\mu_t = \bar{r}\_t \sqrt{\frac{d}{t+1}}$. The result of Lemma 4.5 should be modified to $$ \sum_{k=0}^{t-1}2L \bar{r}\_k \mu\_k\leq 4L \bar{r}\_{t-1}^2\sqrt{dt}, $$ where the exponent of $\bar{r}\_{t-1}$ changes from $1$ to $2$. Thus, Proposition 4.7 should be stated as: $$ f(\bar{x}\_t)-f(x^*)\leq\frac{16\theta\_{t,\delta}(\bar{r}\_t+s\_0)\big(\sqrt{G\_{t-1}}+Ld+L\sqrt{dt}\big)}{\sum_{k=0}^{t-1} \bar{r}\_k/ \bar{r}\_t}, $$ which is scale-invariant. Based on the above modification, the main result (Theorem 4.9) still holds. We have also updated our implementation, and the empirical results are very similar to the previous ones. Please see the link <https://anonymous.4open.science/api/repo/a-5532/file/response-ZPYR.pdf?v=abeb97c6>. **Q4** Assumption 2.6 is missing "for all $ x,y\in\mathbb{R}^d$". **A4** Thank you for your careful review. We will add this quantifier in the revision. **Q5** Line 163: The claim that "this is more challenging" is somewhat misleading. In principle, $\mu_t$ can be arbitrarily small, even zero, in which case finite differences reduce to a directional derivative. While choosing a very small $\mu_t$ may degrade numerical stability in practice, this is a separate issue unrelated to the theoretical complexity bounds. **A5** Thank you for your valuable suggestion. In the revision, we will clarify that the choice of $\mu_t$ is important to the numerical stability, rather than the theoretical complexity bounds. **Q6** Lemma A.4: The reference to Shamir (2017, Lemma 9) seems incorrect. **A6** Thank you for your careful review. The result of Lemma A.4 appears at the beginning of the proof of Shamir (2017, Lemma 9), which can be found at the bottom of page 6 (not the final result of Lemma 9). We will clarify this point in the revision.
**Q8** The paper assumes that the original objective is Lipschitz. What if it is Lipschitz-smooth? Can the proposed method still be applied, and what would its convergence rate be? **A8** Thank you for your insightful question. For the bounded setting, our proposed method remains applicable. We assume that $F(\cdot;\xi)$ is $M$-smooth, i.e., $\Vert \nabla F(x)-\nabla F(y)\Vert \leq M\Vert x-y\Vert$ for all $x,y \in \mathcal{X}$. Let $x^*\in \mathcal{X}$ be the solution. It follows that $\Vert \nabla F(x)\Vert\leq\Vert\nabla F(x^*)\Vert+\Vert\nabla F(x)-\nabla F(x^*)\Vert\leq\Vert\nabla F(x^*)\Vert+M\Vert x-x^*\Vert\leq\Vert\nabla F(x^*)\Vert+MD_\mathcal{X}$ for all $x\in \mathcal{X}$. Thus, the function $F(\cdot;\xi)$ is $(\Vert\nabla F(x^*)\Vert+M D_\mathcal{X})$-Lipschitz on the domain $\mathcal{X}$. Hence, we can directly apply Theorem 4.9 to achieve the convergence rate. For the unbounded setting, additional assumptions, such as bounded variance of the stochastic gradient (Ghadimi & Lan, 2013), are typically required. It seems that our analysis cannot be directly applied to this case.
Spherical-Nested Diffusion Model for Panoramic Image Outpainting
Accept (poster)
Summary: This paper presents a new diffusion-based ControlNet model for panoramic image outpainting that incorporates 1) spherical noise as a structural prior injected into the equirectangular-projected image, to better handle ERP distortion, and 2) spherical deformable convolution layers to handle the varying CNN receptive field on the ERP image. The experimental results demonstrate promising panorama outpainting quality compared to prior works. Claims And Evidence: The claims made in the paper are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, this submission largely follows the evaluations from the prior work. Theoretical Claims: There are no proofs for theoretical claims. Experimental Designs Or Analyses: The experimental designs and analyses are sound to me. Supplementary Material: I reviewed all parts of the supplementary materials. The discussion of the impact of the structural prior shown in Fig. A. 3 is interesting, but it still requires further analysis. Relation To Broader Scientific Literature: The proposed method of injecting spherical structural noise into the original diffusion noise might be relevant to non-isotropic diffusion models, for example: * “Score-based generative modeling with critically-damped langevin diffusion” * “Score-based denoising diffusion with non-isotropic gaussian noise models” Essential References Not Discussed: References are sufficient. Other Strengths And Weaknesses: Strengths: * The introduced spherical noise as a structural prior for panorama diffusion denoising is interesting and novel. * The design of the spherical deformable convolution is reasonable and effective. Weaknesses: * Since outpainting tasks largely rely on perceptual evaluations, the authors should provide more visual results to assist readers’ perceptual comparison. Besides, there are no visual results in the ablation study. Providing the reference ground truth would also be helpful.
* Since the evaluated mask region is located at the center of the ERP panorama, input image patches do not contain too much distortion. This evaluation setting may not show the advantage of spherical noise and SDC processed on meaningful pixels. Other Comments Or Suggestions: None Questions For Authors: * How are the panorama images masked during training? Are they only masked around the center? What about the range of elevation angle? * How does this model generalize to varying masks? If the entire input is masked, will this model work properly? * It would be better if the authors could include more applications of the proposed method. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Many thanks for your positive opinion and valuable comments! - **Tab.1 - 4 are in https://anonymous.4open.science/api/repo/tabl/file/T.pdf?v=5327f2ef** - **Fig.1 - 9 are in https://anonymous.4open.science/api/repo/figu/file/F.pdf?v=2d1be253** > ### **1 Supplementary Material: Further Analysis on Fig. A. 3** The randomness in our SpND model consists of two components: the structural prior $\mathbf{P}\_\text{ERP}$ from spherical noise and the diffusion noise $\mathbf{Z}$. Fig. A. 3 in Supplementary-A illustrates the impact of these two components, in which spherical noise minimally affects the semantic content of generated images, maintaining structural consistency. In contrast, diffusion noise drastically changes the output, creating distinct visual features. This indicates spherical noise serves as a structural prior, preserving geometric constraints, while diffusion noise controls semantic content generation. These observations support our formulation of $\mathbf{P}\_\text{ERP}$ as a mechanism to regularize the latent space without disturbing semantic attributes. > ### **2 Relation to Broader Scientific Literature: Relation to Non-Isotropic Diffusion Models** The noise in our SpND method follows non-isotropic Gaussian distributions, with the non-isotropic property arising from the ERP operation applied to spherical noise. In contrast, Ref. [1] introduces a critically-damped Langevin diffusion process to enhance network convergence by adding an auxiliary velocity variable, while Ref. [2] generalizes score-based denoising diffusion models with non-isotropic Gaussian Free Field (GFF) noise, improving generation quality on CIFAR10. However, these methods, with their particular non-isotropic settings, are not designed for panoramic image outpainting. For instance, GFF noise in Ref. 
[2] defines the covariance between sampled values $f(\theta, \phi)$ and $f(\theta', \phi')$ as: $$\text{Cov}(f(\theta,\phi), f(\theta',\phi')) \approx -\log\|(\theta,\phi)-(\theta',\phi')\| + C,$$ where $C$ is a constant. In contrast, for our spherical noise, when $\phi=\frac{\pi}{2}$, the covariance simplifies to: $$\text{Cov}(f(\theta,\frac{\pi}{2}), f(\theta',\frac{\pi}{2})) = \sigma^2.$$ Thus, the constant covariance $\sigma^2$ contradicts the logarithmic dependence required by Ref. [2], demonstrating that the non-isotropic noise in our SpND method cannot be generalized by existing models. This highlights the novelty and contribution of our approach in panoramic outpainting. > ### **3 Weakness 1: More Visual Results** Yes, outpainting tasks heavily rely on perceptual evaluations. We have added more visual examples for all comparisons discussed in our experiments, including the main comparisons, ablation study and view-image outpainting experiments, in Figs. 5-8 in the link. The results contain ground truth, masked input, all baseline methods, and our SpND method for both the Matterport3D and Structured3D datasets, which consistently demonstrate the superior performance of our SpND method. > ### **4 Weakness 2: Evaluations on Distorted Masks** To further evaluate our method on different masks, we conducted additional experiments on masks near the poles, where spherical distortion becomes evident. Specifically, we projected a $256 \times 256$ image onto the ERP format at $\theta=90^\circ, \phi=60^\circ$ to create a highly-distorted mask near the pole. We then retrained our SpND method, along with the second and third best baseline methods (PanoDiff and PanoDiffusion). Results, reported in Tab. 4 and Fig. 9, confirm the superior performance of our SpND method, highlighting the advantages of spherical noise and the SDC operation in accommodating panoramic deformation.
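Regarding the constant covariance $\sigma^2$ claimed above: the effect can be checked numerically with a toy nearest-sample projection of i.i.d. spherical noise into ERP. All sizes and the projection rule here are hypothetical illustrations, not the paper's actual sampling scheme: near a pole, an entire ERP row collapses onto very few spherical samples, so the empirical covariance between two distant pixels approaches $\sigma^2$, while at the equator it vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 64            # hypothetical ERP width
sigma = 1.0
n_trials = 20000  # Monte-Carlo realizations of the noise

def erp_noise_row(phi):
    """One ERP row obtained by projecting i.i.d. spherical noise.

    A latitude ring at elevation phi holds about W*cos(phi) independent
    samples, so several ERP pixels share one sample near the poles
    (the one-to-many mapping); at the equator the map is one-to-one.
    """
    n_ring = max(1, int(round(W * np.cos(phi))))
    ring = rng.normal(0.0, sigma, size=(n_trials, n_ring))
    idx = (np.arange(W) * n_ring) // W  # nearest ring sample per pixel
    return ring[:, idx]

# Near the pole, two distant ERP pixels map to the same sample:
pole_row = erp_noise_row(np.pi / 2 - 0.01)
cov_pole = np.mean(pole_row[:, 0] * pole_row[:, W // 2])

# At the equator, the same two pixels are independent:
eq_row = erp_noise_row(0.0)
cov_eq = np.mean(eq_row[:, 0] * eq_row[:, W // 2])
```

Under this toy projection, `cov_pole` comes out near $\sigma^2 = 1$ and `cov_eq` near $0$, matching the constant-covariance behavior described above.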
> ### **5 Questions For Authors 1: Mask Location and Evaluation Angle** Yes, in our original experiments, the images were masked only around the center during training, corresponding to Fig. 4-(a) in our manuscript. For the masked input, the region containing image content spans an elevation angle range of approximately $[-45^\circ, 45^\circ]$, within a full image that has an elevation angle range of $[-90^\circ, 90^\circ]$. We further evaluated our SpND method's ability to generalize to various non-centered masks, as detailed in **Rebuttal 6 below**. > ### **6 Questions For Authors 2: Generalization to Varying Masks** Yes, our SpND model can generalize to varying mask formats. We re-trained our SpND method with the entire input masked and show the subjective results in Fig. 4. While performance was reduced, our SpND method achieved an FID of 29.65 and successfully performed panoramic generation, confirming its generalization ability. Additionally, our SpND method is highly flexible with multiple input views and we refer to our **Rebuttal 4 above** and our **Rebuttal 3 to Reviewer Qs7j**. > ### **7 Questions For Authors 3: More Possible Applications** Many thanks! For more possible applications, we refer to our **Rebuttal 1 to Reviewer GNkK**. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response and additional results. Most of my concerns are addressed in the rebuttal. I appreciate the efforts in preparing the additional results on varying non-trivial out-painting. At least compared to the baselines, these results are competitive. I will keep my positive rating on this submission. --- Reply to Comment 1.1.1: Comment: We wish to thank the reviewer very much for taking the time to read our responses and provide valuable comments! And we are also glad that we have addressed your concerns.
Summary: This work proposes SpND, a novel pipeline for panoramic image outpainting based on a diffusion model. The authors tackle the limitation of previous works in which the spherical nature of panoramic images is injected using soft regularization techniques, often failing to fully enforce the spherical constraint. To address this, this work aims to intrinsically enforce the spherical constraint during generation through a spherical-nested diffusion model, including the spherical noise technique and a novel spherical deformable convolution (SDC) specifically designed to handle the spherical grid of panoramic images. SpND shows state-of-the-art performance on panoramic image outpainting, surpassing previous works on image quality. Claims And Evidence: 1. The paper is well written and easy to follow. The authors tackle a practically meaningful application of panoramic outpainting. 2. The two main technical contributions - spherical noise and SDC - both have clear motivations and seem to be effective based on the qualitative/quantitative comparisons and ablation studies. 3. The analysis of the spherical property of the ERP format clearly conveys why it is critical to sample the noise in the spherical space instead of directly from the planar space. 4. Yet, it would be better if the authors provided a clear elaboration on the claim that "one-to-many mapping inherently introduces spatial coherence in the ERP domain". While these overlaps between multiple views can enforce the same content to be generated in the corresponding pixels, it is unclear how this can also lead to a globally coherent image. Is it because the coherent content/style is somehow propagated across different views during denoising? Moreover, have the authors observed any downsides of this one-to-many mapping, such as blurry/over-smoothed images near the overlapping regions? Methods And Evaluation Criteria: 1.
The introduction of SDC seems to be quite novel as an approach for handling the spherical nature of panoramic images. Adopting the idea of deformable convolution specifically for the spherical grid seems to be a reasonable method that takes into account the key characteristic of panoramic images. 2. While this work aims for panoramic "outpainting", the design choice of SpND would be better justified if the authors had provided discussions that compare with panoramic generation works. For instance, MVDiffusion [1] and PanFusion [2] both suggest novel architectures for text-to-panorama generation. Can we also apply RePaint to such models to obtain panoramic image outpainting? If it is not possible to apply the same approach to these works, can the authors please explain which aspect of SpND allows this in contrast? 3. Moreover, based on the pretrained SpND model, the RePaint technique is utilized to perform seamless outpainting based on the input mask. Since the original RePaint is based on "pixel-space" diffusion models, it doesn't introduce any discrepancy between the input mask and the space in which the denoising is performed. In contrast, since this work uses a latent diffusion model, I am concerned that the discrepancy between the input mask (pixel-space) and the actual inpainting operations (latent-space) could lead to quality degradation. If this is not the case, it would be nice if the authors could explain why this discrepancy does not necessarily lead to degradation. [1] MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion, Tang et al., NeurIPS 2023 [2] Taming Stable Diffusion for Text to 360◦ Panorama Image Generation, Zhang et al., CVPR 2024 Theoretical Claims: This work does not provide theoretical claims and instead focuses on empirical evidence for the effectiveness of SpND. Experimental Designs Or Analyses: 1.
The paper provides detailed explanations of both the evaluation settings and implementation details of SpND. 2. Both quantitative and qualitative comparisons are presented clearly, showing that SpND outperforms previous methods on panoramic outpainting. 3. I am confused by the choice of using FID and FID_hori for image quality evaluation. To measure the quality and realism of the panoramic image, it seems more intuitive to either measure the FID of the projected perspective views or use a panorama-specific metric such as FAED [1]. 4. The paper provides a thorough ablation study of its core components, which shows that the design choices for the pipeline work according to their intentions. [1] Bips: Bi-modal indoor panorama synthesis via residual depth-aided adversarial learning, Oh et al., ECCV 2022 Supplementary Material: The visual analysis of the impact of resolution and sampling density on the structural prior (Fig. A.1) and its influence on the final output (Fig. A.3) seems intuitive and makes it easier to understand the importance of sampling spherical noise in SpND. Relation To Broader Scientific Literature: It would be interesting to see whether the proposed idea of injecting a structural prior by sampling from spherical noise could be extended to pure generation of panoramic images. For instance, can it lead to more accurate panoramic images (or videos) generated just from text prompts? Applying a similar idea to text-to-panorama methods like MVDiffusion [1] and PanFusion [2] would be an interesting future work. [1] MVDiffusion: Enabling Holistic Multi-view Image Generation with Correspondence-Aware Diffusion, Tang et al., NeurIPS 2023 [2] Taming Stable Diffusion for Text to 360◦ Panorama Image Generation, Zhang et al., CVPR 2024 Essential References Not Discussed: As mentioned above, an important related work PanFusion [1] is missing.
Since this is not an outpainting method but rather a pure text-based generation method, it doesn't seem necessary to make quantitative comparisons. But it would be nice to discuss why this previous approach is not sufficient for panoramic outpainting. Moreover, AOG-Net [2] seems to be a more directly related work, and possibly a valid baseline. [1] Taming Stable Diffusion for Text to 360◦ Panorama Image Generation, Zhang et al., CVPR 2024 [2] Autoregressive Omni-Aware Outpainting for Open-Vocabulary 360-Degree Image Generation, Lu et al., AAAI 2024 Other Strengths And Weaknesses: The strengths and weaknesses are discussed in the above sections. Other Comments Or Suggestions: When first reading the paper, it was a bit confusing whether the whole pipeline is trained end-to-end and the inference is done in a simple feed-forward manner. However, as far as I understand, the inference for outpainting is done by applying iterative RePaint steps using the pretrained diffusion model. It would have been better if that had been emphasized. Questions For Authors: Some questions are already included in the above sections. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Many thanks for the positive comments and insightful suggestions. - **Tab.1 - 4 are in https://anonymous.4open.science/api/repo/tabl/file/T.pdf?v=5327f2ef** - **Fig.1 - 9 are in https://anonymous.4open.science/api/repo/figu/file/F.pdf?v=2d1be253** > ### **1 Claims and Evidence 4: Elaboration on One-to-Many Mapping** The one-to-many mapping naturally arises from the ERP format of the spherical content. Pixels near the polar regions exhibit strong correlations in the ERP domain, reflecting spatial coherence in this projection. The consistent denoising procedure ensures coherent style and content by propagating the entire image across different views. Controlled by $\zeta$, the one-to-many correspondence reaches its optimum when the correlations between noise and feature/image are similar, as explained in Supplementary-A. When $\zeta$ is too small, blurry artifacts occur due to excessive pixel correspondence. Conversely, when $\zeta$ is too large, spatial coherence decreases. This was verified through two new ablation studies, as detailed in **Rebuttal 2 to Reviewer QS7j**. > ### **2 Methods and Evaluation Criteria 2: Applicability of RePaint to Panoramic Generation Methods** Outpainting can be achieved through text-to-panorama and diffusion-based generative methods, with each inference step integrated with the RePaint operation for denoising the full panoramic image. However, MVDiffusion generates $8$ perspective views via a multi-view diffusion model, which are then aggregated into a full panorama using Rodrigues rotation. This makes it infeasible to access each inference step for denoising, preventing the straightforward application of RePaint. In contrast, PanFusion accesses the full latent panoramic features at each inference step, enabling outpainting via RePaint. We modified PanFusion by adding RePaint and present new results in Fig. 3 in the link.
As shown, PanFusion's outpainted panoramas exhibit repetitive patterns and poor quality due to latent feature rotation to preserve spherical continuity. In comparison, our SpND uses a masked image as input and consistently achieves superior panoramic outpainting. > ### **3 Methods and Evaluation Criteria 3: Pixel-space Discrepancy by RePaint** Yes, the discrepancy may exist in our SpND method. Most diffusion-based outpainting methods, particularly for panoramic images, operate within the latent space and suffer accuracy degradation within the input mask region. For methods using RePaint, we calculated the PSNR$\_\text{cent}$ for the repainted area against the original image, with results reported in Tab. 3 in the link. As shown, our SpND method achieves the highest accuracy and quality, which naturally results from our intrinsic SDC operation and spherical noise. > ### **4 Experimental Designs or Analyses 3: Clarifying FID and FID$_\text{hori}$, with Enhanced Metrics** The FID metric is widely used to evaluate generated image quality, and we preliminarily applied it to assess the performance of outpainted panoramic images. The FID$_\text{hori}$ metric evaluates the FID values of 8 perspective view images of size $512 \times 512$, following MVDiffusion. We also incorporated the panorama-specific FAED metric to evaluate the quality of panoramic images. Notably, FAED requires depth information, which is unavailable for outpainting. Therefore, we followed the PanFusion method to compute FAED without explicit depth input. The results, reported in Tab. 3, demonstrate that our SpND method outperforms state-of-the-art outpainting techniques. > ### **5 Relation to Broader Scientific Literature: Applicability to Panoramic Generation Tasks** Yes, we believe injecting non-*i.i.d.* noise into existing panoramic generation methods could further improve detail retention and the quality of generated panoramic images, including text-to-image methods such as MVDiffusion and PanFusion.
Our SpND method can also generalize to generating images from text descriptions using a fully masked input. We evaluated this during the rebuttal and achieved an FID score of 29.65 by retraining our method without other modifications. Subjective results are shown in Fig. 4. > ### **6 Essential References not Discussed: PanFusion and AOG-Net** Yes, we have analyzed PanFusion using the RePaint strategy, as described in **Rebuttal 2 above**. AOG-Net uses an autoregressive pipeline for 360° outpainting, utilizing feature remapping for panoramic content. We conducted a comparative analysis in Fig. 5 and Tab. 3 against AOG-Net using the Matterport3D dataset. From these, we conclude that our SpND method, by enforcing panoramic deformation through intrinsic convolution and spherical noise, consistently outperforms others in generating high-quality outpainted panoramic images. > ### **7 Other Comments or Suggestions: Training and Inference Procedures** Yes. During inference, we applied iterative RePaint steps based on the trained SpND model, following other outpainting methods such as PanoDiffusion. --- Rebuttal Comment 1.1: Comment: I appreciate the authors for thoroughly addressing the raised concerns, particularly by providing additional experiment results on PanFusion and AOG-Net. My concerns regarding the advantage of SpND over previous diffusion-based methods have been resolved, and I have updated my recommendation to "Accept". --- Reply to Comment 1.1.1: Comment: Thank you very much for providing valuable comments and reading our responses. We are glad that we've addressed your questions! In our revised paper, we will further improve our experimental settings and evaluations against the state-of-the-art baselines, to further verify the advantage of our SpND against existing diffusion-based methods.
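The iterative RePaint inference discussed in this thread (denoise the full image at each step, then re-impose the known region at the matching noise level) can be sketched in a toy 1-D form. This is only an illustrative sketch with a placeholder zero denoiser standing in for the trained diffusion model, and it omits RePaint's resampling jumps; the schedule and sizes are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 50
betas = np.linspace(1e-4, 0.02, T)  # toy linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def predict_noise(x, t):
    # Placeholder standing in for the trained diffusion model.
    return np.zeros_like(x)

def repaint_step(x_t, t, x_known, mask):
    """One reverse step: DDPM-denoise everywhere, then overwrite the
    known (mask == 1) region with the input forward-noised to t-1."""
    eps = predict_noise(x_t, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps) / np.sqrt(alphas[t])
    z = rng.normal(size=x_t.shape) if t > 0 else 0.0
    x_unknown = mean + np.sqrt(betas[t]) * z
    ab = alpha_bars[max(t - 1, 0)]
    x_kn = np.sqrt(ab) * x_known + np.sqrt(1.0 - ab) * rng.normal(size=x_t.shape)
    return mask * x_kn + (1.0 - mask) * x_unknown

x_known = np.ones(8)                                   # toy "input view"
mask = np.array([1, 1, 1, 1, 0, 0, 0, 0], dtype=float)  # 1 = known pixels
x = rng.normal(size=8)                                 # start from pure noise
for t in reversed(range(T)):
    x = repaint_step(x, t, x_known, mask)
```

After the loop, the masked (known) half of `x` stays close to the input signal while the unmasked half is freely generated, which is the mechanism that lets a pretrained generator perform outpainting without retraining.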
Summary: This paper proposes a spherical reformulation of diffusion models for panoramic image outpainting. It focuses on the fact that the processing unit and the spatial stochastic patterns in plain diffusion models do not align well with panoramic image outpainting. To handle this problem, the paper proposes redesigning some components, such as spherical noise, spherical deformable convolution, and circular mask. This design choice is different from existing works, which utilize soft constraints. Experimental results clearly show that the proposed method outperforms the existing methods. ## update after rebuttal All the reviewers have acknowledged the contribution of the paper. Although I had some doubts regarding the broader impact, I also think that this is a valuable reformulation for an important use case. I maintain my original score. Claims And Evidence: The claim that panoramic outpainting requires components that can naturally deal with spherical patterns, is plausible and convincing. Using tailored modules rather than soft constraints can obviously give better results. Methods And Evaluation Criteria: The proposed modules or changes (e.g., spherical noise, spherical deformable convolution, and circular mask) are quite intuitive and well-designed. The paper provides most of the standard evaluation metrics for popular benchmark datasets. Theoretical Claims: N/A Experimental Designs Or Analyses: As mentioned above, experimental designs are sound. Experimental results show that the proposed method outperforms existing methods, supporting the effectiveness of the proposed modules. Supplementary Material: I have briefly checked it (all parts). Relation To Broader Scientific Literature: I believe this paper proposes a well-designed method tailored to a particular problem (panoramic outpainting). For broader fields, the impact is somewhat limited. 
That being said, the problem being dealt with in this paper has practical value for various applications (VR, AR, etc.). Overall, my opinion is that this is a good paper focusing on a specific problem. Essential References Not Discussed: I believe the bibliography is thorough enough. Other Strengths And Weaknesses: As I mentioned above, this is a good paper focusing on a specific problem. The proposed method is well designed and the performance is great, so I have no doubt in this part. I think the main question boils down to its broader impact. This method is somewhat limited to this particular problem, and the novelties in the method are not exactly "groundbreaking." But at the same time, the problem itself has many valuable real-life applications. At the moment, I'm slightly leaning towards the positive side. Other Comments Or Suggestions: There are some typos, e.g., "gird" -> "grid." Questions For Authors: Equation (9) is somewhat puzzling. Seeing Fig. 3, it seems the outputs of the Spherical Net are "embedded" in the main U-Net. This seems slightly contradictory to Equation (9). Could the authors clarify this point? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We wish to thank the reviewer very much for the positive opinion and insightful suggestions. - **Tab.1 - 4 are in https://anonymous.4open.science/api/repo/tabl/file/T.pdf?v=5327f2ef** - **Fig.1 - 9 are in https://anonymous.4open.science/api/repo/figu/file/F.pdf?v=2d1be253** > ### **1 Weaknesses: Our Broader Impact.** Yes. Although belonging to the broader field of panoramic image processing, the panoramic outpainting task still possesses rich applications, especially in today's VR/AR scenarios such as autonomous driving, medical imaging, cultural heritage preservation, and media and entertainment, to name but a few. The capability of completing panoramic content from masked views also allows for efficient compression and transmission of large-volume panoramic images, one of the key requirements for achieving the metaverse. Moreover, the newly proposed techniques, such as the spherical deformable convolution (SDC) operation and spherical noise, may also contribute to other panoramic-related tasks, including panoramic generation (as we have newly evaluated in **Rebuttal 5 to Reviewer vstk**) and content understanding. We believe this direction is worth investigating, with much still to be revealed, and our method paves a possible way forward for the field. > ### **2 Other Comments Or Suggestions: Typos** Many thanks! We will modify ''gird'' to ''grid'' and also carefully polish our wording, correcting grammatical errors and typos. > ### **3 Questions For Authors: Clarification on Equation (9)** Many thanks for the careful consideration. Yes, the outputs of our spherical net are integrated into the primary U-Net architecture, as depicted in Fig. 3 of our manuscript.
Therefore, Equation (9) was misleading and is corrected to: $$ \epsilon\_{\boldsymbol{\psi}} = f\_\text{SD}\left[\mathbf{Z}\_{t}, \mathbf{T}, f\_\text{PDN}\left[\mathbf{F}\_\text{pri}, \mathbf{T}\right]\right] $$ where $f\_{\text{SD}}$ denotes the pre-trained diffusion model and $f\_{\text{PDN}}$ denotes the spherical net, which is embedded into the diffusion model. Moreover, $\mathbf{T}$ is the embedding of the input prompt and $\epsilon\_{\boldsymbol{\psi}}$ is the output of our SpND model. In this way, the spherical net can extract panoramic features that guide the pre-trained diffusion model for panorama outpainting.
Summary: This work proposes to impose the spherical nature in the design of the diffusion model, such that the panoramic format is intrinsically ensured during the learning procedure; the model is named the spherical-nested diffusion (SpND) model. In particular, the authors employ spherical noise in the diffusion process to serve as a structural prior, together with a newly proposed spherical deformable convolution (SDC) module to intrinsically learn the panoramic knowledge. Experimental results demonstrate the effectiveness of the proposed methods on various datasets. Claims And Evidence: See the weaknesses for more details. Methods And Evaluation Criteria: See the weaknesses for more details. Theoretical Claims: See the weaknesses for more details. Experimental Designs Or Analyses: See the weaknesses for more details. Supplementary Material: NA Relation To Broader Scientific Literature: It is highly related to the panoramic image outpainting literature. Essential References Not Discussed: See the weaknesses for more details. Other Strengths And Weaknesses: Strengths: - The paper is generally well-structured, with illustrative diagrams effectively clarifying key concepts and model architectures. The motivation, design choices, and experiments are clearly presented, aiding reader comprehension. - By directly incorporating spherical constraints into the diffusion process, the method improves quantitative metrics (FID) compared to existing state-of-the-art approaches. Weaknesses: - Limited Novelty of the SDC Layer Compared to Existing Panoramic Vision Approaches: The paper proposes a Spherical Deformable Convolution (SDC) layer, aiming to better capture the spherical nature of panoramic images. Yet, the technical design closely resembles prior methods proposed in the existing panoramic vision literature.
For example, "Eliminating the blind spot: Adapting 3d object detection and monocular depth estimation to 360 panoramic imagery", "Panoformer: panorama transformer for indoor 360° depth estimation", "Cylin-Painting: Seamless 360 panoramic image outpainting and beyond", "Panoramic panoptic segmentation: Insights into surrounding parsing for mobile agents via unsupervised contrastive learning", etc. Clarifying how the modifications substantially differ from the above approaches through more explicit analyses and discussion would significantly strengthen the novelty claim. - Lack of Quantitative Evaluations on Non-i.i.d. Noise in ERP Format with Different Densities: The authors introduce the notion of non-i.i.d. spherical noise after ERP projection as a key component in their model. However, the paper does not quantitatively analyze how varying the sampling density impacts the performance or characteristics of the generated panoramic images. Although visual examples and qualitative insights are provided, rigorous numerical evaluations illustrating the effect of different densities and their impact on model performance (e.g., through ablation studies on various values) are missing. Such quantitative evaluations are crucial for understanding the practical robustness and sensitivity of the proposed method. - Limited Capability Regarding Input Versatility (Multiple Images Support): The proposed SpND model primarily focuses on scenarios where the panoramic image outpainting input is a single masked ERP image. However, many realistic applications, particularly those involving panoramic image reconstruction in AR/VR or autonomous driving scenarios, require handling multiple overlapping or partial-view inputs simultaneously. The current formulation and experimentation do not clearly address or demonstrate whether or how the proposed approach could effectively scale to or support scenarios involving multiple input images or viewpoints. 
Such limitations constrain the practicality and applicability of the method to real-world panoramic image generation tasks. Other Comments Or Suggestions: The ablation study lacks important qualitative evaluations. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Many thanks for the valuable comments! - **Tab.1 - 4 are in https://anonymous.4open.science/api/repo/tabl/file/T.pdf?v=5327f2ef** - **Fig.1 - 9 are in https://anonymous.4open.science/api/repo/figu/file/F.pdf?v=2d1be253** > ### **1 Weakness 1: Novelty of Our SDC Layer** To the best of our knowledge, the novelty of our SDC layer lies in the first successful attempt to propose a new **basic convolution operation** to address the panoramic deformation, the key to achieving high-quality panoramic outpainting. Our spherical noise setting also contributes to our overall novelty in addressing this task. In contrast, existing methods, including those for other tasks such as panoramic image segmentation and depth estimation, typically address the panoramic deformation via soft loss regularization or feature re-sampling strategies. More specifically, Ref. [1] adapted existing rectilinear architectures via a pre-defined rectilinear-panoramic projection, to achieve 3D object detection and monocular depth estimation. Ref. [2] employed spherical resampling solely for the value feature within the Transformer, to enhance panoramic image depth estimation. Ref. [3] added learnable positional encoding cues to existing features that are still processed by standard 2D convolution, for panoramic image outpainting. Ref. [4] regularized the network by a spherical contrastive loss to effectively capture panoramic content for the segmentation of 360° images. However, Refs. [1, 2, 4] are incapable of generating panoramic content and thus unsuitable for the panoramic image outpainting task. More importantly, all the existing methods rely on either extra feature re-organization (e.g., Refs. [2, 3]) or sphere-assisted losses (e.g., Refs. [1, 4]) to accommodate panoramic images.
In contrast, our SDC layer essentially alters the basic convolution operation, which is fundamentally different from the existing strategies adopted for both outpainting and other panoramic-related tasks. Indeed, this fundamental improvement of our SDC layer at the level of the convolution operation is an effective and straightforward way to enforce geometric cues when addressing panoramic deformation, thus yielding remarkably improved quality of outpainted images. Ref. [1]: Eliminating the blind spot: Adapting 3D object detection and monocular depth estimation to 360 panoramic imagery. Ref. [2]: Panoformer: Panorama transformer for indoor 360° depth estimation. Ref. [3]: Cylin-Painting: Seamless 360 panoramic image outpainting and beyond. Ref. [4]: Panoramic panoptic segmentation: Insights into surrounding parsing for mobile agents via unsupervised contrastive learning. > ### **2 Weakness 2: Quantitative Evaluations on Non-*i.i.d.* Noise in ERP Format** Yes, our non-*i.i.d.* noise setting caters to the panoramic format during outpainting, and we further conducted quantitative evaluations as suggested by the reviewer. More specifically, we further ablated two settings by varying the sample density $\zeta$, i.e., $\zeta=15$ and $\zeta=45$, and report the results in Tab. 1 in the link. As shown in the table, our SpND method achieves optimal performance at $\zeta = 45$ and $\zeta = 30$, consistent with the analysis in Supplementary-A, where the ERP noise closely aligns with the ground-truth features/images. Compared to $\zeta = 45$, $\zeta = 30$ offers the highest efficiency. Choosing $\zeta = 15$ slightly degrades outpainting performance but still outperforms existing baselines. These results confirm the rationale and robustness of our proposed non-*i.i.d.* ERP noise, derived from *i.i.d.* spherical noise.
> ### **3 Weakness 3: Limited Capability Regarding Input Versatility** We agree with the reviewer that outpainting from multiple input views can further improve the practicality and applicability of our method. Indeed, our model can readily accommodate multiple partial-view and overlapping inputs by adjusting the training masks accordingly. More specifically, - For the multi-input scenario, we newly evaluated our SpND method based on two viewpoints centered at $(\theta_1=-90^\circ, \phi_1=-15^\circ)$ and $(\theta_2=90^\circ, \phi_2=15^\circ)$ within the equirectangular projection (ERP) format, denoted as **Dual**. - For the overlapping scenario, we newly evaluated our SpND method based on two viewpoints centered at $(\theta_1=20^\circ, \phi_1=0^\circ)$ and $(\theta_2=90^\circ, \phi_2=15^\circ)$ with overlapped regions, also using the ERP format. We denote this as **Overlapping**. We report the results in Tab. 2, together with the subjective results in Figs. 1-2 in the link. From this table and these figures, we can conclude that our method generalizes to multiple overlapping and partial-view inputs. Note that, due to the limited rebuttal period, the model was trained only to adequate convergence, and performance improvements can be expected with further training. Even so, our SpND method still achieves superior performance on the panoramic image outpainting task. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed response, which addressed most of my concerns well. Thus, I would like to increase my rating. However, the revisions, especially the clarification of the novelty compared with previous panoramic works (regarding Weakness 1), should be presented in the paper. I believe such a clarification is crucial to highlight the contribution of this work. --- Reply to Comment 1.1.1: Comment: Thank you for the positive opinions and valuable suggestions. It is really great that you would like to increase your rating. Yes.
We totally agree that clarifying our differences from existing panoramic methods is important to highlight our novelty and contributions, especially regarding our SDC layer. During the rebuttal period, we are not allowed to submit a revised version. In our final revised paper, we shall comprehensively discuss the suggested 4 references in detail, by including our response to Weakness 1 with further improved clarifications, as well as other possibly relevant references.
Fast Estimation of Partial Dependence Functions using Trees
Accept (poster)
Summary: This paper proposes an efficient tree-based algorithm called FastPD for estimating partial dependence (PD) functions and extracting functional decompositions. The authors highlight how it can unify PD plots, Shapley values, and higher-order interaction effects under one consistent framework. By carefully caching and reusing computations, the method achieves gains over naive or path-dependent approaches, particularly when features are correlated. The main thrust is that FastPD provides both improved consistency and reduced computational complexity, compared to commonly used baselines like path-dependent TreeSHAP. In practice, it allows a user to quickly derive multiple interpretability artifacts from a single model-agnostic foundation. Claims And Evidence: The authors study the inconsistency of TreeSHAP, more specifically when features are correlated. Methods And Evaluation Criteria: The evaluation is quite weak; FastPD is only compared with two versions of TreeSHAP in one experiment of adding more background samples over a relatively small XGBoost model. The evaluation is conducted only on synthetic datasets; we cannot draw any conclusions from the experiments provided in the article. Theoretical Claims: The paper studies the consistency of the proposed method and the inconsistency of TreeSHAP. Experimental Designs Or Analyses: While the authors include simulation studies—showing that the method works better on synthetic data compared to TreeSHAP—they do not examine any real-world datasets or large-scale industrial applications. This can make it difficult to assess how well FastPD copes with real-world issues such as messy data distributions, large numbers of features with intricate dependencies, and numeric stability in high-dimensional spaces. This is particularly important because the proposed method is primarily an algorithmic innovation. 
Supplementary Material: Skimmed through it Relation To Broader Scientific Literature: There are several studies at the intersection of functional decomposition and PDP. FastPD could be a worthy added value to the literature, but it definitely requires far more experiments and probably more theoretical studies on the behavior of the method. Essential References Not Discussed: References are fine, but a recent study is missing: Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory [1] Hiabu, M., Meyer, J. T., and Wright, M. N. Unifying local and global model explanations by functional decomposition of low dimensional structures Other Strengths And Weaknesses: I think the most crucial problem is the experimental setup; the evaluative measures are not rigorous, and the experiments are only conducted on small, limited synthetic datasets. TreeSHAP is also the only benchmark. More comprehensive experiments are required, where FastPD is compared on real datasets according to some meaningful evaluative measures. This is particularly important for FastPD because it puts forward an algorithmic way of computing PD rather than a rigorously theoretical one. Other Comments Or Suggestions: The title could be more informative by mentioning that FastPD is particularly for tree-based machine learning models. Questions For Authors: The authors should discuss that not all functional components can be acquired, as there are exponentially many such components. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your comments. In our response to the first reviewer we have conducted further experiments of our method on real data (see **1. Real-world application** in our response to R1). In our response to the second reviewer, we advise using shallow trees to reduce complexity for large-scale datasets with many features (see **2. High-complexity models** in our response to R2). We believe that our method is still very applicable even in high dimensions, provided that the model is not overly complex. We have not encountered numerical instability in our experiments with real data: with many samples, the estimation of partial dependence (PD) functions becomes more reliable, not less. Moreover, practitioners are typically interested in marginal effects and low-order interactions, because they are easier to interpret. In all cases, the actual quantity being computed is just an empirical average, for which `FastPD` can provide a speed-up when the model is a tree-based model. We agree that not all functional components could or should be acquired: for higher-order interactions, the effects of these components are usually very close to zero. We will make this point clearer in the paper. This further motivates the use of shallow trees, as they limit the fitting of higher-order interactions and preserve interpretability. As a remark, in the package implementation of our method, we provide the user the option to specify the order of the effects they wish to acquire. For the reference that we had inadvertently missed, perhaps it was the following you meant? > Fumagalli, F., Muschalik, M., Hüllermeier, E., Hammer, B., & Herbinger, J. (2024). *Unifying Feature-Based Explanations with Functional ANOVA and Cooperative Game Theory*. arXiv preprint arXiv:2412.17152. We have already cited Hiabu et al. multiple times, but were not yet aware of Fumagalli et al. and will ensure it is cited in the revised version. 
Finally, we agree that the paper title could be clearer. A possible revision would be: **"Fast Estimation of Partial Dependence Functions using Trees"**, which more accurately reflects the tree-based nature of our approach.
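As an aside, the "empirical average" view of PD estimation mentioned in the rebuttal can be made concrete with a minimal vanilla-PD sketch on a toy additive model. All names here are hypothetical, and this is the naive quadratic-cost estimator, not the FastPD algorithm itself:

```python
import numpy as np

def vanilla_pd(predict, X_background, j, grid):
    """Estimate the PD function of feature j on `grid` as an empirical
    average over the background sample: PD(v) = mean_i f(x_i with x_ij := v).
    Cost is O(len(grid) * len(X_background)) model evaluations."""
    values = []
    for v in grid:
        X_mod = X_background.copy()
        X_mod[:, j] = v                       # intervene on feature j only
        values.append(predict(X_mod).mean())  # average over the background
    return np.array(values)

# Toy additive model f(x) = 2*x0 + x1: the PD of feature 0 at v is
# exactly 2*v + mean(x1 over the background sample).
rng = np.random.default_rng(0)
X_bg = rng.normal(size=(500, 2))
predict = lambda X: 2 * X[:, 0] + X[:, 1]
grid = np.linspace(-1.0, 1.0, 5)
pd_est = vanilla_pd(predict, X_bg, j=0, grid=grid)
```

For tree models, FastPD's contribution is precisely to avoid this per-grid-point pass over the background data by caching per-leaf statistics.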
Summary: This paper proposes FastPD, an efficient, and consistent algorithm for estimating Partial Dependence (PD) functions in tree-based models. FastPD addresses computational inefficiencies present in existing methods by decomposing the estimation process into a two-step procedure: a tree augmentation step leveraging background data and an evaluation step for calculating PD functions. The authors show that FastPD improves computational complexity from quadratic to linear with respect to the number of samples. Furthermore, the paper demonstrates that existing methods, like TreeSHAP-path, are inconsistent when features are correlated. Experimental results indicate that FastPD provides more accurate estimates of PD-based interpretations, including SHAP values, compared to competing algorithms. Claims And Evidence: The claims regarding the computational efficiency and consistency of FastPD are convincingly supported through theoretical propositions and experimental validations. However, the claim about significant differences between FastPD and TreeSHAP-path estimates is primarily demonstrated through simulation experiments with moderate feature correlation. Additional real-world experiments could further strengthen these claims. Methods And Evaluation Criteria: The methods and evaluation criteria used in the paper are appropriate for the stated problem of estimating PD functions efficiently. The paper clearly outlines how the complexity and consistency of the algorithms are assessed. Using simulations with varying levels of feature correlation and benchmarking against established methods (TreeSHAP-path, TreeSHAP-int, VanillaPD) is sensible and effective. Theoretical Claims: The theoretical claim regarding the inconsistency of TreeSHAP-path is clearly stated, and its correctness is supported by a formal proof provided in the supplementary material. I reviewed the proof in the appendix and found it sound and convincing. 
Experimental Designs Or Analyses: The experimental design and analysis presented in the paper appear sound. The experiments include comparisons of computational runtime, estimation error (mean squared error), and consistency across different correlation settings. No immediate issues were found in the methodology or its implementation. Supplementary Material: I have reviewed the supplementary material, specifically focusing on the proofs provided in the appendix and additional experimental setups (different correlation scenarios). The supplementary material is thorough and supports the main findings effectively. Relation To Broader Scientific Literature: The paper positions itself clearly within the broader scientific literature, notably referencing foundational work on SHAP values, PD functions, and functional decomposition. It builds directly on TreeSHAP-related methods and addresses previously identified limitations (such as inconsistency due to feature correlation). Essential References Not Discussed: The paper is comprehensive in its citations; however, additional references exploring the scalability of SHAP (beyond tree-based methods) would enhance context for readers interested in broader applicability. - Jethani, Neil, et al. "Fastshap: Real-time shapley value estimation." International conference on learning representations. 2021. - Wang, Guanchu, et al. "Accelerating shapley explanation via contributive cooperator selection." International Conference on Machine Learning. PMLR, 2022. Other Strengths And Weaknesses: Strengths include clear contributions, detailed and rigorous mathematical arguments regarding algorithmic complexity, and practical relevance for model interpretability in correlated feature settings. The experimental validation is thorough and clearly presented. A weakness is that the scalability discussion focuses primarily on moderately deep trees and relatively simple experimental settings. 
The impact of scaling to larger, real-world datasets and more complex tree-based ensembles (deeper or broader models) is less clear. Other Comments Or Suggestions: The paper could be clearer in distinguishing the practical implications of model-based PD functions versus ground-truth PD functions in realistic scenarios. Minor typo found: "adddition" should be "addition" (Section 1.1). Questions For Authors: How does FastPD scale when applied to larger and deeper tree-based ensembles (e.g., random forests or deep gradient boosting trees)? Clarifying this could strengthen practical applicability. Could you provide guidance or recommendations on choosing the optimal number of background samples for FastPD in different practical scenarios? Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments; we address them below. 1. **Real-world applications:** We have addressed this in our reply to the first reviewer. Please see **1. Real-world application** in our response to R1. 2. **High-complexity models:** The computational complexity of both our algorithm and the baselines scales exponentially with depth, which motivates the use of smaller depth values. Additionally, increasing depth captures higher-order interactions that are inherently less interpretable. Gradient Boosting methods such as XGBoost fit shallow trees well; we therefore recommend using a depth of 5 or less in practice. We will clarify this point in the paper. 3. **Referencing SHAP scalability:** We will add some references for non-tree-based SHAP algorithms in our introduction. 4. **Distinguishing model PD and ground-truth PD:** We will add the following to the application section: *"We see that FastPD is well-suited for estimating the model PD, though it may deviate from the underlying ground truth PD. In practical settings where accurately capturing the relationship between the predictors and response is crucial, we recommend using FastPD as an initial visualization tool, which is to be complemented by other statistical methods for estimating continuous treatment effects, see e.g. Kennedy, E. H., Ma, Z., McHugh, M. D., & Small, D. S. (2017). Non-parametric methods for doubly robust estimation of continuous treatment effects. Journal of the Royal Statistical Society Series B: Statistical Methodology, 79(4), 1229–1245; and Chernozhukov, V., Chetverikov, D., Demirer, M., Duflo, E., Hansen, C., Newey, W., & Robins, J. (2018). Double/debiased machine learning for treatment and structural parameters. The Econometrics Journal, 21(1), C1–C68."* 5. **Choice of background sample size:** We now note in the main text that in practice, one should use all available data as background. 
If this is not feasible, a smaller background sample size can be chosen (e.g., $n_b = 100$) and compared to a slightly larger one (e.g., $n_b = 150$). If the resulting PD functions are similar, then the smaller sample size is likely sufficient. As a remark, in our experiments we have observed that augmenting the tree is typically fast, so the computational cost of using more background samples is low. The majority of computation time is instead spent on evaluating the explanation points.
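The background-size check suggested in the rebuttal (comparing, e.g., $n_b = 100$ against $n_b = 150$) can be sketched as follows, using a generic empirical-average PD estimator on toy data. The names and toy model are illustrative assumptions, not the FastPD implementation:

```python
import numpy as np

def pd_curve(predict, X_bg, j, grid):
    # Empirical-average PD of feature j over the background sample X_bg.
    out = []
    for v in grid:
        X_mod = X_bg.copy()
        X_mod[:, j] = v
        out.append(predict(X_mod).mean())
    return np.array(out)

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
predict = lambda X: X[:, 0] ** 2 + X[:, 1]   # toy fitted model
grid = np.linspace(-2.0, 2.0, 9)

pd_100 = pd_curve(predict, X[:100], j=0, grid=grid)   # n_b = 100
pd_150 = pd_curve(predict, X[:150], j=0, grid=grid)   # n_b = 150
gap = np.max(np.abs(pd_100 - pd_150))
# If `gap` is small relative to the range of the PD curve, the smaller
# background sample is likely sufficient.
```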
Summary: This paper introduces FastPD, a novel tree-based algorithm for estimating partial dependence (PD) functions, which are central for interpreting machine learning models via SHAP values. The paper identifies a critical limitation in the commonly used TreeSHAP-path method: while TreeSHAP-path is theoretically exact under feature independence, its reliance on conditioning on the decision path can lead to biased SHAP estimates when features are correlated. FastPD overcomes this issue by decoupling the computation into an augmentation phase—where the full background dataset is used to precompute empirical probabilities—and an efficient evaluation phase that reuses these values to compute the PD functions exactly. The result is a method that is both computationally efficient and robust to feature correlations. Claims And Evidence: Overall, the claims are well supported by clear theoretical derivations and convincing simulation experiments, though real-world dataset experiments would further strengthen the evidence. Methods And Evaluation Criteria: The proposed method is specifically designed for tree-based models, leveraging the structure to precompute necessary statistics (augmentation) and then efficiently evaluate PD functions. This two-step approach is both novel and well-motivated. Overall, the methods and evaluation criteria make sense for the problem of model interpretability, although additional experiments on real-world datasets would enhance the evaluation. Theoretical Claims: The theoretical arguments are well-presented and, at a high level, correct. Minor technical details may need further scrutiny, but no major issues were identified. Experimental Designs Or Analyses: The experiments are based on simulated data with controlled levels of feature correlation using tree-based models (e.g., XGBoost). The comparisons include multiple methods (VanillaPD, TreeSHAP-path, TreeSHAP-int, and FastPD) and evaluate both accuracy (MSE) and computational runtime. 
The experimental analyses are sound and clearly demonstrate that FastPD achieves lower estimation error and faster computation. While the designs are rigorous for simulation studies, incorporating additional experiments on real-world datasets would further validate the approach in practical settings. Supplementary Material: Strengths: Novel decoupling of augmentation and evaluation steps to compute exact PD functions. Clear mathematical exposition linking SHAP values, partial dependence functions, and the issues with conditioning in TreeSHAP-path. Comprehensive simulation experiments that highlight both estimation accuracy and runtime improvements. Weaknesses: Lack of evaluation on real-world datasets limits the demonstration of practical applicability. The algorithm’s presentation could benefit from more intuitive diagrams or flowcharts to aid understanding (as a suggestion, maybe add a description with a figure and move the algorithms to the appendix? As is, it is okay, but I think more intuitive explanations or explaining the essence of the algorithm may be better [i.e. tree augmentation]) Reproducibility details (e.g., explicit hyperparameter settings) could be more thoroughly described in the main text. Relation To Broader Scientific Literature: In general this is a good contribution to the literature on interpretability, and shapley values, particularly addressing the bias with existing approaches like TreeSHAP. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: Strengths: Novel decoupling of augmentation and evaluation steps to compute exact PD functions. Clear mathematical exposition linking SHAP values, partial dependence functions, and the issues with conditioning in TreeSHAP-path. Comprehensive simulation experiments that highlight both estimation accuracy and runtime improvements. Weakness: Lack of evaluation on real-world datasets limits the demonstration of practical applicability. 
The algorithm’s presentation could benefit from more intuitive diagrams or flowcharts to aid understanding. Reproducibility details (e.g., explicit hyperparameter settings) could be more thoroughly described in the main text. Other Comments Or Suggestions: Consider including experiments on real-world datasets to validate the method’s practical relevance. For example, what can practitioners expect to see in practice? Is the bias of TreeSHAP really that bad? Can it lead to "catastrophic" situations where you attribute significantly less importance to something with TreeSHAP vs FastPD (we now know it is consistent, but can you make a case for why this is so bad in practice in a relatively varied collection of benchmark datasets?). I think these extensions will really increase the impact of the paper, which is why I want to be transparent and tell the authors that I will definitely raise my score from a 3, if a stronger set of experiments with interpretations on the differences between FastPD and other methods is presented! I think it will really convince readers to use this method! Additional illustrative diagrams of the augmentation and evaluation steps in FastPD would improve clarity. Questions For Authors: Code seems a little messy; do the authors intend to release an easy-to-use package? I think it would be helpful for people to adopt this approach. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your comments; we address them below. 1. **Real-world application:** - **Benchmark:** For an additional comparison between `FastPD`, `FastPD-100`, `FastPD-50`, and the path-dependent method, we have now added a benchmark considering 33 regression and 29 classification datasets from the OpenML-CTR23 Task Collection and the OpenML-CC18 Task Collection. For a set $S \subseteq \{1,\dots,d\}$, we compute the importance measure of the estimated functional component $\hat{m}_S$ by $\hat{E}[|m_S(x)|]$, where $\hat{E}[\cdot]$ denotes the empirical mean. For each dataset, we record the importance of the components $\hat{m}_S$ that are ranked in the top 5 by any of the four methods (`FastPD`, `FastPD-100`, `FastPD-50`, path-dependent). The results are available here: [https://www.dropbox.com/scl/fi/ujdikzu8iyrn53ha03qc4/summary.pdf?rlkey=ypnkbdhjwgylr38ylj16ugjzj&st=f97bol4d&dl=0](https://www.dropbox.com/scl/fi/ujdikzu8iyrn53ha03qc4/summary.pdf?rlkey=ypnkbdhjwgylr38ylj16ugjzj&st=f97bol4d&dl=0) We observe that while results are consistent across methods for some datasets, substantial differences can emerge in others. In several cases the differences are >100%, i.e., the feature importance values from the path-dependent method are more than twice as high as those from FastPD. - **Concrete example:** We highlight one dataset (`adult`) where `FastPD` and the path-dependent method gave qualitatively different insights. The following plot is based on a single model obtained from one run of hyperparameter search. We have plotted the interaction effect between age and relationship status: $m_{\text{age,relationship}}$. 
[https://www.dropbox.com/scl/fi/8wx0xj34rglhst6tm8eb0/adult.pdf?rlkey=nrgz74xjrd1udoxgrhohwx6au&st=yclm7q6o&dl=0](https://www.dropbox.com/scl/fi/8wx0xj34rglhst6tm8eb0/adult.pdf?rlkey=nrgz74xjrd1udoxgrhohwx6au&st=yclm7q6o&dl=0) The path-dependent method indicates no significant interaction effect during working age (35–60), suggesting that age affects husbands and wives similarly in this range. Outside this interval, it estimates a more positive effect for wives than for husbands. Using `FastPD`, we get a different picture: Between ages 30 and 65, the effect is more positive for husbands than for wives, with the advantage reversing outside this age range. 2. **Flowcharts/diagrams:** We agree that a flowchart and illustration would help in explaining the algorithm, and will add it for the camera-ready version. 3. **Reproducibility details:** We will modify the beginning of the experiments section to include an overview of our hyperparameter settings. 4. **Package implementation:** A package implementation exists, including automated plot functions. However, due to anonymity, we prefer not to share it at this point, but it will be made available in the camera-ready version. --- Rebuttal Comment 1.1: Comment: These are very interesting findings, which I believe significantly strengthen the story behind this paper! I have raised my score to a 4 and I am happy to recommend acceptance!
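For concreteness, the importance measure $\hat{E}[|m_S(x)|]$ and the top-5 ranking described in the benchmark above can be sketched as follows. The component values and names are hypothetical placeholders, not outputs of the actual benchmark:

```python
import numpy as np

def importance(components):
    """components: dict mapping a feature subset S (a tuple) to the array of
    estimated component values m_S(x_i) at the explanation points.
    Returns the importance measure E_hat[|m_S(x)|] for each subset."""
    return {S: float(np.mean(np.abs(v))) for S, v in components.items()}

def top_k(components, k=5):
    # Rank subsets by their importance, highest first.
    imp = importance(components)
    return sorted(imp, key=imp.get, reverse=True)[:k]

# Hypothetical estimated components at four explanation points.
comps = {
    (0,):   np.array([ 1.0, -2.0,  0.5, -0.5]),  # importance 1.0
    (1,):   np.array([ 0.1,  0.2, -0.1,  0.0]),  # importance 0.1
    (0, 1): np.array([ 0.5,  0.5, -0.5, -0.5]),  # importance 0.5
}
ranked = top_k(comps, k=2)
# → [(0,), (0, 1)]
```

Recording this ranking separately for each estimation method (FastPD variants vs. path-dependent) is what allows the per-dataset comparison reported in the linked summary.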
Step-DAD: Semi-Amortized Policy-Based Bayesian Experimental Design
Accept (poster)
Summary: This paper proposes Step-DAD, a semi-amortized approach to Bayesian experimental design (BED) that extends the existing Deep Adaptive Design (DAD) framework. While DAD pre-trains a fixed policy network before experimentation, Step-DAD allows for test-time adaptation of this policy during deployment, periodically refining it based on accumulated experimental data. The authors evaluate Step-DAD on three experimental design problems: source location finding, hyperbolic temporal discounting, and constant elasticity of substitution. Results consistently show that Step-DAD outperforms DAD and other BED baselines, including when dealing with prior misspecification and extended experimental horizons. Claims And Evidence: The paper's main claim that Step-DAD outperforms existing BED methods is well-supported by empirical evidence across multiple experimental settings, which are commonly used in BED literature. The authors present thorough comparisons with appropriate baselines using consistent evaluation metrics (EIG bounds). Methods And Evaluation Criteria: The proposed methods are appropriate for the BED problem. Using EIG bounds as the evaluation metric aligns with the standard objective in BED literature. The problems selected in the paper are appropriate, which are commonly used benchmarks in BED literature. The inclusion of additional evaluation scenarios such as prior misspecification strengthens the assessment of the method's practical utility. However, one minor concern is that all evaluations are on synthetic problems rather than real-world datasets, it would be nice to see one such case which can support the need of doing test-time training. Theoretical Claims: I've checked the theoretical claims, particularly Proposition 3.1 regarding the decomposition of total EIG. 
The proof appears sound, correctly leveraging the factorization of joint likelihoods and probabilities to establish the independence of policy optimality for later steps given the history at the intermediate point. Experimental Designs Or Analyses: The experimental designs are generally sound. However, it would be nice to see a comparison with RL-based BED approaches such as the work by Blau et al. (2022). Given that the semi-amortized framework is architecture-agnostic, it would be valuable to see how Step-DAD principles could be applied to RL-BED methods. This would provide a more comprehensive understanding of the benefits of semi-amortization across different policy learning paradigms. Supplementary Material: I've read the supplementary material. It provides comprehensive details that support the main paper's findings. Relation To Broader Scientific Literature: The paper properly situates itself within the BED literature, acknowledging connections to: * Traditional adaptive BED frameworks * Policy-based BED approaches * Reinforcement learning (RL) literature, particularly offline model-based RL Essential References Not Discussed: It would be nice to also mention some other latest BED work in the related work section, where some of these approaches can benefit from the proposed semi-amortized framework: [1] Iollo, Jacopo, et al. "Bayesian Experimental Design Via Contrastive Diffusions." ICLR. [2] Iollo, Jacopo, et al. "PASOA-PArticle baSed Bayesian optimal adaptive design." ICML. [3] Huang, Daolang, et al. "Amortized Bayesian Experimental Design for Decision-Making." Neurips. [4] Iqbal, Sahel, et al. "Nesting Particle Filters for Experimental Design in Dynamical Systems." ICML. [5] Iqbal, Sahel, et al. "Recursive Nested Filtering for Efficient Amortized Bayesian Experimental Design." Neurips BDU workshop. Other Strengths And Weaknesses: Strengths: * The semi-amortized framework is a natural extension of existing BED approaches. 
* The empirical improvements are significant and consistent across problems. * The paper is well-written and easy to understand. Weaknesses: * The contribution is somewhat incremental, building directly on DAD without fundamental architectural innovations. * While the paper demonstrates robustness to prior misspecification, it lacks exploration of other types of model misspecification. Given that the framework can be used to address model misspecification, investigating additional forms of misspecification (e.g., likelihood function misspecification) would strengthen the paper's claims about robustness. * The paper lacks a systematic analysis for balancing additional computational cost against EIG gains. A metric or guideline for determining the optimal update schedule (number and timing of updates) would provide practical value for deployment. Other Comments Or Suggestions: The paper is very well written, I can't spot any typos or apparent mistakes. Questions For Authors: * For real-world applications where online policy updates might be computationally constrained, do you have recommendations for determining the optimal update schedule (when and how often to update)? * The paper shows Step-DAD is more robust to prior misspecification. Have you investigated how it performs when other aspects of the model are misspecified (e.g., likelihood functions)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you kindly for your helpful review. >It would be nice to see a comparison with RL-based BED approaches such as the work by Blau et al. (2022). Given that the semi-amortized framework is architecture-agnostic, it would be valuable to see how Step-DAD principles could be applied to RL-BED methods. We agree that this could be a nice addition if time and space allow. As you note, the StepDAD approach can be directly applied within RL-BED policy training frameworks, so these simply provide an alternative strategy for the policy refinement we are proposing, rather than a directly competing approach. The main reason we focused on direct policy training strategies in our experiments was that they generally allow the policy to be updated far more cheaply than using an RL-based approach, thus making them more suitable for deployment-time policy refinements where time is more critical than the original policy training. We will add further discussion on this to the paper. >It would be nice to also mention some other latest BED work in the related work section, where some of these approaches can benefit from the proposed semi-amortized framework: Thank you for this excellent suggestion. We will update the related work section to include discussion of the additional papers you mention, several of which could benefit from or be extended by the proposed semi-amortized framework. > The paper lacks a systematic analysis for balancing additional computational cost against EIG gains. Thank you for highlighting this. We agree that this analysis would benefit the paper and have added new results on the trade-off between cost and performance. Specifically, we have done two new ablations for the location finding experiment. The first is to vary the amount of compute spent on inference and fine-tuning. 
The results, which can be found here: https://tinyurl.com/EIG-wall-time (anonymous), show that meaningful gains over DAD can be achieved with only a couple of minutes of computation. The benefits improve as the fine-tuning budget is increased, before plateauing. The second new ablation is to increase the number of times that the policy is refined through the experiment. The results can be found here: https://tinyurl.com/eig-interventions (anonymous). They show that there is an initial benefit to increasing the number of intervention steps before plateauing once again. >For real-world applications where online policy updates might be computationally constrained, do you have recommendations for determining the optimal update schedule (when and how often to update)? While the optimal update schedule will vary between applications, our findings do provide some helpful guidance: - Our new results above show that there are often diminishing returns from conducting many policy refinements, so it will not usually be necessary to refine the policy at every step; in fact, there is a general plateau beyond two interventions for the T=10 budget case. - Figures 2 and 4 suggest that refining the policy in the region of halfway to three quarters of the way through the experiment may have the biggest impact when only doing a single refinement. - Even small amounts of policy refinement (on the order of a couple of minutes) can be beneficial. We also note that in many real applications, the update schedule may be directly dictated by compute constraints, with little flexibility. For example, if each experiment itself takes ~10 minutes to run, this gives us a clear per-iteration budget that we can use while we wait for the next experiment to complete. We will add further discussion on these practical considerations. >The paper shows Step-DAD is more robust to prior misspecification. 
Have you investigated how it performs when other aspects of the model are misspecified (e.g., likelihood functions)? Thank you for the great suggestions. We have not explicitly investigated robustness to likelihood misspecification. Like other amortized BED methods, Step-DAD remains sensitive in such cases, and understanding this susceptibility is an important direction for future work in BED literature. We believe that Step-DAD in its current form is less likely to help guard against likelihood misspecification than prior misspecification as the same likelihood is still used in the policy refinement, whereas the prior is replaced by the intermediary posterior (which may have corrected some of the issues of the original prior, such as if it is not sufficiently informative). Interestingly though, Step-DAD does open up avenues for future work in this direction. For example, one could consider doing model checking before policy refinement, then training under a new likelihood if the data collected indicates the original one should be rejected. We feel it is beyond the scope of the current paper to fully investigate this, but we will add further discussion as it is certainly an interesting future direction. --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I hope the authors will update the promised changes in the new manuscript. I have also checked the comments of other reviewers and I decided to keep my score, which leans towards acceptance.
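For readers unfamiliar with the EIG bounds used as the evaluation metric in this exchange, a minimal sketch of a PCE-style (prior contrastive estimation) lower bound for a toy linear-Gaussian model is given below. The toy model ($y \sim N(\theta d, \sigma^2)$ with $\theta \sim N(0,1)$), names, and settings are illustrative assumptions of our own, not the authors' implementation:

```python
import numpy as np

def pce_lower_bound(d, n_outer=2000, n_contrastive=50, sigma=0.5, seed=0):
    """Monte Carlo PCE lower bound on the EIG of a single design d for the
    toy model y ~ N(theta * d, sigma^2), theta ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    theta0 = rng.normal(size=n_outer)                     # "true" thetas
    y = theta0 * d + rng.normal(scale=sigma, size=n_outer)
    contrast = rng.normal(size=(n_contrastive, n_outer))  # contrastive thetas
    thetas = np.vstack([theta0[None, :], contrast])       # include theta0
    # Log-likelihoods up to a constant (it cancels between num. and denom.).
    log_lik = -0.5 * ((y[None, :] - thetas * d) / sigma) ** 2
    m = log_lik.max(axis=0)                               # stable logsumexp
    lse = m + np.log(np.exp(log_lik - m).sum(axis=0))
    # bound = E[ log p(y|theta0) - log( (1/(L+1)) sum_l p(y|theta_l) ) ]
    return float(np.mean(log_lik[0] - lse + np.log(n_contrastive + 1)))
```

The bound is capped at log(L+1), which is why evaluation with such bounds uses a large number of contrastive samples; larger designs (more informative experiments) should yield larger bound values.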
Summary: They propose a semi-amortized approach to Bayesian experimental design, in which a policy is learned offline (as in the standard fully amortized approach) and then is adapted online. This increases computational cost but results in more adaptive designs, which are more robust to model misspecification. The paper is extremely well written and the method is elegant and effective. Claims And Evidence: They derive a new algorithm and claim it results in improved information gain for a given number of experiments/samples. They empirically demonstrate this is true on two different datasets, Source Location Finding and Constant Elasticity of Substitution (CES). They do careful ablation studies to show where the gains come from. Methods And Evaluation Criteria: Yes Theoretical Claims: This is an algorithms paper, so does not have new theory. Experimental Designs Or Analyses: Very solid. Supplementary Material: No Relation To Broader Scientific Literature: Authors propose a novel and useful variant of amortized BED in which they allow the agent to learn an amortized policy offline in the usual way, and then adapt it at run time in light of observed data, by sampling potential outcomes from the posterior predictive rather than the prior predictive. This is a very elegant and useful idea. Essential References Not Discussed: NA Other Strengths And Weaknesses: NA Other Comments Or Suggestions: NA Questions For Authors: - For the eval metric, have you considered using synthetic data where the ground truth theta* is available, and then assessing the distance between E[theta | h_T] and theta*, where h_T is from a particular design policy? - Sec 6.3: How do you evaluate different designs if you just have access to an empirical dataset, and not a DGP (simulator)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you kindly for your helpful review. > For the eval metric, have you considered using synthetic data where the ground truth theta* is available, and then assessing the distance between E[theta|hT] and theta*, where hT is from a particular design policy? Yes! Assessing the distance between the posterior mean and the true parameter value is indeed a reasonable metric, and one we did consider. We ultimately chose to focus on using the EIG to assess performance as this is the metric most commonly used in the BED literature and it tends to be more robust than this distance (in particular, it is possible for the posterior mean to be very close to theta* while still having significant posterior variance). Preliminary results with this metric were very similar to those of the EIG and we can add such comparisons if you think they are important. >How do you evaluate different designs if you just have access to an empirical dataset, and not a DGP (simulator)? In general, this is very difficult as we are trying to evaluate the quality of a data-gathering process, which cannot be directly done simply by having access to an existing dataset. In the scenario where we have some existing data and then want to gather more data, the most natural approach is to train a model on the data we already have, then use this model within an experimental design framework. Depending on context, problems like this can also sometimes be tackled using active learning or reinforcement learning methods, instead of experimental design ones.
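The rebuttal's caveat (that the posterior mean can land close to theta* while the posterior itself stays diffuse) can be illustrated with a toy sketch. The 2-D Gaussian posteriors and all numbers here are hypothetical and not taken from the paper:

```python
import math
import random

random.seed(0)
theta_star = (1.0, -0.5)
n = 10_000

def draw_posterior(scale):
    # toy 2-D Gaussian posterior centred on the true parameter theta*
    return [(random.gauss(theta_star[0], scale),
             random.gauss(theta_star[1], scale)) for _ in range(n)]

post_tight = draw_posterior(0.1)    # concentrated posterior
post_diffuse = draw_posterior(1.0)  # diffuse posterior, same mean

def mean_distance(samples):
    # distance between the posterior mean and theta*
    mx = sum(s[0] for s in samples) / len(samples)
    my = sum(s[1] for s in samples) / len(samples)
    return math.hypot(mx - theta_star[0], my - theta_star[1])

def gaussian_entropy(samples):
    # differential entropy of a diagonal-Gaussian fit to the samples;
    # an information-based score (like EIG) is sensitive to spread
    ent = 0.0
    for d in range(2):
        m = sum(s[d] for s in samples) / len(samples)
        v = sum((s[d] - m) ** 2 for s in samples) / len(samples)
        ent += 0.5 * math.log(2 * math.pi * math.e * v)
    return ent

# Both posteriors score almost identically on mean distance...
print(mean_distance(post_tight), mean_distance(post_diffuse))
# ...but differ by roughly log(100) nats in entropy.
print(gaussian_entropy(post_diffuse) - gaussian_entropy(post_tight))
```

Both mean distances come out near zero, yet the diffuse posterior carries far less information about theta*, which is exactly the failure mode of the distance metric that the authors describe.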
Summary: The paper deals with Bayesian adaptive experimental design for identifying model parameters. Fully adaptive strategies are costly and myopic. Recent work has proposed amortized experimental design in which a neural net maps from observed data directly to the experimental design policy. That work, however, is not sufficiently adaptive as it cannot learn directly from the data in the current experiment. The paper introduces Step-DAD which attempts to blend the two. Essentially it adopts the amortized neural network approach, but adds in fine-tuning based on the results of the current experiment. This improves performance relative to the amortized approach and various static approaches. ## Update after rebuttal: increased score Claims And Evidence: Generally the paper is well written and the claims are supported by clear and convincing evidence. There was one claim that I did not see well-supported. The paper claims that the per-iteration cost of the fully adaptive methods is too time-consuming, and that the proposed Step-DAD is ideal for the "many problems where we can afford to perform some test-time training during the experiment itself." However, nowhere does the paper actually give results for how much test-time training is required by the method. This is given only in terms of number of fine-tuning iterations, as opposed to wall time (what actually matters for the framing of the problem). I expected to see a plot showing the trade-off between achieved EIG and fine-tuning wall time, where with zero fine-tuning wall time we would match the EIG of DAD, and then we could see how much wall time is required to improve significantly over that, and if indeed there are many problems where we can afford that much test-time training. If this plot were to be added, I'd be more supportive of the paper; I think its absence is significant. Methods And Evaluation Criteria: I have no concerns with the selection of baselines or problems.
Theoretical Claims: Yes, Prop 3.1 Experimental Designs Or Analyses: The evaluation was all reasonable to me. Supplementary Material: I searched the supplement for results on wall times and did not find any. Relation To Broader Scientific Literature: The framing with respect to past work was well described. Essential References Not Discussed: Not aware of any. Other Strengths And Weaknesses: The paper is well-written and everything is well motivated. Other Comments Or Suggestions: N/A Questions For Authors: What does the EIG vs. wall time trade-off look like as we increase the amount of fine tuning? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you kindly for your helpful review. >I expected to see a plot showing the trade-off between achieved EIG and fine-tuning wall-time Thank you for the great suggestion. We have now implemented this analysis, and the chart illustrating this trade-off can be found here: https://tinyurl.com/EIG-wall-time (anonymous). We will incorporate these results into the final revised version of the paper. The results show that significant improvements can be achieved with only a few minutes of fine-tuning time, with further tuning providing additional benefits but with diminishing returns. By contrast, traditional BED approaches often require test-time computation on the order of hours (Foster et al., 2021), and similarly training the original DAD network (50k steps) was also on the order of hours. > There was one claim that I did not see well-supported. The paper claims that the per-iteration cost of the fully adaptive methods is too time-consuming, and that the proposed Step-DAD is ideal for the "many problems where we can afford to perform some test-time training during the experiment itself." We would like to emphasize that the main benefit we expect from Step-DAD over the traditional greedy approach is in the quality of the designs, rather than simply in terms of cost, as is demonstrated in our numerical results. While the above results show that there are computational cost benefits as well, our primary motivation is still to give the best possible design performance when there is computational time available during the experiment. We will make edits to the paper to ensure this is clear. *Adam Foster, Desi R Ivanova, Ilyas Malik, and Tom Rainforth. Deep adaptive design: Amortizing sequential Bayesian experimental design. Proceedings of the 38th International Conference on Machine Learning (ICML), PMLR 139, 2021* --- Rebuttal Comment 1.1: Comment: New result looks great, thanks!
Summary: This paper introduces Step-DAD, a hybrid between traditional and fully amortized policy-based approaches to Bayesian Experimental Design (DAD) that retrains its policy online as it gathers additional observations. The authors argue that this allows it to retain the benefits of policy-based BED while overcoming its key limitation of not being able to adapt the policy in response to data collected. They discuss training procedures (mostly based on existing technical contributions). Finally, Step-DAD is evaluated in a number of experiments against DAD and other baselines, demonstrating the expected performance improvements in terms of EIG and robustness to model misspecification. ## Update after rebuttal My assessment remains unchanged - this is a good and well-executed paper and deserves to be accepted, but for me to champion the paper with a strong accept I would want to see some more novel ideas or some harder technical challenges solved. Claims And Evidence: Claims are clear and supported by convincing evidence. Methods And Evaluation Criteria: * The method is a very natural evolution of DAD, and largely relies on existing methods and results used in a novel way. * The evaluation criteria make sense, and the empirical evaluation is relatively comprehensive and includes relevant and insightful ablations. Theoretical Claims: Yes, Prop 3.1 (the only claim) is straightforward. Experimental Designs Or Analyses: * The analyses all appear proper and valid. * The authors use a conservative estimate of the improvement in performance from Step-DAD over DAD, which lends additional credibility to the results. Supplementary Material: Yes, I reviewed the full supplementary material (which is extensive and helpful). Relation To Broader Scientific Literature: * Combining the "best of both worlds" from traditional and policy-based BED where "online" computation during the experiment is permissible is a very natural thing to do and makes a lot of sense.
* The idea isn't exactly groundbreaking given the parallels with Step-Static policies or MPC (see below), but is executed solidly in the paper. * As this extension didn't require a lot of new technical innovations, the primary contribution of the work is therefore to set up the problem and demonstrate the performance in the empirical evaluation, which was executed well. * The Step-DAD setup is reminiscent of Model Predictive Control (MPC), which in a similar fashion re-optimizes a sequence of control inputs for the remaining experimental horizon based on a model of the process and the observations obtained so far. The main difference is that while MPC optimizes input trajectories, Step-DAD optimizes a full policy - in that sense, MPC is the control-theory analogue of the Step-Static policy. This approach is sufficiently related that this connection warrants discussion in the paper. Essential References Not Discussed: N/A (I have limited overview of the relevant literature) Other Strengths And Weaknesses: * The proposed Step-DAD method is a natural evolution from traditional and fully amortized approaches. The empirical results convincingly demonstrate the benefits of Step-DAD - while those are intuitive and expected, there is still substantial value in confirming this in a well-designed evaluation. * My main question that remains unanswered is exactly how feasible this re-training of the policy is in practical settings. * The examples provided don't have a notion of cost or duration of obtaining a measurement, and while the studies do evaluate different and multiple steps at which to re-train the policy, this seems somewhat artificial. Why not re-train the policy at each step? It would be highly illustrative and helpful to have some real-world examples with specific budget and time considerations to understand how Step-DAD would be used in practice.
* From a practical perspective, it appears substantially simpler to deploy a trained (and possibly inference-optimized) policy network to perform BED - e.g. this could easily be done on edge or embedded devices. It seems less straightforward to run a full training setup for re-training the policy network in such a setting. This doesn't make it infeasible or devalue the methodology more generally, but I feel these practical aspects should be discussed as well. Other Comments Or Suggestions: * It would be interesting to further explore the notion of a "compute arbitrage" between investing in training a high-quality policy offline vs. using a simpler policy network, investing less compute upfront, and instead recovering the loss in performance by re-training the policy online. Some of this is contained in the results of Fig 2, though that only considers the number of training steps, not the complexity of the policy network. * In Section 6.1, you note that "the performance advantage of Step-DAD over DAD appears to be most pronounced when fine-tuning occurs just past the midpoint of the experiment, that is for τ = 6,7 or 8.". At this point this is a bit of reading the tea leaves - this is much clearer from the Hyperbolic Temporal Discounting example in Sec 6.3. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you kindly for your helpful review. > The Step-DAD setup is reminiscent of Model Predictive Control (MPC)... This approach is sufficiently related that this connection warrants discussion in the paper. Thank you for drawing this perceptive connection and we agree there are nice analogues between the approaches examined here and the Model Predictive Control literature. We will happily add references and discussion on this to the paper. >My main question that remains unanswered is how feasible this re-training of the policy is in practical settings Thank you for raising this important and practical point. We will add results on precise wall-clock times and additional discussion on when we expect the retraining to be feasible, including some real-world example applications and practical issues with, e.g., running on embedded devices. In short, we find that helpful retuning can usually be achieved in as little as a minute, so the retraining can be done whenever such time can be justified between one or more experiment iterations. To characterize things more precisely, we have conducted a new ablation of performance vs amount of time spent to refine the policy, the results of which can be found here: https://tinyurl.com/EIG-wall-time (anonymous). The charts demonstrate all computation is on the order of minutes and there is an initial increase in EIG with increasing wall time before a plateauing. Note that we implemented importance sampling and the varying percentage of inference corresponds to the number of samples drawn from the proposal (100% = 20,000 samples). For each row, the initial inference cost is a fixed cost and increasing finetuning steps is what increases the wall time. >Why not re-train the policy at each step? There is absolutely nothing wrong with this if sufficient computational budget is available and one may well often do so in practice. 
The main reasons we did not do this in our experiments were a) to keep evaluation costs down (noting that we are running many seeds over lots of different possible true theta, which won’t be needed when deploying Step-DAD in practice) and b) we tended to see quite quickly diminishing returns for conducting multiple retrainings in practice. To more explicitly demonstrate these diminishing returns in the paper itself, we have run a new ablation for the location finding experiment where we increase the number of times that the policy is refined. The results, which can be found here: https://tinyurl.com/eig-interventions (anonymous), demonstrate the diminishing-returns behaviour, with Total EIG roughly flat beyond 2 intervention steps. Once again note that the varying percentage of inference corresponds to the number of samples generated from the importance sampling proposal (100% = 20,000 samples). > It would be interesting to further explore the notion of a "compute arbitrage" between investing into training a high quality policy offline vs. using a simpler policy network and investing less compute upfront and instead recover the loss in performance from that by re-training the policy online. Some of this is contained in the results of Fig 2, though that only considers the number of training steps, not the complexity of the policy network. Thank you for the insightful suggestion. We hope that the new plots above provide further insight into this compute arbitrage, beyond what was already provided in Figure 2. Further investigating this trade-off with simpler policy networks instead of fewer training steps would certainly also be interesting and we will look into adding this. One important thing to note, though, is that a noticeable part of the time to refine the network is in the inference rather than the network updates itself, so there will be limits to the gains which can be achieved from just simplifying the policy network itself.
--- Rebuttal Comment 1.1: Comment: Thanks for the additional ablation results, these are quite helpful. I will keep my score - this is a good and well-executed paper and deserves to be accepted, but for a score of 5 I would have wanted to see some more novel ideas or some harder technical challenges solved.
Trustworthy Machine Learning through Data-Specific Indistinguishability
Accept (poster)
Summary: This paper proposes a concept of (Gaussian) data-specific indistinguishability (DSI), which relaxes Input-Independent Indistinguishability (or differential privacy in many senses) by enforcing constraints only for a set of pre-defined input pairs instead of globally. Similar to what we already have for differential privacy, they derive the corresponding Gaussian mechanism, composition rules, and group versions for DSI, which consequently allows using Gaussian noise in parameter optimization (SGD) to ensure DSI of the entire scheme (much like DP-SGD does for differential privacy). They denote this as DSI deep learning. Experimentally, they show example applications of DSI deep learning in reducing memorization of fine-tuned language models and defending against backdoor attacks in federated learning. --- ### update after rebuttal While DSI is an interesting notion different from existing ones, I am not yet convinced if/how DSI is more preferable in the applications suggested by the authors. I can see how DSI can lead to better provable (robustness) bounds than differential privacy, but I think the bounds are still not meaningful practically in most cases, as $e^\epsilon$ can be large even for single-digit $\epsilon$ and there are established methods with better provable robustness (e.g. DPA or bagging or Finite Aggregation for provable backdoor/poisoning defenses). I am ok with it being accepted but will not champion for it based on the current materials provided, which still aligns with my initial recommendation of "3: Weak accept (i.e., leaning towards accept, but could also be rejected)". Claims And Evidence: Most claims are well supported, except: 1. In section 4 and in the conclusion, it is claimed that "DSI local-SGD outperforms DP-SGD in all cases" and "our initial results demonstrate a significant improvement in the utility-trust trade-off compared to traditional, such as DP-based, methods", which is not supported by existing evidence.
While Table 1 shows that DSI-Local-SGD can offer higher accuracy than DP-SGD **assuming the same $\epsilon$ and $\delta$**, using the same $\epsilon$ and $\delta$ for differential privacy and DSI does not indicate the same level of trust/privacy/indistinguishability, rendering the arguments unsupported. Naturally, this issue also affects many other claims made in section 4 involving the advantage of DSI (DSI-local SGD) over DP (DP-SGD). 2. Section 5 experiments: "for all the following experiments with (ε, δ) parameters considered DP-SGD always requires prohibitively large noise which fully destroys the performance." This is not well justified, due to the very same issue as the one above. Methods And Evaluation Criteria: Yes. Theoretical Claims: While I did not check the proofs line by line, I skimmed through the main theoretical results and they made sense. Experimental Designs Or Analyses: The experimental designs for (reducing) memorization in LLMs and (defending against) backdoor attacks in federated learning are ok, with the flaw that no baseline is compared to, including but not limited to DP-SGD. The primary issue in the existing experiments across different parts is that the submission assumes, without good justification, that differential privacy and the proposed DSI should be compared by simply assuming the same epsilon and delta. Supplementary Material: I mostly reviewed the experimental results in the appendix, with theoretical proofs only briefly read through. Relation To Broader Scientific Literature: I can see the proposed DSI framework is very closely related to differential privacy, with many tools/results most likely adapted from, or at least inspired by, existing, well-established results in differentially private deep learning research. Notably, the suggested applications of DSI, i.e. reducing memorization and mitigating backdoor attacks, also overlap with applications previously suggested for differential privacy.
Essential References Not Discussed: Not that I am aware of. Other Strengths And Weaknesses: [strength] The idea of relaxing the global requirement for Input-Independent Indistinguishability/differential privacy is very natural, and the authors put great effort into providing a fairly comprehensive set of theoretical tools/results, which is impressive. [weakness] The primary weakness is the limited investigation (whether theoretical or empirical) regarding the degree of trust/privacy/robustness provided by the proposed DSI notions, especially in comparison with differential privacy. Basically there are no clear indications regarding whether/why DSI is a more useful notion compared to differential privacy, for example. Other Comments Or Suggestions: minor: The running title of the submission is not updated. Questions For Authors: The primary question/concern I have is simply: What support is there indicating DSI is promising/a better tool than e.g. differential privacy? As there are no results aligning/comparing the effectiveness (i.e. the degree of trust/privacy/protection they provide) of DSI & differential privacy, this remains unclear and renders the assessment quite tricky. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your positive assessment and valuable suggestions. **1. Differential Trustworthiness (DT) vs. Differential Privacy (DP)** We fully agree with your insightful comment: “Data-Specific Indistinguishability (DSI) and Differential Privacy (DP) (or Input-Independent Indistinguishability (III)) **even under the same** parameters $(\epsilon, \delta)$ do **not** imply the same level of trust or privacy.” In Section 1.2 and our Response 1 to Reviewer 53pn, we explain why III is necessary for privacy—instance-based privacy statements or noise mechanisms themselves may already leak information. The goal of this paper is **not** to relax existing privacy frameworks (e.g., DP) or improve privacy solutions. Instead, we focus on a broad class of trust concepts that are **not** primarily concerned with preventing information leakage but rather with **controlling data usage** (e.g., copyright protection), **mitigating influence** (e.g., backdoor defense), and **governing model behavior** (e.g., memorization mitigation). A key insight from our Differential Trustworthiness (DT) framework is that if we can bound the distinguishability between a target model and a set of safe reference models—such as a model trained on clean data (for backdoor defense) or a dataset excluding memorization-sensitive information—we can obtain a probabilistic guarantee that the target model meets objective trustworthiness requirements. More importantly, when information leakage is not a concern, input independence is **not** necessary to derive DT guarantees. This explains why DP (which enforces III) is a **sufficient but not necessary** condition for DT. Our framework establishes an optimal noise mechanism to achieve DSI, significantly more efficient than **existing distinguishability control** methods in DP. 
But, in Section 1.2 including Footnote 2, we clarify that DSI noise mechanisms **cannot** be used to address privacy concerns or provide DP-like guarantees, and we will emphasize this further in our revision. **2. Efficiency of Indistinguishability Control** The goal of our experiments is indeed to compare the **efficiency of achieving indistinguishability** using DP's standard methods—**clipping (sensitivity control) and isotropic noise**—versus our DSI framework for black-box data processing. By targeting the same divergence bound (measured in Hockey-Stick divergence with $(\epsilon, \delta)$ parameters), we demonstrate the significant utility improvement enabled by our optimized noise. We will clarify and emphasize that we are **not** comparing against the privacy guarantees ensured by DP-SGD in the revision. **3. Baseline and Additional Experiments Comparing Isotropic and DSI Noise** We apologize for any misunderstanding regarding our notation. In all experiments, we include a baseline (see the column with $\epsilon = \infty$), which represents the original case **without** any perturbation. Additionally, we provide **new experiments in Exp 3 in the attachment (https://anonymous.4open.science/r/4722-1FD5)**. Specifically, in a backdoor defense scenario (see Appendix F.3) where training data is collected from $m$ sources (one of which is malicious), we compare the trust-utility tradeoff achieved via DP-SGD (gradient clipping + isotropic noise) versus the DSI framework. - 3-1) Matching Utility Performance: For $m$ varying from $10$ to $80$, we first adjust the variance of the isotropic noise to match the test accuracy obtained with the DSI framework under $(\epsilon=4, \delta=10^{-5})$ and $(\epsilon=8, \delta=10^{-5})$, respectively. We then compare the adversarial success rate (ASR), showing that, to produce the **same empirical test accuracy**, DSI outperforms isotropic noise (leading to lower ASR rates) in defending against backdoor attacks in most cases.
- 3-2) Evaluating Provable Indistinguishability: We also report the provable $(\epsilon, \delta)$ guarantees achievable by isotropic noise in this setup. As expected, given that the number of data sources $m$ is relatively small, the worst-case sensitivity $O(1/m)$ remains high. Even with $m=80$, the resulting bounds **$(\epsilon=114, \delta = 10^{-5})$** for low-frequency attacks [1] and **$(\epsilon=179, \delta = 10^{-5})$** for Blended attacks [2] are too weak to provide meaningful guarantees when matching the performance of the DSI framework with $(\epsilon=4, \delta=10^{-5})$. These results will be included in our revision, further supporting our claim that standard mechanisms in DP-SGD cannot produce usable divergence bounds in our experimental settings (especially for small datasets and large, high-dimensional models). Finally, we would greatly appreciate your feedback on whether we have adequately addressed your concerns. Please let us know if you have any additional questions. [1]. Zeng, Yi, et al. "Rethinking the backdoor attacks' triggers: A frequency perspective." [2]. Chen, Xinyun, et al. "Targeted backdoor attacks on deep learning systems using data poisoning." --- Rebuttal Comment 1.1: Comment: My primary concerns in the initial review can be summarized as: What support is there indicating DSI is promising/a better tool than e.g. differential privacy? There are no clear indications regarding whether/why DSI is a more useful notion compared to differential privacy, for example. Points 1 & 2 in the rebuttal are simply the differences between DSI and differential privacy rather than DSI's advantages, which means points 1 & 2 in the rebuttal are not really helpful in addressing these concerns. The results in the new experiments in Exp 3 in the attachment (point 3) are very related to the concerns, specifically comparing DSI and DP-SGD empirically as defenses.
However, I find it difficult to agree with the authors' interpretation of the results: >From rebuttal: "We then compare the adversarial success rate (ASR), showing to produce the same empirical test accuracy, DSI outperforms isotropic noise (leading to lower ASR rates) in defending against backdoor attacks in most cases." In fact, according to Exp 3, DP-SGD performs better than the proposed DSI framework in **2 out of 8** cases in Table 4 and **4 out of 8** cases in Table 5. These are certainly not conclusive/significant enough to claim an advantage of the DSI notion over differential privacy. To sum up, while DSI is an interesting notion different from existing ones, I am not yet convinced that DSI is preferable in the applications suggested by the authors. I am ok with it being accepted but will not champion for it based on the current materials provided, which still aligns with my initial recommendation of "3: Weak accept (i.e., leaning towards accept, but could also be rejected)". --- Reply to Comment 1.1.1: Comment: Thank you very much for your response and for sharing your concerns! We’d like to clarify some of the points and potential confusions in the following. 1. **Sharpened Utility-Trust Tradeoff with Data-Specific Indistinguishability (DSI)** The **key advantage**/contribution of DSI lies in its ability to provide **provable** trust guarantees while incurring significantly **lower utility overhead** compared to Differential Privacy (DP) or Input-Independent Indistinguishability. To illustrate this, consider the memorization mitigation of a particular data point $x_0$.
Both DP and DSI (with the reference set defined as the leave-one-out subset, i.e., removing $x_0$)—when having the same indistinguishability parameters $(\epsilon_0, \delta_0)$—actually lead to the **same** guarantee: if the probability that the reference model (trained without $x_0$) memorizes $x_0$ is $p_0$, then the probability that the target model (trained with $x_0$) memorizes $x_0$ is upper-bounded by $$e^{\epsilon_0}p_0 + \delta_0.$$ Please refer to our memorization experiments for more details (Section 5a-1: memorizing 6-digit strings; Section 5a-2: memorizing token subsequences). We also verify the tightness of the probabilistic bounds based on distinguishability (see left column, lines 405–408 and 432–435). Thanks to optimized noise and tighter accounting methods (composition and grouping), our theoretical bounds via DSI—**though still conservative**—substantially sharpen the tradeoff between trust (indistinguishability) and utility. For example, in LLM fine-tuning experiments on Wikitext-5 (a relatively small dataset), achieving $(\epsilon=8, \delta=10^{-5})$ with DP-SGD requires prohibitively large noise due to high model dimensionality (hundreds of millions of parameters), resulting in unacceptable performance (perplexity > 40). In contrast, DSI achieves meaningful guarantees with much better utility (perplexity between 16–22; see Table 2). Our additional experiments on backdoor defense further highlight DSI's efficiency for **provable distinguishability control**. As shown in Tables 3 and 4 in the attachment: to match the utility of DSI, the $(\epsilon, \delta)$ values that can be ensured by DP-SGD can be **hundreds of times larger**, which is too weak to produce any meaningful guarantees. Conversely, to enforce a reasonable single-digit $\epsilon$ using DP-SGD, test accuracy on CIFAR-10 drops to ~20%, compared to ~70% using DSI. 2.
**Additional Operational Advantages of DSI** Beyond efficiency, DSI offers several practical advantages over DP: - a) **Black-box Processing and Customized Budgets**: DSI can be applied in black-box settings, and supports differentiated distinguishability budgets across multiple references. In contrast, DP requires white-box sensitivity analysis and can only capture the worst case. The flexibility of DSI allows, for example, assigning different $(\epsilon,\delta)$ budgets to control contributions from multiple data sources—useful in scenarios like copyright or contribution attribution, which DP cannot directly support. - b) **Modeling More General Differential Trust**: DSI is not necessarily restricted to leave-one-out reference sets, as considered in DP. The reference sets in DSI can be very general. We provide a possible use case: Imagine multiple companies train separate LLMs using their own data and algorithms. For privacy considerations, they all want to ensure the generated responses to an arbitrary query from their models do **not** reveal any **unique** information contained in their training data. A DSI solution can be adding an optimized noise to the (semantic embedding of each) response on a given query, which mitigates their differences while maximally preserving their common patterns or information. - c) **One-Way Divergence**: As mentioned in Section 3.2(i), DSI only requires one-way divergence control, while DP necessitates two-way worst-case analysis, which is more complex and less flexible [1]. 3. **On Interpretation of New Experimental Results** We apologize for any unintended implications of the added experiments on backdoor defense. We faithfully report the empirical defense performance of both DSI and DP noises as a complement. However, as noted, our main focus is always on the efficiency of achieving **provable** trust/indistinguishability and we do **not** plan to claim any empirical superiority of the DSI noise itself.
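As a quick numerical illustration of the guarantee discussed in point 1 of this rebuttal (the values of $p_0$, $\epsilon$, and $\delta$ below are made up for illustration and are not taken from the paper):

```python
import math

def dsi_memorization_bound(p0, eps, delta):
    # If the reference model (trained without x0) memorizes x0 with
    # probability p0, and the target and reference models are
    # (eps, delta)-indistinguishable, then
    # P[target memorizes x0] <= e^eps * p0 + delta.
    return math.exp(eps) * p0 + delta

p0, delta = 1e-6, 1e-5  # hypothetical reference probability and delta
for eps in (1, 4, 8):
    print(f"eps={eps}: bound={dsi_memorization_bound(p0, eps, delta):.2e}")
```

Note how quickly the $e^{\epsilon}$ factor inflates the bound even for single-digit $\epsilon$, which is the practical concern the reviewer raises about such guarantees.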
Finally, thank you once again for your thoughtful feedback. To the best of our knowledge, DSI is the first attempt to **systematically** unify a wide range of differential trust concepts and efficiently build **provable** guarantees. The combination of optimized DSI noise and improved accounting methods meaningfully bridges the **gap between theory and practice** in cutting-edge trustworthy AI research. We would deeply appreciate your support on this new research direction. [1] Zhu, Yuqing, and Yu-Xiang Wang. "Poission subsampled rényi differential privacy."
Summary: The paper aims at combining various privacy-preserving mechanisms for machine learning into a unified framework, for instance reducing memorization, providing copyright protection, differential privacy, and so on. The main mathematical technique is Data-Specific Indistinguishability, which makes the privacy protection dependent on the data (training data) and its restrictions instead of the worst-case bounds in DP. They propose an algorithm to achieve this and discuss its properties like composition, post-processing, etc. Claims And Evidence: The main claim is providing a DSI procedure which can encompass various privacy-preserving mechanisms into one definition, and more specifically making it data-dependent. Section 3 discusses the DSI Gaussian mechanism, which adds Gaussian noise as a way of providing privacy and provides data-dependent results, i.e., removing the data-independence assumption used in DP. In Section 3.2 the paper discusses various properties and shows how the proposed method satisfies them. In Algorithm 2, the paper proposes a framework for optimizing deep learning models with black-box optimizers by modifying the gradients in a similar spirit to DP-SGD. Methods And Evaluation Criteria: The proposed method is relevant and interesting; however, for the experiments / baselines, it appears that the paper does not compare against the right baselines. For instance, there have been several papers on data-dependent DP which exploit subspaces to improve the utility of the underlying privacy mechanism. However, the only experiment / baseline was DP-SGD. Theoretically, it seems to be a relaxation of DP-SGD where the gradient clipping is bypassed as part of the definition, and the addition of noise is data-dependent. In the conclusion the authors mention that their approach is significantly better than DP; however, the two are aimed at doing different things.
To make a fair comparison, it is necessary to compare against the right papers. Some relevant baselines: https://arxiv.org/pdf/2311.14632 https://arxiv.org/pdf/2210.00036 https://proceedings.mlr.press/v202/bu23a/bu23a.pdf https://arxiv.org/abs/2212.00328 https://arxiv.org/pdf/2203.11481 Theoretical Claims: The definitions and theorems seem to be sound. Experimental Designs Or Analyses: I think the baselines used are not up to the mark, and it is not ideal to compare only against DP-SGD. There are works on subspace identification in DP, where we can minimize the damage caused by clipping and Gaussian noise. The major concern I have is that the related works have not been correctly used to build upon. There have been plenty of works on improving DP. Here are a few examples: https://arxiv.org/pdf/2007.03813 https://cdn.aaai.org/ojs/20315/20315-13-24328-1-2-20220628.pdf https://arxiv.org/abs/1707.07708 https://arxiv.org/pdf/2203.11481 Supplementary Material: I went over the supplementary material. Relation To Broader Scientific Literature: The paper aims to provide a framework for differential trustworthiness and to provide a data-dependent definition for it. It is well known in the privacy community that DP implies reduced memorization, copyright protection (using NAF), unlearning, and robustness to membership attacks. However, it is difficult to make DP work in practice at scale, as the amount of noise scales rapidly with size, and thus models trained with DP have reduced utility. The paper aims to relax this and provide a simple procedure where we add Gaussian noise tailored to the training data. While the aim and direction of the paper are interesting, I think there are a lot of closely related works which do something similar.
Also, the aim is at providing differential trustworthiness; however, it is eventually compared against DP-SGD. Essential References Not Discussed: Some essential references which are missing: Per-instance DP: - https://arxiv.org/abs/1707.07708 - https://arxiv.org/abs/2111.02281 - https://openreview.net/pdf?id=ESt7ECoWpn Subspace-based DP: - https://arxiv.org/abs/2108.11527 - https://www.math.uci.edu/~rvershyn/papers/hsvz-subspaceprivacy.pdf Public-Private DP: - https://arxiv.org/pdf/2306.15056 - https://arxiv.org/pdf/2203.11481 - https://proceedings.mlr.press/v202/nasr23a/nasr23a.pdf PAC Privacy: - https://arxiv.org/abs/2210.03458 - https://arxiv.org/abs/2312.01201 Data-dependent DP: - https://arxiv.org/pdf/1905.12813 - https://papers.nips.cc/paper_files/paper/2018/hash/9a0ee0a9e7a42d2d69b8f86b3a0756b1-Abstract.html Other Strengths And Weaknesses: Strengths: - The paper is well written, and the theory is easy to parse and grounded. - The problem is very interesting to work on, as it corresponds to reducing multiple sub-problems in privacy-preserving machine learning to a single optimization problem. - Using data-dependent anisotropic Gaussian noise for privacy preservation is intuitive, and in general, having data-dependent privacy is the way of the future. - The proposed approach seems to be more relevant for unlearning and obfuscation than for privacy in general, whose definition can vary a lot depending on the application. Weaknesses: - The proposed methods are similar to existing methods in the DP literature, so the novelty of the proposed approach appears to be low. - The baselines considered are weak, and thus it would be useful if they could be improved. Other Comments Or Suggestions: I recommend the authors frame the paper as an unlearning / obfuscation paper rather than differential trustworthiness, because privacy in general is too broad, and there is a lot of work on relaxing DP, which makes the novelty of the proposed approach low.
Also, there is no limitations section in the paper. One trivial limitation I can see is that the proposed method has no worst-case bounds, which can be of utmost importance in privacy. I encourage the authors to add a section on the limitations of the proposed method. Questions For Authors: I think it would be great if the authors provide a detailed comparison against methods like per-instance DP, DP in the public-private setting, and PAC-privacy. The current method seems to be very correlated with existing methods, and thus it is essential to make clear what the major contribution is that differs from past works. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for your positive assessment and helpful suggestions. **1. Relationship Between Privacy and Differential Trustworthiness (DT)** We fully agree with your insightful comment that (differential) privacy is recognized as a stronger guarantee, which, as a **sufficient** condition, can produce many other (differential) trust guarantees (e.g., reducing memorization and protecting copyright). However, privacy is costly (we also acknowledge your point that "privacy" is a broad concept; here, we specifically use "privacy" to refer to confidentiality protection). This motivates us to explore more efficient solutions to address “weaker” trust guarantees—those aimed at controlling the influence of training data on models without necessarily preventing data leakage. Strictly speaking, privacy is **not** directly comparable to DT (memorization, backdoors, copyright). One key insight we highlight (see Sections 1.1 and 1.2) is that privacy based on **Input-Independent Indistinguishability (III)** forms a sufficient **but not necessary** solution: when the goal is not to prevent information leakage, input independence is **unnecessary**, and technically only an indistinguishability guarantee is needed to build provable trust. Thus, we propose and justify a relaxed version, Data-Specific Indistinguishability (DSI), and construct the optimal DSI noise mechanism. Due to space constraints, please refer to our Response 1 to Reviewer 53pn for more details on why input independence is necessary for privacy and how DSI compares to per-sample/individual Differential Privacy (DP). **2. Comparison to Distinguishability Control Tools rather than DP Definition** Our key motivation and contribution are **not** to improve or relax existing DP frameworks but rather to develop better methods for constructing indistinguishability for DT guarantees. As demonstrated above, DSI is strictly weaker than III and is also incomparable to DP guarantees.
What our experiments actually target is the **efficiency of different mechanisms in achieving indistinguishability**. As the baseline, the classic method to control distinguishability is through **clipping (sensitivity control) combined with isotropic noise**, as widely explored in DP literature. DP-SGD is a representative. To compare, we propose a new **anisotropic** noise mechanism to determine the minimal noise necessary for specific indistinguishability. Our framework also enables black-box algorithm analysis without requiring sensitivity control, thereby eliminating clipping bias. Furthermore, we establish accounting methods, such as composition (Theorem 3.4) and grouping (Lemma 3.5), to more tightly convert indistinguishability guarantees into probabilistic DT guarantees. In summary, our baseline comparison focuses on the efficiency of existing DP methods in achieving the required indistinguishability or statistical divergence bounds **rather than their privacy guarantees themselves**. To clarify, we will revise claims such as "comparison with DP-SGD" to "comparison with clipping + isotropic noise methods in DP-SGD." **3. Comparison to other Works** - PAC Privacy [1]: PAC Privacy leverages secret entropy and models privacy risk by an adversary’s posterior success rate in recovering the secret. Different from examining the correlation between secrets and leakage from a privacy perspective, DT and DSI do **not** assume or rely on any input distribution. Instead, we optimize noise based on reference safe models. - Data-Dependent DP: Sticking to classical DP definitions, [2] utilizes public knowledge of graph model structures to optimize privacy budget allocation across multiple releases. However, the noise mechanism for each release remains the standard Laplace mechanism. - Sub-space and Public-private DP: Assisted by public data, existing works have considered optimizing clipping strategies, subspace projection, or data augmentation. 
Still, all existing approaches apply standard **isotropic** noise. **In Exp2 (https://anonymous.4open.science/r/4722-1FD5), we include a more detailed comparison** and show that **even without** assuming public data, DSI-SGD outperforms all prior benchmarks training on CIFAR10 from scratch. In addition, public data tricks can also save DSI noise; e.g., mixing training data with public samples [3] can reduce the divergence between the target and reference models. **4. Limitations** We will expand the discussion on limitations. As you noted, when both provable privacy and DT guarantees are required simultaneously, the weaker DSI is not applicable and III remains the only known method. Finally, please do not hesitate to let us know if we have addressed your concerns or you have additional questions. [1] PAC Privacy: Automatic Privacy Measurement and Control of Data Processing [2] Data-Dependent Differentially Private Parameter Learning for Directed Graphical Model [3] DP-Mix: Mixup-based Data Augmentation for Differentially Private Learning
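The efficiency contrast discussed in this rebuttal, clipping plus isotropic noise (as in DP-SGD) versus adding noise only along the observed target/reference difference, can be sketched as below. All names, dimensions, scales, and the one-dimensional difference subspace are hypothetical illustrations; this is not the paper's Algorithm 2 and carries no formal guarantee by itself:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50

# Hypothetical per-step gradients for a target and a reference run that
# differ in only 2 coordinates (illustrative numbers).
grad_target = rng.normal(size=d)
grad_reference = grad_target.copy()
grad_reference[:2] += 1.0
diff = grad_target - grad_reference          # difference lives in 2 coords

sigma = 2.0

# (a) DP-SGD-style baseline: clip to norm C, then add isotropic Gaussian
#     noise of scale sigma*C in every coordinate (worst-case sensitivity).
C = 1.0
def clip(g, C):
    return g * min(1.0, C / np.linalg.norm(g))

iso_noise = rng.normal(scale=sigma * C, size=d)
release_iso = clip(grad_target, C) + iso_noise

# (b) Anisotropic sketch: noise only along the observed difference
#     direction, leaving the orthogonal complement untouched.
u = diff / np.linalg.norm(diff)
aniso_noise = rng.normal(scale=sigma * np.linalg.norm(diff)) * u
release_aniso = grad_target + aniso_noise

# Utility cost: isotropic noise perturbs all d coordinates, while the
# anisotropic sketch perturbs a single direction.
print(np.linalg.norm(iso_noise)**2, np.linalg.norm(aniso_noise)**2)
```

The design point is that when only a low-dimensional subspace distinguishes the two runs, noise outside that subspace buys no indistinguishability but still destroys utility.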
Summary: This paper proposes a unified framework for Differential Trustworthiness (DT), which models trust-related concerns in machine learning algorithms, such as memorization, data poisoning, and copyright issues. The framework aims to regulate the divergence between the outputs of a model trained on a target dataset and those trained on reference datasets, ensuring trustworthy model behavior. Specifically, it introduces the concept of Data-Specific Indistinguishability (DSI) and its implementation through a Gaussian noise mechanism to mitigate differential effects and protect sensitive information. The paper also provides a thorough exploration of the theoretical properties, algorithmic solutions, and practical applications of DSI, particularly in areas like large language models and computer vision, addressing challenges such as privacy, model robustness, and utility trade-offs. Claims And Evidence: 1. The writing of the abstract is somewhat difficult to understand and should be revised and expanded according to the logic of the introduction. 2. The proposed method shows significant theoretical contributions, but questions arise regarding its practical applications (e.g., defending against backdoor attacks). How does it differ from existing noise-based methods, such as those proposed in [1] and other works? 3. Experiments conducted on large language models are convincing. However, why are copyright-related experiments not included? [1] NCL: Textual Backdoor Defense Using Noise-augmented Contrastive Learning, ICASSP 2023 Methods And Evaluation Criteria: The DSI framework is novel, and its application to memorization, data poisoning, and copyright concerns is meaningful. Theoretical Claims: The theory appears to be sound. Experimental Designs Or Analyses: See Claims and Evidence, point 3.
Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: Perhaps works related to memorization, data poisoning, and copyright issues concerning noise should be cited. Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your positive assessment and insightful questions. **1. Comparison to other Noise-based Solutions** We appreciate your references to prior works that apply noise for trustworthy machine learning. Existing noise-based solutions typically face two key challenges: **a) High utility overhead** and **b) Lack of provable guarantees**. As a representative of (a) and the primary baseline in our paper, Differential Privacy (DP) mechanisms can be used to ensure provable indistinguishability, which can further imply other trust guarantees such as memorization prevention and unlearning. However, ensuring a meaningful indistinguishability bound (i.e., small security parameters $\epsilon, \delta$) usually requires adding prohibitively large isotropic DP noise, which is well known to impose a significant utility cost. On the other hand, a long line of empirical work explores noise-based defenses **without** formal guarantees. For example, [1] proposes adding noise to training data as a general defense against unknown textual backdoor attacks; [2] introduces randomized smoothing, which augments training samples with noisy versions to improve the adversarial robustness of trained models; [3] adds small noise to gradients during training to resist membership inference attacks. While these approaches demonstrate strong empirical success, they lack theoretical bounds on how much noise is needed to provably prevent attacks or ensure trust guarantees. Our Data-Specific Indistinguishability (DSI) noise framework overcomes these limitations by determining the **minimal necessary** noise to ensure **provable** trustworthiness guarantees while maintaining low utility overhead. The key idea can be summarized below. - First, given an algorithm $\mathcal{F}$, we identify a safe reference set $R$ whose output $\mathcal{F}(R)$ is highly likely to satisfy the desired trust properties.
For example, if $\mathcal{F}$ is a standard deep learning algorithm and $R$ is a clean dataset, then the trained model $\mathcal{F}(R)$ is likely robust to backdoor attacks. - We then control distinguishability by adding minimal noise to reduce the statistical divergence between the target model and the reference model. This provably ensures that the target model has a high probability of satisfying the desired trustworthiness properties (see Lemma 2.3 and Section 3.2(i)). More importantly, unlike DP, which adds noise isotropically, DSI selectively adds noise only in the necessary directions to minimize utility overhead. DSI also enjoys operational benefits: it enables black-box algorithm analysis, and the noise only needs to be added to the final output of a data processing procedure. **2. Additional Experiments on Copyright/Contribution** We have added a set of experiments for copyright or contribution control using our DSI framework. Please refer to **Experiment 1 in the attachment (https://anonymous.4open.science/r/4722-1FD5).** In this experiment, we finetune a stable diffusion model (v1-4 from https://huggingface.co/docs/diffusers/v0.8.0/en/training/text2image) on a collection $U$ of 425 paintings from 10 artists. We select the reference sets $R_i$ as $U$ excluding one painting, i.e., leave-one-out subsets. Thus, the $(\epsilon, \delta)$ DSI parameters characterize the **per-sample contribution** to the trained model. In the figure, we present: (1) the original painting, (2) the generated image from the pre-trained diffusion model **before** fine-tuning, (3) the generated image **after fine-tuning without noise**, and (4) the generated images **after fine-tuning with DSI noise** under different indistinguishability budgets ($\epsilon$). From the plot, it is easy to observe that with a larger $\epsilon$, the fine-tuned diffusion model learns the art style better and the generated pictures look more similar to the training data.
In practice, the DSI parameters can be used to quantify and determine the contribution of each sample or data source, which provides economic solutions to data usage in collaborative learning. Finally, we highly appreciate if you could let us know whether we addressed your concerns or you have additional questions. [1] Zhai, Shengfang, et al. "NCL: Textual Backdoor Defense Using Noise-Augmented Contrastive Learning." [2] Cohen, Jeremy, Elan Rosenfeld, and Zico Kolter. "Certified Adversarial Robustness via Randomized Smoothing." [3] Aerni, Michael, Jie Zhang, and Florian Tramèr. "Evaluations of Machine Learning Privacy Defenses Are Misleading."
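The reasoning in this rebuttal, that indistinguishability from a safe reference transfers the reference's trust properties to the target, can be sketched with the standard $(\epsilon, \delta)$ probability-transfer inequality. This is an illustrative generic sketch, not a reproduction of the paper's Lemma 2.3:

```python
import math

def trust_transfer_bound(p_ref_bad, eps, delta):
    """If Pr[target in O] <= exp(eps) * Pr[reference in O] + delta holds for
    every event O (one-way (eps, delta)-indistinguishability), then the
    target's probability of any 'bad' (trust-violating) event is bounded
    in terms of the reference's probability of the same event."""
    return min(1.0, math.exp(eps) * p_ref_bad + delta)

# Suppose the reference model violates the trust property with probability 1%
# (i.e., it is 99% safe). Smaller budgets give bounds closer to that 1%.
p_ref_bad = 0.01
for eps in (0.1, 0.5, 1.0):
    print(eps, trust_transfer_bound(p_ref_bad, eps, delta=1e-5))
```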
Summary: The idea of DSI is equivalent to per-instance privacy (see "Y.-X. Wang. Per-instance differential privacy. Journal of Privacy and Confidentiality," or the equivalent individual privacy of "V. Feldman and T. Zrnic. Individual privacy accounting via a Rényi filter. Advances in Neural Information Processing Systems") when the reference is between two datasets and removing a datapoint. The case of two arbitrary datasets was also explored in "Thudi, Anvith, et al. Gradients look alike: Sensitivity is often overestimated in DP-SGD." None of these existing and related works are mentioned in the paper, and differentiating from them is recommended. On the technical aspects, unlike the past work, the paper notes that when focusing on just the Gaussian mechanism (and not the sampled Gaussian mechanism, which could complicate the analysis and was analyzed in the aforementioned prior work), one can optimize the returned noise vector given the subspace of differences. This seems like the main novel contribution. On the composition theorem, the method may be improved with the expectation-based composition theorem of "Gradients look alike: Sensitivity is often overestimated in DP-SGD," which would seem to imply it is enough for each step to satisfy a $\gamma$ constraint. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes Experimental Designs Or Analyses: Yes; it may be better to use more recent baselines, e.g., from https://arxiv.org/pdf/2204.13650 Supplementary Material: Yes Relation To Broader Scientific Literature: Some related work is not mentioned; see above. After rebuttal: I think prior work captured some of what is discussed in the paper. Conceptually, it was already discussed that input dependence is ok for per-instance privacy for certain applications (e.g., see the unlearning and memorization implications from Thudi et al.). Operationally, I understand the novelty and the setting -- optimizing noise for the specific datasets.
I feel the connection to prior work and the equivalences need to be further explored. Essential References Not Discussed: See above Other Strengths And Weaknesses: - Other Comments Or Suggestions: - Questions For Authors: Please compare to the papers mentioned above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your valuable comments and suggestions! **1. Comparison with (Per-Sample/Individual) Differential Privacy (DP)** We believe there are at least three fundamental differences between Differential Trustworthiness (DT) and Differential Privacy (DP). 1-1) First, on **motivation**, though both DT and DP leverage distinguishability or statistical divergence, they target very different problems. Distinguishability measures the closeness of two distributions, which can be interpreted in two ways: a) It is hard to distinguish the source of a randomly drawn sample; b) The behavior of two random variables, $a \sim A$ and $b \sim B$, is similar in probability: $\Pr(a \in O) \approx \Pr(b \in O)$ for any set $O$. Privacy (confidentiality protection) typically adopts a) to model preventing adversaries from recovering secrets from leakage. If the leakages from different secrets are indistinguishable, no useful information is revealed to an adversary. In contrast, DT adopts b), using the divergence between a target and a set of safe references to characterize the probability that an algorithm’s output satisfies certain trust properties. Imagine, for a set of reference training datasets $R_1, R_2, \dots, R_m$, that each trained model $\mathcal{F}(R_i)$ has a 99\% probability of not memorizing sensitive information $q_i$; then the divergence between $\mathcal{F}(U)$ and each $\mathcal{F}(R_i)$ provides a bound on the probability that $\mathcal{F}(U)$ simultaneously avoids memorizing all $q_i$. 1-2) **Conceptually**, DT justifies the role of data dependence in trust-preserving mechanisms, whereas this **remains a challenge in privacy**. If a specific sensitive data point $x_0$ becomes a parameter in a privacy guarantee (e.g., achieving $(\epsilon, \delta)$-DP for the membership of $x_0$), this statement itself already leaks information about $x_0$.
Section 1.2 discusses why input independence is essential for privacy and why most DP mechanisms rely on worst-case sensitivity. Since DP mechanisms calibrate noise based on the worst case, regular (average-case) data points may enjoy stronger privacy guarantees—motivating prior work on per-sample [1,2] or individual privacy [3]. However, formally quantifying this average-case amplification from the provable, global security parameters remains an **open** question. Existing works [1,2,3] can only estimate per-sample/individual privacy loss, which itself is sensitive and cannot be disclosed. A key insight from DT is that many trust concerns focus on the use (e.g., copyright) or influence (e.g., backdoor) of public data in training a model, or on governing models (e.g., memorization) to prevent certain behaviors, and it suffices to ensure indistinguishability with respect to safe reference models. More importantly, unlike privacy-preserving operations, when leakage is not a primary concern, input independence is **not** necessary, and thus we propose the more efficient DSI. Another distinction is that DT only requires one-way divergence—specifically, the $f$-divergence from the target output $\mathcal{F}(U)$ to the reference outputs $\mathcal{F}(R_i)$. In contrast, DP treats two adjacent datasets $X$ and $X'$ symmetrically, requiring two-way divergence to prevent leakage. See Section 3.2(i) for further details. 1-3) **Operationally**, we demonstrate how to determine the optimal anisotropic noise to achieve the required DSI guarantees. However, isotropic noise in prior works [1,2,3] cannot provide **controllable** per-sample/individual privacy loss. Moreover, our composition theorem (Theorem 3.4) is tight, whereas the composition approach in [2] involves complex expectation-based estimates, and [3] still relies on worst-case individual guarantees. We will incorporate those comparisons in our revision. **2. Comparison with State-of-the-art Results** Please refer to Table 1.
We **have** compared with the DP-SGD benchmarks (De et al., 2022, https://arxiv.org/pdf/2204.13650) you suggested. By optimizing DSI noise, we achieve a **7-10\%** improvement in test accuracy in all cases with varying indistinguishability budgets. **3. Subsampling Amplification** We appreciate your insightful comments and acknowledge that our current results do not exploit subsampling-based randomness, which could further enhance the trust-utility tradeoff—particularly in applications like SGD. We highlight this as a promising future direction in Section 6. But it is also noteworthy that even **without** subsampling amplification, DSI mechanisms already significantly outperform the best-known DP methods that have incorporated subsampling (Table 1). Finally, we highly appreciate if you could let us know whether we have addressed your concerns or you have additional questions. [1] Y.-X. Wang. Per-instance differential privacy. [2] V. Feldman and T. Zrnic. Individual privacy accounting via a Rényi filter. [3] Thudi, Anvith, et al. "Gradients look alike: Sensitivity is often overestimated in DP-SGD."
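The composition and accounting discussion above can be grounded with the simplest baseline rule. Theorem 3.4 itself is not reproduced in this thread; the sketch below only shows basic sequential composition of $(\epsilon, \delta)$ guarantees, which tighter accountants (advanced composition, divergence-based accounting, or the paper's own theorem) improve upon:

```python
def basic_composition(budgets):
    """Basic sequential composition: a sequence of (eps_i, delta_i)
    indistinguishability guarantees composes to (sum eps_i, sum delta_i).
    This is the loosest valid accounting; tighter accountants trade a
    small delta increase for a much smaller total epsilon."""
    eps = sum(e for e, _ in budgets)
    delta = sum(d for _, d in budgets)
    return eps, delta

# Ten steps, each with a (0.1, 1e-6) guarantee.
eps_total, delta_total = basic_composition([(0.1, 1e-6)] * 10)
print(eps_total, delta_total)
```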
Variational Learning of Fractional Posteriors
Accept (poster)
Summary: The authors introduce a novel variational inference method based on fractional posteriors, parameterized by a single scalar γ∈(0,1). This approach generalizes Bayesian variational inference by tempering the likelihood term, leading to fractional posteriors that provide improved calibration and robustness. It extends to hierarchical models and shows applicability in both analytic and empirical scenarios, including Gaussian mixture models (GMMs) and variational autoencoders (VAEs). The main contributions include derivations of novel variational lower bounds based on Hölder’s inequality, analytic gradient expressions, and empirical demonstrations showing improved calibration and generative modeling performance over conventional methods such as ELBO. ## update after rebuttal I would like to thank the authors and the other reviewers for the discussion. I keep my score, as explained in my comments. Claims And Evidence: Most claims are supported through theoretical derivations and by limited empirical results. The authors convincingly show the benefit of fractional posteriors for uncertainty calibration in GMMs and generative modeling with VAEs. However, some claims about robustness to model misspecification and improved alignment with priors are insufficiently supported by the empirical evaluations provided. Specifically, the paper claims robust performance improvement, but the supporting evidence is limited to simplified experiments that might not generalize to more complex scenarios. Methods And Evaluation Criteria: The methods and evaluation criteria are mostly appropriate. The use of simple benchmarks like MNIST and Gaussian mixtures is effective for demonstrating conceptual advantages, though it somewhat limits the impact. For the calibration study on the GMM, the chosen metric (credible intervals and coverage probabilities) is appropriate.
However, broader applicability could have been shown with additional, more complex datasets or larger-scale problems to demonstrate practical relevance clearly. Theoretical Claims: The main theoretical derivations involving the lower bounds via Hölder’s inequality (Equations (1) and (2)) appear mathematically sound. However, the proof that the fractional posterior exactly corresponds to an optimal lower bound (Section 2.1) should be elaborated further to highlight clearly any assumptions required. Additionally, the limits as γ approaches 1 (recovering ELBO) are adequately proven, though readers might benefit from more intuitive explanations (which are indeed present in the supplementary materials). Experimental Designs Or Analyses: The empirical designs have notable issues. The calibration experiments with Gaussian mixtures provide insights into fractional posteriors' theoretical properties; however, only two-component mixtures are explored, which significantly limits the generalizability of results to more complex, realistic scenarios. The linear regressions used to select optimal γ values (Rℓ and Rκ) lack sufficient theoretical justification—why linear regression should capture the relationship between coverage intervals and fractional parameters? In VAE experiments, it is unclear whether the observed improvements in generative quality generalize beyond the simple MNIST and Fashion-MNIST datasets, or the limited dimensionality (two-dimensional latent space). Experiments involving higher-dimensional latent spaces or more complex datasets are needed to substantiate the broader claims about generative performance improvements. Supplementary Material: I reviewed the supplementary material referenced (Sections A, B, and C) containing technical derivations and additional figures. These materials support the main claims, but should be better integrated into the main text to enhance readability and comprehension. 
In particular, section C.4 clearly demonstrates areas of the prior (explored via interpolation) where the samples are noisy (Figures 3,6). Given the very simple datasets (MNIST, fashion-MNIST), it is not clear how well the advantages of the proposed method will scale to more complex and higher dimensional data. Relation To Broader Scientific Literature: The paper situates its contributions clearly within the context of existing literature on fractional posteriors, variational inference (VI), and related methods (e.g., β-VAE, importance-weighted ELBO, and general fractional posterior methods). It explicitly connects its novel bounds with existing concepts like PAC-Bayes and generalized variational inference (Knoblauch et al., 2022). However, the paper downplays related work that explores tempering or regularization in posterior inference (e.g., mitigating posterior collapse), which could contextualize the novelty more comprehensively. Essential References Not Discussed: Several relevant approaches are noticeably absent. Examples: * Recent advances in "cold posteriors" (e.g., "How Good is the Bayes Posterior in Deep Neural Networks Really?" by Wenzel et al.) share conceptual similarities with fractional posteriors in controlling posterior calibration but are not discussed adequately. * "A Simple Baseline for Bayesian Uncertainty in Deep Learning" by Maddox et al. which propose Stochastic Weight Averaging (SWA) approach that also deal with tempered posteriors and could offer valuable comparisons. Also many papers on mitigating posterior collapse are missing, such papers present an alternative to improving sample quality, especially when combined with learning an empirical prior post-training. Examples: * "Preventing Posterior Collapse with delta-VAEs" by Razavi et al. * "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework" by Higgins et al. * "Variational Lossy Autoencoder" by Chen et al. 
Inclusion of such works would clarify the positioning of this paper's contributions relative to established posterior inference approaches and methods known for calibration. Other Strengths And Weaknesses: Strengths: * Clearly derived and theoretically justified new variational lower bounds. * Novelty in combining fractional posterior methods with variational inference. * Empirical demonstration that fractional posteriors improve generative tasks in VAEs. Weaknesses: * Experiments limited in complexity (simple datasets, low-dimensional settings). * Lack of comprehensive evaluation on scalability or applicability to large-scale practical tasks. * Limited sensitivity analysis regarding the γ hyperparameter. Other Comments Or Suggestions: * Figure 1 - clarity could be improved by adding explanation to each of the labels (a-d) explicitly. (b,c) looks a lot like latent interpolation, is that the case? * Table 2 - captions and discussions should explicitly clarify the implications of differences between empirical evidence bounds and variational approximations. If "Test using ELBO are solely for diagnostics to understand the learnt posteriors using the same metrics" then can you please explain what do we learn from it or alternatively remove it? * Line 069 - should be "(KL, α → 1)" and not "(KL, α = 1)" * Line 252 - what is the error for the MC estimates of Zc and Zd? Can you provide any bound? How will large error affect the learned parameters of a VAE model? Questions For Authors: * How sensitive are results to different choices of γ? Would a systematic exploration (beyond the limited set used) reveal significantly different findings regarding calibration or performance? * Why did you not test your approach on higher-dimensional latent spaces or more complex data? Would you expect similar benefits in such scenarios, and if not, why? * Can you clarify the conditions under which the introduced hierarchical approach (section 2.3) avoids the degeneracy problem of ELBO? 
An explicit analysis would substantially strengthen the theoretical contribution.
* The various approximations might limit the practicality of the proposed method to simpler datasets and lower dimensions. Have you tested the MC approximation error for latent spaces with more dimensions and larger, more complex images?
* What is the advantage of the proposed method compared to many other methods that address posterior collapse in VAEs?
* Learning an empirical prior after training can also provide high-quality samples (assuming posterior collapse was mitigated) and might be a simpler and more practical solution. Can you explain why the proposed method would be beneficial here?
* Section C.4 clearly demonstrates areas of the prior (explored via interpolation) where the samples are noisy (Figures 3, 6). Given the very simple datasets (MNIST, fashion-MNIST), it is not clear how well the advantages of the proposed method will scale to more complex and higher dimensional data. Have you tried the proposed method with such datasets (e.g., ImageNet or even CIFAR10)?

Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal:

> some claims … insufficiently supported by the empirical

Lines 12 (right) and 87 (right) are general remarks on robustness of fractional posteriors, citing others. Statement on line 44 (right) follows these, but may be misunderstood. We will change to “an alternative to”. Alignment with the prior is substantiated by Table 2 (last col), Fig 4&5, and Fig 7 & Table 5.

### Experimental

> only two-component mixtures

Beyond 2 components, the marginal likelihood landscape is complex (line 328, left, citation therein). The main text uses 2 components for clarity and focus. Beyond this, we need more exposition to separate the confounding effects due to the complex landscape. Nonetheless, C.1.2 gives results for 4 components.

> linear regressions … lack sufficient theoretical justification

Indeed, it’s more justified to regress against $\ell^{-2}$ (C.1.1). This gives better interval lengths, but worse coverages. Other strategies are possible (last sent. in sec. 6).

### Comments (numbered)

1. Yes. See caption (also Fig 4 of Kingma & Welling, 2022). marginal-CDF$\in [0.1, 0.9]$
2. All results are based on the variational approximation optimised on the train set. The empirical error-bars are for 10 runs, a different seed each to initialise the NN. With “ELBO/div”, we know that the fractional posteriors are closer to the prior for smaller $\gamma$ in the same sense of KL. This can't be seen from “Objective/div” because the Rényi divergence differs with $\gamma$. See also line 358 (left)
3. Ok
4. We haven't systematically investigated. Table 2 hints at stability from the relatively small error bars (3 x stddev) by using 100 samples per datum and 10 runs (details in C.3).

### Questions (numbered)

1. Sensitivity analysis of $\gamma$ must be done within posterior families (sec 2.2). Current results show the quality of the bound and the closeness of the approx. fractional posterior to the prior.
2. We will acknowledge this experimental limitation in an additional section.
The current 2d latent space allows posterior illustrations (Fig 4 to 6). We use NNs from [Ruthotto & Haber, 2021], with only 88,837 parameters for the 2d latent space, so results can be obtained on free platforms (sec C.5). We need richer NNs for more complex data. We are also careful and use the correct likelihood: continuous Bernoulli. Otherwise, bounds in Table 2 are not convincing. We expect similar benefits compared with existing objectives such as ELBO or $\beta$-VAE’s, when the goal is either better evidence bounds, or more stable computation of fractional posteriors, or both.
3. Degeneracy isn't necessary for optimality; but it isn't avoided and is also a solution, as stated in A.4. There, Eqs 3&4 say that if we explicitly enforce $q(u)\neq0$ at multiple locations, optimality is still possible. So, we can design $q(u)$ to be non-degenerate if we wish. Further work can be on crafting training dynamics towards non-degeneracy.
4. For FashionMNIST, using a 4d latent space and the following $\gamma$s for our primary bound:
* 1 (ELBO): train/test bound=1220/1200; FID=58.3
* $10^{-3}$: train/test bound=1231/1219; FID=56.8
* $10^{-5}$: train/test bound=1231/1219; FID=55.8

So for 4d, the bound for smaller $\gamma$ gives better results. Overall, 4d results are better than 2d (see C.4 for existing values), as expected for a richer model with more parameters. We haven't analysed the MC approx error.
5. We don't directly address posterior collapse. If we must, we might seem to be encouraging collapse, but it's more complicated and demands more discussion. Figure 4(d) gives a prelude, where the fractional posteriors as a whole aggregate towards the prior, but the posterior for every data point is different.
6. With respect to image generation from the prior (sec 5.3), we learn a decoder that operates well with the prior, which can be simple. At the same time and within the same objective, the NNs are optimised with respect to the fractional posteriors that are close to the prior.
Learning the empirical prior post hoc requires its model to be sufficiently flexible to approx the posteriors, and a separate procedure to learn its parameters. So we believe our proposal is simpler. Which is more practical depends on context within a wider ML/AI system — e.g., if posteriors exist from previous intensive training/tuning, the empirical prior approach might be preferred.
7. The samples are noisy also because the NN architectures for the encoder & decoder are simple, with 89K parameters for the 2d latent space. For CIFAR10, we add one CNN layer (total 3) each for the encoder & decoder, totalling 530K parameters for a 32d latent space. Number of train/eval samples reduced to 16/128; epochs reduced to 300 (C.3 gives current settings). Results using SageMaker StudioLab:
* $\gamma=1$ (ELBO): train/test bound=1229/1172; FID=141
* $\gamma=10^{-5}$: train/test bound=1238/1184; FID=135

Our bound for the $10^{-5}$-posterior gives better results. Better results need more complex architectures, e.g., ResNet-1001 with 10.2M parameters.

---

Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. I will keep the score.
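The interpolation behaviour the rebuttal appeals to (point 5: fractional posteriors aggregating towards the prior) can be made concrete in the textbook conjugate-Gaussian case, where the tempered posterior $p_\gamma(z\mid\mathcal{D})\propto p(\mathcal{D}\mid z)^\gamma\, p(z)$ stays Gaussian. A minimal sketch, assuming a Gaussian-mean model with known noise variance; the helper name and numbers are illustrative, not from the paper:

```python
import numpy as np

def fractional_posterior(data, sigma2, mu0, tau2, gamma):
    """Tempered posterior p_gamma(z|D) ∝ p(D|z)^gamma p(z) for a Gaussian
    mean z with likelihood x_i ~ N(z, sigma2) and prior z ~ N(mu0, tau2).
    Tempering simply scales the likelihood precision by gamma."""
    n = len(data)
    prec = 1.0 / tau2 + gamma * n / sigma2
    mean = (mu0 / tau2 + gamma * np.sum(data) / sigma2) / prec
    return mean, 1.0 / prec  # posterior mean and variance

data = np.array([2.1, 1.9, 2.4, 2.0])
for g in (1.0, 0.1, 1e-3):
    m, v = fractional_posterior(data, sigma2=1.0, mu0=0.0, tau2=1.0, gamma=g)
    print(f"gamma={g:g}: mean={m:.3f}, var={v:.3f}")
```

At $\gamma=1$ this is the usual conjugate update; as $\gamma\to0$ the mean and variance revert to the prior's $(0, 1)$, matching the "aggregate towards the prior" description of Figure 4(d).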
Summary: The paper introduces a novel one-parameter variational objective that generalizes the standard evidence lower bound (ELBO) by enabling the estimation of fractional posteriors. By leveraging Hölder’s inequality, the authors derive a new bound $\mathcal{L}_{\gamma}$, which recovers the conventional ELBO in the limit as γ → 1. The framework is further extended to hierarchical and Bayes posteriors, and the paper provides analytical gradient derivations for cases such as exponential family and mixture models, as well as empirical studies. Experiments on Gaussian mixture models and variational autoencoders (VAEs) demonstrate that fractional posteriors yield better-calibrated uncertainties and improve generative performance, particularly in aligning the VAE decoder with the prior distribution. **Updates after Rebuttal** Thanks for the rebuttal. I think most of my concerns have been addressed. I will keep my scores. Claims And Evidence: Yes, all claims are supported by clear theoretical and empirical evidence. Methods And Evaluation Criteria: Yes Theoretical Claims: I have checked all theoretical proofs, and I found no overt mistakes in the proofs. Experimental Designs Or Analyses: Yes, I have checked the soundness and validity of experimental designs, and they are sound. Supplementary Material: Yes, all. Relation To Broader Scientific Literature: The proposed approach is closely linked to the density estimation literature by tempering the likelihood to improve robustness and convergence properties. For instance, Friel and Pettitt (2008) introduced power posteriors to facilitate marginal likelihood estimation in mixture models—a concept that underpins the use of fractional likelihoods in density estimation. Similarly, O’Hagan (1995) showed that raising the likelihood to a fractional power can yield more objective Bayes factors, particularly valuable in nonparametric settings where standard models are prone to overfitting.
In the context of variational autoencoders (Kingma & Welling, 2014), the paper’s findings relate to efforts such as β-VAE (Higgins et al., 2017) and IWAE (Domke & Sheldon, 2018), where modifying the ELBO has been proposed to improve disentanglement or tighten the variational bound.
Essential References Not Discussed: The paper covers most of the essential works in this field.
Other Strengths And Weaknesses: Strengths:
* The experiments on VAEs, including improvements in evidence bounds and better alignment of decoder distributions for generative tasks, indicate potential practical benefits in generative modeling and beyond.

Weaknesses:
* The method involves nested integrations and Monte Carlo estimates (especially in the hierarchical and semi-implicit formulations), which may lead to high computational overhead. This can become particularly problematic in high-dimensional latent spaces or when dealing with complex models.

Other Comments Or Suggestions: N.A.
Questions For Authors: How would the proposed fractional posterior work when a generation framework other than VAE is applied? Say a flow-based framework.
Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal:

### Relation To Broader Scientific Literature

Thank you. We will try our best to incorporate these into the paper.

### Other Strengths And Weaknesses

> The method involves nested integrations and Monte Carlo estimates (especially in the hierarchical and semi-implicit formulations), which may lead to high computational overhead. This can become particularly problematic in high-dimensional latent spaces or when dealing with complex models.

As in our reply to Reviewer H3En, the triple integral in section 2.3.2 can be reduced to a double integral (see section A.3 and last row of Table 3). The remaining double integrals are necessary since we have a hierarchical construction, involving $\boldsymbol{u}$ and then $\boldsymbol{z}|\boldsymbol{u}$. Fortunately, depending on the application, one may not need such a construction if we set $\gamma$ to be small so that the approximating family can be simple, as discussed in section 2.2.

### Questions For Authors

We have done preliminary work on such extensions, but it is not trivial. We leave such matters for future work to keep the current paper focused. Subject to space constraints, we may suggest approaches in relation to other frameworks in an additional future work or discussion section, especially on pitfalls to avoid based on our experience.

---

Rebuttal Comment 1.1: Comment: Thanks for the rebuttal. I think most of my concerns have been addressed. I will keep my scores.
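The cost concern can be made concrete: a hierarchical construction over $\boldsymbol{u}$ and $\boldsymbol{z}|\boldsymbol{u}$ forces a nested (double) Monte Carlo estimate whose cost is the product of the two sample sizes. A toy sketch (the hierarchy and target function are made up for illustration and are unrelated to the paper's model):

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_mc(f, n_u, n_z):
    """Nested Monte Carlo estimate of E_{q(u)} E_{q(z|u)}[f(z)] for the toy
    hierarchy u ~ N(0, 1), z|u ~ N(u, 0.5^2).  The estimator needs
    n_u * n_z draws of z, which is why collapsing one level of
    integration (triple -> double) matters in practice."""
    u = rng.standard_normal(n_u)                             # outer samples
    z = u[:, None] + 0.5 * rng.standard_normal((n_u, n_z))   # inner samples
    return f(z).mean()

est = nested_mc(lambda z: z ** 2, n_u=2000, n_z=50)
# analytically E[z^2] = Var(u) + Var(z|u) = 1 + 0.25 = 1.25
```

The estimate converges to 1.25 as both sample sizes grow, but the $n_u \times n_z$ evaluation count grows multiplicatively, which is the overhead the reviewer flags.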
Summary: The paper proposes a variational objective targeting fractional posteriors based on the Hölder inequality instead of the more typical Jensen's inequality approach. Furthermore, for hierarchical models, minor variations of the variational objective are considered. The utility of the approach is demonstrated through coordinate-ascent variational inference of mixture models and gradient-based variational expectation maximization of deep latent variable models.

## update after rebuttal

Through intense discussion with the authors, the points of disagreement have been sufficiently identified, and the authors have promised to address them. As such, I am now in favor of the paper being accepted. However, the rather large number of changes promised by the authors makes me think that an additional round of review would be beneficial. As such, I will lean toward borderline.

Claims And Evidence: The contributions of the paper are a bit confounded, and this is my main concern. The main claim appears to be that the paper proposes a variational objective that can handle fractional posteriors. What I don't get is that one can use plain old evidence lower bound maximization for fractional posteriors too: what is wrong with using the ELBO
$$ \mathcal{L}_{\mathrm{ELBO}}^{\gamma}(q) = \int \big\{ \gamma \log p(\mathcal{D} \mid z) + \log p(z) \big\} q(\mathrm{d}z) + \mathbb{H}(q) $$
for approximating a $\gamma$-fractional posterior? Therefore, I do not find that being able to handle fractional posteriors can be claimed as a technical contribution. On the other hand, it does appear that the paper is proposing a novel variational objective based on Hölder's inequality. But it is unclear what the claimed benefits of this new objective are. Is this statistically better in any sense than naively using the ELBO as above? Is there a computational benefit? Works that propose new divergences need to articulate what is uniquely new or desirable about the proposed divergence.
For instance, the initial claim about $\alpha$- and $\chi^2$-divergences [1,2] was that they reduce the mode-seeking behavior of the exclusive KL divergence. The long-standing argument for the exclusive KL divergence is that it is computationally convenient to optimize. Unless there is some serious misunderstanding on my end, I think the paper should reconsider its positioning and refine its technical claims.
1. Li, Yingzhen, and Richard E. Turner. "Rényi divergence variational inference." Advances in Neural Information Processing Systems 29 (2016).
2. Dieng, Adji Bousso, et al. "Variational Inference via $\chi$ Upper Bound Minimization." Advances in Neural Information Processing Systems 30 (2017).

Methods And Evaluation Criteria: The same comments apply here. On a technical level, however, I would also like to point out that it is unclear what the objective $\mathcal{L}_{\gamma}$ is doing. It is fair to assume that it is a surrogate for some divergence, but which one? Is the proposed objective equivalent to this divergence up to a constant (the ELBO is exactly the same as the KL divergence up to a constant), or is it a strict surrogate? Furthermore, the discussion in Section 2.2 needs to be more nuanced. Selecting $\gamma$ is a specification of the *model*, not the *inference algorithm*. That is, the submission claims that if a variational approximation can "approximate only certain fractional posteriors well, then the corresponding $\gamma$s would be optimal." Optimal in what sense? This is essentially modifying the *model* so that some variational approximation better approximates it, but there is no reason to believe that this will be statistically sound. The model that is best approximated could be terrible. Instead, previous works took different approaches, from maximizing the predictive density [1] to taking the ABC perspective [2], which come with accompanying theoretical analyses.
1. Grünwald, Peter, and Thijs Van Ommen.
"Inconsistency of Bayesian inference for misspecified linear models, and a proposal for repairing it." Bayesian Analysis (2017): 1069-1103.
2. Miller, Jeffrey W., and David B. Dunson. "Robust Bayesian inference via coarsening." Journal of the American Statistical Association (2019).

Theoretical Claims: n/a
Experimental Designs Or Analyses: For reasons mentioned above, the evaluation is not entirely sound. The paper is proposing a variational objective and corresponding inference algorithms. Thus, the evaluation should focus on evaluating the following: "Given a fixed value of $\gamma$, how accurate is the obtained variational approximation?" The experiments are not appropriate for doing this. The experiments in Table 1, for example, are not doing this: coverage confounds the properties of the model and the algorithm, so they are not informative about the performance of the algorithm/variational objective. Furthermore, since the model changes depending on $\gamma$, the variational bound values between different values of $\gamma$ cannot be compared, and individual bound values are not interpretable. Similar arguments apply to Table 2.
Supplementary Material: n/a
Relation To Broader Scientific Literature: The paper proposes a new variational objective, which extends the existing literature on developing alternatives to the ELBO. Given the concerns above, it is unclear how the work is positioning itself within this context.
Essential References Not Discussed:
* Section 2.3: The objectives specialized to hierarchical models in Section 2.3 are reminiscent of the "locally-enhanced bounds" proposed in [1]. I recommend taking a look for connections.
* Line 228: The reparameterization gradient for variational inference was independently proposed by [2,3] as well.

1. Geffner, Tomas, and Justin Domke. "Variational inference with locally enhanced bounds for hierarchical models." arXiv preprint arXiv:2203.04432 (2022).
2. Titsias, Michalis, and Miguel Lázaro-Gredilla.
"Doubly stochastic variational Bayes for non-conjugate inference." International Conference on Machine Learning. PMLR, 2014.
3. Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. "Stochastic backpropagation and approximate inference in deep generative models." International Conference on Machine Learning. PMLR, 2014.

Other Strengths And Weaknesses: n/a
Other Comments Or Suggestions:
* The term "hierarchical posterior" has been used throughout. This is a bit unusual as a posterior may not have any notion of hierarchy, but a *model* can be hierarchical.
* Line 30 "high-variance estimators": Why is Roeder 2017 cited here? Roeder proposes a lower-variance estimator, so this doesn't appear to be the right citation here. Furthermore, the fact that coordinate descent can be used does not necessarily fix everything since it can only be used with mean-field conjugate families.
* Line 69-72 "The Kullback-Leibler divergence ... is the only case where the chain rule of conditional probability": I am not sure why this is relevant here.
* Line 94 "it is achieved without relying on PAC-Bayes or modifying the likelihood": PAC-Bayes bounds are generalization bounds and not about variational inference algorithms. (Alquier *et al.* 2016 establish generalization guarantees for variational posteriors.) What did the authors intend by this sentence?

Questions For Authors: n/a
Code Of Conduct: Affirmed. Overall Recommendation: 3
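For what it's worth, the tempered objective $\mathcal{L}_{\mathrm{ELBO}}^{\gamma}$ written out above can be checked numerically in a conjugate Gaussian-mean toy model: over Gaussian $q$, its maximiser is exactly the $\gamma$-fractional posterior, which is the point the rebuttal concedes. A hedged sketch (all names and numbers are illustrative):

```python
import numpy as np

data = np.array([1.2, 0.8, 1.5])   # toy observations, x_i ~ N(z, sigma2)
sigma2, mu0, tau2, gamma = 1.0, 0.0, 1.0, 0.3
n = len(data)

def tempered_elbo(m, s2):
    """gamma * E_q[log p(D|z)] + E_q[log p(z)] + H(q) for q = N(m, s2),
    all expectations in closed form for the Gaussian-mean model."""
    e_loglik = (-0.5 * n * np.log(2 * np.pi * sigma2)
                - (np.sum((data - m) ** 2) + n * s2) / (2 * sigma2))
    e_logprior = -0.5 * np.log(2 * np.pi * tau2) - ((m - mu0) ** 2 + s2) / (2 * tau2)
    entropy = 0.5 * np.log(2 * np.pi * np.e * s2)
    return gamma * e_loglik + e_logprior + entropy

# conjugate fractional posterior: precision and mean in closed form
prec = 1 / tau2 + gamma * n / sigma2
m_star = (mu0 / tau2 + gamma * np.sum(data) / sigma2) / prec

# grid-search the tempered ELBO over the variational mean
grid = np.linspace(-2.0, 3.0, 5001)
m_hat = grid[np.argmax([tempered_elbo(m, 1 / prec) for m in grid])]
```

`m_hat` agrees with `m_star` up to the grid spacing, i.e. maximising the reviewer's objective does target the fractional posterior; the open question in the thread is what is gained by a bound that also holds for the original evidence.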
Rebuttal 1: Rebuttal: We suspect misunderstandings. We ask for further clarifications below.

### Claims & Evidence

> contributions ... confounded

Our key contribution is a lower bound that _also_ approximates fractional posteriors. Having both at once is new in ML and statistics. Moreover, ELBO is a special case. So, we fill a gap between standard VI (ELBO) and fractional Bayesian inference. Fractional posteriors have been shown to be more robust (sec 1 2nd para), and have applications in calibration (sec 5.1). A lower bound allows _principled_ maximisation of the hyperparameters (ML-II), common in Bayesian ML (sec 5.2). Having both at once benefits generation from the VAE decoder (sec 5.3). We bring the bound in sec 2 to expts in sec 5 by developing complex posterior constructions (sec 2.3), parameter updates (sec 3) and MC estimates (sec 4).

> $L_{ELBO}^{\gamma}$

$L_{ELBO}^{\gamma}$ essentially changes the likelihood of the model, so it is an ELBO to that changed model, and may not lower bound the original model (depending on whether $p(D|z)>1$). In contrast, ours is a theoretical lower bound to the original model. It holds for all $\gamma\in(0,1)$ and all $q$. Nonetheless, the optimal solution of $L_{ELBO}^{\gamma}$ indeed approximates the fractional posterior of the original model. It's the same as $\beta$-VAE’s objective, after dividing by $\gamma$ (Table 3). If we only want a fractional posterior, then this suffices. If, in addition, we wish to estimate the hyperparameters of the model, as is common in ML, then our lower bound is effective (Table 2).

> statistically better in any sense than naively using the ELBO ...?

We show our bound to be computationally more stable and to give better results than $\beta$-VAE for FashionMNIST in C.4 (referenced in sec 6). We have not analysed properties such as convergence rates. Please clarify if we misunderstood the question.

> computational benefit?

No. Indeed, KL is mathematically more convenient.
Computationally, if implemented using sampling (e.g., for VAE), we swap the log and sum operations, include multiplicative and additive constants, and use more than one sample (line 807).

> Works that propose new divergences … what is uniquely new or desirable about the proposed divergence

We provide a variational objective that lower bounds evidence _and_ enables estimation of approx. fractional posteriors. This is _not_ fulfilled by $L_{ELBO}^{\gamma}$ (nor $\beta$-VAE).

### Methods & Evaluation

> ... a surrogate for some divergence, but which one? ... equivalent to this divergence up to a constant ... a strict surrogate?

Optimality is at the fractional posterior, so it's not a divergence to/from the Bayes posterior. We do not think it’s equivalent to any known divergence up to a constant (the evidence). We can define a divergence (from the fractional posterior) based on our bound: $D(q||(p(z),p(D|z),\gamma)) = L_{evd} - L_\gamma$, where the triple defines the target posterior. We can better answer if “strict surrogate” is defined.

> $\gamma$ is a specification of the model, not the inference

$\gamma$ is not a specification of the model. From a Bayesian viewpoint, the model is fully specified by likelihood $p(D|z)$ and prior $p(z)$. Given a data set, this fixes the exact marginal likelihood or data-evidence $p(D)$ (MacKay, 2003). One may compute $p(D)$ and $p(z|D)$ using sampling, such as MCMC. For complex models in ML, variational methods are developed (see 1st para of paper). We propose a new variational method that leads to approximate fractional posteriors. Our method has a parameter $\gamma$ for the inference, but it does not change the model. We admit there are schools other than Bayesian. In particular, in the regularisation community, the end objective is treated as the model, as the reviewer alludes to. To put our paper in the correct setting, it begins with “Exact Bayesian inference is ...”.

> Optimal in what sense?
In the sense of giving a tighter bound to the evidence (1st sent in para). We will reiterate at the end of para.

> modifying the model … statistically sound.

We emphasise that the model is fixed (given hyperparameters) and evidence is fixed (given data) in the Bayesian view. Our lower bound to the evidence is mathematically sound. It changes neither $p(D|z)$ nor $p(z)$; max wrt $q$ also changes neither. But max wrt $p(D|z)$ or $p(z)$ changes the model.

### Experiments

Our evaluation is sound: **In sec 5.1, the model is fixed** and the bounds are comparable for relative tightness to the same exact evidence. **In sec 5.2, the model changes** because we optimise the decoder, changing $p(D|z)$ (right para starting line 318). Here, the bounds are comparable for the quality of optimised models within the same family (i.e., same NN architecture) on the same data. We should have used the word “tighter” more carefully. We will update the abstract; 4th para in intro; last para in sec 2.2; intro to sec 5; 4th and 5th para in sec 5.2; and conclusion.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed response. I think I have a clearer understanding of the disagreements here.

> $\gamma$ is not a specification of the model.

Okay, let's use clearer terms; it is a specification of the *posterior*. Changing $\gamma$ changes the posterior. Now, if you are being orthodox Bayesian, there is no reason to change the posterior. The whole point of using fractional posteriors, however, is that you suspect the model is misspecified and therefore modify the posterior.

> $L_{ELBO}^{\gamma}$ essentially changes the likelihood of the model, so it is an ELBO to that changed model, and may not lower bound the original model (depends whether). In contrast, ours is a theoretical lower bound to the original model. It holds for all $\gamma \in (0, 1)$ and all $q$.
I think the authors should clarify whether they are thinking in terms of performing empirical Bayes/marginal likelihood maximization (optimizing parameters of the posterior) or variational inference (inferring $q$). In the perspective of VI, it doesn't matter whether there exists an upper bound or not; what matters is which divergence is being minimized with respect to what target. Here, the target is the fractional posterior, which changes with $\gamma$. In terms of marginal likelihood maximization, it is unusual that variational inference is performed against the fractional posterior, but the marginal likelihood is defined with the original posterior. If one believes in their model, it makes sense to do everything with the untempered posterior. If you're being post-Bayesian and don't believe your posterior, then everything should be done using the fractional posterior, and this could be done by maximizing the $L^{\gamma}_{\mathrm{ELBO}}$. What is the justification for this mix and match here?

> We provide a variational objective that lower bounds evidence and enables estimation of approx. fractional posteriors. This is not fulfilled by

By "confounding," I meant that those two things need to be evaluated separately.

> We can define a divergence (from the fractional posterior) based on our bound

Yes, please define this divergence so that it is clear what the variational inference part is actually doing.

> In the sense of giving a tighter bound to the evidence (1st sent in para). We will reiterate at the end of para.

In terms of marginal likelihood maximization, I can buy this. However, it must also be clarified that the variational approximation $q$ is now targeting a fractional posterior, where its $\gamma$ was not selected according to any notion of statistical optimality of the posterior but just to make the variational bound tighter.
Here is a summary of the points:
* If marginal likelihood maximization is the only ultimate goal, then I agree that the technique in the paper could yield a tighter bound. I agree that this is meaningful.
* The technique proposed in this paper has little to do with the goal of post-Bayesian works in fractional posteriors, since $\gamma$ is selected purely to make the variational bound tighter. Therefore, there is no reason that the usual goals of post-Bayesianism, like robustness against misspecification or calibration, will be fulfilled. This should be clarified in the paper, and the evaluation will also have to reflect this.

Do the authors concur with these points?

---

Reply to Comment 1.1.1: Comment:

> orthodox Bayesian … no reason to change the posterior. The whole point of using fractional posteriors … model is misspecified

Agree. Also, the approx Bayes posterior by ELBO is known to under-estimate the uncertainty, so here we actually want an approx fractional posterior (sec 1 para 2). Sec 5.1 is on this.

> clarify whether … empirical Bayes/marginal likelihood maximization (optimizing parameters of the posterior) or variational inference (inferring $q$)

We assume the 1st parenthesis should be about “the model”. Until and including sec 5.1, the paper is on inferring $q$ to max the lower bound to the evidence. In sec 3, $\theta$ denotes the parameters of $\tilde{q}$; and sec 4 follows from this. Sec 5.2 & 5.3 optimise the hyperparameters in $p(D|z)$ of the model, as we feel is expected in VAE expts. Sec 5.2 para 5 makes this clear. We will add more signposting, and also use the word “tighter” more carefully.

> doesn't matter ... an upper bound or not, what matters is which divergence is being minimized with respect to what target.
> Here, the target is the fractional posterior, which changes with $\gamma$

With reference to [Knoblauch et al., 2022] cited in our paper, we feel the reviewer has taken the _VI as constrained optimization_ view (sec 2.3.3 therein), while we take the _VI as log evidence bound_ view (sec 2.3.1 therein). We acknowledge both. See citations in sec 6 para 1 and in sec 1 para 1.

> marginal likelihood is defined with the original posterior

Marginal likelihood is wrt the prior.

> post-Bayesian and don't believe your posterior, then everything … done using the fractional posterior, … done by maximizing the $L_{ELBO}^\gamma$

Yes, $L_{ELBO}^\gamma$ is a straightforward way to achieve this. It obtains the same result as our bound if the approximating family includes the exact fractional posterior. If the approximating family does not include the exact posterior, then our bound also provides a meaningful quantification between different families and the optimal results therein. In the case of ELBO, access to such quantification has advanced ML, e.g., [Geffner and Domke, ICML 2022] cited by the reviewer.

> justification for this mix and match

* In sec 3.1.2, we can analyse the fractional posterior, which approaches the Bayes posterior as the dataset grows (so $\gamma\rightarrow1$).
* In sec 3.1.3, we use an approximate fractional posterior for the component means and an approximate Bayes posterior for the cluster assignment. Admittedly, we can also do this with some version of $L_{ELBO}^\gamma$, but in general the optimal posteriors will be different.
* While the focus in sec 5.1 is on calibration, the bounds could also be used for model comparison against a different prior for the component means — this we have not done to maintain the focus of the section.
* Sec 5.3 gives the case for learning the VAE decoder so that we can generate via the prior. Our work may inspire more applications (see reply to Reviewer SAc3).

> those two things … evaluated separately.
Sec 5.1 gives both tightness of the bounds and the quality of the fractional posteriors. While the results are from the same experiment, we _assess the fractional posterior and the bounds separately_. We haven’t drawn conclusions from one to the other — e.g., we haven’t said that because it’s a fractional posterior, it’s a better bound. _Sec 2.2 forbids this explicitly_. We will restate this in sec 5.1. In fact, Table 4 shows that smaller $\gamma$ can give worse bounds. We will highlight this in a para on bounds in sec 5.1 that was removed for space. We can also include the actual evidences for the simulated data for comparison. The same can be said for sec 5.2, though now the bounds are for the quality of the optimised models. We will reiterate that the two aspects are evaluated separately.

> define this divergence …

Will put this between sec 2.1 and 2.2.

> $q$ is now targeting a fractional posterior … not selected according to any notion of statistical optimality of the posterior but just to make the variational bound tighter.

This will be made explicit in sec 5.2. We add that sec 5.3 is a case for $\gamma$ to be small, though not for statistical optimality.

> Here is a summary of the points:

* Yes
* Post-Bayesian has many aspects; we only do fractional posteriors. We cite existing work on robustness of fractional posteriors for context — _our work is not about proving robustness_ (see reply to Reviewer LKGu). We will make this clearer. Sec 5.1 is on misspecification caused by variational inference (sec 6 last para; also Knoblauch et al., 2022 sec 2.4 pt ii). Some may disagree that this is misspecification — we can add this qualification. Here, $\gamma$ is chosen for calibration, not bounds.

An additional limitations sec can say:
1. We haven't evaluated cases where either the likelihood or the prior is misspecified.
2. We don't directly address the goals of post-Bayesianism. We rely on the works of others on such matters.
Summary: Classical variational inference (VI) often underestimates uncertainty, motivating research on generalized Bayesian inference. Nevertheless, the theoretical connections between generalized Bayesian inference bounds and the marginal likelihoods are only established approximately/asymptotically, hindering careful use in practice. This work shows that a family of generalized Bayesian inference bounds are lower bounds to the log marginals via an application of Hölder’s inequality. The bound is tight when the approximate posterior is a fractional posterior, which interpolates between the true posterior and the prior. There are two main applications of the bound. The calibration of the approximate posterior from such a bound can be adjusted by setting a parameter $\gamma$, and the bound can also be utilized for learning generative models such as VAEs. Hierarchical models are widely used in practical Bayesian inference. This work further derives two objectives with fractional posteriors for hierarchical models: an objective with structure and an objective that allows subsampling. There are also cases where components in the objectives can be derived analytically, and the work shows three of them. In the experiments, it is first shown that with a proper setting of $\gamma$, the fractional posteriors can achieve the correct coverage given a confidence level, while having an interval similar to that from ELBO. For MNIST image modeling problems, tuning $\gamma$ also improves the model's learning capability. On FashionMNIST, it is shown that the generation quality increases when $\gamma$ is reduced, while retaining the theoretical aspects of a variational lower bound.

## update after rebuttal

I was leaning towards acceptance for this work. I still think this work is worthy of acceptance after reading through other reviewers' comments. I keep my score, but do not increase it because I also foresee huge efforts in the revision.
Claims And Evidence: I think the theoretical claims and empirical evidence are strong. Methods And Evaluation Criteria: - The discussion of the methods is thorough and strong. In addition to the variational objectives, this work also provides recipes for stochastic optimization, as well as a second-step variational objective for when the first-step approximate posterior is not normalized. - For the methodology, only the derivation of $\mathcal{L}^{bh}$ is not clear to me. As in A.3, there are two possible objectives, but the main text uses the one with both $u$ and $u'$ in the integrand. I do not see why this one is chosen. - For a theoretical paper like this, I think the benchmark datasets are enough to support the claims. However, it would be much better if the comparison with $\beta$-VAE were also demonstrated in the experiments. Theoretical Claims: - It is neat to have a generalized Bayesian inference objective that is a lower bound to the log marginal. I think this contribution is strong enough to establish the work. - Section 3.1 contains three cases where components in the objective could be derived analytically. Given that modern Bayesian models can be written as probabilistic programs with program tracing and autodiffs, I suppose most of the derivations could be automated in practice. Experimental Designs Or Analyses: Most experiments are on the effects of changing $\gamma$ in the fractional posterior framework. The designs and analyses are sound to me. Supplementary Material: No. I did not check the derivations in the supplementary material. Relation To Broader Scientific Literature: From what I can see, this work extends the generalized variational bounds from Li & Turner, 2016 in the context of generalized Bayesian inference, and is heavily influenced by Yin & Zhou, 2018, for implicit approximate posteriors. Essential References Not Discussed: There is another line of work in generalized Bayesian inference with empirical losses and regularizers [1-3].
This work has similarly structured objectives to those works (though they do not produce tight lower bounds). I don't think they should be compared, but they are worth mentioning. [1] Masegosa, A. (2020). Learning under model misspecification: Applications to variational and ensemble methods. Advances in Neural Information Processing Systems, 33, 5479-5491. [2] Morningstar, W. R., Alemi, A., & Dillon, J. V. (2022). PACm-Bayes: Narrowing the empirical risk gap in the misspecified Bayesian regime. In International Conference on Artificial Intelligence and Statistics (pp. 8270-8298). PMLR. [3] Lai, J., & Yao, Y. (2024). Predictive variational inference: Learn the predictively optimal posterior distribution. arXiv preprint arXiv:2410.14843. Other Strengths And Weaknesses: No. Other Comments Or Suggestions: No. Questions For Authors: - Why is the objective with both $u$ and $u'$ chosen for $\mathcal{L}^{bh}$? Is there a practical obstacle that keeps the other from being implemented? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### Methods And Evaluation Criteria

> For the methodology, only the derivation of $\mathcal{L}^{bh}$ is not clear to me. As in A.3, there are two possible objectives, but the main text uses the one with both $\boldsymbol{u}$ and $\boldsymbol{u}'$ in the integrand. I do not see why this one is chosen.

This one was chosen because we had its experimental results at the time of submission. We now have experimental results for the alternate and simpler bound in A.3 — the empirical results are very similar (for example, the test objective for $\gamma=0.5$ is $1608.0\pm25.4$), and the implementation is simpler and does not involve triple integrals. We also have a proof showing that non-degeneracy of the implicit distributions is not necessary, similar to A.4.2. We will replace the currently chosen bound with the simpler bound in the main text. The current bound will be placed in the appendix. The essential arguments in the experimental section remain the same.

> For a theoretical paper like this, I think the benchmark datasets are enough to support the claims. However, it would be much better if the comparison with $\beta$-VAE is also demonstrated in the experiments.

The $\beta$-VAE is demonstrated in C.4 for generating images from the prior, and this is referenced from the main text in section 6 (line 409, right column). In addition to the parameters for $\beta$-VAE currently in C.4, we have additionally tried $\beta$ taking the values 5 and $10^2$, giving FIDs 78.4 and 99.1. The conclusions are the same. We will include these additional results in the final version. Subject to space constraints, we will move some of these results to the main text in the camera-ready version. Since $\beta$-VAE for fractional posteriors is provably less tight than the ELBO, we do not include the results in section 5.2/Table 2.

### Theoretical Claims

> Section 3.1 contains three cases where components in the objective could be derived analytically.
Given that modern Bayesian models can be written as probabilistic programs with program tracing and autodiffs, I suppose most of the derivations could be automated in practice.

Yes indeed, for the most part. In addition, we believe knowing the derivations and derived formulae has pedagogical value and can give rise to new algorithms and/or more efficient updates in future work. For example, autodiff will not be able to choose $1/\gamma = 1 + 1/n$ (section 3.1.2); nor is it able to decide to apply $\mathcal{L}_\gamma$ first on $q(\boldsymbol{u})$ and then the ELBO for $p(\boldsymbol{c})$ (section 3.1.3), and in this order. After these are determined, and perhaps with further simplification of the bounds and equations, autodiffs can be applied readily.

### Essential References Not Discussed:

We will mention [1]-[3] and relate them to our work.

### Questions For Authors:

Answered above.

---

Rebuttal Comment 1.1: Comment: Thank you for the clarifications and the additional experimental results. I will keep my score.
Revisiting Cooperative Off-Policy Multi-Agent Reinforcement Learning
Accept (poster)
Summary: This paper studies the extrapolation error in off-policy multi-agent reinforcement learning that is caused by the curse of multi-agents. To mitigate these errors, the paper proposes to focus on target estimation error (TEE). To further address the TEE, three different approaches are proposed: annealed multi-step bootstrapping, averaged TD target, and restricted action representation (RAR). Empirical simulations show that the proposed approaches yield promising results in comparison with their vanilla counterparts. Claims And Evidence: Yes. Methods And Evaluation Criteria: The proposed methods make sense. Theoretical Claims: The reviewer didn't check the correctness of the proofs. Experimental Designs Or Analyses: The reviewer didn't implement the pseudo code on a local machine to redo the experimental studies, but went over the simulation results. Supplementary Material: The reviewer went over some portions of the supplementary material, including Figures 11-13. Relation To Broader Scientific Literature: The paper proposed three different approaches for addressing the large-action-space issue in the multi-agent setting. These approaches, notably including restricted action representation, might provide inspiration for large-action-space problems. Essential References Not Discussed: The reviewer believes the related works are sufficient. Other Strengths And Weaknesses: Strength: The paper studied the large-action-space issues in multi-agent systems. By focusing on TEE, the paper proposed three different approaches, among which RAR is the most interesting to the reviewer. The empirical results show promising results for these approaches in comparison with their vanilla counterparts. Weakness: Please see the comments section. Other Comments Or Suggestions: 1. Some detailed explanations or analysis are needed, for example on why the RAR versions outperform their counterparts in Figures 4-6. 2.
Among the three approaches, it would be more beneficial to understand and compare the contributions of each individual approach. 3. It seems like some of the aspects studied in the paper even apply to the on-policy setting in MARL; will extrapolation error also be effectively mitigated by the proposed approaches? Questions For Authors: Please see the comment section. Code Of Conduct: Affirmed. Overall Recommendation: 4
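As background for the "curse of multi-agent" that this review thread invokes: the joint action space scales exponentially with the number of agents, which a trivial sketch (illustrative only, not from the paper) makes concrete:

```python
def joint_action_count(num_actions: int, num_agents: int) -> int:
    """Size of the joint action space when each of `num_agents` agents
    has `num_actions` discrete actions: |A|**n."""
    return num_actions ** num_agents

# 5 agents with 5 actions each give 5**5 = 3125 joint actions -- the
# same 3125 that appears in the MADDPG-RAR discussion below.
```

This exponential growth is what makes a joint Q-function increasingly reliant on extrapolation for joint actions never seen in the replay buffer.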
Rebuttal 1: Rebuttal: We are thankful for your time and effort in reviewing our paper, which has greatly helped us improve the quality of our paper. We were glad to hear that you found our proposed methods sensible and that our empirical results demonstrated promising improvements over baseline approaches. Below, we address the key concerns you raised.

> 1. Some detailed explanations or analysis are needed, for example on why RAR versions are outperforming their counterparts in Figure 4-6.

The superior performance of the RAR versions can be attributed to the reduction in Target Estimation Error (TEE) when the joint action space is mapped into a lower-dimensional space. To validate this, we conducted experiments with MADDPG-RAR using different joint action dimensions and report the corresponding TEE values in the following table:

| Action Dim \ Step | 0M | 1M | 2M | 3M | 4M | 5M |
| - | - | - | - | - | - | - |
| 4 | 0.13 | 0.09 | 0.09 | 0.07 | 0.05 | 0.04 |
| 16 | 0.17 | 0.11 | 0.10 | 0.08 | 0.06 | 0.06 |
| 64 | 0.26 | 0.10 | 0.09 | 0.10 | 0.08 | 0.09 |
| 256 | 0.27 | 0.15 | 0.11 | 0.09 | 0.09 | 0.08 |
| original MADDPG (3125) | 0.36 | 0.18 | 0.13 | 0.13 | 0.11 | 0.10 |

From these results, we observe that a lower action dimension leads to a smaller TEE. This occurs because reducing the joint action space mitigates extrapolation in the target Q-function. Intuitively, if the joint action is mapped into a single dimension (although suboptimal due to potential bias and increased optimality difference, as discussed in Section 3.1), the target Q-function would require no extrapolation even for unseen actions. By properly constraining the joint action dimension, we can significantly reduce extrapolation while maintaining sufficient expressiveness in the joint Q-function, ultimately improving performance.

Another key intuition is that joint Q-functions do not necessarily need to assign distinct values to every possible joint action. For example, consider two agents A and B:

- Joint action $a_1$: A moves toward B, B stays.
- Joint action $a_2$: A stays, B moves toward A.

If the task depends only on the relative distance between A and B, then $a_1$ and $a_2$ may be equivalent in terms of their Q-value. Now, assume:

- $a_1$ and $a_2$ are frequently observed in state $s_1$.
- $a_1$ is frequently observed in state $s_2$, but $a_2$ is not.

In an unrestricted joint Q-function, $Q(s_2,a_2)$ may be poorly estimated due to lack of experience. In contrast, with RAR, since the model has already learned in $s_1$ that $a_1$ and $a_2$ are equivalent, and since $Q(s_2,a_1)$ is well estimated, $Q(s_2,a_2)$ will also be well estimated. This further supports why RAR enhances performance.

> 2. Among the three approaches, it will be more beneficial to understand and compare the contributions of each individual approach.

Figure 7 (left) illustrates the contributions of Annealed Multi-Step Bootstrapping and Averaged TD Target to QMIX. To further clarify their impact, we provide additional results:

| algorithm \ map | zerg_5_vs_5 | zerg_10_vs_10 | zerg_10_vs_11 | zerg_20_vs_20 | protoss_5_vs_5 | terran_5_vs_5 | terran_10_vs_10 | terran_10_vs_11 |
| - | - | - | - | - | - | - | - | - |
| AEQMIX | 62.1 | 64.4 | 47.4 | 56.0 | 78.1 | 76.2 | 80.3 | 68.9 |
| EQMIX | 57.8 | 62.6 | 43.4 | 55.1 | 75.3 | 75.1 | 78.3 | 63.8 |
| QMIX | 40.4 | 45.0 | 26.5 | 33.1 | 69.5 | 64.4 | 66.6 | 40.7 |

These results indicate that both Annealed Multi-Step Bootstrapping and Averaged TD Target contribute to performance improvement. However, Averaged TD Target provides a greater benefit at the cost of additional network computations, whereas Annealed Multi-Step Bootstrapping is more computationally efficient.

The impact of the RAR techniques on performance is shown in Figures 4 and 8, where we compare MADDPG and QPLEX with and without RAR. Overall, these results reinforce that all three approaches contribute to performance gains.

> 3.
It seems like some of the aspects studied in the paper even apply to the on-policy setting in MARL; will extrapolation error also be effectively mitigated by the proposed approaches?

While some insights from our work could extend to on-policy MARL, extrapolation error is less of a concern in such settings because on-policy methods primarily use a value function (V-function) rather than a Q-function. Specifically:

- Annealed Multi-Step Bootstrapping: The on-policy equivalent is TD($\lambda$), which is already widely used. Unlike in off-policy RL, annealing $\lambda$ in TD($\lambda$) is unnecessary because it does not introduce bias as Q($\lambda$) does.
- Averaged TD Target: While applicable to on-policy RL, its effectiveness is reduced. The V-function is inherently easier to learn than the Q-function and is less affected by variance induced by the joint action space.
- Restricted Action Representation: This technique is not applicable to on-policy RL, as it specifically targets the joint action space, which is not used in V-function-based methods.

---

Rebuttal Comment 1.1: Comment: I would like to thank the authors for the clarification and good response. I have increased the score accordingly.
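The two target-side techniques discussed in this thread (annealed multi-step bootstrapping and the averaged TD target) compose naturally. Below is a minimal illustrative sketch, with hypothetical names and a deliberately simplified blending scheme, not the authors' implementation:

```python
import numpy as np

def annealed_averaged_target(rewards, gamma, lam, q_ensemble, s_next, a_next):
    """Sketch: blend an n-step return with a one-step target via weight
    `lam`, bootstrapping from an ensemble mean to reduce target variance.

    rewards    : list of rewards r_t, ..., r_{t+n-1} along the trajectory
    q_ensemble : list of M target Q-functions q(s, a) -> float
    lam        : blending weight; assumed annealed over training
    """
    n = len(rewards)
    # Averaged bootstrap value across the M target Q-functions.
    avg_q = float(np.mean([q(s_next, a_next) for q in q_ensemble]))
    # Discounted n-step return along the sampled trajectory.
    n_step = sum(gamma**k * r for k, r in enumerate(rewards))
    multi_step_target = n_step + gamma**n * avg_q
    one_step_target = rewards[0] + gamma * avg_q
    # Assumed annealing direction: start near lam=1 (trust real returns
    # while Q is unreliable), decay toward lam=0 (standard one-step).
    return lam * multi_step_target + (1.0 - lam) * one_step_target
```

With `lam=1` this is a pure n-step target; with `lam=0` it reduces to the ordinary one-step TD target, in both cases bootstrapping from the ensemble average rather than a single target network.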
Summary: The work studies the problem of overestimation and target estimation errors in off-policy multi-agent reinforcement learning (MARL). The work first outlines how action-value estimation in MARL often suffers from estimation errors as a consequence of the exponential growth of the joint action space. To substantiate the problem, the work decomposes the value error into three components and discusses how in particular target estimation error (TEE) can cause worse performance in MARL, and how error propagates in value decomposition algorithms. The work proposes three approaches to mitigate such target estimation error based on (1) multi-step target estimation, (2) computing averaged target values across an ensemble of value functions, and (3) projecting the large action space into a lower-dimensional discrete space to simplify learning action-value functions. The efficacy of these components applied to multiple MARL algorithms in FACMAC, MADDPG, QMIX, and QPLEX is demonstrated in empirical evaluations across several tasks of the SMAC, SMACv2 and GRF benchmarks. Lastly, the work presents further analysis and ablations showing the impact of each of the three novel proposed components. Claims And Evidence: Overall, the claims in this work are mostly well supported and convincing. However, the key claim that all three proposed techniques mitigate TEE (e.g. made in the Conclusion) is insufficiently supported. The evaluation clearly demonstrates that all three proposed techniques improve the performance of off-policy MARL algorithms, but only for annealed multi-step bootstrapping do the authors provide explicit analysis and show how this reduces TEE (Figure 9 left). Given the claims made, I would expect clear empirical evidence that averaged TD targets and restricted action representations reduce TEE. Methods And Evaluation Criteria: The proposed techniques are sensible and clearly motivated with the problem of target estimation error and value overestimation.
The evaluation is well conducted, systematic, and sufficiently extensive. Theoretical Claims: I went through the derivations and steps presented in the main paper and found these logical. I did not verify the proofs provided in Appendix B of the supplementary material. Experimental Designs Or Analyses: 1. Figure 9 (left) clearly shows the impact of the proposed $\lambda$ annealing and choice in general on the TEE. However, a similar visualisation is missing for the averaged TD target computation and restricted action representation. To support the claims made in this work, I would expect to see visualisations similar to Figure 9 (left) for e.g. AQMIX/ AMADDPG with different $M$, and for MADDPG/ FACMAC/ QPLEX with and without RAR. 2. Figure 7 (left) compares the win rates of QMIX, AEQMIX and different ablations of the AEQMIX algorithm. While EQMIX with varying $M$ all seem to significantly improve upon QMIX, no significant performance difference can be observed between EQMIX ($M=8$) and AEQMIX ($M=8$) as claimed in the text. Would the authors be able to show such an ablation, potentially for another task and/or aggregated across multiple tasks, to show that AEQMIX indeed performs better than EQMIX? 3. In Section 4.3 and Section 5, empirical evidence is provided that the restricted action representation improves the performance of the MADDPG and QPLEX algorithms. However, there is a lack of analysis to show how the RAR technique affects the target estimation error and what is being learned. In particular, I would suggest the following analysis: 1. MADDPG-RAR: Would the authors be able to visualise or otherwise provide intuition into the learned low-dimensional representation of the high-dimensional action space represented by function $g$? Which joint actions are mapped to the same representation and which joint actions are separated? 2. QPLEX-RAR: Figure 4 visualises the $\lambda$ values for QPLEX.
Would the authors be able to show similar values for QPLEX-RAR when applying the Sigmoid function? I know that these values would be within [0, 1] but it is not clear to me whether they would remain more stable or change as erratically as for QPLEX. 3. Impact on target estimation error: The work claims that all three proposed techniques (including RAR) mitigate TEE, so I would expect explicit analysis showing how MADDPG-RAR and QPLEX-RAR exhibit lower TEE than their vanilla counterparts. 4. Figure 1 nicely illustrates the challenge of learning action-value functions in MARL in tasks with many agents. How would AE- and RAR-extended algorithms perform in this illustration? Would the take-away still be that algorithms like MAPPO that do not rely on action-value functions are preferred for tasks with many agents, or do the proposed approaches bridge that gap? 5. When describing Figure 2, it is stated that the proportion of extrapolated values "is calculated based on the fraction of $(s, a')$ pairs in each update that are absent from the replay buffer". However, just because a state-action pair is absent from the replay buffer does not necessarily mean that it is extrapolated, since it could have been trained on earlier during training (unless the replay buffer is large enough to fit all training samples). Because of this discrepancy, I would expect a lower proportion of values to actually be extrapolated than shown in Figure 2 (a). Would the authors be able to compute the true proportions and update the respective Figure? Supplementary Material: I reviewed all but Appendix B of the supplementary material, which provides further experimental results, details and contextualisation within the literature. I would like to state that I believe the work should attempt to make space to discuss related work and literature (as presented in Appendix A) within the main work. Lastly, I believe that the legend of Figure 10 (b) within the supplementary material is misleading.
It refers to "double Q-learning" but appears to show the target computation from (14) for different sizes of the ensemble used for conservative value estimation. Double Q-learning typically refers to another approach [1, 2]. [1] van Hasselt, Hado. "Double Q-learning." Advances in Neural Information Processing Systems 23 (2010). [2] van Hasselt, Hado, Arthur Guez, and David Silver. "Deep Reinforcement Learning with Double Q-learning." AAAI (2016). Relation To Broader Scientific Literature: As stated above, I would have liked to see the discussion of related work in the main body of the paper and not as part of the supplementary material. I would encourage the authors to identify ways of fitting at least some of this within the main work. Essential References Not Discussed: I am not aware of any essential references that are not discussed. Other Strengths And Weaknesses: I would like to commend the authors on a well-structured and well-written paper. I enjoyed reading the work and believe that it naturally presents the problem, supports it with clear visualisations (e.g. Figure 1 and 2) to illustrate and provide evidence, provides supporting theory, and proposes conceptually simple resolutions that are grounded in existing literature. Other Comments Or Suggestions: I have no further suggestions or comments. Questions For Authors: 1. Given the claims made by this work, I would expect explicit evidence that shows how each of the proposed components (annealed multi-step bootstrapping, averaged TD targets, restricted action representations) reduces the TEE, but such evidence is only provided for annealed multi-step bootstrapping (Figure 9 left). I would strongly encourage the authors to provide such evidence for the other two techniques, and I will increase my score if this is done convincingly. 2.
Would the authors be able to visualise or otherwise provide intuition into the low-dimensional representation learned in MADDPG-RAR of the high-dimensional action space represented by function $g$? Which joint actions are mapped to the same representation and which joint actions are separated? 3. Would the authors be able to show $\lambda$ values for QPLEX-RAR when applying the sigmoid function as proposed, similar as shown in Figure 4 for QPLEX? 4. Figure 1 nicely illustrates the challenge of learning action-value functions in MARL in tasks with many agents. How would AE- and RAR-extended algorithms perform in this illustration? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback, especially for your detailed feedback on the evaluation of TEE and the impact of our proposed techniques. We were glad to hear that you found our work well structured and clearly motivated, and our empirical analysis systematic. Below, we address the key concerns you raised.

> 1. I would expect clear empirical evidence that averaged TD targets and restricted action representations reduce TEE.

For a good estimation of the TD target, both bias and variance play a crucial role. The estimation error of the TD target typically consists of both components. While annealed multi-step bootstrapping reduces the bias introduced by extrapolation, it increases variance by assigning higher weight to trajectory returns. Therefore, the averaged TD target is importantly utilized to mitigate the variance of the target Q-function, and indeed reduced it, as shown in Figure 9 (left). Regarding RAR, it tackles both the bias and variance of TEE by directly simplifying the joint Q-function. Intuitively, in an extreme case where the discrete action space is compressed into a single dimension, learning the joint Q-function becomes as simple as learning a V-function (though this introduces a significant optimality gap, as discussed in Section 3.1). To further demonstrate the impact of RAR on TEE, we conducted experiments on MADDPG-RAR in MPE. Please refer to point 1 of our rebuttal to reviewer Lbue for details.

> 2. Would the authors be able to show that AEQMIX indeed performs better than EQMIX?

Yes, AEQMIX consistently outperforms EQMIX. Please see point 2 of our rebuttal to reviewer Lbue for supporting results.

> 3. In MADDPG-RAR, which joint actions are mapped to the same representation and which joint actions are separated?

Strictly speaking, all actions have different representations. This is because $g(a)$ is a multi-categorical distribution, and every joint action can have a unique probability.
For example, consider the spread task with 5 agents, each with 5 actions, which maps a 5^5 joint action space to 2^5. If we directly apply argmax to the probabilities, the joint actions are divided into 2^5 groups, each containing between 64 and 127 mapped joint actions. Within each group, however, we did not find significant correlation among these joint actions.

> 4. The $\lambda$ values of QPLEX-RAR when applying the Sigmoid function.

The following table presents the $\lambda$ values when applying the Sigmoid function. It shows that the $\lambda$ values remain stable throughout training.

| $\lambda$ \ Step | 1M | 2M | 3M | 4M | 5M | 6M | 7M | 8M | 9M | 10M |
| - | - | - | - | - | - | - | - | - | - | - |
| $\lambda$ mean | 0.52 | 0.59 | 0.60 | 0.61 | 0.62 | 0.62 | 0.63 | 0.64 | 0.63 | 0.64 |
| $\lambda$ max | 0.98 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
| $\lambda$ std | 0.16 | 0.15 | 0.16 | 0.17 | 0.17 | 0.17 | 0.17 | 0.18 | 0.17 | 0.18 |

> 5. How would AE- and RAR-extended algorithms perform in this illustration? Would the take-away still be that algorithms like MAPPO that do not rely on action-value functions are preferred for tasks with many agents or do the proposed approaches bridge that gap?

Our proposed techniques improve performance, but the gains are not sufficient to match MAPPO. For instance, in the 5-agent scenario shown in Figure 1, MADDPG's normalized return improves from -60 to -51. However, MAPPO achieves -41, maintaining a performance gap. This suggests that while our methods enhance Q-function learning, they do not achieve the same scalability as the V-function-based on-policy methods. But this does not mean MAPPO is better in all cases; off-policy methods have their own advantages, such as higher sample efficiency.

> 6. Would the authors be able to compute the true proportions of extrapolated values and update the respective Figure?

We apologize for the mistake. The proportion of extrapolated values was not computed from the replay buffer.
Instead, we recorded visited state-action pairs using a separate Python dictionary throughout training. As a result, Figure 2(a) reflects the true proportions.

> 7. The work should attempt to make space to discuss related work and literature within the main work.

Thanks for the suggestion. While we included some discussion of related work in Section 2, we acknowledge that it may not be sufficient. We will consider adding a new subsection in Section 2 to better integrate the literature discussion into the main paper.

> 8. The legend of Figure 10 (b) within the supplementary material is misleading.

The term "double Q-learning" in Figure 10(b) refers precisely to [1, 2]. In this figure, the x-axis represents different ensemble sizes, where a larger M leads to a more conservative (underestimated) target. All blue columns apply double Q-learning, which can also contribute to underestimation. To provide contrast, we include a case without double Q-learning when M=0 (pink), which exhibits the most overestimation. The figure demonstrates that simply reducing overestimation does not necessarily lead to better performance.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their responses and clarifications.

> Therefore, averaged TD target is importantly utilized to mitigate variance of the target Q-function, and indeed reduced it as shown in Figure 9 (left).

Do I understand correctly that Figure 9 (left) then shows the TEE for an AE algorithm using both averaged TD targets and multi-step bootstrapped targets with the respective hyperparameters? This was not clear to me previously. Also, for which algorithm does Figure 9 show these metrics? Is this for QMIX/ MADDPG/ QPLEX/ ...?

> Please refer to point 1 of our rebuttal to reviewer Lbue for details.

This provided analysis of the RAR approach is excellent, and I would expect to see some of it in the revised paper since it provides essential evidence for claims made in the work.
The same goes for the provided results for QPLEX-RAR. As a bonus, it would be helpful to also provide TEE values for QPLEX with and without RAR (as done for MADDPG) to showcase that RAR also reduces TEE as claimed, since the QPLEX RAR approach differs from the one used in MADDPG.

> Our proposed techniques improve performance, but the gains are not sufficient to match MAPPO. For instance, in the 5-agent scenario shown in Figure 1, MADDPG's normalized return improves from -60 to -51. However, MAPPO achieves -41, maintaining a performance gap.
>
> This suggests that while our methods enhance Q-function learning, they do not achieve the same scalability as the V-function-based on-policy methods. But this does not mean MAPPO is better in all cases; off-policy methods have their own advantages, such as higher sample efficiency.

I would really appreciate a mention of this result in the work! In some sense, it can be seen as a negative result, but it sheds light on when different algorithms shine, which is valuable insight for the MARL community.

> The term "double Q-learning" in Figure 10(b) refers precisely to [1, 2]. In this figure, the x-axis represents different ensemble sizes, where a larger M leads to a more conservative (underestimated) target. All blue columns apply double Q-learning, which can also contribute to underestimation.

This was not clear to me from the textual description -- a short clarification as provided in the rebuttal would be helpful here.

Overall, I remain at my original score and continue to suggest accepting this work -- it provides novel techniques and provides insights that are valuable for off-policy algorithms in MARL.
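The exact-visitation bookkeeping the rebuttal describes for Figure 2(a) (a Python dictionary/set of every (state, action) pair ever trained on, queried when targets are computed) can be sketched as follows; the class and method names are illustrative, not the authors' code:

```python
class ExtrapolationTracker:
    """Sketch: record every (state, action) pair trained on, so the
    fraction of extrapolated target evaluations can be measured exactly
    rather than approximated from the current replay buffer contents."""

    def __init__(self):
        self.visited = set()  # hashable (state, joint_action) keys

    def observe(self, state, action):
        """Call once per training sample."""
        self.visited.add((state, action))

    def extrapolated_fraction(self, target_pairs):
        """Fraction of (s', a') pairs in an update never visited before."""
        unseen = sum((s, a) not in self.visited for s, a in target_pairs)
        return unseen / len(target_pairs)
```

After `observe("s1", "a1")`, querying `[("s1", "a1"), ("s2", "a1")]` reports a fraction of 0.5, since only the second pair is unseen.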
Summary: This paper identifies a problem of erroneous Q-target estimation, primarily caused by extrapolation errors, which worsens as the number of agents increases. The authors follow the previous work on single-agent error decomposition and apply it to multi-agent Q-learning, decomposing the error into Target Approximation Error (TAE), Target Estimation Error (TEE), and Optimality Difference (OD). To address the issue of TEE, the authors propose a suite of techniques, including annealed multi-step bootstrapping, averaged Q-targets, and restricted action representation. Experiments on SMAC, SMACv2, and Google Research Football show significant improvements over baseline methods. Claims And Evidence: The claims are generally convincing. Methods And Evaluation Criteria: - The proposed methods make sense for addressing the problem of erroneous Q-target estimation. - The benchmarks, including SMAC, SMACv2, and Google Research Football, are appropriate for assessing the performance of MARL algorithms. Theoretical Claims: The paper does not introduce particularly complex new theoretical contributions but rather focuses on experimental observations. Experimental Designs Or Analyses: - The authors frequently mention the MPE environment in the introduction but do not include any experiments on it. Why was this environment omitted? - The paper should compare its approach with other research addressing overestimation in MARL, such as [1]. --- [1] Regularized Softmax Deep Multi-Agent Q-Learning. Supplementary Material: I reviewed the supplementary material, including the pseudo-code for the AEQMIX algorithm and additional experimental results. Relation To Broader Scientific Literature: The authors highlight the increasing error in joint-action estimation as the action space expands, which is a well-known issue in multi-agent reinforcement learning. They analyze this issue in detail and propose solutions, making it a relevant and important contribution. 
Essential References Not Discussed: [1] Regularized Softmax Deep Multi-Agent Q-Learning. Other Strengths And Weaknesses: Strengths: - The paper presents several effective improvements over baseline methods. - The experimental results demonstrate clear performance gains. Weaknesses: - The scope of the paper is framed around off-policy learning, but the key techniques seem to target algorithms using Bellman optimality equations. - The section on Restricted Action Representation in MADDPG is difficult to follow. The authors could provide a more intuitive example, such as one where the number of actions differs from the number of agents, rather than using the 5^5 example, which is not clearly explained. - The paper references the $\lambda$ parameter in QPLEX but does not provide a clear introduction to QPLEX itself. Some background should be added. - The discussion on $\lambda$ in QPLEX does not seem to align with the idea of Restricted Action Representation. Instead, it appears to be merely a constraint on the weight values rather than a true action space compression. - The authors propose applying an additional sigmoid function to $\lambda_i$, but the original QPLEX paper already applies a sigmoid before summation. While this paper applies it after summation, the practical impact of this change is unclear. - The ablation study on Restricted Action Representation is confusing. The results for MADDPG and QPLEX should be presented together in the same figure for easier comparison. Why are the ablation studies conducted in different environments for different methods? This makes it difficult to interpret the results consistently. Other Comments Or Suggestions: See Weaknesses Questions For Authors: See Weaknesses Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your constructive feedback, which has greatly helped us improve the quality of our paper. We were glad to hear that you found our claims convincing and that our proposed methods make sense for addressing the problem. Below, we address the key concerns you raised. > 1. The authors frequently mention the MPE environment in the introduction but do not include any experiments on it. Why was this environment omitted? The MPE environment is included in **Figure 2 (left) and Figure 4 (left)**. However, we did not highlight it in the experiment section because it contains only one suitable task (Spread), and is significantly less complex than SMAC and SMACv2. We will provide additional results for Spread in the appendix in the next version. For further details, please refer to point 5 of our rebuttal to reviewer o7jb. > 2. The paper should compare its approach with other research addressing overestimation in MARL, such as [1]. Thank you for the suggestion. We will add a comparison to RES. However, based on the current results, it appears that RES does not outperform the fine-tuned QMIX when evaluated within the PyMARL2 codebase. This also aligns with our argument that extrapolation error does not always lead to overestimation. As shown in **Figure 10(a) (Appendix D.2)**, applying a more underestimated target actually degrades performance. > 3. The scope of paper is framed around off-policy learning, but the key techniques seem to target algorithms using Bellman optimality equations. Our techniques target methods that utilize Q-functions, which are fundamental to most off-policy methods. Since these methods require off-policy Q-target estimation, they inherently face target estimation error, as discussed in Section 3 of our paper. > 4. The section on Restricted Action Representation in MADDPG is difficult to follow. 
The authors could provide a more intuitive example, such as one where the number of actions differs from the number of agents, rather than using the 5^5 example, which is not clearly explained. Thank you for this suggestion. We chose the 5^5-to-2^5 example because it directly corresponds to our experimental setup in Figure 4 (left). As another example, with 4 agents and 5 actions each, we can likewise compress the action space from 5^4 to 2^4. For a more detailed explanation of RAR, please refer to point 1 of our rebuttal to reviewer Lbue. > 5. The paper references the $\lambda$ parameter in QPLEX but does not provide a clear introduction to QPLEX itself. Some background should be added. Thank you for pointing this out. We will add a brief introduction to QPLEX to provide the necessary background for the discussion on the $\lambda$ parameter. > 6. The discussion on $\lambda$ in QPLEX does not seem to align with the idea of Restricted Action Representation. Instead, it appears to be merely a constraint on the weight values rather than a true action space compression. The concept of RAR is to **limit the effect of joint actions on the joint Q-function**. In QPLEX, since the joint action component is already separated by $\lambda$, directly constraining $\lambda$ can effectively restrict its influence. Applying the same technique as in MADDPG could achieve similar results but would require additional networks. We will provide corresponding experiments in the appendix. > 7. The authors propose applying an additional sigmoid function to $\lambda_i$, but the original QPLEX paper already applies sigmoid before summation. While this paper applies it after summation, the practical impact of this change is unclear. The original QPLEX paper applies a sigmoid before summation, but this does not restrict the impact of joint actions, as other weight terms can still amplify it. 
The full expressive power of QPLEX requires $\lambda\in[0,\infty)$, meaning that even with a pre-summation sigmoid, QPLEX retains full representation capability. By constraining $\lambda\in[0,1]$, our approach explicitly limits action representation ability, leading to more stable results. > 8. The ablation study on Restricted Action Representation is confusing. The results for MADDPG and QPLEX should be presented together in the same figure for easier comparison. Why are the ablation studies conducted in different environments for different methods? This makes it difficult to interpret the results consistently. We understand the concern. However, our MADDPG and QPLEX use **different codebases** as detailed in Appendix D.1, making direct performance comparison within the same figure difficult (mainly because of the difference in **parallelized environments**). Moreover, the performance of MADDPG and QPLEX differs significantly: SMACv2 may be too hard for MADDPG, and SMAC may be too easy for QPLEX, which makes the effect of RAR less noticeable. We will clarify this in the next version. --- Rebuttal Comment 1.1: Comment: Thanks for your clarification. I have updated my score. Good luck!
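The constraint discussed in points 6-7 of the rebuttal above can be illustrated with a toy sketch, assuming $\lambda$ is formed by summing per-head terms (a simplification of QPLEX's attention-based weights): a per-head sigmoid leaves the summed $\lambda$ unbounded above by 1, while a post-summation sigmoid constrains $\lambda$ to [0, 1].

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lam_pre(heads):
    """Sigmoid applied per head *before* summation (as in QPLEX):
    the sum of K sigmoids lies in [0, K), so lambda can exceed 1
    and the joint-action influence is not restricted."""
    return sum(sigmoid(h) for h in heads)

def lam_post(heads):
    """Sigmoid applied *after* summation (as proposed): the final
    weight is squashed into [0, 1], explicitly limiting the effect
    of joint actions on the joint Q-function."""
    return sigmoid(sum(heads))
```

For three strongly positive heads, `lam_pre` approaches 3 while `lam_post` stays below 1, which is the "restricted action representation" effect the rebuttal describes.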
Visual Autoregressive Modeling for Image Super-Resolution
Accept (poster)
Summary: This paper proposes VARSR, a visual autoregressive framework for image super-resolution (ISR), addressing the trade-off between fidelity and realism. By leveraging next-scale prediction, prefix tokens for LR conditioning, scale-aligned rotary positional encodings (SA-RoPE), and a diffusion refiner for quantization residual modeling, VARSR achieves promising results in perceptual quality while maintaining computational efficiency. The authors also introduce a large-scale dataset and an image-based classifier-free guidance (CFG) mechanism to enhance realism. Extensive experiments demonstrate VARSR’s superiority over GAN- and diffusion-based methods in both qualitative and quantitative metrics, with significant efficiency gains. Claims And Evidence: The paper provides strong empirical evidence to support its claims, including quantitative results, qualitative comparisons, ablation studies, and user evaluations. These claims are well-aligned with the broader goals of ISR and generative modeling, and the evidence demonstrates VARSR’s effectiveness in addressing key challenges in the field. Methods And Evaluation Criteria: The methods and evaluation criteria in the paper are well-designed and comprehensive. VARSR’s innovations (e.g., next-scale prediction, prefix tokens, SA-RoPE, diffusion refiner, CFG) are rigorously validated through quantitative metrics, qualitative comparisons, ablation studies, and human evaluation. The use of both reference-based and non-reference metrics ensures a balanced assessment of fidelity and perceptual quality, while the user study provides valuable insights into real-world applicability. Theoretical Claims: The theoretical claims in the paper are well-supported by both theoretical justifications and empirical evidence. The use of a large-scale dataset and a robust training pipeline further enhances the model’s generative priors, making it a strong candidate for real-world ISR applications. 
Experimental Designs Or Analyses: The experimental designs and analyses in the paper are comprehensive and well-structured. They effectively validate VARSR’s performance through quantitative metrics, qualitative results, ablation studies, and human evaluation. The use of both synthetic and real-world datasets ensures a robust evaluation, while the ablation studies provide valuable insights into the contributions of each component. Supplementary Material: The supplementary material covers implementation specifics, additional ablation studies, and visualizations, reinforcing the paper’s claims and demonstrating VARSR’s effectiveness in ISR. The inclusion of limitations and real-world benchmarks further highlights the practical applicability and areas for future improvement. Relation To Broader Scientific Literature: - Image Super-Resolution (ISR) ISR is a well-studied problem in computer vision, aiming to reconstruct high-resolution (HR) images from low-resolution (LR) counterparts. Traditional methods (e.g., SRCNN, VDSR) focus on pixel-level fidelity but struggle with real-world degradations and perceptual quality. - Autoregressive Modeling Autoregressive models, popularized in language modeling (e.g., GPT, LLaMA), have recently been adapted to vision tasks (e.g., VQVAE, DALL-E). These models generate images by predicting tokens sequentially, often in a coarse-to-fine manner. - Generative Models Generative models, including GANs, VAEs, and diffusion models, have revolutionized image synthesis and restoration. Each has strengths (e.g., GANs for realism, diffusion for detail) and weaknesses (e.g., GAN instability, diffusion inefficiency). Essential References Not Discussed: The references are relatively comprehensive. Other Strengths And Weaknesses: - Strengths 1) First application of visual autoregressive (VAR) modeling to ISR, introducing next-scale prediction as a core mechanism. 
2) The large-scale dataset (4M images) and training pipeline (C2I pretraining + ISR finetuning) enhance generative priors. 3) Comprehensive evaluations across synthetic (DIV2K-Val) and real-world datasets (RealSR, DRealSR) using both reference (PSNR, SSIM) and non-reference metrics (MANIQA, CLIPIQA, MUSIQ). 4) VARSR outperforms SOTA methods (e.g., PASD, SeeSR) in perceptual metrics (e.g., +6.7% MANIQA, +3.7% CLIPIQA) while matching diffusion models in fidelity. 5) Efficiency: 10× faster inference than diffusion methods (0.59s vs. 5.85s for DiffBIR). - Weaknesses 1) Some implementation details are under-explained, e.g., the exact architecture of the diffusion refiner, the interaction between SA-RoPE and multi-scale tokens, and the training dynamics of the VQVAE with scale dropout. 2) The user study (50 images, 20 participants) is relatively small-scale. Expanding this would strengthen claims about human preference. 3) The large-scale dataset’s curation process (e.g., semantic balance, filtering thresholds) is described briefly. A deeper discussion of potential biases or limitations (e.g., domain coverage) is needed. 4) Limited testing on niche or highly degraded real-world scenarios (e.g., historical photographs, extreme compression artifacts). Other Comments Or Suggestions: None. Questions For Authors: 1) How does the computational cost of VARSR scale with output resolution? The paper mentions tiling for higher resolutions but lacks empirical analysis. 2) Could the diffusion refiner be replaced with a lightweight alternative (e.g., GAN-based) without sacrificing performance? 3) The prefix token approach assumes LR and HR scales are spatially aligned. How does VARSR handle severe misalignment (e.g., rotation or perspective distortion in LR inputs)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **A1. Implementation details.** Thank you for your suggestions. We will elaborate further on these details in our supplementary materials, for example, the MLP architecture of the diffusion refiner, which consists of linear and activation layers. Additionally, our code will be open-sourced to help readers double-check and verify our implementation details. **A2. User Study.** We conducted a larger-scale user study involving 20 participants to evaluate 100 images. The scale of this study is already sufficiently large compared to previous research (e.g., PASD in ECCV2024 involved 15 participants evaluating 40 images). The experimental results further validate our assertion that our VARSR attains the highest selection rate of 57.1%, significantly surpassing alternative methods. This underscores the potent capability of VARSR in real-world settings to produce lifelike images that harmonize with human aesthetics. | Methods | BSRGAN | Real-ESRGAN | StableSR | PASD | SeeSR | VARSR(Ours) | |:-:|:-:|:-:|:-:|:-:|:-:|:-:| | Selection Rates | 0.35% | 0.7% | 3.5% | 17.3%| 21.05% | **57.1%** | **A3. Domain coverage of our large-scale dataset.** To achieve diversity and balance in images from different domains, we conducted semantic clustering and supplemented specific category data. As shown in the Tab below, we ensured that the dataset covers a wide range of category scenes and a relatively balanced proportion (Scenes with a broader semantic scope correspond to a higher proportion of images). This includes portraits, people, food, animals, natural landscapes, cartoons, cityscapes, indoor and outdoor scenes, ensuring comprehensive coverage of visual concepts and richness of scene content. It is undeniable that due to the limited number of images, some rare scenes may not be covered well, such as the semantic scenes discussed in Sec. B.4 (Limitations) of our Supp. 
| Categories | Indoor | Outdoor | Nature | Human | Plant | Object | Animal | Text | Food | Cartoon | Others | |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| | Rates | 10% | 9% | 13% | 28% | 8%| 9% | 6% | 4% | 8% | 2% | 3% | **A4. Highly degraded scenarios.** In Sec. B.2 of our Supp., we conducted evaluations on the RealLR200 dataset and achieved SOTA results compared to previous methods. The RealLR200 dataset already contains many highly degraded real-world images, such as historical photographs and extreme compression artifacts. We further provided restoration results in these extreme degradation scenarios, which can be accessed at https://figshare.com/articles/figure/extreme_pdf/28668452?file=53243348. It can be observed that VARSR can still produce faithful and high-quality results in such extreme degradation scenarios, validating the robust capability of VARSR. **A5. Tiling for High-resolution images.** VARSR adopts a tiling approach in generating high-resolution images, which is completely consistent with the diffusion based method. Specifically, we uniformly divide the high-resolution image to be generated into overlapping grids, with each grid having a standard resolution of $512\times 512$. VARSR generates SR results for each grid separately and then tiles them together to obtain the complete image restoration result. Therefore, the computational cost increases linearly with the increase in output resolution. However, we can batch process different grids in parallel to accelerate the inference process, significantly increasing the actual computational speed. Importantly, due to VARSR and the diffusion model using the same tiling approach, the tenfold efficiency advantage over the diffusion model still exists when handling images of various resolutions. **A6. Clarification of Diffusion Refiner.** Please see A4 to Reviewer yFzJ. **A7. 
Spatial Misalignment.** In the field of image restoration, our goal is to recover the original HR image from severely degraded LR images, where degradation refers to distortions such as blur, noise, compression, and other artifacts that may occur during transmission or capture processes, as defined in the classic work SRCNN (TPAMI2015) in ISR field. The spatial distortions you mentioned, such as rotation or flipping, typically do not fall within the scope of distortions that need to be addressed in ISR field. When spatial distortions such as rotation occur, we can employ preprocessing methods (e.g., corner detection) to restore spatial positioning, followed by the image restoration process. Therefore, we can assume that LR and HR images are always aligned in spatial scale. Our proposed SA-RoPE effectively conveys this spatial structural consistency, thereby enhancing the ISR performance.
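The tiling scheme described in A5 of the rebuttal above can be sketched as follows; the 64-pixel overlap and the clamping of the last tile to the image border are illustrative assumptions, not the exact implementation. The number of tiles grows linearly with the output pixel count, matching the stated linear scaling of computational cost.

```python
def tile_coords(height, width, tile=512, overlap=64):
    """Top-left coordinates of overlapping tiles covering an image.
    Tiles advance by (tile - overlap); when the regular grid stops
    short of the border, one extra tile is clamped to end exactly at
    the image edge so the whole image is covered."""
    stride = tile - overlap
    ys = list(range(0, max(height - tile, 0) + 1, stride))
    xs = list(range(0, max(width - tile, 0) + 1, stride))
    if ys[-1] + tile < height:
        ys.append(height - tile)
    if xs[-1] + tile < width:
        xs.append(width - tile)
    return [(y, x) for y in ys for x in xs]
```

Each tile would then be super-resolved independently (tiles can be batched in parallel, as the rebuttal notes) and the overlapping regions blended back into the full-resolution result.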
Summary: This work presents a novel generative model for image super-resolution, leveraging visual autoregressive modeling. The generative process is conditioned on the low-resolution input, treated as a prefix token. To enhance performance, the authors introduce scale-aligned positional encoding and a diffusion-based refinement step to mitigate quantization losses. The proposed method surpasses previous diffusion-based models in both fidelity and efficiency. ## after rebuttal: weak accept Claims And Evidence: The authors identify the challenges of adapting autoregressive models for image super-resolution and propose tailored solutions for each issue. Methods And Evaluation Criteria: Yes, please refer to experimental design for more details. Theoretical Claims: No, as there are no proofs or theoretical claims. Experimental Designs Or Analyses: Yes. The experimental validation is consistent with previous work, and sufficient ablations regarding the contributions are provided. However, the data collection and filtering processes remain questionable. Supplementary Material: Yes, I read the entire supplement. Relation To Broader Scientific Literature: This paper explores a novel generative framework recently popularized by VAR, while also tackling efficiency, adding further value to the approach. Essential References Not Discussed: Essential references are discussed. Other Strengths And Weaknesses: Strengths: - Overall, this is a strong and well-written paper, supported by extensive experimental results. - The authors clearly articulate their motivation and provide thorough explanations of their methodology, which significantly enhances the efficiency of generative image super-resolution, which is a valuable contribution. Weaknesses: - The use of classifier-free guidance noticeably reduces SSIM, likely affecting PSNR similarly. Established perceptual metrics like LPIPS also show a negative impact. - The larger dataset appears to negatively influence SSIM and LPIPS. 
Since a filtering stage based on MUSIQ/MANIQA was applied, it seems that the selected images are optimized specifically for these metrics, raising concerns about data representativeness. - The overall performance of VARSR seems strong specifically on the metrics used for pre-filtering the data. Compared to previous methods like SeeSR, which use smaller and lower-quality datasets, this brings the value of the collected data into question and the fairness of evaluation. - The training setup is complex and resource-intensive, aiming to design a more efficient diffusion-based alternative — though diffusion processes are still integral to the approach. Other Comments Or Suggestions: None. Questions For Authors: - How does the model perform when using a standard super-resolution model as the refiner instead of a diffusion model? Does the efficiency also improve? - How does the model perform on IQA metrics that were not used for data filtering? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **A1. Effectiveness of CFG.** CFG leads to a certain reduction in fidelity metrics but significantly improves the perceptual quality of the image. Existing fidelity metrics have limitations in accurately measuring human perceptual quality, especially when the original HR image quality is low. In Fig.4, generated images of higher perceptual quality for humans can lag behind in certain fidelity metrics, as overly smoothed low-quality images tend to score better on these metrics. These limitations have been confirmed in many previous studies (e.g., SUPIR in CVPR2024 and Pipal in ECCV2020), and mathematical derivations have verified the inherent contradiction between fidelity and quality (The Perception-Distortion Tradeoff in CVPR2018). In Fig.9, the introduction of CFG results in significantly richer textures in the generated images, substantially improving perceptual quality to meet human preferences while maintaining correct semantics. **A2. Other IQA Metrics.** Thanks for your valuable comments. First, as mentioned in A1, fidelity and IQA metrics can exhibit certain contradictions. Using our large-scale datasets for training can generate images that retain semantics and have higher quality. Second, the good performance of VARSR in IQA metrics does not originate from specialized data filtering but from its ability to generate high-quality images. We further conduct evaluation using other well-adopted IQA metrics that were not used for data filtering, including CNNIQA (CVPR2014), HyperIQA (CVPR2020), and TOPIQ (TIP2024). As shown in the Tab below, VARSR continues to achieve SOTA results, surpassing other methods. 
||Metrics|BSRGAN|RealESR|SwinIR|LDM|StableSR|DiffBIR|PASD|SeeSR|VARSR(Ours)| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |DIV2K|CNNIQA|0.5492|0.5652|0.5402|0.5579|0.6274|0.6413|0.6269|0.6613|**0.6661**| ||HyperIQA|0.5682|0.5586|0.5235|0.5225|0.6100|0.6164|0.6158|0.6666|**0.7031**| ||TOPIQ|0.5413|0.5182|0.4796|0.4695|0.5923|0.6105|0.6165|0.6793|**0.7020**| |RealSR|CNNIQA|0.5513|0.5624|0.5281|0.5637 | 0.6029|0.6077|0.5938|0.6594|**0.6692**| ||HyperIQA|0.5617|0.5231|0.5093|0.4936|0.5703|0.5690|0.6001|0.6746|**0.7038**| ||TOPIQ|0.5502|0.5137|0.4882|0.4762|0.5579|0.5580|0.5920|0.6854|**0.6991**| |DRealSR|CNNIQA|0.4989|0.4849|0.5017|0.5367|0.5518|0.6025|0.5794|0.6132|**0.6445**| ||HyperIQA|0.5305|0.4938|0.5074|0.5050|0.5537|0.5992|0.6008|0.6583|**0.6866**| ||TOPIQ|0.5058|0.4622|0.4694|0.4807|0.5330|0.5831|0.5963|0.6534|**0.6800**| **A3. Complexity of the training setup.** Our model is no more complex than diffusion methods, both requiring the same three-step process for application in ISR tasks: (1) training the VAE, (2) pre-training on C2I/T2I tasks, and (3) fine-tuning on ISR. The apparent simplicity of training in previous diffusion methods stems from directly using pre-trained Stable-Diffusion and its VAE as base models. These base models have already undergone training in the first two steps, requiring only fine-tuning for the ISR task. However, the open-source VAR base model falls short of our needs, as it could only generate $256*256$ images and is limited in generated image quality. Therefore, we need to conduct training in all three stages, making it appear more complex. We intend to open-source the base models trained in the first two steps to ease training burdens for future research and contribute more to the community. **A4. Clarification of Diffusion Refiner.** Our proposal of the Diffusion Refiner does not mean introducing an additional ISR process but offers a mapping of continuous residual distributions. 
As highlighted in lines 185-195, discrete vector quantization of the image introduces loss, thereby restricting the upper bound of restoration, as VAR can only predict the quantized discrete vectors of the image. Therefore, we specifically proposed a refiner to convert predictions of the categorical vector distribution into a continuous-valued space through a diffusion loss, thereby enhancing the upper bound of VAR's capacity. Such an idea has been validated in previous works: MAR (NeurIPS 2024) and HART (ICLR2025). The Refiner solely serves to map the probability distribution of quantized residuals with VAR features as a condition, and does not have the capabilities of an ISR model. A lightweight network (only 37M parameters, accounting for 3% of the 1.1B model) suffices for the discrete-to-continuous mapping. In the Tab below, a larger refiner does not yield significant gains, which was also confirmed in MAR (NeurIPS 2024). Thus, we believe that a standard SR model as the refiner will not lead to improvement. ||Refiner|PSNR|SSIM|LPIPS|DISTS|FID|MANIQA|CLIPIQA|MUSIQ| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |RealSR|6-layer|24.61|0.7169|0.3504|0.2470|137.55|0.5570|0.7006|71.26| ||18-layer|24.51|0.7184|0.3492|0.2478|138.75|0.5521|0.7043|70.97|
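The discrete-to-continuous mapping described in A4 rests on the quantization residual that nearest-neighbour vector quantization discards. A minimal numpy sketch with a toy codebook (the refiner itself, which models the residual distribution conditioned on VAR features, is omitted):

```python
import numpy as np

def quantize(latent, codebook):
    """Nearest-neighbour vector quantization: return the code index,
    the quantized code, and the continuous residual it discards."""
    dists = np.linalg.norm(codebook - latent, axis=1)
    idx = int(np.argmin(dists))
    residual = latent - codebook[idx]
    return idx, codebook[idx], residual
```

Since `code + residual` recovers the continuous latent exactly, modeling the residual distribution raises the restoration upper bound beyond what the quantized prediction alone allows.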
Summary: This paper proposes an image super-resolution framework built on top of the VAR framework -- next-scale prediction. They modify the original VAR architecture to digest tokens from low-resolution image inputs. They then leverage the original VAR architecture to upsample to the highest granularity. Given the quantized signals, they further add one diffusion refiner to close the quantization gap, i.e., producing continuous tokens from the VAR prediction. In the above process, a scale-aligned RoPE is introduced, and a modified CFG is presented. Experiment-wise, they compare mostly against generative image super-resolution priors, including GAN- and diffusion-based priors. They demonstrate they are higher quality and more efficient. Claims And Evidence: yes. Methods And Evaluation Criteria: Method-wise, I am confused on the necessity of the diffusion refiner, and the introduction of CFG together with its positive and negative embedding. If the message is VAR is great for ISR, then the framework should focus on pushing more on the upper bound of VAR-related components. However, now it ends up instead mixing diffusion with VAR, which makes the key message vague to me. Evaluation criteria make sense. Theoretical Claims: no theoretical claims. Experimental Designs Or Analyses: I am not sure BSRGAN or Real-ESRGAN is SOTA, as both were published back in 2021. Latest GAN baselines, such as GigaGAN (CVPR 2023, https://mingukkang.github.io/GigaGAN/), are not compared directly. Similarly, a diffusion-based baseline is missing, SinSR (https://github.com/wyf0912/SinSR, CVPR 2024). Supplementary Material: yes. Relation To Broader Scientific Literature: repurpose VAR for image super-resolution. Essential References Not Discussed: They have discussed the essential prior work, e.g., VAR and VQVAE Other Strengths And Weaknesses: Strengths: 1. The authors did a fantastic job in ablation over their design choices. 2. 
The qualitative and quantitative results show their greater quality against the selected baselines. And it makes their ablation justification easy to follow. 3. The metrics are carefully selected and discussed. Weakness: 1. As is stated above. I am confused on the necessity of the diffusion refiner, and the introduction of CFG together with its positive and negative embedding. If the message is VAR is great for ISR, then the framework should focus on pushing more on the upper bound of VAR-related components. However, now it ends up instead mixing diffusion with VAR, which makes the key message vague to me. 2. The role of the diffusion refiner is very marginal. As can be seen from Table 5, the improvement is tiny. For this level of improvement, an additional VAR scale might just suffice without the need to introduce the diffusion part. Then, the need of CFG is also questionable to me. Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **A1. Comparison with more SOTA.** BSRGAN and Real-ESRGAN are still commonly used SOTA GAN-based models due to their excellent performance. Other ISR works (e.g., SeeSR, PASD) also chose them as GAN-based baselines for comparison. GigaGAN does not provide open-source models or code for testing on its homepage. Therefore, we further conduct a comparison with more recent SOTA DASR (GAN-based, ECCV2022) and SinSR (diffusion-based, CVPR2024). As shown in the Tab, the results are consistent with the findings in the paper, with VARSR leading by a significant margin in perceptual quality metrics, validating the strong performance of VARSR. ||Metrics|PSNR|SSIM|LPIPS|DISTS|FID|MANIQA|CLIPIQA|MUSIQ| |:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |DIV2K|DASR |24.46|0.6253|0.3696|0.2533|57.37|0.3104|0.4960|53.96| ||SinSR| 24.22|0.5922|0.3429|0.2157|42.17|0.4101|0.6411| 61.46| ||**VARSR**|23.91|0.5890|0.3260|0.2218|35.51|**0.5340**|**0.7347**|**71.27**| |RealSR|DASR|25.47|0.7575|0.32401|0.2267|133.33|0.2470|0.3198 | 41.21| ||SinSR |24.86|0.7191|0.3472|0.2500|142.31|0.3985|0.6162|60.57 | ||**VARSR**|24.61|0.7169|0.3504|0.2470|137.55|**0.5570**|**0.7006**|**71.26**| |DRealSR|DASR|29.75|0.8262|0.3099|0.2275|155.36| 0.2809|0.3813|42.41| ||SinSR|28.26|0.7443|0.3743|0.2495|173.42|0.3843|0.6302|55.28| ||**VARSR**|28.16|0.7652|0.3541|0.2526|155.87|**0.5362**|**0.7240**|**68.15**| **A2. Necessity of Diffusion Refiner.** As mentioned in lines 043-051 of our paper, we believe that VAR is effective for ISR tasks. Similar to other improvements made to VAR-related components (e.g. Prefix Tokens, SA-RoPE), the Diffusion Refiner is also an attempt to enhance the upper bound of VAR in ISR tasks. As highlighted in lines 185-195, quantization of image discrete vectorization leads to loss. Even if VAR accurately predicts all quantized discrete tokens, the generated image will still have quantization losses that limit its upper bound of restoration. 
Therefore, we specifically proposed a refiner to convert predictions of the categorical discrete vector distribution into a continuous-valued space, thereby enhancing the upper bound of VAR's representational capacity. This refiner uses a diffusion-style loss to model the probability of the quantization residuals instead of the complete latents, which accelerates convergence. The idea of using a diffusion refiner for discrete-to-continuous mapping has been validated in many previous works: MAR (NeurIPS 2024) and HART (ICLR2025). In Tab.5, the introduction of the diffusion refiner led to improvements in all metrics, especially in perceptual quality metrics. Notably, MANIQA achieved an average improvement of 2.2%, and SSIM improved by 0.82%. Fig. 8 shows that the diffusion refiner reintroduced many image details lost in quantization (e.g., textures of the clothes and the flowers). Furthermore, the diffusion refiner is an extremely lightweight module, with only 37M parameters accounting for just 3% of the 1.1B model. We believe that the Diffusion Refiner is effective in enhancing the representational capacity upper bound of VAR in ISR tasks with a very small parameter increase. **A3. Necessity of CFG.** Similarly, the introduction of CFG is also aimed at expanding the upper bound of the image quality generated by VAR, thereby generating more realistic images through guided sampling. As mentioned in lines 152-156, VAR, GANs, and Diffusion models all target fidelity as the optimization objective in ISR tasks, which may result in generated images being overly smooth and lacking in detail. It tends to retain distortions such as blur from LR images, leading to lower human-perceived quality. To address this, we propose an image-based CFG that follows the principles of the standard CFG. 
By learning low-quality image distributions during training, it allows us to guide the probability distribution during sampling towards generating higher-quality images, thereby expanding the upper bound of the image quality. Image-based CFG validates the introduction of a new form of CFG into the VAR framework, striking a balance between realism and fidelity similar to the diffusion model. Results in Tab.6 and Fig.9 confirm the effectiveness of image-based CFG, resulting in significantly richer textures in the generated images, substantially improving perceptual quality to meet human preferences while maintaining correct semantics.
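The image-based CFG described in A3 follows the standard classifier-free guidance rule. A sketch, assuming guidance is applied to token logits from a positive (conditional) branch and a negative (low-quality) branch; the branch construction itself is the paper's contribution and is not reproduced here.

```python
import numpy as np

def cfg_logits(cond_logits, uncond_logits, scale=1.5):
    """Classifier-free guidance: extrapolate from the negative
    (low-quality) branch toward the positive (conditional) branch.
    scale == 1 recovers the conditional logits; scale > 1 pushes
    sampling further away from the low-quality distribution."""
    cond = np.asarray(cond_logits, dtype=float)
    uncond = np.asarray(uncond_logits, dtype=float)
    return uncond + scale * (cond - uncond)
```

This is the same extrapolation used by diffusion CFG; applying it to the VAR token distribution is what trades a little fidelity for the richer textures reported in Tab.6 and Fig.9.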
Summary: This paper presents VARSR, a Visual Autoregressive Model for Image Super-Resolution (ISR). VARSR leverages autoregressive modeling with next-scale prediction, prefix tokens for integrating low-resolution conditions, Scale-aligned Rotary Positional Encoding (SA-RoPE) for preserving spatial structure, and a Diffusion Refiner for pixel-level fidelity. The model is trained on a large-scale dataset of over 4 million high-quality images. Experiments show that VARSR outperforms existing methods in both fidelity and realism, offering higher computational efficiency compared to diffusion-based models. Claims And Evidence: - The paper compares VARSR with several state-of-the-art methods (GAN-based, diffusion-based, and autoregressive-based), and VARSR consistently performs well in both quantitative and qualitative evaluations across different datasets. It achieves superior results on no-reference IQA metrics, such as MANIQA, CLIPIQA, and MUSIQ, and performs on par with diffusion-based methods in terms of reference-based metrics like PSNR and SSIM. - The paper demonstrates that VARSR is more computationally efficient than diffusion models, requiring only 0.59s for inference, significantly reducing the number of steps needed compared to diffusion-based methods. Methods And Evaluation Criteria: Nan Theoretical Claims: Nan Experimental Designs Or Analyses: Yes Supplementary Material: Yes Relation To Broader Scientific Literature: - The paper builds upon prior research in autoregressive image generation (e.g., VQGAN, DALL-E) and image super-resolution (e.g., GAN-based methods, diffusion models). It extends the work on autoregressive models by introducing next-scale prediction and addressing issues specific to ISR tasks, such as pixel-level fidelity and the preservation of spatial structure. - I keep up with the literature in this area. 
Essential References Not Discussed: Nan Other Strengths And Weaknesses: - The originality of the approach lies in its integration of autoregressive modeling with image super-resolution, making it capable of achieving high fidelity and realism with computational efficiency. The extensive experiments, including a user study, validate the proposed method's performance in real-world scenarios, highlighting its potential practical application. Other Comments Or Suggestions: - When comparing with other generative models, it appears that the authors did not re-train these models on the newly proposed large-scale dataset. As shown in Table 9, VAR benefits from training on the large-scale dataset. Therefore, directly comparing it with other generative models trained on ImageNet is not a fair comparison. - The motivation for introducing VAR into the super-resolution field is not clearly explained. The authors claim that VAR preserves structure better than the Markov process in diffusion models, but the final results show that its reference-based metrics are generally weaker than those of diffusion-based methods, which seems contradictory. Additionally, the authors argue that VAR is more efficient than DDPM, but there are already many efficient designs for diffusion models and one-step generation methods. Therefore, the claim of efficiency improvement for basic diffusion models alone is insufficient to support the motivation of this paper. I believe VAR is an excellent work, but its primary contribution lies in aligning the generative paradigms of vision and text via autoregressive methods. Using it as a simple replacement for diffusion models lacks a clear motivation. Questions For Authors: Please see weakness Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: **A1. Training with different datasets.** Thanks for your comments. In fact, baseline methods are trained on different datasets (e.g., SeeSR uses LSDIR and PASD uses DIV2K/FFHQ, with a tenfold difference in data quantity). There are also differences in the pretrained models, which are important in providing generative priors for ISR (e.g., diffusion methods using Stable Diffusion and GAN methods trained from scratch). In lines 258-270, the primary objective of using large-scale data is for pretraining to acquire generative priors. The open-source VAR base model falls short of our needs, as it could only generate $256\times256$ images and is limited in generated image quality. However, diffusion methods leverage the powerful Stable Diffusion as their base model, which is pretrained on billions of image-text pairs, far surpassing our scale of 4 million. Thus, in evaluating ISR methods, previous works (e.g., SeeSR/PASD) typically compare based on the performance of the models themselves. Based on your advice, we present VARSR results trained on the same datasets as baselines. In the Tab below, when pretraining with our large-scale data and fine-tuning with the same LSDIR as SeeSR, VARSR still performs exceptionally well, far surpassing other methods in perceptual quality metrics. This is consistent with our conclusion when training on large-scale data, validating the superiority of the VARSR framework.

|Dataset|Method|PSNR|SSIM|LPIPS|DISTS|MANIQA|CLIPIQA|MUSIQ|
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|DRealSR|DiffBIR|26.57|0.6516|0.4537|0.2724|0.4602|0.6445|61.06|
||PASD|27.45|0.7539|0.3331|0.2322|0.4551|0.6365|63.69|
||SeeSR|28.13|0.7711|0.3142|0.2230|0.5077|0.6893|64.75|
||VARSR (trained on LSDIR)|27.87|0.7536|0.3716|0.2620|**0.5368**|**0.7206**|**67.74**|

**A2. Advantages of VARSR in preserving semantics.** VAR has the advantage over diffusion models in better human-perceived semantic fidelity and quality. 
(1) Firstly, in terms of fidelity metrics as PSNR and SSIM, VARSR performs similarly to SeeSR and outperforms all other diffusion methods on DIV2K and DrealSR datasets (Tab.1). However, existing fidelity metrics have limitations in measuring human perceptual quality. Fig.4 shows that images with higher human perceptual quality may not score well on fidelity metrics, as overly smoothed images tend to perform better. These limitations have been confirmed in previous studies (e.g., SUPIR in CVPR2024, Pipal in ECCV2020), and mathematical derivations verify the contradiction between fidelity and quality (The Perception-Distortion Tradeoff in CVPR2018). (2) Secondly, user study and numerous examples support VARSR's superiority in maintaining spatial structure and preserving semantics. In Tab.10 (user study), the results of VARSR align more closely with human preferences. In Fig.5/15/16/17, examples illustrate that VARSR excels in generating textures faithful to the original image, outperforming diffusion models. In the last two rows of Fig.5, only VARSR accurately restores architectural semantics and smooth details. In the 2nd and 3rd cases of Fig.17, VARSR generates clear foliage and walnuts, while diffusion methods exhibit illusion issues (SeeSR produces fabricated content). These results highlight VARSR's superior preservation of semantics and spatial structure. **A3. Efficiency Comparison.** Thanks for your comments. We believe that the superior efficiency of VARSR lies in the advantages of the VAR framework over the standard diffusion framework. Therefore, there is no need to compare with models that are optimized to enhance efficiency for a specific framework. This is because these efficiency designs can also be applied to VARSR to further improve efficiency. For example, one-step diffusion models (e.g., OSEDiff in NIPS2025 and SinSR in CVPR2024) often undergo knowledge distillation to match the performance of multi-step generation. 
Our VARSR can also adopt this approach to simplify the inference steps (e.g., reduce the inference scales). Tab. 3 and the explanations (lines 380-384/407-413) demonstrate VARSR's notable efficiency enhancements over diffusion models, validating our motivation. Compared to the one-step diffusion method SinSR (refer to A1 to Reviewer vbZD), VARSR outperformed in most metrics, showcasing its effectiveness. **A4. Motivation.** VAR provides a novel and effective approach for addressing ISR tasks, offering advantages over diffusion models rather than a straightforward substitute. Our work is just an initial attempt, and there is vast potential for further leveraging VAR in ISR tasks. In addition to the advantages in preserving structural features and efficiency, as you mentioned, VAR aligns the generative paradigm for both vision and text. Thus, we believe that VAR has the potential to be integrated with LLMs, enabling the direct utilization of human preferences to guide ISR through optimization forms such as DPO or GRPO, which is a promising avenue for future research.
When, Where and Why to Average Weights?
Accept (poster)
Summary: The paper studies weight averaging, a "trick" rooted in the old Polyak averaging that never disappoints practitioners, from fitting logistic regression to training modern LLMs. This paper provides a relatively large-scale study on AlgoPerf, a benchmark suite for optimizers, aiming at understanding: 1. Whether you should always use weight averaging; 2. How you should set the weight averaging time window; 3. How you should use weight averaging together with learning rate decay. The paper shows that: using weight averaging always gives you faster convergence (i.e. fewer iterations and less wall-clock time) to a targeted validation loss, and using a window size of 1% of the total training steps seems to be consistently the optimal option. Lastly, the paper shows that although weight averaging has some connection with learning rate scheduling and can replace scheduling in some cases (e.g. The Road Less Scheduled, Defazio 2024), weight averaging still shows better performance when combined with standard learning rate scheduling. Overall, the problem studied is important and timely, and the paper is mostly clearly written, but I find some parts lacking details and needing more empirical evidence. # Post rebuttal: The authors agreed to include more discussion of the choice of WA parameters as well as other modifications I suggested. Based on that, I have increased my score from 3 to 4. Claims And Evidence: The empirical results for LAWA and EMA being helpful are clear and solid: given a known optimal hyperparameter setting, applying WA always boosts the performance (Fig. 1 and 2). The results for other claims are less thoroughly evaluated, e.g. - the combination with Shampoo is only evaluated on one dataset - Comparison of WA + long decay schedule with a short decay schedule without WA (Fig. 7a) seems to be only based on one model. Plus, the model used is not stated in the caption. 
For the recommendation of averaging window size, I think there are a couple of factors that could be important but are not taken into account: - Batch size (see e.g. https://arxiv.org/abs/2307.13813) - Weight decay - Learning rate warmup Methods And Evaluation Criteria: AlgoPerf used in the paper seems to be a sensible evaluation benchmark. Theoretical Claims: There are no theoretical claims in the submission. Experimental Designs Or Analyses: The experiment design is aligned with the claims the paper is trying to make. The most crucial hyperparameter, the learning rate, is swept (Fig. 4). However, the learning rate scheduling considered is limited to the cosine schedule, omitting more recent advances such as WSD or schedules with warmup (although I guess the observations would still hold true). Supplementary Material: No supplementary material is included. Relation To Broader Scientific Literature: The idea and the observations are mostly known, and as the authors admit in the submission, many observations confirm the conclusions in past literature. However, I am not aware of any existing papers that actually perform such large-scale studies. So I think the paper could be helpful for the community as a source of empirical evidence. On the other hand, I am not sure if there are any new "insights" this submission provides to the community. The observation I find most interesting is the one in Fig.7a, where the authors show that a long run with weight averaging, at the middle of training, gives you a performance similar to a full shorter run. I think this observation is relatively novel (at least for me) and could potentially help practitioners perform e.g. continuous pre-training, or tuning the training length, more easily. Essential References Not Discussed: Not any I am aware of. 
Other Strengths And Weaknesses: It would be much better if the weight averaging strategies considered came with some formulas to help readers understand them better; the current version describes EMA and LAWA only in words, which I don't find very straightforward. Other Comments Or Suggestions: I think Appendix A.2 could be moved into the main text, since there is still sufficient space there, and these seem to be pretty critical implementation details, as they are highly related to the computation of the wall-clock time and the practical efficiency of WA. Questions For Authors: In page 4 bottom, it is mentioned that "whereas a short one may be suboptimal or result in a serious computational overhead."; can the authors elaborate more on the computational overhead part? I don't see why it is the case if you are using, e.g., EMA. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for the thoughtful comments. We appreciate your recognition of the value of our study! We address specific points below. > The results for other claims are less thoroughly evaluated, e.g. the combination with Shampoo is only evaluated on one dataset. Comparision of WA with LR schedules (Fig 7) seems to be only based on one model. We acknowledge that the mentioned experiments were missing some data, and we are happy to share **additional results**: - We performed [additional experiments on Shampoo](https://anonymous.4open.science/r/ICMLE1C2/Shampoo.pdf), considering **more workloads** and including **EMA**. Shampoo consistently hits the target faster across tasks when equipped with LAWA or EMA. - [WA vs Short LR schedule](https://anonymous.4open.science/r/ICMLE1C2/Short_LR.pdf) We report results on additional **workloads** and on **EMA**. LAWA and EMA yield better performance throughout training, closely matching the results from shorter schedules. - [We also included EMA](https://anonymous.4open.science/r/ICMLE1C2/Annealing_EMA.png) in the previous analysis on Librispeech Conformer, and find results to be consistent with the one on LAWA. When little or no annealing is applied, EMA provides substantial improvements on the final validation performance; however, when the LR is fully annealed to zero, EMA converges closely to the annealed model. - [LR sweep on EMA](https://anonymous.4open.science/r/ICMLE1C2/LR_Sweep.pdf): We add results on EMA when testing WA at different learning rates, observing that EMA efficiency is preserved across LR's. > For the recommendation of averaging window size, I think there are a couple of factors that could be important: batch size, weight decay, learning rate warmup. We agree that **batch size** and **weight decay** might affect the optimal averaging configuration of LAWA and EMA. We appreciate you highlighting this, and we will include this clarification in the revised version of the paper. 
However, we would like to emphasize that: 1. Due to the high computational cost of AlgoPerf and limited resources, we were unable to sweep all hyperparameters and instead focused on the best-performing combinations for each workload. 2. These configurations account for a variety of hyperparameters, with batch size, weight decay, and learning rate warmup varying across workloads. As such, our recipe is relatively robust to these choices, though some variability may still occur. 3. Finally, as suggested by Reviewer #egdu, [we included additional results on the effect of $\beta$ and $\nu$](https://anonymous.4open.science/r/ICMLE1C2/EMA.pdf) on EMA performance. We refer to the answer to Reviewer #egdu for a discussion on these additional experiments. > The type of learning rate scheduling considered is limited only to cosine schedule, more recent advances such as WSD or learning rate schedule with warmup would be interesting. We would like to clarify that the employed schedule is Cosine Decay **with warmup**. We apologize if this was not clear and will ensure it is explicitly stated in the revision. Regarding recent advances like **WSD**, we find this to be a very interesting question! We would be particularly interested in whether averaging can accelerate the decay phase of WSD, potentially **reducing the computational cost of cooldowns**. We believe this is a promising area for future work, and hope that our study inspires further research on this topic. > It would be much better if the averaging strategies considered can come with some formulas to help readers understand better. Thank you for pointing this out! We had included the algorithms in an earlier version of the paper but excluded them for the submission. [We display them at this URL](https://anonymous.4open.science/r/ICMLE1C2/Algorithms.png) and will include them in the Appendix of the revised manuscript. > Can the authors elaborate more on the computational overhead part? 
I don't see why it is the case. You are correct in noting that averaging does not incur additional overhead when implemented efficiently. In the quoted sentence, we were referring to a naive implementation of WA, where the average is offloaded to CPU memory and then transferred back to CUDA memory before evaluating the model. We appreciate the question and will revise the sentence to make this distinction clearer. For more details on this topic and data about the overhead, please refer to our response to Reviewer #egdu and this analysis. **To conclude**, we sincerely thank you for your review and thoughtful questions. The discussion has been valuable, and we believe that addressing these points and providing additional evidence has strengthened our work by broadening the scope of our analysis and better supporting our claims. We hope to have clarified your concerns, and we would greatly appreciate it if these improvements could be reflected in a higher evaluation. We are happy to address any further questions or doubts. --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the detailed response! My rating stays unchanged: I think this is a good paper that contributes **a lot of useful evidence** and **great value** to the community. However, I have another question for the authors: Imagine I am a practitioner trying to train a model; now, after I read the paper, I know that I should use learning rate scheduling with weight averaging, but **how should I set the WA parameters**? Should I treat them as hyperparameters, use a heuristic value (i.e. an arbitrary value should be fine), a heuristic value in a range, etc.? I would love to see the authors provide a more detailed guideline / plan for revision on "when, where and how" to perform weight averaging. Note that I am also totally fine if the authors admit that this is an open question and there is no good answer to it. 
If that is the case, then I believe it would be better if the authors stated that in a "summarization" section, otherwise it could be disappointing for the readers who believe the paper provided a "complete recipe" but ended up failing to find the desired answer. With this point addressed, I will increase my score. --- Reply to Comment 1.1.1: Comment: Thank you for the response! We indeed have some data on the effect of **WA hyperparameters**, and agree that discussing these results in the paper would be useful for the community. We partially address this question in Figure 3, but are happy to share additional results here. We study the interaction between the update frequency $\nu$ and the averaging window $L$ for LAWA at the following link: https://anonymous.4open.science/r/ICMLE1C2/LAWA.pdf, and between $\nu$ and the EMA discount factor $\beta$ here: https://anonymous.4open.science/r/ICMLE1C2/EMA.pdf In both cases, we observe a consistent trend: more frequent updates (lower $\nu$) benefit from longer memory (higher $L$ or $\beta$), and vice versa, sampling checkpoints further apart (higher $\nu$) works better with shorter memory buffers (lower $L$ or $\beta$). Several combinations of $\nu$ and $L$ or $\beta$ yield strong performance, suggesting that an optimal averaging horizon might exist. We also note that slower averages (high $L$ or $\beta$) are more sensitive to the choice of $\nu$, requiring more careful tuning, and that EMA is usually more sensitive to its hyperparameters. In practice, we find that values of $\nu$ in the range 64–128 combined with $L=20$ provide good performance across workloads for LAWA. For EMA, our analysis suggests that a value of $\nu$ between 64 and 128 with $\beta=0.9$ is effective. For Criteo1TB, which has a much shorter training horizon, a smaller value of $\nu=32$ proves more beneficial. 
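As a compact illustration of the two schemes and of why $\nu$ trades off against the memory parameters, consider the following sketch (illustrative names, not our benchmarked implementation; the $\nu/(1-\beta)$ horizon is a standard back-of-the-envelope heuristic, not a measured quantity):

```python
import collections

def ema_update(avg, theta, beta):
    """One EMA step: avg <- beta * avg + (1 - beta) * theta,
    applied every nu optimizer steps."""
    return [beta * a + (1 - beta) * t for a, t in zip(avg, theta)]

class Lawa:
    """LAWA: uniform average of the last L checkpoints,
    sampled every nu optimizer steps."""
    def __init__(self, L):
        self.buf = collections.deque(maxlen=L)

    def push(self, theta):
        self.buf.append(list(theta))

    def average(self):
        return [sum(coords) / len(self.buf) for coords in zip(*self.buf)]

def ema_horizon(nu, beta):
    """Rough effective memory of the EMA in optimizer steps:
    ~1/(1-beta) buffer updates, each nu steps apart. This is why
    lower nu pairs with higher beta (and vice versa) to keep a
    comparable horizon."""
    return nu / (1.0 - beta)
```

Under this heuristic, for example, $(\nu=64, \beta=0.9)$ and $(\nu=128, \beta=0.8)$ both span roughly 640 optimizer steps, consistent with the trend we observe.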
We acknowledge that better configurations may exist, and that the optimal values may vary depending on the workload and on other hyperparameters. Finally, we observe that EMA with $\nu=1$ performs poorly for $\beta \leq 0.99$. We find this behavior peculiar, and hypothesize that the worse performance of EMA compared to LAWA observed by Sanyal et al. (2023) might be due to *updating the EMA buffer at every step*, whereas for LAWA they use larger values of $\nu$. We plan to further explore this in future work. A similar result holds for LAWA, with the only exception of OGBG, where we however acknowledge a very long training horizon and a very slow baseline, as noted in Kasimbeg et al. (2025). We thank you for your insightful comments and will include these additional details in the revised manuscript. --- **References** Sanyal, S., Neerkaje, A., Kaddour, J., Kumar, A., & Sanghavi, S. (2023). Early weight averaging meets high learning rates for LLM pre-training. Kasimbeg, P., et al. (2025). Accelerating neural network training: An analysis of the AlgoPerf competition.
Summary: This submission presents the experimental findings and evaluation of two weight-averaging (WA) techniques, LAWA and EMA, on the AlgoPerf optimization benchmark, across seven tasks. They find strong training speed improvements (15% reduction in GPU-hours), consistent across varying hyperparameters (learning rate and averaging window) and with second-order optimizers. They also show a small improvement in generalization provided by WA. Finally, they provide experiments linking WA to learning rate annealing, showing that the benefits of WA diminish when annealing the LR to zero. [Score raised from 2 to 3] Claims And Evidence: The authors claim that their experiments show that WA methods speed up training and improve generalization, and that WA is a proxy for shorter learning rate decay but cannot replace it completely. I think that these claims are relatively well confirmed by the experiments presented, with a few caveats: I disagree with the claim that they "show that coupling WA with learning rate annealing yields optimal results" (l49), since they later confirm that learning rate annealing is precisely the case where the benefits of WA disappear (Fig. 7b). Similarly, the generalization improvement provided by WA is very limited (but consistent). Finally, see my main Question regarding speed-ups, considering the results of the AlgoPerf competition (Anonymous, 2025 in the submission), which show WA methods as slower than the baseline. Methods And Evaluation Criteria: The authors use 7 (of the 8) benchmarks of AlgoPerf to study the effectiveness of LAWA and EMA for deep learning optimization. This provides a thorough and comprehensive optimization landscape to study their claims. They also provide experiments regarding the learning rate schedule and averaging window size, and an experiment on a second-order optimizer. 
I think only studying two WA methods is a bit limited, as studying more would help better understand and choose WA options through extensive benchmarking, such as the effect of overfit-aware sampling and dense sampling. I'm also surprised the dependence on the EMA coefficient and update frequency was not studied like the window size, in particular when the authors claim (l189) to explore the interaction of the frequency variables with others. I am also surprised that many of the experiments (Figures 3, 4, 5, 6, 7, 8) only consider LAWA and not EMA. This feels like an important oversight: to justify the general and consistent effectiveness of WA methods, considering only one of the two approaches is insufficient. Theoretical Claims: There are no theoretical claims in the article, but links with previous theoretical claims which are confirmed in this work are presented. Experimental Designs Or Analyses: The experiments done by the authors seem valid and consistent with previous findings. I voiced the concerns I had in other sections (relative lack of variety in experiments, inconsistent results with the AlgoPerf competition...). Supplementary Material: I read the Appendix. Relation To Broader Scientific Literature: This work is an experimental benchmarking of the effectiveness of WA methods for deep learning optimization. It is related to the WA and optimization literature, mainly confirming pre-existing findings on large optimization tasks. Essential References Not Discussed: Most essential references have been discussed. I find the Model Soups part lacking more citations. The links between WA methods and self-supervised approaches could also be discussed, as well as more modern approaches using WA methods. Other Strengths And Weaknesses: This paper is well-written and clear, showing the effectiveness of WA clearly. 
I find these results interesting, but I think that all results (except the combination with Shampoo, and the experiments in Fig 7b and 8) were already shown theoretically or experimentally by previous works: the speed-up and generalization improvement, and the main empirical link between WA and learning rate decay. I feel that this work either warrants further experiments with various WA approaches, or at least varying the missing hyperparameters (EMA coefficient or update frequency); or some more novel insights to justify an acceptance, but I can be convinced otherwise. As said by the authors (l361), studying a broader range of optimizers could be interesting. Other Comments Or Suggestions: I'm a bit unsure if a second-order polynomial is an appropriate fit for the results in Figure 3, in particular for the upper results (Criteo, Conformer, DeepSpeech etc). Can the authors also provide the values for runs that result in a proportion over 1? Please provide the value of $\eta_{\text{max}}$ (line 291). Can the authors please detail the link between WA and learning rate decay that they find in Theorem 5.3. in Garrigos & Gower? l67/68: additional "," l316: "Citeo1TBB" Fig 7b: "diminsh" Questions For Authors: I have an important caveat on the results of the authors. Following Anonymous, 2025 (the results of the AlgoPerf competition cited by the authors), LAWA and EMA perform worse than the NadamW baseline (Fig 1. a/b, Table 1). I'm having a hard time making sense of the authors' results in this paper versus the ones reported in the AlgoPerf competition, which seem to show that averaging weights only resulted in slower training. One reason I can find for this discrepancy is that the authors report mainly the number of optimization steps and not the training time, which matters in the benchmark competition. However, this is not the case anywhere, since many claims in the submission are related to GPU-hours of training time (Tab 1 for instance). 
What is the reason for this discrepancy? Could the authors give the additional time that the averaging steps add to the standard training time, to better understand this possible slowdown? What throughput difference do the offline and online versions have? Why did the authors not also report the performance for the ImageNet image classification task with a ResNet, the last benchmark in the AlgoPerf suite? It is not clear what benchmark is considered in Figure 7. The difference in results between LAWA and EMA is not really analyzed in the article, if I'm not mistaken. What is the answer to the title of this paper? The answer seems to be "always use WA, as a way to better do learning rate decay" to me while reading. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the thorough review! We appreciate the detailed feedback. > I think only studying two WA methods is a bit limited. We agree that broader benchmarking would be valuable and hope this work encourages further study at scale. We focus on LAWA and EMA due to their success and adoption. Influential works such as Szegedy et al. (2016), Vaswani et al. (2017), and Merity et al. (2017) used **one of these two averaging flavors**. We briefly tested Polyak averaging, but found it ineffective, consistent with Defazio et al. (2024), and omitted it. Finally, AlgoPerf's high computational cost limits extensive experimentation, so we prioritized exploring LAWA and EMA in depth. > I'm surprised the dependence on the EMA coefficient and update frequency was not studied. Thank you for mentioning this! We recognize its importance and are happy to share additional results: https://anonymous.4open.science/r/ICMLE1C2/EMA.pdf 1. Small values of $\nu$ require a large $\beta$, and vice versa. 2. An update frequency around 0.003 with $\beta=0.9$ works well across tasks. 3. $\nu=1$ (first group in each plot) performs poorly for $\beta\leq0.95$. > Many experiments only consider LAWA and not EMA. We acknowledge this and would like to share **additional experiments**: - [Shampoo](https://anonymous.4open.science/r/ICMLE1C2/Shampoo.pdf) We include **more workloads** and **EMA**. Shampoo consistently hits the target faster across tasks when equipped with LAWA or EMA. - [LR schedule](https://anonymous.4open.science/r/ICMLE1C2/Short_LR.pdf). Results on more **workloads** and **EMA** confirm that WA closely matches a short LR schedule. - We test [EMA with different annealing](https://anonymous.4open.science/r/ICMLE1C2/Annealing_EMA.png), and find consistent results with LAWA. - EMA efficiency is preserved [across different LR's](https://anonymous.4open.science/r/ICMLE1C2/LR_Sweep.pdf), similar to LAWA. 
> Can the authors detail the link between WA and LR decay that they find in Thm 5.3. in Garrigos & Gower? Thm 5.3 provides convergence results for SGD under decreasing stepsizes, requiring an averaging scheme adapted to the stepsize schedule. A further connection to averaging stems from comparing this result to Lemma 7.3, where the gradient is computed at the running average of the iterates (see also Thm 7.4). > Following Anonymous, 2025, WA perform worse than NadamW. What is the reason for this discrepancy? Could the authors give the additional time that averaging adds to training? This is a subtle point, thank you for raising it. 1) The AlgoPerf competition API did not allow switching model parameters to the WA buffer before evaluation, resulting in a suboptimal WA implementation. [This plot shows the](https://anonymous.4open.science/r/ICMLE1C2/Overhead.pdf) **overhead** of EMA. On Criteo it's 10x slower. 2) AlgoPerf does not allow **asynchronous** CPU-GPU transfers and has limited VRAM, requiring WA buffers to be stored in CPU memory and transferred frequently over bandwidth-constrained hardware. However, **WA can be implemented efficiently** using asynchronous non-blocking transfers, as shown by Deepseek AI et al. (2024). We used offline averaging to reduce computational costs, but an efficient implementation would have negligible overhead. > The difference between LAWA and EMA is not really analyzed in the article. We do not find significant differences between the two (Fig 2). Moreover, our goal was not to find _the best averaging scheme_, so we omitted a thorough comparison. However, we agree this could be better discussed and will update the paper. > What is the answer to the title of the paper? WA should always be used with a LR schedule to produce better models **for free** during training. How to access *the optimal* model at any time during training remains a fascinating open question. 
Our study presents WA as a step in this direction, moving the model closer to Pareto optimality of loss vs training time. This is precisely the reason behind the observed efficiency gains. We will clarify this point in the article. ### Additional comments - We exclude ResNet because neither NadamW nor Shampoo reach the target (Kasimbeg et al, 2025), leaving no baseline for comparison. - Fig 7 is Librispeech Conformer, $\eta_{max}=0.00175$, which is the best configuration from Dahl et al. (2023). - Fig 3: the y-axis shows the proportion of steps w.r.t. the training horizon. Runs with $y>1$ do not reach the target and are hence excluded. Finally, while some aspects of our work were explored in prior studies, our contribution expands their scale and scope. We believe our paper helps unify existing findings, evaluating them across tasks, and providing thoroughly validated claims. ### Conclusion We appreciate the insightful questions. We think that these additional analyses will significantly enhance our study and reinforce its contributions. We hope to have addressed your concerns, and if so, we would appreciate your endorsement for acceptance. --- Rebuttal Comment 1.1: Comment: **2 methods** Thank you for the answer. I agree that these methods are influential, but still feel that the final version warrants more justification for not considering other approaches, such as the justification provided for Polyak averaging. **New experiments** Thank you for the additional experiments. I feel these help provide a strong and consistent overview of the effectiveness of averaging methods. **AlgoPerf times** These explanations help understand the underperformance of WA methods on AlgoPerf, and would also be a welcome addition to the final version of the paper. However, why is the discrepancy so much more important for the Criteo task compared to the others? **LAWA/EMA** The additional experiments provided help the comparison between the two methods. 
But as said by the authors, additional comments comparing the two approaches will be welcome. I thank the authors for their detailed and clarifying rebuttal, which strengthened their manuscript. Several of the points that have been answered still require a modification to the final manuscript, but I am raising my score to 3, asking now for acceptance. --- Reply to Comment 1.1.1: Comment: Thank you for the comment and feedback! The significant overhead observed when training a DLRM model (Naumov et al., 2019) on the **Criteo1TB** dataset (Lab, 2014) arises from the **large embedding tables** of the DLRM, which uses a vocabulary size of around 4 million entries (specifically $4194304$, Dahl et al., 2023). Frequent CPU-GPU transfers of model parameters, which include these large embedding tables, considerably slow down the algorithm, leading to more time spent transferring buffers than updating the model parameters. --- **References** Naumov, M., et al. Deep learning recommendation model for personalization and recommendation systems. 2019. Lab, C. A. I. Criteo 1TB Click Logs dataset. 2014. Dahl, G. E., et al. Benchmarking neural network training algorithms. 2023.
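For readers comparing the two averaging schemes discussed in this thread, a minimal sketch of the EMA and LAWA update rules (plain NumPy; the function names are ours, not from the paper):

```python
import numpy as np

def ema_update(avg, weights, beta=0.999):
    """One EMA step: blend the running average toward the current weights."""
    return [beta * a + (1.0 - beta) * w for a, w in zip(avg, weights)]

def lawa(checkpoints):
    """LAWA: uniform average of the last k saved checkpoints.

    `checkpoints` is a list of k checkpoints, each a list of parameter arrays.
    """
    return [np.mean(np.stack(params), axis=0) for params in zip(*checkpoints)]
```

Both schemes maintain a second copy of the parameters, which is why buffer placement (GPU vs. CPU memory) and transfer bandwidth matter, as discussed for Criteo above.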
Summary: The authors benchmark weight averaging techniques using AlgoPerf, and find that without learning rate annealing it strongly accelerates training and generalization. It composes positively with learning rate annealing with small gains but cannot fully replace annealing. Claims And Evidence: * Weight Averaging is generally helpful, especially in the absence of learning rate annealing. Experimental evidence supports this. * Weight Averaging Speeds Up Training. Experimental evidence supports this. * Weight Averaging + Learning Rate Annealing works best. Experimental evidence supports this. * Weight Averaging cannot replace learning rate annealing. Methods And Evaluation Criteria: The evaluation framework allows for extensive and diverse experiments, which is important for answering the research questions being asked in the paper. Theoretical Claims: There are no theoretical claims. Experimental Designs Or Analyses: Experiments using AlgoPerf are well designed. Supplementary Material: I briefly reviewed the appendix. Relation To Broader Scientific Literature: The scientific literature implicitly takes advantage of the findings of this paper extensively (wherever weight averaging is used). Formalizing them in a large-scale study, however, presents the benefits clearly and is useful for distilling this implicit knowledge into the paper. Essential References Not Discussed: It would be useful to discuss existing applications of weight averaging a little bit, especially in relation to robust finetuning [1] and OOD generalization [2] [1] Wortsman, Mitchell, et al. "Robust fine-tuning of zero-shot models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. [2] Arpit, Devansh, et al. "Ensemble of averages: Improving model selection and boosting performance in domain generalization." Advances in Neural Information Processing Systems 35 (2022): 8265-8277. 
Other Strengths And Weaknesses: Strengths: The paper is crystal clear in its findings, and this clarity is the basis for my recommendation of acceptance. I like the finding that weight averaging allows access to a better model during training, useful for long-duration training runs. Weaknesses: The findings are not very surprising to the many using weight averaging. However, this is not a reason to reject IMO. We are lacking in clear presentation of simple ideas in this field. Other Comments Or Suggestions: It would be nice to clarify which dataset the figures are associated with in all cases. E.g., Figure 7. I like the idea that learning rate schedules impede continual learning, since you decay to 0 at some point and restarting could be problematic from an optimization standpoint. Do the authors have any evidence that WA can be used for improved continual or transfer learning? Questions For Authors: Why, in Figure 8, does annealing + weight averaging help whereas in Figure 7b LAWA adds no benefit relative to the annealed model? Is it a different dataset, or a difference in some other setting? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your review and for your insightful comments. We appreciate your recognition of the value of our study! We respond below to specific comments. > It would be useful to discuss existing application of weight averaging a little bit, especially in relation to robust finetuning [1] and OOD generalization [2]. Thank you for the additional references, we will include them in the revised manuscript. > It would be nice to clarify which dataset the figures are associated with in all cases. Eg. Figure 7. Thank you for spotting this! We will correct the labeling. Figure 7 is obtained by training a **Conformer** on Librispeech. We note that a similar effect can be observed on **other workloads** as well, and provide **additional results** [at this URL](https://anonymous.4open.science/r/ICMLE1C2/Short_LR.pdf) for comparing Weight Averaging against shorter LR schedules, [and at this URL](https://anonymous.4open.science/r/ICMLE1C2/Annealing_EMA.png) for testing the effect of EMA and LAWA when the learning rate is not annealed to zero. As suggested by Reviewers #egdu and #fFcZ, we will incorporate these additional experiments and considerations in the revised article to provide a broader view and strengthen our findings. > I like the idea that learning rate schedules impede continual learning, since you decay to 0 at some point and restarting could be problematic from an optimization standpoint. Do the authors have any evidence that WA can be used for improved continual or transfer learning? This is a very interesting question! Resuming training from a checkpoint obtained with standard cosine annealing can indeed present challenges, such as **potential forgetting** during the **re-warming phase** (Singh, V. et al., 2025). Therefore, it looks appealing to avoid annealing, and instead resume from the average model. We have not experimented with this point yet, but we think it would be an interesting direction for future work. 
An interesting question is how to optimally **resume training from the averaged model**, especially how to adapt the learning rate value in this scenario. Since averaging acts as a proxy of implicit learning rate decay, we hypothesize that using a smaller learning rate could be more beneficial for continued training from the average rather than restarting from the previous top learning rate. We believe this fits into the broader question of _how to better utilize the average for training_ (Kaddour et al., 2022; Defazio et al., 2024), a topic that remains underexplored but could offer significant benefits to the community. > Why, in Figure 8, does annealing + weight averaging help whereas in figure 7b LAWA adds no benefit relative to the annealed model? Is it a different dataset, or a difference in some other setting? Figure 7b shows results on Librispeech Conformer, which, as reported in Figure 8, benefits only minimally when combined with LAWA. These small gains are not visible at the scale of the plot in Figure 7b. ### References Singh, V., et al. (2025). Beyond Cosine Decay: On the effectiveness of Infinite Learning Rate Schedule for Continual Pre-training. arXiv preprint arXiv:2503.02844. Kaddour, J. (2022). Stop Wasting My Time! Saving Days of ImageNet and BERT Training with Latest Weight Averaging. arXiv preprint arXiv:2209.14981. Defazio, A., et al. (2024). The Road Less Scheduled. arXiv preprint arXiv:2405.15682. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal! I am satisfied and am raising my score accordingly.
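For reference, the cosine annealing schedule discussed throughout this exchange (decaying the learning rate from a peak down to a minimum, typically zero, over the training horizon) has the standard closed form below; this is a generic sketch, not code from the paper:

```python
import math

def cosine_lr(step, total_steps, peak_lr, min_lr=0.0):
    """Cosine annealing: peak_lr at step 0, min_lr at total_steps."""
    progress = step / total_steps
    return min_lr + 0.5 * (peak_lr - min_lr) * (1.0 + math.cos(math.pi * progress))
```

Decaying to exactly zero is what makes restarting awkward: resuming requires re-warming the learning rate, which is where averaging-based alternatives become appealing.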
Summary: This paper systematically explores two existing weight averaging techniques, EMA and LAWA, across the seven diverse tasks in the 2024 AlgoPerf competition. The authors find that weight averaging can either increase the final performance for a given compute budget or speed up training to a specific performance threshold (by roughly ~20% on top of a well-tuned baseline). They also explore whether weight averaging can replace learning rate decay but find that it cannot. Rather, it is always better to combine weight averaging with standard learning rate schedules (cosine in this paper) than to use either method alone. **Post-rebuttal update:** The authors have agreed to rephrasing edits which effectively address my actionable concerns. My conclusion remains that there is little to criticize about the paper aside from the limited novelty of the methods and findings. I still think it is a nice reference and validation of the methods, which is valuable. Overall these two factors roughly balance out to make this a borderline paper in my opinion. Claims And Evidence: Most of the claims are fine, but I have minor issues with some of them. **We explore the relationship between averaging and learning rate annealing and show how to optimally combine the two to achieve better performance:** I do not agree that the paper shows how to **optimally** combine the two. What the paper does show is that combining standard weight averaging with standard learning rate schedules (as opposed to constant learning rates) is better than just using one of them. The paper does not show e.g. some new learning rate schedule that works better with EMA than standard techniques. **(We) show that coupling WA with learning rate annealing yields optimal results— a novel finding to the best of our knowledge:** I somewhat disagree that this is a new finding. Practical works that use e.g. 
EMA on top of a learning rate schedule do so exactly because it works better than not using EMA (Karras 2312.02696 could be an example). Other works have shown that WA alone cannot replace learning rate decay (e.g. Haegele 2405.18392). I think that the fact that the combination yields better results than either alone follows directly from these, although I am not sure if a single work has shown this. Methods And Evaluation Criteria: Yes, evaluating weight averaging techniques on a competitive benchmark on top of very well-tuned baselines is good. The scale and diversity of the benchmark are also convincing. Theoretical Claims: The paper doesn't make any theoretical claims. Experimental Designs Or Analyses: Yes, I looked through the experimental setup and found no significant issues with it. Overall the experiments are convincing and well executed. Error bars are given for some results, further strengthening them. Supplementary Material: Yes, I read through the appendix. Relation To Broader Scientific Literature: The paper does a good job of discussing related literature. I feel this paper primarily serves as a (decently diverse and large-scale) validation of existing methods (standard weight averaging approaches) and observations about them (they improve performance, simple weight averaging cannot replace LR schedules). I did not find the questions asked or the findings very novel or surprising. The empirical validation is still well done and can be of value to the community. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: The paper is well-written and the figures are clear. The experiments are well done as previously mentioned. There is little to criticize about the paper aside from the limited novelty of the methods and findings. I still think it is a nice reference and validation of the methods, which is valuable. Overall these two factors roughly balance out to make this a borderline paper in my opinion. 
Other Comments Or Suggestions: "Even for industrial-scale tasks like benchmark workloads, weight averaging reliably reduces compute costs by hundreds of GPU hours." - I think this sentence is a bit awkward. Only relative savings matter; 100s of GPU hours can range from very significant to very insignificant. The table below shows savings of roughly 100 hours, not 100s. "even on second-order optimizers": There might be differing opinions on this, but I think calling Shampoo a second-order optimizer is a bit of a stretch (even if it is motivated by some very rough approximation of second-order phenomena). "coupling WA with learning rate annealing": I think "combining" would be clearer than "coupling" here. To me, coupling signifies a tighter relationship than just using both methods. E.g. if you varied an EMA time horizon dynamically with the learning rate schedule or lowered the learning rate based on some signal from the EMA performance, that would better qualify as coupling. On related work: I think some types of weight averaging were used in deep learning earlier than you mention. Maybe look into deep RL, the original Transformer paper ("for the big models, we averaged the last 20 checkpoints"), and I think TensorFlow had support for some type of weight averaging very early on. Questions For Authors: No specific questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your comments! We respond to specific comments below, but please let us know if you have any additional questions. > I do not agree that the paper shows how to optimally combine the two. What the paper does show is that combing standard weight averaging with standard learning rate schedules [...] is better than just using one of them. You are right—we do not explore *optimal* combinations of the two, but rather demonstrate that **combining** weight averaging with learning rate schedules outperforms using either alone. The broad scope of the benchmark makes a full optimality study expensive, but we believe this is still a useful result for practitioners. We will revise the manuscript to clarify this distinction. Thank you for pointing this out! > I somewhat disagree that this is a new finding. Practical works that use e.g. EMA on top of a learning rate schedule do so exactly because it works better than not using EMA. You are right, this is not a completely novel claim. Previous works indeed show that combining WA with learning rate schedules is beneficial. We will adjust the revised paper and clarify this point. We believe the impact of our work comes from demonstrating this at scale, unifying insights from previous work in a single competitive benchmark, and showing that WA alone does not replace learning rate decay across a wide range of settings. > Only relative savings matter, 100s of GPU hours can range from very significant to very insignificant. We agree that relative savings are important, and we will rephrase this for clarity. We did report the **15% saving** in GPU hours, which we believe to be significant for this benchmark, and will ensure this is emphasized more clearly in the revision. > I think calling Shampoo a second order optimizer is a bit of a stretch. We will clarify that while Shampoo is inspired by second-order principles, it may not strictly be a second-order optimizer. 
> "coupling WA with learning rate annealing": I think "combining" would be clearer than "coupling" here. Thank you for the useful suggestion. We agree that "combining" would be clearer than "coupling" and will make that adjustment in the revision. > On related work: I think some types of weight averaging were used earlier than you mention in deep learning. Thank you for the additional references. We are aware of this and will update the manuscript accordingly. We will add a paragraph in the Related Work section, citing earlier studies that predate SWA. Apart from the Transformer paper (Vaswani et al., 2017), we found that **other influential studies** such as Szegedy et al. (2016) and Merity et al. (2017) have also employed averaging schemes to improve model performance and reduce overfitting.
Test-Time Canonicalization by Foundation Models for Robust Perception
Accept (poster)
Summary: The paper introduces FoCAL, a zero-shot framework for achieving approximate invariance to complex transformations at scale without requiring additional training. FoCAL operates in two steps: (1) generating multiple transformed variations of an input and (2) ranking them using energy functions derived from CLIP and Stable Diffusion to select the most "canonical" version. The method is evaluated on transformations such as 3D viewpoint shifts, lighting changes, and environmental variations across multiple datasets, including ImageNet, COCO, Objaverse-LVIS, and CO3D. Claims And Evidence: The claims made in the submission are generally supported by convincing experimental results. However, as mentioned in the *Methods And Evaluation Criteria* section, additional results could better demonstrate the advantages or disadvantages of their method in comparison. Methods And Evaluation Criteria: There are some limitations in the evaluation. For example, the paper does not include comparisons with domain adaptation (DA) methods or equivariant architectures for transformations like rotation, which would have provided a more comprehensive understanding of the method's performance. For example, in Section 4.2: Lighting (color and contrast), the only comparison made is with vanilla CLIP. Additionally, while the authors mention that their method is not superior to supervised approaches, they do not provide quantitative results or scores to explicitly assess the difference between their method and the supervised approaches. Theoretical Claims: There are no formal proofs in this paper, but the approach is grounded in empirical design choices with inspiration from prior studies. Experimental Designs Or Analyses: As reported in the *Methods And Evaluation Criteria* section, additional comparison would improve the experimental design of the paper. Supplementary Material: I had a quick look at the additional experiments in the appendix, while the code is not provided. 
Relation To Broader Scientific Literature: The key contribution of this paper is to provide a zero-shot framework for achieving approximate invariance to complex transformations such as lighting changes and 3D viewpoint shifts. This approach builds upon existing work, distinguishing itself by using energy functions derived from CLIP and Stable Diffusion to evaluate transformations, without requiring additional training or task-specific fine-tuning. Additionally, the paper relates to the broader work on equivariant architectures, providing evidence that FoCAL can handle complex transformations. FoCAL leverages pre-trained models and applies a ranking mechanism to select the canonical version of transformed inputs, thus offering a more generalizable solution, making the method particularly useful for large-scale, real-world applications where transformation invariance is needed without extensive retraining. Essential References Not Discussed: I don't know any additional essential references. Other Strengths And Weaknesses: The paper is well-written and easy to follow, with clear explanations of the key ideas. The figures are self-explanatory and complement the text, helping to further clarify the proposed approach. Additionally, the authors provide a thoughtful discussion of the method’s limitations and potential future work, which adds transparency to the study. However, one weakness is the lack of comparisons with existing data augmentation methods and equivariant architectures for transformations like rotation. A direct comparison with these methods would help highlight the advantages and limitations of the proposed approach and could strengthen the paper’s effectiveness. Other Comments Or Suggestions: Figure 4 is never cited in the paper. 
Questions For Authors: * Q1: I understand that FoCAL addresses problems that other methods struggle with (e.g., rare classes for data augmentation and complex real-world transformations for equivariant architectures), but how does it perform in settings where these methods are typically used and perform well? * Q2: How does the method compare with the one proposed in [1]? * Q3: In lines 317-318, it is mentioned that FoCAL does not surpass supervised approaches (e.g., Barron & Tsai, 2017; Hernandez-Juarez et al., 2020), but no results are provided. Could the authors provide a comparison with these methods? ----- [1] Mondal, Arnab Kumar, et al. "Equivariant adaptation of large pretrained models." Advances in Neural Information Processing Systems 36 (2023): 50293-50309. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We are glad you found value in our idea and we hope FOCAL can be a key step towards a zero-shot approach towards a wide range of visual invariance. We address your concerns and questions below: - **Clarification question**: By DA, are you referring to data augmentation or domain adaptation? We use DA for data augmentation in our paper (line 50, right column). Apologies for the confusion. We are not sure what an appropriate comparison against domain adaptation would be. Thus, the rest of this response assumes data augmentation, but if you meant domain adaptation, please clarify what you have in mind. - **Comparisons on 2D rotation**: Our PRLC [1] experiments (Figure 7 in the main paper, Table 1 in the appendix) indeed show a comparison against [1] as well as against data augmentation. - The experimental setting in Table 1 is ideal for data augmentation because it uses balanced datasets (e.g., CIFAR10) with known augmentations (2D rotation). On these datasets (CIFAR10, CIFAR100, STL10), we beat both data augmentation and PRLC [1] baselines. - We don’t show explicit comparison against equivariant architectures because we could not find ViT/R50 scale baselines that could be compared fairly, but we would be happy to evaluate any baselines that you suggest. - We do have an indirect comparison against equivariant nets through [1] because it uses an equivariant net to canonicalize the images. [1] https://proceedings.neurips.cc/paper_files/paper/2023/file/9d5856318032ef3630cb580f4e24f823-Paper-Conference.pdf - **“Figure 4 is never cited in the paper.”** - Thanks! Will fix. - **“In lines 317-318, it is mentioned that FoCAL does not surpass supervised approaches (e.g., Barron & Tsai, 2017; Hernandez-Juarez et al., 2020), but no results are provided. Could the authors provide a comparison with these methods?”** - That’s a fair point. 
On the RCC (Recommended Color Checker) dataset, Barron & Tsai (2017) achieve a median angle error of 1.3 degrees. The Gray World baseline achieves a median angle error of 9.97 degrees. Ours achieves a median angle error of 7.3 degrees. When it comes to classification, we find that the classification accuracy differences are minute (<0.5%). - We will add a more detailed comparison in the paper. We will also try to find supervised baselines for the contrast transformation (or train such baselines ourselves) and add those to the paper as well. --- Rebuttal Comment 1.1: Comment: I thank the authors for addressing all my concerns. I also apologize for the confusion regarding the DA. I have adjusted my review accordingly and have raised the score.
Summary: The paper proposes to construct an energy measure from outputs of pretrained foundation models. This energy can be minimized over transformations of the input image to obtain a canonicalization of the image. The minimization can be done through Bayesian optimization. The experimental results are good. Claims And Evidence: The authors write > As a bonus, this approach is fully complementary to training-time approaches like DA. If the foundation models were trained to be invariant to the considered transformations, they couldn't be used to determine which transformation is the canonical one. If the downstream model is trained to be invariant, canonicalization is not required. # Post-rebuttal: Regarding the compatibility with data augmentation – it may be true that the model becomes more invariant by further adding a canonicalization, but in the balanced case it will not become better on average, since the downstream model does not have a preferred orientation if it is trained with data augmentation. Methods And Evaluation Criteria: The evaluations make sense. Theoretical Claims: N/A Experimental Designs Or Analyses: The paper makes a strong argument for using foundation models for canonicalization, but a large part of the argument is the alleviation of having to train a canonicalization network. In other words, this is an argument that is based on reduced computational requirements. However, since foundation models are quite heavy to run and several transformations have to be tested in a Bayesian optimisation scheme, it seems like the method is heavier than prior work at inference time. This is mentioned in Limitation (1), but the paper should include explicit numbers on compute to give the reader a proper sense of the trade-offs involved. # Post-rebuttal: I am satisfied with the addition of computational requirements to the paper. Supplementary Material: I skimmed the SM. It contains comparisons to TTA. 
It is unclear if those comparisons are at an equal compute budget. Relation To Broader Scientific Literature: The paper shows that the proposed method works. This has not been demonstrated earlier. Essential References Not Discussed: Kaba et al. were not the first to propose canonicalization by minimizing an energy score, see for instance Boominathan, Lokesh, Suraj Srinivas, and R. Venkatesh Babu. "Compensating for large in-plane rotations in natural images." Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing. 2016. Other Strengths And Weaknesses: The paper is generally well-written and clear. Other Comments Or Suggestions: N/A Questions For Authors: 1. What is the computational cost of the method? 2. In Figure 1(b), is the left-most example an output from the model? I.e. does it produce a background-free teddy bear from an image with background? Including the computational cost in the paper would lead me to raise my score if no critical concerns are shown in other reviews. # Post-rebuttal: I am satisfied with the addition of computational requirements to the paper. Thus I will raise my score to Accept as indicated. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We are glad you found our experimental results to be good and we hope FOCAL can be a key step towards a zero-shot approach towards a wide range of visual invariance. We respond to your concerns and questions below: - **“If the downstream model is trained to be invariant, canonicalization is not required.”** - The difficulty lies in practically achieving a general form of invariance just with data augmentation. Data augmentation still has limitations on long-tailed data [1], generalizing to new transformations, or generalizing outside the trained range. Thus, even if a downstream model is trained with data augmentation, our method can still complement training-time data augmentation by improving invariance in such cases. As an example, in our 2D rotation experiments, FOCAL increases the model’s invariance despite it being trained with data augmentation: even on a balanced CIFAR10 dataset setting, we find that the EKLD with FOCAL+DA is over 10x lower than just DA (0.0053 vs. 0.056). [1] https://arxiv.org/abs/2203.09739 - **“but a large part of the argument is the alleviation of having to train a canonicalization network. In other words, this is an argument that is based on reduced computational requirements … paper should include explicit numbers on compute to give the reader a proper sense of the trade-offs involved.”** - We fully agree that it is important to discuss runtime costs, and will add that analysis to the paper. Please see our “How fast does the proposed method run?...” response to XwyL for details. - To clarify, our biggest claimed benefit is not runtime but rather generalization. - Consequently, our method outperforms PRLC’s specialized canonicalizers not only on their trained settings (Figure 7, Table 1) but also on new datasets (e.g, ImageNet) and downstream models (e.g., CLIP). 
In addition, our approach also works for more complex transformations than previously explored with canonicalization (lighting, 3D, day-night, active vision). - If runtime is a priority and training a canonicalizer is acceptable, our method can still be useful as a source of supervision. Please see our “Distilling FOCAL energy into a cheaper (or one-shot) EBM” response to XwyL. - **“It contains comparisons to TTA. It is unclear if those comparisons are at an equal compute budget.”** - Yes, this comparison uses an equal compute budget. - **“Boominathan, Lokesh, Suraj Srinivas, and R. Venkatesh Babu. ‘Compensating for large in-plane rotations in natural images.’ Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing. 2016.”** - Thanks! We agree this is a useful citation and will add it to the paper. - **“What is the computational cost of the method?”** - See “How fast does the proposed method run?...” response to XwyL above - **“In Figure 1(b), is the left-most example an output from the model? I.e. does it produce a background-free teddy bear from an image with background?”** - The left-most example is an output from TRELLIS [2], the 3D generator we used. TRELLIS removes the background as part of its pipeline, as TRELLIS is trained on background removed images. [2] https://arxiv.org/abs/2412.01506
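The select-the-minimum-energy-candidate scheme defended in this rebuttal can be sketched in a few lines (a toy exhaustive search rather than the paper's Bayesian optimization; `energy_fn` stands in for the CLIP/diffusion energies, and all names here are ours):

```python
import numpy as np

def canonicalize(image, transforms, energy_fn):
    """Apply each candidate transform and return the lowest-energy result.

    Only the ranking of candidates matters: the scheme works as long as
    in-distribution images receive lower energy than out-of-distribution
    ones, even if absolute energies for OOD images are poorly behaved.
    """
    candidates = [t(image) for t in transforms]
    energies = [energy_fn(c) for c in candidates]
    return candidates[int(np.argmin(energies))]
```

In the paper's setting the transformation space is continuous (viewpoints, lighting), which is why Bayesian optimization replaces the exhaustive loop above.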
Summary: The paper introduces FOCAL, a zero-shot framework designed to achieve invariant perception at test-time using pre-trained foundation models like CLIP and Stable Diffusion. FOCAL generates transformed versions of input images and selects a canonical version by minimizing an energy function derived from these models, requiring no additional training or architectural changes. Claims And Evidence: The authors claim that vision foundation models, such as CLIP, perform poorly on direct classification tasks when faced with viewpoint, color, or lighting changes. But they exhibit strong performance in ranking (**the ability to select the most plausible (canonical) image from among multiple candidate images** via energy-based forms) multiple transformed images by their probability of belonging to the dataset distribution. They propose leveraging this capability to canonicalize images, improving classification accuracy. However, the paper lacks an analytical explanation regarding why foundation models, despite their reduced direct classification accuracy under transformations, possess robust and accurate ranking abilities, even without finetuning on downstream tasks. For instance, Figure 5 suggests CLIP inherently identifies canonical viewpoints even without task-specific fine-tuning, outperforming a fine-tuned version in terms of viewpoint robustness. The paper provides strong experimental results but does not sufficiently explore or explain the underlying mechanisms or reasons behind this key capability. Methods And Evaluation Criteria: As acknowledged by the authors, estimating the canonical form for every image involves multiple transformations, necessitating the use of Gaussian processes for efficient search. Despite using GP, the proposed approach still likely demands significantly more computational resources. 
Furthermore, in realistic scenarios, images typically undergo multiple combined transformations rather than isolated ones (e.g., viewpoint + color + contrast + active vision). While evaluating single transformations separately is understandable for methodological clarity, this raises concerns that employing Bayesian optimization to search through a more complex and realistic transformation space could become increasingly infeasible. Additionally, Figure 5 shows that finetuned CLIP performs better for samples with high viewpoint rankings (easy samples). How can we determine, given a specific sample image, whether naive finetuned CLIP or FoCAL would yield better performance? Theoretical Claims: There are no theoretical claims to discuss. Experimental Designs Or Analyses: What is the inference time of the proposed method compared to baselines such as finetuned CLIP, TTA, and PRLC? Supplementary Material: No supplementary material (only appendix). Relation To Broader Scientific Literature: The proposed model could be used in the future to enhance the test-time performance of foundation models. Essential References Not Discussed: None. Other Strengths And Weaknesses: None. Other Comments Or Suggestions: None. Questions For Authors: In Section B.3, the weighting between the diffusion and CLIP energy terms is currently set via hyperparameters. Could the authors clarify under which circumstances one energy term (diffusion or CLIP) is more critical than the other, and what specific roles each model plays in different canonicalization scenarios? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your insightful feedback. We are glad you recognize our empirical strengths in canonicalizing images on a range of transformations, and we hope FOCAL can be a key step towards a zero-shot approach. We respond to your concerns and questions below: - **“The paper lacks an analytical explanation regarding why foundation models, despite their reduced direct classification accuracy under transformations, possess robust and accurate ranking abilities, even without finetuning on downstream tasks.”** - That’s a good point, and we will add a more detailed explanation about this in section 4 as outlined below. - Specifically, we assume: (1) there is at least one in-distribution image in the set of transformed images, and (2) the foundation models can be used as a prior where ID images have lower energy than OOD. This has been shown/used previously by [1, 2]. [1] https://arxiv.org/abs/2206.09012 [2] https://arxiv.org/abs/1912.03263 - If these two assumptions hold for a given sample, the energy minimization scheme returns an image that is in-distribution. Importantly, this scheme only requires the foundation models to distinguish between ID vs OOD images. Even if the classification and energy values for OOD images are not well-behaved, our scheme still works as long as in-distribution images have lower energy than out-of-distribution images. We acknowledge that the limits of foundation models to serve as image priors are still not fully understood. There may be transformations or OOD images for which assumption 2 may not hold. A rigorous understanding of exactly when foundation models work well as image priors is a great future direction. - **“While evaluating single transformations separately is understandable for methodological clarity, this raises concerns that employing Bayesian optimization to search through a more complex and realistic transformation space could become increasingly infeasible.”** - This is a fair question. 
In general, handling multiple and complex transformations remains a difficult open problem. For our approach, we agree that more complex transformations (resulting in a higher-dimensional optimization problem) would likely lead to higher sample complexity, which would make the optimization more difficult. - However, Bayesian optimization and gradient descent for latent space optimization have been successful in other mid-to-high dimensional optimization problems like protein design [3,4,5,6]. We are still figuring out how to best use these techniques for invariance, and we think it could be a great future research direction. We hope our paper serves as a strong foundation to explore this direction and hope to solve it in future work. [3]: https://arxiv.org/abs/2006.09191 [4]: https://arxiv.org/abs/2201.11872 [5]: https://arxiv.org/abs/1610.02415 [6]: https://www.nature.com/articles/s42256-022-00532-1 - **“Additionally, Figure 5 shows that finetuned CLIP performs better for samples with high viewpoint rankings (easy samples). How can we determine, given a specific sample image, whether naive finetuned CLIP or FoCAL would yield better performance?”** - Great question! Usually, the more in-distribution/upright an image is, the more likely it is that finetuned CLIP will do better. As a heuristic: if the given image is in a local minimum of the energy function, FOCAL will likely not be helpful. We used this to create a decision rule for 2D rotations that allows us to skip canonicalization where it is unnecessary with 95% accuracy (see “only using canonicalization when necessary” in our response to Xwyl above). We will add these results to the appendix. - **“Could the authors clarify under which circumstances one energy term (diffusion or CLIP) is more critical than the other, and what specific roles each model plays in different canonicalization scenarios?”** - Also a great question. 
We don’t have a general theoretical answer, but we have run some ablations (e.g., Table 4 in the appendix). In our experience, CLIP is more important when there is a clearly defined object in the image with an easily defined caption (e.g., Objaverse classification). Diffusion helps much more for scenes with many objects (e.g., in segmentation) and visual structures (e.g., edges). We currently find the best combination via hyperparameter optimization, but hope to have a more general approach in future work. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' rebuttal. I will keep the positive score.
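The "vary and rank" scheme at the center of this review thread reduces to an argmin of an energy function over candidate inverse transforms. Below is a minimal runnable sketch with a stand-in energy; FOCAL's actual CLIP and Stable Diffusion energies are not reproduced here, and the brightness-gradient "prior" is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(img):
    # Stand-in for a foundation-model energy: here, images whose brightness
    # decreases top-to-bottom are treated as "in-distribution" (lowest energy).
    rows = img.mean(axis=1)
    return float(np.corrcoef(rows, np.arange(len(rows)))[0, 1])

upright = np.linspace(1.0, 0.0, 16)[:, None] * np.ones((16, 16))  # bright top
upright += 0.05 * rng.normal(size=upright.shape)                  # mild noise
observed = np.rot90(upright, k=3)          # unknown test-time rotation

# "Vary": enumerate candidate inverse transforms; "rank": pick lowest energy.
energies = [energy(np.rot90(observed, k)) for k in range(4)]
best_k = int(np.argmin(energies))
print(best_k)  # k = 1 undoes the rotation here
```

Note that assumption 1 from the rebuttal (at least one in-distribution candidate) holds by construction: one of the four candidates is the upright image.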
Summary: This paper introduces Foundation-model guided Canonicalization (FOCAL), a novel zero-shot framework designed to enhance the invariance of vision models to various transformations by test-time optimization. The method generates candidate transformations (e.g., 3D novel views) and selects a canonical version by optimizing an energy function derived from the foundation models like CLIP and Stable Diffusion. The authors demonstrate the effectiveness of FOCAL across 2D/3D rotations, illumination shifts, and day-night changes, showing improved robustness for models like CLIP and SAM without requiring any training or architectural modifications. The work also explores applications in active vision. Claims And Evidence: - FOCAL improves robustness to various transformations in a zero-shot manner. - The experiments presented in Section 4 provide empirical support for this claim, where they show improved performance on viewpoint shifts, illumination changes, 2D rotations, and day-night transformations. - FOCAL outperforms or matches task-specific canonicalizers like PRLC in their trained settings, despite being zero-shot. - The 2D rotation experiments in Section 4.3 and Figure 7 support this claim. FOCAL matches or surpasses PRLC's performance on 2D rotation tasks and demonstrates better generalization on ImageNet. - The proposed approach is fully complementary to training-time approaches like DA. - To make this claim, don’t they need to show that FOCAL + DA outperforms FOCAL alone? But I couldn’t find any such experiment. Methods And Evaluation Criteria: - The proposed method is well-motivated and clearly explained. The "vary and rank" scheme is intuitive. The energy functions derived from CLIP and Stable Diffusion seem appropriate. - The evaluation criteria are suitable for assessing the method's effectiveness. 
The choice of datasets (ImageNet, CIFAR, Objaverse-LVIS, CO3D) and transformations (viewpoint shifts, illumination changes, 2D rotations, day-night) is comprehensive and relevant. Theoretical Claims: - No proof provided. Experimental Designs Or Analyses: - The experiments described in Section 4 are comprehensive (e.g., viewpoint invariance, color and contrast, 2D rotation, day-night transformation, active vision), and for each task, they use multiple datasets to verify the results. - Hyperparameters for the energy functions are explained in Section B.3. Supplementary Material: - Not in detail. Relation To Broader Scientific Literature: - The authors clearly discuss the limitations of data augmentation and equivariant networks, highlighting the challenges they face in open-ended scenarios. - The paper builds upon the theoretical foundations laid by Kaba et al. (2022) and leverages the energy-based model perspective from Grathwohl et al. (2019) and the use of diffusion models as priors from Graikos et al. (2022). Essential References Not Discussed: - No major concerns here. Other Strengths And Weaknesses: - I think their idea of using foundation models to optimize the energy function for guiding canonicalization is clever, and the experiments demonstrate its strong performance compared to previous approaches. Other Comments Or Suggestions: - No major typos. Questions For Authors: - How fast does the proposed method run? The paper states that "evaluating the energy function for many candidates is computationally expensive," but I think providing a more concrete runtime would be more helpful for the audience. - On a related note, could you elaborate on potential strategies for improving the computational efficiency of FOCAL? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. We are glad you found our use of foundation models for optimizing the energy function clever, and we hope FOCAL can be a key step toward a zero-shot approach to a wide range of visual invariances. We address your concerns and questions below: * **“‘The proposed approach is fully complementary to training-time approaches like DA.’ To make this claim, don’t they need to show that FOCAL + DA outperforms FOCAL alone? But I couldn’t find any such experiment.”** - Good point. We find that FOCAL+DA indeed outperforms either FOCAL or DA by themselves, and will add these results to the main paper. As an example, we find that on a ResNet32 trained on CIFAR10, the C8 rotated accuracy is 72% with just DA, 72.1% with just FOCAL, and 73.2% with FOCAL+DA. * **“How fast does the proposed method run? The paper states that ‘evaluating the energy function for many candidates is computationally expensive,’ but I think providing a more concrete runtime would be more helpful for the audience.”** - Good question. Our paper focuses on generalization rather than efficiency, but we agree that computational cost is important, and will include it in the paper. Our method’s cost is: (# of transforms evaluated) X (Cost of transforming + Cost of evaluating [CLIP + Diffusion model] + Cost of inference). **Example**: Consider 2D rotation (with 8 rotations around the circle). When using CLIP as the downstream model and 5 diffusion steps, we must evaluate the energy for each of the 8 rotations. Each candidate costs roughly 7 forward passes (1 CLIP evaluation + 5 diffusion steps + 1 inference), so our method requires roughly 8 x 7 = 56x more FLOPs compared to standard inference. As for latency, our procedure is highly parallelizable, and the latency can be close to standard inference latency. As another point of comparison, consider a test-time-augmentation (TTA) strategy that just averages the outputs. Such a strategy would use 8x the FLOPs of the naive classifier. Thus, in comparison to TTA, our approach is 7x more expensive in FLOPs. 
We will add a more detailed runtime analysis in the camera-ready paper, and a table detailing the FLOPs and runtime for each experiment (in seconds). * **“Could you elaborate on potential strategies for improving the computational efficiency of FOCAL?”** - **Only using canonicalization when necessary**: Similar to the “mental rotation” [1] phenomenon in humans, where we classify familiar poses quickly but go through a slow mental uprighting process to classify unfamiliar poses, one approach is to apply canonicalization selectively. We have preliminary results for an approach that may significantly reduce amortized computational complexity by skipping the canonicalization when unnecessary. For example, for 2D rotations, we can skip the canonicalization by comparing the image’s CLIP energy against +90 and -90 degree rotations, and thresholding the energy difference. We can use this simple classifier to detect upright vs. non-upright with 95% accuracy. In summary, with only 3 CLIP inferences, we can detect whether to do canonicalization with 95% accuracy. If the extreme rotations are rare (as is typical in the real world), this can bring significant computational savings and, on average, be even cheaper than TTA. [1] https://psycnet.apa.org/record/1971-28060-001 - **Fewer diffusion steps**: The majority of our computational cost comes from the diffusion energy, especially because it uses multiple steps. Diffusion classifier [2] has done an extensive analysis of efficient diffusion step schedules that can be used to rank inputs (in the context of classification). We didn’t explore these schedules since our focus was to show that CLIP and SD can be used for invariance to diverse and complex transformations, but leveraging efficient diffusion step schedules can be a promising future direction. 
[2] https://arxiv.org/pdf/2303.16203 - **Distilling FOCAL energy into a cheaper (or one-shot) EBM**: Since the desired output is only a scalar energy value, it might be possible to distill CLIP and SD’s energy function into a smaller single-shot EBM like [3]. This is yet another promising future research direction. [3]: https://implicit-pdf.github.io/
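The FLOPs accounting sketched in the rebuttal can be reproduced with simple arithmetic; the per-candidate breakdown below (one CLIP pass, five diffusion steps, one inference pass) is our reading of the cost formula given there:

```python
# Cost formula from the rebuttal:
# (# transforms) x (cost of transforming + cost of [CLIP + diffusion] + cost of inference)
n_transforms = 8       # C8 rotations around the circle
clip_passes = 1        # one CLIP energy evaluation per candidate
diffusion_passes = 5   # five diffusion steps per candidate
inference_passes = 1   # one downstream inference per candidate

focal_cost = n_transforms * (clip_passes + diffusion_passes + inference_passes)
tta_cost = n_transforms * inference_passes  # TTA: one forward pass per transform

print(focal_cost)             # 56x standard inference
print(focal_cost / tta_cost)  # 7x the cost of TTA
```

The unit here is one forward pass through a model of comparable size, matching the rebuttal's rough FLOPs comparison.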
Direct Prediction Set Minimization via Bilevel Conformal Classifier Training
Accept (poster)
Summary: This paper introduces a conformal training algorithm called Direct Prediction sEt Minimization (DPSM). Existing training methods suffer from a learning bound that depends on the batch size. DPSM is formulated as a bilevel optimization that minimizes the prediction set size (upper level) conditioned on the learned quantile of conformity scores (lower level). The paper introduces an algorithm to solve the DPSM objective and shows that the learning bound then depends on the number of training samples instead. Claims And Evidence: * Theoretical * The theoretical claims in Theorem 4.1 show that the learning bound is improved, which is standard ML generalization analysis. * However, no theoretical results on the convergence of bilevel optimization are obtained, as the authors pose it as an open challenge. * Numerical * DPSM is shown to outperform other conformal training methods with a significant reduction in prediction set sizes. It also has better predictive efficiency. * DPSM is shown to converge effectively. * Assumptions 3.1 and 3.2 are empirically validated. * The numerical experiments are not very extensive in terms of datasets. Also, 10 trials is somewhat few. Visualizations with confidence intervals could better illustrate the robustness of algorithm performance. Methods And Evaluation Criteria: The proposed method to resolve the dependence of the learning bound on batch size is principled and well explained. The evaluation metrics of coverage guarantee and prediction set sizes make sense. Theoretical Claims: The proof of Theorem 4.1 appears to be correct. Experimental Designs Or Analyses: The experimental design makes sense but could benefit from a more extensive comparison on a wider range of datasets. Supplementary Material: Proof of Theorem 4.1 in Appendix C. 
Relation To Broader Scientific Literature: Improving conformal training will contribute to the active field of uncertainty quantification, with meaningful implications for high-stakes applications. Essential References Not Discussed: N/A Other Strengths And Weaknesses: N/A Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
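The "soft set size" referred to in this review thread is a differentiable surrogate for the prediction set size. A minimal sketch, assuming the common construction of replacing the hard indicator $\mathbb{1}[S(x,y) \le q]$ with a temperature-scaled sigmoid (the temperature name `tau` is illustrative):

```python
import numpy as np

def soft_set_size(scores, q, tau=0.1):
    # Differentiable surrogate for sum_y 1[S(x, y) <= q]:
    # the hard indicator becomes a temperature-scaled sigmoid,
    # so the set size is differentiable in both scores and q.
    return 1.0 / (1.0 + np.exp(-(q - scores) / tau))

scores = np.array([0.05, 0.30, 0.60, 0.95])  # nonconformity scores for 4 labels
q = 0.5                                      # candidate quantile / threshold
hard = int((scores <= q).sum())              # hard set size
soft = float(soft_set_size(scores, q).sum()) # smooth approximation
print(hard, round(soft, 2))
```

As `tau` shrinks, the soft size approaches the hard count; the smooth version is what gradient-based conformal training can optimize.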
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful review. We provide an **external PDF** [(click this link to access)](https://anonymousicml.tiiny.site) to present the empirical results needed to answer some of your questions. **Q1: No theoretical results on the convergence of bilevel optimization are obtained as the authors pose it as an open challenge.** A1: As we discussed in Section 2 and 4.2, in the current bilevel optimization literature, the convergence guarantee requires some restrictive conditions that do not hold in our problem. E.g., implicit gradient methods require a twice differentiable and strongly convex lower-level function. Iterative differentiation methods require an iterative subroutine for the lower-level problem with more computational cost. Penalty-based methods typically require continuously differentiable upper and lower-level functions, but our lower-level QR loss is nonsmooth. Although we do not provide a theoretical convergence result for our simple stochastic gradient method, we link it to a most relevant bilevel study by showing that the lower-level QR loss satisfies the Hölderian error bound condition (Lemma 4.5), which, to some extent, justifies that using the simple first-order stochastic gradient updates in our algorithm can possibly achieve the $\epsilon$-optimal solution for the original bilevel problem (Corollary 4.6). Finding an affirmative answer to this challenge is part of our future work. Additionally, we empirically verify convergence behavior in experiments: Figure 1(a) and 1(b): upper and lower loss. Figure 1(c): error of the learned quantile in DPSM vs. batch quantile in ConfTr. Figure 1(d): convergence of the conformal loss (average soft set size) in DPSM vs. ConfTr. We further plot the convergence of optimization error to 0 in **Fig 1(b) and (c) of external PDF** (See also A1 to Reviewer j13f). These empirical results demonstrate the stable convergence of DPSM for the bilevel problem. 
**Q2: The numerical experiments are not very extensive in terms of datasets.** A2: We would like to highlight that conformal training is expensive in practice, and our experiments are extensive compared with the existing conformal training literature. We will try to add results with more datasets in the final paper. **Why is conformal training expensive?** (i) It requires fine-tuning several hyperparameters (e.g., learning rates, batch size, regularization parameter, $\tau_{\text{sigmoid}}$, etc.) and selecting appropriate options (backbone model, conformal scoring function, etc.) from both ML and CP (see Appendix E). (ii) Typically, it requires additional operations such as computing and sorting conformity scores in each iteration, which is more expensive than standard SGD-based training of deep models. **Comparing our datasets with the prior conformal training literature:** We conducted experiments on three widely-used benchmark datasets (CIFAR-100, Caltech-101 and iNaturalist) that span diverse difficulty levels and scales, e.g., 341 classes of data with 224x224 resolution for iNaturalist (see Table 2). In contrast, prior conformal training papers [r1, r2, r3, r4] worked on MNIST, CIFAR-10, or CIFAR-100, which are smaller than ours, e.g., up to 100 classes of data with 32x32 resolution for CIFAR-100. **Q3: The number of trials 10 is a bit few. Visualizations with confidence intervals could be better** A3: Our experiments followed common practice in the CP and conformal training literature [r1, r2, r5, r6], where 10 random splits are widely adopted and considered sufficient to evaluate algorithm performance (in our Table 1). To study the robustness of DPSM, we show additional experiments with 20 trials on CIFAR-100 using the HPS score in the table below. 
| Methods | CE | CUT | ConfTr | **DPSM** |
|-|-|-|-|-|
| DenseNet |$2.59\pm 0.075$|$2.28 \pm 0.070$|$2.27 \pm 0.054$|$2.17 \pm 0.047$|
| ResNet |$3.39\pm 0.13$|$3.00 \pm 0.71$|$3.77 \pm 0.10$|$2.94 \pm 0.072$|

These results are very similar to those obtained with 10 trials (Table 1), confirming that the improvement of DPSM in predictive efficiency is robust. Additionally, we plot the visualizations with confidence intervals to further illustrate the stability and robustness of DPSM in **Fig 5 in the external PDF**. We will include results with more trials in the final paper. [r1] Sharma et al. PAC-Bayes generalization certificates for learned inductive conformal prediction. NeurIPS 2023 [r2] Correia et al. An information theoretic perspective on conformal prediction. NeurIPS 2024 [r3] Einbinder et al. Training uncertainty-aware classifiers with conformalized deep learning. NeurIPS 2022 [r4] Stutz et al. Learning Optimal Conformal Classifiers. ICLR 2022 [r5] Ding et al. Class-conditional conformal prediction with many classes. NeurIPS 2024 [r6] Huang et al. Conformal Prediction for Deep Classifier via Label Ranking. ICML 2024 --- Rebuttal Comment 1.1: Comment: I appreciate the authors' thoughtful response. I am maintaining my positive score.
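The lower-level step discussed throughout this thread, learning a quantile by stochastic (sub)gradient descent on the pinball loss, can be illustrated independently of any model. This is a toy sketch, not the authors' Algorithm 1:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(size=20_000)  # stand-in conformity scores on a training set
alpha = 0.1
tau = 1 - alpha                    # quantile level to learn

q = 0.5                            # learnable quantile, arbitrary initialization
for t in range(50_000):
    s = scores[rng.integers(len(scores))]
    # subgradient of the pinball loss tau*max(s - q, 0) + (1 - tau)*max(q - s, 0)
    grad = -tau if s > q else (1 - tau)
    q -= grad / (t + 10)           # Robbins-Monro step size

print(round(q, 2))                 # close to the 0.9 empirical quantile
```

The fixed point of these updates is the $(1-\alpha)$-quantile of the score distribution, which is why a simple first-order scheme suffices for the lower level.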
Summary: This paper introduces a novel training strategy for deep classifiers, leveraging insights from conformal prediction. Specifically, it proposes a new training loss function that is formulated as the sum of two components: a conventional loss function—aiming to enhance predictive accuracy—and a (regularized) conformal alignment loss designed to reduce the size of the conformal prediction set. Compared to the existing work [1], which had the same idea but calculated the conformal alignment loss using minibatch data and an empirical quantile derived from the minibatch, this work parameterizes the quantile as a variable and proposes a simultaneous optimization of both the model weights and the quantile. This is achieved by formulating the problem as a bilevel optimization task: at the upper level, the objective is to minimize the sum of the conventional loss and the conformal alignment loss, while at the lower level, the goal is to estimate the $(1-\alpha)$-th quantile of the nonconformity score distribution. The authors demonstrate that, under certain assumptions, their proposed conformal alignment loss—parameterized by both the model and the quantile—achieves an improved learning bound for the conformal alignment loss. Specifically, they achieve an $O(1/\sqrt{n})$ upper bound, where $n$ is the training sample size, while the existing works [1,2] only achieve an upper bound of $O(1/\sqrt{b})$, where $b \ll n$ represents the minibatch sample size. [1] "Learning Optimal Conformal Classifiers", Stutz et al., 2022. [2] "Training Uncertainty-Aware Classifiers with Conformalized Deep Learning", Einbinder et al., 2022. Claims And Evidence: The authors assert that (1) the proposed algorithm achieves an improved learning bound compared to existing methods, and (2) their approach empirically results in smaller conformal prediction set sizes. Both claims are substantiated through theoretical justifications and empirical evidence from numerical experiments. 
Methods And Evaluation Criteria: The datasets used for evaluating their proposed method and the benchmark methods selected for comparison are appropriate. Regarding the evaluation criteria, the authors employ marginal coverage and average set size, both of which are standard metrics in the conformal prediction literature. However, this paper does not evaluate conditional coverage, which may be essential for fully assessing the improvements of an uncertainty-aware classifier trained with the proposed loss function. A more detailed discussion on this point can be found in the Other Strengths and Weaknesses section. Theoretical Claims: Yes, I have reviewed the proof of Theorem 4.1 and did not notice any apparent issues. Experimental Designs Or Analyses: Yes, I have reviewed the experimental setup in Section 5.1 and the corresponding analysis in Section 5.2. Supplementary Material: The main proof of Theorem 4.1. Relation To Broader Scientific Literature: The key contribution of this paper is the introduction of a training loss function that explicitly incorporates the objective of reducing conformal prediction set sizes. The innovation lies in the use of a bi-level optimization framework, which jointly learns both the model and the quantile of nonconformity scores. However, the primary motivation of consistently favoring smaller prediction sets may limit the practical utility and broader significance of the approach, as discussed in detail in the weaknesses section. Essential References Not Discussed: To the best of my knowledge, the most relevant works essential for understanding the paper’s key contribution have been appropriately cited. Other Strengths And Weaknesses: Strengths: The paper is generally well-organized and well-structured. The main message and key contributions of this work are clearly articulated and easy to follow. The idea of simultaneously optimizing the quantile and model weights in the conformal alignment loss is novel. 
Weaknesses: One of my primary concerns regarding this work is the underlying motivation—specifically, the emphasis on minimizing the prediction set size as the main objective, irrespective of the characteristics of the underlying data. In uncertainty quantification, the goal is to accurately capture and represent the uncertainty associated with predictions. However, reducing the prediction set size alone does not necessarily equate to improved uncertainty quantification. For example, in practical scenarios, datasets often contain a mix of both easy-to-classify and hard-to-classify samples. Ideally, an uncertainty quantification method should be able to distinguish between these cases—producing small prediction sets for easy samples while allowing larger prediction sets for more challenging samples to reflect their inherent difficulty. Prioritizing the minimization of prediction set size across the board may not align with this fundamental principle. For instance, if 90% of the data consists of easy samples while the remaining 10% are difficult, one could achieve a very low average prediction set size by assigning small sets to the easy samples while returning empty sets for the hard ones. However, such an approach is not particularly practical or useful, as it fails to provide meaningful predictions for more challenging cases. In the context of conformalized training, optimizing for a classifier that solely aims to minimize prediction set size seems less convincing. A more meaningful objective would be to develop a classifier that can offer fair and adaptive predictions, differentiating samples based on their intrinsic difficulty. Indeed, prior work, such as [2], has pointed out that training with the ConfTr loss function, which also seeks to minimize the prediction set size, does not necessarily improve conditional coverage, particularly when the data exhibits heterogeneity. 
That being said, while achieving a smaller average prediction set size can be beneficial as a secondary advantage, it is not entirely convincing to frame this as the primary goal in conformalized training. Instead, a more robust approach would be to balance prediction set size minimization with a framework that ensures fair and informative coverage across different types of samples. [2] "Training Uncertainty-Aware Classifiers with Conformalized Deep Learning", Einbinder et. al., 2022. Other Comments Or Suggestions: In the second line of Theorem 3.5, there is an extra right parenthesis between $(1-\alpha)$ and $(s+1)$. Additionally, $\frac{1}{n}\sum_{i=1}^n \left[ \sum_{y\in \mathcal{Y}} \tilde{1}[S_f(X_i, y) \leq q] \right]$ appears to be repeatedly defined in Equation (9) and the second line of Equation (5), which may lead to potential confusion. Questions For Authors: My apologies if I have overlooked anything, but I found the following two points unclear and would appreciate further clarification from the authors. 1. In the second line of Equation (5), why is $\hat{\ell}(f, q)$ defined as an empirical estimation over all $n$ training samples? Shouldn't $\hat{\ell}(f, q)$ be computed using only the sampled batch? 2. If I understand correctly, the main proposed algorithm, DPSM, aims to solve the optimization problem formulated in Equation (8), where $q$ is estimated by minimizing the average pinball loss over the nonconformity scores of all $n$ training samples. This suggests that the quantile loss is trained on the full batch of training data in each epoch after updating the model $f$. Furthermore, the improved learning bound established in the main theorem also assumes that the quantile function is trained using all $n$ samples. However, in Algorithm 1, the gradient of the quantile loss is computed using mini-batches. Could you clarify the reasoning behind or if I misunderstood? 
Additionally, why is the same subset $\mathcal{D}_1$ used for both the conformal alignment loss and the quantile loss? Would there be any benefit in performing a further data split? Code Of Conduct: Affirmed. Overall Recommendation: 3
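For context on the two metrics this review weighs (marginal coverage and average set size), here is a self-contained split-conformal toy example with a simulated classifier; all names and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
K, n_cal, n_test, alpha = 10, 1000, 1000, 0.1

def simulate(n):
    # stand-in classifier: random logits with the true class upweighted
    y = rng.integers(K, size=n)
    logits = rng.normal(size=(n, K))
    logits[np.arange(n), y] += 2.0
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs, y

p_cal, y_cal = simulate(n_cal)
p_test, y_test = simulate(n_test)

# HPS-style nonconformity score: 1 - predicted probability of the true label
s_cal = 1.0 - p_cal[np.arange(n_cal), y_cal]
k = int(np.ceil((n_cal + 1) * (1 - alpha)))
qhat = np.sort(s_cal)[k - 1]          # conformal quantile of calibration scores

sets = (1.0 - p_test) <= qhat         # label y is in the set iff its score <= qhat
coverage = sets[np.arange(n_test), y_test].mean()
avg_size = sets.sum(axis=1).mean()
print(round(coverage, 3), round(avg_size, 2))  # coverage near 1 - alpha
```

Coverage is guaranteed by exchangeability regardless of the classifier; conformal training methods like DPSM aim to shrink `avg_size` while keeping that guarantee.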
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful review. We provide an **external PDF** [(click this link to access)](https://anonymousicml.tiiny.site) to present the empirical results. **Q1: Evaluation: No conditional coverage** A1: Please refer to A2 for Reviewer aEdV and **Fig 4 in external PDF** for updated conditional coverage measures. DPSM shows improved class-conditional coverage. **Q2. Minimizing average prediction set size (APSS): motivation not convincing since:** 1\) it is a secondary advantage of conformal training. The primary objective is adaptive prediction sets to hardness and conditional coverage across heterogeneous data; 2\) it alone does not imply meaningful UQ. Predictions with small APSS may not be adaptive and practically useful, e.g., small prediction sets for easy inputs and empty sets for hard inputs fail to provide informative predictions on difficult ones. A2: **1) Predictive efficiency is one of the primary desiderata in CP, rather than a secondary advantage.** For practitioners, smaller prediction sets from CP have been found practically useful for improving downstream human decision making [r1, r2]. Therefore, recent work in CP explicitly aims to improve the predictive efficiency of CP [r3, r4]. Others target training the underlying model to encourage predictive efficiency in CP [r5, r6]. Therefore, it is a central and practically important goal. DPSM contributes to this line of work by providing provable improvements in predictive efficiency that are verified by experiments, e.g., a 20% reduction in APSS over the best baseline. **Clarification: DPSM can also be used for conditional coverage.** The idea of DPSM is to learn the quantile, rather than estimating it in batches. It is agnostic to the considered coverage type and thus highly compatible with different notions of conditional coverage. 
For class-conditional coverage, we can use the same principle to learn the quantile for each class and regularize fair distributions of conformity scores over heterogeneous classes. **2) Small prediction sets for easy inputs and empty sets for difficult ones are informative and practically useful.** Under nominal coverage, a small prediction set (e.g., singleton) implies high confidence of the model. On the other hand, an empty prediction set indicates selective rejection, which is attracting increasing attention in large language models, e.g., abstention over unreliable predictions (e.g., hallucination) on difficult inputs [r7, r8]. In addition, the conformity score is a more fine-grained adaptive measure. **Q3: $1/n \sum^n_{i=1}[\sum_{y \in \mathcal Y} \tilde 1 [S_f(X_i, y) \leq q]]$ is repeatedly defined in Eq (9) and (5)** A3: This was intentional because it is a shared part to define the SA and DPSM conformal losses in Eq (5) and (9), respectively (as stated in L247 below Eq (9)). This highlights the key difference between the two conformal losses: the input quantile of the SA conformal loss is from a random batch, while DPSM uses a learnable quantile as input in Eq (9). Their definitions are used in their corresponding learning bound results. **Q4: Why is $\hat \ell (f,q)$ defined over all training samples rather than batch?** A4: **We aim to compare learning bounds for the SA and DPSM conformal losses,** so we ensure their learning error is defined over the entire training set, rather than a random batch. Although we compute the batch-level quantile for the SA method in practice, to make the comparison agnostic to the randomness of batches, we derive an in-effect conformal loss over all training data (Prop 3.4) used for learning bounds (Thm 3.5). **Q5: Why is QR loss defined over all data in Eq (8), but Algo 1 computes batch gradient? Any benefit for this further data split?** A5: An optimal solution of the QR loss over all samples is the ($1-\alpha$)-quantile over all data. 
In Algo 1, we use stochastic gradients to iteratively update the learnable quantile until the QR loss converges. It is analogous to applying SGD to optimize a loss defined over all training samples: iterative SGD updates can train the deep model to convergence. We clarify that $D_1$ is employed to train the classification loss and QR loss, rather than the conformal loss. Data split helps prevent overfitting and has been used in the conformal training literature (see [r9]). [r1] Conformal Prediction Sets Improve Human Decision Making. ICML 2024 [r2] On the Utility of Prediction Sets in Human-AI Teams. IJCAI 2022 [r3] Uncertainty sets for image classifiers using conformal prediction. ICLR 2021 [r4] Conformal Prediction for Deep Classifier via Label Ranking. ICML 2024 [r5] The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks. ICML 2024 [r6] Enhancing Trustworthiness of Graph Neural Networks with Rank-Based Conformal Training. AAAI 2025 [r7] Large language model validity via enhanced conformal prediction methods. NeurIPS 2024 [r8] Conformal language modeling. ICLR 2024 [r9] Training uncertainty-aware classifiers with conformalized deep learning. NeurIPS 2022 --- Rebuttal Comment 1.1: Comment: Thank you to the authors for thoughtfully addressing my previous comments and for providing additional experimental results. These responses addressed my main concerns, and I have accordingly raised my score. I have one remaining question regarding the conditional coverage results: does the dataset used in the experiments exhibit heterogeneity? --- Reply to Comment 1.1.1: Comment: **FQ1: Do the datasets used in the experiments exhibit heterogeneity** FA1: Thank you for raising your score and the thoughtful follow-up question. **Yes, the datasets we used exhibit heterogeneity.** We originally conducted experiments on three widely-used benchmark datasets (Caltech-101, CIFAR-100, and iNaturalist). 
All three datasets exhibit heterogeneity in prediction difficulty, as demonstrated by the following results:

**(i) Standard deviation of prediction set sizes over all testing samples on the three datasets.** To quantify this heterogeneity, we report the standard deviation of prediction set sizes evaluated over all testing data by all methods (using the HPS score with the DenseNet model) as a proxy for variability in prediction uncertainty (their mean values are reported in Table 1 of the original paper):

| Methods | Caltech-101 | CIFAR-100 | iNaturalist |
|-|-|-|-|
| CE |$1.48$|$2.72$|$59.12$|
| CUT |$0.75$|$2.29$|$46.95$|
| ConfTr |$2.99$|$1.86$|$44.35$|
| DPSM |$0.30$|$2.16$|$33.93$|

These non-zero and varying standard deviations indicate sample-level heterogeneity in predictive uncertainty across datasets.

**(ii) Visualization of the datasets' heterogeneity via histograms of sample-level HPS (non-conformity) scores and prediction set sizes.** To further illustrate the sample-level heterogeneity, we provide two additional experiments on the three datasets (Caltech-101, CIFAR-100, iNaturalist) with DenseNet in an additional external PDF [(click this link to access)](https://anonymousicmlfeedback.tiiny.site/). Specifically, **in the first experiment, Figs 1, 2 and 3** show the histograms of HPS scores over all testing samples on the three datasets, in which we compare DPSM with the three baselines (CE, CUT, ConfTr). We find that the sample-level HPS scores of all four methods are distributed across the full range $[0, 1]$ (i.e., the domain of the HPS scores), which highlights the heterogeneity of the datasets. **In the second experiment, Fig 4** shows the histograms of prediction set sizes produced by DPSM over all testing samples on the three datasets.
We find that the sample-level prediction set sizes of DPSM are distributed widely over the domain (a prediction set size is an integer taking values in $\{0, 1, …, K\}$ for $K$ classes), especially for the harder CIFAR-100 and iNaturalist. This figure also demonstrates the heterogeneity of the datasets.
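For completeness, sample-level prediction set sizes of the kind summarized above can be obtained with a few lines of split conformal prediction. The sketch below uses the HPS score on *synthetic* softmax outputs; it is an illustration of the standard recipe, not the paper's code, and all data and labels here are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1  # target miscoverage level

def synthetic_softmax(n, k, rng):
    # Toy softmax outputs; NOT the authors' models or data.
    logits = rng.normal(size=(n, k)) + 2.0 * np.eye(k)[rng.integers(0, k, size=n)]
    p = np.exp(logits)
    return p / p.sum(axis=1, keepdims=True)

K = 10
p_cal = synthetic_softmax(500, K, rng)
y_cal = p_cal.argmax(axis=1)            # stand-in "true" labels
p_test = synthetic_softmax(1000, K, rng)

# HPS non-conformity score: s(x, y) = 1 - softmax probability of y.
cal_scores = 1.0 - p_cal[np.arange(len(y_cal)), y_cal]

# Conformal threshold: finite-sample-corrected (1 - alpha) quantile.
n = len(cal_scores)
q = np.quantile(cal_scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Prediction set for each test input: all labels with score below q.
set_sizes = (1.0 - p_test <= q).sum(axis=1)
print(set_sizes.mean(), set_sizes.std())  # sample-level size mean / std
```

The standard deviation of `set_sizes` is exactly the kind of per-sample variability statistic reported in the table above.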
Summary: This paper introduces the Direct Prediction Set Minimization (DPSM) algorithm, a novel bilevel optimization approach that minimizes prediction set sizes in conformal classifier training with a learning bound of $O(1/\sqrt{n})$, surpassing prior methods limited by $\Omega(1/s)$. Experimental results on benchmark datasets demonstrate a significant 20.46% reduction in prediction set size compared to the best baselines, validating its theoretical and practical effectiveness.

Claims And Evidence:
1. DPSM is a novel conformal training algorithm using bilevel optimization to minimize prediction set size.
2. The proposed method is theoretically robust, as the authors provide a comprehensive theoretical analysis and an upper bound on the learning error.

Methods And Evaluation Criteria: N/A

Theoretical Claims: The theoretical analysis is interesting and solid. The bilevel approach and pinball loss integration make it intriguing, while rigorous proofs and clear assumptions ensure solidity.

Experimental Designs Or Analyses:
1. Assessing the score function with fixed hyper-parameter settings provides little insight; a parameter-tuning approach would more effectively demonstrate DPSM's advantages.
2. Including conditional coverage metrics such as WSC, SSCV, or CovGap is essential to evaluate how DPSM's set size reduction affects conditional coverage.
3. The authors should report model accuracy across different loss functions, as accuracy significantly influences set size: higher accuracy could result in smaller prediction sets, so it needs to be documented.

Supplementary Material: I have checked the supplementary material.

Relation To Broader Scientific Literature: N/A

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions: N/A

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the insightful review. We provide an **external PDF** [(click this link to access)](https://anonymousicml.tiiny.site/) to present the empirical results needed to answer some of your questions.

**Q1: Assessing the score function with fixed hyper-parameter settings provides little insight.**

A1: In our original experiments, we use the score functions (HPS, APS, and RAPS) and set their hyperparameters following the established practice in recent CP and conformal training papers [r1, r2, r3, r4]. Specifically, only RAPS involves explicit hyperparameters, i.e., $\lambda$ and $k_{reg}$, and we used the standard fixed hyperparameter settings commonly adopted in the CP literature, e.g., [r1, r2, r3]. HPS and APS do not have hyperparameters.

To investigate the impact of the RAPS hyperparameters on DPSM, we additionally evaluated the average prediction set size of all conformal training methods with different values of the two RAPS hyperparameters, where the values follow the original setting in the RAPS paper [r5]. We selected the best combination (smallest prediction set size) for each conformal training method and report their average prediction set sizes as follows.

| Methods | CE | CUT | ConfTr | **DPSM** |
|----|------|-------|--|----|
| DenseNet |$2.73\pm 0.043$|$2.14 \pm 0.055$|$2.69 \pm 0.053$|$2.34 \pm 0.055$|
| ResNet |$3.25\pm 0.14$|$2.93 \pm 0.60$|$4.02 \pm 0.11$|$2.93 \pm 0.05$|

These results are the same as those obtained originally with the fixed RAPS hyperparameters (Table 6 of the original paper). We will include the additional results with the tuned RAPS score in the revised paper.
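For reference, the two RAPS hyperparameters $\lambda$ and $k_{reg}$ enter the score as a cumulative-probability term plus a rank penalty. Below is a simplified, *non-randomized* sketch of the RAPS score from [r5] (the randomization term is dropped for clarity; this is an illustration, not the evaluation code used above).

```python
import numpy as np

def raps_score(probs, y, lam=0.01, k_reg=5):
    """Simplified (non-randomized) RAPS non-conformity score.

    probs: length-K softmax vector; y: candidate label index.
    lam and k_reg are the two RAPS hyperparameters discussed above:
    lam penalizes labels ranked beyond the k_reg-th position.
    """
    order = np.argsort(-probs)                   # labels by decreasing prob
    rank = int(np.where(order == y)[0][0]) + 1   # 1-based rank of y
    cum_mass = probs[order][:rank].sum()         # mass up to and including y
    return float(cum_mass + lam * max(0, rank - k_reg))

p = np.array([0.5, 0.2, 0.15, 0.1, 0.05])
print(raps_score(p, 0))                    # top-ranked label: no penalty
print(raps_score(p, 4, lam=0.1, k_reg=2))  # rank 5: penalty 0.1 * (5 - 2)
```

Larger `lam` or smaller `k_reg` makes low-ranked labels more costly to include, which is why tuning these two values changes the resulting set sizes.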
**Q2: Including conditional coverage metrics such as WSC, SSCV, or CovGap is essential.**

A2: To investigate the impact of DPSM's set size reduction on conditional coverage, we report **a table of WSC ($\uparrow$ better), SSCV ($\downarrow$ better) and CovGap ($\downarrow$ better)** for all conformal training methods on CIFAR-100 with the HPS score and the DenseNet backbone below:

| Measures | CE | CUT | ConfTr | **DPSM** |
|------|--------|---------|----------|-------------|
| WSC |$0.88\pm 0.016$|$0.90\pm 0.022$|$0.88\pm 0.018$|$0.89\pm 0.011$|
| SSCV |$0.12\pm 0.024$|$0.09\pm 0.019$|$0.21\pm 0.061$|$0.17\pm 0.034$|
| CovGap |$4.54 \pm 0.49$|$5.20 \pm 0.29$ |$4.56 \pm 0.28$ |$4.43 \pm 0.41$ |

These results show that DPSM achieves the best performance for class-conditional coverage (the smallest CovGap, 2.4% $\downarrow$ over the best baseline, CE). For size-stratified coverage (SSCV), DPSM is worse than CE and CUT, but better than ConfTr. For WSC, CUT and DPSM have comparable performance.

**Visualized class-conditional coverage and prediction set size.** We further plot **Fig 4 in the external PDF** to show the distributions of class-conditional coverage and class-wise average prediction set size as fine-grained measures of class-conditional coverage. DPSM shows slightly more concentrated class-wise coverage and smaller class-wise prediction set sizes. This result is supported by the table above, where CovGap measures the violation of class-conditional coverage and DPSM achieves a 2.4% $\downarrow$ over the best baseline (CE).

**Q3: The authors should report model accuracy across different loss functions.**

A3: We agree that explicitly reporting model accuracy across different training loss functions would provide a more comprehensive understanding of the benefits of DPSM.
Below, we report the testing accuracy on CIFAR-100 of all methods in the following table: | Methods | CE | CUT | ConfTr | **DPSM** | |---------|---------|---------|-----------|------------| | DenseNet |$0.775\pm 0.006$|$0.752 \pm 0.006$|$0.739 \pm 0.006$|$0.751 \pm 0.005$| | ResNet |$0.739\pm 0.005$|$0.701 \pm 0.004$|$0.648 \pm 0.004$|$0.699 \pm 0.005$| These results show that while DPSM achieves slightly lower accuracy than CE, it maintains comparable accuracy to CUT and significantly outperforms ConfTr. Combined with the best performance of DPSM in terms of average prediction set size reported in Table 1 of original paper, DPSM demonstrates an effective balance between model accuracy and predictive efficiency of CP. We will include these accuracy results in the revised paper. [r1] Ding, et al. Class-conditional conformal prediction with many classes. NeurIPS 2023 [r2] Tawachi, et al. Multi-dimensional conformal prediction. ICLR 2025 [r3] Correia, et al. An information theoretic perspective on conformal prediction. NeurIPS 2024. [r4] Lu, et al. Federated conformal predictors for distributed uncertainty quantification. ICML 2023. [r5] Angelopoulos, et al. Uncertainty Sets for Image Classifiers using Conformal Prediction. ICLR 2021
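As a concrete reference for the CovGap metric discussed in Q2/A2 above, here is a minimal sketch of its computation. The boolean `(n, K)` set-membership representation and the scaling to percentage points are assumptions for illustration; it is not the authors' evaluation code.

```python
import numpy as np

def covgap(pred_sets, labels, alpha=0.1):
    """Class-conditional coverage gap, in percentage points.

    pred_sets: boolean (n, K) set-membership matrix; labels: (n,) classes.
    CovGap averages |coverage(class) - (1 - alpha)| over the classes.
    """
    gaps = []
    for c in np.unique(labels):
        mask = labels == c
        cov = pred_sets[mask, c].mean()      # empirical coverage of class c
        gaps.append(abs(cov - (1.0 - alpha)))
    return 100.0 * float(np.mean(gaps))

# Toy check: if every class is always covered, each per-class gap is
# alpha, so CovGap is 100 * alpha = 10 percentage points.
sets = np.ones((6, 3), dtype=bool)
y = np.array([0, 0, 1, 1, 2, 2])
print(covgap(sets, y))
```

A perfectly class-conditionally calibrated predictor would have every per-class coverage equal to $1-\alpha$ and hence a CovGap of 0.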
Summary: This paper proposed a new method, called DPSM, for minimizing the size of the prediction set for conformal prediction (CP) via bi-level optimization. The main idea is to reformulate the quantile estimation in CP as an optimization problem, such that it can be treated as the lower-level optimization, while the set size minimization is the upper-level one. The authors claimed and theoretically showed that such a formulation can improve the learning bound of previous methods that are also devoted to minimizing the prediction set's size. Experimental results demonstrate the effectiveness of the proposed method, in that it reduces the average size of the prediction sets over existing methods in most cases.

## update after rebuttal

I would keep my initial rating after reading the authors' responses.

Claims And Evidence: The main claim by the authors is that the proposed DPSM has a better learning bound with respect to sample complexity than existing methods, which is theoretically verified. Experiments also demonstrate the effectiveness of DPSM. However, a concern is that the learning bound was not empirically tested. Due to the approximation in the optimization, it is a little questionable whether the theoretical bound can indeed be achieved.

Methods And Evaluation Criteria: Since the proposed method focuses on the prediction set's size, the average prediction set size is directly used as the performance measure, which is reasonable. For the method itself, the motivation is clear, and the theoretical results seem good (though I haven't checked the proofs in detail). However, I am a little confused why DPSM can achieve a much better learning bound, since stochastic approximation is also adopted in the DPSM implementation for optimizing the quantile, as in SA-based methods.

Theoretical Claims: I haven't checked all the details of the proofs, but I have a concern about the proof for Theorem 3.5.
Specifically, I notice that the proof for Theorem 3.5 relies on some properties of the Beta distribution, which are only available in the asymptotic sense according to Proposition 3.4.

Experimental Designs Or Analyses: The overall experimental designs and analyses are promising. A small issue:
- In Figures 1 and 3, it seems that the algorithm did not sufficiently converge within 40 epochs. So, I suggest more iterations.

Supplementary Material: I did not fully check the details of the proofs in the supplementary material but looked through the overall logic. I also reviewed the additional experimental results provided.

Relation To Broader Scientific Literature: The existing SA-based conformal training methods [1, 2] compute the empirical batch-level quantile during each training iteration. The proposed DPSM tries to improve the theoretical learning bound and practical effectiveness via a bi-level optimization-based formulation.

[1] Stutz, D., Cemgil, A. T., Doucet, A., et al. Learning optimal conformal classifiers. arXiv preprint arXiv:2110.09192, 2021.
[2] Einbinder, B.-S., Romano, Y., Sesia, M., and Zhou, Y. Training uncertainty-aware classifiers with conformalized deep learning. Advances in Neural Information Processing Systems, 35:22380–22395, 2022.

Essential References Not Discussed: Related references have been properly cited and discussed.

Other Strengths And Weaknesses:

Strengths:
- The bi-level optimization for conformal training is new and inspiring.
- Theoretical analysis is provided to support the reasonableness of the proposed method.
- Experimental results are promising.

Weaknesses:
- The gap between the theory and the implementation of the proposed method was not discussed.

Other Comments Or Suggestions:
- In the caption of Table 3, "above" should be "below".
- Figure 4 shows the assumption verification using different networks but the same conformity score, compared with that in Figure 2. I suggest trying other conformity scores.
Questions For Authors: No further questions. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1:

Rebuttal: We thank the reviewer for the insightful review. We provide an **external PDF** [(click this link to access)](https://anonymousicml.tiiny.site) to present the empirical results needed to answer some of your questions.

**Q1: Learning bound not empirically tested? Questionable if achievable due to optimization approximation?**

A1: The learning bound in Thm 4.1 cannot be empirically computed (the conformal loss ${\mathcal L}_c$ is defined over the population, as per the standard generalization error [r1]). However, we can verify it via a strategy from the ML literature that approximates the generalization error by the absolute gap between the train and test error [r2, r3]. For our CP case, we use the average prediction set size (APSS) evaluated on the train and test sets to approximate our learning bound. Specifically, at iteration $t$, we:
1) compute APSS on the train set using $f_t$ and $q_t$ (as the threshold for CP), denoted by $APSS^{tr}(f_t, q_t)$; it includes the optimization error since $q_t$ is not optimal (i.e., not the true quantile on the train set);
2) compute $APSS^{te}(f_t,q^{te}(f_t))$ on the test data, where $q^{te}(f_t)$ is the true quantile.

Then we compute $|APSS^{tr}(f_t, q_t)-APSS^{te}(f_t, q^{te}(f_t))|$ as an approximation of the learning bound. We follow the same strategy to approximate the learning bound for the SA-based ConfTr, where we use the batch-level quantiles from batches randomly sampled from the train set for the train APSS, and from the test data for the test APSS, respectively. We include this comparison in **Fig 1 (a) in the external PDF** to verify that the approximated learning error is improved (and converges to 0) by DPSM despite the optimization approximation.

**Optimization approximation**: We showed the convergence of the losses and the estimation error of the learned quantile empirically in Fig 1(a)-(c) of the original paper. To investigate how the learned $q_t$ impacts the optimization error on the conformal and QR losses, we compute these two losses with the learned $q_t$ and the optimal $q^{tr}(f_t)$.
On the train set, we further show the gap between the conformal loss (and the QR loss, resp.) with $q_t$ vs. $q^{tr}(f_t)$ (denoted as the optimization error) in **Fig 1 (b) and (c) of the external PDF**. Both optimization errors converge to nearly 0.

**Q2: Why can DPSM achieve a much better learning bound if SA is also adopted?**

A2: DPSM mainly differs from the SA-based method in how the quantile is instantiated:
- **DPSM**: the quantile is a learnable parameter in the lower-level QR;
- **SA-based**: uses the batch-level quantile at each iteration.

Although they share the same SA-type update (SGD), the bottleneck of the SA-based method is the large deviation of the batch-level quantile w.r.t. the true quantile (at least $\Omega(1/s)$, Thm 3.5). Instead, DPSM decouples the quantile estimation from stochastic batches by learning a quantile parameter. By removing the bottleneck of estimating batch-level quantiles, it allows a much smaller learning bound (at most $O(1/\sqrt{n})$, Thm 4.1) when DPSM learns the optimal quantile. We further show that DPSM learns a very accurate quantile (the gap w.r.t. the true quantile converges to 0, as in Fig 1(c) of the original paper). Thus, it secures the improved learning bound.

**Q3: The proof for Thm 3.5 relies on some properties of the Beta distribution which are only available in the asymptotic sense according to Prop 3.4.**

A3: Thank you for pointing this out. We will revise Thm 3.5 to explicitly include the assumption used by Prop 3.4. For the asymptotic result, following standard analysis techniques in the ML literature, e.g., [r4, r5, r2], we use it to formally construct the in-effect conformal loss over the $n$ training data characterized by the Beta distribution. Based on it, we derived the learning bounds for the SA-based method that are compared with DPSM in Section 4.

**Q4: The algorithm did not sufficiently converge within 40 epochs.**

A4: Thank you for your suggestion.
We have extended training to 100 epochs to show the sufficient convergence of the upper- and lower-level losses in **Figure 2 of the external PDF**.

**Q5: Verify assumptions with other conformity scores.**

A5: We originally selected the HPS score to verify the assumptions, following the recent conformal training literature [r6]. We add the verification experiments with the APS and RAPS scores in **Figure 3 of the external PDF**, where Assumptions 3.1 and 3.2 remain empirically valid. We will report these results in the revised paper.

[r1] Mohri, et al. Foundations of Machine Learning. MIT Press 2018
[r2] Yang, et al. Exact gap between generalization error and uniform convergence in random feature models. ICML 2021
[r3] Yuan, et al. Stagewise Training Accelerates Convergence of Testing Error Over SGD. NeurIPS 2019
[r4] Emami, et al. Generalization error of generalized linear models in high dimensions. ICML 2020
[r5] Velikanov, et al. Generalization error of spectral algorithms. ICLR 2024
[r6] Sharma, et al. PAC-Bayes generalization certificates for learned inductive conformal prediction. NeurIPS 2023
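To make the DPSM-vs-SA distinction above concrete, the toy sketch below learns a quantile *parameter* by SGD on the pinball (quantile-regression) loss and compares it with the empirical quantile, mirroring the learnable-quantile idea (the Gaussian "conformity scores", batch size, and learning rate here are all illustrative assumptions, not the paper's setup).

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1
tau = 1.0 - alpha   # we want the (1 - alpha)-quantile

def pinball_subgrad(q, batch, tau):
    # (Sub)gradient in q of the mean pinball loss rho_tau(s - q):
    # -tau where s > q, and (1 - tau) where s <= q.
    return np.mean(np.where(batch > q, -tau, 1.0 - tau))

scores = rng.normal(size=5000)   # stand-in conformity scores
q, lr = 0.0, 0.05
for _ in range(2000):            # SGD on mini-batches, never a batch quantile
    batch = rng.choice(scores, size=64)
    q -= lr * pinball_subgrad(q, batch, tau)

true_q = float(np.quantile(scores, tau))
print(q, true_q)  # the learned parameter tracks the empirical quantile
```

The point of the sketch: even though the updates are stochastic (as in SA), the quantile itself is a persistent learnable parameter, so its accuracy is not limited by the deviation of any single batch-level quantile.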
Bi-perspective Splitting Defense: Achieving Clean-Seed-Free Backdoor Security
Accept (poster)
Summary: This paper addresses poisoning backdoor attacks. Specifically, the authors aim to challenge the assumption of accessing clean data in current backdoor defense literature, with the main idea of utilizing both easier-to-obtain target labels and clean, hard samples. They propose a Bi-perspective Splitting Defense (BSD) which relies on semantic and loss statistics through OSS and ALS, respectively. The proposed method is evaluated on benchmark datasets to demonstrate its effectiveness.

## Update after rebuttal

I thank the authors for their rebuttals. It appears that my original review was accurate in stating:

> implicitly assumes the existence of a feature that can reliably distinguish clean data from backdoor data.

As a result, I find it quite confusing to understand how your approach can be described as clean-data-free. I strongly recommend revising the wording to reflect this more accurately. Regarding the experiments, I do not find them comprehensive enough—especially concerning the attack part—given the vast body of literature on backdoor attacks, as I mentioned in [2]. With that being said, I will adjust my rating to a borderline score.

Claims And Evidence: The claims are supported by either textual and mathematical elaborations or empirical evidence.

Methods And Evaluation Criteria: I believe the paper has some methodological weaknesses, which I will elaborate on below.
- **Free clean-data assumption**: First, I find this assumption invalid. Theoretically, it can be shown (and I can provide a proof if the authors are interested) that **without any information** about the clean data, it is impossible to perform detection or filtering-based defenses. In this sense, the assumption effectively states that, although direct access to clean data is unavailable, a proxy for it is accessible. In particular, while your method operates on mixed data, it **implicitly assumes the existence of a feature that can reliably distinguish clean data from backdoor data**.
This idea has already been extensively explored in previous literature [1]. In other words, the paper does not address the scenario where no clean data is available at all. Building upon the previously mentioned implicit assumption, the proposed bi-perspective approach does not appear novel to me. Overall, I find the paper's novelty and originality to be limited. Refs: [1] https://arxiv.org/abs/1811.00636 Theoretical Claims: There is no theoretical proof provided. However, the paper includes some mathematical derivations, which I believe are correct. Experimental Designs Or Analyses: Given the extensive body of literature on backdoor defenses, the evaluations in the current version require significant improvement. For instance, the number of evaluated attacks and defenses is not comprehensive enough. I suggest that the authors follow the setup in [2], where more than 10 attacks and 7 defenses were tested. Refs: [2] https://arxiv.org/abs/2205.13616 Supplementary Material: No Relation To Broader Scientific Literature: Backdoor attacks and defenses fall within the broader domain of core machine learning. Essential References Not Discussed: I have listed some missing references in previous sections. Other Strengths And Weaknesses: - Please check my previous comments Other Comments Or Suggestions: - Please check my previous comments Questions For Authors: - Please check my previous comments Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1:

Rebuttal: Thank you for your comment.

# R1 Assumption & Novelty

The so-called implicit assumption (`implicitly assumes ... backdoor data`) is not a weakness of our paper, and it does not affect our novelty even when taking [1] into account. See below for details.

## R1.1 Clarification

As the reviewer may have interpreted the background differently, we would like to re-clarify the defense scenario. Our focus is a practical scenario where defenders attempt to train a benign model on a partially poisoned backdoor dataset without any subset **known to be clean** (**see lines 41-76, Introduction; lines 94-121, Preliminary**). Formally, the defender has a training set $D = D_{c} \cup D_p$, but does not know which samples are clean, and has neither known clean samples $D_{c*}$ ($D_{c*} \subset D_c$) nor external clean samples $D_{extra}$ ($D_{extra}\cap D=\varnothing$).

In our previous submission, we consistently use the terms *clean subsets / extra clean subsets / clean seed* rather than *clean data* throughout the main text and explicitly formulate the training set as $D = D_{c} \cup D_p$ instead of $D = D_p$ (**line 107, Preliminary**). While other reviewers have not raised concerns regarding this aspect, we appreciate the opportunity to refine our explanation and will further clarify this definition in our revision.

## R1.2 Free clean-data assumption

As re-clarified above, we did not make the assumption as you interpreted it, so it does not constitute a weakness. BSD works on mixed datasets without known-clean samples because we leverage general attack-related priors, including: **1)** from the perspective of loss statistics, neural networks tend to overfit backdoor samples, resulting in lower loss values (ALS); **2)** from the perspective of semantic information, identifying poisoned samples from backdoor attacks can be reframed as an open-set recognition problem (OSS).
(**See lines 152-160, page 3.**)

Regarding your claim that `without any information ... filtering-based defenses`, there may be a misunderstanding about our setting:
- If you are referring to a scenario where there is absolutely no clean data in the training set, **that is indeed an impossible task**. But this scenario is not what our paper focuses on; "no clean data at all" is quite different from the "no additional clean subsets" scenario we address. Your comment below, `In other words ... no clean data is available at all`, seems to suggest that we should solve a task that was just described as impossible.
- If you mean that the defender has no assumptions regarding clean-data information but is dealing with a mixed dataset, then whether defense is possible depends on how you define "clean data information" and whether general attack-related priors count. Your [1] is a good case: it proposes a method that does not require known clean subsets but distinguishes between samples within a mixed dataset. It manages this using attack-related priors, treating poisoned samples as the minorities within each category.

## R1.3 Novelty

Our novelty is already claimed (**lines 75-98, Introduction**), and Ref. [1] does not affect it. In detail:
- While [1] also assumes no extra clean sets, the core defense mechanism differs. It detects poisoned samples as intra-class outliers, considering **only** the samples within each current class. In contrast, our OSS module innovatively reframes the problem based on open-set recognition: OSS **jointly** considers the target class and all other classes for effective distinction. After identifying $y_t$ and warming up the main model, the OSS module distinguishes samples based on different feature distances between the target class $D_{t}$ (i.e., UKCs + UUCs) and the remaining classes $D_{nt}$ (i.e., KKCs). (Details in **Section 4.1.1**.)
- [1] relies on the assumption that **poisoned samples are intra-class minorities**.
As the intra-class proportion of poisoned samples ($\approx\rho$) increases, the class-wise mean representation moves closer to the poisoned samples, reducing detection effectiveness. When poisoned samples become the intra-class majority (for large $\rho$ values or in imbalanced datasets like GTSRB), clean samples become the outliers instead. In contrast, our method is robust to this issue, as verified by experiments under a larger $\rho$ range (**Figure 3, page 8**).
- There are additional differences in other aspects: adaptive vs. one-time partitioning, a semi-supervised framework vs. a detect-and-retrain framework, etc.
Summary: This paper proposes a backdoor attack defense method, Bi-perspective Splitting Defense (BSD), which does not rely on additional clean subsets. BSD utilizes semantic characteristics through open set recognition-based splitting (OOS) and loss statistics characteristics through altruistic model-based data splitting (ALS) to distinguish clean samples. In their experiments, BSD demonstrates its superiority compared to five state-of-the-art defenses. ## update after rebuttal Thanks for the authors' feedback. After reading the response, I would like to maintain my original score. Claims And Evidence: The authors define backdoor samples as Unknown Classes (UKCs) in their proposed BSD. However, they also admit that clean-label attacks are not UKCs, which makes me question the soundness of their defense claims. This raises doubts about whether their definition is accurate enough—doesn't this create an inconsistency? If clean-label attacks are not UKCs, it implies that OSS is useless against clean-label attacks. The authors also claim that ALS is strong enough to defend against clean-label attacks, so why not simply use ALS for splitting? If that's the case, would applying ALS alone be effective against all types of backdoor attacks? And if so, is OSS even necessary? Methods And Evaluation Criteria: Based on the results of their experiments, they did propose a promising backdoor defense. However, the experimental validation is not robust enough to convincingly demonstrate its effectiveness. Theoretical Claims: Yes, for clean-label attacks, the theoretical foundation of OSS does not hold! Experimental Designs Or Analyses: Yes, I believe this work requires more experiments to demonstrate its soundness and validity. In the case of clean-label attacks, semi-supervised learning lacks the ability to correct mislabeled samples. Both DBD and ASD incorporate semi-supervised learning in their defense mechanisms, yet they struggle to defend against clean-label attacks. 
So why would BSD be effective against them? For clean-label attacks, it is evident that OSS is ineffective because, in such attacks, poisoned samples are also UUCs (Unknown Unknown Classes). However, the authors claim that ALS alone is strong enough to separate poisoned samples in clean-label attacks. This assertion requires an ablation study to confirm whether ALS alone is truly capable of achieving this. Moreover, if ALS can effectively defend against clean-label attacks, its effectiveness against poisoned-label attacks should also be evaluated. Additionally, the authors should analyze the performance of the target label select mechanism, as it raises concerns about the reliability of target label distinction. Given that clean-label attacks are notoriously difficult to defend against, why did BSD only get evaluated on a single benchmark dataset? Furthermore, the study lacks experiments evaluating the defense performance across different model architectures, which is crucial for demonstrating the robustness and generalizability of BSD. Supplementary Material: Yes, I have gone through the entire supplementary material, but I did not find the expected experiments or analyses. Relation To Broader Scientific Literature: The key contributions of the paper are closely related to the security of artificial intelligence. Essential References Not Discussed: The key contribution is the proposal of a high-performance backdoor defense that does not rely on a clean subset. However, the paper only cites some backdoor defenses with worse performance while overlooking a state-of-the-art approach that also does not require a clean subset—namely, Progressive Isolation of Poisoned Data (PIPD), which was published in AAAI 2024. Other Strengths And Weaknesses: Other Strengths: Framing the backdoor attack problem as an open-set recognition issue offers a novel perspective. 
Other Weaknesses: The writing is confusing; the concepts of Known Unknown Classes (KUCs) and Known Known Classes (KKCs) are not clearly introduced, and the figures are also quite disorganized. Other Comments Or Suggestions: None. Questions For Authors: Q1. How effective is the experimental method for determining y_target? Can it reliably identify y_target? How does this method perform in attack scenarios where the number of poisoned samples is relatively small? Q2. Clearly, in the case of a clean-label attack, OOS becomes ineffective since the poisoned samples in clean-label attacks do not have modified labels. However, the authors claim that the ALS stage alone can achieve effective separation. If that is the case, why was there no ablation study on BSD to verify whether the ALS stage truly performs as well as claimed? If ALS alone is sufficient, should it also be capable of defending against poison-label attacks? Is it really as effective as the authors suggest? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1:

Rebuttal: Thank you for your comments. **Extended Figures/Tables are in an anonymous link: https://postimg.cc/68rDsgN3.**

# R1 Understanding BSD's Robustness Against Clean-Label Attacks

First, in response to `Given that clean-label...single benchmark dataset?`, we provide the experimental results of BSD on GTSRB (**Extended Table 1**).

The primary reasons BSD resists clean-label backdoor attacks are: **1) MixMatch can properly process unlabeled poisoned clean-label data; 2) BSD effectively splits poisoned samples.** Specifically, MixMatch's mixup operation visually weakens the trigger in the unlabeled data, preventing the trigger-target-label association. So, as long as the poisoned samples are correctly placed into the poison pool (unlabeled data), BSD effectively mitigates their impact.

**We start by investigating defenses that involve semi-supervised learning, like ASD and DBD.** These works report counterintuitively good performance against clean-label attacks. DBD includes a brief explanation of its effectiveness against clean-label attacks (in their Appendix O, page 25). Reproducing ASD showed that failures occurred when poisoned samples remained in the clean pool. Reviewing the logs from two failed ASD experiments, we found many poisoned samples misclassified into the clean pool (**Extended Table 2**). **This led us to hypothesize that MixMatch can properly process unlabeled poisoned clean-label data, and that DBD and ASD instead failed because their split methods are not robust.**

Further investigation highlighted MixMatch's mixup operation as critical. MixMatch removes the original labels and mixes multiple inputs at the image level, effectively weakening the visual impact of the trigger:
$$
\tilde{x} = \lambda' x_l + (1 - \lambda') x_u, \quad \tilde{y} = \lambda' y_l + (1 - \lambda') y_u, \quad \lambda\sim Beta(\alpha,\alpha), \quad \lambda'=\text{max}(\lambda,1-\lambda).
$$
With the default $\alpha=0.75$, the expectation of $\lambda'$ is ≈ 0.78, reducing the trigger's prominence in $x_u$.
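The quoted expectation of $\lambda'$ (≈ 0.78 for $\alpha = 0.75$) can be checked numerically; a minimal Monte Carlo sketch (the sample size is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
a = 0.75                                  # MixMatch's default alpha
lam = rng.beta(a, a, size=200_000)        # lambda ~ Beta(alpha, alpha)
lam_prime = np.maximum(lam, 1.0 - lam)    # lambda' = max(lambda, 1 - lambda)
print(lam_prime.mean())                   # close to the ~0.78 quoted above
```

Since $\lambda' \geq 0.5$ always, the mixed image is dominated by the labeled input $x_l$ on average, which is what dilutes the trigger carried by $x_u$.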
We visualize the mixed samples in **Extended Figure 1** to better present this operation. Plus, following ASD, we set a 5× smaller $\lambda_u$, further reducing the influence of unlabeled data and mitigating clean-label attacks. **To experimentally verify this hypothesis, we enforced a secure clean-poison split where no poison samples were included in the clean pool.** Under this condition, MixMatch effectively nullified the impact of clean-label attacks, as shown in **Extended Table 3**. **That said, it remains necessary that poison samples are not split into the clean pool.** Therefore, since the final model does not collapse (Table 2, page 6), this supports the reliability of ALS’s split results. Moreover, while OSS seems ineffective against clean-label poisoned samples, it still serves two important purposes: 1) It provides a secure warm-up phase for the main model, preventing an immediate collapse (see our definition in Appendix C.3); 2) Although OSS may not directly identify poison samples, it selects samples from $D_t$ that are farthest in feature space from those in $D_{nt}$, i.e., representative samples of $D_t$. This result intersects with the clean pool identified by ALS, improving the quality of the initial clean samples. Therefore, our statement is that ALS “compensates for the limitations of OSS” (line 375, page 7), not that ALS alone is sufficient. --- # R2 Important Baseline Respecting the reviewers’ suggestions, we made efforts to implement PIPD (**Extended Table 4**). Note that we did not intentionally avoid citing higher-performing works. The exclusion of PIPD (AAAI 24) was due to: - We conduct reproduction evaluations before adding any baselines, and PIPD's reproducibility is poor. Specifically, across its paper, appendix, and GitHub repository, we found many essential omissions of code, parameters, and settings. - Methodologically, while PIPD’s approach is intuitively good, it relies on a good starting point from pre-isolation using LGA. 
However, our analysis suggests LGA alone may require attack-specific adjustments to hyperparameters (e.g., optimizer choice/learning rates). That's also why we replaced our early version of LGA-based $y_t$ estimation. --- # R3 Estimation of y_t We would like to emphasize that the estimation method for $y_t$ is not our major contribution, and we have verified its robustness (Table 4, page 7; Figure 9, page 19; Table 8, page 19; Table 11, page 20) and discussed multiple safeguard measures (lines 1011-1032, page 19). To answer how this method performs when the number of poisoned samples is relatively small, see Figure 3 (a), page 8. --- # R4 Others Model structures: Since we have no model-specific assumptions, our method is inherently model-agnostic. Additionally, our experiments on MobileNet further support this claim (Section 5.3). We also add a brief experiment in **Extended Table 5**. Presentation: We appreciate your insights about strengthening the presentation of our work. We will update our modifications accordingly in our revision (once we get a chance).
Summary: This paper introduces Bi-perspective Splitting Defense (BSD), a novel in-training backdoor defense framework designed to train robust models from poisoned datasets without requiring clean data. By integrating semantic and loss-based perspectives, BSD addresses critical limitations in existing defenses, particularly their reliance on impractical clean subsets or computationally expensive feature analysis, by proposing the OSS and ALS techniques for splitting datasets. The experimental results validate its effectiveness across various settings. Claims And Evidence: The claims in this paper are supported by convincing evidence. - The open-set recognition task and poison sample detection are similar. - Section 4.1.1 discusses the open set recognition setting and the relationship to poison and clean sample detection in detail. Methods And Evaluation Criteria: The methods and evaluation in this paper are sound and follow previous settings. **Methods**: - The proposed methods make sense for the problem, where the feature distance, target-label approximation, and loss-based sample selection techniques are commonly used in the literature for backdoor learning. - The in-training defense, which targets splitting the dataset for training, is reasonable and practical. **Evaluation**: - The evaluation metrics (CA, ASR, and DER) and three benchmark datasets are considered adequate in the literature to fully evaluate clean and backdoor performance. Theoretical Claims: There is no theoretical claim in this paper. Experimental Designs Or Analyses: The experiments are comprehensive and the experimental settings are thoroughly illustrated in this paper. Supplementary Material: The supplementary material contains a thorough illustration of the background, experimental setting, and extended experiments, which can be considered adequate. Relation To Broader Scientific Literature: The relation to the literature is well elaborated in this paper. 
Essential References Not Discussed: It seems that the paper [1] on in-training defense with similar data isolation techniques is not compared and discussed as the baseline. The most recent compared defense method, *ASD*, is from CVPR-23, which lags behind the current SOTA. [1] Progressive Poisoned Data Isolation for Training-Time Backdoor Defense, AAAI-24. Other Strengths And Weaknesses: Pros: - The paper is well-structured and easy to understand, with dedicated figures. - The extensive experiments on the method details are conducted thoroughly. - The defense performances are promising in both CA and ASR. Cons: - See the *Essential References Not Discussed* above. Other Comments Or Suggestions: None. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: **Thank you for your constructive feedback and for recommending our paper for acceptance. We sincerely appreciate your thorough review and valuable insights. In response to your remaining concern, we present more baselines.** # Additional results As you suggested, in line with other reviewers, we added PIPD as an additional baseline, together with NAB [1] and NPD [2]. As presented in **Extended Table 1**, while these methods demonstrated exceptional defense performance against specific attacks, our implementation suggests that they fail to achieve consistent defense across all seven attacks under a single set of parameter settings. We believe that with attack-specific parameter tuning, they could potentially achieve the performance claimed in their papers. However, we must emphasize that such attack-specific tuning is impractical for defenders in real-world scenarios. **Extended Table 1. Additional baselines on CIFAR-10. Since the table is too large, we present the full version in this anonymous link: https://postimg.cc/ftmSjbvR.** **PIPD1** uses the Adam optimizer (learning rate = 1e-3), **PIPD2** resets the optimizer (clearing momentum and resetting the learning rate) before the Selective Training stage, and **PIPD3** applies both a larger penalty ($\lambda = 5$) and the optimizer reset. 
| Attack | < | BadNet | > | < | Blend | > | < | WaNet | > | < | Refool | > | < | LC | > | < | SIG | > | < | Narcissus | > | Avg | Avg |
| :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: | :--: |
| Metric | CA | ASR | DER | CA | ASR | DER | CA | ASR | DER | CA | ASR | DER | CA | ASR | DER | CA | ASR | DER | CA | ASR | DER | ASR | DER |
| PIPD1 | 86.8 | 24.8 | 83.6 | 86.8 | 88.5 | 51.3 | 95.9 | 0.2 | 99.9 | 79.4 | 61.9 | 61.5 | 87.5 | 89.7 | 49.0 | 81.0 | 88.7 | 48.0 | 84.2 | 36.2 | 76.1 | 55.7 | 67.1 |
| PIPD2 | 78.9 | 36.0 | 74.0 | 83.5 | 55.7 | 66.0 | 94.3 | 0.5 | 99.7 | 76.6 | 70.6 | 55.7 | 73.3 | 29.6 | 72.0 | 80.3 | 87.6 | 48.2 | 73.3 | 29.6 | 73.9 | 44.2 | 69.9 |
| PIPD3 | 85.0 | 13.1 | 88.5 | 73.6 | 6.4 | 85.7 | 94.1 | 0.2 | 99.8 | 67.5 | 62.2 | 55.4 | 70.9 | 12.3 | 79.4 | 72.8 | 85.5 | 45.6 | 75.8 | 35.0 | 72.5 | 30.7 | 75.3 |
| BSD | 95.1 | 0.9 | 99.6 | 94.9 | 0.8 | 98.8 | 94.5 | 0.8 | 99.6 | 92.4 | 1.2 | 98.4 | 93.8 | 0.0 | 97.0 | 94.8 | 0.5 | 99.0 | 94.3 | 0.0 | 99.3 | 0.6 | 98.8 |

# Implementation Description The rationale for excluding PIPD from the previous submission: - Methodologically, while PIPD's approach appears intuitively feasible, it relies on a good starting point from Pre-isolation using Local Gradient Ascent (LGA), proposed in ABL. Facing varying backdoor attacks, we found that it may require attack-specific tuning of hyper-parameters, which is not allowed for defenders. This concern, also supported by our early experimental observations, was one of the reasons we ultimately replaced our old LGA-based $y_t$ estimation. - Before incorporating any method as a baseline, we conduct preliminary experimental validation to assess its reproducibility, such as evaluating its defense performance on the typical CIFAR10-BadNet combination. 
However, we observed that PIPD has not provided comprehensive open-source code. Specifically, after reviewing the main text, appendix, and GitHub repository of PIPD, we found PIPD: **(1)** provides empty README files and has no demos; **(2)** does not include any attack implementation (not even BadNet); **(3)** does not provide the implementation of Pre-isolation and Selective training stage; and **(4)** omits several critical training parameters, including optimizer selection, learning rate, the choice of the $\gamma$ in the Pre-isolation stage, the choice of $\lambda$ in selective training, and whether the optimizer/learning rate needs any resetting during the training. Settings: - To ensure experimental fairness, we merge the poisoned training data from BackdoorBench to generate datasets aligning with PIPD’s PyTorch dataset definition. - For the non-open-sourced components of PIPD, we implement them based on their pseudocode and equations. For key parameters not explicitly specified in PIPD, we set $\gamma = 0.5$ in LGA for Pre-isolation, as recommended in ABL, and applied a default penalty of $\lambda = 1$ in Selective Training. - Besides PIPD, we add two more baselines, namely NAB and NPD, with NAB being an in-training backdoor defense and NPD being a recent post-training defense. --- \[1\]: Liu M, Sangiovanni-Vincentelli A, Yue X. Beating backdoor attack at its own game, ICCV 2023. \[2\]: Zhu M, Wei S, Zha H, et al. Neural polarizer: A lightweight and effective backdoor defense via purifying poisoned features, NIPS 2023.
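For readers unfamiliar with the pre-isolation step discussed above: to our understanding of ABL (not the authors' or PIPD's code), Local Gradient Ascent flips the gradient sign for samples whose per-sample cross-entropy loss falls below the threshold $\gamma$, which is exactly what makes it sensitive to the optimizer and learning-rate choices mentioned. A minimal sketch of that loss under this reading, with hypothetical names:

```python
import numpy as np

def lga_loss(ce_losses: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Local Gradient Ascent loss (our reading of ABL, used here for illustration).

    Samples that already fit well (per-sample CE below gamma) get a negated loss,
    so gradient descent pushes their loss back up toward gamma; in early epochs,
    such low-loss samples tend to be the poisoned ones.
    """
    return np.sign(ce_losses - gamma) * ce_losses

ce = np.array([0.05, 0.3, 0.9, 2.1])   # hypothetical per-sample CE losses
out = lga_loss(ce, gamma=0.5)
# Losses below gamma (0.05, 0.3) are negated; losses above gamma are unchanged.
```

This makes concrete why a single $\gamma$ may not transfer across attacks: the split between ascent and descent samples depends on where each attack's poisoned losses sit relative to the threshold.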
Summary: This paper introduces a clean-data-free method, named BSD, to defend against backdoor attacks in deep neural networks. BSD employs two complementary perspectives: Open Set Recognition-based Splitting (OSS), which uses semantic information, and Altruistic Model-based Loss Splitting (ALS), which leverages loss statistics. The approach includes pool initialization, class completion, and selective dropping strategies. Extensive experiments across multiple datasets and attack types show BSD achieves an average 16.29% improvement in Defense Effectiveness Rating compared to five state-of-the-art methods, while demonstrating robustness to different model architectures, poisoning rates, and target labels. ## Update after rebuttal. The authors' explanation has resolved my concern, and I have also read the comments of the other reviewers. The novelty may indeed be a weakness, but I will keep my score of Accept since I did not find any major defects. Claims And Evidence: The paper's core claims about BSD's effectiveness are generally well-supported by extensive experimental evidence across multiple datasets, attack types, and comparison with state-of-the-art methods. However, some claims lack sufficient support: - The paper asserts that the bi-perspective approach is superior but doesn't fully explore why these two specific perspectives are optimal compared to other possible perspective combinations—such as those involving gradient-based metrics, alternative feature distributions, or multi-modal representations. - BSD emphasizes that it does not rely on additional clean data and instead achieves precise segmentation of clean samples through OSS and ALS. 
Although the paper circumvents the need for external clean data by initializing a pseudo-clean pool based on OSS and ALS, the estimation process for the target label ($y_t$) and the introduction of the altruistic model involve implicit assumptions; if, in certain scenarios, the target label is misestimated or the sample distribution deviates significantly from that of standard datasets, the segmentation performance may deteriorate, ultimately affecting the overall defense efficacy. Despite these limitations, the extensive experimental results across various settings do provide substantial evidence for BSD's effectiveness as a clean-data-free backdoor defense method. Methods And Evaluation Criteria: The paper's methods and evaluation criteria are generally well-suited for addressing backdoor defense challenges, employing comprehensive experiments across popular attack methods (BadNets, Blend, SIG, and WaNet) and standard datasets (CIFAR-10/100, TinyImageNet). The evaluation framework effectively balances security and performance using ASR and accuracy metrics, while also introducing a useful DER for comparative analysis. Appendix D also includes a comprehensive hyperparameter analysis, and I believe that the evaluations are exceptionally robust. Theoretical Claims: I reviewed the derivations of theoretical indicators in the paper—such as loss difference and open set distance—and found that these derivations rely primarily on intuitive explanations and ideas borrowed from existing literature rather than on a rigorous theorem-proof structure. Specifically, the paper employs the loss difference $I(x,y) = L_{\mathrm{sce}}(x,y,\phi) - L_{\mathrm{sce}}(x,y,\theta),$ and a Euclidean distance–based scoring function $S(x) = \min_{i \in \{0, 1, \dots, C-1\} \setminus \tilde{y}_t} \| f_e(x) - \mu_i \|_2$ to distinguish clean samples from contaminated ones. These methods are intuitively reasonable and consistent with previous work rather than strict mathematical proofs. 
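The Euclidean scoring rule quoted above is simple to implement. The sketch below is ours, not the authors' code; the feature arrays, class layout, and function name are hypothetical. It computes per-class feature means and scores each sample by its minimum distance to any non-target class mean, matching the quoted formula for $S(x)$.

```python
import numpy as np

def open_set_score(features, labels, y_target, num_classes):
    """S(x) = min over classes i != y_target of ||f_e(x) - mu_i||_2,
    following the scoring function quoted from the paper."""
    mus = np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])
    keep = [c for c in range(num_classes) if c != y_target]
    # Pairwise distances from every feature vector to every kept class mean.
    dists = np.linalg.norm(features[:, None, :] - mus[keep][None, :, :], axis=-1)
    return dists.min(axis=1)

# Toy setup: 3 classes with well-separated Gaussian feature clusters in 8 dims.
rng = np.random.default_rng(1)
feats = np.concatenate([rng.normal(loc=5.0 * c, scale=0.1, size=(20, 8)) for c in range(3)])
labs = np.repeat(np.arange(3), 20)
scores = open_set_score(feats, labs, y_target=2, num_classes=3)
# Samples of the (estimated) target class 2 are far from all non-target means,
# so they receive much higher scores than samples of classes 0 and 1.
```

The intuition matches the review's description: samples semantically close to some non-target class get low scores (plausibly clean), while samples far from every non-target class stand out.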
Currently, the validity and robustness of the approach are mainly supported by extensive experimental results. Experimental Designs Or Analyses: In this paper, the experimental design and analysis validate the effectiveness and robustness of the BSD method from multiple perspectives. This is accomplished by evaluating the approach on several datasets (such as CIFAR-10, GTSRB, and ImageNet) and using a variety of mainstream network architectures (including ResNet-18 and MobileNet-v2), as well as by testing against an array of backdoor attacks—namely BadNets, Blended, WaNet, Label-Consistent, SIG, Refool, Narcissus, and even clean-label and all-to-all attacks. Furthermore, experiments conducted under different poisoning rates and target label settings demonstrate that the method can consistently maintain high clean accuracy (CA) and a low attack success rate (ASR). Additionally, ablation studies and hyperparameter grid searches on key components—such as OSS (semantic-based sample splitting), ALS (loss-statistics-based sample splitting), class completion, and selective dropping—confirm the critical contributions of these modules to improving the defense effectiveness (DER), while training cost evaluations underscore the practical advantages of the method. **However, the results also indicate that setting certain hyperparameters to extreme values can significantly degrade performance, and the method’s reliance on accurately estimating pseudo target labels might affect its efficacy when confronted with unusual data distributions or novel attack methods.** Supplementary Material: I've reviewed all the supplementary material. Relation To Broader Scientific Literature: Prior research in backdoor defenses—such as ABL, DBD, and ASD—has largely depended on clean-data-dependent strategies or single-perspective methods (e.g., loss-guided splitting) to mitigate poisoning attacks. In contrast, BSD integrates two complementary ideas from the broader literature. 
On one hand, it draws on open-set recognition techniques (similar to those used in OpenMAX) to harness semantic similarity and effectively distinguish benign samples from those that have been poisoned. On the other hand, it employs an altruistic model to capture differences in loss statistics, further refining the separation between clean and compromised data. By embedding these dual mechanisms within a semi-supervised learning framework inspired by MixMatch, the method overcomes the limitations of relying solely on extra clean data or a single detection perspective, thereby addressing issues observed under high poisoning rates and clean-label attack scenarios. This synthesis not only builds on established findings in robust deep learning and open-set recognition but also advances the state-of-the-art with a more scalable and cost-effective backdoor defense approach. Essential References Not Discussed: The paper adequately covers the essential references needed to understand the context for its key contributions. Other Strengths And Weaknesses: The paper offers an innovative approach by creatively combining open-set recognition and loss-guided splitting techniques, thereby eliminating the need for extra clean data—a common and restrictive assumption in many earlier works. This originality is complemented by solid empirical evidence demonstrating its effectiveness across multiple benchmark datasets even under high poisoning rates and clean-label attack scenarios, highlighting its potential significance in real-world applications. The clarity of the experimental design and the thorough ablation studies are commendable, providing a well-documented analysis of the contributions of each module. This is a good paper and I did not find any other weaknesses. Other Comments Or Suggestions: The text font in Figure 1 is a bit small. 
Questions For Authors: - Could you elaborate on the rationale behind choosing the OSS and ALS perspectives over other alternatives such as gradient-based metrics, alternative feature distributions, or multi-modal representations? Have you considered or performed ablation studies comparing these different combinations? - Could you provide more insight into the robustness of the target label ($y_t$) estimation process? For instance, what are the failure modes if $y_t$ is misestimated, and can you quantify the impact on BSD’s segmentation and overall defense performance? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. We appreciate your recognition of our contributions. We address your remaining concerns in the following sections. # Potential Failure of Target Label Estimation We would like to kindly emphasize that while target label estimation is an important part of our method's success in defending against backdoor attacks, we believe that based on existing literature and our experimental results, target label estimation is a relatively less challenging task compared to the defense itself. We have verified its robustness through extensive evaluations (Table 4, Figure 9, Table 8, Table 11). Moreover, target label estimation is not the primary contribution of our work. Our main contribution is the innovative reconstruction of the backdoor defense task into an Openset Recognition-like task. For potential failure cases, we have summarized four mitigation solutions: 1. In common scenarios where (1) the dataset is large and well-known with publicly available information on the number of samples per class, or (2) the dataset is well-balanced, target labels can be reasonably approximated using label statistics alone. This simple yet effective strategy ensures reliable target label estimation in practice. 2. We propose more than one target label estimation approach. Specifically, we introduce an alternative method based on local gradient ascent in Appendix D.3 (line 1012), which has proven effective against most of the discussed attacks. Furthermore, this alternative method may offer better target label estimation for unknown attack methods. 3. In scenarios where computational overhead is not a concern, we can apply the OSS algorithm to all possible classes and then aggregate the clean samples identified across all classes as the final OSS result. This approach has demonstrated effectiveness in our experiments (Figure 11). 4. 
For easy attacks, even if target label estimation fails, our pool update strategies can effectively correct the model through subsequent iterations (Figure 10). # The selection of OSS and ALS We sincerely appreciate your insightful comments. Our decision to adopt OSS and ALS was primarily driven by considerations of computational overhead, as well as the challenges posed by the non-clean subset and model-agnostic defense settings. Regarding gradient-based metrics, we agree that they can be effective. In fact, during the early stages of our research, we explored using per-sample gradient information and techniques like Grad-CAM to identify poisoned sample triggers. We further considered combining inpainting and denoising techniques in reverse diffusion to remove both local triggers (e.g., BadNets) and global triggers (e.g., Blended attacks). However, we ultimately abandoned these approaches as they significantly increased training time, particularly when performing adaptive pool updates. Regarding alternative feature distributions, most of them are designed for backdoor detection, often assuming a clean subset. However, this violates the clean-data-free scenario. Additionally, potential feature extraction from intermediate layers would undermine the model-agnostic nature of the algorithm. Defenders would require knowledge of the model structure to select intermediate layers, undermining the practicality of defenses. In contrast, our chosen OSS and ALS methods effectively integrate the defense task into the training process without making model-specific assumptions. Empirically, we observed that the OSS module, inspired by open-set recognition, complements the loss-based ALS, leading to robust defense performance. # Extreme hyperparameter situations We sincerely apologize if the statements (lines 434-439, page 8) caused concerns regarding the extreme hyperparameter scenarios. 
Our intention was to demonstrate the robustness of our method to the settings of the two key hyperparameters, $\alpha$ and $\beta$. We believe the results presented in Table 7 provide a solid reference for choosing hyperparameters, offering practical guidance for defenders and future researchers. Specifically, both values can be set as floating-point numbers within the range of (0,1), with $0.3<\alpha<0.8$ and $\beta<0.5$ being relatively safe choices. To better clarify our points, we have revised the relevant section (lines 434-439, page 8) as follows: > We here present the influence of the main parameters, i.e., $\alpha$ and $\beta$, which control the pool size. As revealed in Table 7, our BSD has robust performance against all the attacks within a relatively loose range of $\alpha$ and $\beta$; we recommend using the default setting in normal cases, and a reasonable range for adjustments is $0.3<\alpha<0.8$, $\beta<0.5$. # Minor issues Our adjustment allows the figure to be displayed at a larger scale on the page, see this anonymous link: https://postimg.cc/bsztqjQS.
Deep Ridgelet Transform and Unified Universality Theorem for Deep and Shallow Joint-Group-Equivariant Machines
Accept (poster)
Summary: The paper studies so-called "joint-equivariant networks", which are neural networks that are simultaneously equivariant with respect to a group action on the input data as well as the parameter space. This can be viewed as a generalization of previously studied group-equivariant neural networks, in the sense that the joint-equivariant networks reduce to the aforementioned networks upon taking a trivial group action on the parameter space. This framework also includes standard fully connected neural networks, which are not equivariant. The main result is a universality theorem for the joint-equivariant networks that unifies certain previous universality results in the literature. ## update after rebuttal I stand by my original score. Claims And Evidence: Yes, see below. Methods And Evaluation Criteria: NA Theoretical Claims: Yes, it is my opinion that the claims in the paper are supported by convincing evidence. The theorems are proven rigorously and the mathematical framework is consistent. It is worth mentioning that the same group of authors have written several papers on this topic before. Although the papers address related points, the present paper represents sufficiently novel results to warrant publication on its own. Experimental Designs Or Analyses: NA Supplementary Material: Yes, the supplementary material contains more mathematical details on the main claims. I have gone through a sample of the proofs and they seem correct. Relation To Broader Scientific Literature: See below Essential References Not Discussed: It strikes me as strange that the authors do not cite the original references for group equivariant CNNs by Cohen et al. They mention one paper from 2019 by Cohen, Geiger and Weiler which provides some of the underlying mathematical structure of GCNNs. 
However, there are several earlier papers introducing the key structure of GCNNs, such as:
- Group Equivariant Convolutional Networks - Cohen and Welling, https://arxiv.org/abs/1602.07576
- Steerable CNNs - Cohen and Welling, https://arxiv.org/abs/1612.08498
- Spherical CNNs - Cohen, Geiger, Koehler and Welling, https://arxiv.org/abs/1801.10130

Other Strengths And Weaknesses: In Section 6 they discuss G-convolutional networks of depth n. However, nowhere do they cite the original papers on group convolutional layers mentioned above. They also do not discuss how their results relate to these previous results. I would certainly have liked to see a detailed discussion of how their GCN-layer is related to previously constructed group equivariant layers by Cohen et al. I think this is a weakness of the paper. Other Comments Or Suggestions: To reiterate the comment above: How are your depth-n G-convolutional layers related to the group convolutional layers by Cohen et al? Questions For Authors: See above Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > How are your depth-n G-convolutional layers related to the group convolutional layers by Cohen et al? Thank you for your question regarding concrete examples. We fully acknowledge the pioneering contributions of Cohen and colleagues in the development of group-equivariant neural networks. The focus of this study lies in universality and a general geometric perspective. Therefore, in this paper, we chose to highlight a single representative example with a particular emphasis on differential geometric aspects, specifically the 2019 paper. The relationship between the ridgelet transform of group-convolution networks and existing works—especially in the context of universal approximation theorems—is thoroughly discussed in Sections 5 and 6 of Sonoda et al. (2022a). We kindly refer the reviewer and future readers to these sections for further details. The three papers suggested by the reviewer do not explicitly analyze the universality or expressive power of the proposed networks. The group convolutions in Group Equivariant Convolutional Networks and Steerable CNNs correspond to group convolutions where a finite group $G$ acts on $X = \mathbb{R}^{n \times n \times k}$ for some $n$ and $k$, and thus their universality follows as a corollary of Theorem 6.1. Similarly, the convolution in Spherical CNNs corresponds to the case where the compact group $G = \mathrm{SO}(3)$ acts on $X = \mathrm{SO}(3)$ (or the homogeneous space $S^2$), and thus the universality can also be shown in the sense of Theorem 6.1. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications!
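To make the group-convolution discussion above concrete for readers: over a finite group such as the cyclic group $\mathbb{Z}_n$, the group convolution referenced in these works reduces to circular cross-correlation, $(f * \psi)(g) = \sum_h f(h)\,\psi(g - h \bmod n)$, and its equivariance under the group action can be checked directly. A minimal numerical sketch (ours, for illustration only; not code from either paper):

```python
import numpy as np

def zn_group_conv(f: np.ndarray, psi: np.ndarray) -> np.ndarray:
    """Group convolution on the cyclic group Z_n:
    (f * psi)(g) = sum_h f(h) * psi((g - h) mod n)."""
    n = len(f)
    g = np.arange(n)
    return np.array([(f * psi[(k - g) % n]).sum() for k in range(n)])

rng = np.random.default_rng(0)
n = 8
f = rng.standard_normal(n)      # signal on Z_8
psi = rng.standard_normal(n)    # filter on Z_8

# Equivariance check: convolving a shifted signal equals shifting the output.
shift = 3
f_shifted = np.roll(f, shift)
lhs = zn_group_conv(f_shifted, psi)
rhs = np.roll(zn_group_conv(f, psi), shift)
```

This is the finite-group special case mentioned in the rebuttal (a finite $G$ acting on $X$); the layers in the Cohen–Welling papers apply the same construction with larger groups acting on image grids.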
Summary: The main result of this paper is Theorem 3.10, which provides a closed-form formula for a ridgelet transform of learning machines with joint-group-equivariant maps. Previous works derived closed-form formulas for the ridgelet transforms of depth-2 networks. This paper generalizes these results to a wider class of neural networks that covers joint-equivariant machines, fully connected networks, group-convolutional networks, and a depth-2 network with quadratic forms. Claims And Evidence: The claims made in the submission are supported by clear and convincing evidence. All theorems and lemmas are provided with complete proofs. Methods And Evaluation Criteria: No method nor evaluation criteria is proposed in this paper. Theoretical Claims: I checked some of the main proofs but did not go through all of them in details. The proofs seem to be correct. Experimental Designs Or Analyses: No experiment is provided. Supplementary Material: I read the supplementary material which contains proofs, but I did not check every detail. Relation To Broader Scientific Literature: This paper provides a closed-form formula for a ridgelet transform of learning machines with joint-group-equivariant maps. This result is already a great generalization of previous results in the literature on ridgelet transforms of neural networks. Essential References Not Discussed: Essential related works on ridgelet transforms and universal approximation theorems are included in the introduction section. Other Strengths And Weaknesses: **Strengths** - The paper successfully provides a closed-form formula for a ridgelet transform of learning machines with joint-group-equivariant maps, thus proposes a unified theoretical framework for understanding universal approximation in both deep and shallow networks, addressing a fundamental topic in machine learning. - The results are strongly supported by rigorous proofs, making a valuable theoretical contribution. 
- Several concrete examples are provided to demonstrate the applicability of the main theorem, enhancing its relevance to various machine learning architectures. **Weaknesses** - The paper relies on dense mathematical formalism, making it less accessible to a broader audience, especially those outside mathematical machine learning. - The paper has no numerical experiments or empirical validation as it focuses entirely on theoretical derivations. Personally, I think this paper is more suitable for the mathematics community than the machine learning community. - The authors mention several times that this result unifies the universal approximation theorem. But it is not clear how Theorem 3.10 can imply the universal approximation theorem, say the one established by (Pinkus, 1999). Pinkus, Allan (January 1999). "Approximation theory of the MLP model in neural networks". Acta Numerica. 8: 143–195. Other Comments Or Suggestions: See weaknesses. Questions For Authors: How does Theorem 3.10 imply the traditional universality approximation theorem in (Pinkus, 1999)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > The authors mention several times that this result unifies the universal approximation theorem. But it is not clear how Theorem 3.10 can imply the universal approximation theorem, say the one established by (Pinkus, 1999). There is no unique definition of "universality." In the context of machine learning, the term "universality" typically corresponds to what is referred to as "density" in standard mathematical terminology. Accordingly, there exist various types of universality depending on the choice of topology. For example, the notion treated in Pinkus (1999) corresponds to $cc$-universality, or density in the topology of compact convergence, which is also known as the topology of uniform convergence on compact sets. In contrast, the type of universality we have demonstrated is $L^2$-universality, or density in the $L^2$-topology. The relationships among various notions of universality commonly used in machine learning theory are discussed in detail in the following reference: Sriperumbudur, Fukumizu, and Lanckriet. Universality, Characteristic Kernels and RKHS Embedding of Measures, JMLR, 2011. Moreover, it is generally known that once an $L^2$-universal integral representation is established, $cc$-universality of a finite-width model can be shown by employing quasi-Monte Carlo integration (more precisely, by applying the law of large numbers). Therefore, the claims of our work indeed imply $cc$-universality as well.
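The law-of-large-numbers argument at the end of the rebuttal can be illustrated numerically: given an integral representation $f(x) = \int \sigma(ax - b)\,\gamma(a,b)\,da\,db$, a width-$N$ network obtained by sampling parameters converges to $f$, which is the step that carries an $L^2$-universal representation to a finite-width approximation. A small self-contained sketch (ours; the particular $\gamma$, activation, and grid are illustrative choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def f_ref(x):
    """Reference target: f(x) = E_{a~N(1,1), b~N(0,1)}[tanh(a*x - b)],
    an integral representation evaluated by dense quadrature."""
    a = np.linspace(-5.0, 7.0, 400)      # covers N(1, 1) well
    b = np.linspace(-6.0, 6.0, 400)      # covers N(0, 1) well
    A, B = np.meshgrid(a, b, indexing="ij")
    w = np.exp(-0.5 * ((A - 1.0) ** 2 + B ** 2)) / (2.0 * np.pi)
    da, db = a[1] - a[0], b[1] - b[0]
    return (np.tanh(A * x[:, None, None] - B) * w).sum(axis=(1, 2)) * da * db

def f_width_N(x, N):
    """Finite-width network: average of N sampled neurons (law of large numbers)."""
    a = rng.normal(1.0, 1.0, size=N)
    b = rng.normal(0.0, 1.0, size=N)
    return np.tanh(np.outer(x, a) - b).mean(axis=1)

xs = np.linspace(-3.0, 3.0, 25)
err = np.abs(f_width_N(xs, 50_000) - f_ref(xs)).max()
# The uniform error on the grid shrinks like O(1/sqrt(N)) as width N grows.
```

On a compact grid of inputs the sampled network tracks the integral representation uniformly, matching the rebuttal's claim that $L^2$-universal integral representations yield $cc$-universality of finite-width models via (quasi-)Monte Carlo integration.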
Summary: This paper introduces a framework for proving constructive universal approximation theorems for general classes of neural networks. By invoking ideas from representation theory, the paper generalizes the ridgelet transform to a larger class of models which satisfy "joint G equivariance." As a consequence, the paper establishes universal approximation theorems for deep fully connected networks, deep group convolutional networks, and a depth-2 network on a quadratic form. Claims And Evidence: The claims in this paper (i.e., the main result Theorem 3.10 and the following examples in Sections 4-7) are indeed supported by clear evidence. Methods And Evaluation Criteria: This paper is theoretical, and so the "methods" (i.e., the theorems proven) do indeed make sense for the problem at hand. Theoretical Claims: The theoretical claims appear to be sound. Experimental Designs Or Analyses: N/A Supplementary Material: I read through Appendix A which contains the proofs, and they appear to be sound. Relation To Broader Scientific Literature: This paper unifies prior works on universal approximation of neural networks via the integral representation approach (i.e., Barron, 1993), and generalizes prior works on closed-form solutions for the ridgelet transform (Sonoda et al., 2021, 2022a, 2022b, 2024a, 2024b). Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The framework developed by this paper seems to be very general purpose, and can allow for constructive proofs of universal approximation theorems for other architectures in the future. - To the best of my knowledge, the contribution of this paper appears to be novel. Weaknesses: - This paper only handles the infinite-width (i.e., continuous) setting, and does not provide any quantitative results on how much width is needed for a certain architecture to approximate a target function, akin to the Barron norm in (Barron, 1993).
- The implications of this work (such as depth separations between different classes of architectures) are not clear, which limits its significance. Other Comments Or Suggestions: - From a clarity perspective, the machinery developed in Section 3 is quite technical and at times difficult to follow (especially for readers less familiar with representation theory). The paper could be improved by motivating the various definitions in Section 3 with a concrete example, such as the deep feedforward network. Questions For Authors: - Can the authors please address my comments in the weaknesses section above? - Additionally, the universal approximation result for deep feedforward networks in Section 5 is unsurprising, given the universal approximation guarantee for two-layer networks and the fact that deep networks are a strictly larger hypothesis class. Could the authors please comment further on this point? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: > This paper only handles the infinite width (i.e continuous) setting, and does not provide any quantitative results on how much width is needed for a certain architecture to approximate a target function, akin to the Barron norm in (Barron, 1993). In our main result, the inverse operator for the integral representation is explicitly obtained. This allows us to apply the so-called Barron’s argument to evaluate the approximation error using the Maurey–Jones–Barron (MJB) bound. In general, applying the MJB bound yields an approximation error of $O(1/\sqrt{n})$ when using a finite-width model with $n$ units. > The implications of this work (such as depth separations between different classes of architectures) is not clear, which limits the significance. (Please also refer to our fourth response for related discussion.) In addition, algebraic methods such as Schur's lemma offer a significant advantage: they allow us to assess density (or universality) even when the input and output are not vectors. While there are several classical theorems—such as the Stone–Weierstrass and Hahn–Banach theorems—that provide general tools for determining universality, applying them to neural networks with deep hierarchical structures requires substantial effort and ingenuity. For example, proof techniques that trace back to Hornik et al. (1989) often rely on repeatedly differentiating activation functions to construct polynomial approximations. While these techniques are mathematically valid, they tend to obscure the underlying mechanism by which neural networks perform approximation, and may not offer much intuitive understanding. In contrast, conditions such as *joint group equivariance* and tools like the *ridgelet transform*, which serves as an inverse operator, provide a more direct and interpretable explanation of how neural networks approximate functions. 
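The Maurey–Jones–Barron/Monte Carlo step of the argument above is easy to visualize with a toy random-feature example (an illustration added here, not the paper's construction): the Gaussian $e^{-x^2/2}$ has the integral representation $\mathbb{E}_{w \sim \mathcal{N}(0,1)}[\cos(wx)]$, and truncating it to $n$ sampled features behaves like a width-$n$ model with error shrinking at roughly the $O(1/\sqrt{n})$ rate.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3.0, 3.0, 201)
target = np.exp(-x ** 2 / 2)        # = E_{w ~ N(0,1)}[cos(w x)], the characteristic function

errs = {}
for n in [100, 1_000, 10_000]:
    w = rng.standard_normal(n)                    # "hidden units" sampled from the base measure
    model = np.cos(np.outer(x, w)).mean(axis=1)   # width-n Monte Carlo discretization
    errs[n] = np.max(np.abs(model - target))      # sup error on the compact set [-3, 3]
    print(f"width n={n:6d}  sup error ~ {errs[n]:.4f}")
```

The error decays roughly like $1/\sqrt{n}$ as the width grows, illustrating how an exact integral representation plus the law of large numbers yields a finite-width approximation guarantee.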
> The paper could be improved by motivating the various definitions in Section 3 with a concrete example, such as the deep feedforward network. Thank you for the suggestion. We will incorporate a concrete example, such as the deep fully connected network, immediately after the definition to improve clarity. > Additionally, the universal approximation result for deep feedforward networks in Section 5 is unsurprising, given the universal approximation guarantee for two-layer networks and the fact that deep networks are a strictly larger hypothesis class. Could the authors please comment further on this point? Thank you for your request. The value of this work does not lie solely in the illustrative example presented in Section 5. Rather, it lies in providing a unified, simple, and constructive principle for analyzing the expressive power of a wide range of learning machines, as demonstrated in Sections 4 through 7. In deep learning, diverse architectures are proposed on a daily basis. Traditionally, analyzing the expressive power of each new architecture required manual, case-by-case efforts by experts. This study identifies a key condition—*joint-group-equivariance*—that enables such analysis to be performed systematically. We believe this is where the main contribution of our work lies.
Summary: The goal of the paper is to describe a general framework for proving universality-type theorems for a generalized class of models that the authors call joint-group-equivariant machines. Joint-group-equivariant machines are models consisting of a sequence of joint-group-equivariant feature maps: maps from the joint input-parameter space to an output (feature) space that are group equivariant (jointly in the input and parameter space). Central to the proof is an interesting application of Schur’s lemma on irreducible representations, which concludes that a certain construction for a ridgelet transform (which is introduced in the paper) is indeed the “dual” which maps functions of a certain class to parameters of the joint-group-equivariant model. The authors proceed to exemplify the proof for various particular types of models, such as “Depth-n Fully-Connected Networks”, “Depth-n Group Convolutional Networks” and a “Quadratic-form with Nonlinearity”. Claims And Evidence: This paper concerns a purely theoretical construction and universal approximation results on it; empirical evidence is not applicable. Methods And Evaluation Criteria: N/A Theoretical Claims: The theoretical claims seem reasonable; the writing is easy to follow and clearly introduces the constructions that the proofs are about. At times the paper uses abstract mathematical language to give alternative descriptions of constructions, which may obfuscate the reading (for example some category-theoretical parallels, whose validity I cannot testify to). However, I find them redundant/complementary to the actual message of the paper, and conditioned on them being true, they do not alter its story. Experimental Designs Or Analyses: N/A Supplementary Material: Some of the proofs in Appendix A, in particular A.3 and A.4.
Relation To Broader Scientific Literature: I am not up-to-date with the literature on this topic Essential References Not Discussed: See above Other Strengths And Weaknesses: The unification under the generalized model is theoretically appealing; however, I am not sure if the generalized models (joint-group-equivariant machines) are any further useful. It is not clear to me what is the impact of the unification of mostly known results. Have we learnt something insightful about machine learning, or generalization, in the process of unifying universality theorems? Other Comments Or Suggestions: Small typo (double ‘the’) on L30, right column Questions For Authors: The depth-N fully connected neural network described in Section 5 is different from what the community is used to and looks more general. Does the provided formulation of fully-connected NNs generalize a ReLU layer, for example? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: > At times the paper uses abstract mathematical language to give alternative descriptions to constructions which may obfuscate the reading (for example some category-theoretical parallels, whose validity I cannot testify). However, I find them redundant/complementary to the actual message of the paper, and conditioned that they are true, they do not alter its story. Thank you for the suggestion. We will retain the mathematically detailed supplementary explanation as an optional message intended for readers with a strong background in algebra or representation theory. Since it is strictly supplementary, skipping it will not affect the main narrative or the overall understanding of the paper. > The unification under the generalized model is theoretically appealing, however I am not sure if the generalized model (joint-group-equivariant machines) are any further useful. It is not clear to me what is the impact of the unification of mostly known results. Have we learnt something insightful about machine learning, or generalization, in the process of unifying universality theorems? The main theorem of this work is not merely a unification of existing results; a key strength lies in its ability to systematically and mechanically assess universality even for novel architectures. For example, it enables the straightforward design of expressive networks with universal approximation capability, such as *quadratic networks*. In the era of large language models, the diversity of proposed architectures has significantly increased. Manually analyzing the expressive power of each new model on a case-by-case basis is no longer feasible from a theoretical standpoint. Our framework addresses this challenge by offering a general principle.
In particular, algebraic tools such as *Schur's lemma* provide a notable advantage in that they allow us to assess universality (or density) even when the inputs and outputs are not vectors, something that traditional techniques often struggle with. While classical theorems like the Stone–Weierstrass or Hahn–Banach theorems offer generic tools for establishing universality, applying them to deep neural networks with hierarchical structures requires considerable ingenuity. By contrast, proof techniques originating with Hornik et al. (1989), which rely on repeatedly differentiating activation functions to construct polynomial approximations, often obscure the intuitive mechanism by which neural networks perform approximation. In comparison, our use of *joint group equivariance* and *ridgelet transforms* as inverse operators provides a more direct and interpretable understanding of the approximation process in deep networks. Regarding generalization: theoretical analysis of generalization error is often based on estimates of the Rademacher complexity of the hypothesis class. However, in deep learning, the Rademacher complexity typically scales exponentially with network depth. This is at odds with practical observations, where deeper networks tend to generalize better. This discrepancy arises from the limitations of current techniques for analyzing composite functions: the reliance on coarse Lipschitz estimates leads to overly pessimistic bounds. To address this gap, we revisited expressive power analysis and developed the *deep ridgelet transform* as a theoretical framework that can precisely handle compositions of functions. > The depth-N fully connected neural network described in Section 5 is different from what the community is used to and it looks more general. Does the provided formulation of fully-connected NNs generalize a ReLU layer for example? Yes, the presented formulation extends typical networks; in particular, it includes the case where $\sigma$ is ReLU.
FireFlow: Fast Inversion of Rectified Flow for Image Semantic Editing
Accept (poster)
Summary: This work proposes a modification of the midpoint method, a well-known second-order numerical solver. One drawback of the midpoint method is that it needs an additional model evaluation for each sampling step, which makes the overall solver slow. This work instead proposes to piggyback on the previous model prediction in the first step of the midpoint method. This reduces the number of model evaluations per step to 1. Surprisingly, the authors show that this modified midpoint method retains the local and global error orders of second-order methods while being computationally efficient. The authors demonstrate advantages of this approach in image editing tasks as well as in image reconstruction and inversion. Claims And Evidence: Some important quantitative experiments are missing an ablation/baseline: the original midpoint method. The proposed method modifies the original midpoint method with an approximation, but the tradeoffs between the two choices are difficult to infer from the provided quantitative results. Note that I understand that the proposed method needs only one NFE per sampling step as opposed to the two NFEs of the original midpoint method, but there should be additional results as described below for a complete overview of the tradeoffs between the two: 1. Table 4 should include quantitative results with the original midpoint method as well. 2. For a better understanding of the effects of modifying the first substep of the midpoint method, Figure 3 should ideally include an error analysis of the original midpoint method. Similarly, Figure 4 and Figure 7 can also include RMSE values of the original midpoint method. Methods And Evaluation Criteria: Yes, the proposed methods and evaluation methods make sense. The paper considers PIE-Bench for editing tasks, and the DCI (densely captioned image) dataset for inversion and reconstruction tasks.
The paper reports quantitative performance on standard metrics for T2I models such as PSNR, SSIM, FID, CLIP similarity, etc. Theoretical Claims: Yes. I checked the proofs of Propositions 3.1 and 4.1 and Theorem 4.2. The proofs seem correct. Experimental Designs Or Analyses: The experiments considered in this work are sound. Supplementary Material: All parts of the SM were reviewed. Relation To Broader Scientific Literature: The proposed method seems to be general enough to also have implications beyond the specific task of semantic editing considered in this work, namely in designing efficient numerical solvers for flow ODEs for unconditional image generation. However, this paper does not include this additional analysis as it is beyond the scope of the research problem considered here, but it is an interesting direction nonetheless. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The proposed method is attractive as it provides the advantages of a second-order method while being computationally more efficient. 2. The results included in the paper seem promising, as this method seems to have both improved qualitative and quantitative performance compared to prior SOTA methods. Weaknesses: As mentioned above, the authors should also provide the ablation study against the original midpoint method for various applications to better understand the tradeoff between the two. Other Comments Or Suggestions: 1. In Figure 6, can the authors include the prompts that were provided for editing? 2. How many sampling steps were used to compute the time cost in Table 5 for other methods besides Vanilla ReFlow? Is this 8 steps for the proposed method and up to 28 steps for RF-Solver? The exact number of steps will help understand the speedup better. Perhaps this can be indicated in an additional column or in the text.
In addition, can we also include results for the original midpoint method to better demonstrate the inference time advantages (I presume it must be 2x that of the proposed method)? 3. Minor (Clarification): Table 4 mentions NFEs for RF-Solver as 15 but the text (line 328) mentions it as 25. Questions For Authors: 1. In Figure 2, how were the pairs of points for plotting the transport maps/trajectories chosen? The number of points in the transport maps is fewer than the number of points in the scatter plot that shows the generated points from the target distribution. Also, could the authors elaborate on why the trajectories of the midpoint method are less straight than the proposed method (as the proposed method is expected to have a larger error constant, even though the error is of the same order)? 2. Algorithms 1 and 2 mention $V_{t_{N-1}}^{inv}$ in self-attention layers. Could you elaborate on the context of the self-attention layer? Isn’t $V_{t_{N-1}}$ simply the prediction of the ReFlow network? How does this relate to the discussion in Lines 1008 - 1012 in the appendix? 3. Have the authors tried using the proposed method for unconditional image generation from ReFlow models? This will be an interesting and useful addition to the appendix. 4. Can the proposed method be used for solving SDEs similarly to the original midpoint method? Do any of the assumptions in the proof not hold for SDEs (e.g. smoothness assumptions)? These limitations, if any, should be discussed. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for carefully verifying our theoretical claims and for the constructive comments. Our responses to your concerns are as follows:

Q1: Include the original midpoint method's results in Table 4 and its inference time in Table 5.

- Although both ours and Midpoint are 2nd-order solvers, they exhibit different numerical stability when applied to the ReFlow ODE. Since the velocity $v_\theta(x,t)$ predicted by the neural network is not perfectly accurate, numerical solvers handle the resulting noise differently. Ours demonstrates robustness to these inaccuracies, leading to slightly better results.

Solver|steps|time(s)|Structure Distance↓|PSNR↑|SSIM↑|CLIP Whole↑|CLIP Edited↑
-|-|-|-|-|-|-|-
Midpoint|15|24.6|0.0307|22.94|0.8208|25.93|22.88
Midpoint|8|13.3|0.0318|22.45|0.8122|26.02|22.89
Ours|8|7.7|0.0271|23.03|0.8249|26.02|22.81

Q2: Fig. 3 should include error analysis of the original midpoint. Fig. 4 and 7 can include RMSE of the original midpoint.

- Thanks for the valuable suggestions. We hope to clarify that Fig. 3 illustrates the approximation error between the estimated midpoint velocity and the true midpoint velocity. The original midpoint method does not involve such an approximation, so an error analysis is not applicable. Regarding Fig. 7 with RMSE values, we revised the illustration of RMSE versus ODE steps "for better understanding of the effects of modification of the first substep of the midpoint method" at [link](https://ibb.co/mrBDMkTT).

Q3: Can the authors include the prompts used in Fig. 6?

- Due to this year’s strict character limit for rebuttals, we hope to include the prompts in the appendix to make room for addressing your other insightful comments. We appreciate your understanding.

Q4: How many sampling steps were used to compute the time cost in Table 5?

- We used 8 steps for FireFlow, 15 steps for RF-Solver, and 28 steps for RF-Inversion, consistent with Table 4.
We appreciate your suggestion and will include an additional column to indicate the number of steps.

Q5: Table 4 mentions steps for RF-Solver as 15 but line 328 mentions it as 25.

- We hope to clarify that line 328 states, “*up to 25* steps.” According to RF-Solver's official code, the step number is not fixed and can be up to 25. Since using 15 steps results in a computational cost comparable to RF-Inversion, we report 15 in Table 4 for a fair comparison.

Q6: In Fig. 2, how were the pairs of points for plotting the transport maps/trajectories chosen? The number of points in the transport maps is fewer than in the scatter plots.

- We strictly follow the [official instructions](https://colab.research.google.com/drive/1CyUP5xbA3pjH55HDWOA8vRgk2EEyEl_P?usp=sharing) provided by the original ReFlow paper for visualizing Fig. 2. According to their code, the first 30 pairs of points are selected to draw the transport trajectories, while all pairs are used for the scatter plot.

Q7: Could the authors elaborate on why the trajectories of the midpoint method are less straight than those of the proposed method?

- Please kindly refer to Reviewer ekVQ’s Q3, where we explain why FireFlow performs better.

Q8: Alg. 1 and 2 mention $V_{t_{N-1}}^{inv}$ in self-attention layers. Could you elaborate on the context of the self-attention layer? Isn’t $V_{t_{N-1}}$ simply the prediction of the ReFlow model? How does this relate to the discussion in Lines 1008-1012?

- We hope to clarify that $V_{t_{N-1}}^{inv}$ refers to the $V$ feature in $\text{softmax}(\frac{QK^\top}{\sqrt{d}})V$ within the self-attention layers of the ReFlow model, whereas $v_{t_{N-1}}(X_{t_{N-1}})$ denotes the prediction of the network. Algorithms 1 and 2 mention $V_{t_{N-1}}^{inv}$ as it is involved in the editing technique introduced by RF-Solver, which we follow to ensure a fair comparison (detailed in the “Image Semantic Editing” section). We observe that using $V_{t_{N-1}}^{inv}$ can lead to certain failure cases, and Lines 1008-1012 discuss its limitations along with some simple alternatives.
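For context, the value-feature substitution being described can be sketched in a few lines of numpy (an illustration with hypothetical shapes, not the FLUX or RF-Solver code): editing keeps the attention computation of the current branch but substitutes a cached value feature $V^{inv}$ from the inversion pass.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(Q, K, V):
    # The V here plays the role of the feature cached as V^{inv} during inversion.
    d = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d)) @ V

rng = np.random.default_rng(0)
L, d = 4, 8                                   # hypothetical: 4 tokens, head dim 8
Q, K, V = (rng.standard_normal((L, d)) for _ in range(3))
V_inv = rng.standard_normal((L, d))           # stand-in for the cached inversion feature

out_plain = self_attention(Q, K, V)           # ordinary denoising step
out_injected = self_attention(Q, K, V_inv)    # same attention map, cached V injected
```

The injected output mixes value content from the inversion trajectory while leaving the attention weights of the current branch untouched, which is the mechanism the algorithms refer to.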
Q9: Have the authors tried using the proposed method for unconditional image generation from ReFlow models?

- Thanks for your suggestion. We follow the original ReFlow paper’s protocol for unconditional image generation on CIFAR10, using the open-source 1-Rectified-Flow-distill weights. Our method performs well:

Solver|steps↓|NFEs↓|FID↓|IS↑
-|-|-|-|-
Euler|15|15|5.67|9.20
Euler|10|10|5.83|9.03
Midpoint|5|10|5.45|9.27
Ours|5|6|5.35|9.26

Q10: Can FireFlow be used for solving SDEs similarly to the original midpoint method? Do any assumptions in the proof not hold?

- In fact, neither FireFlow nor the original midpoint method can be directly applied to SDEs because they do not properly handle the stochastic term. The noise term $dW_t$ in an SDE, representing a Wiener process increment, is nowhere differentiable, rendering approximations that rely on smoothness invalid. Besides, using a midpoint method would implicitly assume a Stratonovich interpretation, while these SDEs are defined in the Itô sense. Standard ODE solvers don’t account for the correction term required when switching between these interpretations.
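To make the point about the stochastic term concrete, here is a minimal Euler–Maruyama sketch (an Ornstein–Uhlenbeck toy example added here, unrelated to the paper's solver): the Wiener increment is sampled as $\sqrt{\Delta t}\,\xi$ at every step rather than predicted from any smooth velocity, which is exactly what a midpoint-style extrapolation cannot provide.

```python
import numpy as np

# Euler–Maruyama for the OU process dX = -theta * X dt + sigma dW.
# dW ~ N(0, dt) is drawn fresh each step; no derivative of W exists to reuse.
rng = np.random.default_rng(0)
theta, sigma, dt, T, n_paths = 1.0, 1.0, 0.01, 5.0, 20_000

X = np.zeros(n_paths)
for _ in range(int(T / dt)):
    dW = np.sqrt(dt) * rng.standard_normal(n_paths)
    X = X - theta * X * dt + sigma * dW

# Stationary variance of the OU process is sigma^2 / (2 * theta) = 0.5.
print(f"sample variance ~ {X.var():.3f}")
```

The simulated variance settles near the analytic stationary value $\sigma^2/(2\theta)$, driven entirely by the sampled increments; this dependence on non-smooth noise is what breaks the smoothness assumptions behind the ODE solvers discussed above.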
Summary: This paper proposes a low-cost alternative to a second-order ODE solver aimed at Rectified Flows. Compared to the common second-order ODE solver, which requires $2T$ NFEs, this method only needs $T+1$ NFEs while maintaining the sampling quality of the second-order ODE solver. The core idea is to replace $v_t$ in the standard midpoint method with the midpoint velocity $v_{(t-1)+\frac{\Delta t}{2}}$ from the previous step $t-1$ (Eq.8 vs. Eq.10), so that intermediate values from step $t-1$ can be cached during the denoising process and loaded in the $t$-th step, avoiding the need for 2 NFE calculations at each step. ## Update After Rebuttal The authors' reply resolved most of my concerns, but I still feel that this work is somewhat lacking in novelty, and I will keep my score. Claims And Evidence: This paper claims that the proposed method can maintain the sampling efficiency of a 1st-order ODE solver while achieving the sampling quality of a 2nd-order ODE solver. This claim is mainly verified through a comparison of its sampling-quality-versus-NFE tradeoff with those of the 1st-order Vanilla ReFlow and the 2nd-order RF-Solver. The experimental comparison includes qualitative and quantitative results on T2I, inversion-then-reconstruction, and inversion-then-prompt-guided editing. Based on the reported experimental results and my understanding of the field, this claim has been verified. Methods And Evaluation Criteria: This paper proposes a method that is meaningful for accelerating the sampling of rectified flows, which helps to improve efficiency while ensuring the quality of sampling, thereby achieving fast inversion and prompt-guided editing. Theoretical Claims: The core contribution of this paper lies in improving the algorithm in Eq.8~Eq.9 to Eq.10~Eq.12, where Eq.8~Eq.9 are the standard midpoint method.
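To make the two update rules concrete, here is a toy sketch of my own (not the authors' implementation) of the standard midpoint step (Eq.8~Eq.9, 2 NFEs per step) against the cached variant described in the summary (Eq.10~Eq.12, 1 NFE per step after a single bootstrap evaluation), order-tested on a scalar ODE with a known solution:

```python
import numpy as np

def v(x, t):                  # toy velocity field: dx/dt = cos(t) * x
    return np.cos(t) * x

def exact(t, x0=1.0):         # exact solution: x(t) = x0 * exp(sin t)
    return x0 * np.exp(np.sin(t))

def midpoint(x0, T, N):       # standard midpoint (Eq.8~Eq.9): 2 NFEs per step
    x, h, nfe = x0, T / N, 0
    for n in range(N):
        t = n * h
        k1 = v(x, t); nfe += 1
        k2 = v(x + 0.5 * h * k1, t + 0.5 * h); nfe += 1
        x = x + h * k2
    return x, nfe

def cached_midpoint(x0, T, N):  # cached variant (Eq.10~Eq.12): N + 1 NFEs total
    x, h, nfe = x0, T / N, 0
    k_prev = v(x, 0.0); nfe += 1            # single bootstrap evaluation
    for n in range(N):
        t = n * h
        k = v(x + 0.5 * h * k_prev, t + 0.5 * h); nfe += 1
        x = x + h * k
        k_prev = k                          # cache the midpoint velocity
    return x, nfe

T, errs_mid, errs_cached = 2.0, {}, {}
for N in [100, 200]:
    xm, nfe_m = midpoint(1.0, T, N)
    xc, nfe_c = cached_midpoint(1.0, T, N)
    errs_mid[N], errs_cached[N] = abs(xm - exact(T)), abs(xc - exact(T))
    print(f"N={N}: midpoint err={errs_mid[N]:.2e} ({nfe_m} NFEs), "
          f"cached err={errs_cached[N]:.2e} ({nfe_c} NFEs)")
```

Halving the step size shrinks both errors by roughly 4x, the signature of a second-order method, while the cached scheme spends $N+1$ rather than $2N$ evaluations.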
The proofs of Propositions 3.1 and 4.1 and Theorem 4.2 are also provided in the supplementary materials, providing a theoretical basis for the methods presented in this paper. Experimental Designs Or Analyses: This paper first verifies the effectiveness of the method on synthetic mixed-Gaussian data through a toy example, and then validates its effectiveness in T2I, inversion-reconstruction and inversion-editing experiments. From the perspective of training-free accelerated generation via ODE solvers, the experimental design is reasonable. Supplementary Material: Yes, the supplementary materials include the code of this paper. Relation To Broader Scientific Literature: The most relevant work to this paper is the previous method RF-Solver [1]. RF-Solver has shown that expanding the first-order ODE solver of vanilla Rectified Flow into a second-order form helps improve the accuracy of inversion. The contribution of this paper lies in reducing the NFEs of the second-order ODE solver while maintaining the quality of sampling. [1] Wang, J., Pu, J., Qi, Z., Guo, J., Ma, Y., Huang, N., Chen, Y., Li, X., and Shan, Y. Taming rectified flow for inversion and editing. arXiv preprint arXiv:2411.04746, 2024. Essential References Not Discussed: Based on my understanding of the field, this paper discusses most methods of accelerating rectified flow via solvers. Other Strengths And Weaknesses: Strengths: This paper builds on FLUX, a 12B T2I model, and it is meaningful to study its inversion acceleration. Weaknesses: Some experimental validations are not sufficiently thorough. For example, in Table 2, the 20-step setting should also be compared against baselines run for the same number of steps, to observe how much better the proposed solver performs at an equal step count. Other Comments Or Suggestions: Some chart captions are too simple, such as Table 2, Table 4, and Figure 6, which should have more complete and self-evident captions.
Questions For Authors: 1. The toy example in Figure 2 shows that the method proposed in this paper is better than the midpoint method under the same NFE budget; however, according to my understanding, the proposed method is a low-cost approximation of the midpoint method. Why is the result even better than the midpoint method? 2. In addition to the approximation of $v_t$ in Eq.10~12 and the introduction of the cache mechanism, what are the most valuable contributions of this paper, according to the authors? 3. Under different seeds, how do the results produced by inversion and editing compare, and do they have diversity? 4. When the NFE is extremely small (1-4), how is the performance? Some distillation-based methods seem to be able to achieve inversion and generation with fewer steps. 5. Why doesn't this paper compare with methods such as DPM-Solver++, UniPC, etc., and why can't these methods be adapted to the sampling of rectified flow? I will re-evaluate my score based on the authors' reply. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer’s detailed feedback and constructive suggestions for improving our paper. Below are our responses to your concerns:

Q1: As shown in Table 2, the effect of the 20-step method should be compared again, to observe how much better the solver performs at the same number of steps.

- Thank you for your valuable suggestion. We have now evaluated all three methods at the same 20 steps, and our approach continues to outperform the others, as shown below:

Methods|Steps↓|NFE↓|FID↓|CLIP Score↑
-|-|-|-|-
FLUX-dev|20|20|26.77|31.44
RF-Solver|20|40|25.54|31.39
Ours|20|21|25.13|31.44

Q2: Some chart captions are too simple, such as Table 2, Table 4, and Figure 6.

- Thank you for pointing that out. We will revise the descriptions for Table 2, Table 4, and Figure 6 to make the captions more complete and self-explanatory.

Q3: In Figure 2, why is the result of a low-cost approximation of the midpoint method even better than the midpoint method under the same NFE?

- To clarify, the standard midpoint method requires 2 NFEs per step, whereas our approach requires only 1 NFE per step. This means that, under the same computational cost, our method effectively takes more steps. Using more steps (i.e., smaller step sizes) generally improves accuracy in the discretization of the continuous ODE by reducing the truncation error, which is bounded by the step size. This explains why our low-cost approximation achieves better results despite using the same total NFE.

Q4: Apart from the approximation, what are the most valuable contributions of this paper, according to the authors?

- We believe our work challenges the conventional notion that high-order ODE solvers must be computationally expensive. Beyond mere approximation, we introduce a new direction for accelerating ReFlow ODEs with a solid theoretical foundation and a fully training-free approach.
Our method is distinctly different from training-based distillation and traditional model compression techniques. The modified midpoint method simply serves as a starting point; we are actively exploring a broader family of high-order methods to further enhance efficiency.

Q5: Under different seeds, how do the results produced by inversion and editing compare, and do they have diversity?

- We would like to clarify that, unlike T2I, which begins from random noise, inversion-based editing and reconstruction start from a fixed original image as the initial point of the ReFlow ODE. As shown in Algorithms 1 and 2, solving the inversion and denoising ODEs does not involve any randomness, resulting in no diversity across different seeds. For a quantitative analysis, please kindly refer to our response to Q1 for Reviewer MGiL.

Q6: When the NFE is extremely small (1-4), how is the performance? Some distillation-based methods seem to be able to achieve inversion and generation with fewer steps.

- Thank you for your insightful question. Our current modified midpoint method is not designed for extremely small NFEs (1-4), and the results are unsatisfactory. However, we are actively exploring this avenue and have found higher-order solvers compatible with FireFlow, enabling faster inversion with as few as 4 steps, though this exploration extends beyond the scope of this submission.
- Regarding distillation-based methods, FireFlow is a fully training-free approach, making direct comparisons with training-based methods less fair. Besides, an efficient solver introduces a novel perspective for accelerating ReFlow models, offering a distinct and complementary direction to existing techniques.

Q7: Why doesn't this paper compare with methods such as DPM-Solver++, UniPC, etc., and why can't these methods be adapted to the sampling of rectified flow?

- DPM-Solver++ and UniPC rely on diffusion model ODEs, which contain an analytically tractable term.
ReFlow lacks this structure, making these solvers ineffective. Specifically:

- Fast solvers for diffusion models exist because traditional diffusion trajectories are **curved** and inefficient, requiring precise numerical integration. In contrast, **ReFlow straightens these paths**. Since DPM-Solver++ and UniPC refine curved trajectories through multi-step corrections, they are unnecessary for ReFlow. Instead, as demonstrated by FireFlow, an efficient solver for ReFlow should focus on minimizing discretization error without requiring multiple steps.
- Formally, both DPM-Solver++ and UniPC leverage the diffusion model ODE structure $\frac{dx}{dt}=f(t)x+g(t)e_\theta(x,t)$, where the linear term $f(t)x$ allows for **partial analytical integration**, enabling efficient numerical solvers. However, ReFlow follows a different ODE, $\frac{dx}{dt}=v_\theta(x,t)$, which **lacks an explicit drift term** $f(t)x$. As a result, the analytical simplifications used by DPM-Solver++ and UniPC do not apply, making these fast solvers inapplicable without a fundamental redesign.
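The value of that linear term can be demonstrated with a toy semi-linear ODE (a sketch added here, not DPM-Solver++ itself): an exponential-Euler step integrates $f(t)x = -\lambda x$ in closed form and stays accurate at a step size where plain forward Euler is unstable, which is the kind of partial analytical integration a plain ODE $\frac{dx}{dt}=v_\theta(x,t)$ does not admit.

```python
import numpy as np

lam, h, T = 100.0, 0.04, 2.0                 # stiff linear part: lam * h = 4
g = lambda t: lam * np.cos(t)                # dx/dt = -lam * x + g(t)

def exact(t):                                # particular solution, used as the IC
    return (lam**2 * np.cos(t) + lam * np.sin(t)) / (lam**2 + 1)

def forward_euler(x, N):                     # must resolve the stiff term numerically
    for n in range(N):
        x = x + h * (-lam * x + g(n * h))
    return x

def exp_euler(x, N):                         # linear term integrated analytically
    a = np.exp(-lam * h)
    for n in range(N):
        x = a * x + (1 - a) / lam * g(n * h)
    return x

N = int(T / h)
x0 = exact(0.0)
err_fe = abs(forward_euler(x0, N) - exact(T))
err_ee = abs(exp_euler(x0, N) - exact(T))
print(f"forward Euler error: {err_fe:.2e}, exponential Euler error: {err_ee:.2e}")
```

With $\lambda h = 4$ the forward-Euler iteration is unstable and its error explodes, while the exponential step stays small; ReFlow's ODE offers no such closed-form linear part to exploit, which is why DPM-Solver++-style accelerations do not carry over.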
Summary: The authors propose a second-order ODE solver to speed up inversion and reconstruction of flow-based models. It reuses the saved midpoint velocity, so the computational cost remains the same as a first-order ODE solver. It is shown to be faster and of higher quality, and it also benefits image editing by a large margin.

Claims And Evidence: Yes. The claims in the paper are supported with evidence.
- For example, the authors claimed that the inversion efficiency can be improved with the proposed second-order method. Compared with the midpoint method, it reduces half of the NFEs.
- The authors claim the accuracy remains similar to standard midpoint methods. This is proven in the appendix under some constraints.
- The authors also showed results on PIE-Bench and visual results to demonstrate the editing quality.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: The main theoretical claims are Proposition 4.1 and Theorem 4.2. They are intuitively correct; however, they may have some potential issues to the best of my knowledge.
- I am not sure whether the proposed method will degrade to a 1st-order ODE solver if the velocity estimation of the midpoint is not accurate enough, say when v is not well trained.
- Would it be better to estimate the next midpoint velocity by measuring the current velocity and previous velocity and propagating, instead of fully relying on the velocity estimation?

But given the constraints and assumptions, the proof is sound and correct.

Experimental Designs Or Analyses: The experiments have no obvious flaw. One concern is that all the methods are evaluated using the FLUX model, which is well trained. So it is not clear whether the method generalizes to other ReFlow models, especially when v is not accurate enough.

Supplementary Material: I reviewed the results and related code. The code looks good, but the editing results have the same identity preservation issues as all other inversion-based editing methods.
Relation To Broader Scientific Literature: I am not confident that this method is novel enough. First, it is a fully training-free method that does not involve 'learning' methods, which are better aligned with ICML. Second, 2nd-order or 3rd-order ODE accuracy can also be achieved by numerical extrapolation (using the previous two velocity values from the network to extrapolate the next midpoint), which also needs only 1 NFE. Some proper comparison might be needed to mitigate the concerns of large delta_t and inaccurate v estimation. Third, the paper does not introduce enough novelty in inversion and editing. The proposed method still has problems, like all other inversion-based methods, such as poor identity preservation.

Essential References Not Discussed: Not found.
Other Strengths And Weaknesses: See above.
Other Comments Or Suggestions: See above.
Questions For Authors: See above.
Code Of Conduct: Affirmed.
Overall Recommendation: 3
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer for carefully verifying our theoretical claims, especially given the heavy review workload. We will do our best to address your concerns.

Q1: It is a fully training-free method which does not involve 'learning' methods which are better aligned with ICML.
- We respect the reviewer's opinion on whether a fully training-free method fits within the scope of machine learning. However, we believe that generative models are a prominent topic in the field of machine learning, and the ReFlow model, as a pioneering work in this area, warrants attention. FireFlow, with its efficiency gains, demonstrates practical applicability, which aligns with the interests of the machine learning community and hence with ICML.

Q2: Would it be better to estimate the next midpoint velocity by measuring the current velocity and previous velocity and propagating, achieved by numerical extrapolation (using the previous two velocity values from the network to extrapolate the next midpoint)?
- Thank you for your insightful suggestion. If we understand correctly, the proposed numerical extrapolation involves using the previous two velocity values, $v_t$ and $v_{t-1}$, generated by the network to approximate the next midpoint velocity as $\hat{v}_{t+\frac{1}{2}\Delta t}=v_t + 0.5(v_t-v_{t-1})$ and updating the state as $X_{t+1}=X_t+\Delta t\cdot\hat{v}_{t+\frac{1}{2}\Delta t}$.
To evaluate this approach, we have conducted an ablation study on step selection, and our approach continues to outperform the others, with results on PIE-Bench presented below:

Method|steps↓|NFEs↓|Structure Distance↓|PSNR↑|SSIM↑|CLIP Whole↑|CLIP Edited↑
-|-|-|-|-|-|-|-
Numerical Extrapolation|15|30|0.0398|22.48|0.7913|**26.14**|22.64
Numerical Extrapolation|8|16|0.0658|21.45|0.7504|26.04|22.32
Ours|**8**|**18**|**0.0271**|**23.03**|**0.8249**|26.02|**22.81**

Q3: Will the proposed method downgrade to a 1st-order ODE solver if the velocity estimation of the midpoint is not accurate enough, and can the method be well generalized to other ReFlow models, especially when $v$ is not accurate enough?
- Thank you for your insightful question. We recognize that the robustness of our approach, particularly when velocity estimation is less accurate, is an important aspect to explore further. Since the accuracy of $v$ directly impacts image quality, we assess our method on earlier-generation, medium-sized Stable Diffusion models, such as SD3-medium, which has a significantly lower ELO score than the FLUX model and serves as a meaningful test case. We apply different ODE solvers to ReFlow-based editing, and the results on PIE-Bench are as follows:

Method|Order|steps↓|NFEs↓|Structure Distance↓|PSNR↑|SSIM↑|CLIP Whole↑|CLIP Edited↑
-|-|-|-|-|-|-|-|-
RF-Inversion|1st-order|28|56|0.0464|19.75|0.6951|25.20|**23.16**
Ours|2nd-order|**8**|**18**|0.0476|19.41|0.6871|25.16|23.11
Ours|2nd-order|15|32|**0.0439**|**20.07**|**0.7019**|**25.30**|23.07

- While the speedup rate is lower than on FLUX, our method consistently outperforms the 1st-order ODE solver across most metrics, achieving approximately a 2× speedup while maintaining high-quality results. Given that the accuracy of $v$ is directly linked to image quality, we believe it is more meaningful to focus on state-of-the-art models with higher accuracy and better generation capabilities, such as the FLUX model, as discussed in our paper.
Q4: The paper does not introduce enough novelty in inversion and editing, and it has problems, like all other inversion-based methods, such as poor identity preservation.
- We respectfully disagree with the claim that our work lacks novelty in inversion and editing. Our fast ODE solver for ReFlow introduces a fundamentally new approach to accelerating generative models, with no direct overlap with existing acceleration methods. This is substantiated by both rigorous theoretical analysis and extensive experiments, demonstrating clear improvements.
- As with any training-free editing method, certain failure cases are expected, and claiming otherwise would be unrealistic. This is precisely why we have transparently acknowledged limitations in the appendix. However, occasional challenges in identity preservation should not overshadow the consistently strong performance demonstrated on widely recognized benchmarks. Our results indicate that FireFlow surpasses prior methods in efficiency and effectiveness.
- More importantly, our core contribution extends beyond individual editing performance: it establishes a new direction for accelerating ReFlow models. This is not merely an incremental improvement but a principled advancement, supported by both theoretical foundations and empirical validation.

---

Rebuttal Comment 1.1: Comment: Thanks for the detailed rebuttal. For the numerical extrapolation, it is better to consider the timestep difference rather than simply averaging. I believe the conclusions will be slightly different, and I encourage the authors to add them to the paper. I will upgrade my score to weak accept because the authors tried the method on other base models in the rebuttal and proved its effectiveness. But I still hope the AC can balance the novelty and technical depth in the final evaluation.
The authors did great work proposing a simple yet effective inversion method, evaluating its effectiveness and efficiency, and proving the quality on a wide range of editing benchmarks. But it has little impact on learning approaches, and it is also not tightly relevant to improving core image editing quality. Also, the fact that 2nd-order ODE solvers help sampling is not novel, although the authors used a trick similar to numerical extrapolation to reduce NFEs.

---

Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the thoughtful reconsideration of our work and for providing constructive suggestions, especially regarding the numerical extrapolation and the discussion on novelty and technical depth. Please find our response below, which we hope will be viewed as a continuation of academic discussion rather than self-defense.
- Following your helpful suggestion, we revisited the numerical extrapolation by incorporating timestep differences rather than simply averaging. If we understand correctly, the reviewer suggests estimating the midpoint velocity using a weighted formulation such as $\hat{v}_{t+\frac{1}{2}\Delta t} = \alpha \cdot v_t + (1 - \alpha) \cdot v_{t-1}$, where $\alpha$ is a weight determined by the timestep difference. A straightforward implementation is $\alpha = \frac{\Delta t_{\text{current}}}{\Delta t_{\text{current}} + \Delta t_{\text{prev}}}$, which assigns more weight to the current timestep when it is larger, reflecting the intuition that more recent velocities are more informative. We have conducted experiments using this formulation, and the updated results will be included in the revised version of the paper.

Method|steps|NFEs|Structure Distance↓|PSNR↑|SSIM↑|CLIP Whole↑|CLIP Edited↑
-|-|-|-|-|-|-|-
Numerical Extrapolation|15|30|0.0397|22.44|0.7911|26.12|22.70
Ours|8|18|0.0271|23.03|0.8249|26.02|22.81

- We also sincerely appreciate the reviewer's comments on the balance between novelty and technical depth.
We fully agree with and respect this perspective. At the same time, we would like to clarify the motivation and contribution of our work. Forward and inverse ODEs are central to ReFlow-based editing methods, which currently represent a mainstream direction in image editing. Our contribution lies in providing a fast and effective method for solving ReFlow ODEs, making our work relevant within this framework. Compared to the numerical extrapolation baseline, our method provides clear empirical and theoretical improvements in reducing NFEs, and it is non-trivial, introducing a principled mechanism rather than a simple variant of existing tricks. Beyond the basic application of second-order solvers, our goal is to challenge the conventional belief that higher-order methods must be computationally expensive. By designing a lightweight yet effective scheme tailored to the characteristics of ReFlow ODEs, we hope to inspire future research on efficient solver designs for image generation tasks. Once again, we thank the reviewer for the valuable feedback and for the encouraging remarks on our work.
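For concreteness, the two midpoint-velocity estimates discussed in this thread (the simple extrapolation from Q2 and the timestep-weighted variant from Comment 1.1) can be written as one small helper. This is only an illustrative sketch of the formulas quoted above, not code from the paper.

```python
def extrapolated_midpoint_velocity(v_t, v_prev, dt_cur, dt_prev, weighted=False):
    """Estimate the velocity at t + dt_cur/2 from the two most recent
    network velocities, following the two baselines discussed above."""
    if weighted:
        # Timestep-weighted variant: alpha * v_t + (1 - alpha) * v_prev,
        # with alpha = dt_cur / (dt_cur + dt_prev)
        alpha = dt_cur / (dt_cur + dt_prev)
        return alpha * v_t + (1 - alpha) * v_prev
    # Simple variant: linear extrapolation v_t + 0.5 * (v_t - v_prev)
    return v_t + 0.5 * (v_t - v_prev)

# With equal timesteps the weighted form reduces to plain averaging:
print(extrapolated_midpoint_velocity(2.0, 1.0, 0.1, 0.1))                  # 2.5
print(extrapolated_midpoint_velocity(2.0, 1.0, 0.1, 0.1, weighted=True))   # 1.5
```

Note the two variants behave differently: the simple form extrapolates beyond $v_t$, while the weighted form interpolates between the two observed velocities.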
Summary: This paper introduces FireFlow, a fast inversion and editing method for Rectified Flow (ReFlow) models, designed to enable accurate image reconstruction and semantic editing with minimal computational overhead, achieving fast, high-fidelity image editing while fully leveraging the model's inherent linear motion properties. The approach advances practical applications of ReFlow-based generative models without architectural modifications or training.

Claims And Evidence: The claims are largely supported by theoretical analysis and extensive experiments. The core contributions (efficient high-order solver, empirical superiority) are valid and impactful. Some suggestions for revisions: the evaluation on PIE-Bench uses a fixed seed, and variance across multiple runs is not reported. Involving human assessment would strengthen the contribution.

Methods And Evaluation Criteria: The proposed methods and evaluation criteria are well-aligned with the problem of efficient inversion and semantic editing for ReFlow-based generative models. The evaluation criteria are appropriate and comprehensive, covering quality, accuracy, and speed. Minor gaps in dataset diversity and human evaluation could be addressed in future work but do not undermine the core contributions.

Theoretical Claims: The numerical experimental results and theoretical results are not a perfect match; there are slight inconsistencies at some points, but I agree with the authors that the overall trend is consistent. The findings presented in this paper contribute to this field.

Experimental Designs Or Analyses: Most results are compelling, but several aspects of the experimental design and analysis warrant closer scrutiny:
- The baselines (e.g., RF-Solver, RF-Inversion) are compared at different step counts and NFEs. Without controlling for NFEs or steps, the claimed speedup (e.g., "3× runtime speedup") may exaggerate the method's efficiency.
Supplementary Material: The reviewer checked the ablation and limitation parts. More discussion of the causes of the limitations should be given. Are there any solutions for these cases?

Relation To Broader Scientific Literature: The key contributions of FireFlow build upon and extend several lines of research in generative modeling, numerical methods for ODEs, and image inversion/editing, and would benefit researchers in related fields.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses:
Strengths
- The authors propose a modified midpoint method that achieves second-order accuracy with first-order computational cost, a creative adaptation of ODE solvers tailored for ReFlow's constant-velocity dynamics. This is a significant departure from prior ReFlow inversion methods (e.g., RF-Solver, RF-Inversion) that either sacrifice accuracy for speed or require higher NFEs.
- Establishes error bounds for velocity reuse (Proposition 4.1) and proves global truncation error equivalence to standard midpoint methods (Theorem 4.2), advancing the theoretical understanding of ReFlow inversion.
- The logical flow from motivation to theory, method, and experiments is well-structured. Algorithms 1–2 and synthetic 2D experiments clearly illustrate the solver's mechanics. Code availability and detailed ablation studies (e.g., step-size vs. error in Figure 3) enhance reproducibility.

Weaknesses
- The statistics are reported with a fixed seed.
- More discussion of limitations should be given. Experiments focus on semantic edits (e.g., object replacement). Complex edits like style transfer and multi-object manipulation are unexplored, limiting the perceived versatility of the method.

FireFlow's innovative solver design and strong empirical results make it a compelling contribution to fast inversion/editing in ReFlow models.
While limited comparisons and a narrow task scope slightly weaken the narrative, its theoretical rigor, efficiency gains, and practical applicability position it as a valuable advancement for generative ODEs.

Other Comments Or Suggestions: Line 72: "This motivates a closer investigation." The format of Table 3 could be revised.

Questions For Authors: How can the proposed method be enhanced for more challenging editing tasks? Can this method be extrapolated to further accelerate by approximating higher-order methods through caching more intermediate steps?

Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal:

Q1: Results on PIE-Bench are reported with a fixed seed? Variance across multiple runs is not reported.
- Thank you for your valuable feedback. To clarify, T2I generation starts from random noise determined by the seed, while inversion-based editing starts from a fixed image as the initial point of the ODE, and solving the ODE does not involve randomness. Nonetheless, we have re-run the experiments 5 times using different random seeds:

PIE-Bench|Structure Distance↓|PSNR↑|SSIM↑|CLIP Whole↑|CLIP Edited↑
-|-|-|-|-|-
Reported in the paper|0.0271|23.03|0.8249|26.02|22.81
5-runs Mean|0.0271|23.03|0.8249|26.02|22.81
5-runs Std|0.|0.|0.|0.|0.

Q2: Minor gaps in dataset diversity and human evaluation.
- Thank you for your comment. We evaluated FireFlow using three public datasets (MSCOCO Caption, Densely Captioned Images (DCI), and PIE-Bench) across T2I, inversion, and editing tasks. While additional experiments could further support our findings, the current results already show significant improvements over recent methods. Notably, PIE-Bench includes 10 diverse editing tasks, including style transfer, as requested.
- Regarding human evaluation, we are conducting a crowdsourced blind experiment. Due to time constraints, we couldn't gather enough voting results for the rebuttal. However, we will include the user study results in the appendix for a more comprehensive evaluation.

Q3: Without controlling for NFEs or steps for baseline methods, will the claimed speedup (e.g., "3× runtime speedup") be exaggerated?
- Thank you for your insightful question. To clarify, we did not intentionally increase the NFEs or steps of the baseline methods to exaggerate the reported speedup. We followed the standard settings for a fair comparison. Besides, we conducted an ablation study on RF-Inversion and RF-Solver, which shows that reducing the number of NFEs or steps leads to higher image editing failure rates.
This demonstrates that the selected steps are more optimal for editing tasks.

Method|steps|NFEs|Structure Distance↓|PSNR↑|SSIM↑|CLIP Whole↑|CLIP Edited↑|Qualitative Analysis
-|-|-|-|-|-|-|-|-
RF-Inv.|28|56|0.0406|20.82|0.7192|25.20|22.11|Better trade-off
RF-Inv.|14|28|*0.0677*|*18.88*|*0.7160*|26.03|23.20|**Failed to preserve original content**
RF-Solver|15|60|0.0311|22.90|0.8190|26.00|22.88|Better trade-off
RF-Solver|8|32|0.0211|24.93|0.8597|*24.47*|*21.25*|**Failed to conduct editing**

Q4: More discussion on the causes of the limitations should be given. Are there any solutions for these cases?
- Thank you for your valuable comment. As a fast, training-free inversion-based editing method, we believe the limitations of our approach can be attributed to the following factors, along with potential solutions:
- Practical conditions vs. Proposition 4.1: As shown in Table 4, increasing the number of steps contributes to smoother velocity, which in turn improves performance. Additionally, more powerful ReFlow models offer more accurate velocity estimations, helping to bridge the gap between the theoretical assumptions and real-world scenarios. This is reflected in Table 7, where SD3.5 performs better than SD3.
- Editing technique compatibility: Our method works as a fast solver for ReFlow models, which makes it compatible with other ReFlow-based editing techniques. However, it inherits both the strengths and weaknesses of those methods. As such, improving the design of the editing technique itself will further enhance the performance of our approach.
- Prompt suitability: It is well known that prompt engineering plays a crucial role in image generation. The official prompt for PIE-Bench is relatively brief, which could limit further improvements in editing quality. A more sophisticated vLLM-based prompt generator could better align the prompt with the editing task and yield better results.

Q5: Experiments focus on semantic edits (e.g., object replacement).
Complex edits like stylization and multi-object manipulation are unexplored.
- Thank you for pointing that out. We apologize for the bias in the illustrations, which mainly focus on object replacement and may have been misleading. In fact, our approach performs well on more complex tasks, including stylized generation and multi-object manipulation. For additional results, please refer to the following [anonymous link](https://ibb.co/F4XzBrbM).

Q6: Can this method be extrapolated to further accelerate by approximating higher-order methods through caching more intermediate steps?
- Thank you for your insightful suggestion. We are actively exploring this avenue and have already found higher-order solvers compatible with FireFlow, enabling faster inversion with as few as 4 steps. We are also working towards a theoretical proof that a class of numerical solvers can be accelerated in this way.
- However, we believe the current submission is self-contained, and we view this as an exciting direction for future work. We appreciate your suggestion and plan to explore it further in subsequent research.
Symmetry-Aware GFlowNets
Accept (poster)
Summary: This paper aims to rectify equivalent actions in the graph generation process of GFlowNets. In GFlowNet graph generation, actions that result in identical graphs are treated as distinct actions, leading to incorrect learning of their probabilities. The paper presents a theoretical framework for these actions and proposes a solution using reward scaling to correct them. The effectiveness of this approach is validated through both synthetic and real-world experiments.

Claims And Evidence: The theoretical claims presented in this paper are substantiated by detailed proofs presented in the appendix. The experiment settings are outlined in the main text, while the experiment details are further elaborated upon in the appendix.

Methods And Evaluation Criteria: Graph generation is a common use case of GFlowNets, and the existence of equivalent actions is a common problem in related tasks like atom-based and fragment-based molecule generation. The authors propose reward scaling as a simple and effective solution to this issue. However, I have a potential concern with this approach: its exponential complexity could limit its scalability to large and symmetric graphs. In contrast, the positional encoding approach proposed by Ma et al. has polynomial time complexity. While Ma et al. didn't provide their code, it would be helpful for the authors to compare the proposed reward scaling approach with Ma et al.'s approach at different scales. Specifically, the authors could compare the following two methods:
1. The proposed reward scaling approach, which requires computing the automorphism group of terminal graphs.
2. Computing the random walk positional encodings of each graph in the trajectory, e.g., using the `torch_geometric.transforms.AddRandomWalkPE` function from the PyG package.

Since Ma et al. didn't provide their code implementation, the authors don't need to use positional encodings to check for orbit-equivalent actions.
Theoretical Claims: I reviewed the theoretical proofs and didn't find any errors on my own. By the way, in Theorem 4.4, when you mention that the state-action flow constraints $F(s)p_{\bar{\mathcal{A}}}(a|s)=F(s')p_{\bar{\mathcal{A}}}(a|s')$ are satisfied, do you mean that they should be satisfied for every possible action $a$ that leads from $s$ to $s'$? It would be clearer if you could clarify this in the theorem statement to prevent any confusion.

Experimental Designs Or Analyses: I've thoroughly examined the soundness of all experiment settings. The proposed method appears to be effective in generating synthetic and fragment-based molecules. However, it shows limited improvement in generating atom-based molecules. My understanding is that symmetries are uncommon in real-world molecule graphs, and as the graph size increases, these symmetries become even rarer. Consequently, the proposed method may not scale well to large-scale graphs. The authors could validate this by measuring the ratio of equivalent actions in their tasks. This could potentially restrict the scalability of the proposed approach to large-scale tasks. Additionally, I have some concerns regarding the synthetic experiment results:
- In Figure 3(c), why is the performance of reward scaling consistently lower than transition and orbit correction? I suspect that the reward scaling approach only corrects the terminal probabilities, while the intermediate probabilities remain incorrect. Therefore, for methods that rely on intermediate probabilities, such as detailed balance, the performance would still be negatively impacted. This limitation is particularly significant considering that detailed balance outperforms trajectory balance in your task.
- In Figure 3(a), why are there some outliers that deviate from the straight line? From your theoretical analysis, both reward scaling and transition correction are expected to yield perfect terminal probabilities.
Consequently, I would anticipate the predicted probabilities in Figure 3(a) to form a perfect straight line.

Supplementary Material: I reviewed the code provided in the supplementary material and didn't find any problems.

Relation To Broader Scientific Literature: This paper makes a significant and broad contribution to the application of GFlowNets in graph generation tasks. In theory, any application of GFlowNets in this domain should incorporate these correction methods to ensure accurate prediction of rewards.

Essential References Not Discussed: I couldn't find any missing essential references.

Other Strengths And Weaknesses: The strengths and weaknesses of this paper have been thoroughly discussed in the preceding points, and I have nothing further to contribute.

Other Comments Or Suggestions: A minor point on Appendix H.2: you mentioned augmenting edge-level representations with shortest-path lengths. However, shortest-path lengths aren't very expressive, and there are many edge cases where non-equivalent edges in the same graph can have the same shortest-path lengths. I suggest augmenting edge-level representations with the off-diagonal elements of the powers of the random walk matrix. As per the results from Ma et al., this approach should yield nearly perfect performance.

Questions For Authors: My questions for this paper are listed in the preceding points.
Code Of Conduct: Affirmed.
Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your helpful feedback and for recognizing our work as a significant contribution to the community!

---

## **Time Comparison with the Positional Embedding (PE)**

**Since PE must be computed at every transition, its computational cost scales poorly with the trajectory horizon.** To evaluate how each method scales with the number of transitions, we sampled random graphs with varying transition requirements. We measured the time spent on only the major components of each method (e.g., computing automorphisms for Reward Scaling). 8-dimensional PEs were computed for the PE method. The table below shows the additional computational cost incurred by each method. Plot link: https://drive.google.com/file/d/1hhB0TVyHKmclFJ65O__BlX74fRUvNtrS/view?usp=sharing

| Method | 10 Transitions (ms) | 50 Transitions (ms) | 100 Transitions (ms) |
| --- | --- | --- | --- |
| Transition Correction | 24.32 ± 6.28 | 1148 ± 240.3 | 7354 ± 1288 |
| PE (Ma et al., 2024) | 5.49 ± 0.58 | 186.7 ± 29.87 | 997.5 ± 133.9 |
| Orbit Correction (Ours) | 0.215 ± 0.025 | 2.106 ± 0.116 | 6.975 ± 0.421 |
| Reward Scaling (Ours) | 0.024 ± 0.002 | 0.063 ± 0.004 | 0.111 ± 0.008 |

**The experiments clearly show that our methods (Orbit Correction, Reward Scaling) scale much better, by significant margins.** Note that the reported time is required for each trajectory and accumulates over the course of training. This indicates that the exponential complexity of our method is not the current bottleneck in GFlowNets. We can compute the size of the automorphism group in just a few milliseconds, even for graphs with thousands of nodes, far beyond the scale of current applications, which involve fewer than a hundred nodes.

---

## **Clarification of Theorem 4.4**

Your understanding of the theorem is correct: the flow constraints must be satisfied for all actions and states. Thank you for giving us the opportunity to clarify this, and we will revise the manuscript accordingly in the updated version.
---

## **On the Atom-based Generation Task**

Your conjecture suggests that symmetries are uncommon in large, real-world molecular graphs, explaining the limited improvement in the atom-based task. We observed that the average number of symmetries in the synthetic environment is 2.6, whereas molecules in the QM9 training dataset have an average symmetry of 1.3. As you pointed out, this partly explains the limited improvement in the atom-based task. However, a few important points are worth noting:
- Our method does not inherently guarantee improved sample diversity, as this depends on the reward landscape.
- The average symmetry of molecules in the ZINC250k dataset is 2.44, despite it containing larger molecules than QM9 (38 atoms vs. 9 atoms). Additionally, the Spearman correlation between the number of atoms and symmetry is 0.08 in ZINC250k, indicating that real-world molecular graphs do not necessarily exhibit fewer symmetries as they grow larger.
- For larger graphs, where only fragment-based generation is practical, what matters is the symmetry of the fragments, as demonstrated in our paper.

---

## **On the Performance of Reward Scaling in Figure 3(c)**

You are correct in pointing out that reward scaling only corrects the terminal probabilities, while the intermediate probabilities remain incorrect. In contrast, both the transition and orbit correction methods utilize the correct intermediate probabilities. **This explains the performance gap between Reward Scaling and Orbit Correction with detailed balance, as it depends on intermediate probabilities.** As mentioned in the paper, this provides additional signal, leading to faster convergence.

---

## **On Outliers in Figure 3(a)**

We are happy to elaborate on this. **It turns out that the previous experiment stopped training too early.** We resumed training from the last checkpoint with a larger batch size and a lower learning rate, obtaining an almost straight line!
See the plot at the link: https://drive.google.com/file/d/1IHdLn5Ht4zlZ_frqeEOk31X-F-rrIKzC/view?usp=sharing

**In addition, the outliers in Fig. 3(a) correspond to graphs that are challenging to represent using GNNs**; compare Figure 3(a) with Figure 10. As mentioned in the paper, the expressive power of the GNN is another source of incorrect learning, as it can give the same representation to actions in different orbits. Your suggestion to use the off-diagonal elements of the random walk matrix is insightful. However, in this simple environment, shortest-path lengths provide sufficient information to distinguish different actions. The main limitation was that the information is processed by a simple MLP with only one hidden layer. We observed similar learning dynamics with the random walk matrix. While other approaches could address this issue, the current setup already serves its illustrative purpose. We hope this clarifies our points and addresses your concerns.

---

Rebuttal Comment 1.1: Comment: Thank you for your detailed response; I've adjusted my score accordingly. I suggest the authors include the discussion on intermediate probabilities in the related work and experiment sections, as it's currently absent from the paper. Specifically, reward scaling significantly improves computational efficiency, but it comes at the cost of inaccurate intermediate probabilities and a slight performance drop for detailed balance.

---

Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful follow-up, for updating your score, and for your support. We sincerely appreciate your engagement with our work and your valuable suggestions throughout the review process. We appreciate your recommendation to clarify the role of intermediate probabilities and the trade-off between computational efficiency and performance. We will revise the manuscript to incorporate this discussion more clearly in the related work and experimental sections, as suggested.
Thank you again for your time and constructive feedback, which have helped us improve the clarity of our results.
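As an aside, the edge feature suggested in this thread, the off-diagonal entries of powers of the random walk matrix $P = D^{-1}A$, can be sketched in a few lines. This is a plain NumPy illustration (not the implementation from Ma et al. or the paper), and it assumes every node has at least one neighbor.

```python
import numpy as np

def random_walk_edge_features(adj, k_max=4):
    """Off-diagonal entries of random-walk matrix powers P^k (P = D^-1 A)
    for each directed edge (u, v) -- a sketch of the suggested encoding.
    Assumes every node has at least one neighbor."""
    A = np.asarray(adj, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)   # row-normalized random walk matrix
    feats, Pk = {}, np.eye(len(A))
    for _ in range(k_max):
        Pk = Pk @ P
        for u, v in zip(*np.nonzero(A)):   # keep only entries on existing edges
            feats.setdefault((int(u), int(v)), []).append(float(Pk[u, v]))
    return feats

# 3-node path 0-1-2: the two orbit-equivalent edges get identical features.
feats = random_walk_edge_features([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(feats[(0, 1)])  # [1.0, 0.0, 1.0, 0.0]
print(feats[(1, 0)])  # [0.5, 0.0, 0.5, 0.0]
```

On this tiny path, the symmetric edges (0, 1) and (2, 1) receive identical feature vectors while the reverse direction (1, 0) does not, which is the kind of orbit-level distinction the encoding is meant to expose.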
Summary: The authors claim that when applying GFlowNets to the graph generation problem, the presence of equivalent actions—where different actions produce the same graph structure—introduces additional bias. Unlike previous works that require computationally expensive calculations at each iteration, the authors propose a simple yet effective approach to compute the number of graph automorphisms only in the final iteration. This is achieved by modifying the Trajectory Balance (TB) and Detailed Balance (DB) losses based on their theoretical proof. The authors demonstrate the effectiveness of their proposed loss function using both synthetic data and the benchmark molecular generation task. ## update after rebuttal My remaining concerns have been well addressed. Claims And Evidence: I agree that the problem identified by the authors is important but has not been well studied. I also find the proposed method to be theoretically sound and a practical approach to addressing graph symmetry. Methods And Evaluation Criteria: The illustrative example and synthetic graphs effectively demonstrate that the proposed Reward Scaling mitigates the bias present in vanilla GFlowNets, which do not account for graph symmetry. Additionally, they show that this strategy performs as well as transition correction while requiring significantly fewer computational resources. Theoretical Claims: The theoretical claim that handling only orbit equivalence, instead of transition equivalence, is sufficient, is valid. Additionally, their modified TB and DB loss, derived through automorphism correction based on ratios, is sound. Experimental Designs Or Analyses: The experiments using illustrative examples and synthetic graphs validate the proposed reward scaling method, showing competitive results compared to transition correction, which requires high computational cost. Additionally, experiments on molecule generation demonstrate the practicality of this method. 
Supplementary Material: I checked the supplementary materials A, B, C, D, and I. Relation To Broader Scientific Literature: Since this work addresses the challenges commonly encountered in graph generation tasks using GFlowNets and successfully solves them with minimal computational cost, I believe it has the potential to make a significant impact in this area. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The presentation of this paper is clear and easy to follow, making it accessible to readers with varying levels of expertise in the field. Other Comments Or Suggestions: N/A Questions For Authors: 1) I have a remaining concern regarding the method's reliance on 'bliss' for calculating the number of automorphisms. I believe it may occasionally produce incorrect outputs. Could you provide a sensitivity analysis to assess the impact of incorrect automorphism calculations and demonstrate the robustness of the proposed method? 2) Could you provide the total training time compared to exact transition correction or prior work using positional encoding to demonstrate the significance of this method? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful feedback and your recognition of our work's potential impact in this area! --- ## **Impact of Incorrect Automorphism Counting** While we are uncertain about the conditions under which the *bliss* algorithm might fail, we believe it produces exact outputs given the current scale of GFlowNet applications (with fewer than 100 nodes). Moreover, reliance on *bliss* is not essential, as popular alternatives such as *nauty* are available. That said, incorrect automorphism calculations can result in slow convergence or an inaccurate policy, with the exact impact depending on the reward landscape. We consider two specific cases separately. **Random Counting:** We consider a scenario where the software randomly miscounts the size of the automorphism group in 10% of cases—doubling it 5% of the time and halving it 5% of the time. Since the GFlowNet objectives are constructed in the log space, this amounts to random noise of $\log 2$, which incurs no bias. **Systematic Miscounting:** We implemented an approximate automorphism counter using the Weisfeiler-Lehman (WL) algorithm. If the final coloring has $k$ distinct color classes, and class $i$ has size $s_i$, then an approximation is computed as $\prod_{i=1}^k s_i!$. This assumes that nodes within each class can be permuted freely, which yields an upper bound on $|\mathrm{Aut}(G)|$. **Experimental Results** We conducted a simple experiment in a relatively small environment ($|\mathcal X| = 301$) using the TB objective with Reward Scaling. As shown in the plot linked below, Random Counting converges to the exact method but does so slowly. The approximation using the WL algorithm proved too crude for this environment. A more accurate approximation would better align with our method. https://drive.google.com/file/d/12m4uqiZdPxcpsaUYowD7ZWUh3wYCGz99/view?usp=sharing Additionally, we implemented the PE method of Ma et al. 
(2024), which gives an approximate solution to the problem. The experiments were conducted on two synthetic environments: Cliques and Cycles. While it underperformed compared to ours, it proved to be far better than the Vanilla GFlowNets, which highlights the importance of removing bias. | Method | Cliques (L1 Error) | Cycles (L1 Error) | | --- | --- | --- | | Vanilla | 0.693 ± 0.001 | 0.463 ± 0.001 | | PE (Ma et al., 2024) | 0.476 ± 0.026 | 0.233 ± 0.005 | | Reward Scaling (Ours) | 0.464 ± 0.011 | 0.189 ± 0.002 | We obtained similar results in the fragment-based generation task (reported on page 8), where we proposed an approximate solution. Overall, miscalculating automorphisms can degrade performance. However, as long as we can approximate them well, we should always apply the necessary corrections to eliminate bias. --- ## **Scalability Comparison** **Our method scales far better than other correction methods.** We evaluated each method, varying the number of transitions per trajectory. We sampled 100 random graphs for each horizon length category. We measured the time spent only on the major components of each method (e.g., computing automorphisms for Reward Scaling). The table below shows the additional computational cost (compared to Vanilla GFlowNets) incurred by each method per trajectory. Plot link: https://drive.google.com/file/d/1hhB0TVyHKmclFJ65O__BlX74fRUvNtrS/view?usp=sharing | Method | 10 Transitions (ms) | 50 Transitions (ms) | 100 Transitions (ms) | | --- | --- | --- | --- | | Transition Correction | 24.32 ± 6.28 | 1148 ± 240.3 | 7354 ± 1288 | | PE (Ma et al., 2024) | 5.49 ± 0.58 | 186.7 ± 29.87 | 997.5 ± 133.9 | | Orbit Correction (Ours) | 0.215 ± 0.025 | 2.106 ± 0.116 | 6.975 ± 0.421 | | Reward Scaling (Ours) | 0.024 ± 0.002 | 0.063 ± 0.004 | 0.111 ± 0.008 | While the cost increases for all methods as the number of transitions grows, our method clearly scales better. 
It is important to note that the time differences accumulate over the entire training duration. Our implementation of positional encoding is adapted from `torch_geometric.transforms.AddRandomWalkPE`. --- ## **Training Time Comparison** We report the total training time for each method, including the positional encoding (PE) method of Ma et al. (2024). The results for each training step are summarized in the table below. | | 1000 Steps (s) | 3000 Steps (s) | 5000 Steps (s) | | --- | --- | --- | --- | | Transition Correction | 1338 ± 79 | 4122 ± 168 | 6859 ± 80 | | PE (Ma et al., 2024) | 1322 ± 44 | 4001 ± 131 | 6604 ± 152 | | Orbit Correction (Ours) | 1176 ± 32 | 3577 ± 100 | 5987 ± 168 | | Reward Scaling (Ours) | 1178 ± 31 | 3584 ± 97 | 5999 ± 146 | While PE is faster than Transition Correction (which performs several isomorphism tests), our methods (Reward Scaling, Orbit) are faster still. We used one processor with one GPU (24GB TITAN RTX GPU, Intel Xeon Silver 4216 CPU). Overall, our method has lower computational cost compared to other methods and scales better. --- Rebuttal Comment 1.1: Comment: Thank you for your thorough response to my concerns. All of these concerns have been well addressed. --- Reply to Comment 1.1.1: Comment: We are happy to hear that your concerns have been fully addressed, and we appreciate your thoughtful follow-up and continued support. Your feedback throughout the review process has helped us improve the clarity of our work.
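The WL-based approximate automorphism counter described in the rebuttal above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: it runs 1-WL color refinement and then takes the product of factorials of the final color-class sizes, i.e. the free-permutation upper bound on $|\mathrm{Aut}(G)|$ (exact on the tiny examples below, but generally crude, as the rebuttal's experiment shows):

```python
from collections import Counter
from math import factorial

def wl_colors(adj):
    """1-WL color refinement; adj maps each node to its set of neighbors."""
    colors = {v: 0 for v in adj}
    for _ in range(len(adj)):
        # New signature: own color plus the sorted multiset of neighbor colors.
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        # Canonicalize signatures to small integer color ids.
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if new == colors:  # partition is stable
            break
        colors = new
    return colors

def aut_upper_bound(adj):
    """Product of factorials of WL color-class sizes: an upper bound on
    |Aut(G)|, assuming nodes within each class can be permuted freely."""
    bound = 1
    for size in Counter(wl_colors(adj).values()).values():
        bound *= factorial(size)
    return bound

triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path3 = {0: {1}, 1: {0, 2}, 2: {1}}
print(aut_upper_bound(triangle))  # → 6 (here exact: |Aut(C3)| = 6)
print(aut_upper_bound(path3))     # → 2 (here exact: |Aut(P3)| = 2)
```

On less regular graphs the bound can overshoot badly (e.g. two disjoint edges give 4! = 24 versus the true 8), which is consistent with the rebuttal's finding that the WL approximation was too crude for their environment.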
Summary: The authors propose a formal characterization of the equivalent action problem, first outlined by Ma et al. (2024), along with a simple fix by re-scaling the reward to account for automorphisms. Claims And Evidence: To my best judgment, the work is theoretically sound, and the authors provide compelling evidence that their method fixes the pathology they set out to solve—with a few caveats pointed out below. Methods And Evaluation Criteria: The comparison should include Ma et al. (2024)'s approach, including comparisons regarding computing time. Theoretical Claims: I didn't check the proofs closely, but I found the notation overcharged and sometimes confusing. For instance, the same $\pi$ is used for sets, edges, and graphs. Also, $\mathcal{E}$ is used both to denote a set of ordered pairs of graphs and to denote a function outputting the set of graphs reachable from $G$ in a single transition. There are also things that are not formally defined, like $\mathcal{E}(G) \cap s'$, which I guess assumes $s'$ is an equivalence class of graphs. Experimental Designs Or Analyses: I like the synthetic experiment showcasing the pathology in question. However, I would also like to point out that analyzing the results in Table 1 alone might be misleading, as the quantities presented do not measure sampling correctness. A measure like [1]'s FCS would better proxy distributional correctness. [1] https://openreview.net/forum?id=9GsgCUJtic Supplementary Material: I skimmed through the appendices. Relation To Broader Scientific Literature: This work addresses a well-known issue in the GFlowNet literature with an easy and provably correct fix --- which could become a standard when implementing GFlowNets to sample from distributions over anonymous graphs. Essential References Not Discussed: To the best of my knowledge, the authors properly recognize the prior art. 
Other Strengths And Weaknesses: ## Strengths * Proposes a formal treatment of the equivalent actions problem, along with an easy fix by rescaling the reward * The manuscript is well written, and developments are mostly easy to follow * Nice illustration in a synthetic experiment, along with experiments in molecule generation ## Weaknesses * Missing direct comparison with Ma et al.'s (2024) original fix * Measures of performance for the molecule generation experiments do not reflect goodness-of-fit (i.e., distributional correctness) * Notation is sometimes confusing (see "Theoretical Claims" above) Other Comments Or Suggestions: N/A Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We appreciate your thoughtful feedback and your recognition of our work as important. --- ## **Comparison with Ma et al. (2024)** ### **Implementation of Positional Encoding (PE) method** **To better position our work, we implemented the PE method to compare with Ma et al. (2024)** and made our best effort to reproduce their approach. We computed 8-dimensional PEs for each node, with edge-level PEs obtained by summing the corresponding node-level PEs. Finally, actions sampled from the policy were compared with other forward/backward actions based on their PEs to identify equivalent actions. ### **Performance Comparison** **Since PE is an approximate solution, it underperforms compared to our method in terms of $L_1$ error.** We validated this in two synthetic environments: the Cliques and Cycles environments. The table below presents the results for the Trajectory Balance objective. | | Cliques (L1 Error) | Cycles (L1 Error) | | --- | --- | --- | | PE (Ma et al., 2024) | 0.476 ± 0.026 | 0.233 ± 0.005 | | Reward Scaling | 0.464 ± 0.011 | 0.189 ± 0.002 | For the Detailed Balance objective, however, it is difficult to predict whether Reward Scaling will outperform, as the PE method corrects intermediate probabilities similarly to Orbit Correction. In our experiments, PE performed on par with Orbit Correction. | | Cliques (L1 Error) | Cycles (L1 Error) | | --- | --- | --- | | PE (Ma et al., 2024) | 0.129 ± 0.007 | 0.190 ± 0.009 | | Orbit Correction (Ours) | 0.122 ± 0.008 | 0.184 ± 0.005 | | Reward Scaling (Ours) | 0.150 ± 0.005 | 0.188 ± 0.025 | However, the PE method, as proposed in Ma et al. (2024), requires task-specific tuning and is not applicable to fragment-based generation. Therefore, we do not recommend using PE. Orbit Correction offers clear advantages over PE: exact correction, faster computation, broader applicability, and ease of implementation. 
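For context, the quantity that both Reward Scaling and Orbit Correction depend on is $|\mathrm{Aut}(G)|$, computed once per terminal graph. A minimal brute-force sketch (illustrative only; the paper uses *bliss*, with *nauty* as a common alternative, both of which scale far beyond this):

```python
from itertools import permutations

def aut_size(nodes, edges):
    """|Aut(G)| by brute force: count node relabelings that map the
    undirected edge set onto itself. Exponential in |V|; bliss/nauty
    do this efficiently for graphs of realistic size."""
    edge_set = {frozenset(e) for e in edges}
    total = 0
    for perm in permutations(nodes):
        relabel = dict(zip(nodes, perm))
        image = {frozenset((relabel[u], relabel[v])) for u, v in edges}
        total += image == edge_set
    return total

# 4-cycle: automorphism group is the dihedral group D4, of order 8.
print(aut_size([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)]))  # → 8
```

Given this count, the rebuttal's Reward Scaling amounts to a single extra $\log|\mathrm{Aut}(G)|$ term on the terminal log-reward, with its sign and exact form fixed by the paper's Theorem 4.6.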
### **Scalability Comparison** **Since PE must be computed at every transition, its computational cost scales poorly with the trajectory horizon.** We evaluated each method, varying the number of transitions per trajectory. We sampled 100 random graphs for each horizon length category. The table below shows the additional computational cost (compared to Vanilla GFlowNets) incurred by each method per trajectory. Plot link: https://drive.google.com/file/d/1hhB0TVyHKmclFJ65O__BlX74fRUvNtrS/view?usp=sharing | Method | 10 Transitions (ms) | 50 Transitions (ms) | 100 Transitions (ms) | | --- | --- | --- | --- | | Transition Correction | 24.32 ± 6.28 | 1148 ± 240.3 | 7354 ± 1288 | | PE (Ma et al., 2024) | 5.49 ± 0.58 | 186.7 ± 29.87 | 997.5 ± 133.9 | | Orbit Correction (Ours) | 0.215 ± 0.025 | 2.106 ± 0.116 | 6.975 ± 0.421 | | Reward Scaling (Ours) | 0.024 ± 0.002 | 0.063 ± 0.004 | 0.111 ± 0.008 | While the cost increases for all methods as the number of transitions grows, our method clearly scales better. It is important to note that the time differences accumulate over the entire training duration. ### **Training Time Comparison** **While PE is faster than Transition Correction (which performs several isomorphism tests), our methods (Orbit Correction, Reward Scaling) are faster.** We present the actual running time of each method on the synthetic environment presented in the paper. | | 1000 Steps (s) | 3000 Steps (s) | 5000 Steps (s) | | --- | --- | --- | --- | | Transition Correction | 1338 ± 79 | 4122 ± 168 | 6859 ± 80 | | PE (Ma et al., 2024) | 1322 ± 44 | 4001 ± 131 | 6604 ± 152 | | Orbit Correction (Ours) | 1176 ± 32 | 3577 ± 100 | 5987 ± 168 | | Reward Scaling (Ours) | 1178 ± 31 | 3584 ± 97 | 5999 ± 146 | We used one processor with one GPU (24GB TITAN RTX GPU, Intel Xeon Silver 4216 CPU). 
--- ## **Measure of Goodness-of-Fit** **We emphasize that the purpose of the molecule generation task is to assess whether the correction methods can improve the generation of high-reward samples in realistic reward settings, rather than to demonstrate the correctness of our method.** Therefore, we included the correlation plot only for fragment-based generation (Fig. 4). We obtained improved results with correction for atom-based generation as well, which will be incorporated into the revised version. We used correlation as a metric because it is a widely adopted measure of goodness-of-fit in several previous works (Malkin et al., 2022; Malkin et al., 2023; Madan et al., 2023). While we considered your suggestion to use the recently proposed FCS score (Silva et al., 2025), we concluded that further investigation is needed to determine the best practice for its application. --- ## **On Notations** While we summarized our notations in Appendix A, we acknowledge that some of our notation can be confusing. We are carefully reviewing our notation and theoretical explanations to make the paper more readable in the revised version. We hope this addresses your concerns regarding the weaknesses of our paper.
Summary: This paper addresses the equivalent action problem in GFlowNets for graph generation, providing a theoretical foundation and solution to a bias previously identified in Ma et al. (2024). While the paper offers valuable theoretical insights and formal proofs, the core solution (reward scaling based on automorphism counts) appears to be fundamentally the same as what was previously proposed. The paper's novelty is therefore primarily in its theoretical analysis rather than in the proposed method itself, raising questions about sufficient contribution for publication. Claims And Evidence: The paper's claims are mathematically proven in the appendix. The authors establish that: - GFlowNets without correction exhibit systematic bias toward graphs with fewer symmetries in atom-based generation and toward symmetric components in fragment-based generation. - This bias can be corrected by scaling rewards based on the automorphism group size. - This correction is sufficient for both TB and DB objectives. These claims are supported by mathematical proofs and experimental validation. Methods And Evaluation Criteria: The proposed reward scaling method is mathematically sound and the evaluation metrics used are relevant: - L1 error for synthetic experiments - Diversity, Top K diversity, reward metrics for molecule generation These are appropriate for measuring both the theoretical correctness and practical utility of the approach. Theoretical Claims: These claims are supported by mathematical proofs that seem correct Experimental Designs Or Analyses: The illustrative example with uniform reward is particularly effective at visualizing the bias. The experimental settings span synthetic graphs and molecule generation tasks, providing good coverage of potential applications. One weakness is the lack of direct experimental comparison with the positional encoding approach from Ma et al. (2024), which would have strengthened the paper's positioning. 
Supplementary Material: skimmed only Relation To Broader Scientific Literature: The paper's relationship to Ma et al. (2024) is the most critical aspect to evaluate. The authors acknowledge that Ma et al. (2024) was the first to identify the equivalent action problem in GFlowNets and propose methods to address it. However, the core solution in both papers appears to be fundamentally the same: accounting for graph automorphisms to correct the sampling bias. While this paper provides a more rigorous theoretical foundation and generalizes to fragment-based generation, the central mechanism of the solution is not novel. The paper states: "our work provides the first rigorous theoretical foundation for the correction," which is accurate, but the actual solution method appears to be a formalization of what was already proposed rather than a new approach. Essential References Not Discussed: na Other Strengths And Weaknesses: Appendix C contains an unclear explanation. The statement "permuting nodes 3 and 4 yields the same graph" doesn't seem accurate for the example shown. A more complex permutation (e.g., 1→4, 2→5, 3→1, 4→3, 5→6, 6→2) would be needed to map between the two graphs. Otherwise, the paper is well-written and easy to follow Other Comments Or Suggestions: na Questions For Authors: na Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We sincerely appreciate your thoughtful comments and valuable feedback. In particular, we sincerely hope that our response helps the evaluation recognize our contributions, especially in providing a rigorous theoretical foundation and conducting extensive experiments on the problem. --- ## **Clarification on the Differences from Ma et al. (2024)** **Our proposed approach is methodologically different from that of Ma et al. (2024).** Their method explicitly identifies actions that lead to the same next state to compute exact transition probabilities, using positional encoding as an approximate way to find such actions. In contrast, our approach is based on Theorem 4.6 and does not rely on identifying equivalent actions. This brings clear operational advantages over the approach of Ma et al. (2024). **Exactness:** While our method provides an exact bias correction, the PE method can only approximate the target distribution. **Faster Computation:** Our method is orders of magnitude faster than the PE method, as PE must be computed for every transition (see Appendix I and the comparison below). **Broader Applicability:** Our method is applicable to any graph type and can be extended to fragment-based generation with minimal modifications. In contrast, the PE method as proposed in Ma et al. (2024) cannot be applied to graphs with edge types or fragment-based generation. **Straightforward Implementation:** Our method requires only scaling the final reward, making it straightforward to implement. In contrast, the PE method must identify both forward and backward equivalent actions and sum their probabilities, which may require substantial modifications to existing codebases. One of our key theoretical findings is that graph automorphism is central to this problem, and in this sense, all solutions may appear fundamentally similar in hindsight. 
However, our approach offers clear theoretical and practical advantages, providing exact bias correction while maintaining broad applicability across different graph types. We further validate these arguments in the following comments. --- ## **Experimental Comparison with Ma et al. (2024)** **To better position our work, we implemented the PE method to compare with Ma et al. (2024)** and made our best effort to reproduce their approach. Implementation details, as well as plots for additional experiments, can be viewed through the following link: https://docs.google.com/document/d/1GhQgcaxQ32uXM7s6xfteGWsBovdm-JvQNW_m3hAG30Q/edit?usp=sharing ### **Performance Comparison** **Since PE is an approximate solution, it underperforms compared to ours in terms of $L_1$ error.** We validated this in two synthetic environments: Cliques and Cycles. The results are presented using the Trajectory Balance objective, while the results for the Detailed Balance objective are provided in the linked document. | | Cliques (L1 Error) | Cycles (L1 Error) | | --- | --- | --- | | PE (Ma et al., 2024) | 0.476 ± 0.026 | 0.233 ± 0.005 | | Reward Scaling (Ours) | 0.464 ± 0.011 | 0.189 ± 0.002 | ### **Scalability Comparison** **Since PE must be computed at every transition, its computational cost scales poorly with the trajectory horizon.** We evaluated each method, varying the number of transitions per trajectory. We sampled 100 random graphs for each horizon length category. We measured the time spent on only the major components of each method (e.g., computing automorphisms for Reward Scaling). The table below shows the additional computational cost incurred by each method per trajectory. 
| Method | 10 Transitions (ms) | 50 Transitions (ms) | 100 Transitions (ms) | | --- | --- | --- | --- | | PE (Ma et al., 2024) | 5.49 ± 0.58 | 186.7 ± 29.87 | 997.5 ± 133.9 | | Orbit Correction (Ours) | 0.215 ± 0.025 | 2.106 ± 0.116 | 6.975 ± 0.421 | | Reward Scaling (Ours) | 0.024 ± 0.002 | 0.063 ± 0.004 | 0.111 ± 0.008 | While the cost increases for all methods as the number of transitions grows, our method clearly scales better. It is important to note that the time differences accumulate over the entire training duration. ### **Training Time Comparison** We measured the actual training time with the same setting used for our paper. | | 1000 Steps (s) | 3000 Steps (s) | 5000 Steps (s) | | --- | --- | --- | --- | | PE (Ma et al., 2024) | 1322 ± 44 | 4001 ± 131 | 6604 ± 152 | | Orbit Correction (Ours) | 1176 ± 32 | 3577 ± 100 | 5987 ± 168 | | Reward Scaling (Ours) | 1178 ± 31 | 3584 ± 97 | 5999 ± 146 | Our method is faster than the PE method in terms of actual training time. --- **Appendix C**: We appreciate the feedback. We will revise it in the revision. --- ## **Conclusion** Overall, our method offers clear advantages over Ma et al. (2024), which are validated through additional experiments. This demonstrates the novelty of our solution. We compared our method with Ma et al. (2024) to better position our work, and we will incorporate these results into the revised version.
Enhancing Target-unspecific Tasks through a Features Matrix
Accept (poster)
Summary: Partial parameter optimization methods face challenges in handling target-unspecific tasks due to overfitting, which causes the model to lose its general knowledge essential for these tasks. To address this issue, this paper proposes a regularization technique using a Feature Matrix (FM). Through extensive evaluations across various tasks, the proposed method demonstrates significant improvements in enhancing performance on target-unspecific tasks. Claims And Evidence: The paper asserts that partial parameter optimization struggles with novel classes due to overfitting, as evidenced by the results in Table 1. To address this issue, the proposed Feature Matrix (FM) enhances generalization by preserving general knowledge through multiple hand-crafted prompts. Experimental results demonstrate that FM significantly improves novel class accuracy compared to previous approaches. Methods And Evaluation Criteria: The proposed methods are straightforward and easy to comprehend. The paper evaluates their effectiveness using a diverse set of benchmark datasets. Theoretical Claims: It appears that the paper does not present theoretical claims that require formal proof. Experimental Designs Or Analyses: The overall experimental setup appears to be well-designed, and the paper includes essential analyses, such as base and novel classification performance, the impact of additional parameters, prompt length, and cost comparisons with previous methods. Supplementary Material: I reviewed the supplementary material, and it contains various ablation studies, including analyses of hyperparameters such as prompt length. Relation To Broader Scientific Literature: This paper builds on previous research in vision-language models (VLMs), prompt tuning, and generalization techniques. It also introduces the Feature Matrix (FM) as a novel regularization method to enhance generalization in target-unspecific tasks. 
Essential References Not Discussed: I am not a strict expert in this domain, so I cannot fully confirm whether the paper includes all the necessary references. However, it appears to cite the relevant works and uses them effectively for comparison to validate its proposed method. Other Strengths And Weaknesses: - The paper introduces the Feature Matrix (FM) regularization technique to enhance the generalization of unseen classes in prompt learning. The proposed methods are well-structured, intuitive, and easy to understand. - To validate their approach, the paper evaluates performance across various benchmarks and includes essential analyses, such as hyperparameter tuning and cost comparisons. - However, a notable limitation is the lack of theoretical analysis, which could strengthen the explanation of why FM improves generalization. Other Comments Or Suggestions: No other suggestions. Questions For Authors: Overall, the work is well-structured, easy to understand, and presents a plausible approach. The proposed methods effectively enhance performance on novel classes compared to previous approaches. One notable limitation is the lack of theoretical analysis, which could provide a stronger justification for why the proposed methods improve generalization. However, this does not seem to be a critical issue in evaluating the paper. Since I am not an expert in this domain, I believe it would be best to wait for comments from other reviewers for a more comprehensive assessment. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # **Thanks for review**: We thank the reviewer very much for appreciating our work and for the valuable time devoted to our paper; we express our **heartfelt** gratitude. ## Response: Our work builds on pre-trained CLIP (ICML 2021) and on learnable and non-learnable prompts, exploring what is achievable under existing conditions and theory. * Our initial idea is to **minimize** the introduction of new theory and to explore performance based on existing techniques. * We focused on the performance of fine-tuning multimodal models on large-scale datasets for **realistic** application scenarios (11 datasets). * Our method is plugin-based, which offers practical value through its **convenience** and low coupling. We thank the reviewer for the insights on the theoretical aspects of our work; a theoretical analysis would certainly strengthen its rigor. Following the reviewer's **guidance**, we will continue to improve this manuscript, including but not limited to a theoretical analysis of the experiments. We **promise** to make these revisions before submitting the final version.
Summary: The paper proposes to mitigate the overfitting observed in parametric optimization methods when optimizing on the target domain. To address this issue, the authors introduce a feature matrix-based approach. This method leverages features extracted from multiple handcrafted prompts, combined with features from various classes, and employs a contrastive loss to optimize the process. The aim is to excavate general feature information, although some of it may not be directly relevant to the task. They validate their approach by integrating it with existing methods and evaluating it on multiple datasets. The experimental results, along with the average performance improvements across datasets, demonstrate that their method not only achieves better overall performance but also delivers significant improvements in certain cases. ## update after rebuttal Thanks for the efforts. I recommend the authors include the full version of Table 8 for every dataset in the paper (as promised). Moreover, the implementational details of the proposed method in conjunction with the other methods are not clear; hence, also include this in the paper from the rebuttal. In response, I have decided to retain my score. Claims And Evidence: My main critique is about detailing their method (proposal of the paper) and substantiating it with appropriate text and experiments. – For the claim that common prompt learning methods suffer from overfitting, leading to poor generalization in novel classes (Lines 162–164), there are no references or empirical evidence. Can the authors provide them to substantiate this? – What does "Low β" mean in Line 188? The authors should elaborate. – In the methodology, the claim in the abstract regarding the use of regularization is not clear. This term appears in the introduction and abstract but nowhere else in the paper. Can the authors clarify this? – The methodology does not explain convincingly why cosine similarity was chosen. 
Methods And Evaluation Criteria: Yes. Theoretical Claims: The paper includes minimal theory and theory investigation. Experimental Designs Or Analyses: – While the additional experiments exploring target k-shots and learning depth are interesting, they are insufficient. The authors should include more experiments that provide insights into their method. For instance, demonstrate how many P_{1,2,3} samples are necessary for their method. Similarly, how does the method perform with increases or decreases in t_{k}? What are the assumptions for the prompts, and are there any guidelines? – I recommend providing the full version of Table 8 that will detail the utility of \beta for every dataset. Currently, with average performance it's unclear which similar features between source and target would be useful – The authors vaguely mention the need for a ‘set’ of prompts (line 377) but fail to specify how many are needed. Can this be clarified and elaborated? Supplementary Material: All, yes. Relation To Broader Scientific Literature: The paper lacks significant contribution in terms of novelty. Essential References Not Discussed: No. Other Strengths And Weaknesses: – Can the authors include the algorithm for their method? Additionally, for one of the baselines used, can they incorporate their algorithm into the paper’s algorithm (‘Ours’)? This would provide additional details and help describe the steps more clearly. – In the methodology, after Equation 5 is introduced, what happens next? How are the features used in the model for the baseline methods to integrate the proposal? I recommend adding a subsection in the methodology to explain this process clearly. Currently, it is unclear. Other Comments Or Suggestions: Figure 2 doesn't add value to the paper. I recommend shifting it to the appendix. Questions For Authors: The authors do not provide enough details regarding methodology and experiments that would investigate their proposal even deeper. 
As a reader, I was intrigued to know more about the features matrix and regularization, which are not sufficiently detailed.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: # **Thanks for review**: We thank the reviewers: 1) for providing rich and **comprehensive** comments on our work, 2) for investing a significant amount of time and effort in reviewing our work, and 3) for providing practical guidance. We will add acknowledgements in the final version to express our **gratitude**.

## Response:

>**Q1**: For the claim that common prompt learning methods suffer from overfitting, leading to poor generalization in novel classes (Lines 162–164), there are no references or empirical evidence.

**A1**: Thank you very much for the reminder. We apologize that our writing here is not well organized. In lines 162–164, "these methods" refers to the methods listed in lines 158–160 of our paper: CoCoOp, CoOp, KgCoOp, ProDA, DPLQ, and PLOT.

>**Q2**: What does "Low-β" mean in Line 188?

**A2**: 'Low-β' refers to sorting the scores from low to high and taking the first β values. We **promise** to state this clearly in the final version.

>**Q3**: This term 'regularization' appears in the introduction and abstract but nowhere else in the paper.

**A3**: Our method is a **non-traditional** form of regularization. In other words, the "regularization" mentioned in our paper is actually a new method. For this reason, the term 'regularization' does not appear elsewhere in our paper.

>**Q4**: The methodology does not explain convincingly why cosine similarity was chosen.

**A4**: Our backbone (CLIP) and previous works (CoOp, CoCoOp, ProDA, KgCoOp, PLOT, MaPLe, PromptSRC, etc.) use cosine similarity when aligning the text and visual branches. Therefore, to ensure a **fair** comparison, our method also uses cosine similarity here. On the same basis, cosine similarity is used to align the two modalities in the 'scores matrix' stage.
Our idea is to **minimize** the introduction of other measurement methods, and to explore performance based on existing methods (CLIP, CoOp, MaPLe, etc.) and hand-crafted templates.

>**Q5**: Demonstrate how many prompt templates are necessary for their method. How does the method perform with increases or decreases in features of templates?

**A5**: We used 60 manual prompt templates, whose contents are listed in Appendix A.6; the number 60 is fixed. We apologize for not including this number in the method details. We **promise** to state it clearly in the final version. Thank you for the careful review.

>**Q6**: I recommend providing the full version of Table 8 that will detail the utility of $\beta$ for every dataset. Currently, with average performance it's unclear which similar features between source and target would be useful.

**A6**: Thank you for the comment. Due to the limited word count of the rebuttal, we report the specific base and novel values here to illustrate the relationship between source and target, and we **promise** to provide the values for each dataset in the final version.

Base-to-novel generalization of 11 datasets ($\beta$)

| $\beta$ | 3 | 4 | 5 | 6 |
|:---|:---:|:---:|:---:|:---:|
| Base | 80.19 | 83.47 | **85.70** | 84.46 |
| Novel | 76.33 | 76.51 | **77.35** | 76.90 |
| HM | 78.21 | 79.81 | **81.32** | 80.51 |

>**Q7**: How are the features used in the model for the baseline methods to integrate the proposal? I recommend adding a subsection in the methodology to explain this process clearly.

**A7**: Thank you for the comment. In the final version, we promise to add pseudocode for the algorithm flow. In addition, we **promise** to add a detailed implementation subsection on the plug-and-play application to CoOp, CoCoOp, MaPLe, and PromptSRC.
Our brief introduction to the plug-and-play concept is as follows:

* In Figure 3, Equation 5 refers to the part below the grey dashed box. After Equation 5, our plug-and-play framework builds on its results. The input to our plug-and-play architecture consists of two parts: 1) text features generated from the 60 manual prompt templates, and 2) visual features generated from learnable visual embeddings. Therefore, learnable visual embeddings are necessary for our proposed work.
* In Figure 2, CoOp and CoCoOp do not have a visual embedding part, so we first add learnable visual embeddings to their architectures and then integrate our proposed method on top of the modified architectures. For MaPLe and PromptSRC, our proposed method can be integrated directly.
* It is worth noting that the input of the text encoder consists of two parts: 1) manual prompt templates, and 2) learnable prompt tokens, which vectorize the text into a learnable matrix. These two parts do not affect each other during the input phase, as they are two separate input processes.
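For concreteness, the CLIP-style cosine-similarity alignment described in A4 can be sketched as follows. This is a minimal illustration with toy vectors; the function names and dimensions are ours, not the paper's implementation.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def score_matrix(text_features, image_features):
    """Pairwise cosine-similarity scores between text and visual features,
    as used to align the two modalities in CLIP-style prompt methods."""
    return [[cosine_similarity(t, v) for v in image_features]
            for t in text_features]

# Toy example: 2 text features and 2 image features (3-dim each).
texts = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
images = [[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
scores = score_matrix(texts, images)
# scores[0][0] == 1.0 (identical direction), scores[1][1] == 0.0 (orthogonal)
```

In the actual plug-and-play pipeline, the rows would be features from the 60 manual prompt templates and the columns would be visual features from the learnable embeddings.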
Summary: ## Summary

This paper addresses the out-of-distribution generalization problem in prompt fine-tuning of "CLIP"-style models. The challenge is common: fine-tuning on a specific task boosts performance on that task but hurts performance on other general tasks. This work addresses the problem by adding a regularization during fine-tuning. The regularization, applied via a contrastive process, encourages the model (during optimization) to output representations close to those of the pretrained model. This is a common yet effective form of regularization. As a regularization, it should help different fine-tuning methods, and the experiments verify this hypothesis on many tasks.

-------

## Strengths

- "Negative interference" has long been a problem in model-based algorithms [1]. Fine-tuning a neural network is a common scenario that suffers from negative interference, so it is interesting and important to study negative interference when fine-tuning neural networks.
- The regularization, which encourages representations to stay close to the pretrained ones during fine-tuning, is a reasonable and intuitive approach to reducing negative interference.
- Many experimental results justify the positive effect of the proposed method.

----

## Weakness

- The novelty of the regularization is questionable. Many related works share the same idea: for example, L2 weight decay towards the pretrained weights rather than zero, KL-divergence regularization when fine-tuning transformers with reinforcement learning, weight averaging of pretrained and fine-tuned models, etc.

[1] Atkeson, Christopher G., Andrew W. Moore, and Stefan Schaal. "Locally weighted learning." Lazy learning (1997): 11-73.

Claims And Evidence: check summary.

Methods And Evaluation Criteria: check summary.

Theoretical Claims: no theory.

Experimental Designs Or Analyses: yes.

Supplementary Material: no.

Relation To Broader Scientific Literature: check summary.
Essential References Not Discussed: no. Other Strengths And Weaknesses: check summary. Other Comments Or Suggestions: check summary. Questions For Authors: check summary. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: # **Thanks for review**: Thank you to the reviewer for taking the time to study our work in detail and for the valuable reference. We would like to express our **gratitude**, and we will **carefully** revise the paper according to the reviewer's comments. In addition, thank you for providing such a comprehensive reference [1]; we have read this article. It investigates many restriction and regularization schemes, which we believe are very valuable. Negative interference has long been a problem in model-based algorithms, and it is interesting and important to study it. Our future work will conduct an in-depth investigation of these regularization functions in the context of prompt learning. We will add this article to our citations and introduce it in the related work.

[1] Atkeson, Christopher G., Andrew W. Moore, and Stefan Schaal. "Locally weighted learning." Lazy learning (1997): 11-73.

## Response:

>**Q1**: L2 weight decay towards pretrained weights rather than zero, KL-divergence regularization when fine-tuning transformers with reinforcement learning, weight averaging of pretrained and fine-tuned models, etc. Many related works share the idea.

**A1**: Thank you very much for the specific technical analysis and insights. We would like to highlight some **unique** aspects of our work, hoping the reviewer will gain a new perspective on it.

* The **contribution** of our method: fully utilizing manual prompt templates to mine information.
* Our core module is a **non-traditional** form of regularization. In other words, the "regularization" mentioned in our paper is actually a **new** method.
* Compared to L2 and KL-divergence regularization, our method is flexible and **delves** into specific semantics. Our proposed scheme is closely integrated with manual prompt templates.
In future work, we will carefully study and **draw on** the suggestions provided by the reviewers to further improve this line of work.
Summary: This paper proposes a Features Matrix regularization method to improve model performance on target-unspecific tasks. FM preserves general knowledge and reduces overfitting by extracting and leveraging semantic information from diverse inputs. The approach, compatible with existing frameworks, enhances generalization and performs well across various datasets, including 11 datasets with limited labeled data. It also incorporates pre-trained CLIP features and multiple handcrafted prompts to prevent forgetting essential knowledge, demonstrating state-of-the-art performance on target-unspecific tasks.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: Yes.

Theoretical Claims: No, this paper does not make theoretical claims.

Experimental Designs Or Analyses: Yes, I have checked the experimental designs and analyses.

Supplementary Material: This paper does not have supplementary material.

Relation To Broader Scientific Literature: This paper proposes a better way to combine manually designed prompts and trainable tokens to improve performance on target-unspecific tasks.

Essential References Not Discussed: No.

Other Strengths And Weaknesses: 1. The illustrations are stylish, but their detailed explanation can be improved. The writing can be improved to make the paper clearer. 2. The paper discusses the limitations of the proposed method, which is good. 3. The experiments are comprehensive, but more ablation studies are needed to verify the effectiveness of the proposed method.

Other Comments Or Suggestions: Some parts of the paper can be further improved for clarity, for example, a more detailed discussion of the base classes and novel classes in the introduction. The illustration of different types of prompt design (Fig. 2) is not very clear; more details can be added to the figure caption.

Questions For Authors: 1.
What are the details of the split into base classes and novel classes in the experiments? 2. Some ablation studies could be provided to verify the effectiveness of the trainable text tokens and image tokens, because one simple baseline may be to train an "MoE" to dynamically select the manually designed prompt.

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: # **Thanks for review**: We thank the reviewer for the valuable time spent considering our manuscript. The comments are very useful for our work, and we express our heartfelt **gratitude**.

## Response:

>**Q1**: The detailed explanation of the illustrations can be improved.

**A1**: Thank you very much for the important comments on the illustrations. Our initial focus in designing them was on "simplicity", "contrast", and "artistic design"; however, we failed to describe some components in the diagrams. We **promise** to make serious revisions. Our explanation of the symbols in the figure is as follows: the "snowflake pattern" represents frozen parameters, the "flame pattern" represents learnable parameters, "Deep" represents learnable tokens embedded in several layers of the encoder, "T" represents learnable text embeddings, "V" represents learnable visual embeddings, "Class priors" indicates which category an input belongs to, "Tuning Similarity" represents the cosine-similarity computation of the fine-tuned architecture, and the light grey "Similarity" represents the cosine-similarity computation of the frozen architecture. In the MaPLe architecture diagram, "e" represents the matrix function that connects the encoders of the two modalities across several layers.

>**Q2**: The more detailed discussion of the base classes and novel classes in the introduction.

**A2**: We will incorporate these revisions in the final version and express our **gratitude** for the valuable review. In terms of writing, we need to improve our manuscript in various respects, and we **promise** to make serious revisions. Our explanation of the base and novel classes is as follows: in the base-to-novel generalization task, the datasets are divided into base and novel classes. The model is trained on the base classes and tested on both the base and novel classes.
The number of base and novel classes is the **same**, which means that all classes in a dataset are evenly divided into two groups; the split is random.

>**Q3**: Some ablation studies can be provided to verify the effectiveness of trainable text tokens and image tokens.

**A3**: The experiments proposed by the reviewer are very meaningful and have greatly **helped** our work. We will include them in the main paper, and we express our gratitude to the reviewer. We have now conducted ablation experiments on text embeddings and visual embeddings **separately** to explore more phenomena.

1) We **add experiments** on the **length** of the text and image tokens to test their effectiveness. Specifically, we set different lengths for the learnable text tokens and visual tokens. When ablating the visual embeddings, the text embedding length was kept at its best value (4), and vice versa.

* Base-to-novel generalization of 11 datasets (Textual Tokens Length)

| Textual Tokens Length | 1 | 2 | 4 | 6 | 8 | 10 |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| HM (Ours based on MaPLe) | 77.11 | 79.00 | **80.32** | 79.12 | 78.81 | 77.03 |
| HM (Ours based on PromptSRC) | 78.02 | 79.34 | **81.32** | 81.00 | 80.86 | 78.94 |

* Base-to-novel generalization of 11 datasets (Visual Tokens Length)

| Visual Tokens Length | 1 | 2 | 4 | 6 | 8 | 10 |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| HM (Ours based on MaPLe) | 77.01 | 78.11 | **80.32** | 79.33 | 78.11 | 77.50 |
| HM (Ours based on PromptSRC) | 77.32 | 78.08 | **81.32** | 80.54 | 80.51 | 79.11 |

2) We **add experiments** on the **learning depth** of the text and image tokens. When ablating the visual embedding depth, the text embedding depth was kept at its best value (9), and vice versa.
* Base-to-novel generalization of 11 datasets (Textual Tokens Depth)

| Textual Tokens Depth | 1 | 3 | 5 | 7 | 9 | 11 |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| HM (Ours based on MaPLe) | 77.87 | 80.00 | 78.90 | 79.02 | **80.32** | 77.51 |
| HM (Ours based on PromptSRC) | 78.15 | 79.90 | 80.16 | 80.80 | **81.32** | 78.23 |

* Base-to-novel generalization of 11 datasets (Visual Tokens Depth)

| Visual Tokens Depth | 1 | 3 | 5 | 7 | 9 | 11 |
|:---|:---:|:---:|:---:|:---:|:---:|:---:|
| HM (Ours based on MaPLe) | 76.06 | 77.03 | 78.65 | 78.16 | **80.32** | 76.93 |
| HM (Ours based on PromptSRC) | 76.91 | 78.22 | 78.33 | 81.00 | **81.32** | 78.09 |

3) If the reviewer has further suggestions on ablation experiments, we welcome further discussion.

---

Rebuttal Comment 1.1: Comment: Thank you for the authors' detailed response. I would like to revise my score upward.

---

Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for your time and effort in reviewing our paper. We are grateful for your feedback and pleased to hear your positive remarks! Best regards, Authors of #123
Diff-MoE: Diffusion Transformer with Time-Aware and Space-Adaptive Experts
Accept (poster)
Summary: The paper introduces Diff-MoE, a framework integrating DiT with MoE to enhance scalability and performance in generative modeling. The proposed modules in Diff-MoE are specially designed for diffusion models, including spatial-temporal adaptive experts and global feature recalibration. Extensive experiments show Diff-MoE significantly outperforms existing dense DiTs and prior MoE-based methods.

## update after rebuttal

My concerns have been well addressed, and I would like to recommend accept.

Claims And Evidence: The claims are supported by extensive experiments. Diff-MoE consistently outperforms dense DiTs and MoE variants (Tables 2-4) with a similar number of parameters.

Methods And Evaluation Criteria: Combining MoE with DiT for spatiotemporal adaptation is novel and well-motivated. The global recalibration mechanism addresses MoE's local bias. Class-conditional image generation on the ImageNet dataset is a widely used benchmark; the architecture design of generative models is usually validated on this dataset, and the metrics reported in the manuscript are persuasive.

Theoretical Claims: The paper focuses on empirical contributions.

Experimental Designs Or Analyses: Comprehensive ablations isolate the contribution of each component, including the basic architecture design, spatial-temporal adaptive experts, and global feature recalibration. The comparison with existing methods is also fair, as the authors have designed models of different sizes for comparison.

Supplementary Material: In Appendix C, the discussion about decoupling temporal adaptation from spatial specialization is interesting; it explains the reason for the performance improvement from a novel perspective.

Relation To Broader Scientific Literature: This paper builds on Diffusion Transformers (Peebles & Xie, 2023) and MoE architectures (Shazeer et al., 2017). It advances prior diffusion-based MoE works (e.g., DiT-MoE, DTR) by unifying temporal and spatial adaptation.
Essential References Not Discussed: The related work discussed in this paper is comprehensive.

Other Strengths And Weaknesses:

Strengths:

1. I completely agree that current diffusion-based methods do not consider temporal and spatial flexibility simultaneously. The integration of temporal and spatial MoE for diffusion models is well-motivated, and the integration method is somewhat clever compared to the conventional MoE approach.
2. The introduction of low-rank decomposition to reduce the number of parameters is natural under this paper's design. The combined use of LoRA and AdaLN incurs no significant performance loss in this method.
3. The experiments in this paper are very comprehensive, including comparisons with dense and expert-based diffusion models. Compared with existing methods across different scales, the performance of the proposed method is greatly improved.

Weaknesses:

1. The FID value of +GLU in Tab. 5 is not consistent with the text in Sec. 5.3, which could be a typo.
2. The design motivation for the depthwise convolution in the Basic Architecture Design section is not clear. Why add convolution to a pure transformer architecture? Based on Sec. 5.3, this change resulted in a significant decrease in FID, for reasons that need to be explained in further detail.

Other Comments Or Suggestions: I'm positive about the paper. There are several contributions, which are well motivated, executed, and ablated. The evaluation - especially the quantitative results - is convincing.

Questions For Authors: Why fix the number of experts to 8? Was this choice empirically validated against other configurations?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Q1: Typo in Tab. 5.

We apologize for the typo in Table 5. The correct FID and IS scores for "+GLU" should be 38.20 and 41.43, respectively. The ability of GLU to enhance model capacity compared to a simple MLP has been discussed in works such as Llama [1] and StarNet [2]. We will fix this typo in the revised version.

Q2: Motivation of Depthwise Convolution.

We apologize for the unclear discussion of the motivation behind this design choice. The integration of depthwise convolution prior to the MoE module draws inspiration from hybrid vision architectures like CMT [3] and LocalViT [4], which strategically combine the spatial locality of CNNs with the global receptive fields of transformers. Our design addresses two critical requirements:

● Spatial Locality Preservation: Depthwise convolution injects inductive biases for local feature extraction (edges, textures).
● Parameter Efficiency: With computational complexity reduced from $k^2C^2$ (standard convolution) to $k^2C$, depthwise operations contribute only 0.1% additional parameters in Diff-MoE-S while maintaining spatial fidelity.

In our ablation studies (Table 5), we observed that removing depthwise convolutions led to a 6.5% increase in FID (35.85 → 38.20), highlighting their importance.

Q3: Why fix the number of experts to 8?

The selection of 8 experts balances empirical performance gains against computational and architectural constraints, following DiT-MoE [5]. We report the results of the ablation experiments on the number of experts in the following table:

| CFG=1.5, ImageNet 256 | Params | FID↓ | IS↑ |
|:---|:---:|:---:|:---:|
| Diff-MoE-S-4E1A | 36M / 66M | 33.45 | 46.95 |
| Diff-MoE-S-8E1A | 36M / 107M | 31.18 | 50.55 |
| Diff-MoE-S-16E1A | 36M / 187M | 30.08 | 51.76 |

While larger expert pools may benefit extreme-scale models, our focus on parameter-efficient diffusion training prioritizes balanced specialization over brute-force scaling.
Open-sourced implementations will support flexible expert configurations for future hardware-augmented explorations. [1] Touvron, Hugo, et al. "Llama: Open and efficient foundation language models." arXiv preprint arXiv:2302.13971 (2023). [2] Ma, Xu, et al. "Rewrite the stars." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [3] Guo, Jianyuan, et al. "Cmt: Convolutional neural networks meet vision transformers." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. [4] Li, Yawei, et al. "Localvit: Analyzing locality in vision transformers." 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2023. [5] Fei, Zhengcong, et al. "Scaling diffusion transformers to 16 billion parameters." arXiv preprint arXiv:2407.11633 (2024).
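The parameter-count argument in Q2 ($k^2C^2$ for a standard convolution vs. $k^2C$ for a depthwise one) can be checked with a short sketch. The channel width C=384 and kernel size k=3 below are hypothetical values chosen for illustration, not the paper's configuration.

```python
def conv_params(k, c_in, c_out, groups=1):
    """Weight count of a k x k convolution, bias ignored.

    groups=1    -> standard convolution: k*k*c_in*c_out
    groups=c_in -> depthwise convolution: k*k*c_in (when c_out == c_in)
    """
    return k * k * (c_in // groups) * c_out

C, k = 384, 3  # hypothetical channel width and kernel size
standard = conv_params(k, C, C)             # k^2 * C^2 = 1,327,104
depthwise = conv_params(k, C, C, groups=C)  # k^2 * C   =     3,456
print(standard // depthwise)  # the ratio equals C, i.e. 384
```

This C-fold reduction is what keeps the depthwise layer's overhead negligible relative to the transformer blocks.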
Summary: Diff-MoE introduces a novel integration of temporal and spatial adaptation in MoE for diffusion models. The modules proposed in Diff-MoE take into account the characteristics of diffusion models. The experimental results are also impressive, proving the effectiveness and scalability of Diff-MoE.

Claims And Evidence: The quantitative and qualitative evaluations are overall adequate and support the claims of the paper. Strong baselines from prior work are considered for comparison. The authors trained models of different sizes, and the proposed method is clearly and consistently superior to the baselines across all of them.

Methods And Evaluation Criteria: The paper is technically sound. All the proposed components are well motivated and serve a clear purpose: MoE for spatially dynamic computation, expert-specific timestep conditioning for temporal adaptation. The evaluation is in line with prior work: the standard ImageNet benchmark and FID metrics.

Theoretical Claims: This paper verifies the effectiveness of the architecture design through qualitative and quantitative experiments.

Experimental Designs Or Analyses: The training strategy is aligned with existing methods. Moreover, several models of different sizes are designed to compare with existing methods under matched parameter counts. The comparison baselines are comprehensive, including dense and sparse (temporal or spatial) DiTs. The proposed method consistently outperforms the different kinds of comparison methods across different scales.

Supplementary Material: The appendix validates convergence (Fig. 6) and expert routing dynamics (Fig. 7).

Relation To Broader Scientific Literature: MoE-based DiT methods are still in their early days. Most methods consider only space or time; this work is the first to consider both.

Essential References Not Discussed: To the best of my knowledge, the paper offers good coverage of related works.
The papers listed are all relevant, properly organized, and accurately discussed.

Other Strengths And Weaknesses:

Strengths:

1. The experimental setup is solid and the results are impressive. The performance of Diff-MoE is far superior to existing methods, and the convergence speed is fast.
2. MoE is a good way to scale diffusion models to larger sizes, but this direction is still in the early stages of development. The motivation of this paper is clear, considering both spatial and temporal adaptability. The proposed modules are specifically designed for diffusion models, rather than naively importing MoE designs from LLMs.
3. The writing is good and well organized. The design motivation for each module is clear and easy to follow.

Weaknesses: See questions. I recommend the paper for acceptance owing to the merits of the work and the clear motivation behind each of the modules used. My final rating depends on how the authors address the concerns in the Questions section.

Other Comments Or Suggestions: See questions.

Questions For Authors:

1. There is a typo in Tab. 5: the FID of +GLU should be 38.20, as described in L433.
2. Why is the CFG value in Tab. 2 set to 1.0, while the others use 1.5?
3. Do the authors plan to open-source the code? Open source would mean a lot to the community, because MoE training is usually accompanied by some tricks.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thanks for the thorough review. Below we address the issues one by one.

Q1: Typo in Tab. 5.

We apologize for the typo in Table 5. The correct FID and IS scores for "+GLU" should be 38.20 and 41.43, respectively. The ability of GLU to enhance model capacity compared to a simple MLP has been discussed in works such as Llama [1] and StarNet [2]. We will fix this typo in the revised version.

[1] Touvron, Hugo, et al. "Llama: Open and efficient foundation language models." arXiv preprint arXiv:2302.13971 (2023).
[2] Ma, Xu, et al. "Rewrite the stars." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.

Q2: Why is the CFG value in Tab. 2 1.0, while others are 1.5?

The variation in classifier-free guidance (CFG) scales across Table 2 stems from our commitment to fair, methodologically consistent comparisons. For SiT-LLaMA, we directly adopt the reported CFG=1.0 results from its original paper, as no CFG=1.5 benchmarks were available. Conversely, the other baselines (spatial/temporal MoE) are evaluated at CFG=1.5 following their established protocols.

Q3: Code.

We sincerely appreciate the recognition of this work. To ensure reproducibility and foster community progress, we commit to open-sourcing all training frameworks, inference pipelines, and architectural implementations upon publication, to facilitate future research in scalable diffusion models.
Summary: This paper introduces Diff-MoE, which is a novel framework combining Diffusion Transformers with Mixture-of-Experts to enhance scalability and flexibility in generative modeling. It achieves better FID scores across different model sizes compared to standard DiT models.

Claims And Evidence: Yes, most of the claims made have convincing evidence.

Methods And Evaluation Criteria: Yes, the proposed methods and evaluation criteria make sense.

Theoretical Claims: The paper does not contain an explicit formal mathematical proof that needs to be verified for correctness.

Experimental Designs Or Analyses: Limited evaluation at extreme scales (7M steps) due to computational constraints.

Supplementary Material: Yes, I have read all parts of the supplementary material.

Relation To Broader Scientific Literature: Diff-MoE integrates the Mixture-of-Experts paradigm, which has been effectively used to scale large models in Natural Language Processing, into diffusion models to achieve computational efficiency through dynamic parameter activation. Diff-MoE distinguishes itself from prior work on MoE in diffusion models by jointly optimizing for temporal adaptation and spatial specialization. Previous approaches often focused on either temporal partitioning of experts across denoising stages (e.g., DTR, Switch-DiT, MEME) or spatial routing of tokens to experts (e.g., DiT-MoE, EC-DiT). Diff-MoE proposes a more unified approach.

Essential References Not Discussed: This paper only compares DiT-based models and lacks a comparison with state-of-the-art (SOTA) image generation models.

Other Strengths And Weaknesses: Strengths: The paper introduces several novel ideas, including joint spatiotemporal expert coordination, expert-specific timestep conditioning, and a globally-aware feature recalibration mechanism. The use of low-rank decomposition to reduce parameter overhead is also a notable contribution.
Weaknesses: The paper acknowledges that due to computational constraints, a full evaluation of the largest models (e.g., Diff-MoE-XL) at extended training durations was not performed.

Other Comments Or Suggestions: Please refer to Strengths And Weaknesses.

Questions For Authors: While the paper extensively discusses parameter efficiency, it doesn't clearly address the training and inference computational costs (FLOPs/throughput) of Diff-MoE compared to baseline models. Could you provide quantitative comparisons of computational overhead and wall-clock time for training and inference? This would help evaluate the practical applicability of the approach beyond parameter efficiency. Were there any specific failure cases or training instabilities (e.g., dead experts, high variance in activation patterns) encountered during training?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for the constructive comments and the recognition of novelty.

Q1: Limited evaluation at extreme scales due to computational constraints.

We acknowledge the limitation in fully characterizing Diff-MoE's scaling laws and commit to conducting large-scale evaluations once additional computational resources are secured. Our contribution lies in synthesizing existing MoE methods in diffusion models, combining the benefits of timestep-based and spatial MoEs. This perspective offers a novel approach for scaling diffusion models, and extensive experiments validate its effectiveness. Furthermore, we will open-source the code to enable researchers with sufficient computational resources to explore and build upon our work.

Q2: Comparison with more SOTA models.

Thank you for this constructive comment. State-of-the-art AR/diffusion methods can achieve an FID below 1.5 on ImageNet 256 generation. As discussed in Q1, current hardware limitations constrain our ability to explore larger scales or more iterations, which impacts our ability to achieve state-of-the-art performance. For reference, DiT-MoE-XL [1] achieves 1.72 FID after training for 7M iterations and 3.42 FID after 400K iterations. In comparison, Diff-MoE-XL achieves 2.69 FID after training for 400K iterations, which can serve as a benchmark. We will open-source all implementations to facilitate community-driven scaling efforts and will pursue large-scale training once expanded infrastructure becomes available. These steps aim to bridge the gap between methodological innovation and SOTA performance benchmarks in future work.

Q3: Training and inference computational costs.

We thank the reviewer for raising this critical aspect of MoE system design. Below we detail computational comparisons between Diff-MoE and the DiT-MoE baseline under identical hardware (V100 GPU) and framework conditions.
Inference Efficiency: Compared with the expert-based baseline DiT-MoE-S, Diff-MoE-S+ incurs only a 6% FLOPs increase (16.05G → 17.01G) but a 19% throughput reduction (278 → 225 samples/sec), which may be because the GPU optimizes different operators differently. When compared to dense DiT models, despite having similar parameter counts and theoretical FLOPs, our current implementation exhibits slower throughput due to the sequential computation of experts in a for-loop, an inherent challenge in MoE architectures. Nevertheless, Diff-MoE-S/2 achieves competitive FID (44.27 vs. DiT-B/2's 42.84) with 63% fewer activated parameters (36M vs. 131M), demonstrating superior memory efficiency critical for large-scale deployment. Training Optimization: Building on fast-DiT[2] optimizations, we implement mixed precision training and pre-extracted VAE features. These adjustments yield 1.2 iterations/sec for Diff-MoE-S+ (211M params) vs. DiT-MoE-S’s 1.0 iter/sec (199M params), despite our model’s increased capacity. While our current implementation prioritizes architectural innovation over low-level optimizations, the sequential computation of experts in a for-loop further exacerbates our speed disadvantage compared to sparse and dense models. We recognize the need for low-level optimizations and identify clear pathways to mitigate throughput costs, such as parallel expert execution (DeepSeek-MoE[3] and DeepSpeed-MoE[4]). Following these advanced strategies, we believe there is significant room for improvement in inference speed. We will prioritize these optimizations after the code is open-sourced, bridging the efficiency gap while retaining the architectural advantages of Diff-MoE. Thanks again for this valuable question. Q4: Training Stability. Diff-MoE exhibits robust convergence behavior throughout the training process. As shown in Fig. 4, the FID consistently decreases as training advances.
We also repeated the training for different model sizes multiple times. For Diff-MoE-S, the FID fluctuates by no more than 0.5 (44 ± 0.5). For Diff-MoE-XL, the FID fluctuates by no more than 0.05 (2.69 ± 0.05). While load balancing remains a persistent challenge in MoE architectures—partially addressed by conventional auxiliary losses—our expert-specific timestep conditioning mechanism introduces a critical refinement. By decoupling temporal adaptation (denoising stage dynamics) from spatial routing (token-level feature complexity), the framework redistributes computational loads more effectively, as discussed in Supplementary Material Section C. [1] Fei Z, Fan M, Yu C, et al. Scaling diffusion transformers to 16 billion parameters. arXiv:2407.11633, 2024. [2] https://github.com/chuanyangjin/fast-DiT [3] Liu, Aixin, et al. "Deepseek-v3 technical report." arXiv:2412.19437 (2024). [4] Rajbhandari, Samyam, et al. "Deepspeed-moe: Advancing mixture-of-experts inference and training to power next-generation ai scale." International Conference on Machine Learning. PMLR, 2022.
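The for-loop dispatch blamed for the throughput gap in Q3 can be made concrete with a minimal NumPy sketch. All names here, and the toy linear experts, are my own illustration under stated assumptions, not the authors' implementation: each expert runs one after another on the tokens routed to it, which serializes the computation relative to a dense layer of similar FLOP count.

```python
import numpy as np

def moe_forward_sequential(tokens, expert_weights, assignments):
    """Naive MoE dispatch: experts run one after another in a Python
    for-loop, each on the subset of tokens routed to it. This mirrors the
    serialization bottleneck the rebuttal describes."""
    out = np.zeros_like(tokens)
    for e, w in enumerate(expert_weights):
        mask = assignments == e            # tokens routed to expert e
        if mask.any():
            out[mask] = tokens[mask] @ w   # toy expert: a single linear map
    return out

rng = np.random.default_rng(0)
tokens = rng.standard_normal((8, 4))                      # 8 tokens, dim 4
expert_weights = [rng.standard_normal((4, 4)) for _ in range(2)]
assignments = rng.integers(0, 2, size=8)                  # top-1 routing result
out = moe_forward_sequential(tokens, expert_weights, assignments)
```

Parallel-execution strategies such as those cited in the rebuttal (e.g., DeepSpeed-MoE) instead batch the experts' matrix multiplications in grouped kernels, removing this Python-level loop—the optimization pathway the authors identify.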
Summary: This paper proposes Diff-MoE, a method that captures both timestep and spatial contexts for expert routing. The approach consists of 1) Expert-Specific Timestep Conditioning – Unlike previous spatial MoE approaches, this enables each expert to adapt its operations based on the timestep, improving adaptability to different noise levels and 2) Feature Recalibration with Global Contexts – This enhances feature representations by incorporating global spatial information, leading to better expert specialization. These two techniques improve the model’s expert capabilities and global context awareness. Additionally, a parameter reduction technique using low-rank decomposition is employed to improve efficiency. Experimental results demonstrate that Diff-MoE outperforms both timestep-dependent routing methods and previous spatial MoE approaches. Claims And Evidence: I find most of their claims reasonable. Their primary claim is that their method combines previous temporal and spatial MoE approaches. By demonstrating that their approach outperforms both prior temporal and spatial MoE methods, they effectively validate this claim. Methods And Evaluation Criteria: They follow the standard evaluation criteria used in the DiT-series evaluations. Theoretical Claims: N/A Experimental Designs Or Analyses: The ablations are well-conducted. However, one minor drawback is that the method does not achieve state-of-the-art (SOTA) performance. That said, I don’t find this to be a critical issue. Supplementary Material: Reviewed. They provide details of experiments and implementation. Relation To Broader Scientific Literature: ### MoE in Diffusion Models To enable efficient scaling of diffusion models, several works have explored MoE-based approaches. I agree with this paper's categorization, which classifies MoE methods into timestep-based and spatial MoEs. This work effectively combines both strategies to enhance the MoE architecture. 
Additionally, training MoE models in diffusion frameworks is notoriously unstable, yet this paper appears to achieve a stable training process across various scales of base experts, which is a significant strength. If the code is made publicly available, it would greatly contribute to the field by providing insights into stabilizing MoE training in diffusion models. ### Timestep-Aware Designs This work aligns well with prior research advocating timestep-aware network operations in diffusion models. However, previous works have already demonstrated why timestep-aware design is necessary, and citing them would strengthen the paper’s argument. Including such references would reinforce the motivation behind their approach. ### Diffusion Model Architectures Since DiT (Diffusion Transformers), most diffusion models have followed a transformer-based architecture, which is also prevalent in large-scale video diffusion models. Although this work does not achieve state-of-the-art (SOTA) performance, it explores an efficient scaling mechanism for MoE in DiTs. It would be interesting to see how well this method scales to even larger models, though this is not a critical issue. ### Overall Contribution If the code is released, I believe this work would make a substantial contribution to MoE-based diffusion models, particularly by stabilizing training and providing an efficient scaling mechanism. Essential References Not Discussed: Well discussed. Other Strengths And Weaknesses: ### Strengths - This paper is well-written. - The proposed MoE architecture seems effective. ### Weaknesses - Including references about why the timestep awareness of networks is necessary would be beneficial. - Including results on scaling beyond XL size would be beneficial. Other Comments Or Suggestions: Line 50: Please correct the strange text. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 4
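To make the "expert-specific timestep conditioning" summarized above concrete, here is one illustrative reading (my own sketch, not the authors' architecture; all names are hypothetical): each expert modulates its activations with an AdaLN-style scale and shift derived from the timestep embedding, so the same spatial expert behaves differently at different noise levels.

```python
import numpy as np

def timestep_conditioned_expert(x, t_emb, w_expert, w_scale, w_shift):
    """Hypothetical expert: a linear map whose output is modulated by a
    per-expert scale/shift computed from the timestep embedding t_emb."""
    scale = np.tanh(t_emb @ w_scale)   # per-expert temporal gain
    shift = t_emb @ w_shift            # per-expert temporal bias
    return (x @ w_expert) * (1.0 + scale) + shift

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 8))        # 5 tokens routed to this expert
t_emb = rng.standard_normal(16)        # timestep embedding (e.g., sinusoidal)
w_expert = rng.standard_normal((8, 8))
w_scale = rng.standard_normal((16, 8)) * 0.1
w_shift = rng.standard_normal((16, 8)) * 0.1
y_early = timestep_conditioned_expert(x, t_emb, w_expert, w_scale, w_shift)
y_late = timestep_conditioned_expert(x, -t_emb, w_expert, w_scale, w_shift)
```

The point of the sketch is that `y_early` and `y_late` differ even though the spatial routing and expert weights are identical; only the timestep embedding changed.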
Rebuttal 1: Rebuttal: Thanks for the thorough reviews. Below we address the issues one by one. Q1: Code release. We sincerely appreciate the insightful feedback and recognition of this work. As astutely noted, training stability remains a critical challenge in MoE architectures, where conventional load-balancing losses offer partial mitigation. Our proposed expert-specific timestep conditioning mechanism further alleviates this issue by disentangling temporal adaptation from spatial routing, as analyzed in Supplementary Material Section C. To ensure reproducibility and foster community progress, we commit to open-sourcing all training frameworks, inference pipelines, and architectural implementations upon publication, to facilitate future research in scalable diffusion models. Q2: Including results on scaling over XL-size will be beneficial. We fully agree that scaling up MoE to larger sizes and training for more iterations can further validate the framework’s efficacy. However, current hardware limitations bound our exploration. Training configurations exceeding the XL scale (4.5B parameters) on our 8-GPU node infrastructure risk exceeding GPU memory capacity or incurring impractical training durations. We acknowledge this limitation in fully characterizing Diff-MoE’s scaling laws and commit to future large-scale evaluations upon securing expanded computational resources. Moreover, we will open-source the code to enable researchers with sufficient computing resources in the community to explore and build upon our work. Q3: Including references about why the timestep awareness of networks is necessary will be beneficial. Thanks for this valuable suggestion. We will discuss the following papers in the revision: [1] Hatamizadeh, Ali, et al. "Diffit: Diffusion vision transformers for image generation." ECCV, 2024. [2] Liu, Qihao, et al. "Alleviating Distortion in Image Generation via Multi-Resolution Diffusion Models and Time-Dependent Layer Normalization." NeurIPS, 2024.
[3] Karras, Tero, et al. "Elucidating the design space of diffusion-based generative models." NeurIPS, 2022. Q4: Typos. Thanks for pointing that out. We will fix all the typos in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My concerns have been well addressed.
Graph Neural Network Generalization With Gaussian Mixture Model Based Augmentation
Accept (poster)
Summary: This paper introduces GRATIN, a novel graph data augmentation method using Gaussian Mixture Models (GMMs) to enhance the generalization of Graph Neural Networks (GNNs) for graph classification. It provides a theoretical framework analyzing the impact of augmentation on GNN generalization via Rademacher complexity, designs the GRATIN algorithm to generate augmented data in the hidden representation space, and demonstrates superior performance in generalization and robustness compared to existing methods through experiments on multiple benchmark datasets. #Update after rebuttal Claims And Evidence: This paper introduces GRATIN, a novel graph data augmentation method using Gaussian Mixture Models (GMMs) to enhance the generalization of Graph Neural Networks (GNNs) for graph classification. It provides a theoretical framework analyzing the impact of augmentation on GNN generalization via Rademacher complexity, designs the GRATIN algorithm to generate augmented data in the hidden representation space, and demonstrates superior performance in generalization and robustness compared to existing methods through experiments on multiple benchmark datasets. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are suitable for the problem. Theoretical Claims: The theoretical proofs (e.g., Theorem 3.1 and Proposition 3.2) are sound and well-presented, offering new insights into the impact of data augmentation on GNN generalization. Experimental Designs Or Analyses: Yes. I have check the experimental designs. Supplementary Material: I have reviewed relevant sections of the supplementary material, including proofs of the Theorem and Proposition. Relation To Broader Scientific Literature: GRATIN's contributions are closely related to existing research on graph data augmentation and GNN generalization. Essential References Not Discussed: No. 
Other Strengths And Weaknesses: **Strengths** 1) The method shows strong performance across multiple datasets. **Weaknesses** 1) The organization of Section 3 is somewhat confusing and detracts from readability. 2) The relationship between the model and the graph needs further clarification, particularly in distinguishing between topology and attribute distribution shifts. Other Comments Or Suggestions: 1) Key formulas should be numbered to better align with the steps in Algorithm 1. Questions For Authors: 1) What are the differences between OOD in graph classification and node classification tasks? 2) What are the differences between the method described in Line 229 and MMD? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Response to Reviewer wgAh ============== We thank Reviewer wgAh for the feedback. In what follows, we address the raised questions and weaknesses point-by-point. ***[Weakness 1] Section 3 Clarity*** We appreciate the feedback and will do our best to improve the organization and clarity of Section 3 in the camera-ready version. ***[Weakness 2] Clarifying Topology vs. Attributes*** A topology shift refers to changes in the structure of the graph, such as the addition or deletion of edges, modifications in connectivity patterns, or global properties like graph density. On the other hand, attribute shift refers to changes in node features, such as different feature distributions and scales. Since GRATIN operates in the hidden space learned using several message passing neural network layers, it naturally captures both topological and attribute variations, leading to robust performance against diverse types of shifts. We will further clarify this advantage in the camera-ready version. ***[Question 1] OOD Differences: Graph vs. Node Classification*** The primary distinction between OOD in graph and node classification lies in the level at which distribution shifts occur. In graph classification, each sample is an entire graph, and the task is typically approached as a fully supervised problem, with a label for every training graph. Consequently, OOD shifts tend to manifest as substantial changes in global structure or attributes. For example, training graphs may be relatively small or simple, while test graphs might be larger or more complex. Conversely, in node classification, each sample is an individual node within a large graph, and the problem is addressed via semi-supervised learning, where only a subset of nodes have labels and the rest must be inferred from the graph’s topology and feature structure. 
Here, OOD shifts can arise locally, such as when training nodes appear in sparse neighborhoods while test nodes appear in denser regions or when training features lie within a certain range and test features experience domain drift. These differences require distinct OOD modeling strategies: node-level tasks, which often rely on partial labels and independent features, must manage semi-supervised constraints, whereas graph-level tasks, typically with full supervision on entire graphs, emphasize broader structural or attribute shifts across distinct graphs. Moreover, unlike most baseline approaches, our method, GRATIN, is flexible and can be extended to node classification tasks. We have presented the details of this extension, including experimental results, in our response to Reviewer eSPE (c.f. [Question 4]), and we will incorporate this discussion into the camera-ready version of the manuscript. ***[Question 2] Line 229 vs. MMD*** The term in Line 229 is the expected distance between a representation of an original graph $\mathbf{h}$ and an augmented representation $\tilde{\mathbf{h}}$. By contrast, Maximum Mean Discrepancy (MMD) aligns full distributions by comparing mean embeddings in a Reproducing Kernel Hilbert Space (RKHS). While both aim to reduce the gap between original and augmented data, MMD enforces a global distribution-level alignment, whereas Line 229 focuses on local instance-level consistency. Despite these differing scopes, they share the overarching goal of encouraging similarity between real and augmented samples. Practically, the term in Line 229 is simpler, computationally cheaper, and better suited for theoretically motivating our data augmentation strategy. ***[Suggestion 1] Formula Numbering*** We thank the reviewer for the suggestion. In the current version, we chose to number only the equations that are explicitly referenced in the text. However, we agree that numbering all equations would improve clarity and alignment with Algorithm 1. 
We will revise the manuscript accordingly and include equation numbers in the camera-ready version to enhance readability. --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I understand that variations in the hidden space can unify both topological and attribute variations. However, wouldn't directly perturbing the features to achieve augmentation be a simpler approach under the control of perturbation magnitude? This method could also simulate any kind of perturbation. So, can the authors theoretically or experimentally explain what the key role of the Gaussian mixture model is? --- Reply to Comment 1.1.1: Comment: We thank Reviewer wgAh for their follow-up questions. **1. Feature Perturbation.** Adding noise to raw features may seem simpler, but it overlooks the coupling between topology and node attributes. First, recall that a graph $(\mathbf{A}, \mathbf{X})$ passes through a GNN to produce a graph-level embedding $\mathbf{h_G}$ capturing both the node features and the graph’s topology. If we only perturb the features while keeping the adjacency matrix fixed, we obtain an embedding distribution denoted by $\mathcal{D_{\text{feat-only}}}=\lbrace\mathbf{h_{\tilde{G}}}| \tilde{G}=(\mathbf{A},\mathbf{X}+\delta\mathbf{X})\rbrace,$ where $\delta \mathbf{X}$ is drawn from a distribution. Similarly, if we perturb the graph structure only, we obtain another distribution $\mathcal{D_{\text{struct-only}}}=\lbrace \mathbf{h_{\tilde{G}}}|\tilde{G}=(\mathbf{A}+\delta \mathbf{A}, \mathbf{X})\rbrace.$ In contrast, jointly perturbing both features and structure yields a third distribution $\mathcal{D_{\text{combined}}}=\lbrace \mathbf{h_{\tilde{G}}}|\tilde{G}=(\mathbf{A}+\delta \mathbf{A},\mathbf{X}+\delta \mathbf{X})\rbrace,$ which generalizes the other two.
Since the GNN’s embedding depends nonlinearly on both $\mathbf{A}$ and $\mathbf{X}$, we have $\mathcal{D_{\text{feat-only}}} \neq \mathcal{D_{\text{struct-only}}} \neq \mathcal{D_{\text{combined}}},$ and each covers a different subspace of the learned representation manifold. To understand this, consider a bottleneck structure in a graph, such as a single edge acting as a bridge between two otherwise disconnected components. Perturbing a node’s features might affect its representation locally, but perturbing the structure, e.g., removing that connecting edge, has a significantly greater impact on the message-passing dynamics, potentially altering how information flows across the entire graph. Similarly, in sparse graphs, which include most real-world datasets, this problem becomes more pronounced, as many nodes already have limited connections, and feature perturbations alone cannot introduce the topological changes required to mimic realistic scenarios. Nevertheless, we acknowledge the suggested augmentation strategy, so we added an experiment applying Gaussian noise of varying magnitudes, i.e., we sampled $\delta\mathbf{X}\sim\mathcal{N}(0,\sigma^2 I)$ with different values of $\sigma$, and perturbed the input features as $\tilde{\mathbf{X}}=\mathbf{X}+\delta \mathbf{X}$, keeping $\mathbf{A}$ unchanged. As shown in the table below, GRATIN consistently outperformed this approach, highlighting its ability to generate meaningful augmentations by capturing variations in both topology and features.

|Method|IMDB-MUL|MUTAG|PROTEINS|
|-|-|-|-|
|Feat. only (0.01)|47.06±2.21|74.97±7.71|70.16±4.89|
|Feat. only (0.05)|46.20±4.42|73.86±7.21|70.07±5.12|
|Feat. only (0.1)|46.26±4.37|75.49±7.78|69.36±5.16|
|GRATIN|**49.82±4.26**|**76.05±6.74**|**70.97±5.07**|

**2. The Role of the Gaussian Mixture Model (GMM)** ***2.1. Theory.*** Using GMMs is supported by the theory in our paper.
Thm 3.1 shows that the generalization gap can be bounded by the Rademacher complexity, which itself depends on how close the augmented data distribution is to the real one. This provides a theoretical basis for ensuring that data augmentation should not introduce arbitrary perturbations but rather ones that respect the geometry of the learned representation space. Additionally, Thm 3.3 establishes that GMMs are universal density approximators. This guarantees that the GMM can faithfully approximate the true distribution of graph representations. Finally, Proposition 3.2 refines this idea by providing a bound on the expected perturbation between the original and augmented data. Specifically, it links the deviation to two terms: the KL divergence between the real and augmented distributions and the supremum distance in representation space. By fitting a GMM to the real data, we can reduce the KL divergence. Moreover, due to the exponential decay of Gaussian distributions, the supremum distance $\sup_{\mathbf{h}\sim\delta_\mathcal{D}, \tilde{\mathbf{h}}\sim Q_\lambda}\|\mathbf{h}-\tilde{\mathbf{h}}\|$ is naturally constrained, ensuring a better control of the expected distance $\mathbb{E}_{\mathbf{h}\sim\delta_D, \tilde{\mathbf{h}}\sim Q}[\|\mathbf{h}-\tilde{\mathbf{h}}\|]$. ***2.2.Empirics.*** Empirically, we chose GMMs also for their efficiency. Unlike generative models such as GANs or VAEs, GMMs are fast to fit and require no adversarial training or reconstruction objectives. This efficiency suits GNNs. Sampling from mixture components yields diverse, coherent augmentations. As shown in Appendix F (ablation study), we compared GMM based sampling with alternative strategies. The GMM consistently outperformed these baselines, particularly in terms of generalization ability. Thus, GMMs offer both theoretical and practical benefits.
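The fit-then-sample step argued for above can be sketched in a few lines. This is a minimal NumPy-only illustration: the mixture parameters below are toy stand-ins for a GMM actually fitted to graph-level embeddings (e.g., via EM with scikit-learn's `GaussianMixture`), not values from the paper.

```python
import numpy as np

def sample_gmm(weights, means, covs, m, rng):
    """Draw m augmented hidden representations from a (pre-fitted) GMM:
    choose a component per sample, then sample from its Gaussian."""
    comps = rng.choice(len(weights), size=m, p=weights)
    samples = np.stack([rng.multivariate_normal(means[k], covs[k]) for k in comps])
    return samples, comps

rng = np.random.default_rng(0)
weights = np.array([0.7, 0.3])                  # toy mixture weights
means = np.array([[0.0, 0.0], [5.0, 5.0]])      # toy component means
covs = np.stack([0.01 * np.eye(2)] * 2)         # tight toy covariances
aug, comps = sample_gmm(weights, means, covs, m=10, rng=rng)
```

Because each augmented sample is drawn near one of the fitted modes, the exponential tail decay of the Gaussians keeps the perturbation distance controlled—the mechanism Proposition 3.2 exploits, as discussed above.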
Summary: The authors propose a novel graph data augmentation method, GRATIN, which leverages Gaussian Mixture Models (GMMs) to learn the distribution of hidden representations generated by a trained Graph Neural Network (GNN). The method then augments the training data based on this learned distribution. Furthermore, the authors provide theoretical results that support the correctness of the proposed approach. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Yes. I checked all of them (main text + supplementary materials) and found the following issues/suggestions: 1. In the proof of Theorem 3.1, the penultimate inequality of this proof can be replaced with an equality due to the vanishing of Rademacher variables under the absolute value. 2. In the proof of Theorem 3.1, in the final equation, shouldn't it be $G_n^{m}$ instead of $G_n^{\lambda}$? 3. It looks like there's a compilation issue related to mathopdh on page 15. 4. End of page 15: summation from lowercase n=1, not n=N. 5. Line 766: There appears to be a potential issue when the measures return the same value for distinct arguments, which could result in a division by zero. Experimental Designs Or Analyses: Yes, all experiments in the main text seem correct. The obtained results in Table 3 are bolded as the best, but there are no statistically significant differences compared to the other methods. Supplementary Material: Yes, the theoretical part. Relation To Broader Scientific Literature: GMM-GDA builds on prior work in graph data augmentation (e.g., GAug and G-Mixup) by using Gaussian Mixture Models (GMMs) to generate synthetic graph data, improving GNN generalization. Unlike previous methods, it provides a theoretical framework using Rademacher complexity to bound generalization error. Its efficiency and theoretical grounding make it a significant step forward in enhancing GNNs with data augmentation.
Essential References Not Discussed: No Other Strengths And Weaknesses: Strengths: 1. The authors propose a novel method to augment graph data. 2. The paper is well-written and easy to follow. 3. The authors present a solid theoretical framework that supports their claims. 4. The proofs of the theorems are well-structured. Weaknesses: 1. There are typos that need to be fixed to maintain the professionalism of the paper. 2. The proposed approach should be validated on datasets with a larger number of graphs and larger graphs in terms of nodes and edges, such as COLLAB and REDDIT-MULTI-5K. 3. Equations should be numbered. 4. In the proposed approach, the authors rely on the law of large numbers, which applies for sufficiently large m (notation from the paper), where m is the number of generated graphs per graph in the training set. I am not convinced that this reasoning holds when the number of generated graphs is much smaller. Other Comments Or Suggestions: Comments / Suggestions: 1. Page 3: Replace “Augemtation” with “Augmentation.” 2. Figure 1: The steps presented in the figure’s caption should be marked on the figure for clarity. 3. Page 16: Correct “us to e perform” to “us to perform.” 4. Page 16 (redundant repetition): “The parameters θ̂ and θ̂” should be revised to avoid redundancy. 5. Structure space term clarification: - Original: “The findings of Theorem 3.1 hold for all norms defined on the graph input space. Specifically, let us consider the graph structure space [...]” - Suggested: The term “graph structure space” might be unclear for readers; consider providing a brief explanation or rephrasing for clarity. 6. Misleading notation clarification: - Original: “[…] where G denotes the space of all possible graphs […]” - Suggested: “G should denote the space of all possible graphs whose distribution matches that of the training/validation/test sets.” 7. 
Line 936: “nodes in the graph, d is hidden dimension” should likely be corrected to “nodes in the graph, where $d_t$ is the hidden dimension.” Questions For Authors: 1. Could you elaborate more on the application of the law of large numbers? Is it still applicable if the number of generated graphs (m) is smaller than one per graph in the training set? 2. Could you explain why, after fitting the GMM to representations produced by a GNN, only the learnable parameters of the post-readout function $\psi$ are fine-tuned? If the reason is that you do not want to modify the GNN weights responsible for producing hidden representations, I believe it is worth explicitly mentioning. 3. I do not understand the last four lines at the end of page 12. Could you please clarify what you mean by introducing indexing? 4. Can we extend your framework to node classification? Is that possible? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank Reviewer eSPE for their review. In what follows, we address the raised questions point-by-point. **[R1] Penultimate Inequality in the Proof of Thm 3.1** Thank you, indeed, the inequality can be replaced with an equality. We will update the proof in the camera-ready (CR) version. **[R2, R3, R4, W1] Typos** We are grateful to the reviewer for spotting the typos. We'll correct them in the CR version. **[R5] Division by Zero** In our framework, $\delta_{\mathcal{D}}$ is a discrete distribution, where each $h\sim\delta_{\mathcal{D}}$ has equal probability $1/N$. Consequently, we disregard sampling augmented representations $\tilde{h}$ such that $Q_\lambda(\tilde{h}) = 1/N$, which is highly unlikely to occur in practice. We'll add a technical assumption in Prop. 3.2 to ensure that our choice of $Q_\lambda$ excludes augmented representations $\tilde{h}$ for which $Q_\lambda(\tilde{h})=1/N$. This assumption formally avoids the risk of division by zero. **[W2] Large datasets** The requested results are included in Table 1 below and further confirm the effectiveness of GRATIN on graphs with a larger number of nodes and edges.

Table 1: GRATIN for larger datasets. The symbol ‘--’ indicates augmentation time exceeding 2 hours.

||Model|Vanilla|DropEdge|DropNode|SubMix|G-Mixup|GeoMix|GRATIN|
|-|-|-|-|-|-|-|-|-|
|COLLAB|GCN|79.94±1.61|79.70±1.10|79.62±1.84|81.86±1.62|81.76±1.58|80.74±1.89|**82.28±1.82**|
||GIN|77.80±1.53|78.26±1.46|78.86±2.09|**80.98±1.24**|78.89±2.33|78.20±1.31|80.07±1.35|
|REDDIT-M5K|GCN|48.88±2.31|48.87±1.99|48.73±2.39|48.77±2.01|46.23±2.74|--|**49.31±1.56**|
||GIN|51.85±4.29|44.52±9.58|50.87±3.36|49.93±3.63|50.63±4.04|--|**52.01±3.54**|

**[W3] Formula Numbering** We chose to number only equations that are explicitly referenced in the text. However, we’ll number all equations in the revised manuscript to improve readability. **[Other Comments]** We appreciate the detailed suggestions.
We'll align figure captions more closely with visual steps and rephrase ambiguous terms to improve clarity. **[Q1 and W4] Law of Large Numbers (LLN)** We invoke the LLN primarily for theoretical considerations, illustrating that our method’s assumptions hold as $m$ (the number of augmented graphs per original sample) grows large. In practice, however, even small $m$ consistently improve accuracy without incurring a substantial computational cost (see additional experiments varying $m$ in our response to Reviewer 4GM3, c.f. S3-Part 2). Our findings confirm that a modest number of augmentations can already be beneficial, indicating that while the LLN provides a formal theoretical foundation, our approach remains effective when $m$ is limited. **[Q2] Post-readout Fine-tuning** We split the GNN into two parts: (1) message-passing layers that produce graph-level representations and (2) a shallow post-readout function $\psi$ that maps these representations to final predictions. After training the message-passing layers on the classification task, we fit a GMM to the resulting graph-level representations. This hidden space becomes the manifold where we perform our augmentation. If we were to update the message-passing layers after fitting the GMM, the structure of the learned representation space would shift, making the distribution modeled by the GMM inconsistent and, thus, degrading the quality of the augmented samples. To avoid this, we keep the message-passing layers fixed and fine-tune only $\psi$, which is computationally efficient, scalable and capable of adapting to the augmented dataset. Retraining the message-passing layers would increase computational time. Importantly, the test time prediction remains a composition of the fixed GNN encoder and the updated post-readout function, meaning that the GNN weights responsible for producing hidden representations still play a central role in the model's predictions. 
**[Q3] McDiarmid’s Inequality** The line break may have introduced confusion. We refer to the standard application of McDiarmid’s inequality by considering two datasets that differ at exactly one index. This setup allows us to bound the change in the expected loss. We'll revise the text to make this explanation clearer in the final version. **[Q4] Node Classification** GRATIN can be extended for tasks like node classification. While other methods might also be adaptable, such extensions are not always straightforward in their original formulations. The extension follows the same framework but shifts focus to node-level distributions. We train a GNN, fit class-wise GMMs on node embeddings, sample new representations for augmentation, and retrain a shallow classifier. Experiments with the GCN on widely used node classification datasets (Table 2) demonstrate effectiveness beyond graph-level tasks.

Table 2: GRATIN for node classification.

||Cora|CiteSeer|PubMed|CS|
|-|-|-|-|-|
|Vanilla|**80.71±0.61**|70.36±0.90|79.78±0.28|89.45±1.25|
|GRATIN|80.47±0.51|**71.08±0.71**|**79.85±0.22**|**90.82±0.76**|

--- Rebuttal Comment 1.1: Comment: The authors answered all my concerns. I raised the score. --- Reply to Comment 1.1.1: Comment: Thank you for acknowledging our clarification and for raising your score. We greatly appreciate your constructive feedback.
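A minimal sketch of the class-wise variant described in Q4, under a deliberate simplification of my own: a single Gaussian per class (a one-component GMM) rather than a full mixture, with all names hypothetical. Per class, fit a mean and covariance on node embeddings, then sample extra labeled embeddings for retraining the shallow classifier.

```python
import numpy as np

def classwise_gaussian_augment(emb, labels, m_per_class, rng):
    """Fit one Gaussian per class on node embeddings (a degenerate
    one-component GMM) and sample m_per_class augmented embeddings each."""
    aug_x, aug_y = [], []
    for c in np.unique(labels):
        xc = emb[labels == c]
        mean = xc.mean(axis=0)
        cov = np.cov(xc, rowvar=False) + 1e-6 * np.eye(xc.shape[1])  # regularize
        aug_x.append(rng.multivariate_normal(mean, cov, size=m_per_class))
        aug_y.append(np.full(m_per_class, c))
    return np.concatenate(aug_x), np.concatenate(aug_y)

rng = np.random.default_rng(0)
# Toy "node embeddings": two well-separated classes in 3 dimensions.
emb = np.concatenate([rng.normal(0.0, 0.1, (20, 3)), rng.normal(4.0, 0.1, (20, 3))])
labels = np.array([0] * 20 + [1] * 20)
aug_x, aug_y = classwise_gaussian_augment(emb, labels, m_per_class=5, rng=rng)
```

Retraining only a shallow classifier on the union of real and augmented embeddings mirrors the post-readout fine-tuning from Q2: keeping the encoder frozen keeps the fitted class-wise distributions consistent with the embedding space.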
Summary: This paper introduces GRATIN, a novel graph data augmentation approach leveraging Gaussian Mixture Models (GMMs) to enhance the generalization and robustness of Graph Neural Networks (GNNs). The authors argue that GNNs often face challenges in generalizing to out-of-distribution (OOD) data, especially with limited or imbalanced datasets. The proposed method generates augmented graph representations by modeling the distribution of graph embeddings (via GMMs) in the hidden representation space. GRATIN is supported by a theoretical framework based on Rademacher complexity and influence functions, which quantifies the impact of augmentation on generalization performance. Extensive experiments on benchmark graph classification datasets demonstrate GRATIN's effectiveness, achieving competitive or superior results compared to existing augmentation methods. Claims And Evidence: I think the main claims in the paper are well supported. Methods And Evaluation Criteria: Overall, the paper conducts reasonable evaluations. It can be further enhanced: 1) The paper only studies structure perturbation. Is the model robust to feature perturbation? 2) It is suggested to include learning-based automated graph augmentations, such as: [1] Luo, Youzhi, et al. "Automated Data Augmentations for Graph Classification." The Eleventh International Conference on Learning Representations (ICLR). 2023 3) sensitivity analysis of key hyperparameters, such as the number of Gaussian components in the GMM or the number of augmented samples. Theoretical Claims: I am not able to check the correctness of the theory, because I am not familiar with the model generalization theory. Experimental Designs Or Analyses: See Methods And Evaluation Criteria. Supplementary Material: Yes, the Supplementary Material provides the source code. Overall, the code is clear and well-organized. However, the readme is empty. 
Relation To Broader Scientific Literature: 1) Advancing Mixup-Based Techniques: GRATIN contributes to the literature by leveraging Gaussian Mixture Models (GMMs) to generate augmented graphs in the hidden representation space rather than directly modifying graph structures. This makes the augmentation highly efficient. 2) Theoretical Focus: While most prior augmentation methods lack theoretical guarantees, GRATIN provides a rigorous analysis of generalization improvements through augmentation using influence functions and regret bounds. Essential References Not Discussed: I suggest discussing GMM-based augmentation for graph representation learning. Such as: [1] Li, Yanjin, Linchuan Xu, and Kenji Yamanishi. "GMMDA: Gaussian Mixture Modeling of Graph in Latent Space for Graph Data Augmentation." IEEE International Conference on Data Mining (ICDM), 2023. [2] Fukushima, Shintaro, and Kenji Yamanishi. "Graph Community Augmentation with GMM-based Modeling in Latent Space." IEEE International Conference on Data Mining (ICDM), 2024. Other Strengths And Weaknesses: Strengths: 1) The use of Gaussian Mixture Models (GMMs) for graph data augmentation at the hidden representation level is both innovative and computationally efficient. Unlike traditional augmentation methods that operate directly on graphs (e.g., DropNode, DropEdge, Mixup variants), GRATIN focuses on the latent space, which avoids costly node alignments and allows architecture-specific augmentations. 2) The paper provides a strong theoretical foundation for the proposed method. The use of Rademacher complexity to analyze generalization bounds and influence functions to measure the impact of augmented data on test performance demonstrates a deep understanding of the augmentation problem. 3) GRATIN is evaluated on a diverse set of graph classification datasets (e.g., IMDB-BIN, MUTAG, PROTEINS, DD) using two prominent GNN architectures (GCN and GIN). 
The results show that GRATIN achieves strong generalization performance and robustness to structural perturbations compared to baseline methods like SubMix, G-Mixup, and GeoMix. Other Comments Or Suggestions: N.A. Questions For Authors: 1) Is the model robust to feature perturbation? 2) Could the hyper-parameters introduced in the method be integrated into the augmentation process dynamically during training? Such as the augmented representations filtering proportion. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Response to Reviewer 4GM3 ====== We thank Reviewer 4GM3 very much for their careful review. In what follows, we answer point-by-point. **[S1 and Q1] Robustness Exp.** While our primary contribution lies in improving generalization via data augmentation, we also address robustness as a secondary but significant aspect, building on the recent framework introduced by the baseline GeoMix. Since GRATIN acts on the hidden space generated by several GNN layers, our embedding space jointly considers structure and features by design (unlike most baselines that focus only on structural perturbations). This comprehensive approach allows our model to significantly outperform baselines under feature perturbation, where others often fail due to their limited augmentation scope. We employed a standard feature perturbation baseline by injecting Gaussian noise $\mathcal{N}(0, I)$ into node features with a scaling parameter $\beta=0.5$. We compared GRATIN against a Vanilla GNN (no augmentation) and the recent GeoMix baseline. The results of this experiment, shown in Table 1, demonstrate that GRATIN achieves superior robustness to feature perturbations across both datasets. Table 1: Robustness to feature perturbation |Dataset|Vanilla|GeoMix|GRATIN| |-|-|-|-| |PROTEINS|64.96±4.08|64.87±9.28|**69.26±3.52**| |MUTAG|62.77±12.40|65.40±8.28|**68.56±8.81**| **[S2] Additional Baseline** We appreciate the reviewer’s suggestion. We ran additional experiments using the augmentation method proposed in Luo et al. (ICLR 2023), referred to as GraphAug. This model is applied to a GCN backbone following the same experimental setup used for GRATIN and the remaining baselines in our paper. The results, presented in Table 2, will be included in the camera-ready version of the manuscript.
GRATIN outperforms GraphAug across multiple datasets, demonstrating the effectiveness of our augmentation strategy that simultaneously operates over both structure and feature spaces. Table 2: GraphAug Results | Method|IMDB-BIN|IMDB-MUL|MUTAG|PROTEINS|DD| |-|-|-|-|-|-| | GraphAug |73.91±4.62|49.66±4.22|73.36±9.30|70.80±3.89|71.02±3.84 | **[S3-Part 1] Sensitivity to GMM Components** We experimented with $K$ components in the GMM in the range from 2 to 50. Below, we include a hyperparameter sensitivity analysis conducted on the GIN backbone using the IMDB-BIN dataset, where we observed consistent results across different numbers of Gaussian components $K$. Table 3: Impact of $K$ (IMDB-BIN) |K|10|20|30|40|50| |-|-|-|-|-|-| |GRATIN-GIN|71.12±2.70|71.34±2.28|71.38±2.57|71.64±2.72|71.74±4.24| **[S3-Part 2] Sensitivity to the number of augmented samples** We conducted a sensitivity analysis to study the effect of the number of augmented samples per graph $m$ on model performance. As shown in the table below, the performance of GRATIN remains stable across a wide range of augmentation levels. This indicates that our method is robust to the choice of this hyperparameter, with accuracy variations within a narrow range even when increasing the number of augmentations from 1 to 30. Table 4: Impact of $m$ (GRATIN-GCN) |$m$|1|5|10|20|30| |-|-|-|-|-|-| |DD|71.90±2.81|72.25±3.26|72.02±3.34|71.87±3.34|71.81±3.29| |MUTAG|76.05±6.74|75.53±6.76|75.53±6.76|75.37±6.54|75.16±6.42| **[S4] Related Work** Thank you for pointing out these relevant works. We appreciate the suggestions and will include both references in the camera-ready version of the manuscript. GMMDA focuses on node classification, proposing a GMM-based augmentation that preserves labels through MDL-guided sampling of synthetic nodes. GCA, on the other hand, targets graph community augmentation, generating unseen graphs with new community structures by introducing new clusters in the latent space. 
Both works support the general motivation of modeling latent graph representations with GMMs, which aligns with our method. However, our approach differs in that it uses GMM sampling not to generate new nodes or community structures but to augment graph-level representations in the hidden space of a GNN. We view these works as complementary and helpful in motivating GMM-based augmentation in GNNs, and will cite them accordingly. **[Q2] Dynamic Integration of Hyperparameters** In our current setup, certain parameters, such as the number of Gaussian components, were observed to have a negligible impact, whereas others, like the number of augmentations, directly influence the size of the training set and, consequently, the overall training time. The filtering proportion for augmented representations, which we currently fix based on influence-based heuristics, is indeed a good candidate for a dynamic approach. In principle, we could integrate an attention mechanism or adapt the filtering as an Active Learning problem, allowing the model to automatically learn the most informative augmented samples. We see this as a promising future direction for further improving generalization.
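For concreteness, the latent-space augmentation loop discussed in this rebuttal can be sketched as follows. This is a minimal numpy-only illustration using a single Gaussian per class (the $K=1$ special case of a class-wise GMM); all names are ours, not GRATIN's actual implementation, and the influence-based filtering step is omitted.

```python
import numpy as np

def augment_embeddings(embeddings, labels, m=5, seed=0):
    """Fit one Gaussian per class on graph-level embeddings and sample
    m synthetic latent representations per class (K=1 GMM special case)."""
    rng = np.random.default_rng(seed)
    augmented, aug_labels = [], []
    for c in np.unique(labels):
        class_emb = embeddings[labels == c]
        mean = class_emb.mean(axis=0)
        # small ridge keeps the empirical covariance positive definite
        cov = np.cov(class_emb, rowvar=False) + 1e-6 * np.eye(class_emb.shape[1])
        augmented.append(rng.multivariate_normal(mean, cov, size=m))
        aug_labels.extend([c] * m)
    return np.vstack(augmented), np.array(aug_labels)

# Toy usage: 40 eight-dimensional "graph embeddings" from two classes.
emb = np.random.default_rng(1).normal(size=(40, 8))
lab = np.array([0] * 20 + [1] * 20)
aug, aug_lab = augment_embeddings(emb, lab, m=5)
```

In the full method a mixture with $K>1$ components would replace the single Gaussian, and the sampled representations would be filtered before retraining the classifier head.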
The Relationship Between No-Regret Learning and Online Conformal Prediction
Accept (poster)
Summary: The paper investigates the relationship between coverage and various definitions of regret in online learning. The results of this investigation are as follows: - Sublinear regret with respect to the pinball loss implies coverage guarantees if the data sequence is i.i.d. and a smoothness condition holds on the distribution of the data. - Sublinear swap regret implies (approximate) threshold-calibrated coverage when the empirical distribution of the threshold is smooth. - Sublinear group-wise swap regret can be linked to (approximate) group-wise coverage if the empirical distribution of the threshold is smooth. Algorithmically, the authors propose group conditional ACI, a variant of ACI, which guarantees coverage for different groups in the covariate space. Claims And Evidence: Yes, all claims are supported by proofs and theorems. Methods And Evaluation Criteria: Yes, the authors consider simple benchmarks where group membership is accounted for by differentiating samples over time or features. Except for the last example, the group membership definitions are artificially defined. Datasets with more natural membership conditions would be more instructive. Theoretical Claims: I have checked the proof on the coverage guarantees of G-ACI and those about the relationship between swap regret and coverage. Experimental Designs Or Analyses: I have read the description of the experimental set-up and it appears to be reasonable. However, doing the experiments with datasets with more natural membership conditions would be more instructive. Supplementary Material: Only the appendix with the proof of G-ACI coverage guarantee and the relationship between swap regret and coverage. Relation To Broader Scientific Literature: I think the paper does a good job explaining the existing work in the field.
Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper’s theoretical contributions are sound; however, it is not clear how one benefits from the derived guarantees. More specifically, I don’t see how they refine the analysis of existing online conformal calibration algorithms or lead to new ones. I would suggest elaborating on the implications of each result in the context of online calibration. As they are presented now, I have the impression that the results lack sufficient context and fail to explain their practical relevance to the field of online calibration. The criticism above applies to Section 5, in which G-ACI is introduced. In fact, it appears that the proposed algorithm is derived using standard techniques that are independent of the results developed in the preceding sections. Other Comments Or Suggestions: As mentioned in the previous section, I would suggest better contextualizing the theorems, explaining their practical relevance, and highlighting more clearly how the theoretical results are used to analyze G-ACI in a way that could not have been done before. Questions For Authors: - Theorems 3.5 to 3.8 make some assumptions about the smoothness of the empirical distribution of $\tau$ values at specific intervals. Does this imply an assumption on the distribution of the sequence $(x_t, y_t)$? If so, can this be elucidated? Does this mean that these coverage guarantees are not adversarial, as normally assumed in online CP? - In the abstract, the perturbed leader algorithm is mentioned but the connection between this algorithm and online CP is not clarified in the main text. Update after rebuttal: I have decided to raise my original score to weak accept. Ethical Review Concerns: N/A Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback! We address your main concerns here: 1. **Benefits / Relevance of our work:** In regards to the guarantees of our paper, we wish to clarify a misconception the reviewer may have. Our goal in this paper is to elucidate the relationship between regret and coverage in two separate ways: * What is the direct relationship between the *properties* of “no-regret” and “marginal/group-conditional coverage” in conformal prediction. This is algorithm-agnostic --- we are interested in when one property of an algorithm generically implies another. * What are algorithms that are able to simultaneously achieve both kinds of properties, even when the properties are not (approximately) equivalent. This is algorithm-specific, since when the properties are not equivalent, our analysis needs to be specific to the algorithm itself. Our goal in this paper is to make explicit which connections between regret and coverage are connected through the first point and which are connected through the second. To our knowledge, apart from concurrent work in [1], no other work in either the no-regret or the online conformal prediction literature has focused on distinguishing between these points as well as substantiating when the properties do not imply one another. Our discussion in Sections 4 & 5 (including the introduction of GCACI) examines the second point. Though external regret wrt pinball loss does *not* imply coverage (as we show in Section 3), a general class of FTRL algorithms (which have been known for several decades to achieve external regret) are *also* able to achieve coverage guarantees bounded by the norm of the regularizer’s value. This is an insight not found in prior work, and it is through this connection that we instantiate the GCACI algorithm. The prior work related to this algorithm (ACI and its later variants) all apply only to the marginal, not the group conditional case. 
We will add discussion in our results section to explain this with more clarity. As far as practical benefits, we show in our experiments that GCACI, in addition to being more lightweight, achieves much faster convergence rates than the MVP algorithm from [2] which satisfies the stronger threshold-calibration guarantee. Previously, the only two families of online conformal prediction algorithms (MVP, and ACI and its variants) differed in multiple respects: the ACI algorithms obtained faster convergence and were more lightweight, but offered weaker guarantees than MVP in two respects: they were not threshold calibrated, and they did not offer group conditional guarantees. Here we show that we can obtain lightweight algorithms with fast convergence by giving up only on threshold calibration, while keeping group conditional validity --- this was not previously known. #### 2. **Smoothness requirement:** Smoothness is a mild condition that we can enforce if necessary by perturbing the scores chosen by the adversary (which implicitly perturbs the threshold) by a value drawn from a uniform distribution $[-\epsilon, \epsilon]$. This doesn’t require us to make assumptions on the adversary. We also note that Theorems 3.5 through 3.8 are making direct connections between no regret and coverage, and it is not possible to make this connection without some kind of smoothness guarantee - we address this in more detail in point (1) with reviewer **8yW2**. #### 3. **Follow-the-perturbed-leader mention in abstract:** This is a typo; it should be follow-the-regularized leader instead. Oops! Thanks for pointing it out! #### 4. **More natural definition of group-membership:** Our experiments are of course synthetic, and you are completely right that in some of them the groups are not “natural”. Our goal in these experiments is simply to compare the convergence rates of coverage, and for this purpose we think that the results are agnostic to the meaning of the group functions. 
In general, we agree that it is more interesting to define groups based on features we wish to eliminate disparity between, as we do in the Folktables experiments. The time-series data does not come with features, so we found the most natural way to define groups was as a function of the time-step. For the UCI Airfoil data, all features are numerical variables measuring parts of the airfoils - we did not find there was any meaningful way to define groups through these variables. #### [1] Angelopoulos, Anastasios N, et al. “Gradient Equilibrium in Online Learning: Theory and Applications.” ArXiv.org, 2025, arxiv.org/abs/2501.08330 [2] Bastani, Osbert, et al. “Practical Adversarial Multivalid Conformal Prediction.” ArXiv.org, 2022, arxiv.org/abs/2206.01067. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response and for clarifying the scope of your results and the smoothness condition. I appreciate the insights provided between the relationship between regret and coverage; however, I see that reviewer obBc shares my concern/misconception regarding the interpretation of the presented results and what practical conclusions one might draw from them. I agree that the results are algorithm agnostic; however would it be possible to present, for each result, a known algorithm that satisfies the stated properties in online convex optimization, along with the corresponding implications for calibration? For example, elaborating on how this existing algorithm would look like for calibration or what guarantees it would have (similarly as done for GCACI) --- Reply to Comment 1.1.1: Comment: Hi! Thanks for continuing to engage. Sure, we're happy to flesh out the algorithmic implications of our swap regret results. The best rates for swap regret are obtained by the algorithm of Blum and Mansour, 2007. The algorithm is parameterized over a set of $n$ actions, which for us will be discrete points on the unit interval $A = (1/n, 2/n, \ldots, \frac{n-1}{n}, 1)$. 
The algorithm runs n copies of multiplicative weights, each of which maintains a distribution over the $n$ actions in $A$. The algorithm then "stacks" these $n$ distributions into an $n\times n$ matrix $M$, and computes the top eigenvector $p \in \mathbb{R}^n$ of this matrix --- i.e. a vector $p$ such that $Mp = p$. Since it is a stochastic matrix, $p$ has eigenvalue $1$ and $p$ can itself be normalized to be a probability distribution over $A$. The algorithm selects an action according to the distribution $p$. When the conformity score is observed, the algorithm can compute a loss vector $\ell \in \mathbb{R}^n$ (corresponding to the pinball loss of each discrete threshold in $A$). Losses are then fed back to each copy of multiplicative weights; the loss vector fed to copy $i$ is scaled by $p_i$: $\ell^i = p_i \cdot \ell$. Generically the Blum/Mansour algorithm gets a swap regret bound of $R(T) \leq O\left(\sqrt{Tn}\right)$. In our case, as pinball loss is Lipschitz, the best threshold in hindsight in the discrete set $A$ might have cumulative pinball loss that is higher than the best threshold in hindsight by as much as $T/n$, and so in terms of our discretization threshold, the swap regret bound with respect to pinball loss will be $O\left(\frac{T}{n} + \sqrt{Tn}\right)$. Choosing the value of $n$ to optimize this bound ($n = T^{1/3}$) we get a swap regret bound to pinball loss of $R(T) \leq O(T^{2/3})$. We can now plug this swap regret bound into Theorem 3.5. Doing so, we find that (under the assumption that the distribution is $(\alpha, \rho, r)$-smooth), for any threshold $\tau \in A$ that has been used $T_\tau$ many times, conditional on having used $T_\tau$, the coverage error (i.e. 
the deviation between the realized coverage rate and the target coverage rate $q$) is bounded by: $$O\left(\frac{\rho}{2} + \frac{\rho r}{n} + \sqrt{\frac{T^{2/3}}{T_\tau \alpha r}} \right)$$ Thus, for any threshold $\tau$ played a constant fraction of rounds (in fact, played at least an $\omega(1/T^{1/3})$ fraction of rounds suffices), the coverage converges to $O\left(\frac{\rho}{2} + \frac{\rho r}{n}\right)$, a baseline error rate depending only on the smoothness parameters of the distribution. It's not hard to see that predictions on a finite grid at discretization $1/n$ cannot drive the threshold coverage error below this --- the prior MVP algorithm has a similar term dependent on the smoothness parameters. We're happy to add this discussion/calculation to the paper. As we note, we view the connection between swap regret and threshold calibrated coverage mainly as an important characterization to understand the relationship between regret and coverage: the concrete bound we get here does not improve on MVP for threshold calibrated coverage. In contrast, our GCACI algorithm (which comes from understanding the relationship between FTRL and coverage) *does* give an algorithm with new state of the art guarantees, which is why we focus our concrete algorithmic exposition on GCACI --- but we're happy to clarify/expand on this in the revision. Please let us know if you have any other questions!
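To make the reduction described above concrete, here is a minimal sketch of the Blum–Mansour construction specialized to pinball loss over the discrete threshold grid $A$. The learning rate, horizon, and the power-iteration fixed-point solver are our own illustrative choices, not choices made in the paper.

```python
import numpy as np

def pinball(tau, s, q=0.9):
    # pinball (quantile) loss of threshold tau on conformity score s at level q
    return max(q * (s - tau), (1 - q) * (tau - s))

def blum_mansour_thresholds(scores, n=10, eta=0.1, q=0.9, seed=0):
    rng = np.random.default_rng(seed)
    A = np.arange(1, n + 1) / n          # threshold grid A = (1/n, ..., 1)
    W = np.ones((n, n))                  # row i: weights of MW copy i over A
    played = []
    for s in scores:
        Q = W / W.sum(axis=1, keepdims=True)  # stack the n distributions
        p = np.ones(n) / n                    # power iteration for p Q = p
        for _ in range(200):
            p = p @ Q
        p /= p.sum()
        played.append(rng.choice(A, p=p))     # sample a threshold from p
        losses = np.array([pinball(a, s, q) for a in A])
        W *= np.exp(-eta * np.outer(p, losses))  # copy i sees p_i * losses
    return played

# Toy run on iid uniform conformity scores.
taus = blum_mansour_thresholds(np.random.default_rng(2).uniform(size=50))
```

With $n = T^{1/3}$ this grid choice yields the $O(T^{2/3})$ swap-regret bound quoted above; the sketch keeps $n$ fixed for readability.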
Summary: - This paper studies the relationship between no-regret learning and online conformal prediction - They show that no-external regret guarantees imply non-trivial coverage guarantees if data is selected stochastically. Moreover, this is *not* true if the data stream is adversarially generated. - Moving beyond external regret, the authors also consider group-wise regret. Here, they show that even if the data stream is chosen stochastically, no group-wise regret guarantees do *not* imply non-trivial groupwise coverage bounds. - The authors turn to swap regret. Here, they show that threshold-calibrated coverage is equivalent to no-swap regret. They extend this result to show a similar equivalence between no group-conditional swap regret and multivalid coverage. - Finally, the authors study the coverage guarantees of online learning algorithms in the FTRL family. Their main theorem in this section relates the miscoverage rate for FTRL with a generic regularizer as a function of the magnitude of the last iterate in FTRL and the gradient of the regularizer. They then instantiate this theorem with specific algorithms in the FTRL family, like gradient descent, to get concrete upper bounds on the miscoverage rates in terms of $T$ and the number of groups $k$. The authors complement their theoretical findings with experiments. ## Update after rebuttal I thank the authors for their response. As they have satisfactorily addressed my questions and concerns, I will maintain my positive score for this paper. Claims And Evidence: Yes, the claims made in the submission are supported by clear and convincing evidence. Methods And Evaluation Criteria: Yes, the proposed methods and/or evaluation criteria (e.g., benchmark datasets) make sense for the problem. Theoretical Claims: As no proofs were included in the main text, I did not verify the correctness of any proofs. Experimental Designs Or Analyses: Yes, I reviewed the experiments in Section 6, but found no glaring issues.
Supplementary Material: No, I did not review the supplementary material. Relation To Broader Scientific Literature: This paper studies the relationship between various notions of regret minimization and online conformal prediction. In particular, while previous works have established coverage bounds for online conformal prediction under a worst-case adversary, these bounds are not in terms of regret bounds. This work fills this gap by trying to understand what sort of regret bounds can yield bounds on coverage for online conformal prediction and vice versa. In addition, while previous works have given OGD-based algorithms for online conformal prediction, this paper extends this to a strictly larger family of FTRL-based online learning algorithms that are able to obtain *both* marginal and multi-group coverage guarantees. Essential References Not Discussed: To my best knowledge, the authors covered all related works that are essential to understanding the key contributions of the paper. Other Strengths And Weaknesses: **Strengths**: - The paper is well-written and easy to follow - I found the connection between swap regret and threshold-calibrated coverage interesting - I found the experimental results to be strong **Weaknesses** - Lack of Clarity. In many places, the authors consider transcripts that are random variables themselves (e.g. Theorem 3.2). Unfortunately, I feel that the authors do not do a good job at properly dealing with the randomness here. In addition, the authors consider losses throughout the paper, yet the theorem statements are written in terms of gains (i.e. negative losses). More confusingly, in Definition 2.6, regret is defined "with respect to the loss function l", yet the order of the terms in the definition of regret is flipped -- instead of regret being defined as the algorithm's loss minus the competitor's loss, it's defined as the competitor's loss minus the algorithm's loss. - Lack of proper definitions.
In many places, the authors do not properly define certain objects. For example, what is $\Phi_t$ in the second column of line 202? What is $\mathcal{D}_{\tau}$ in Theorem 3.6? - Unclear Motivation. I'm not fully convinced of why we should care about coverage guarantees Other Comments Or Suggestions: - In the second column of line 26, I think it should be $f(x, y)$ not $f(x)$ - I think it should be "multi valid coverage" instead of "threshold-calibrated coverage" in Definition 2.5 - I think it should be $\tau_t$ not $\Phi_t$ in the Equation in the second column on line 201-202 Questions For Authors: (1) In Theorem 3.2, the transcript $\Pi_T$ is a random variable as the examples $(x_t, y_t)$ are drawn iid. What does it mean to fix a transcript when it's random? Do you mean to fix a realization of the transcript? What does it mean for $\Pi_t$ to have external regret $\gamma$? Is this in expectation or pointwise? When stating the coverage guarantees, what exactly is the probability over? Is this theorem essentially saying that if there is an algorithm which can obtain *expected* regret $\gamma$ against the pinball loss, then, with probability at least blah it achieves the stated coverage guarantees? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your feedback! We will fix the typos you’ve pointed out, and address your other specific concerns / questions below: 1. **$\Pi_T$ as a random variable:** We apologize for any confusion here - In Theorem 3.2, $\Pi_T$ is a realization of the transcript where $(x_t, y_t)$ are drawn from a fixed distribution. This theorem connects the empirical regret $\gamma$ to the empirical coverage, but at a high level uses the fact that expected regret being low implies expected coverage is close to the desired rate. Since we are in a stochastic setting, the empirical and expected values are close to each other with high probability, which is why the result is a high-probability guarantee. #### 2. **Definition of $\Phi$-regret:** Thank you for pointing this out - the definition of regret should be the algorithm’s loss minus the competitor’s loss, letting us consider regret with respect to the pinball loss directly. We will update this in revision. #### 3. **Undefined quantities:** $\Phi_t$ in line 202 is a typo - this should be $\Pi_t$ instead. $\mathcal{D}_\tau$ in Theorem 3.6 has the same definition as given in Theorem 3.5 - it is the empirical distribution over threshold values defined by the subsequence of the transcript for which the algorithm predicted $\tau$. #### 4. **Why coverage guarantees matter:** Achieving a marginal $(1-\alpha)$ coverage guarantee for threshold predictions is equivalent to producing prediction sets that include the true label marginally $(1-\alpha)$ fraction of the time (since the predicted threshold maps to a prediction set through the scoring function); this link similarly applies to groupwise and in general conditional coverage guarantees as well. Prediction sets with high coverage guarantees can be used in and of themselves as an interpretation of the uncertainty of a model’s point predictions. More practically, they can also be used to guide decisions for downstream agents. 
For example, if one is using a learning model in a safety-critical environment, it may be useful to utilize prediction sets with a very high guarantee of coverage to decide which actions minimize or eliminate the possibility of harmful outcomes. [1] formalizes this intuition and shows why prediction sets with coverage guarantees are the right form of uncertainty quantification for risk-averse decision-makers. #### [1] Kiyani, Shayan, et al. “Decision Theoretic Foundations for Conformal Prediction: Optimal Uncertainty Quantification for Risk-Averse Agents.” ArXiv.org, 2025, arxiv.org/abs/2502.02561.
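As a concrete companion to this discussion of marginal coverage, the following is a minimal sketch of the standard ACI-style threshold update of Gibbs & Candès (2021) in its quantile-tracking form (online subgradient descent on the pinball loss). It is not the paper's GCACI algorithm; the learning rate, starting threshold, and data are illustrative.

```python
import numpy as np

def aci_quantile_tracking(scores, q=0.9, eta=0.05, tau0=0.5):
    """Track the q-quantile of the conformity scores online: raise the
    threshold after a miscoverage, lower it slightly after a coverage."""
    tau, covered = tau0, []
    for s in scores:
        covered.append(s <= tau)
        miss = 0.0 if s <= tau else 1.0
        # subgradient step on pinball loss: +eta*q on a miss, -eta*(1-q) otherwise
        tau += eta * (miss - (1 - q))
    return tau, covered

# Toy run: iid Uniform(0, 1) scores, so the q-quantile is q itself and the
# empirical coverage should settle near q = 0.9.
rng = np.random.default_rng(0)
tau, covered = aci_quantile_tracking(rng.uniform(size=5000), q=0.9)
```

The paper's FTRL analysis covers gradient-descent-style updates of this form, and GCACI extends the idea to group-conditional coverage.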
Summary: This paper explores the connection between online conformal prediction, which aims to construct prediction sets that cover the true labels with a specified probability, and online learning algorithms minimizing the pinball loss, which aim to achieve a certain regret guarantee. The authors show that standard external regret guarantees ensure marginal coverage when the data is i.i.d.; however, they do not provide meaningful coverage guarantees in adversarial settings or multi-group settings. In contrast, when the data distribution is smooth, the stronger notion of swap regret guarantee is "equivalent" to threshold-calibrated coverage, meaning an upper bound on one directly implies a corresponding upper bound on the other. Finally, the authors extend the adaptive conformal inference algorithm of Gibbs \& Candes (2021) to the multi-group setting and demonstrate that the FTRL algorithm ensures group-wise coverage. Claims And Evidence: Most of the claims are well supported by counterexamples or theorems. However, I have some questions regarding the equivalence result between swap regret and the threshold-calibrated coverage. - In Theorem 3.5, the authors provide an upper bound on threshold-calibrated coverage in terms of swap regret. However, the right-hand side approaches zero only if $\rho \rightarrow 0$ and $T_{G,\tau}^3$ grows faster than the swap regret, which is not immediately clear. Consequently, it does not directly follow that any algorithm achieving no swap regret will necessarily lead to the desired threshold-calibrated coverage. - Ideally, a stronger quantitative equivalence result would establish that a transcript achieves a certain coverage error at most $\gamma$ __if and only if__ it attains a corresponding swap regret bound $R$. However, Theorems 3.5 and 3.6 seem to provide only a qualitative connection rather than a strict equivalence. 
Methods And Evaluation Criteria: The first part of the paper focuses on the relationship between regret and coverage, so evaluation criteria are not applicable. In the second part, the authors propose a group conditional ACI algorithm, which is lightweight and appears reasonable for the intended application. Theoretical Claims: To the best of my knowledge, the proof appears to be correct. However, there are some steps in the derivation that I do not immediately follow, which I have detailed in the "Questions" section. Experimental Designs Or Analyses: The experimental design follows that of Bastani et al. (2022) and seems reasonable to me. However, as also discussed in Bastani et al. (2022), for the group-conditioned coverage setting, it may be more meaningful to consider the threshold-calibrated version as the evaluation metric. Supplementary Material: Yes, I checked the proofs of Lemma A.1, Theorem 3.5, and Theorem 3.6. Relation To Broader Scientific Literature: This paper focuses on sequential online conformal prediction problems, initially studied by Gibbs \& Candès (2021) and Gupta et al. (2022). The first part establishes an interesting connection to swap regret, a concept studied in the online learning literature. In the second part, the authors propose and analyze a group-conditional ACI algorithm, which can be viewed as a natural extension of ACI in Gibbs & Candès (2021). Essential References Not Discussed: Some prior works have also explored the use of online learning algorithms for online conformal prediction but with different notions of regret, such as strongly adaptive regret (Bhatnagar et al., 2023) and discounted regret. It would be helpful to discuss these papers to better position the current work within the broader literature. Bhatnagar, A., Wang, H., Xiong, C., & Bai, Y. (2023). Improved online conformal prediction via strongly adaptive online learning. ICML 2023. Zhang, Z., Bombara, D. & Yang, H. (2024).
Discounted Adaptive Online Learning: Towards Better Regularization. ICML 2024. Other Strengths And Weaknesses: My main concern is that the paper's main results feel somewhat incomplete. - In the first part, after establishing the connection between swap regret and threshold-calibrated coverage, a natural follow-up question is whether this insight can lead to a new algorithm for online conformal prediction and how its coverage guarantee compares to the state-of-the-art methods, such as MVP in Bastani et al. (2022). Addressing this would help demonstrate the strength of the equivalence result. - In the second part, my understanding is that a key issue with the ACI algorithm is that the optimal learning rate $\eta$ depends on unknown quantities of the sequence, as noted in Gibbs & Candès (2022). However, this issue does not appear to be discussed in the current submission. Other Comments Or Suggestions: - In the abstract (Line 027), "follow the perturbed leader" should be corrected to "follow the regularized leader". - It would be more natural to define the $\Phi$-regret as the opposite of the version currently used in this paper. This way, in Theorems 3.2, 3.5, 3.6, 3.7, 3.8, the regret will be expressed in terms of the pinball loss rather than the negative of pinball loss, which aligns better with standard conventions. - In Theorems 3.2 and 3.5, the distribution should be $(\alpha,\rho,r)$-smooth instead of $(\rho,r)$-smooth. Also, this definition appears to be related to the definition given in Definition 3.1 of Bastani et al. (2022), which is worth discussing. Questions For Authors: - In the proof of Lemma A.1, the authors state that by the smoothness condition on $\mathcal{D}$, we have $|N_1 - qT| \leq \rho T/2$ (Line 644). Could you please elaborate on this? It seems that the bound should be $|N_1- \lceil qT \rceil| \leq \rho T$, and it is unclear where the additional factor of $1/2$ comes from. 
- Also, on Line 654, you mention that the final inequality follows from evaluating the expression at the optimal values of $N_1 = qT$ and $N_2 + N_3 = (1-q)T$. Could you clarify in what sense these values are "optimal"? Code Of Conduct: Affirmed. Overall Recommendation: 2
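As background for the pinball-loss discussion above, a toy numerical sketch (illustrative, not from the paper under review) of the fact the review relies on: the minimizer of the pinball loss at level $q$ is the empirical $q$-quantile, so a threshold with low pinball loss yields coverage close to $q$ when the score distribution is smooth.

```python
import numpy as np

def pinball_loss(tau, scores, q):
    """Average pinball (quantile) loss of a fixed threshold tau at level q."""
    diff = scores - tau
    return np.mean(np.maximum(q * diff, (q - 1) * diff))

rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 1.0, size=10_000)  # smooth (uniform) score distribution

# Grid-minimize the pinball loss over candidate thresholds.
grid = np.linspace(0.0, 1.0, 1001)
tau_star = grid[int(np.argmin([pinball_loss(t, scores, q=0.9) for t in grid]))]

# The minimizer sits at the empirical 0.9-quantile, so the threshold
# tau_star covers roughly a 0.9 fraction of the scores.
coverage = float(np.mean(scores <= tau_star))
```

With a heavily clustered (non-smooth) score distribution this link between pinball-optimality and exact coverage breaks, which is the role of the smoothness assumption debated below.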
Rebuttal 1: Rebuttal: Thank you for your thoughtful and detailed comments! First we would like to note a small change to the statement of Theorem 3.5. It should read: $|Cov(\Pi_T, G_\tau) - q| \leq \frac{\rho}{2} + \frac{\rho}{rn} + \sqrt{\frac{2\gamma}{T_{G, \tau}\alpha r} + \frac{\rho}{\alpha}\left(\frac{1}{r} + \frac{2}{n}\right)}$. * The cube in $T_{G, \tau}$ was a typo. $\sqrt{\frac{2\gamma}{ T_\tau \alpha r}}$ on line 705 should be $\sqrt{\frac{2\gamma T_\tau}{\alpha r}}$. * The result in Lemma A.1. should read $|a-b| \leq \sqrt{\frac{2\gamma}{T\alpha r} + \frac{\rho}{\alpha}\left(\frac{1}{r} + \frac{2}{n}\right)}$, translating to an additional constant term in Theorem 3.5. We explain this below. #### Addressing your questions / comments: 1. **Dependence on $\rho$ and $T_{G, \tau}$ in Theorem 3.5:** It's true that $T_{G, \tau}$ may grow slower than $\gamma$ for some $\tau$. However, we think of overall threshold-calibrated coverage as the sum of coverages $Cov(\Pi_T, G_\tau)$, weighted by the size of $G_\tau$. For each threshold $\tau$, either $T_{G, \tau}$ grows slower than $T$, and its weighted contribution $T_{G, \tau}/T$ goes to zero, or it grows on the order of $T$, and Theorem 3.5 tells us that miscoverage goes to zero - so overall threshold-calibrated coverage should converge. We will add this detail in revision. The dependence on $\rho$ is unavoidable, since exact coverage $q$ comes at the $q$-th quantile of the empirical distribution over thresholds. If a single value is allowed to carry $\rho$ of the total probability weight, then the closest we may be able to get is $q \pm \rho/2$. #### 2. **Stronger quantitative equivalence result:** The relationship between swap regret wrt squared loss and mean calibration is not in general tight because of choices in how calibration is normally measured ($\ell_\infty$/$\ell_1$ norm vs $\ell_2$ norm). 
Similarly, we don't believe the relationship between quantile calibration and swap regret wrt pinball loss is tight in the metric we give, but there may be another in which it is. We can note this in the revision. #### 3. **Using threshold-calibrated coverage as a metric:** We agree that it would be interesting to compare the performance of existing swap regret algorithms against MVP using threshold-calibrated coverage as a metric. However, one of the key points demonstrated in this paper is that achieving threshold-calibrated coverage is at least as difficult as achieving no swap-regret, which requires more computational overhead than obtaining external regret (no-swap-regret algorithms are based on finding fixed points of various sorts). Thus, there is a necessary tradeoff in terms of algorithmic simplicity if threshold-calibrated coverage is what we want - and we believe that MVP already occupies a good place in the tradeoff space, as a more complicated algorithm with a stronger qualitative bound. But as we see in Sections 4 & 5 (and in experiments), if all we want is group conditional coverage (giving up on threshold calibration), then MVP is overkill - we can get much simpler algorithms with faster convergence to the desired coverage rate. For marginal coverage, ACI already demonstrated this was possible compared to MVP - it is a strikingly simple and efficient algorithm that obtains marginal (not threshold calibrated) coverage. In our work we show that we can recover the same kind of simple, efficient algorithms even with group conditional coverage. #### 4. **How an optimal learning rate $\eta$ is chosen:** In both our paper and in ACI, choosing $\eta = 1$ gives the best rate of convergence towards desired coverage ($O(1/\sqrt{T})$ for ours, $O(1/T)$ for ACI). The discussion of optimality of $\eta$ in Gibbs & Candes (2021) is to do with balancing between having quick rates of convergence and having low volatility in predicted thresholds. 
The larger $\eta$ is, the quicker coverage converges and the more adaptive predictions are to adversarial shifts. The smaller $\eta$ is, the less predictions fluctuate round to round. They use this to describe how $\eta$ may be chosen in conditions where the distribution shift can be measured. Our work inherits the same balance between convergence and volatility, but since we are interested specifically in adversarial coverage we didn’t discuss this in detail. Empirically, we found indeed that the convergence rate improves the closer to 1 that $\eta$ gets. #### 5. **Lemma A.1 questions:** We should have the bounds $|N_1 - qT|, |N_2 + N_3 - (1-q)T| \leq \rho T/2 + \rho r T/ n$. The bound of $\rho T / 2$ is true if $a$ is the exact $q$-th quantile (or the closest value to it). Since $a$ may be at a distance of $1/n$ away from this value, there should be an additional term of $\rho r T / n$. The meaning of “optimal” was in reference to getting a lower-bound on the terms involving $N_1, N_2$ and $N_3$. Based on the updated bounds above, this appears as an additional term $-(b-a)(\frac{\rho T}{2} + \frac{\rho rT}{n})$ on Line 652. --- Rebuttal Comment 1.1: Comment: I thank the authors for their responses and for addressing my concerns. I have the following remarks: - Regarding point 2, I am not sure I fully understand the authors' comment. First, it seems that the swap regret is with respect to the pinball loss rather than the squared loss. Moreover, regarding the discussions on $\ell_{\infty}/\ell_{1}$-norm v.s. $\ell_2$-norm, I think the authors are suggesting that the threshold-calibrated coverage in Definition 2.4 measures calibration in terms of the $\ell_{\infty}$-norm, since $\gamma$ is the maximum of the coverage error among all the groups. Accordingly, the authors propose that we may establish a more quantitative equivalence by choosing a different norm for measuring coverage error. Could the authors please clarify? 
- Regarding point 3, the authors appear to suggest that the first part of their paper should be viewed as a *negative result*, showing that "achieving threshold-calibrated coverage is at least as difficult as achieving no swap-regret". However, this was not my initial impression when reading the paper. On the contrary, the authors state that "This [the equivalence result] gives new algorithms for guaranteeing group conditional multivalid coverage", which seems to frame the result more positively. If the intent is indeed to convey a negative result, it would be helpful for the authors to make this message more explicit. Furthermore, while it is true that achieving sublinear swap regret is generally more computationally demanding than external regret, in the context of online conformal prediction, we are dealing with a one-dimensional pinball loss, which may be more tractable. Thus, I am also uncertain about the strength of this negative result. - Regarding Lemma A.1, I may be missing something obvious: if $a$ is the exact $q$-th quantile of the set $\\{\tau_i\\}\_{i=1}^T$, then $a$ corresponds to the $\lceil q T\rceil$-th smallest element in $\\{\tau_i\\}\_{i=1}^T$. Given the smoothness assumption on $\mathcal{D}$, there can be at most $\rho T$ elements in the set equal to $a$. This suggests that $|N_1 - \lceil q T \rceil | \leq \rho T$, rather than $\rho T/2$. In any case, this appears to be a minor issue. --- Reply to Comment 1.1.1: Comment: Thank you for engaging --- we know reviewing is important and demanding work and we sincerely appreciate it. In response to your questions: 1. Yes, in our paper we give a qualitative (i.e. non-rate preserving) equivalence between threshold calibrated coverage on the one hand and swap regret with respect to pinball loss on the other. 
We bring up (mean) calibration and swap regret with respect to squared error only to point out that in the more standard setting in which there is an equivalence between a kind of "calibration" and swap regret, the situation is analogous --- the equivalence is qualitative, not tight/rate preserving. In the case of mean calibration, there are multiple competing measures of calibration error, and so it is not surprising that they do not have a tight quantitative relationship with a fixed measure of regret (they differ amongst themselves). We here suggest that the situation may be similar in the case of threshold calibrated coverage. But we don't want to belabor the analogy --- our only point is that you are correct that the relationship is not rate preserving, but that this is not unexpected. #### 2. The relationship between swap regret and threshold calibrated coverage is not a negative result per-se --- swap regret is a stronger condition than external regret, and threshold calibrated coverage is a stronger condition than coverage on its own. There is an algorithm for obtaining swap regret in one-dimensional regression problems with respect to one dimensional predictions just as we use here --- see e.g. "Oracle Efficient Online Multicalibration and Omniprediction" from SODA 2024 --- and this algorithm is very similar to more standard swap regret algorithms like Blum/Mansour. In particular it requires computing the top eigenvector of a matrix at every iteration, and so is not as lightweight as simple algorithms like online gradient descent. So, our equivalence indeed gives a new algorithm for obtaining threshold calibrated coverage, but not one that would be substantially more lightweight than the existing algorithm of Bastani et al (we explicate this in more detail in our response to Reviewer b1E4's most recent questions). 
This is why we focus our empirical evaluation on our online gradient descent based algorithm, which is indeed substantially more practical (in terms of both convergence rates and per-iterate run-time) than any previously known algorithm for obtaining group conditional coverage in the online adversarial setting. #### 3. The confusion might arise from how we are defining the $q$-th quantile. Since we are trying to minimize pinball loss, we are looking for the value that comes closest to covering q of the probability weight (or equivalently $qT$ of the values in the set defining the distribution), which is not the same thing as the smallest value that covers *at least* q of the probability weight (since this isn't the standard definition, we will clarify this in revision). Since the point $a$ along [0,1] where we go from covering less than $qT$ of the values to more can itself hold at most $\rho$ of the probability weight, the worst-case scenario is when below $a$ there is (approximately) $q - \rho/2$ and above there is $(1-q) + \rho/2$ of the probability weight. Thanks again for engaging, and please let us know if there are any additional questions!
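The $\pm\rho/2$ slack described in point 3 of the reply above can be checked numerically; a toy sketch (illustrative values, not from the paper) in which a point mass of weight $\rho$ at a value $a$ makes the achievable coverage levels jump, so the best threshold misses $q$ by exactly $\rho/2$:

```python
import numpy as np

q, rho, a = 0.5, 0.2, 0.7
n = 1000
# Empirical threshold distribution: a point mass of weight rho at a,
# weight q - rho/2 strictly below a, and the rest strictly above a.
scores = np.concatenate([
    np.full(int(rho * n), a),
    np.linspace(0.0, 0.4, int((q - rho / 2) * n)),
    np.linspace(0.8, 1.0, n - int(rho * n) - int((q - rho / 2) * n)),
])

cov_below = float(np.mean(scores < a))   # coverage of a threshold just below a
cov_at = float(np.mean(scores <= a))     # coverage once the point mass is included
# Coverage jumps from q - rho/2 to q + rho/2 at a, so no threshold
# achieves coverage q exactly; the smallest achievable gap is rho/2.
gap = min(abs(cov_below - q), abs(cov_at - q))
```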
Summary: This paper explores how to make better predictions with reliable uncertainty estimates in challenging, real-world settings—like when data changes over time or comes from many different groups. It focuses on *conformal prediction*, a method that builds a set of likely outcomes for each input, with a guarantee that the true answer falls inside that set a desired percentage of the time (say, 90%). Traditionally, conformal prediction works well in nice, stable environments, but struggles in harder, online or adversarial scenarios. The key insight of this paper is that existing learning techniques based on *external regret* (which just try to do as well as the best fixed action in hindsight) aren't enough to ensure these prediction sets stay reliable—especially when we care about fairness or accuracy across different groups. Instead, the paper shows that a stronger concept called *swap regret*—which says “I wouldn’t have done better even if I had consistently changed certain decisions in a smarter way”—is exactly what's needed to ensure these coverage guarantees hold, even in complex settings. [That's my hand wavy understanding] Using this idea, the authors design a simple and effective new algorithm called GCACI that can adjust its uncertainty sets over time for many groups simultaneously. They prove this method works in theory, and show in experiments that it beats previous approaches by achieving better coverage, more quickly, and with less computational effort. Claims And Evidence: The core claims are mostly theoretical and comes with formal proofs. The paper essentially provide this: - External regret is not sufficient for coverage guarantees: counter-example appears in adversarial setting or even iid but with group structure. Instead Swap regret is both necessary and sufficient for coverage. 
- Derive new conformal algorithms from no-swap-regret learning and achieve group-conditional coverage Methods And Evaluation Criteria: The evaluations are quite weak; more benchmarking could be done with more datasets. Theoretical Claims: I only skimmed quickly and did not check the details. Experimental Designs Or Analyses: The experiments section could be improved with richer experiments. Supplementary Material: No Relation To Broader Scientific Literature: The paper falls into the line of work connecting conformal prediction and online learning, mostly borrowing ideas from the latter to improve understanding of the former. Essential References Not Discussed: Adaptive Conformal Inference by Betting (Aleksandr Podkopaev, Darren Xu, Kuang-Chih Lee) Other Strengths And Weaknesses: The paper is reasonably well written and the core contributions are solid. The weakness is mostly the experimental section. Other Comments Or Suggestions: NA Questions For Authors: - Your coverage guarantees rely on a smoothness condition on the empirical distribution of true thresholds. This assumption is key for bounding the difference between pinball loss regret and coverage error. Could you clarify how realistic this assumption is in practice, especially in adversarial or highly structured environments where the distribution of thresholds may be heavily clustered or even discrete? Have you observed situations (empirically or theoretically) where violation of this smoothness significantly affects coverage, and do you foresee ways to either relax the assumption or detect when it may not hold? - In your setup, the adversary is allowed to choose the sequence of examples (or losses), but the smoothness assumption effectively constrains the sequence of threshold values. How should we interpret the adversary's power in this setting? 
Is the smoothness constraint best understood as an assumption on the data-generating process (i.e., nature, not an adversary), or are there adversaries within your model who can adaptively shape the threshold sequence while still satisfying smoothness? More generally, could you comment on the interaction between adversarial flexibility and the structural assumptions needed for regret-to-coverage equivalence? Update after rebuttal: The authors provided some more clarifications about the novel contributions, which is nice; however, several points regarding the importance of swap regret and the strength of the propositions compared to previous (group-)conditional coverage results remain quite unclear to me. I think the paper would benefit from more rewriting and concrete examples. That being said, I am globally positive but cannot increase my score to a full accept. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your detailed feedback! We will try to address your concerns and comments. First, as a clarification: there are two kinds of relationships we develop in this paper - (1) the relationship between the *properties* of no-regret and of conformal coverage guarantees, agnostic to the algorithm used. (2) specific algorithms that are able to achieve both kinds of properties, even if the properties do not imply one another. One of the motivations for this paper was identifying that though external regret and coverage guarantees do not imply each other (i.e. an arbitrary algorithm that achieves no external regret cannot guarantee coverage guarantees, and vice versa), there exist specific algorithms that are able to achieve both properties. GCACI is an example of this - the analysis for coverage does not go through regret. 1. **Smoothness condition on distribution of thresholds:** Smoothness is a mild condition and we can view it either as an assumption on the adversary (a “smoothed analysis” like assumption --- for example, the adversary can pick an arbitrary score which is then perturbed by small amounts of noise), or alternately as a condition that we can enforce ourselves. If we want to enforce it ourselves we simply perturb the scores from a uniform distribution $[-\epsilon, \epsilon]$ before passing them to the algorithm. We then randomize the algorithm’s threshold in this range. This doesn’t require us to make assumptions on the adversary at all. Note that it is not possible to draw a direct relationship between coverage and regret without some kind of smoothness guarantee. As you mention, the relationship between regret wrt pinball loss and coverage relies on the non-conformity scores being sufficiently spread out. 
If, for example, the distribution over scores is a point mass on some value $a$, the only achievable threshold-calibrated coverage rates are 0 and 1 - to achieve low miscoverage for any quantile $q$ in between, some assumption of smoothness is necessary. Similarly, in the stochastic setting, marginal $(1-\alpha)$-coverage could be achieved with predicted values arbitrarily far away from $a$ as long as $(1-\alpha)$ fraction of the values were below $a$, but this obviously doesn't translate into a regret guarantee. We also emphasize that the smoothness guarantee is only required for algorithms achieving coverage through swap regret. Our new algorithm GCACI achieves coverage through properties of the class of FTRL algorithms, and thus no assumptions on smoothness need be made. This is an advantage over the MVP algorithm we compare against, which does require it. #### 2. **Power of adversary when smoothness is required:** Since smoothness can be enforced post hoc via the perturbation strategy, we do not typically enforce any kind of constraints on the adversary directly. #### 3. **Experiments not rich enough:** We included experiments to compare against all the benchmarks described in Bastani et al (2022), which to our knowledge details the only other algorithm for groupwise coverage in adversarial settings. We would be happy to include additional experiments that may be relevant to evaluation, if the reviewer has any particular benchmarks in mind. --- Rebuttal Comment 1.1: Comment: Thanks for the comments and clarification. > We included experiments to compare against all the benchmarks described in Bastani et al (2022), which to our knowledge details the only other algorithm for groupwise coverage in adversarial settings I am not sure I understand this comment. In my understanding, the line of work by Gibbs & Candes, with the online learning view, can adapt to adversarial settings. Plus, in another follow-up they provided an extensive analysis of conditional coverage. 
Mainly the idea is that conditional coverage can be written as an orthogonality condition w.r.t. all measurable functions of the test point. Thus, to relax this notion they provided some restricted classes of functions that cover group-conditional coverage. As such, most of the online CP follow-ups could be combined with these types of restrictions. So I am quite confused by this comment. It would be nice if the authors could clarify. Sorry if I am missing some obvious point. --- Reply to Comment 1.1.1: Comment: Hi! Thanks for engaging, we really appreciate it. There are two lines of work that are relevant to this discussion, so let's try to disentangle them. There is the line of work on conformal prediction in online adversarial settings, using 1-dimensional gradient descent on pinball loss and variants. This line started with "Adaptive Conformal Inference Under Distribution Shift" by Gibbs and Candes. These algorithms update a single parameter (mapping to the threshold on the conformity score) and the coverage analysis depends on the fact that when one runs gradient descent on the pinball loss of a 1-dimensional quantile estimate, the iterates have magnitude bounded by 1. This line of work does not give groupwise coverage guarantees (and cannot, as it uses a 1-dimensional parameterization of the threshold). There is also a line of work on conditional coverage in conformal prediction. We are guessing the paper you are referring to here is "Conformal prediction with conditional guarantees" by Gibbs, Cherian, and Candes. The conditional guarantees in this work (as in the prior work in this literature) follow from the first order optimality conditions of pinball loss when optimizing a d-dimensional linear function mapping features to thresholds. In the groupwise case, the d dimensions correspond to indicator functions for d groups. When you run gradient descent on this parameterization, the iterates are no longer bounded by 1 --- the ACI analysis breaks. 
It is possible to analyze this algorithm --- this is one of the contributions of our paper --- but it requires a new analysis and gets a fundamentally different kind of bound. This is new to our work and hasn't been done previously. As we note in our submission, there is an independent and concurrent paper that discovers the same algorithm ("Gradient Equilibrium in Online Learning: Theory and Applications" by Anastasios Angelopoulos, Michael Jordan, Ryan Tibshirani) --- so we are confident that this work is not implicit in the prior literature. We're happy to engage further if you have any other questions --- as we said, we appreciate the opportunity to interact and are grateful for the time and effort you have spent in reviewing!
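To make the distinction discussed in this thread concrete, here is a small numerical sketch (our reading of the two update rules; the variable names and toy data are illustrative, not the authors' code). Vanilla ACI runs scalar online gradient descent on the pinball loss; the group-conditional variant parameterizes the threshold as a sum of per-group offsets and updates every active coordinate. Even with overlapping groups, the long-run coverage of each group approaches the target $q$:

```python
import numpy as np

def pinball_grad(tau, score, q):
    # Derivative in tau of q*max(score - tau, 0) + (1 - q)*max(tau - score, 0).
    return -q if tau < score else 1.0 - q

def gc_aci_step(theta, active, score, q, eta):
    """One step of a group-conditional ACI-style update: the played threshold
    is the sum of the active groups' offsets, and each active coordinate
    takes the same pinball-gradient step."""
    tau = float(theta[active].sum())
    g = pinball_grad(tau, score, q)
    theta = theta.copy()
    theta[active] -= eta * g
    return theta, tau

rng = np.random.default_rng(0)
q, eta = 0.9, 0.05
theta = np.zeros(2)            # theta[0]: "everyone" group, theta[1]: subgroup B
hits_all, hits_b = [], []
for _ in range(20_000):
    in_b = bool(rng.integers(0, 2))
    score = rng.uniform(0.5, 1.5) if in_b else rng.uniform(0.0, 1.0)
    active = [0, 1] if in_b else [0]
    theta, tau = gc_aci_step(theta, active, score, q, eta)
    hits_all.append(score <= tau)
    if in_b:
        hits_b.append(score <= tau)

cov_all, cov_b = float(np.mean(hits_all)), float(np.mean(hits_b))
# Empirical coverage approaches q = 0.9 both marginally and on subgroup B.
```

With a single always-active group this reduces to the scalar ACI update; note the per-coordinate iterates here are not confined to [0, 1], which is why (as the reply above says) the original ACI analysis does not carry over directly.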
EvoControl: Multi-Frequency Bi-Level Control for High-Frequency Continuous Control
Accept (poster)
Summary: The authors propose EvoControl, a hierarchical, bi-level policy network for motor control. EvoControl separates low-level high-frequency control and high-level, low-frequency control, by training two separate policies which output actions at different frequencies. The high-level policy is trained with RL, facilitating exploration and credit assignment due to the stronger impact of action noise and the more straightforward correlation between actions and their outcomes (compared to training a single high-frequency policy). The low-level policy is instead trained with evolution strategies, an algorithm less susceptible to the credit assignment problem. Experiments show that EvoControl can achieve better reward in continuous control tasks, which benefit from higher frequency decision making. This feature is especially relevant where safety is a concern. Claims And Evidence: 1) There exist CTMDPs in which higher frequency decisions lead to higher reward. This claim is clearly true, but do the environments considered for the evaluation have the characteristics requested by the constructive proof of prop. 2.1? 2) The baselines are rather ablations of EvoControl's components. Some performance gains are completely unexpected: for example, the locomotion environments considered in this paper are practically considered "solved" with standard RL methods. Instead, EvoControl claims, for example, a 5x performance improvement in Ant vs direct torque control (also, 2x in Walker2D, 4x in Hopper). These results need to be better motivated, because standard algorithms seem to use the capabilities of these robots nearly optimally. What does EvoControl discover that makes these robots run at unprecedented speeds? Methods And Evaluation Criteria: I am not confident about the correctness of the evaluations. The authors should provide better context, because it is counterintuitive that locomotion, pushing or balancing would benefit so much from more frequent control. 
The authors should verify that the high-frequency control does not interfere with the physics simulator, generating unrealistic behavior. Also, results from standard algorithms should be used to put these results into perspective - the fact that they are only reported in normalized form does not allow readers to understand whether the baselines achieve good results. I suspect the baseline performance detailed in Table 3 to be low compared to the state of the art in these environments. Theoretical Claims: I have issues with prop. 2.1. The claim is trivial: clearly there exist processes in which taking decisions more often is an advantage. Imagine considering one environment where the reward is proportional to the number of clicks of a button, and at every decision step one can press the button or lift the finger. The authors, however, define an environment with certain requirements to write the proof, which does not seem to be related to any of the environments used in the experiments. What is the point of proving the existence of environments where higher decision frequency is beneficial, if the requirements for such environments are then not identified in any test? Experimental Designs Or Analyses: The environments considered for evaluation (classic control environments) do not seem to be ideal to test the importance of high-frequency control, as they involve either periodic control (locomotion), balancing (pendulum) or reaching. The authors should explain why experiments on a real robot are only presented in the supplementary material. Supplementary Material: Checked the proof of prop. 2.1 and the robot experiments. The supplementary material is five times longer than the main text, which makes it difficult to thoroughly review it. Relation To Broader Scientific Literature: The algorithm presented in this work should be compared with other forms of hierarchical control, as pairing a low-level and a high-level controller is a common idea. 
Essential References Not Discussed: The background and literature review sections are extensive. However, the authors should justify why they do not compare EvoControl to the cited hierarchical RL methods. Other Strengths And Weaknesses: The strength of the paper is combining ES with PPO to make up for the shortcomings of each algorithm. The main weaknesses of the paper are lack of comparison with other methods and lack of analysis of very surprising results. Other Comments Or Suggestions: No additional comments. Questions For Authors: When the authors mention that they use MuJoCo Brax, what do they refer to? MuJoCo with GPU acceleration? The environments they are considering are very popular in the RL community, they should consider pretrained policies available online for comparison and verify that EvoControl really leads to an improvement in these environments. If the results stand, they should provide an analysis of the EvoControl policy. Code Of Conduct: Affirmed. Overall Recommendation: 2
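The reviewer's button-press example can be made concrete; a toy sketch (illustrative, not from the paper) in which return scales directly with decision frequency: the agent earns a unit reward each time its held action toggles from "lift" to "press", so acting at every base step strictly beats acting every k steps.

```python
def clicker_return(horizon, k):
    """Agent picks an action every k base steps and holds it in between.
    Reward 1 whenever the held action switches from 0 (lift) to 1 (press).
    The best open-loop policy simply alternates as fast as it is allowed to."""
    reward, prev = 0, 0
    action = 0
    for t in range(horizon):
        if t % k == 0:
            action = (t // k) % 2        # alternate at the decision frequency
        if prev == 0 and action == 1:
            reward += 1
        prev = action
    return reward

# Acting every step collects a click every 2 base steps; acting every
# 4 steps collects one only every 8 base steps.
fast, slow = clicker_return(100, 1), clicker_return(100, 4)
```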
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback. Below, we address each concern: > do the environments considered for the evaluation have the characteristics requested by the constructive proof of prop. 2.1? Thank you; **Prop. 2.1** formally motivates that certain CTMDPs can yield higher returns by acting at higher frequencies. To concretely illustrate this, we use two safety-critical environments: 1. **Safety Critical Reacher (Sec. 5.2, Table 4; App. B.1)**–Here, unmodeled collision detection requires rapid reaction to avoid a penalty. Policies with slower control frequencies respond too late and accumulate significant penalties, leading to substantially lower returns. 2. **Safety Critical Halfcheetah** – We created a *new environment* that likewise penalizes collisions and requires immediate corrective actions. We observe that policies acting faster achieve higher returns, consistent with Prop. 2.1– https://imgur.com/a/LNBypMy. > The authors should provide better context, because it is counterintuitive that locomotion, pushing or balancing would benefit so much from more frequent control....results from standard algorithms should be used to put these results into perspective **All our benchmark environments are non-standard** because we increase the control frequency to 500Hz (episode length = 1000 steps, i.e., 2 seconds of real time) and remove the typical control-cost term. Each original Gym MuJoCo environment is typically controlled at 12.5–100Hz. Therefore, our modified settings differ substantially from standard benchmarks. We made these changes to investigate a question within robotics: whether we can remove the lower-level fixed controllers and instead directly control the motor torques at high frequency, and what benefits (e.g., faster reactions, finer control) this higher-frequency control can enable, while revealing the challenges standard learning algorithms face in these now longer-horizon tasks. 
EvoControl addresses precisely these high-frequency demands. * **Table 3 (main paper)** reports relative policy performance at 1M high-level steps. * We include **explicit non-normalized results** of Table 3 in *Appendix J.7, Table 25*, alongside extended training for all baselines up to 1 billion steps in App. J.6. We now include this in Sec. 5. > What does EvoControl discover that makes these robots run at unprecedented speeds? In these high-frequency environments, standard policy learners struggle with exploration over long horizons (Peng, 2017)—EvoControl matches the exploration efficiency of temporally abstract goal-reaching methods (Fig. 2) and converges faster to high-reward policies **(App J.10, Figure 18)**. Rollouts (App. J.10, Fig. 18) show EvoControl often applies maximal torques to reach goals quickly while retaining precise control. This likely arises because direct high-frequency policy learning algorithms struggle with *less efficient exploration* and *slower convergence* (**Figure 2, Sec. 5.2**) and can therefore get stuck in suboptimal policies (L 238, Left Col.) (App. J.6 & J.4), while low-frequency policies using *lower-level goal-based policies* struggle to learn accurate yet refined high-frequency motor movement and rely on tuned fixed low-level controllers. We now include this in Sec. 5.2. > results from standard algorithms should be used to put these results into perspective ... consider pretrained policies Thank you; we now **include four additional standard baselines** using the publicly available Brax implementations of Soft Actor-Critic and PPO, for both high- and low-frequency control. We observe that EvoControl still outperforms them: https://imgur.com/a/yVFuNKk. Pretrained policies for standard MuJoCo tasks cannot be used because our tasks differ significantly (see above). Instead, we use official high-performance Brax code to ensure the baselines are well-optimized. 
> experiments on a real robot are only presented in the supplementary material We appreciate this point and will incorporate real-robot results directly into the main paper (using our extra page) to showcase practical feasibility of EvoControl. > justify why they do not compare to ... hierarchical RL In hierarchical RL, the most relevant method is HIRO, an off-policy method that assigns state-based goals to a low-level policy, rewarded by sub-goal completion. This closely resembles our *“fixed-controllers”*, which also receive state-based goals (L192). However, as shown in Table 4, sub-goal methods cannot handle the high-frequency reactions needed in safety-critical tasks. HIRO can also be unstable; with the authors’ hyperparameters and our training budget, it failed to converge and performed near-random, highlighting the need for extensive tuning. > MuJoCo Brax Yes, MuJoCo with GPU acceleration (MuJoCo XLA (MJX)). ---- *We hope that all of the reviewers’ concerns have been addressed and, if so, they would consider updating their score. We’d be happy to engage in further discussions.*
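For readers following this exchange, a minimal sketch of the bi-level rollout structure under discussion (our paraphrase of the setup; the toy environment, policies, and names are illustrative, not the authors' code): a slow high-level policy emits a command every k simulator steps, while a fast low-level policy outputs torques at every step. In EvoControl the high-level policy would be trained with RL (PPO) on such rollouts and the low-level policy with evolution strategies.

```python
class ToyEnv:
    """1-D toy plant integrated at the 'high' simulator frequency."""
    def reset(self):
        self.x = 1.0
        return self.x
    def step(self, torque):
        self.x += 0.01 * torque          # crude fixed-step integrator
        return self.x, -abs(self.x)      # reward: stay near the origin

def bilevel_rollout(env, high_policy, low_policy, horizon, k):
    """High-level policy acts every k steps; low-level policy acts every step,
    conditioned on the latest high-level command."""
    state, total_reward = env.reset(), 0.0
    command = None
    for t in range(horizon):
        if t % k == 0:
            command = high_policy(state)       # low-frequency decision
        torque = low_policy(state, command)    # high-frequency decision
        state, reward = env.step(torque)
        total_reward += reward
    return total_reward

# Illustrative policies: the high-level command is a coarse correction and the
# low-level policy simply tracks it (a learned fast policy would go here).
ret = bilevel_rollout(ToyEnv(), lambda s: -s, lambda s, c: c, horizon=200, k=10)
```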
Summary: The paper learns high-frequency control with a two-level structure. A high-level policy working at a lower frequency is trained with PPO. A low-level high-frequency policy is obtained with evolutionary algorithms. The two-level design works better than directly training a single high-frequency control policy with RL. A learned low-level policy achieves better results than using a fixed PD controller and is robust to PD parameters. Notably, the proposed method is better in tasks that require adaptive reactions to the environment. Claims And Evidence: The claim that using two-level control policies can both handle long horizons and achieve fine-grained control performance is reasonable and is supported by clear evidence. Methods And Evaluation Criteria: The proposed method generally makes sense, but it is not clear whether it directly suits control applications. Specifically, what is the inference speed of the low-level policy? Can it be deployed at its desired frequency in the real world? Theoretical Claims: I did not check the detailed proofs, but the claim seems correct from the proof sketch. Experimental Designs Or Analyses: 1. For Table 3, it is better to report the raw returns without normalization, so that readers can better compare the results with other methods on the benchmark. 2. The experiments on the safety-critical reacher task are appreciated since the task requires high-frequency closed-loop control to get good performance. However, the task has only one degree of freedom to control. I think experiments on similar safety-critical tasks with more complex dynamics would be more valuable for practical use. Supplementary Material: I reviewed Appendix E and I. Relation To Broader Scientific Literature: The key contributions of the paper are mostly related to RL techniques for long horizon problems, such as hierarchical RL. Essential References Not Discussed: I did not notice any missing essential related works. Other Strengths And Weaknesses: None.
Other Comments Or Suggestions: In line 83, CTMDP occurs without explanation. Questions For Authors: None. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their thoughtful and constructive feedback. Below, we address each concern: > not clear whether it directly suits control applications. Specifically, what is the inference speed of the low-level policy? Can it be deployed at its desired frequency in the real world? Thank you for highlighting this point. We confirm that EvoControl can indeed be deployed at its desired frequency in real-world settings. In **Appendix J.14 (p. 51)**, we provide a **real-robot validation** using a 7-DoF Franka Emika Panda on a tabletop manipulation task, where the high-level policy operated at 10Hz and the low-level policy at 200Hz in a **zero-shot sim-to-real transfer**, https://imgur.com/a/Eq2Dewy, https://imgur.com/a/9yUONbW. We measured the low-level network’s inference time to be **64 microseconds ($\mu$s)** on average (over 1,000 samples), corresponding to a potential 15 kHz operating frequency—well beyond typical robotic control rates. Our real-world experiments yielded three main findings: 1. *Stable and Fast:* The low-level network infers rapidly (64 µs per step), comfortably enabling deployment at 200 Hz or higher. In practice, the main bottleneck would typically be observation latency (on the order of a few milliseconds). Thus, the low-level controller itself is not the limiting factor for overall reactivity. 2. *Better Collision Handling:* Compared to a tuned PD controller, EvoControl produced *lower contact forces*, highlighting the benefits of learned high-frequency torque control for safety. 3. *No Fine-Tuning Required:* The policies trained in *MuJoCo (XLA)* transferred directly to the real robot without additional adjustments. These results strongly support EvoControl’s practicality in real-world environments, clearly demonstrating that the high-frequency policy can indeed be deployed at the desired rate despite practical observation latency constraints. 
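The 64 µs inference-latency figure above can be reproduced in spirit with a short benchmark. The sketch below is purely illustrative: it times a small numpy MLP as a stand-in for the real low-level network, and the layer sizes and the 19-observation/6-torque dimensions are assumptions borrowed from the Halfcheetah discussion elsewhere in the rebuttal, not the actual architecture.

```python
import time
import numpy as np

def make_mlp(sizes, rng):
    """Random weights for a small feed-forward net (stand-in for the policy)."""
    return [(rng.standard_normal((a, b)) * 0.1, np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Plain MLP forward pass with tanh hidden activations."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

rng = np.random.default_rng(0)
policy = make_mlp([19, 64, 64, 6], rng)  # illustrative shapes: 19 obs -> 6 torques
obs = rng.standard_normal(19)
forward(policy, obs)  # warm-up call before timing

n = 1000  # average over many calls, as in the rebuttal's measurement protocol
t0 = time.perf_counter()
for _ in range(n):
    forward(policy, obs)
mean_latency = (time.perf_counter() - t0) / n
max_freq_hz = 1.0 / mean_latency  # highest control rate this latency supports
print(f"mean latency: {mean_latency * 1e6:.1f} us -> up to {max_freq_hz:.0f} Hz")
```

The same reasoning as in the rebuttal applies: if the mean latency is tens of microseconds, the network itself supports kHz-scale control rates, and observation latency becomes the practical bottleneck.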
> For Table 3, it is better to report the raw returns without normalization, so that readers can better compare the results with other methods on the benchmark. Thank you; that is a great suggestion. We already include the raw returns without normalization for Table 3 in Appendix J.7, Table 25. Allow us to kindly clarify that **all our benchmark environments** are non-standard because we increase the control frequency to 500Hz (episode length = 1000 steps, i.e., 2 seconds of real time) and remove the typical control-cost term (Appendix E). Each original Gym MuJoCo environment is typically controlled at 12.5–100Hz. Therefore, our modified settings differ substantially from standard benchmarks. We made these changes to investigate, within robotics, the question of *whether* we can remove the lower-level fixed controllers and instead directly control the motor torques at high frequency, and what benefits (e.g., faster reactions, finer control) this higher-frequency control can enable, while revealing the challenges standard learning algorithms face in these now longer-horizon tasks. EvoControl addresses precisely these high-frequency demands. > I think experiments on similar safety-critical tasks with more complex dynamics would be more valuable for practical use. Thank you for the helpful suggestion. In response, we introduce a **new “Safety Critical Halfcheetah” environment** with six degrees of freedom and an observation dimension of 19, exhibiting significantly more complex dynamics than our previous 1D safety task. Specifically, we add a blocking wall in 25% of the episodes, with collision force only observable upon contact. This wall blocks the cheetah’s forward path and incurs a penalty for any impact. As a result, the policy must quickly retreat if it detects a collision; otherwise, it continues forward.
This design underscores the importance of higher-frequency actions for fast responses in unmodeled safety-critical situations, following a similar setup to "Safety Critical Reacher". Our full results (at https://imgur.com/a/LNBypMy) show that *EvoControl* still outperforms the baselines in this more challenging safety-critical scenario. > In line 83, CTMDP occurs without explanation. Thank you for catching this typo; line 83 now reads "continuous-time Markov decision processes (CTMDPs)". --- *We hope that all of the reviewers’ concerns have been addressed and, if so, they would consider updating their score. If any issues remain, we would be glad to discuss them further, especially in light of your current evaluation.* --- Rebuttal Comment 1.1: Comment: Thank you for the answers. I appreciate the real robot deployment results and the added "Safety Critical Halfcheetah" experiments. As my concerns are resolved, I would raise the score accordingly. --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and for reevaluating our work so favorably. We are glad that the real robot deployment results and the new “Safety Critical Halfcheetah” experiments have addressed your concerns. Your input has been very valuable in strengthening our paper, and we appreciate the time and thought you invested in your review.
Summary: The manuscript presents a novel bi-level optimization reinforcement learning method for high frequency control. The method combines a high-level low frequency policy with a low-level high frequency controller. Both controllers are learnable. The authors motivate the bi-level learning/optimization scheme. Overall, the manuscript, the experimental validation and result analysis are high quality. The results are also very encouraging. Finally, the authors provide motivation (and analytical proof; although trivial imho) for why a higher frequency controller is always more desirable if it can be learned effectively. Claims And Evidence: The authors back up their claims using both theoretical (although a bit weak) and empirical (quite strong) analysis. Methods And Evaluation Criteria: The evaluation pipeline makes sense and the criteria is rather rigorous. I appreciate the 128 rollouts for the metrics per policy (and 3 full training runs). Overall, every question that I had while reading the manuscript was answered later in the text. Theoretical Claims: This might be the weakest point of the manuscript. Although I do not see anything wrong with the proof, it seems rather trivial for the specific case of MDPs that the authors assume. Also, the authors do not really motivate why such MDPs exist, and why they are important. Overall, the proof/claim here seems superficial. I would, personally, be happy with the paper without the claim/proof. Although, nothing wrong with keeping it. Experimental Designs Or Analyses: As I said before, the authors have really thought out the experimental analysis. The authors have included all important baselines that I could think of, and the experimental analysis is thorough. Supplementary Material: I did not read the whole supplementary! It is 42 pages! I checked the proof, the extended related work and skimmed through the experimental results. Overall, the authors have put quite some effort into the manuscript. 
Relation To Broader Scientific Literature: Continuous control lies at the heart of many real-world systems, and having more accurate, robust and highly adaptive policies can be of great importance towards the widespread adoption of robots, and RL agents in general. Essential References Not Discussed: I do not think that the authors missed any important reference. Other Strengths And Weaknesses: The proposed method is not very novel (I would say the main novelty is incremental), but the presentation, experimental results and analysis are of top quality. Other Comments Or Suggestions: I have no other comments. Questions For Authors: I honestly do not have any questions left. Every question that arose while reading the manuscript was answered later in the text. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive feedback. Below, we address each concern: > authors do not really motivate why such MDPs exist, and why they are important ### (A) Why Such MDPs Exist Our Proposition 2.1 is meant to show that there are environments/continuous-time MDPs in which acting at higher frequency can strictly yield higher returns. While this might be theoretically straightforward, it underscores the practical reality that in certain tasks, reacting faster confers a distinct advantage. Indeed, when new or unmodeled information arrives between coarser time steps, lower-frequency controllers (or those that do not adapt quickly) risk missing events that incur large penalties or lose large rewards. Concretely, we focus on safety-critical tasks where immediate reaction to collisions or unexpected contacts is crucial. In these tasks, an undetected collision or a delayed response can rapidly cause catastrophic damage. The “Safety Critical Reacher” environment (Section 6.2 of our paper) exemplifies this scenario: when there is a sudden collision, the policy must quickly backtrack, or else it will suffer a considerable penalty for the extra collision force. A lower-frequency controller often cannot respond fast enough within that sub-interval. To further verify the existence of such MDPs, we introduce a **new “Safety Critical Halfcheetah” environment** with six degrees of freedom and an observation dimension of 19, exhibiting significantly more complex dynamics than our previous 1D safety task. Specifically, we add a blocking wall in 25% of the episodes, with collision force only observable upon contact. This wall blocks the cheetah’s forward path and incurs a penalty for any impact. As a result, the policy must quickly retreat if it detects a collision; otherwise, it continues forward. 
This design underscores the importance of higher-frequency actions for fast responses in unmodeled safety-critical situations, following a similar setup to "Safety Critical Reacher". Our full results (at https://imgur.com/a/LNBypMy) show that *EvoControl* still outperforms the baselines in this more challenging safety-critical scenario.

**Table 33, Safety Critical Halfcheetah Results**

|Same PPO high-level alg. ($\rho$) with|Safety Critical Halfcheetah ($\mathcal{R}$ ↑)|
|---------------------------------------------|----------------------------------------------|
|*Fixed Cont.* – PD Position|84.7±1.68|
|*Fixed Cont.* – PD Position Delta|0±0|
|*Fixed Cont.* – Random|0.0±0.0|
|*Direct Torque Cont.* – High Freq. (500Hz)|100±1.42|
|*Direct Torque Cont.* – Low Freq. (31.25Hz)|0±0|
|**EvoControl (Full State)**|**120±13.9**|
|**EvoControl (Residual State)**|**105±5.84**|
|**EvoControl (Target + Proprio.)**|**117±34.9**|
|**EvoControl (Target)**|**117±5.33**|

### (B) Why These Scenarios Are Important

These environments mirror real robotic applications, such as:
* *High-gain robotic arms near humans:* Where collisions must be minimized or mitigated the instant they occur, e.g., in collaborative assembly lines.
* *Aerial vehicles under sudden gusts:* Hovering drones or helicopters must correct torque and thrust when wind forces change abruptly.
* *Surgical robotics:* In certain procedures, micrometer-level slip or unintentional contact can cause tissue damage; responding at tens-of-milliseconds scale is vital.

In all these cases, the penalty for a delayed response can be high; hence, faster control loops can yield higher returns. Furthermore, we validated the real-world feasibility of EvoControl on a 7-DoF Franka Emika Panda (Appendix J.14, p. 51), initially demonstrating superior collision handling and reduced contact forces compared to a tuned PD controller.
Importantly, the policies trained in MuJoCo XLA transferred directly to the robot without requiring additional task-specific tuning, unlike fixed-PD controllers. These results underscore the practicality of using a learned fast controller in real-world scenarios. > proof/claim here seems superficial ... nothing wrong with keeping it. While the proof (Proposition 2.1) is simple, we keep it to clarify that high-frequency control is not just an intuitive guess but can be rigorously shown beneficial under specific conditions. > not very novel (I would say the main novelty is incremental) Though high-/low-frequency control is common, our approach: 1. Learns a neural fast controller (vs. fixed PD/sub-goal tracking only policies). 2. Combines on-policy PPO for the high-level and ES for the low-level, learning jointly, avoiding instability from direct high-frequency RL. 3. Demonstrates consistent real-world performance with minimal tuning. This synergy extends hierarchical RL to truly fast torque control while retaining exploration benefits from slower layers. --- *We hope that all of the reviewers’ concerns have been addressed. We’d be happy to engage in further discussions.* --- Rebuttal Comment 1.1: Comment: I thank the reviewers for the detailed rebuttal. I am happy with the response, and I will keep my score. I still believe this is incremental work, but the authors did an amazing job in providing detailed analysis and results which imho produces added value. --- Reply to Comment 1.1.1: Comment: Thank you once again for your positive feedback and for evaluating our work so favorably. We are delighted that you appreciated our *“amazing job in providing detailed analysis and results which … produces added value,”* as well as our focus on achieving more accurate, robust, and highly adaptive policies. Your perspective on the incremental nature of our contribution is duly noted, and we have done our best to clarify the unique aspects of our approach in the latest revision. 
Our key novelty is the *joint learning* of a slow (e.g., 30Hz) high-level policy (via PPO) and a fast (e.g., 500Hz) low-level proprioceptive controller (via ES), thus avoiding the instability common to direct high-frequency RL. Unlike standard sub-goal methods, our low-level controller optimizes the overall long-horizon episodic return. Moreover, EvoControl has been validated on a real robot with minimal tuning, enabling zero-shot transfer. We sincerely appreciate the time and thought you invested in reviewing our paper, and we are pleased that our work resonates with your vision of accelerating real-world adoption of robots and RL agents.
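The PD-to-neural annealing described in this thread (a convex combination that shifts toward the learned network over training) can be illustrated with a minimal sketch. All specifics here are assumptions for illustration only: the PD gains, the one-layer stand-in network, the observation layout, and the linear schedule are not the paper's actual implementation.

```python
import numpy as np

def pd_torque(q, qd, q_target, kp=10.0, kd=1.0):
    """Fixed PD controller toward a target joint position (gains illustrative)."""
    return kp * (q_target - q) - kd * qd

def nn_torque(params, obs):
    """Stand-in learned low-level controller: a single tanh layer for brevity."""
    W, b = params
    return np.tanh(obs @ W + b)

def low_level_action(params, q, qd, latent_target, alpha):
    """Convex combination of PD and learned torques, annealed by alpha in [0, 1].

    alpha = 0 -> pure PD tracking of the high-level latent target (stable early on);
    alpha = 1 -> pure neural controller (full flexibility later in training).
    """
    obs = np.concatenate([q, qd, latent_target])
    return (1.0 - alpha) * pd_torque(q, qd, latent_target) + alpha * nn_torque(params, obs)

def anneal(step, total_steps):
    """Linear annealing schedule from PD (0.0) to NN (1.0) over training."""
    return min(1.0, step / total_steps)
```

Early in training the low level behaves like a plain PD loop tracking the high-level policy's latent target; as alpha grows, the network is free to deviate, e.g., to react to contacts between high-level decisions.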
Summary: The paper introduces a bi-level policy and training method for high-frequency and continuous-time control. The bi-level policy consists of a high-level policy that operates at a low frequency and issues a latent action. The low-level policy decides the final action based on the environment state and the latent action. For training stability, the low-level policy is a convex combination of a PD controller and a neural network, where the combination gradually shifts towards the NN policy during training. The high-level policy is trained with PPO and the low-level policy is trained with an evolutionary algorithm. The paper shows this approach can outperform both direct torque learning and fixed PD controller approaches and introduce exploration and robustness advantages over them. Claims And Evidence: All claims and information are very well supported by either relevant prior work or independently demonstrated ablation experiments. For instance, the challenges of policy gradient that are mentioned to motivate the evolutionary algorithm in Section 3.3 are verified empirically. Methods And Evaluation Criteria: The methods of the paper are in the vicinity of the ideas in the field and highly appropriate. The evaluations use both the standard benchmarks and customized tasks that further highlight the advantages of the algorithm. Theoretical Claims: The paper does not have much theory. The theoretical result seems correct from the proof sketch. I have not checked the details of the proof. Experimental Designs Or Analyses: The experiments and their analysis are mostly standard. The main concern I have is the choice to fix the number of high-level policy training steps in the comparisons instead of fixing the total environment interaction time (Line 295, right). This may be specifically disadvantageous to direct torque high frequency.
For this algorithm, the high-level policy takes orders of magnitude less time to interact with the environment, as each interaction is very short. I think fixing the total time and not the number of steps is fairer. If possible, including the results for this will be beneficial. Supplementary Material: I skimmed through the material. I reviewed the detailed pseudocode and Section J.2 closely. Relation To Broader Scientific Literature: I am aware that there are multi-level hierarchical RL algorithms for control tasks. This paper's method is highly related to those approaches. The main new ability of the new approach is that the low-level policy is not limited to a fixed behavior, as opposed to the previous methods where the low-level policy was trained to reach a goal state. This gives the agent a higher representation ability for more complex tasks. Essential References Not Discussed: I am not aware of any critical references that are missed. Other Strengths And Weaknesses: The paper does a tremendous job in supporting the findings with ablation experiments and evaluations. The paper is well written and the idea is novel and interesting. Please see questions and the comment for experimental design for potential weaknesses. Other Comments Or Suggestions: Some typos: 1. line 093 right, the definition of R, subscripts are wrongly formatted. Questions For Authors: 1. Are there any experiments to show the proposed method has advantages over hierarchical RL methods with goal-reaching low-level policies, such as HIRO? I saw the experiments comparing with having fixed PD controllers as the low-level policy, but no comparisons to HIRO. Some comments or empirical results would be appreciated. 2. Have you tried the techniques that enable stable training of other hierarchical RL algorithms such as option-critic and HIRO to stabilize the training of EvoControl instead of the annealing?
For example, different learning rates for the two levels to induce almost fixed behavior of one level when the other one is adjusting to its changes. I wonder why RL has failed to train the low-level policy here (Appendix J2) as opposed to those methods. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful and constructive feedback, particularly their appreciation that all the claims are well supported and the paper does a tremendous job supporting the findings, with the idea being novel and interesting. Below, we address each concern: > fixing the total time and not the number of steps is more fair. If possible, including the results for this will be beneficial. This is an excellent point; we agree. We already provide results for all baselines in **Appendix J.4** (also shown here: https://imgur.com/a/CBzbEBO) where we fix the total time—here implemented by fixing the total number of low-level policy steps, which, for all our high-frequency environments taking actions at 500Hz, corresponds to a duration of 2 seconds. This ensures that the direct torque controller is evaluated in an identical physical time window to EvoControl. These new results still show EvoControl outperforming baselines, including the high-frequency direct-torque policy, which continues to struggle with long-horizon credit assignment at 500 Hz. We will highlight these findings more prominently in the main paper’s Table 3 (as suggested) for camera-ready. This way, readers see both “fixed high-level steps” and “fixed total environment time” versions of the experiments. > Typos Thank you, we have now fixed these. > Are there any experiments to show the proposed method has advantages over hierarchical RL methods with goal-reaching low-level policies, such as HIRO. Yes, as HIRO closely resembles our *"fixed-controllers"* methods class (L192), where the low-level policies perform sub-goal state completion. Let us clarify that HIRO is an off-policy method that assigns state-based goals to a low-level policy, rewarded by sub-goal completion. However, as shown in Table 4, sub-goal attainment methods cannot handle the high-frequency reactions needed in safety-critical tasks.
We did run an existing implementation of HIRO (from the authors) on our high-frequency environments; however, we found that it was too unstable and did not converge to a meaningful policy (showing random performance). We believe this is likely because HIRO, being an off-policy method, may require extensive hyper-parameter tuning for each environment and task to work correctly, and the authors of the HIRO paper themselves discuss the instability of the method during learning. Similarly, the Option-Critic framework uses a finite set of discrete options, each with learnable termination conditions. Because our tasks feature a continuous latent action (e.g., specifying a target joint position or velocity) and because we run on-policy PPO for the high-level layer, we found it non-trivial to combine discrete option-termination logic with continuous-time feedback. > Have you tried the techniques that enable stable training of other hierarchical RL algorithms such as option-critic and HIRO to stabilize the training of EvoControl instead of the annealing? Thank you for the insightful question. The techniques used to stabilize other hierarchical RL algorithms do not directly apply to EvoControl as: * HIRO is an off-policy method that relies on importance sampling and sub-goal relabeling of previously collected trajectories. By contrast, EvoControl’s high-level policy uses on-policy PPO, which discards old data once the policy updates. Hence, off-policy relabeling and importance sampling are not feasible in EvoControl. * Option-critic typically defines a finite set of discrete options, each with its own policy and termination condition. Its stability arises from learning when to switch or terminate these options. In EvoControl, the lower-level controller is a single, continuous neural policy (e.g., torque commands at 500 Hz)—there are no discrete “skills” to terminate, and the entire low-level training is through ES on raw episodic return.
Consequently, we rely on annealing from a PD controller to the learned neural policy, which stabilizes our on-policy training and avoids the need for subgoal relabeling or discrete option terminations. > For example, different learning rates for the two levels to induce almost fixed behavior of one level when the other one is adjusting to its changes. I wonder why RL has failed to train the low-level policy here (Appendix J2) as opposed to those methods. That is a nice idea. We empirically tested a similar concept where, when using RL to train the low-level policy, we take interleaved updates every so many steps, and we also did the same with EvoControl. Interestingly, taking frequent interleaved updates causes instability, while taking interleaved updates slightly less frequently improves learning (e.g., one update of the higher-level policy for every 16 low-level policy updates); this could arise because an update on one level has a larger effect on the overall policy than an update on the other level. --- *We hope that all of the reviewers’ concerns have been addressed. We’d be happy to engage in further discussions.*
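The interleaved-update schedule mentioned in this reply (one higher-level update per 16 low-level updates) can be sketched as a simple training-loop skeleton. This is an illustration only: `update_high` and `update_low` are placeholder callables standing in for, say, one PPO epoch and one ES generation, and the loop structure is an assumption rather than the paper's code.

```python
def interleaved_training(update_high, update_low, num_low_updates, k=16):
    """Alternate optimizer steps between the two levels.

    k low-level updates are taken for every high-level update; k = 16
    follows the ratio mentioned in the reply. Returns the schedule of
    which level was updated at each point, for inspection.
    """
    schedule = []
    for step in range(num_low_updates):
        update_low()
        schedule.append("low")
        if (step + 1) % k == 0:  # infrequent high-level updates for stability
            update_high()
            schedule.append("high")
    return schedule
```

Shrinking k (more frequent high-level updates) corresponds to the unstable regime described above, since each high-level update shifts the data distribution the low level is being evaluated on.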
Active feature acquisition via explainability-driven ranking
Accept (poster)
Summary: **Edit:** Thank you to the authors for their response. Since repeated experiments have been run, and an RL baseline and reproducibility details have been provided, I have raised my score from 2 (weak reject) to 3 (weak accept). There are remaining points which I have detailed in the Rebuttal Comment; if they are addressed adequately I will raise the score to 4 (accept). **Original Review:** This paper is about Active Feature Acquisition (AFA) - the test-time task of sequentially measuring features on an instance-wise basis to improve predictive performance. This is relevant to all applications where features are not all jointly available at test time, for example, medicine, where for each individual patient a doctor will make different measurements to reach a diagnosis. Due to cost or time constraints, the aim is to measure as few features as possible. The paper proposes a novel method where a policy is trained to predict (for each instance) which feature will be best to acquire next. It is trained as a classification task: an oracle provides the best feature, which is used as a label, with the cross entropy of the policy's prediction as the loss. The oracle is defined using feature importance ranking methods such as FastSHAP. The training is split into two main phases: in the first, the policy uses the oracle's feature selections as inputs; in the second, it uses its own acquisitions as inputs (still using the oracle's features as labels), so that it is able to train under the conditions which would be experienced during inference. The experiments are carried out on multiple real-world medical datasets and image datasets. Two SOTA AFA methods are used as baselines as well as additional baselines. Ablations are carried out investigating the importance of the two-stage training, as well as sensitivity to the feature importance ranking method used to generate labels. Claims And Evidence: - The variety of datasets is very good to see.
- There are no uncertainties provided in the results. Have there not been repeats of the experiments over different seeds? If not, these are required to improve the reliability of the results provided. - DIME and GDFS are good baselines to include. However, these are greedy methods; showing this method also beats an RL baseline would make the claims stronger. Similarly, beating a generative model for CMI maximization would also make the claim stronger. - At the start of Section 5 it is stated that the first three features are fixed to obtain the results in Figure 2. So why do the baselines sometimes perform poorly with 3 or fewer features and not the same as the oracle or the proposed model? Methods And Evaluation Criteria: The method makes sense for the problem at hand. It's very interesting to see a model trained to predict the feature to select as a classification problem. Using the feature importance rankings is also an innovative way to generate the labels. Theoretical Claims: There is no theory provided in this paper - which is not a strength or a weakness, just a statement that there are no theoretical claims and therefore no errors. However, there is an issue with how the oracle is defined on page 3, Lines 142-164 Right. The claim is that at each step the oracle policy selects the most important feature that maximally improves the predictive performance. This is a greedy oracle, and therefore it is not guaranteed to be optimal. It will not select features that are jointly informative but individually non-informative (see https://arxiv.org/abs/2302.13960 for example). For example, we can have features that are only informative if jointly known, and many other very noisy features. The proposed oracle will never select the jointly informative features before the others (unless it has already measured one of them), because a noisy feature will always provide some more **immediate** information, but less information in the long term.
Therefore this greedy oracle is not guaranteed to be optimal, especially with jointly predictive features. How does this affect the definition of the oracle? How does it affect how the feature importances are ranked? If a dataset with this property is used, how would the model perform? How does it affect the conclusions drawn from the result in Table 4 if we can't guarantee the oracle is optimal? All greedy methods should "fail" on a dataset like this. The empirical results presented are positive, which would imply this property is not present in those datasets. Another experiment or at least a discussion would greatly improve the paper. As a starting point: Strong performance is also seen in greedy methods like DIME and GDFS in the literature, additionally theory does exist that supports greedy methods being **near-optimal** (https://proceedings.mlr.press/v40/Chen15b.html). Experimental Designs Or Analyses: The main issue with this paper is that without theory, the empirical results must be very convincing, and currently they can be improved (the model motivation is clear so this is not a problem). - Without repeats we can't really know how sensitive this model is to initialization - There is no code provided which is not a problem in itself, but the reproducibility details are limited. There are no optimizer parameters for example. - There has been limited hyperparameter tuning. This is framed as an advantage - the proposed model performs well with minimal tuning. However, there has also been minimal tuning on the baselines, so we don't know if they have just got poor architecture hyperparameters or optimizer hyperparameters. - The result in Table 2 is only meaningful if the first stage of training was continued to acquire its result. As it is currently, there are results from the first stage (200 epochs) then for the second stage (200 epochs of first stage + 16 epochs of second stage). 
So it is not clear if the result from the second stage is due to training for longer or because it is a useful training change. - The dataset selection is good; however, the baseline selection is limited. Two AFA methods are used as baselines - GDFS and DIME - and these work in very similar ways, in terms of being greedy methods that are trained by simulating the acquisition process. The results would be improved by including an RL baseline and a Generative Model baseline. Supplementary Material: I read all of the supplementary material. The algorithms provided are useful for understanding the method. Relation To Broader Scientific Literature: The paper is appropriately framed within the AFA literature. Missing references are given next. Essential References Not Discussed: - Partial VAE, Ma et al. 2019 (https://arxiv.org/abs/1809.11142) - Dynamic Feature Acquisition with Arbitrary Conditional Flows, Li and Oliva 2020 (https://arxiv.org/abs/2006.07701) - Acquisition Conditioned Oracle for Nongreedy Active Feature Acquisition, Valancius et al. 2024 (https://arxiv.org/abs/2302.13960) - Sequential Information Maximization: When is Greedy Near-optimal?, Chen et al. 2015 (https://proceedings.mlr.press/v40/Chen15b.html) Other Strengths And Weaknesses: Strengths: - The idea the paper introduces is a very good one. - The paper generally is written nicely. - The method is clear, with a good use of diagrams and pseudo-code Weaknesses: - There is no impact statement. The paper does not have any ethical issues, so it is not majorly needed. However, it was written as a requirement in the Call for Papers (https://icml.cc/Conferences/2025/CallForPapers): "Authors are required to include a statement of the potential broader impact of their work, including its ethical aspects and future societal consequences."
Other Comments Or Suggestions: - Line 176 Left: "these methods aim identifying" should be "these methods aim to identify" - How does center cropping keep working after selecting the center? Does it spiral out? It is not clear how it continues to select patches of images? Is it a global fixed ordering as well? Questions For Authors: I like the idea this paper proposes and it's interesting to see it working. I would like to recommend accept, however, currently I do not find the empirical evaluation strong enough for acceptance. I understand I may be wrong, so following the responses and reading other reviewers' thoughts I will re-evaluate. In particular these are the following concerns: - No repeats of experiments - Limited information on reproducing the experiments (no learning rates for example) - Lack of RL and generative model baseline - Limited hyperparameter tuning of baselines - Understanding of the oracle, and how it would work with jointly predictive features. How this might affect feature importances and therefore the training of the proposed model. This can either be with another small synthetic experiment, or with theoretical justification. How does the proposed oracle from this work compare to the oracle described in Acquisition Conditioned Oracle for Nongreedy Active Feature Acquisition, Valancius et al. 2024 (https://arxiv.org/abs/2302.13960), which does consider jointly informative features? - Addressing the lack of impact statement, it is enough to use the proposed statement from the call for papers: "This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here." Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Theoretical concern and Q5: We acknowledge that our original description of the oracle definition may have been misleading and we clarify the distinction here. Our oracle is not a purely greedy policy. Rather, it first identifies the optimal subset of features that minimizes prediction loss under a given acquisition budget. This is done by exhaustively evaluating all feasible subsets within the budget. Once the optimal subset is identified, features are ranked in a greedy order to define an acquisition trajectory. Importantly, the acquisition proceeds until all features in the subset are obtained. Thus, the internal ordering does not affect the final outcome. This approach is distinct from a sequential greedy policy that chooses features one step at a time based on marginal gains. Our formulation closely aligns with the Acquisition Conditioned Oracle proposed by Valancius et al. (2024), which also aims to account for joint informativeness. The main distinction is that our formulation imposes a hard budget constraint, whereas Valancius et al. incorporate feature costs into a weighted objective. We will reference this work in the paper and discuss the similarities and differences. Response to experimental design concerns (Q1–Q4): To improve the robustness of our claims, we conducted additional experiments and updated our results accordingly: -- (Repeatability and variability): We now report results over three runs (over nine runs on tabular datasets, please see our response to W1 and Q1 of the first reviewer) with different random seeds. Table 2 has been updated to include mean and standard deviation across runs. -- (First versus second stage comparison): To clarify that performance gains in Table 2 are not due to additional training epochs alone, we extended the first-stage training from 200 to 250. We observed that performance did not significantly improve with longer training, while our second-stage approach yielded clear gains. 
This confirms that the second stage provides meaningful training benefits beyond simply more epochs. The results of the first-stage training with 250 epochs are nearly identical to those with 200 epochs, demonstrating the effectiveness of the second-stage training. However, on BloodMNIST*, the additional epochs provide a meaningful performance increase. To further assess this, we conducted first-stage training with 300 epochs, yielding results of $79.73 \pm 0.19$, further reinforcing the effectiveness of the second-stage training. This also highlights the potential for performance improvements through dataset-specific hyperparameter tuning. -- (Reproducibility details): We used Adam optimizer with a learning rate of 1e-3 for all datasets, except for CIFAR100 and Imagenette, where we used 5e-4 due to larger model sizes. If our paper gets accepted, then we will release the complete code, including all training scripts and configuration details, to ensure reproducibility. -- (Baseline diversity – Q3): We agree that incorporating more diverse baselines would strengthen our comparisons. Generative AFA methods like ACF (Li and Oliva, 2020) were not included due to their high computational demands and limited scalability to tabular and high-dimensional image datasets. Moreover, several recent works (e.g., Gadgil et al., 2024) show that discriminative methods often outperform generative counterparts in practice. We appreciate the suggestion and plan to include an additional baseline in future work. Meanwhile, we have already evaluated an RL-based method. Please refer to the table in our response to Reviewer 1. -- (Hyperparameter tuning of baselines): For baselines with published results on datasets like CIFAR10 and Imagenette, we confirmed alignment with reported performances. For datasets without public results, we made reasonable efforts to tune hyperparameters. We will include detailed tuning procedures in the final version. 
| | Spam | CIFAR10 | CIFAR100 | BloodMNIST | ImageNette | Metabric | CPS | CTGS | CKD |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| First-stage (250 epochs) | 0.952 ± 0.001 | 75.96 ± 0.16% | 45.91 ± 0.36% | 79.83 ± 0.19%* | 73.95 ± 0.25% | 62.52 ± 1.27% | 67.23 ± 0.48% | 0.9157 ± 0.0002 | 0.822 ± 0.01 |
| First-stage | 0.951 ± 0.0002 | 75.76 ± 0.19% | 46.05 ± 0.25% | 79.25 ± 0.15% | 73.76 ± 0.42% | 62.48 ± 1.39% | 67.21 ± 0.15% | 0.9155 ± 0.0004 | 0.824 ± 0.008 |
| Second-stage | 0.955 ± 0.0001 | 78.44 ± 0.15% | 46.99 ± 0.15% | 83.87 ± 1.05% | 78.96 ± 0.12% | 69.83 ± 0.41% | 67.45 ± 0.13% | 0.9164 ± 0.0001 | 0.836 ± 0.07 |

Response to missing impact statement - Q6: We have added the suggested statement. Response to clarification on center cropping (Comment 2): For the details of center cropping, we kindly refer the reviewer to the papers of DIME and greedy methods. Response to comment on Figure 2 (Q7): Please see our response to Q1 of Reviewer 3. We also included new references suggested by the reviewer. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for taking the time to respond; I have read the rebuttal and other reviews. I will raise my score from 2 to 3. I'm still unsure regarding some issues; if those are convincingly addressed, I will raise the score to 4. My detailed feedback is below: # Positives **Repeats**: Thank you for running multiple repeats. Please make sure these are also added to Figure 2, the metric during acquisition, by using shaded areas to represent the uncertainty. **Reproducibility Details**: Thank you for including these. Please add all details to the appendix, including more than those given here (batch size, dataset details, model sizes, etc.); this should be done for the proposed model and all baselines. In particular I notice you've promised to publish code after the review period; this is definitely very helpful for reproducibility. **First vs Second Stage Training**: Thank you for extending the training. 
These results are promising. **New Baselines**: Thank you for including a new RL baseline; please can this be added to Figure 2. Is it also possible to add AACO (https://arxiv.org/abs/2302.13960, https://github.com/lupalab/aaco), since this does not need any new models to be trained. It would be insightful, since it claims to closely follow an oracle, but this is not essential; I just think it would be a good baseline to add if time permits. # Points for Improvement ## Small points **Center Cropping**: I looked at DIME and GDFS, searching for "center" and "crop", but neither describes how the acquisition is decided. Please can this be explained. **Baseline Hyperparameters**: Thank you for the explanation. Please can you clarify what datasets required tuning and what the tuning process was. E.g. what hyperparameters and values were tested, how the validation set was created, what validation metric was used etc.? ## Main Points **First 3 features**: Unfortunately the answer has made this more confusing to me. Was fixing the first three features only done in training? And then the model can select freely during testing? In this case it is very unlikely the model would ever select those first three, since it never trained to. If it has been done for both training and testing, then the baseline methods should also be given those three features at testing time, to ensure fair comparison. Or to be even more fair they should receive them at training as well. This flexibility is not unique to the proposed method; DIME, GDFS, and OPL can be told three features to always start with during training and testing. Please can this be explained, and if this is what has happened, why would the models be different for the first three acquisitions in Figure 2? **Oracle Definition**: Thank you for taking the time to adjust the oracle definition. 
This is quite a different definition from the original, which said "The oracle policy sequentially constructs $M^*$", and is now "distinct from a sequential greedy policy". That being said, this is still not guaranteed to be optimal. The internal ordering of features does matter for AFA. Even though it is possible to select an optimal subset (is this done by knowing the feature values?), a greedy ordering within that can be suboptimal. Consider a case of three features that are independent and can be $-1$ or $1$ with equal probability. Let $p(y=1 | \mathbf{x}) = \text{sigmoid}(2x\_1x\_2 + 0.5x\_3)$. If all other features are irrelevant and the aim is to find the **smallest** subset (size of subset has not been mentioned), then only these three features are selected as the subset, which is good. However, because the sign of $x\_1x\_2$ requires knowing both, they individually give no information, only jointly. The **greedy** ordering in this optimal subset is 3, then 1 or 2 with equal probability. However, the **optimal** ordering is to select either 1 then 2 or 2 then 1, because after the second acquisition the prediction is far more accurate, despite the first acquisition giving no information. If 3 is selected first, there is only a small amount of information in the first acquisition, and the second acquisition does not improve this. I'd recommend looking at these two points, especially the oracle definition. Unfortunately we can't discuss it in detail, but I'm interested in what you think; I would argue that even if a subset is optimal, a greedy ordering within that can be suboptimal (see the example). If you agree with this, maybe the oracle can be renamed to "Approximate Oracle/Pseudo Oracle" or something like this, with the clarification about the greedy inner ordering. Note that the AACO oracle will also suffer from the same greedy internal ordering; I initially cited it when I thought this paper's oracle was sequentially greedy. 
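To make the ordering gap concrete, here is a small numerical sketch of my example (my own code, not the paper's): unobserved features are marginalized uniformly over ±1, and an acquisition sequence is scored by the expected posterior entropy of $p(y \mid \text{observed})$.

```python
import math
from itertools import product

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_y1(obs):
    # P(y=1 | observed features), marginalizing unobserved ones uniformly over {-1, +1};
    # obs maps feature index (1..3) to its observed value
    free = [i for i in (1, 2, 3) if i not in obs]
    total = 0.0
    for vals in product([-1, 1], repeat=len(free)):
        x = dict(obs)
        x.update(zip(free, vals))
        total += sigmoid(2 * x[1] * x[2] + 0.5 * x[3])
    return total / 2 ** len(free)

def entropy(p):
    return 0.0 if p in (0.0, 1.0) else -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_entropy(order, k):
    # average posterior entropy after acquiring the first k features of `order`,
    # averaged over all 8 equally likely inputs
    tot = 0.0
    for vals in product([-1, 1], repeat=3):
        x = dict(zip((1, 2, 3), vals))
        obs = {i: x[i] for i in order[:k]}
        tot += entropy(p_y1(obs))
    return tot / 8

# greedy picks x3 first: it is the only single feature that reduces entropy at all
print(expected_entropy([3], 1), expected_entropy([1], 1))
# ...but after two acquisitions, x1-then-x2 is far better than x3-first
print(expected_entropy([3, 1], 2), expected_entropy([1, 2], 2))
```

Running this gives an expected entropy of roughly 0.99 bits after two acquisitions for the greedy order (x3 first) versus roughly 0.56 bits for x1 then x2, i.e. the greedy ordering within the optimal subset is clearly suboptimal.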
Also, your model should actually perform better on the above example than the oracle, since feature importances won't suffer from the greedy behavior, so it's not a criticism of the method, more of the proposed theoretical oracle. --- Reply to Comment 1.1.1: Comment: We sincerely appreciate the reviewer’s follow-up and the decision to raise the evaluation score. Below, we provide our responses to the remaining concerns: **Main Points** 'First 3 Features': During testing, the first three features were also fixed. These features were selected based on their average importance rankings, derived from the instance-wise feature rankings $\varphi^i$. This ranking information is specific to our method and enables the selection of highly informative features early in the acquisition process. In contrast, other AFA methods do not have access to such rankings. Therefore, to fix the first three features for other AFA methods, one could either randomly select three features (which is unlikely to yield strong performance) or precompute global feature importance rankings using a global feature selection algorithm and fix the top three accordingly. 'Oracle Definition': In our oracle construction, we assume that it has perfect knowledge, including access to the feature values and the label. Under this assumption, the oracle selects the optimal subset of features, minimizing the prediction loss, within a given acquisition budget among all feasible subsets. This selection process is guided by the true label, meaning the internal greedy ordering of features is also informed by this label. Therefore, in the provided example, the third feature is not necessarily always selected first. In the table below, we present the internal greedy ordering for all possible input scenarios using the subset {$x_1$, $x_2$, $x_3$}. For ordering, we assume missing features are filled with a value of 0, and when $x_1$ and $x_2$ yield equal loss reductions, we break the tie by selecting $x_1$ before $x_2$. 
| y | x₁ | x₂ | x₃ | Ranking Order | Value of 2x₁x₂ + 0.5x₃ at each step |
|---|----|----|----|--------------|------------------------------------|
| 1 | 1 | 1 | 1 | {x₃, x₁, x₂} | {0.5, 0.5, 2.5} |
| 1 | 1 | 1 | -1 | {x₁, x₂, x₃} | {0, 2, 1.5}* |
| -1 | 1 | -1 | 1 | {x₁, x₂, x₃} | {0, -2, -1.5}* |
| -1 | 1 | -1 | -1 | {x₃, x₁, x₂} | {-0.5, -0.5, -2.5} |
| -1 | -1 | 1 | 1 | {x₁, x₂, x₃} | {0, -2, -1.5}* |
| -1 | -1 | 1 | -1 | {x₃, x₁, x₂} | {-0.5, -0.5, -2.5} |
| 1 | -1 | -1 | 1 | {x₃, x₁, x₂} | {0.5, 0.5, 2.5} |
| 1 | -1 | -1 | -1 | {x₁, x₂, x₃} | {0, 2, 1.5}* |

Note: According to our oracle definition, in the scenarios marked with *, even the third feature should be excluded from the optimal subset, as its inclusion leads to an increase in the loss value. **New Baseline** We thank the reviewer for suggesting the AACO method. While we did not implement it directly, we incorporated its core idea to design a new baseline inspired by the approach. Specifically, for a given masked test instance, we first identified its nearest neighbor from the training set. Subsequently, the next feature to be acquired was determined based on the feature importance ranking of this nearest neighbor. That is, we selected the highest-ranked feature (according to the neighbor's ranking) that has not yet been acquired for the test instance. The results are in the following table. Please note that no special training was conducted for this method; instead, we used the same predictor networks from the second stage of our original method. Additionally, in AACO, the nearest neighbor is identified based on raw feature distances, which may not be effective for image datasets. We leave the exploration of alternative strategies, such as computing distances in the embedding space, as well as the development of dedicated training procedures, to future work. 
| | Spam | Metabric | CPS | CTGS | CKD |
|---|---|---|---|---|---|
| Second-stage | 0.955 ± 0.0001 | 69.83 ± 0.41 % | 67.45 ± 0.13 % | 0.9164 ± 0.0001 | 0.836 ± 0.07 |
| AACO-like baseline | 0.954 ± 0.0005 | 68.12 ± 0.75 % | 67.20 ± 0.22 % | 0.9092 ± 0.0009 | 0.827 ± 0.003 |

**Small Points** 'Center Cropping': This approach does not involve iterative selection. Instead, it involves training multiple models with a varying (increasing) number of center patches unmasked in the input. For each height/width pair, that many patches are unmasked at the image center while others remain masked, and a model was trained using this configuration across the dataset. Patch dimensions were manually predefined, matching other global methods. Implementation was taken directly from the greedy method codebase. 'Baseline Hyperparameters': ImageNette, CIFAR10/100, BloodMNIST, and Metabric datasets required hyperparameter tuning. Specifically, for both GDFS and DIME, we tuned the learning rate (1e-6 to 1e-2), learning rate decay (0.2 to 0.5), learning rate patience (2 to 10), minimum learning rate (1e-9 to 1e-7), and early stopping (2 to 10 epochs). DIME also needed the exploration probability (0.05 to 0.6), exploration probability decay (0.1 to 3), and epsilon probability patience (2 to 14). We used a 60/20/20 split for training/validation/test, with validation based on the cross-entropy loss.
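For transparency, the label-informed greedy ordering in the table above can be reproduced by a short enumeration. The following is an illustrative sketch of that construction (0-imputation for missing features, cross-entropy loss under the true label, ties broken as $x_1$ before $x_2$), not our actual implementation:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def loss(y, acquired, x):
    # cross-entropy of p(y=1) = sigmoid(2*x1*x2 + 0.5*x3); missing features imputed with 0
    v = [x[i] if i in acquired else 0.0 for i in range(3)]
    p = sigmoid(2 * v[0] * v[1] + 0.5 * v[2])
    return -math.log(p if y == 1 else 1.0 - p)

def greedy_order(y, x):
    # rank features by one-step loss reduction under the true label;
    # ties are broken by the lower index (x1 before x2)
    acquired, order = set(), []
    for _ in range(3):
        _, best = min((loss(y, acquired | {i}, x), i)
                      for i in range(3) if i not in acquired)
        acquired.add(best)
        order.append(best + 1)  # report 1-based feature indices
    return order

for x in [(1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1),
          (-1, 1, 1), (-1, 1, -1), (-1, -1, 1), (-1, -1, -1)]:
    y = 1 if 2 * x[0] * x[1] + 0.5 * x[2] > 0 else -1  # most likely label
    print(y, x, greedy_order(y, x))
```

Running this prints orderings matching the table row by row (with $y$ taken as the most likely label for each input).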
Summary: This paper tackles the problem of active feature acquisition, a setting where all features might not be available at inference and the model needs to make accurate predictions while minimizing the number of features acquired. The authors propose a framework that dynamically selects instance-specific features based on their importance ranking obtained from local explanation methods. They reformulate the problem as a feature prediction task and introduce a policy network based on the decision-transformer architecture which sequentially predicts the next most-informative feature. This network is trained in a two-stage approach along with a predictor network which makes the predictions given a subset of features that have been acquired. Experiments across both image and tabular modalities demonstrate that this approach outperforms other recent methods. Claims And Evidence: One of the main claims is that local explanation methods can provide a reliable signal for instance-specific feature importance and the experiment results do support that claim well, especially considering the oracle implementation which outperforms the others substantially across most datasets. Methods And Evaluation Criteria: I am a bit skeptical about the two-stage training strategy. Even though the results indicate that the second stage improves performance, I would like more analysis on why the first stage is not enough for the policy network to learn the top features. I would have liked to see feature costs getting incorporated into the training as well. Since the authors consider feature costs as a potential constraint, there could be a scenario where a feature is highly informative but has a high cost (in terms of time, money, etc.), so it might be better to acquire a less costly feature instead (e.g. lab test features are costlier than demographic features). 
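To sketch the kind of cost-aware extension I have in mind (entirely my own illustration; the importance scores and costs below are made up), one could discount each feature's importance by its acquisition cost, e.g. via a benefit-per-cost ratio or an additive penalty:

```python
def cost_aware_order(importance, cost, lam=1.0):
    """Rank features (most desirable first) under two common cost treatments:
    a benefit-per-cost ratio, and an additive cost penalty with weight lam."""
    d = len(importance)
    by_ratio = sorted(range(d), key=lambda i: -importance[i] / cost[i])
    by_penalty = sorted(range(d), key=lambda i: -(importance[i] - lam * cost[i]))
    return by_ratio, by_penalty

# feature 0: an informative but expensive lab test;
# feature 1: a cheap demographic feature; feature 2: cheap noise
importance = [0.9, 0.4, 0.1]
cost = [10.0, 1.0, 1.0]
by_ratio, by_penalty = cost_aware_order(importance, cost)
# both rankings acquire the cheap demographic feature before the costly lab test
```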
Theoretical Claims: There are no theoretical claims provided other than maybe the intuition of the oracle policy $q^*$. Experimental Designs Or Analyses: I think the experimental design makes sense. In terms of the analyses, I would have liked a plot of the training times for the different methods since local explanation methods can be expensive to run (see weaknesses). Supplementary Material: I reviewed the pseudo-code of the training algorithm provided in the supplementary material. Relation To Broader Scientific Literature: There has been a lot of research work done in the field of feature selection, starting all the way from 1996 [1]. Static feature selection has also been an important subject for a long time [2, 3, 4]. For dynamic feature selection, the literature has focused on formulating it as an RL problem [5, 6] or using mutual information estimations [7, 8]. There are many more works in each of these sub-fields, which indicates that this problem is of great interest to the scientific community. [1] Geman, Donald, and Bruno Jedynak. "An active testing model for tracking roads in satellite images." IEEE Transactions on Pattern Analysis and Machine Intelligence 18.1 (1996): 1-14.\ [2] Isabelle Guyon and André Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3(Mar):1157–1182, 2003.\ [3] Jundong Li, Kewei Cheng, Suhang Wang, Fred Morstatter, Robert P Trevino, Jiliang Tang, and Huan Liu. Feature selection: A data perspective. ACM Computing Surveys (CSUR), 50(6):1–45, 2017.\ [4] Balın, Muhammed Fatih, Abubakar Abid, and James Zou. "Concrete autoencoders: Differentiable feature selection and reconstruction." International conference on machine learning. PMLR, 2019.\ [5] Jaromír Janisch, Tomáš Pevný, and Viliam Lisý. Classification with costly features as a sequential decision-making problem. Machine Learning, 109:1587–1615, 2020.\ [6] Mohammad Kachuee, Orpaz Goldstein, Kimmo Kärkkäinen, Sajad Darabi, and Majid Sarrafzadeh. 
Opportunistic learning: Budgeted cost-sensitive learning from data streams. In International Conference on Learning Representations, 2018.\ [7] Aditya Chattopadhyay, Kwan Ho Ryan Chan, Benjamin D Haeffele, Donald Geman, and René Vidal. Variational information pursuit for interpretable predictions. arXiv preprint arXiv:2302.02876, 2023.\ [8] Yang Li and Junier Oliva. Active feature acquisition with generative surrogate models. In International Conference on Machine Learning, pages 6450–6459. PMLR, 2021. Essential References Not Discussed: I think the authors cover most of the important works and references related to the topic of feature selection. I would have liked more discussion on the potential limitations of the local explanation methods (reliability, computational feasibility), since they form the basis of the proposed framework. Other Strengths And Weaknesses: Strengths: 1. The paper tackles an important and well-studied task for active feature acquisition. As machine learning is getting used in real-world settings, this will be an important practical consideration to reduce costs and improve efficiency. 2. The authors integrate different existing works like decision transformers and local explanation methods for this task, which is interesting. 3. Experiments are performed across a variety of datasets and it looks like they achieve state-of-the-art performance on most of them. 4. I like the way the concept figure (Figure 1) is presented as it gives a good high-level idea of how the two-stage approach works. Weaknesses: 1. The dependence on local explanation methods means that the proposed framework also has the same limitations as these methods. Feature attribution methods like SHAP grow exponentially in computation with the number of features. This might reduce the scalability of the proposed framework. The authors should show a comparison of the total training time with the other methods as well. 
Since the gains are marginal for some of the datasets, it might be practically more feasible to use another method which is more efficient. 2. From my understanding, feature costs are not considered during training, which is also an essential factor to consider. I would like to see an extension of this method that incorporates feature costs in the training. Other Comments Or Suggestions: Suggestions: 1. Provide visualizations of the features obtained on the image datasets. Also, for the medical tabular datasets, showing that the features selected correspond with prior knowledge would also be useful. 2. The results of the ablation experiments on page 8 (using ResNet-10 and replacing the decision transformers) should be shown in a different table. 3. In the training pseudo-code, show how $a^i_{t:t'_i}$ and $r^i_{t:t'_i}$ are computed. Please provide the pseudo-code for the inference stage as well. Questions For Authors: 1. Initializing the input with three features during inference seems counterintuitive; is there an intuition for why it helps stabilize the training? 2. Can this method be adapted so that a different number of features are acquired for different instances? One way could be to use a different stopping criterion for each instance (maybe using the prediction entropy). Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Methods and evaluation criteria: The reviewer raises a thoughtful point about the role of the second-stage training. The first stage provides clean supervision through ground-truth feature importance rankings generated by explanation methods. However, during inference, the policy must operate on partially observed inputs and select features sequentially. This setting diverges from the idealized full-information assumption of the first stage. It also introduces two key challenges: (a) the transferability of supervision degrades with incomplete or noisy inputs, and (b) the policy must make decisions under uncertainty introduced by prior imperfect acquisitions. Our second-stage training directly addresses these issues by jointly training the policy and predictor under realistic, policy-driven acquisition settings. To further clarify this benefit, we added a comparison in Table 2 showing that extending first-stage training to 250 epochs yields minimal or no improvement, which underscores the utility of the second stage. Please see the updated Table 2 in our rebuttal to the fourth reviewer. The reviewer also notes that incorporating feature costs could further improve the method. We agree that incorporating feature costs is important for real-world applications, especially in domains like healthcare where test costs vary. While our current method assumes uniform costs, we view cost-aware acquisition as an important direction for future work. Experimental design or analyses: The reviewer suggests comparing training times due to the high cost of generating explanation-based rankings. Our current implementation trains 2-4x slower than baseline methods. However, we expect significant speed-ups with optimization (e.g., using PyTorch Lightning, early stopping). Additionally, as noted in our response to Reviewer 2, W2, our method can leverage precomputed rankings, which aligns with practical scenarios (e.g., clinical models) where such data already exist. 
We acknowledge the current limitation and will include a discussion in the final version. Relation to broader scientific literature: The reviewer would like a deeper discussion of the reliability and computational feasibility of explanation methods. We agree, and have discussed this in response to Reviewer 2 (W1 and W2). In short, while explanation methods have known limitations, our experiments show that the AFA policy is robust to variation across seeds and models. Practical approximations (e.g., FastSHAP) are also available for scaling. Response to W1: Dependency on explanation methods may limit scalability due to computational cost. We addressed this in detail in our response to Reviewer 2, W2. To summarize: our approach can reuse precomputed rankings and has room for significant implementation optimizations. This trade-off will be discussed explicitly in the paper. Response to W2: Feature costs are not currently incorporated. We acknowledge this limitation and see cost-aware acquisition as a valuable extension. We plan to explore this direction as part of future work. Other comments: If accepted, we will include visualizations of selected image patches. For tabular data, further work is needed to assess alignment with prior clinical knowledge. We will move the ResNet-10 and decision transformer ablation results to a separate table. Also, we will update the pseudocode and include inference pseudocode. Response to Q1: Fixing the initial features stabilizes training by reducing uncertainty in early steps and simplifying the learning of conditional distributions. This is especially helpful when many features are available. Notably, this flexibility is unique to our method compared to other AFA methods. Response to Q2: Our framework can be extended to support adaptive stopping criteria (e.g., based on prediction entropy). While not implemented here, we consider this a promising direction for future work. 
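To illustrate how such a criterion could plug into the acquisition loop, here is a hypothetical sketch (the names `policy_next` and `predict_proba` are placeholders for a policy and predictor, not our actual interfaces):

```python
import math

def entropy(probs):
    # Shannon entropy (natural log) of a discrete distribution
    return -sum(p * math.log(p) for p in probs if p > 0)

def acquire_with_early_stop(policy_next, predict_proba, x, budget, tau=0.3):
    """Acquire features suggested by `policy_next` until the prediction
    entropy drops below `tau` or the acquisition budget is exhausted."""
    acquired = set()
    while len(acquired) < budget:
        probs = predict_proba(x, acquired)
        if entropy(probs) < tau:
            break  # confident enough for this instance; stop early
        acquired.add(policy_next(x, acquired))
    return acquired, predict_proba(x, acquired)
```

With this loop, easy instances stop after a few features while ambiguous ones continue up to the budget, so the number of acquired features varies per instance.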
--- Rebuttal Comment 1.1: Comment: I thank the reviewers for the responses. They have addressed the concern I had with the two-stage training along with most of my questions. As for the computational complexity, while I agree that having precomputed rankings will speed up computation, I am still skeptical of the practicality of this method, especially in medical settings like the authors mention. Time plays a major role in such sensitive settings and even if there are precomputed explanations, they would likely need to be continually updated due to the dynamic nature of the clinical setting. Nevertheless, this method seems to be an interesting and novel approach to feature selection and I will raise my score accordingly. I urge the authors to include the limitations and additional analyses discussed here in the updated manuscript. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the thoughtful follow-up and for raising the evaluation score. In the revised manuscript, we will include a discussion of these limitations and trade-offs, highlighting the current constraints and potential mitigation strategies, such as using faster approximation methods (e.g., FastSHAP).
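To make the mitigation concrete, here is a self-contained sketch of a standard Monte-Carlo (permutation-sampling) Shapley estimator, which sidesteps the exponential cost of exact SHAP values; `predict`, `x`, and `baseline` are placeholders, and this is not our implementation:

```python
import random

def mc_shapley_ranking(predict, x, baseline, n_samples=200, seed=0):
    """Estimate per-feature Shapley values for one instance by averaging
    marginal contributions over random feature orderings; unacquired
    features take the baseline value. Cost is O(n_samples * d) model calls
    instead of the exponential cost of exact Shapley values."""
    rng = random.Random(seed)
    d = len(x)
    phi = [0.0] * d
    for _ in range(n_samples):
        order = rng.sample(range(d), d)   # a random feature ordering
        current = list(baseline)
        prev = predict(current)
        for i in order:
            current[i] = x[i]             # reveal feature i
            new = predict(current)
            phi[i] += new - prev          # its marginal contribution
            prev = new
    phi = [v / n_samples for v in phi]
    # instance-wise acquisition ranking: most important feature first
    return sorted(range(d), key=lambda i: -abs(phi[i])), phi
```

For a linear model the estimate is exact regardless of the sampled orderings, which makes the sketch easy to sanity-check.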
Summary: The authors propose an active feature acquisition (AFA) framework that selects features based on their importance to each individual case. The method leverages local explanation techniques to generate instance-specific feature importance rankings. The authors reframe the AFA problem as a feature prediction task, introducing a policy network grounded in a decision transformer architecture. The authors conducted experiments on multiple datasets and demonstrate that their approach outperforms current state-of-the-art AFA methods in both predictive accuracy and feature acquisition efficiency. Claims And Evidence: Looks good to me. Methods And Evaluation Criteria: Looks good to me. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: Looks good to me. Supplementary Material: N/A Relation To Broader Scientific Literature: N/A Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: 1. The paper addresses an important problem in machine learning: efficient feature acquisition in scenarios where data collection is costly or time-consuming. 2. The approach is intuitive and leverages explainability methods (SHAP, LIME) in a novel way to determine instance-specific feature importance rankings. 3. The authors reframe the active feature acquisition (AFA) problem as a feature prediction task, which allows them to use a decision transformer architecture for the policy network. 4. The proposed method outperforms state-of-the-art AFA techniques in both predictive accuracy and feature acquisition efficiency across multiple datasets. Weaknesses: 1. The method relies on the accuracy and reliability of local explanation methods. If the explanations are not accurate, the performance of the AFA framework could be affected. 2. The computational cost of generating feature importance rankings using methods like SHAP can be high, especially for large datasets or complex models. 3. 
The complexity of the decision transformer architecture may require significant computational resources and expertise to implement and train. Other Comments Or Suggestions: A typo in Ln 291: ImageNette -> ImageNet Questions For Authors: The authors acknowledge that RL-based methods are theoretically capable of finding the optimal policy, but their method outperforms them empirically. It would be good to have a deeper discussion of why this might be the case. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Response to W1: We agree that the performance of our AFA framework is influenced by the accuracy of local explanation methods. However, our experiments show that rankings generated by widely used explanation techniques (e.g., SHAP, LIME) consistently match or exceed the performance of rankings derived from existing AFA baselines. This suggests that current explanation methods already provide sufficiently reliable signals for guiding feature acquisition. Moreover, explainability research is advancing rapidly, and we expect that continued improvements in explanation quality will further strengthen the effectiveness and generalizability of our framework. Response to W2: We acknowledge the reviewer’s concern regarding the computational cost of explanation methods like SHAP. However, recent advances, such as FastSHAP, have significantly improved efficiency through approximation strategies. Moreover, in practical settings such as medicine, AFA would typically be used with pre-existing, well-trained models tailored to specific conditions. In such scenarios, explanation methods are often already applied to ensure interpretability and clinical trust, which is an important requirement in medical AI [1, 2]. Our framework is designed to take advantage of these precomputed feature importance rankings, avoiding the need to rerun explanation algorithms during training. This makes our approach both practical and computationally efficient in real-world deployments. Response to W3: As shown in our ablation experiments, our framework is flexible and can accommodate simpler policy models, albeit with some trade-off in performance. While a decision transformer requires more resources than architectures that leverage, for example, a ResNet block, it consistently achieved superior results across tasks, motivating its inclusion as the default choice. 
To support ease of adoption and reproducibility, we will release all implementation details and code upon acceptance, enabling researchers and practitioners to readily integrate or adapt our method. Response to Q1: While RL-based methods are theoretically capable of discovering optimal acquisition policies, in practice they often suffer from instability, high variance, and sensitivity to hyperparameters [3,4,5]. As discussed in [6], greedy-based methods, despite their lack of formal optimality guarantees, frequently outperform RL approaches due to their simplicity and stability during training. Moreover, under certain assumptions about the data distribution, greedy strategies can be provably near-optimal [7], making them a compelling practical alternative. We showed that our framework, which avoids high-variance RL training by using a supervised learning objective with a decision transformer, achieves strong empirical performance while benefiting from interpretability and ease of optimization. Additionally, we evaluated the performance of an RL-based method (OPL [8]) in our experiments (please see Table 3 in our response to Reviewer 1). Our experimental results further suggest that RL-based approaches struggle to match the stability and effectiveness of our method. References: [1] Hill, E.D., Kashyap, P., Raffanello, E. et al. Prediction of mental health risk in adolescents. Nat Med. 2025 Mar 5. doi: 10.1038/s41591-025-03560-7. [2] Dai, L., Sheng, B., Chen, T. et al. A deep learning system for predicting time to progression of diabetic retinopathy. Nat Med 30, 584–594 (2024). [3] Erion, G., Janizek, J. D., Hudelson, C., Utarnachitt, R. B., McCoy, A. M., Sayre, M. R., ... Lee, S. I. (2022). Coai: Cost-aware artificial intelligence for health care. Nature biomedical engineering, 6(12), 1384. [4] Chattopadhyay, A., Chan, K. H. R., Haeffele, B. D., Geman, D., Vidal, R. (2023). Variational information pursuit for interpretable predictions.
arXiv preprint arXiv:2302.02876. [5] Covert, I. C., Qiu, W., Lu, M., Kim, N. Y., White, N. J., Lee, S. I. (2023, July). Learning to maximize mutual information for dynamic feature selection. In International Conference on Machine Learning (pp. 6424-6447). PMLR. [6] Gadgil, S., Covert, I., Lee, S. I. (2024). Estimating conditional mutual information for dynamic feature selection. International Conference on Learning Representations. [7] Chen, Y., Hassani, S. H., Karbasi, A., Krause, A. (2015, June). Sequential information maximization: When is greedy near-optimal? In Conference on Learning Theory (pp. 338-363). PMLR. [8] Kachuee, M., Goldstein, O., Kärkkäinen, K., and Sarrafzadeh, M. Opportunistic learning: Budgeted cost-sensitive learning from data streams. In International Conference on Learning Representations, 2019.
Summary: This paper introduces a novel approach to active feature acquisition (AFA) by reframing the problem as a feature prediction task guided by explainability-driven rankings. Specifically, the authors leverage local explanation methods (e.g., SHAP, LIME, TreeSHAP) to generate instance-wise feature importance rankings. These rankings are used to supervise a decision transformer that learns a policy for acquiring the next most informative feature for each instance. Empirical results across a diverse set of tabular and image datasets (including several medical datasets) show consistent improvements over state-of-the-art AFA methods. Notably, to my knowledge, this is the first known work to directly incorporate local explanation methods as a guiding signal for sequential feature acquisition, offering a more stable and interpretable alternative to traditional reinforcement learning or mutual information-based approaches. Claims And Evidence: Yes. The claims made in the submission are supported by clear and convincing evidence. The experiments are thorough and diverse, covering nine datasets. Performance comparisons with state-of-the-art methods, ablations (e.g., different explanation techniques, architectures), and oracle benchmarks support the central claims. Methods And Evaluation Criteria: Yes. The use of decision transformers, per-instance explainability-based supervision, and standard evaluation on public benchmarks are all appropriate for the AFA setting. No additional explanation necessary here. Theoretical Claims: N/A — the paper does not include novel theoretical derivations or formal proofs that require scrutiny. Experimental Designs Or Analyses: Yes, the experimental design is sound. The authors use well-established backbones like ResNet variants, a GPT-3 mini version for the transformer, and standard training protocols. Supplementary Material: Yes. I reviewed the supplementary material — specifically the algorithmic components.
They are clearly described and support the main claims. Relation To Broader Scientific Literature: The paper builds on two key streams of literature: (1) active feature acquisition, especially via reinforcement learning or mutual information-based methods; and (2) explainability methods like SHAP and LIME. This work is distinct in its fusion of these two domains, using post-hoc explanation tools during training to stabilize and guide sequential feature acquisition. I am not aware of any prior attempts that combine local explanation techniques with sequential policy learning in this manner, making the contribution both novel and timely. Essential References Not Discussed: One possibly relevant line of work the authors may consider discussing is INVASE (Yoon et al., ICLR 2018), which also performs instance-wise variable selection using a policy-based network but learns feature importance end-to-end rather than using post-hoc explanations. It would be helpful for the authors to clarify how their use of SHAP/LIME compares in terms of ranking stability and generalization performance. Other Strengths And Weaknesses: Strengths: 1. Strong combination of explainability and AFA. 2. Effective use of decision transformers in a new setting. 3. Generalizable two-stage training framework. 4. Broad empirical evaluation on both image and medical datasets. 5. Well written paper. Weaknesses 1. The paper provides a helpful analysis (e.g., Table 4) showing that the learned acquisition policy aligns well with the explainability-based feature rankings, which supports the overall training framework. That said, an important open question is the stability of the explanation methods themselves. Post-hoc techniques like SHAP and LIME can produce variable feature rankings across different runs or model initializations, especially in high-dimensional settings. 
While the two-stage training strategy may help mitigate this, the paper does not directly evaluate how such variability might affect the robustness of the learned policy. It would strengthen the work to include a ranking stability analysis (e.g., across seeds or models), and assess how sensitive the acquisition policy is to such fluctuations. 2. Additionally, it would be helpful to contrast this approach with training-time feature selectors like INVASE (Yoon et al., 2018), which do not rely on post-hoc explainability and may produce more stable, model-aligned importance scores. A comparison or discussion here would strengthen the positioning and generalizability of the approach. Other Comments Or Suggestions: Line 312: "due to to" → Typo, please correct to "due to". Questions For Authors: 1. How robust is your method to noise or instability in the feature importance rankings generated by SHAP or LIME? Would using a weaker model (or different initialization) to compute rankings degrade downstream acquisition policy quality significantly? 2. How does your method compare to training-time feature selection methods like INVASE that don’t rely on post-hoc explainability? Could this offer better stability or generalization? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Response to W1 & Q1: We appreciate the reviewer’s comments on the potential variability of post-hoc explanation methods and their impact on the robustness of our learned acquisition policy. To address these concerns, we conducted several robustness evaluations. As shown in Table 3, our method performs consistently across different explanation-based ranking approaches (e.g., SHAP, LIME), demonstrating resilience to variations in feature importance estimations. Additionally, we tested the impact of using a weaker model to compute rankings: when replacing ResNet-18 with ResNet-10 on CIFAR-10, performance declined only marginally (78.61% to 78.22%), suggesting our method is robust to model capacity differences in the ranking stage. As further suggested, we evaluated the stability of feature importance rankings across random seeds. Specifically, we trained three initial models using different random seeds, resulting in three distinct ranking orders for each explanation method in Table 3. For each ranking order, we trained our policy network three times to account for variability in our model’s initialization.
The updated Table 3 now reports the mean and standard deviation across these nine runs, demonstrating stable performance and indicating that our method is robust to fluctuations in explanation-based rankings: # Updated Table 3 | | Spam | Metabric | CPS | CTGS | CKD | | --- | --- | --- | --- | --- | --- | | T-SHAP | 0.955 ± 0.001 | 69.83 ± 0.41% | 67.45 ± 0.13% | 0.916 ± 0.001 | 0.836 ± 0.07 | | LIME | 0.953 ± 0.002 | 69.15 ± 0.18% | 67.06 ± 0.36% | 0.913 ± 0.001 | 0.822 ± 0.09 | | K-SHAP | 0.956 ± 0.002 | 69.57 ± 0.33% | 67.32 ± 0.56% | 0.915 ± 0.001 | 0.831 ± 0.005 | | IME | 0.954 ± 0.001 | 69.78 ± 0.10% | 67.12 ± 0.61% | 0.916 ± 0.001 | 0.8258 ± 0.1 | | INVASE | 0.927 ± 0.002 | - | 68.37 ± 0.23% | 0.912 ± 0.003 | 0.8305 ± 0.09 | | OPL* | 0.889 ± 0.002 | 61.54 ± 0.23% | 63.45 ± 0.80% | 0.864 ± 0.005 | 0.7003 ± 0.04 | *We also evaluated an RL-based method (OPL [1]). Since we are unable to update our figures during the rebuttal period, we present this baseline result in table format. Response to W2: Our AFA framework fundamentally differs from end-to-end feature selection methods like INVASE. While INVASE integrates feature selection directly into model training via policy gradients, AFA decouples the feature importance estimation from policy learning. This has two advantages: (a) it enables interpretability aligned with widely used post-hoc explanation tools, and (b) it simplifies training by avoiding the high variance and complexity of reinforcement learning. Response to Q2: Thank you also for suggesting the INVASE [2] method. Our approach is compatible with any ranking order, including those produced by INVASE. Based on your suggestion, we evaluated our method using INVASE-derived rankings and included these results in Table 3 (excluding the Metabric dataset due to computational constraints). Running INVASE on Metabric proved infeasible within the rebuttal window due to its high computational cost.
References: [1] Kachuee, M., Goldstein, O., Kärkkäinen, K., and Sarrafzadeh, M. Opportunistic learning: Budgeted cost-sensitive learning from data streams. In International Conference on Learning Representations, 2019. [2] Yoon, Jinsung, James Jordon, and Mihaela Van der Schaar. "INVASE: Instance-wise variable selection using neural networks." International conference on learning representations. 2018. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My questions have been addressed, so I will be maintaining the same score. --- Reply to Comment 1.1.1: Comment: We appreciate the reviewer's follow-up and are glad that the concerns have been resolved.
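To illustrate the ranking-guided acquisition idea discussed in this thread, here is a minimal toy sketch (not the authors' implementation): it uses the closed-form SHAP values of a linear model with independent features, phi_i = w_i (x_i - E[x_i]), as a stand-in for SHAP/LIME rankings, and then reveals features in ranked order under a budget. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: for a linear model with independent features, the exact
# SHAP value of feature i is w_i * (x_i - E[x_i]). Everything here is
# illustrative, not the paper's implementation.
w = np.array([0.1, 2.0, -1.5, 0.3])      # hypothetical model weights
X = rng.normal(size=(5, 4))              # 5 instances, 4 features
phi = w * (X - X.mean(axis=0))           # instance-wise SHAP values

# Instance-wise acquisition order: most important feature first.
order = np.argsort(-np.abs(phi), axis=1)

def acquire(x, order_row, budget):
    """Reveal only the first `budget` features in ranked order."""
    mask = np.zeros_like(x, dtype=bool)
    mask[order_row[:budget]] = True
    return np.where(mask, x, 0.0)        # unacquired features imputed as 0

x_partial = acquire(X[0], order[0], budget=2)
print(order[0], x_partial)
```

In the actual framework such per-instance rankings supervise a policy network; this sketch only shows the ranking and budgeted-reveal step.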
Layer-wise Quantization for Quantized Optimistic Dual Averaging
Accept (poster)
Summary: *I am not familiar with this line of research, so my confidence in the following review is limited. While I will provide feedback based on my understanding, please keep in mind that my assessment may not be entirely precise.* - The paper provides theoretical guarantees for layer-wise quantization and demonstrates its advantages over global quantization. - It then introduces a new layer-wise quantization algorithm, QODA, designed to solve the variational inequality problem in a distributed setting. - By leveraging gradients stored from previous iterations, QODA improves the computational efficiency of the existing method, Q-GenX. The authors evaluate their approach on GAN fine-tuning, showing that QODA with layer-wise quantization outperforms its global quantization counterpart, Q-GenX. Claims And Evidence: **Q1:** To my understanding, the QODA algorithm appears to be agnostic to the choice between layer-wise and global quantization. The authors present results for both approaches under the QODA-based extended Adam optimizer. However, the impact of the QODA algorithm itself does not seem to be explicitly ablated in the experiments. It would be helpful if the authors could provide results comparing Q-GenX training and QODA, both using the global quantization setting. Methods And Evaluation Criteria: See the comments and questions above. Theoretical Claims: See the comments and questions above. Experimental Designs Or Analyses: See the comments and questions above. Supplementary Material: I have reviewed all the Appendices, while I have not checked the correctness of the proofs. Relation To Broader Scientific Literature: See the comments and questions above. Essential References Not Discussed: See the comments and questions above. Other Strengths And Weaknesses: See the comments and questions above. Other Comments Or Suggestions: See the comments and questions above. Questions For Authors: See the comments and questions above. Code Of Conduct: Affirmed. 
Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your time and your comments on our work. We will address your concern as follows: Q1: To my understanding, the QODA algorithm appears to be agnostic to the choice between layer-wise and global quantization. The authors present results for both approaches under the QODA-based extended Adam optimizer. However, the impact of the QODA algorithm itself does not seem to be explicitly ablated in the experiments. It would be helpful if the authors could provide results comparing Q-GenX training and QODA, both using the global quantization setting. A1: In Appendix G, we have compared Q-GenX and QODA in the full-precision setting with no quantization for a bilinear game with feedback that is corrupted by relative noise. Under this setting, Q-GenX and QODA are equivalent to GEG and ODA, respectively. From Figure 2, we can observe that the performance of ODA is significantly better than GEG. In fact, it is known in the literature that GEG diverges for certain standard settings like this simple bilinear problem [1,2]. ODA with stepsize separation can help to prevent this issue [1]. References: [1] Hsieh et al., No-regret learning in games with noisy feedback: Faster rates and adaptivity via learning rate separation, NeurIPS 2022 [2] Daskalakis et al., Training GANs with Optimism, ICLR 2018 --- Rebuttal Comment 1.1: Comment: I appreciate the authors’ response in the rebuttal. I will keep my score, but I hope the Area Chairs will keep in mind that I am not familiar with this line of research, so my confidence in this review and score is limited. --- Reply to Comment 1.1.1: Comment: Hi Reviewer X8sM, Thank you very much for your time reading our response and reviewing our work. Best regards, Authors
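As a brief aside for readers unfamiliar with this line of work: the divergence of non-optimistic gradient play on bilinear games mentioned in A1 is easy to reproduce numerically. Below is a minimal sketch on the toy game min_x max_y xy, contrasting plain simultaneous gradient descent-ascent with the standard optimistic (past-gradient) update; the step size, iteration count, and all names are illustrative and not the paper's Appendix G setup.

```python
import numpy as np

# Toy bilinear game min_x max_y f(x, y) = x * y. Plain simultaneous
# gradient descent-ascent spirals outward, while the optimistic
# (past-gradient) update converges to the equilibrium (0, 0).
def field(z):
    x, y = z
    return np.array([y, -x])  # (df/dx, -df/dy): descend in x, ascend in y

eta = 0.1
z_gda = np.array([1.0, 1.0])
z_opt = np.array([1.0, 1.0])
v_prev = field(z_opt)

for _ in range(2000):
    # Plain simultaneous gradient descent-ascent.
    z_gda = z_gda - eta * field(z_gda)
    # Optimistic update: z_{t+1} = z_t - eta * (2 v(z_t) - v(z_{t-1})).
    v_now = field(z_opt)
    z_opt = z_opt - eta * (2 * v_now - v_prev)
    v_prev = v_now

print(np.linalg.norm(z_gda), np.linalg.norm(z_opt))
```

The plain iterate norm grows by a factor of sqrt(1 + eta^2) per step, while the optimistic iterate contracts linearly, matching the qualitative behavior the authors report in Figure 2.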
Summary: The authors develop a general layer-wise quantization framework with tight variance and code-length bounds that adapts to heterogeneities over the course of training for variational inequality (VI) methods. The authors first choose layer-wise quantization levels that minimize the quantization variance, and then propose a novel Quantized Optimistic Dual Averaging (QODA) algorithm with adaptive learning rates, which achieves competitive convergence rates for monotone VIs. Experimental results reveal that the proposed methods achieve a 150% speedup over the baselines in end-to-end training time for training Wasserstein GAN on 12+ GPUs. Claims And Evidence: Some claims are not supported in the manuscript. See "Other Strengths And Weaknesses" for details. Methods And Evaluation Criteria: I've checked all theoretical and qualitative analyses and claims in this paper. See the "Other Strengths And Weaknesses" part of this review for my major and minor concerns about the methodology and equation derivations. Theoretical Claims: I've checked all theoretical and qualitative analyses and claims in this paper. See the "Other Strengths And Weaknesses" part of this review for my major and minor concerns about the methodology and equation derivations. Experimental Designs Or Analyses: I've checked all experimental settings, comparisons, and results in this paper. See the "Other Strengths And Weaknesses" part of this review for my major and minor concerns about the experimental part. Supplementary Material: All available details in the supplementary material were checked. Relation To Broader Scientific Literature: All contributions are technical and all datasets used for experiments are open-sourced, so no key contributions of this paper relate to the broader scientific literature. Essential References Not Discussed: All necessary references are discussed. Other Strengths And Weaknesses: ## Major weaknesses 1.
Many claims in the introduction are not proved or addressed by the methods proposed by the authors (maybe they are, but the authors do not provide any experiments about these claims): (1) From lines 32 to 45, the authors mention "large-scale distributed training"; however, I do not find any experiments with NNs trained on "large-scale" nodes. (2) In the second paragraph of the introduction, the authors claim that layers and structures in different DNNs can be diverse and that prior art lacks generalization. I also do not find sufficiently extensive experiments on differently structured NNs, e.g., RNN, CNN, RWKV, Mamba, etc. (3) The first three paragraphs discuss diverse NNs distributed across nodes, which can be different devices. Then why, in the last paragraph, do the authors only propose a method based on distributed VIs? The writing logic is odd. 2. In Section 2.1, the authors claim that this method is quantization; however, in $\mathbf{s} := [\mathrm{sign}(v_i), \cdots]^{\top}$, the sign function is only used in network binarization, which is an extreme case of quantization. Therefore, the claim or the definition of $\mathbf{s}$ is wrong; for quantization, it would be $\mathbf{s} := [\mathrm{round}(v_i), \cdots]^{\top}$. 3. How is Assumption 2.4 obtained? If the oracle $g(\cdot)$ is simply defined to satisfy the two conditions, then how can the authors prove that a proper $g(\cdot)$ can always be found? 4. What does $\sigma_R$ in Assumption 2.5 represent? 5. In the first paragraph of Section 3.1, the authors use SGD as the optimizer and QSGD with its series of follow-up works as baselines. However, when training large-scale LLMs, ViTs, and MLLMs in a distributed manner, Adam and AdamW are the most commonly chosen optimizers. Thus the methods proposed by the authors cannot be directly generalized to the aforementioned settings. 6.
The "Quantization Variance" paragraph in Section 3.1 simply describes first calculating the quantization variance for each level sequence $m$ and minimizing it, and second, under absolute noise, replacing $A(x)$ with the oracle $g(\cdot)$. However, there are too many different symbols and notations; the writing is very hard to follow, while the method itself is simple. 7. What does $Y_t$ in Algorithm 1 represent? 8. Same as question 5: if Adam and AdamW are applied, the ODA equation would be different and would need a specialized design. 9. It seems the only novelty of Section 4 is the proposed intermediate variable $Y_t$ compared to previous methods? 10. Why use only 5-bit compression in the experiments? What about other bit-width settings? The experiments are not extensive enough to prove the effectiveness of the proposed method. 11. Why compare only the time cost in Tables 1 & 2? What about the performance? Would your methods affect the performance? Other Comments Or Suggestions: See above. Questions For Authors: See above. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the detailed comments. 1: Regarding (1), we believe our significant speedups with 16 GPUs are sufficient for a **theory** paper on distributed VIs (DVIs). We list all the related theory DVI works (in top ML conferences) and the no. of GPUs used. [2] is the only one with 16 GPUs like us, while the rest use fewer or only simulated nodes on CPU. |Paper| no. of GPUs| |--------|-------| |[1]|4| |[2]|16| |[3]|0, simulated nodes| |[4]|0, simulated nodes| |[5]|0, only CPU| Regarding (2), our main novelty is not to show the known heterogeneity in the literature. This heterogeneity provides the motivation for us to design our quantization method. Our focus is on providing the **theoretical** framework and bounds for layerwise quantization and introducing QODA for distributed VIs with strong theoretical guarantees. 2: There is a misunderstanding. Throughout the paper, we usually denote a vector that will be quantized later by $\mathbf{v}$, simply represented by the tuple $(||\mathbf{v}||_q, s, u)$. This does not mean that the vector $\mathbf{v}$ is quantized as such. For quantization, we refer you to lines 188-192, where the quantization of $\mathbf{v}$ is defined with the quantization random variable $q_\ell$. We will move Section 2.1 to Section 3 to improve clarity. 3: Assumption 2.4 is standard in stochastic VIs [6, 7]. The classical work [7] lists several general operators in its Section 2.2 that fall under this assumption, such as the Matrix Minimax problem [8, Section 2.2.2]. 4: $\sigma_R$ is a constant that shows how the error vanishes near a solution of the VI [6]. We will fix the writing as: ... and there exists $\sigma_R >0$ such that $\mathbb{E}\left[\|U(\mathbf{x}, \omega)\|^2_\ast\right] \leq \sigma_R \|A(\mathbf{x})\|_\ast^2$. 5+8: The key point is that our layer-wise quantization framework is versatile and independent of the underlying optimizer. For instance, we demonstrate its applicability with Transformer-XL (Table 3).
Meanwhile, QODA is a novel method we introduce with theoretical guarantees for DVIs, and its contribution is distinct from the broader layer-wise quantization framework. For Q8: QODA is our novel method, along with its guarantees for DVIs, since it recovers the order-optimal O(1/T) rate, which Adam and GD cannot [8]. 6: We simplify the notations as follows: We consider an arbitrary layer type and remove the index $m$. Let $\upsilon$ be the index of a level w.r.t. an entry $u\in[0,1]$ such that $\ell_{\upsilon} \leq u < \ell_{\upsilon+1}$. Let $\xi(u) = (u - \ell_{\upsilon})/(\ell_{\upsilon+1} - \ell_{\upsilon})$ be the relative distance of $u$ to the level $\upsilon + 1$. For a sequence $\bf{\ell}$, define the random variable $q_{\bf{\ell}}(u)= \ell_{\upsilon} \text{ with probability }1-\xi(u)$ and $=\ell_{\upsilon+1} \text{ with probability }\xi(u)$. The rest will follow a similar simplification. 7: For optimistic methods, this extra variable serves as a predictive term. It is used to "look ahead" to the gradient in the next iteration based on previous gradient information. That is, this anticipatory step with $Y_t$ allows the update rule to be more informed, improving convergence speed and stability. 9: Our novelties here are to design quantization with ODA and to propose adaptive learning rates that lead to optimal guarantees (Section 5) under milder assumptions. Firstly, our optimistic approach removes the one “extra” gradient step that extra-gradient methods like Q-GenX [1] take, hence reducing the communication burden by half. Secondly, we design the adaptive learning rates (equation (4)) to obtain guarantees with fewer assumptions. 10: We selected 5-bit compression because our experiments indicated that it is the lowest bit-width at which we reliably recover baseline accuracy. We also tested 4-bit compression; the training performance was nearly identical.
Our goal is to show that even with aggressive compression, our method maintains baseline performance while offering significant computational benefits. 11: We refer the reviewer to Figure 1. In lines 373-375 (right half of the page), we state that in Fig. 1 the QODA approach not only recovers the baseline accuracy but also improves the performance relative to Q-GenX. Refs: [1] Distributed extra-gradient with optimal complexity and communication guarantees, ICLR 2023 [2] Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees, NeurIPS 2022 [3] Byzantine-Tolerant Methods for Distributed Variational Inequalities, NeurIPS 2023 [4] Stochastic Gradient Descent-Ascent: Unified Theory and New Efficient Methods, AISTATS 2023 [5] Communication-Efficient Gradient Descent-Ascent Methods for Distributed Variational Inequalities: Unified Analysis and Local Updates, ICLR 2024 [6] Polyak, Introduction to optimization, 1987 [7] Solving Variational Inequalities with Stochastic Mirror-Prox algorithm, Stochastic Systems 2011 [8] Training GANs with Optimism, ICLR 2018
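The stochastic quantization rule described in point 6 of this rebuttal (round an entry to one of its two adjacent levels with probability given by its relative distance, making the quantizer unbiased) can be illustrated with a minimal sketch. The uniform level grid and all names below are illustrative; the paper optimizes the level sequence per layer type.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch: an entry u in [0, 1] is rounded down to level l_v
# with probability 1 - xi(u) and up to l_{v+1} with probability xi(u),
# where xi(u) is the relative distance of u to the upper level.
levels = np.linspace(0.0, 1.0, 9)  # 9 levels: a uniform 3-bit-style grid

def quantize(u):
    v = int(np.searchsorted(levels, u, side="right")) - 1
    v = min(v, len(levels) - 2)    # keep an upper level available for u = 1
    lo, hi = levels[v], levels[v + 1]
    xi = (u - lo) / (hi - lo)      # relative distance to the upper level
    return hi if rng.random() < xi else lo

# Unbiasedness check: E[q(u)] = u.
u = 0.37
samples = np.array([quantize(u) for _ in range(20000)])
print(samples.mean())
```

The empirical mean concentrates around u itself, which is the unbiasedness property the variance and code-length bounds build on.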
Summary: This paper introduces a layer-wise quantization framework that adapts to heterogeneities over the course of training DNNs. Instead of applying a uniform quantization strategy across all layers, the proposed approach optimizes quantization sequences per layer with tight variance and code-length bounds. Building on this framework, the authors propose Quantized Optimistic Dual Averaging (QODA), a distributed solver for variational inequalities (VIs) that integrates optimism to reduce communication overhead and improve convergence rates. Empirical results show that QODA accelerates Wasserstein GAN training by up to 150% while maintaining competitive performance. Claims And Evidence: The claims seem to be well supported by empirical evidence and theory. Methods And Evaluation Criteria: Experiments focus on training speed, comparing QODA against Q-GenX and uncompressed baselines on GANs (WGAN on CIFAR-10/100) and Transformer-XL (WikiText-103). Theoretical Claims: The convergence guarantees for QODA follow principles from distributed optimization and variational inequalities, though I did not verify every proof in depth. Experimental Designs Or Analyses: I checked the experimental design and analyses, and they appear okay to me. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: - The work extends gradient quantization techniques (e.g., QSGD, adaptive compression methods) by making them layer-aware and variance-optimal. - It contributes to distributed optimization by integrating optimistic gradient methods into quantized variational inequality solvers. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - The layer-wise quantization approach is well-motivated. - The variance and code-length bounds extend previous quantization results and improve efficiency. - The empirical results demonstrate significant training speed improvements.
- The scalability study (Table 2) confirms that QODA maintains efficiency up to 16 GPUs. Weaknesses: - The paper does not analyze accuracy trade-offs—does layer-wise quantization affect generalization or model stability? - A comparison with other quantization techniques would highlight the benefit of this approach. - For someone who is not that experienced in this area, this paper is not easy to understand. I believe the Introduction could be improved a bit to provide the context. Paper also seems to be a bit overusing the notations. Other Comments Or Suggestions: - A discussion on QODA’s compatibility with mixed-precision training would be valuable. - Adding an ablation study on how different layers benefit from different quantization levels would strengthen the paper. - Writing could be improved a bit. Questions For Authors: - How does layer-wise quantization affect final model accuracy? - How does QODA compare to quantization-aware training (QAT) and post-training quantization (PTQ)? - Can QODA be integrated with mixed-precision methods to further improve efficiency? Ethical Review Concerns: NA Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you very much for your time and your comments on our work. We will address comments in the following QnA format: Q1: The paper does not analyze accuracy trade-offs: does layer-wise quantization affect generalization or model stability? A1: We hope to clarify that in Figure 1, we show that our method can recover the accuracy of the full-precision baseline model. This suggests that layerwise quantization does not have an adverse impact on accuracy, while significantly reducing the time per optimization step (Tables 1 and 2). Moreover, for the Transformer-XL task, Table 3 shows that layer-wise compression provides a better accuracy-compression trade-off than global compression, obtaining a better perplexity score while improving training speed. Q2: A comparison with other quantization techniques would highlight the benefit of this approach. How does QODA compare to quantization-aware training (QAT) and post-training quantization (PTQ)? A2: In brief, the methods are complementary. Quantization-Aware Training (QAT) methods seek to produce models with quantized _weights and activations_ during training, and as such, compress these elements during the training process. PTQ methods aim to do so in a single compression step, e.g., by using layer-wise solvers to find a good quantized weight assignment [1,2]. By contrast, we investigate _gradient_ compression, with the goal of reducing the amount of communication employed during the training process itself. As such, the objectives are orthogonal, and QAT methods could be used in conjunction with QODA. We will add the above discussion with references [1,2] to the related work section in future manuscripts. Q3: I believe the Introduction could be improved a bit to provide the context. Paper also seems to be a bit overusing the notations. Writing could be improved a bit.
A3: Understanding the complexity of layer-wise quantization, we have tried to simplify the notations whenever possible (lines 176-178 and 278-279). Since the paper is theory-focused, we try to provide the most rigorous model and analysis. We will try to improve as follows: - We have made and will add the following schematic to help the visualization of our framework in the introduction: https://imgur.com/a/P3AFsPo, which highlights how layer-wise quantization can optimally choose different compression types depending on the importance of the layers. - For notational simplicity, we consider an arbitrary layer type and remove the index $m$ in the initial discussion of Section 3.1 as follows: (Fix a type $m$.) Let $\upsilon$ denote the index of a level with respect to an entry $u\in[0,1]$ such that $\ell_{\upsilon} \leq u < \ell_{\upsilon+1}$. Let $\xi(u) = (u - \ell_{\upsilon})/(\ell_{\upsilon+1} - \ell_{\upsilon})$ be the relative distance of $u$ to the level $\upsilon + 1$. For a sequence $\bf{\ell}$, we define the following random variable $q_{\bf{\ell}}(u) = \ell_{\upsilon} \text{ with probability } 1-\xi(u)$ and $\ell_{\upsilon+1} \text{ with probability } \xi(u)$. The later part of Section 3.1 will follow a similar simplification. Q4: How does layer-wise quantization affect final model accuracy? A4: We refer the reviewer to Figure 1, where we compare applying layer-wise quantization with the full-precision baseline. The figure indicates that the adaptive QODA approach not only recovers the full-precision baseline accuracy but also improves convergence relative to Q-GenX [3] (lines 373-375, right half of the page). Moreover, Tables 1 and 2 show that QODA achieves this accuracy with a much faster time per optimization step than the full-precision baseline. Q5: Can QODA be integrated with mixed-precision methods to further improve efficiency? A discussion on QODA’s compatibility with mixed-precision training would be valuable. A5: Yes, indeed.
Mixed-precision methods seek to reduce the computational cost of one or more of the three matrix multiplications employed during training, e.g. for linear layers (one on the forward and two on the backward pass). As noted above, QODA is complementary to such techniques, as it serves to reduce the precision in which the gradient is stored and transmitted. Thus, mixed-precision techniques do not directly address communication costs, but could be applied in conjunction with QODA to reduce both computational and communication costs. Based on the reviewer's feedback, we will elaborate on this extension in the future manuscript. Q6: Adding an ablation study on how different layers benefit from different quantization levels would strengthen the paper. A6: We are currently running the experiments and will provide the results in the discussion period. References: [1] Ashkboos et al., EfQAT: An Efficient Framework for Quantization-Aware Training, 2024. [2] Frantar et al., GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers, ICLR 2023. [3] Ramezani-Kebrya et al., Distributed extra-gradient with optimal complexity and communication guarantees, ICLR 2023. --- Rebuttal Comment 1.1: Comment: Thanks for addressing my concerns. I am pleased to stick with my current rating and recommend acceptance for this paper. --- Reply to Comment 1.1.1: Comment: Dear Reviewer j331, Thank you very much for your response. We have completed the ablation study in Q6 and report the results as follows: To further demonstrate the advantage of performing quantization on a layer-wise basis, we conducted an ablation experiment on Transformer-XL [1]. In this test, we compared the test perplexity resulting from quantizing only the position-wise feed-forward layer (FF), the embedding layer, and the attention layer (i.e., the matrices containing all the parameters of k, q, and v at each layer), respectively. We used the PowerSGD quantization method [2] with varying quantization levels (ranks).
All experiments were trained on WikiText-103, based on the implementation from [3]. Each setup was repeated four times with different seeds, and the results are shown in https://imgur.com/a/3Dz7hgP. As seen in the figure, given the same compression level, quantizing the embedding layer results in a much larger drop in performance. This supports our intuition that layerwise quantization could be more beneficial, as different layers exhibit varying sensitivity to quantization. We appreciate your valuable suggestions and will include this ablation study in our future manuscript. References: [1] Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q. V., and Salakhutdinov, R. Transformer-XL: Attentive language models beyond a fixed-length context, arXiv:1901.02860, 2019. [2] Vogels, T., Karimireddy, S. P., and Jaggi, M. PowerSGD: Practical low-rank gradient compression for distributed optimization, NeurIPS’19. [3] Markov, I., Alimohammadi, K., Frantar, E., and Alistarh, D. L-GreCo: Layerwise-adaptive gradient compression for efficient data-parallel deep learning, MLSys'24.
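As a side note for readers, the randomized rounding scheme $q_{\bf{\ell}}(u)$ defined in A3 above can be sketched in a few lines. This is a hypothetical illustration (the function name and NumPy usage are ours, not the authors' code): an entry $u \in [0,1]$ is rounded to the level below it with probability $1-\xi(u)$ and to the level above with probability $\xi(u)$, which makes the quantizer unbiased.

```python
import numpy as np

def stochastic_quantize(u, levels, rng=None):
    """Randomized rounding of u in [0, 1] onto a sorted sequence of levels.

    Sketch of the random variable q_l(u) from A3 above: with l_v <= u < l_{v+1}
    and xi(u) = (u - l_v) / (l_{v+1} - l_v), return l_v with probability
    1 - xi(u) and l_{v+1} with probability xi(u), so that E[q_l(u)] = u.
    """
    rng = rng or np.random.default_rng()
    levels = np.asarray(levels, dtype=float)
    v = int(np.searchsorted(levels, u, side="right")) - 1  # largest v with l_v <= u
    if v >= len(levels) - 1:       # u is at (or above) the top level
        return float(levels[-1])
    lo, hi = levels[v], levels[v + 1]
    xi = (u - lo) / (hi - lo)      # relative distance to the upper level
    return float(hi if rng.random() < xi else lo)
```

Unbiasedness follows since $\ell_{\upsilon}(1-\xi(u)) + \ell_{\upsilon+1}\xi(u) = u$; the paper's contribution is choosing a different, adaptive level sequence per layer, which this sketch simply takes as a given input.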
Summary: Based on theoretical analysis, this paper presents a layer-wise quantization framework that adapts to the unique properties of different neural network layers. This framework leads to the development of the Quantized Optimistic Dual Averaging (QODA) algorithm, which uses adaptive learning rates to achieve competitive convergence in variational inequalities and significantly speeds up training of Wasserstein GANs on GPUs. Claims And Evidence: - The motivation in terms of layer-wise quantization (which is directly related to Mishchenko et al., 2024) in training could be further clarified, as layer-wise quantization is not a well-known concept in this field. Methods And Evaluation Criteria: This paper provides a solid theoretical analysis for previous empirical work; it provides tight guarantees for layer-wise quantization. Theoretical Claims: I have checked the theoretical analysis. Although promising, I personally believe that the use of symbols could be further reduced to improve readability, as the underlying math is straightforward. Experimental Designs Or Analyses: The experimental results of this paper are relatively limited, as most of the main body is used for theoretical analysis. Supplementary Material: The supplementary materials include necessary additions to the theoretical analysis. Relation To Broader Scientific Literature: This work extends previous works, e.g., adaptive layer-wise quantization (Markov et al.) and Q-GenX (Ramezani-Kebrya et al.). Essential References Not Discussed: As far as I know, this submission discusses the related works properly. Other Strengths And Weaknesses: More empirical experiments could further enhance this paper; however, since the contributions of this paper lie mainly in theoretical analysis, I would not consider this a major weakness. Other Comments Or Suggestions: N/A Questions For Authors: For GANs, do the authors observe any instabilities during training?
Or did the authors attempt to train GANs on more challenging datasets besides CIFAR to observe the effectiveness of the method? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for reviewing our work and giving us feedback. We will address your comments in a QnA template. Q1: The motivation in terms of layer-wise quantization (which is directly related to Mishchenko et al., 2024 or [4]) in training could be further clarified, as layer-wise quantization is not a well-known concept in this field. A1: Based on the reviewer's feedback, we propose adding the following brief summary at the beginning of Section 3: As mentioned in the introduction, previous literature has shown that different types of layers in deep neural networks learn distinct hierarchical features—from low-level patterns to high-level semantic representations [1, 2]. These layers also vary in the number of parameters they contain and in their impact on final accuracy [3]. Thus, the key motivation for our layer-wise quantization scheme is to account for this heterogeneity and provide a more effective approach to quantizing neural network training. We also want to stress that our method is not "directly related" to the block (p-)quantization approaches described in [4, 5]. In lines 58-59 on the right-hand side of the page, we emphasize that block (p-)quantization [4, 5] is **fundamentally different** from the layer-wise quantization presented in our paper, as detailed in Appendix A.2. There are three fundamental distinctions between block quantization and our layer-wise quantization: - Each layer (or block, in this context) has its own adaptive sequence of levels (Section 3). This is why our method is called **layer-wise**. The works [4, 5], on the other hand, apply the same p-quantization scheme $\text{Quant}_p$ to blocks with different sizes, implying that the nature and analysis of the two methods are very different. - The way the quantization is calculated for each block or layer is different.
[4] studies and provides guarantees for the following type of p-quantization (for all blocks): $\widetilde{\Delta}=\|\Delta\|_p \operatorname{sign}(\Delta) \circ \xi,$ where $\xi$ is a stack of random Bernoulli variables. In our work, the sequence of levels for each layer is adaptively chosen according to the statistical heterogeneity over the course of training (refer to equation (MQV) in the main paper). - The guarantee in [4, Theorem 3.3] only covers p-quantization rather than block p-quantization. In our Theorem 5.1, we provide the quantization variance bound for any arbitrary sequence of levels for each layer, in contrast to the bound for only levels based on p-quantization [4]. Q2: For GANs, do the authors observe any instabilities during training? Or did the authors attempt to train GANs on more challenging datasets besides CIFAR to observe the effectiveness of the method? A2: We did not observe any additional training instabilities when applying our compression method to GANs. As demonstrated in the FID plots (Figure 1 of the paper), the training curves for QODA closely follow — and at times are even smoother than — the no-compression baseline, indicating that our method does not introduce extra instability. We believe the results and speedup observed on CIFAR-10 and CIFAR-100 provide strong evidence that QODA is as stable as the baseline. References: [1] Zeiler et al., Visualizing and understanding convolutional networks, ECCV 2014. [2] He et al., Deep residual learning for image recognition, CVPR 2016. [3] Dutta et al., On the discrepancy between the theoretical analysis and practical implementations of compressed communication for distributed deep learning, AAAI 2020. [4] Mishchenko et al., Distributed learning with compressed gradient differences. Optimization Methods and Software, 2024. [5] Wang et al., Theoretically better and numerically faster distributed optimization with smoothness-aware quantization techniques, NeurIPS 2022.
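For contrast, the one-level p-quantization attributed to [4] in the bullet above can be sketched as follows. This is a hypothetical illustration using a common unbiased choice of Bernoulli probabilities ($|\Delta_i| / \|\Delta\|_p$), not the authors' implementation:

```python
import numpy as np

def p_quantize(delta, p=2, rng=None):
    """QSGD-style one-level p-quantization: ||delta||_p * sign(delta), masked by xi.

    xi is a vector of Bernoulli variables; taking P(xi_i = 1) = |delta_i| / ||delta||_p
    (a common unbiased choice) gives E[p_quantize(delta)] = delta. The same scheme
    is applied to every block, unlike the adaptive per-layer level sequences
    discussed in the rebuttal above.
    """
    rng = rng or np.random.default_rng()
    delta = np.asarray(delta, dtype=float)
    norm = np.linalg.norm(delta, ord=p)
    if norm == 0.0:
        return np.zeros_like(delta)
    xi = rng.random(delta.shape) < np.abs(delta) / norm  # Bernoulli mask
    return norm * np.sign(delta) * xi
```

Every entry is thus mapped to one of only two magnitudes (0 or the block norm), which illustrates why a single level sequence shared across heterogeneous layers can be suboptimal.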
Enhancing Decision-Making of Large Language Models via Actor-Critic
Accept (poster)
Summary: This paper proposes a gradient-free Actor-Critic framework (LAC) to enhance the decision-making capabilities of LLMs. LAC integrates a value-based critic that offers quantitative feedback to guide policy improvement and employs a gradient-free optimization approach to update the actor. Experimental results demonstrate that LAC outperforms GPT-4 with ReAct on ALFWorld and BabyAI-Text tasks. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: Yes. Experimental Designs Or Analyses: Yes. Supplementary Material: Yes. Relation To Broader Scientific Literature: The proposed framework demonstrates the potential of 'actor-critic' for LLM-based decision-making scenarios. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. This paper proposes a token-level action evaluation method and utilizes a gradient-free method to update the policy LLM. The idea makes sense and is concise. 2. The experiment setting and provided implementation details are comprehensive. Weaknesses: 1. Lack of in-depth analysis and explanation. This work constructs a tailored actor-critic framework for the decision-making scenario, and the performance improvement is promising. Nevertheless, in the experimental results (i.e., Sections 5.2, 5.3, 5.4), we do not see a very illustrative explanation revealing the potential reason behind this improvement. This limits the impact of the paper, as the actor-critic framework is broadly similar to that in the general domain (e.g., reasoning). 2. From this paper, I cannot see the technical difference of the actor-critic framework between the decision-making scenario and the general LLM reasoning scenario. It seems that this paper simply adapts the general actor-critic framework to the decision-making task without domain-specific justification or design. This limits the overall novelty of this paper. 3.
The overall paper writing is barely satisfactory; some parts lack intuitive explanations, making it difficult for readers to understand. Other Comments Or Suggestions: 1. More in-depth analysis of the experiment results. Questions For Authors: 1. LATS is also one of the baselines, which uses external feedback and the MCTS method to improve performance. Why did the authors only report its performance on the WebShop benchmark? Can it be adapted to the other benchmarks? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: > `Q1: Lack of intuitive explanations and in-depth analysis.` **A:** We appreciate the reviewer's emphasis on intuitive explanations and analyses. In Figs. 10 and 11 (Appx. A.7), we have provided detailed illustrative analyses using representative examples from ALFWorld and BabyAI-Text, which demonstrate how each component of our actor-critic framework (LAC) influences action selection. Fig. 10 presents a concrete scenario where the agent's goal is to "put a saltshaker in the drawer." At a critical decision step (Step 4), we observe the following intuitive distinction between components. - The LLM-based prior policy alone mistakenly suggests "go to drawer 1" because the base LLM overlooks that the agent has already found the correct object ("saltshaker 1") in cabinet 2. This error exemplifies the common hallucination problem in LLMs, which occurs when the model disregards previous states and incorrectly recommends irrelevant actions. - In contrast, the critic suggests "take saltshaker 1 from cabinet 2" because it evaluates potential actions by predicting future trajectories and determines that this action will successfully pick up the correct object. - Our method leverages these distinct insights by optimizing the prior policy's action distribution based on the critic's evaluation (see Lines 4-8 in Algo. 1). It effectively corrects the errors introduced by the prior policy, balancing the strengths of the prior policy (flexible but sometimes inaccurate) and critic evaluations (accurate but computationally intensive). This illustrative example explicitly reveals why integrating the actor (prior policy) and critic leads to substantial performance improvement. > `Q2: The difference of the actor-critic framework between the decision-making scenario and the general LLM reasoning scenario.` **A:** We respectfully disagree with this claim.
Compared to the classic RL actor-critic algorithm, our method is distinguished by two key contributions that specifically address significant challenges in enhancing LLMs' decision-making capabilities: (1) To effectively extract action evaluation information from LLMs, we propose a novel Q-function estimation approach (Sec. 4.1) that leverages LLMs' internal belief about the success or failure of the current task; (2) To efficiently utilize these action evaluation insights for policy optimization, we formulate the policy improvement problem as a KL-divergence-constrained optimization and derive a closed-form solution (Sec. 4.2), allowing us to optimize the policy in a gradient-free way. Compared to the general LLM reasoning scenario, our method introduces two novel features that are specially designed for sequential decision-making problems: (1) To support long-term planning, our critic module first predicts possible future trajectories for action evaluation, which significantly improves the accuracy of action-value estimation. (2) To improve the LLM's decision-making ability, our method introduces a deliberate and principled integration of LLMs as both prior policy (actor) and action evaluation (critic). Rather than simply prioritizing one over the other as in prior work, our approach synergistically combines the two, achieving a balanced framework optimized explicitly for sequential decision-making scenarios. > `Q3: LATS's performance in other benchmarks.` **A:** LATS cannot be directly applied to the other benchmarks we used for two main reasons. (1) LATS requires the ability to revert the agent to earlier states in the environment, which ALFWorld and BabyAI-Text do not support. LATS relies on _model-free_ MCTS, using the environment simulator as a world model and reverting it to earlier states during tree search. This limitation is also noted in their original paper (Page 9).
(2) While it might be possible to modify these environments to make them reversible, it would create an unfair comparison. Our method and other baselines do not rely on simulators during reasoning in ALFWorld and BabyAI-Text, whereas LATS would gain an advantage from this modification. Nevertheless, we still attempted to adapt LATS for ALFWorld by using LLMs as world models, similar to our method, for a fair comparison. The results, presented in the table below, show that LATS fails in almost all tasks. This is because its tree search depends heavily on the environment simulator for precise state transitions. With only LLM-based world models, the state transitions often deviate from the actual environments, due to LLMs' inherent hallucinations and the partial observability of ALFWorld. **Table 1:** Performance comparison of LAC (Ours) and LATS in ALFWorld. |Methods / Models|CodeLlama-7B|Gemma-7B|Llama-3-8B|Mistral-7B| |-|:-:|:-:|:-:|:-:| |LAC (Ours)|0.79|0.84|0.78|0.79| |LATS|0.00|0.00|0.03|0.00| Thanks for your efforts and insightful comments! We hope our clarification addresses your concerns, and we would sincerely appreciate it if you could re-evaluate our work. Any further feedback is much appreciated. --- Rebuttal Comment 1.1: Comment: Thanks for the explanation based on a case study. After the rebuttal, we still argue that the proposed approach mostly combines existing general actor-critic ideas, leaving novelty concerns. Therefore, I will keep my score. Thanks for the authors' efforts. --- Reply to Comment 1.1.1: Comment: > `Q4: After the rebuttal, we still argue that the proposed approach mostly combines existing general actor-critic ideas, leaving novelty concerns. Therefore, I will keep my score. Thanks for the authors' efforts.` **A:** We appreciate the reviewer's timely feedback and for acknowledging our experimental explanation.
To address the reviewer’s concern regarding novelty, we would like to clarify further why our method represents a substantial advancement beyond simply combining existing actor-critic ideas. We also would like to emphasize that while our method is inspired by the general actor-critic paradigm, it introduces key innovations specifically tailored to LLM-based decision-making, which, to our best knowledge, have not been explored in prior actor-critic work: 1. **Q-function estimation without explicit rewards** (Sec. 4.1): Unlike traditional actor-critic methods that typically rely on explicit predefined reward signals, our approach formulates a Q-function that **leverages LLMs' internal success and failure probabilities** to estimate action values. This strategy enables the critic to estimate action values effectively in scenarios where external rewards are sparse or entirely absent. Such scenarios are prevalent in real-world decision-making tasks involving language models, clearly distinguishing our contribution from classical actor-critic methods. 2. **Gradient-free policy optimization** (Sec. 4.2): Instead of updating the policy via gradient-based learning, we derive **a closed-form solution for KL-constrained optimization**. This gradient-free approach circumvents the complexities associated with differentiating through LLM-generated actions, providing an efficient and effective alternative explicitly tailored for LLM-based policies. To further strengthen our claim of novelty, we will clearly articulate these distinctions in the revised manuscript, emphasizing how these specific innovations directly address practical limitations in existing actor-critic frameworks when applied to large language models. Additionally, we will reinforce our claims by clearly referencing the empirical results, which demonstrate the practical effectiveness of our approach on challenging decision-making benchmarks. 
We thank the reviewer again for highlighting this crucial point, as it has helped us sharpen and better communicate the unique contributions of our work. We hope this further clarification helps demonstrate that our approach is not merely an adaptation of existing actor-critic ideas but introduces novel and necessary modifications to make actor-critic viable for LLM-based decision-making.
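The gradient-free policy improvement described in point 2 of the reply above can be illustrated with a small numerical sketch. This is our hypothetical rendering of a standard KL-regularized closed form (new policy proportional to the prior times exp(Q/alpha)), not the paper's exact Eqs. 5-6:

```python
import numpy as np

def improve_policy(prior_probs, q_values, alpha=1.0):
    """Gradient-free policy improvement over a small candidate action set.

    Sketch of a standard closed form for KL-constrained improvement: the
    objective  max_pi E_pi[Q] - alpha * KL(pi || prior)  is maximized by
    pi'(a) proportional to prior(a) * exp(Q(a) / alpha), so the LLM's action
    distribution is reweighted by the critic's values and renormalized;
    no gradient step through the model is required.
    """
    prior = np.asarray(prior_probs, dtype=float)
    q = np.asarray(q_values, dtype=float)
    w = prior * np.exp((q - q.max()) / alpha)  # shift Q for numerical stability
    return w / w.sum()
```

With a small `alpha` the critic dominates; with a large `alpha` the update stays close to the prior policy, matching the actor-critic balance the rebuttal describes.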
Summary: The paper proposes LAC, a framework for improving the decision-making capabilities of LLMs by integrating a base LLM with action evaluations derived from token logits and trajectory rollouts. The authors conduct experiments across the ALFWorld, BabyAI-Text, and WebShop benchmarks and show the superiority of LAC over baselines. ## update after rebuttal Thanks for running the additional evaluation on Crafter. Based on the new results, it seems there is overlap between the Naive baseline and LAC. Overall, I find the idea interesting, but my main concern remains about LAC's effectiveness on more complex tasks. Therefore, I will keep my current score. Claims And Evidence: The paper's claims about LAC improving LLM decision-making are generally supported by the experimental results across the ALFWorld, BabyAI-Text, and WebShop benchmarks. The experiments show consistent performance improvements over baselines. The ablation studies adequately demonstrate that each component contributes to the overall performance. However, the significance of these improvements is somewhat limited by the simplicity of the chosen benchmarks, which may not sufficiently challenge newer LLMs. Methods And Evaluation Criteria: - The evaluation criteria and benchmarks (ALFWorld, BabyAI, WebShop) are now too simple for the latest LLMs. Testing on more challenging environments like WebArena [1] or BALROG [2] would provide a more compelling evaluation. - Not a concern, but it would be interesting to see how the method performs with reasoning LLMs like DeepSeek-R1 Distill Llama or Qwen, which might also be suitable for the policy network (π_LLM) [1]: Zhou, S., Xu, F. F., Zhu, H., Zhou, X., Lo, R., Sridhar, A., ... & Neubig, G. WebArena: A Realistic Web Environment for Building Autonomous Agents. In The Twelfth International Conference on Learning Representations. [2]: Paglieri, D., Cupiał, B., Coward, S., Piterbarg, U., Wolczyk, M., Khan, A., ... & Rocktäschel, T. (2024).
Balrog: Benchmarking agentic llm and vlm reasoning on games. arXiv preprint arXiv:2411.13543. Theoretical Claims: yes - appendix B.1 Experimental Designs Or Analyses: - The experiments are generally sound with appropriate comparisons to relevant baselines and comprehensive ablation studies. - The computational analysis is also useful Supplementary Material: yes - appendix and a high-level overview of the code Relation To Broader Scientific Literature: Improving the decision-making capabilities of LLMs is a very relevant problem statement in today's times, and this paper aims to improve the capabilities of LLMs in order to become better agents Essential References Not Discussed: In related works, it would be nice to cite works like: Koh, J. Y., McAleer, S., Fried, D., & Salakhutdinov, R. (2024). Tree search for language model agents. arXiv preprint arXiv:2407.01476. Other Strengths And Weaknesses: ## Strengths: - Well-written and easy to follow with clear explanations - Comprehensive experiments and ablation studies that isolate the impact of each component ## Weaknesses: - [Minor] The approach mostly combines existing ideas rather than introducing fundamentally new concepts. Although I personally feel a lot of agentic LLM applications are more about implementation and orchestration and less about research novelty - so this is a minor concern of mine. - Evaluation on relatively simple benchmarks that don't sufficiently challenge newer LLMs - as I said above, I would like to see results on either WebArena or BALROG or another challenging benchmark - Lack of discussion on failure cases - Since agentic LLMs are already seeing rapid adoption in applied use-cases, it would be nice to have a small section on the failure cases of using Q_llm (are there any specific kinds of tasks/trajectories where Q_llm doesn't generalize?) Other Comments Or Suggestions: Line 135 column 2: typo: "few-show" -> "few-shot" Questions For Authors: 1. see "Other Strengths And Weaknesses" 2.
Curious if LAC can also be extended to multimodal LLMs for decision-making in multimodal benchmarks like VisualWebArena [1]? [1]: Koh, J. Y., Lo, R., Jang, L., Duvvur, V., Lim, M. C., Huang, P. Y., ... & Fried, D. VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks. In ICLR 2024 Workshop on Large Language Model (LLM) Agents. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thanks for your comments and valuable suggestions. Here we provide detailed explanations to address your questions. > `Q1: Lack of discussion on failure cases. It would be nice to have a small section on the failure cases of using Q_llm (are there any specific kinds of tasks/trajectories where Q_llm doesn't generalize?)` **A:** We have provided illustrative examples from ALFWorld and BabyAI-Text in Figures 10 and 11, respectively. In these cases, the critic ($Q_{LLM}$) alone may occasionally fail to select the correct actions. This is because the estimation of Q-values relies on the LLM's ability to predict future trajectories. When the LLM experiences significant hallucinations, the Q-values can deviate, leading to suboptimal action selection. For example, in Figure 10, the agent needs to find and take a "saltshaker". In step 2, the critic suggests "go to drawer 1", because it incorrectly predicts that the saltshaker is there, resulting in a suboptimal action. In contrast, the prior policy suggests a more systematic search, starting with "cabinet 1" and then moving to "cabinet 2", where the target saltshaker is actually located. Similarly, there are failure cases where incorrect action evaluations of the critic lead to unsuccessful task completion. For instance, in the task "find two soapbars and put them in the cabinet", the critic mistakenly identifies a "soapbottle" as a "soapbar", causing the agent to take the wrong object and fail the task. In summary, the critic may struggle to generalize in cases where the LLM suffers from significant hallucinations, such as mis-predicting future trajectories due to partial observability or incorrectly identifying target objects due to the base LLM's inherent hallucinations. Thank you for the suggestion, and we will expand the discussion on failure cases in the revised version. > `Q2: In related works, it would be nice to cite works like: Tree search for language model agents.
arXiv preprint arXiv:2407.01476.` **A:** Thank you for pointing out this relevant work. We will include this work in the revised version of our paper. > `Q3: [Minor] The approach mostly combines existing ideas rather than introducing fundamentally new concepts. This is a minor concern of mine.` **A:** While our method draws inspiration from the actor-critic algorithm in classic RL, it is distinguished by two key contributions that address significant challenges in enhancing LLMs' decision-making capabilities: (1) We propose a novel Q-function estimation approach (Sec. 4.1) to extract action evaluation information from LLMs that leverages LLMs' internal belief about success or failure of the current task; (2) We formulate the policy improvement problem as a KL-divergence constrained optimization and derive a closed-form solution (Sec. 4.2), allowing us to optimize the policy in a gradient-free manner using the action evaluation. > `Q4: How does the method perform with reasoning-LLMs like DeepSeek-R1 Distill Llama or Qwen?` **A:** We have conducted preliminary experiments with reasoning LLMs like DeepSeek-R1-Distill-Qwen-7B. However, we observed that they often tend to overthink rather than output direct environmental actions in both our method and the baseline ReAct. For instance, even when we explicitly prompt the reasoning LLMs to output actions (e.g., "Please make sure your response is in the following format:\n> {The action you choose}"), the models still generated detailed explanations but avoided selecting the next action. A typical response might be: "I need to find a key to open the safe or locate the pencil in the drawers. Since I can’t (…), I’m unable (…), I must (…)". This issue has also been noted in prior work [1]. We believe that using reasoning LLMs for decision-making tasks requires deeper exploration to balance internal reasoning with effective environmental interaction. 
> `Q5: Can LAC be applied to more challenging environments (WebArena or BALROG), and multimodal decision-making tasks (VisualWebArena)?` **A:** Thank you for the suggestion. Due to the limited time and computing resources available during the rebuttal period, we were unable to conduct experiments on these benchmarks, which generally require larger base LLMs for better performance. Given that our method has already demonstrated effectiveness in various complex sequential decision-making tasks (e.g., ALFWorld, BabyAI-Text and Webshop), we believe it can be extended to these benchmarks as well. We will explore this direction in future work. Thanks again for your efforts and insightful comments! We hope our clarification addresses your concerns. Any further feedback and discussions are much appreciated. --- [1] Cuadron, Alejandro, et al. "The Danger of Overthinking: Examining the Reasoning-Action Dilemma in Agentic Tasks." arXiv preprint arXiv:2502.08235 (2025). --- Rebuttal Comment 1.1: Comment: Thank you authors for addressing my concerns. However, I still believe WebShop is a significantly simpler environment compared to WebArena or BALROG. Additionally, I disagree that larger base LLMs are strictly necessary for demonstrating your method’s effectiveness. BALROG, for instance, already benchmarks models such as Llama 3B, 7B and 11B, which indicates feasibility for evaluating smaller-scale models. A valuable demonstration would involve comparing the performance of your method using a smaller model, such as Llama 3B, or 7B, against the baseline without your proposed approach. Consequently, I maintain my original evaluation score. --- Reply to Comment 1.1.1: Comment: > `Q6: Evaluating LAC on more complex benchmarks like BALROG.` **A:** Thank you for the timely feedback and the constructive suggestion. Our paper has already included experiments on BabyAI-Text, one of the benchmarks from BALROG. 
To address your concern regarding environmental complexity, we conducted preliminary experiments on Crafter, another benchmark from BALROG. Crafter is a 2D survival game specifically designed to test long-horizon reasoning and sequential decision-making, with tasks involving resource gathering, crafting, and combat. It represents a significantly more complex setting than ALFWorld and WebShop. Due to time and resource constraints during the rebuttal phase, we evaluated our method (LAC) on this benchmark using Llama-3.1-8B-it, following BALROG's official evaluation protocol. We compare LAC with several representative baselines from BALROG's GitHub repository. The preliminary results are summarized in the following table: **Table 1:** Performance comparison of LAC (Ours) and other baselines in Crafter (from BALROG). | Methods / Models | Llama-3.1-8B-it | | :------------------------------------------ | :---------------: | | **LAC** (Ours) | **25.91% ± 1.93%** | | **Naive** (direct action generation) | 20.45% ± 4.26% | | **Robust Naive** (formatted actions) | 4.55% ± 1.57% | | **CoT** (ReAct, reason then act) | 18.64% ± 3.24% | | **Robust CoT** (reason + formatted actions) | 15.46% ± 3.59% | | **Few-Shot** (in-context examples) | 12.73% ± 1.25% | As shown in the table above, our method achieves higher performance than other available baselines under identical evaluation settings. These preliminary results provide further evidence of the robustness, effectiveness, and adaptability of our proposed actor-critic approach (LAC), particularly in significantly more challenging and complex decision-making environments. It is worth noting that CoT performs worse than the baseline **Naive** on the Crafter benchmark. After discussing it with the authors of the BALROG paper, we hypothesize that it is due to the model's inconsistency within their chain-of-thoughts. 
Two contiguous chains of thought might lead the model to take actions that push toward different goals, which is not ideal. The authors also note that this is a problem especially with smaller, weaker models. We appreciate your suggestion and will include these results in the revised version.
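The critic-side evaluation described in these rebuttals (predicting possible future trajectories before scoring an action) can be sketched abstractly. All names below are stand-ins we introduce for illustration: `rollout_fn` and `score_fn` represent LLM calls (imagine a future trajectory; report the model's belief that it ends in success), and the plain-average aggregation may differ from the paper's estimator:

```python
def evaluate_action(history, action, rollout_fn, score_fn, n_rollouts=3):
    """Value an action by scoring a few imagined futures (hypothetical sketch).

    rollout_fn(history, action, i) -> an imagined future trajectory;
    score_fn(trajectory) -> belief in [0, 1] that the trajectory succeeds.
    The action value is the mean success belief over n_rollouts samples.
    """
    scores = [score_fn(rollout_fn(history, action, i)) for i in range(n_rollouts)]
    return sum(scores) / len(scores)

def best_action(history, candidates, rollout_fn, score_fn):
    """Pick the candidate action with the highest estimated value."""
    return max(candidates,
               key=lambda a: evaluate_action(history, a, rollout_fn, score_fn))
```

Separating the rollout and scoring functions mirrors the rebuttals' point that rollout quality (i.e., the LLM's hallucinations under partial observability) bounds the critic's accuracy.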
Summary: To enable planning with a next-token-generation autoregressive model, this paper proposes to evaluate each action with a critic. Rather than directly self-judging actions, the critic ranks them based on the output logits associated with the likelihood of the predicted actions being good or bad. The policy is then updated by selecting the action with the highest logit value. This approach is tested on several tasks, including ALFWorld, BabyAI, and WebShop. Across all models, it consistently demonstrates dominant performance over other baselines. Claims And Evidence: As claimed, the paper presents a method to leverage LLMs' prior knowledge for action evaluation by using prediction logits to classify actions as good or bad. Long-term planning is achieved by annotating actions as good or bad based on their effects on the final goal, as described in Appendix C.3. However, the process of gradient-free policy updates remains unclear. In Equation 5, the new policy is obtained by multiplying exponentials of Q-values, but several questions arise: (1) How many actions are evaluated per state? (2) Is the policy updated after each action, or in batch updates? (3) How are policy parameters updated, particularly in the case of an infinite action space? Methods And Evaluation Criteria: The proposed method effectively enables long-term planning by evaluating actions and updating policies with fewer steps while achieving enhanced performance. Evaluation is conducted on several control tasks, measuring success rate, rewards, and computational requirements across various base models. Additionally, ablation studies are also thoroughly performed. Theoretical Claims: No, I did not check the proof. Experimental Designs Or Analyses: For the experimental results shown in Figure 2, the proposed method, LAC, is finetuned, while other baselines are not. As shown in Figure 9, the performance of ReAct can indeed be improved with further finetuning. The comparison is unfair.
Is it possible to report performance without finetuning for comparison and show in a separate figure that the performance can be further enhanced? Supplementary Material: I checked implementation details and extra experimental results, which are clearly stated. Relation To Broader Scientific Literature: The paper is well fitted into the literature with enough baselines chosen. Essential References Not Discussed: Not to my knowledge. Other Strengths And Weaknesses: See other sections. Other Comments Or Suggestions: See other sections. Questions For Authors: 1. How can a proportional policy be updated without gradients? 2. What does maximizing Q mean if it does not represent cumulative rewards? Why optimize Q-values instead of maximizing the prediction probability? 3. In Equation 4, what is the distribution of the history h? 4. What task is depicted in Figure 5? Why does LAC require fewer steps? Code Of Conduct: Affirmed. Overall Recommendation: 3
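The review above describes a critic that ranks actions using output logits associated with "good"/"bad" predictions. As a rough illustration only, a sketch of such a scorer could look like the following; the function name and the exact logit-to-score mapping are assumptions, not the paper's implementation:

```python
import math

def q_from_logits(logit_good: float, logit_bad: float) -> float:
    """Hypothetical critic score: squash the good-vs-bad logit gap
    through a sigmoid, so that a higher score corresponds to a higher
    predicted chance that the action helps reach the final goal."""
    return 1.0 / (1.0 + math.exp(-(logit_good - logit_bad)))
```

An action whose "good" logit clearly dominates scores near 1, equal logits yield 0.5, and a dominating "bad" logit scores near 0, so ranking candidates by this value uses both the success- and failure-related signals in the logits.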
Rebuttal 1: Rebuttal: Thanks for your comments and valuable suggestions. Here we provide detailed explanations and experimental results to address your questions. > `Q1: Questions regarding gradient-free policy updates: (1) How many actions are evaluated per state? (2) Is the policy updated after each action, or in batch updates? (3) How are policy parameters updated, particularly in the case of an infinite action space?` **A:** (1) The number of candidate actions is a hyperparameter, set to 5 in our experiments; that is, we evaluate the top 5 candidate actions per state. Empirically, evaluating the top 3-5 actions is generally sufficient for the benchmarks used, as these actions are sampled by leveraging LLMs' prior knowledge. In most cases, effective actions are included within this set. (2) The policy is improved by reweighting with Eq.5 after evaluating all sampled candidate actions for the current state. (3) We do not update policy _parameters_ directly. Instead, we derive a new policy from the original one using a closed-form solution (Eq.5-6) of the policy improvement objective (Eq.4). For infinite action spaces, we evaluate the policy by assessing a set of top candidate actions, similar to how we handle finite action spaces. > `Q2: Report performance without finetuning for comparison and show in a separate figure that the performance can be further enhanced.` **A:** Thank you for the suggestion. We will include comparisons without finetuning in the revised version of the paper. The results, presented in the table below, show that our method outperforms the baseline both with and without finetuning, and the performance can be further improved with finetuning. With finetuning, our method provides more consistent performance across different LLMs. **Table 1:** Ablation studies on the impact of finetuning.
| Methods / Models | CodeLlama-7B | Gemma-7B | Llama-3-8B | Mistral-7B |
| ---------------------- | :----------: | :------: | :--------: | :--------: |
| LAC (w/o finetuning) | 0.39 | 0.59 | 0.71 | 0.57 |
| ReAct (w/o finetuning) | 0.20 | 0.54 | 0.31 | 0.34 |
| LAC (w/ finetuning) | 0.79 | 0.84 | 0.78 | 0.79 |
| ReAct (w/ finetuning) | 0.38 | 0.70 | 0.73 | 0.65 |

> `Q3: How can a proportional policy be updated without gradients?`

**A:** We update the distribution by adjusting the probabilities of the top candidate actions, while leaving the probabilities of other actions unchanged. Specifically, we formulate the policy improvement objective as a KL-constrained optimization problem (Eq.4). We then derive a closed-form solution (Eq.5) to this problem, which is a weighted combination of the original policy and the action evaluation values.

> `Q4: What does maximizing Q mean if it does not represent cumulative rewards? Why optimize Q-values instead of maximizing the prediction probability?`

**A:** In our method, maximizing the Q-function corresponds to maximizing the success probability for the current task. The benchmarks simulate realistic scenarios where there is no immediate reward during execution, and the environment only provides feedback on task completion (success or failure) at the end of each episode. To model this, we design the Q-function to be positively correlated with success probability using a Sigmoid function (Eq.1), ensuring that maximizing Q effectively maximizes the success probability. While directly maximizing the success probability is also valid and could lead to a different Q-function formulation, we compared our approach against it in Appendix A.2. Our method outperformed it in most tasks and models. We speculate this is because our method leverages more internal information from LLMs, utilizing both success and failure probabilities (as derived in Eq.2), resulting in more accurate and stable action evaluations.
While there could be other formulations that use additional internal information, our approach remains both simple and effective. > `Q5: In Equation 4, what is the distribution of the history h?` **A:** The history $h_t$ in Eq.4 is not a distribution, but rather a given context for the policy and critic. > `Q6: What task is depicted in Figure 5? Why does LAC require fewer steps?` **A:** The task depicted in Figure 5 is from ALFWorld. The metric "Steps per task" is calculated as an average over both successful and failed tasks. LAC requires fewer steps due to its higher success rate, enabling it to complete tasks within the maximum step limit, while other baselines often reach this limit without completing the tasks. If we only consider successful tasks, the step cost is similar across methods: Ours: 15.32 steps, ReAct: 17.75 steps, and RAP: 16.36 steps. Thanks again for your efforts and insightful comments! We hope our clarification addresses your concerns. Any further feedback and discussions are much appreciated. --- Rebuttal Comment 1.1: Comment: By sampling only a few top candidate actions, the method does not compute the closed form of the new policy; there remains a nonzero probability of missing the true argmax action. This approach also differs from the actor-critic framework, which updates policies iteratively. The paper lacks sufficient discussion of these weaknesses. Nevertheless, the closed-form expression offers a convenient mechanism for reweighting action probabilities using the judgments produced by the LLM, potentially leading to improved performance on shown tasks. Given these considerations, I will maintain my current score. --- Reply to Comment 1.1.1: Comment: > `Q7: By sampling only a few top candidate actions, the method does not compute the closed form of the new policy; there remains a nonzero probability of missing the true argmax action. This approach also differs from the actor-critic framework, which updates policies iteratively. 
The paper lacks sufficient discussion of these weaknesses.` **A:** We thank the reviewer for highlighting an important aspect of our approach, which we recognize deserves further discussion and clarification. **(1) Regarding the approximation by sampling a few top candidate actions:** We agree that sampling top candidate actions introduces a nonzero probability of missing the true argmax action, especially when distinctions among candidate actions are subtle. However, this choice is primarily driven by **computational practicality**: explicitly computing or evaluating the full action distribution for large or open-ended action spaces common in LLM-based decision-making is typically intractable. Empirically, we find that generating a small subset of candidate actions from a strong LLM prior is often sufficient to include promising actions, thus making the trade-off between computational efficiency and accuracy acceptable. **(2) Regarding the comparison to classic iterative actor-critic frameworks:** The reviewer correctly notes that our approach deviates from traditional iterative gradient-based actor-critic frameworks. However, this deviation—employing a one-shot, gradient-free policy improvement—is an intentional design choice driven by the computational challenges of applying gradient-based optimization to LLM-generated textual actions. We view this as a **strength**, as it provides a practical and efficient alternative specifically tailored for LLM decision-making tasks. Our approach can significantly reduce computational overhead without substantially sacrificing performance. We sincerely thank the reviewer for emphasizing these points, which we believe will improve the clarity, rigor, and practical impact of our manuscript.
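The one-shot, gradient-free policy improvement discussed in this thread reweights the prior probabilities of the sampled top candidates by exponentiated critic scores. A minimal sketch follows; the candidate probabilities, scores, and temperature `alpha` are hypothetical, and the paper's Eq.5 is the authoritative form:

```python
import math

def reweight_policy(prior_probs, q_values, alpha=1.0):
    """Gradient-free improvement over a small candidate set:
    new_pi(a) is proportional to prior(a) * exp(Q(a) / alpha),
    so no parameter of the underlying LLM policy is touched."""
    weights = [p * math.exp(q / alpha) for p, q in zip(prior_probs, q_values)]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical top-5 candidate actions sampled from the LLM prior.
prior = [0.40, 0.25, 0.15, 0.12, 0.08]
q = [0.2, 0.9, 0.1, 0.5, 0.3]  # hypothetical critic scores per candidate
new_pi = reweight_policy(prior, q, alpha=0.5)
```

Here the second candidate, despite a lower prior, receives the highest reweighted probability because of its high critic score; a very large `alpha` recovers the prior, while a small `alpha` approaches greedy selection of the highest-Q candidate.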
Are Large Brainwave Foundation Models Capable Yet ? Insights from Fine-Tuning
Accept (poster)
Summary: The authors performed an analysis of current Brainwave Foundation Models. The results indicate that current LBMs show limited improvement over traditional deep-learning models. The authors further introduced LoRA for the fine-tuning of LBMs. The LoRA fine-tuning technique can substantially reduce the number of trainable parameters and improve training efficiency when applying LBMs to downstream tasks. ## After rebuttal Thank the authors for their responses. I am inclined to keep my score, considering the technical contribution aspect. Claims And Evidence: Yes Methods And Evaluation Criteria: The preprocessing step makes sense, but what's the variation of the accuracy in Table 1? Does the result show any statistical significance? These LBMs can be applied to different tasks by only training the classifier. The parameters for the classifier would be much smaller compared to fine-tuning the whole model. It would be good to show the performance with fixed LBMs but learnable classifiers. This could be helpful to demonstrate the necessity of the LoRA finetuning. Theoretical Claims: N/A Experimental Designs Or Analyses: The authors provide a detailed analysis of the number of ranks, the layer types that apply LoRA, and the dropout parameter. However, the necessity of LoRA itself is not demonstrated, especially when compared with the case where only the classifier is trainable. Supplementary Material: No supplementary material is attached. Relation To Broader Scientific Literature: It is important to study how to apply existing foundational models, such as LaBraM and NeuroGPT, to various downstream tasks. The authors leverage LoRA for fine-tuning, which is an existing technique. Essential References Not Discussed: Additional models for EEG training, such as BIOT [1] and EEGformer [2], can be included in the study. Newer ones, such as CBraMod [3] and NeuroLM [4], are also worth discussing. [1] Chaoqi Yang, M Westover, and Jimeng Sun.
Biot: Biosignal transformer for cross-data learning in the wild. In Advances in Neural Information Processing Systems, volume 36, pp. 78240–78260, 2023. [2] Chaoqi Yang, M Westover, and Jimeng Sun. Biot: Biosignal transformer for cross-data learning in the wild. In Advances in Neural Information Processing Systems, volume 36, pp. 78240–78260, 2023. [3] Wang, Jiquan, et al. "CBraMod: A Criss-Cross Brain Foundation Model for EEG Decoding." arXiv preprint arXiv:2412.07236 (2024). [4] Jiang, Wei-Bang, et al. "NeuroLM: A Universal Multi-task Foundation Model for Bridging the Gap between Language and EEG Signals." arXiv preprint arXiv:2409.00101 (2024). Other Strengths And Weaknesses: The paper is well-structured and easy to understand. However, the necessity of LoRA for current LBMs should be more clearly demonstrated. Also, it would be good to have a comprehensive discussion about the different LBMs. Also, a more technical contribution is expected. Other Comments Or Suggestions: See above. Questions For Authors: Why are the classifier architectures connected with LaBraM and NeuroGPT different? Is there any overlap between the datasets used in pre-training and those used in the downstream tasks? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful review and valuable feedback. As mentioned in our response to Reviewer QMtY, we have decided also to extend our analysis results to capture one more popular BCI paradigm, namely Motor. Therefore, we added a new movement benchmark based on the High Gamma dataset (R. T. Schirrmeister et al., Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11):5391–5420, 2017). Please see our response to Reviewer QMtY for the updated results table. This extends our thorough analysis even further, as mentioned by the current Reviewer bhSh. The performance difference between NeuroGPT (the best foundation model in the new analysis) and EEG-Inception (the best baseline network) is 1.2\% (an improvement compared to the previous 0.5\%) but with a considerable increase in the number of trainable parameters and a larger std among folds. As highlighted by the reviewer, although we demonstrate 10-fold cross-validation performance, the original manuscript does not show whether that small improvement margin of 1.2\% is statistically significant or not. For that reason, we conducted paired t-tests between EEGInception (the best baseline network) and all examined foundation models. The results are presented below, showing clear statistical significance for NeuroGPT in many benchmark tasks: Table: P-values of paired t-tests between EEGInception and finetuned foundation models. Bold values indicate statistically significant results $(p < 0.05)$.
| Models | Motor | ERP | Memory | Sleep | Eyes |
|-----------------------------------------|---------------|---------------|--------------|--------------|--------------|
| EEGInception / LaBraM | 0.5860 | **0.0123** | 0.1090 | 0.2995 | 0.4468 |
| EEGInception / NeuroGPT (full model) | **0.0314** | **0.0401** | **0.0041** | 0.2226 | 0.8979 |
| EEGInception / NeuroGPT (Encoder) | **0.0072** | **0.0051** | **0.0399** | **0.0495** | 0.1056 |

As suggested by the reviewer, we conducted another study: fixed LBMs but learnable classifiers. The results are shown in the table below:

Table 2: Classification accuracy of foundation models where all parameters except classification heads are frozen. Each model was trained for 20 epochs with 10-fold cross-validation. **Bold values indicate the best performance per task or overall.**

| Model | Motor | ERP | Memory | Sleep | Eyes | Mean Accuracy |
|-------------------------|-------------|-------------|--------------|-------------|-------------|---------------|
| LaBraM | 0.297 | **0.884** | **0.670** | **0.608** | 0.717 | 0.635 |
| NeuroGPT (full model) | 0.366 | **0.884** | 0.656 | 0.597 | 0.734 | 0.647 |
| NeuroGPT (encoder) | **0.431** | 0.883 | 0.655 | 0.602 | **0.746** | **0.663** |

From the results above, training just the classification head yields models that lag behind the baselines by a large margin of almost 8-10\% (compared with the results in our response to Reviewer QMtY). This further demonstrates that current brainwave foundation models lack essential elements to fully capture the diverse nature of EEG and to largely outperform current state-of-the-art baselines in various tasks with minimal fine-tuning. This makes full-model fine-tuning necessary and, in turn, makes PEFT techniques like LoRA extremely valuable. The classifiers in both foundation models were designed based on the implementations of the original foundation model papers (LaBraM and Neuro-GPT).
As mentioned in our response to Reviewer 9Has, both foundation models (LaBraM and Neuro-GPT) have in their pre-training datasets paradigms that include motor-, ERP-, sleep- and eyes-related tasks (not the specific datasets chosen for the downstream tasks). Interestingly, in the memory task, which was not explicitly included in the pre-training datasets, baseline models slightly outperform foundation models. We thank the reviewer for the additional references provided. In our studies, we included the two state-of-the-art open-source EEG foundation models of that time. The trained weights of CBraMod and NeuroLM were released close to the ICML submission deadline. Including these models in our analysis is not possible due to the rebuttal's time constraints, but we will definitely acknowledge these recent works in the background section of our revised manuscript. In the camera-ready version of the manuscript, we will incorporate the paired t-test analysis, the frozen foundation models analysis, and the aforementioned discussion points. We sincerely thank the reviewer for their valuable comments, which have helped us to improve our work.
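The paired t-tests reported in these rebuttals compare the per-fold accuracies of two models over the same 10 cross-validation folds. A minimal sketch of the underlying statistic, with hypothetical fold accuracies (in practice the p-values would come from a t-distribution with n-1 degrees of freedom, e.g. via `scipy.stats.ttest_rel`):

```python
import math
from statistics import mean, stdev

def paired_t_statistic(acc_a, acc_b):
    """t-statistic of a paired t-test on per-fold accuracy differences:
    t = mean(d) / (stdev(d) / sqrt(n)), where d_i = a_i - b_i."""
    diffs = [a - b for a, b in zip(acc_a, acc_b)]
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))

# Hypothetical 10-fold accuracies for a foundation model vs. a baseline.
model_a = [0.70, 0.68, 0.72, 0.69, 0.71, 0.70, 0.73, 0.68, 0.71, 0.70]
model_b = [0.66, 0.67, 0.69, 0.66, 0.68, 0.67, 0.70, 0.66, 0.68, 0.67]
t = paired_t_statistic(model_a, model_b)
```

Pairing by fold matters here: it removes the fold-to-fold variance that both models share, which is why a small but consistent per-fold gap can still be statistically significant.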
Summary: In this paper, the authors compare state-of-the-art Large Brainwave Foundation Models (LBMs) with traditional deep learning baselines on multiple EEG-based tasks and find only marginal accuracy gains despite a massive increase in parameters. They then apply Low-Rank Adaptation (LoRA) to substantially reduce trainable parameters without sacrificing performance and show that adapting multiple model components simultaneously (e.g., convolutional, fully connected, and attention layers) yields the greatest benefit. Claims And Evidence: The authors make a strong claim that Large Brainwave Foundation Models (LBMs) offer only a marginal improvement over traditional deep nets in EEG tasks, but they never show whether that small margin is statistically significant. Although they do 10-fold cross-validation, there is no clear significance testing or confidence intervals for the performance numbers they report. It's difficult to conclude if the LBMs are truly better or if the difference is just random variation. Methods And Evaluation Criteria: The comparison and evaluation in this paper are quite standard. However, significance testing was missed, which seriously undercuts the authors' key assertions. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: The comparison in the current version is not valid since there is no significance test; this is especially important when the increase is marginal, e.g., 0.5%. Supplementary Material: No supplementary material in this paper. Relation To Broader Scientific Literature: This paper appears to be the first to systematically compare large-scale, pre-trained EEG foundation models against conventional deep learning baselines in BCI tasks. While earlier studies have applied large-model concepts to EEG or introduced individual LBMs, no prior work has comprehensively benchmarked multiple LBMs and standard architectures side by side.
Essential References Not Discussed: The reviewer is not aware of any missing reference. Other Strengths And Weaknesses: The key problem of this paper is that there is no significance test. With statistical tests added, this paper could be a good contribution to this field. The authors are encouraged to revise the title to specify the EEG modality, since there are also brain foundation models for fMRI. Other Comments Or Suggestions: N/A. Questions For Authors: Could the authors add statistical tests to all the comparisons provided? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful review and valuable feedback. As mentioned in our response to Reviewer QMtY, we have decided also to extend our analysis results to capture one more popular BCI paradigm, namely Motor. Therefore, we added a new movement benchmark based on the High Gamma dataset (Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, and Tonio Ball. Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11):5391–5420, 2017). Please see our response to Reviewer QMtY for the updated results table. This extends our thorough analysis even further, as mentioned by the current Reviewer MQLQ. The performance difference between NeuroGPT (the best foundation model in the new analysis) and EEG-Inception (the best baseline network) is 1.2\% (an improvement compared to the previous 0.5\%) but with a considerable increase in the number of trainable parameters and a larger std among folds. As highlighted by the reviewer, although we demonstrate 10-fold cross-validation performance, the original manuscript does not show whether that small improvement margin of 1.2\% is statistically significant or not. For that reason, we conducted paired t-tests between EEGInception (the best baseline network) and all examined foundation models. The results are presented below, showing clear statistical significance for NeuroGPT in many benchmark tasks: P-values of paired t-tests between EEGInception and finetuned foundation models.
Bold values indicate statistically significant results $(p < 0.05)$.

| Models | Motor | ERP | Memory | Sleep | Eyes |
|-----------------------------------------|---------------|---------------|--------------|--------------|--------------|
| EEGInception / LaBraM | 0.5860 | **0.0123** | 0.1090 | 0.2995 | 0.4468 |
| EEGInception / NeuroGPT (full model) | **0.0314** | **0.0401** | **0.0041** | 0.2226 | 0.8979 |
| EEGInception / NeuroGPT (Encoder) | **0.0072** | **0.0051** | **0.0399** | **0.0495** | 0.1056 |

We agree with the reviewer that "brain foundation models" could also refer to models on fMRI; therefore, we have used the term "Large Brainwave Foundation Models (LBMs)", clearly hinting at the EEG modality. In the camera-ready version of the manuscript, we will incorporate the paired t-test analysis. We sincerely thank the reviewer for their valuable comments, which have helped us to improve our work. --- Rebuttal Comment 1.1: Comment: Thanks for adding significance testing and an additional task; I have increased my rating to weak accept. While the results are helpful for the EEG community to think more about the current state of the art, the paper could benefit from a deeper and more thorough analysis, including why EEG foundation models' performance is limited and how to improve it.
Summary: The paper evaluates the performance of two Large Brainwave Foundation Models, LaBraM and NeuroGPT, by fine-tuning them on multiple EEG-based benchmark tasks. The authors compare these LBMs to well-known deep learning baselines (e.g. EEGNet, EEGInception) and investigate parameter-efficient fine-tuning via Low-Rank Adaptation (LoRA). Their results show that LaBraM slightly outperforms baselines but at a much higher parameter cost, while NeuroGPT lags in performance. They also find that LoRA can significantly reduce trainable parameters in LBMs without harming accuracy. Claims And Evidence: - The paper claims that existing LBMs do not yield substantially better performance than smaller specialized networks, and the authors provide numerical evidence that LaBraM improves average accuracy by only about 0.5% over baselines. - LoRA can retain or even improve performance of LBMs while drastically cutting trainable parameters. Their experiments across rank settings and layer types support this. Methods And Evaluation Criteria: - The authors use standard EEG preprocessing steps including bandpass filtering, notch filtering, resampling that are consistent with typical BCI literature. - Fine-tuning is done with 10-fold subject-independent cross-validation, ensuring robust performance estimates. - The evaluation focuses on classification accuracy across four EEG tasks, which is a standard metric for BCI tasks. - The approach to LoRA is well-detailed, including how the low-rank adapters are integrated into each layer. Theoretical Claims: - The paper focuses on empirical evaluations. Experimental Designs Or Analyses: - The paper uses four benchmark datasets (ERP, working memory, sleep staging, and eyes open/closed). This selection strengthens the generalizability of findings. - Baselines are well-chosen (EEGNet, EEGInception). These are standard architectures for EEG classification. 
- Fine-tuning setups are described in sufficient detail, including the rank hyperparameters for LoRA, dropouts, and parameter counts. Overall, the experiments are well-structured. Supplementary Material: N/A Relation To Broader Scientific Literature: - The paper positions LBMs alongside established large-scale modeling trends in NLP and computer vision, citing GPT-like training (NeuroGPT) and codebook pre-training (LaBraM). - The discussion references prior EEG-based deep learning approaches (EEGNet, EEGInception) and prior BCI tasks. This frames their contributions within a standard BCI pipeline and highlights the difference between specialized vs. foundation models. Essential References Not Discussed: N/A Other Strengths And Weaknesses: Strengths: - Thorough experimental design with multiple downstream tasks. - Clear demonstration of how LoRA can be systematically applied in EEG foundation models. - Useful ablation studies clarifying which layers benefit most from adaptation. Weaknesses: - The performance improvement of LBMs over smaller baselines is minimal, raising questions about practical benefits. Other Comments Or Suggestions: - It may be helpful to elaborate on interpretability aspects of LBMs vs. smaller EEG models, especially given the importance of model explainability in BCI applications. Questions For Authors: - Are there specific EEG paradigms where LBMs offer bigger gains, or is the 0.5% improvement consistent across tasks? - How sensitive are LBMs to the choice of pre-training datasets? Could incorporating additional diverse EEG tasks in pre-training yield larger improvements? Code Of Conduct: Affirmed. Overall Recommendation: 3
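As a rough illustration of why LoRA cuts trainable parameters so sharply (the hidden size and rank below are hypothetical, not the reviewed paper's exact configuration): LoRA freezes a layer's weight matrix W and trains only a low-rank update B @ A.

```python
def lora_trainable_params(d_out: int, d_in: int, r: int) -> int:
    """Trainable parameters of a LoRA adapter on a d_out x d_in layer:
    only B (d_out x r) and A (r x d_in) are learned; W stays frozen."""
    return d_out * r + r * d_in

d = 768                                  # hypothetical hidden size
full = d * d                             # fully fine-tuning one linear layer
lora = lora_trainable_params(d, d, r=8)  # rank-8 adapter on the same layer
reduction = full / lora                  # 48x fewer trainable parameters here
```

Summed over all adapted layers (attention, convolutional, fully connected), this is why single-digit ranks can shrink the trainable footprint by orders of magnitude while leaving the pre-trained weights untouched.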
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful review and valuable feedback. As mentioned in our response to Reviewer QMtY, we have decided also to extend our analysis results to capture one more popular BCI paradigm, namely Motor. Therefore, we added a new movement benchmark based on the High Gamma dataset (Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, and Tonio Ball. Deep learning with convolutional neural networks for eeg decoding and visualization. Human brain mapping, 38(11):5391–5420, 2017). Please see our response to Reviewer QMtY for the updated results Table. This extends our thorough analysis even further as highlighted by the current Reviewer 9Has. As highlighted by the reviewer, explainability is crucial in BCI applications. Therefore, developing interpretable AI models is essential. In recent years, many research papers have developed interpretable EEG models with strong performance. As we head toward large brainwave foundation models, maintaining this requirement remains critical. While Neuro-GPT follows a black-box approach, LaBraM incorporates a neural codebook method, enhancing model interpretability, as discussed in the background section of the manuscript. As shown in the analysis Table in our response to Reviewer QMtY, foundation models slightly outperform baseline models by an average of 1.2\%. However, as the reviewer rightly points out, a closer examination of individual tasks and the pre-training datasets used by these models is worth mentioning. Both foundation models (LaBraM and Neuro-GPT) have in their pre-training datasets paradigms that include motor-, ERP-, sleep- and eyes-related tasks. From the results, we observe that foundation models achieve comparable performance to baseline models in ERP, sleep, and eyes tasks. 
NeuroGPT significantly outperforms baseline models in the motor task. Interestingly, in the memory task, which was not explicitly included in the pre-training datasets, baseline models slightly outperform foundation models. This evidence further strengthens our claim that these models have yet to reach their full potential, particularly in achieving significant generalization across new tasks. By integrating domain-specific knowledge, such as leveraging various EEG modalities, and employing tailored training strategies, like brain-inspired masking techniques, these models could fully capture the diverse nature of EEG and largely outperform all current state-of-the-art baselines in various tasks with minimal required fine-tuning. In the camera-ready version of the manuscript, we will incorporate the aforementioned discussion points. We sincerely thank the reviewer for their valuable comments, which have helped us to improve our work.
Summary: An interesting perspective-style paper comparing state-of-the-art EEG-focused ML models in traditional versus foundation applications. The paper is well written and presents a solid comparison of two methods, resulting in the finding that state-of-the-art LBMs achieve only marginal improvements (0.5%) over traditional deep architectures. The results are an important message to the community: simple transfer of methods between domains does not necessarily improve outcomes. Claims And Evidence: This paper delivers a thorough and insightful comparison of open-source datasets, providing a valuable perspective/review that is highly relevant to the ICML community. The analysis is well-structured and demonstrates a strong understanding of the field. This is a solid contribution that fits the ICML scope perfectly. Methods And Evaluation Criteria: The authors have meticulously selected benchmark BCI and sleep study datasets, reflecting the current state of the art within the EEG community. Furthermore, the rigorous evaluation of both cutting-edge LBMs and established deep learning architectures underscores the paper's contribution to the field. This is a highly valuable contribution, offering important and insightful results. Theoretical Claims: The selection of Low-Rank Adaptation (LoRA) for parameter-efficient fine-tuning (PEFT) of large pretrained models is a particularly insightful technical and theoretical contribution. This decision enables a clear and solid comparative analysis across diverse tasks. This is a very well-executed piece of work. Experimental Designs Or Analyses: The authors have made excellent choices in selecting datasets and comparing models, reflecting current state-of-the-art practices. This paper provides a comprehensive analysis, yielding significant and well-supported results.
Supplementary Material: Given the nature of this review/perspective submission, supplementary materials are not expected, and their absence does not detract from the paper's overall contribution. Relation To Broader Scientific Literature: This is a strong contribution that effectively evaluates the current state-of-the-art in the EEG field, specifically comparing foundation models with classical methodologies. This work has the potential for considerable impact. Essential References Not Discussed: The provided references are comprehensive and relevant to the scope of this work. Other Strengths And Weaknesses: This is a solid contribution with the potential for significant impact within the field. Other Comments Or Suggestions: None. Questions For Authors: The concluding statements are carefully worded and diplomatic. Adding a more pronounced perspective on the future outlook for LBMs in this application could offer valuable insight to readers. Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their thoughtful review and valuable feedback. As the reviewer highlights, we have meticulously selected various benchmark BCI paradigm datasets, reflecting the current state of the art within the EEG community. We also decided to extend these results to capture one more popular BCI paradigm, namely Motor. Therefore, we added a new movement benchmark based on the High Gamma dataset (Robin Tibor Schirrmeister, Jost Tobias Springenberg, Lukas Dominique Josef Fiederer, Martin Glasstetter, Katharina Eggensperger, Michael Tangermann, Frank Hutter, Wolfram Burgard, and Tonio Ball. Deep learning with convolutional neural networks for EEG decoding and visualization. Human Brain Mapping, 38(11):5391–5420, 2017). The updated results are presented below: Table: Classification accuracy of finetuned foundation models and standard deep learning architectures, reported as mean (std). Each model was trained/finetuned for 20 epochs with 10-fold cross-validation. Bold values indicate the best performance per task or overall; italic values indicate the next-best performance.
| Model | Motor | ERP | Memory | Sleep | Eyes | Mean |
|-----------------------|---------------|---------------|--------------|--------------|--------------|--------------|
| EEGNet | 0.657 (0.087) | **0.912** (0.009) | _0.660_ (0.022) | 0.624 (0.037) | 0.803 (0.061) | 0.731 (0.024) |
| EEGInception | 0.590 (0.087) | 0.896 (0.007) | **0.669** (0.021) | _0.688_ (0.057) | 0.823 (0.038) | 0.733 (0.021) |
| LaBraM | 0.614 (0.096) | _0.911_ (0.013) | 0.643 (0.040) | **0.704** (0.025) | _0.840_ (0.041) | _0.742_ (0.023) |
| NeuroGPT (full) | _0.682_ (0.083) | 0.904 (0.012) | 0.610 (0.052) | 0.665 (0.030) | 0.821 (0.052) | 0.736 (0.025) |
| NeuroGPT (encoder) | **0.695** (0.085) | 0.908 (0.012) | 0.634 (0.035) | 0.647 (0.024) | **0.843** (0.045) | **0.745** (0.027) |

The performance difference between NeuroGPT (the best foundation model in the new analysis) and EEG-Inception (the best baseline network) is 1.2% (an improvement over the previous 0.5%), but comes with a considerable increase in the number of trainable parameters and a larger std among folds. In terms of a more pronounced perspective on the future outlook for LBMs, we strongly believe that this extensive study highlights critical considerations for the research community. In the camera-ready version of the manuscript, we could also add that we believe the future of Large Brainwave Foundation Models (LBMs) should go beyond merely adopting transfer techniques from other domains. Instead, they should integrate domain-specific knowledge—such as leveraging various EEG modalities—and employ tailored training strategies, like brain-inspired masking techniques. These are essential elements to fully capture the diverse nature of EEG and build an efficient and effective LBM that will largely outperform all current state-of-the-art baselines in various tasks with minimum required fine-tuning. In the camera-ready version of the manuscript, we will incorporate the aforementioned discussion points.
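For clarity, the mean (std) entries in the table above aggregate accuracies across the 10 cross-validation folds. A minimal sketch of that reporting convention (the per-fold accuracies below are invented for illustration, not taken from the benchmark):

```python
import statistics

def report(fold_accuracies):
    """Format cross-validation fold accuracies as 'mean (std)'."""
    mean = statistics.mean(fold_accuracies)
    std = statistics.stdev(fold_accuracies)  # sample std across folds
    return f"{mean:.3f} ({std:.3f})"

# Hypothetical per-fold accuracies for one model on one task
folds = [0.62, 0.70, 0.66, 0.71, 0.59, 0.73, 0.68, 0.64, 0.75, 0.61]
print(report(folds))  # -> 0.669 (0.054)
```

Note that `statistics.stdev` is the sample standard deviation; a larger value here (as for NeuroGPT in the table) signals less stable performance across folds.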
We sincerely thank the reviewer for their valuable comments, which have helped us to improve our work. --- Rebuttal Comment 1.1: Comment: The reviewer affirms the acceptance decision, acknowledging the authors' successful integration of revisions and explanations.
Mind Your Step (by Step): Chain-of-Thought can Reduce Performance on Tasks where Thinking Makes Humans Worse
Accept (poster)
Summary: The paper investigates the conditions under which CoT prompting, a widely used technique to improve the performance of LLMs/LMMs, can actually reduce model performance. The authors draw inspiration from cognitive psychology, focusing on six tasks where verbal thinking (deliberation) has been shown to impair human performance. They adapt these tasks to evaluate the impact of CoT on state-of-the-art models, finding that CoT significantly reduces performance in three of the six tasks (implicit statistical learning, face recognition, and classification of data with exceptions), while the effects are mixed or negligible in the remaining three (logical inconsistency, spatial intuitions, and working memory). The paper suggests that tasks where verbal thinking harms human performance may also be problematic for models using CoT. Claims And Evidence: The claims made in the paper are generally supported by clear and convincing evidence. The authors conduct extensive experiments across multiple state-of-the-art closed-source models (e.g., GPT-4, Claude, Gemini) and provide detailed results showing significant performance drops when CoT is applied to tasks like implicit statistical learning and face recognition. It is better to provide a deeper exploration of why CoT fails in these specific tasks, as the current analysis is somewhat surface-level. Methods And Evaluation Criteria: The methods and evaluation criteria are appropriate for the problem at hand. The authors adapt well-known psychological tasks to evaluate LLMs/LMMs, scaling them up to modern use cases. The evaluation metrics (accuracy, number of passes, etc.) are standard and well-suited to the tasks. Theoretical Claims: The paper does not make strong theoretical claims. The authors primarily focus on empirical results and the heuristic that connects human cognitive limitations to model performance under CoT. Experimental Designs Or Analyses: The experimental design is sound.
The authors adopt tasks from psychological studies to fit the capabilities of LLMs/LMMs. - The paper does not explore the impact of different CoT prompting strategies in depth, which could provide more insights into why CoT fails in certain tasks. - The tasks where CoT does not harm performance (e.g., logical inconsistency) are not analyzed as thoroughly as the tasks where it does. Supplementary Material: Yes. The supplementary material includes detailed descriptions of the tasks, prompts, and datasets used in the experiments. I also reviewed the additional results, such as per-round accuracy analysis for the classification task with exceptions. Relation To Broader Scientific Literature: The paper connects well with the broader literature on CoT prompting and cognitive psychology. It builds on prior work showing that CoT can improve performance on certain tasks (e.g., symbolic reasoning) but also acknowledges cases where CoT can be detrimental. Essential References Not Discussed: It is better to include a more thorough discussion of prior work on the limitations of CoT prompting. For example, recent studies have shown that CoT can increase harmful outputs or fail in tasks requiring planning (e.g., Kambhampati et al., 2024). Additionally, the paper does not discuss alternative prompting strategies like Tree-of-Thought or Self-Consistency, which have been shown to improve reasoning in some cases. Including these references would provide a more comprehensive view of the current state of CoT research. Other Strengths And Weaknesses: Strengths: - The paper tackles the significant and underexplored issue of when CoT might harm model performance. - The experiments are well-designed and provide clear evidence of CoT's negative impact in certain tasks. Weaknesses: - The paper lacks a deeper theoretical explanation for why CoT fails in certain tasks. Although the proposed heuristic is reasonable, it lacks rigorous testing or validation. 
- The analysis of tasks where CoT does not harm performance is less thorough. It raises some questions about the generalizability of the findings. - It is better to include a broader discussion of alternative prompting strategies and their potential to mitigate the issues identified. - The heuristic connecting human cognitive limitations to model performance is interesting but not rigorously tested. Could the authors provide more evidence or experiments to validate this heuristic? Other Comments Or Suggestions: - The paper is well-written and easy to follow, but it is better to provide a more detailed discussion of the implications of the findings for the design of LLMs and LMMs and their reasoning prompting. - The authors should consider exploring the impact of different CoT prompting strategies (e.g., Tree-of-Thought). Questions For Authors: Please kindly see the weaknesses. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review. **Essential References:** “Include a more thorough discussion of prior work on the limitations of CoT prompting (e.g., Kambhampati et al., 2024).” **Response:** This paper is already cited in our related work (L124-126): “In related settings such as planning, there is little benefit from CoT prompting (Kambhampati et al., 2024)”. Feel free to suggest more references if others are missing. **Weakness 1 & 4:** “The proposed heuristic is reasonable, but lacks rigorous testing or validation.” “Could the authors provide more evidence or experiments to validate this heuristic?” **Response:** We agree with the reviewer that more depth is better. We add **4 additional experiments**. We vary the difficulty of tasks 1–3, and conduct an ablation on temperature. 1. Artificial grammar learning, varying complexity of FSG: We conduct an ablation reducing nodes in the finite state automata that generate the artificial grammars. While the original had 6 nodes, we iteratively reduce to 5, 4, and 3 nodes by merging nodes (see https://imgur.com/a/VCBQYEB). Across all valid FSGs with no unused nodes, we observe the following accuracies: 5 nodes, zero-shot = 0.886, CoT = 0.766 4 nodes, zero-shot = 0.837, CoT = 0.665 3 nodes, zero-shot = 1.000, CoT = 1.000 We see across varying complexity that CoT consistently hurts performance. 2. Facial recognition, varying level of similarity: We conduct an ablation reducing task difficulty: Instead of similar faces, we sample 5 faces with different descriptions. For a visual example, see https://imgur.com/a/1PsqIzd. Across 100 randomly chosen sets, CoT continues to drastically reduce performance. GPT-4o has a zero-shot accuracy of 0.61, but CoT accuracy is only 0.32, corroborating our findings that CoT reduces performance. 3. 
Classifying data with exceptions, binary oracle feature: In this task, the oracle feature was license plates, which mapped to the correct category without exceptions. However, LLMs may find it difficult to build a map from 6-character license plates to a binary class. We conduct an ablation where we change the oracle feature to a binary feature. We replaced “license plate” with “license plate type”, a feature with labels {0, 1}. Other features remained the same. We evaluated GPT-4o with 25 trials for up to 7 passes over the list. Direct prompting took an average of 1.84 passes to get all labels correct, while only 1 / 25 CoT trials achieved perfect classification within 7 passes, so CoT took >250% more passes to learn all labels. 4. For our temperature ablation, please see our response to reviewer 3 (fD9f). **Weakness 2:** “The tasks where CoT does not harm performance are not analyzed as thoroughly as the tasks where it does.” **Response:** We focused on the negative cases because the vast majority of the literature so far has focused on the positive impacts of CoT. Understanding the negative cases can help us identify the settings where CoT is likely to fail in the future. **Weakness 3:** “The paper does not discuss alternative prompting strategies like Tree-of-Thought or Self-Consistency, which have been shown to improve reasoning in some cases.” **Response:** We discuss both Tree-of-Thought (ToT) and self-consistency in the paper, and we also conducted an ablation on ToT for artificial grammar learning. See Discussion, L427-438, “Types of inference-time reasoning”. For artificial grammar learning, ToT improved accuracy on GPT-4o (64.55% vs. 62.52%), but was still far from zero-shot performance (94.00%), providing support that our findings extend across these techniques. We are happy to conduct more analyses before the camera ready if the reviewer believes it is imperative.
**Other Comments:** “it is better to provide a more detailed discussion of the implications of the findings for the design of LLMs and LMMs and their reasoning prompting.” **Response:** Great idea! We add the following to our discussion. Please feel free to suggest changes. **Implications for the design of LLMs, LMMs, and prompts:** In our experiments, we observe that CoT can also perform worse at certain types of tasks. This suggests that models should be flexible in choosing when to use CoT. Towards this, one promising direction is rational metareasoning — when people are faced with a task, they often make trade-offs between increased costs of reasoning and marginal benefits they would attain. We could prompt LLMs to do the same before solving a task. In this direction, De Sabbata et al. (2024) trained LLMs to use intermediate reasoning steps only when necessary by incorporating a term into the reward function that penalizes unnecessary reasoning. Future works may further incorporate types of reasoning failures such as the ones we study into these training objectives. De Sabbata, C. Nicolò, Theodore R. Sumers, and Thomas L. Griffiths. "Rational metareasoning for large language models." arXiv preprint (2024)
Summary: This paper aims to explore the settings where CoT reduces performance from the perspective of cognitive psychology. It focuses on six tasks where extensive thinking affects human performance. For three of these tasks, the authors find that current models experience a performance drop when allowed more reasoning. In the other three tasks, models exhibit both positive and negative effects. This work demonstrates how insights from psychology literature can inspire the evaluation of LLMs and inference-time reasoning. Claims And Evidence: The claims made in the paper are well supported by both psychological analysis and evaluation results. Methods And Evaluation Criteria: The evaluation is the main contribution of this work and is well-performed. The paper also provides a detailed description of the evaluation data curation. Theoretical Claims: No theoretical claims or proofs were discussed in this work. Experimental Designs Or Analyses: The experimental design heuristics are derived from psychology literature, making them a reasonable choice for studying LLM behavior. The experiments are conducted on a variety of models but lack a consistent model size scaling. Supplementary Material: I briefly reviewed the data generation process. The authors provide detailed steps for creating the data. Relation To Broader Scientific Literature: This work provides a fresh perspective on cases where CoT may fail to improve performance. It represents a novel contribution to the community and could inspire further exploration of the underlying reasons and test-time inference to mitigate such issues. Essential References Not Discussed: NA Other Strengths And Weaknesses: **Strengths:** 1. The paper is well-written and easy to follow, even for readers without a background in psychology. 2. It draws inspiration from psychological findings and offers novel insights. The studied topic is of interest to the community. 3. The overall evaluation is comprehensive and sound. 
**Weaknesses:** 1. It is difficult to determine the generalizability of the findings, especially since each task is specifically designed for its respective category. 2. Most models are evaluated with a temperature of 0 during inference. Since the authors also mention advanced prompting methods such as self-consistency, it would be interesting to see whether the findings still hold when sampling multiple times using temperature sampling. Other Comments Or Suggestions: NA Questions For Authors: 1. Is there a correlation between performance drop and model size? The current selection of models appears somewhat arbitrary, making it difficult to observe a clear scaling relationship. Have you conducted any analysis to examine this trend? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging review! **Weakness 1:** “It is difficult to determine the generalizability of the findings, especially since each task is specifically designed for its respective category.” **Response:** **A. Generalizability within categories** The studies we chose are the most classic and well-known studies in the psychology literature for their respective categories. In our experimental design, we strived to create the most representative task with minimal edits to the original study. In addition, we did not cherry-pick results — all the results represent the first iteration of testing said category. **B. Generalizability with respect to variations** To show generalizability across variations of the task, we conduct **3 additional experiments** varying the problems in each of the failure cases. We find that our results are consistent across variations in difficulty, which provides further confidence in the generalizability of the findings: 1. Artificial grammar learning, varying complexity of underlying FSG: We conduct an ablation reducing the nodes in the finite state automata that generate the artificial grammars. While the original had 6 nodes, we iteratively reduce to 5, 4, and 3 nodes by merging nodes (see https://imgur.com/a/VCBQYEB). Across all valid FSGs with no unused nodes, we observe the following accuracies: 5 nodes, zero-shot = 0.886, CoT = 0.766 4 nodes, zero-shot = 0.837, CoT = 0.665 3 nodes, zero-shot = 1.000, CoT = 1.000 We see across varying complexity that CoT consistently hurts performance. 2. Facial recognition, varying level of similarity: We conduct an ablation reducing the difficulty of the task. Instead of similar faces, we sample 5 faces with different descriptions. For a visual representation, see https://imgur.com/a/1PsqIzd. Across 100 randomly selected sets, we find that CoT continues to drastically reduce model performance.
GPT-4o has a direct prompting accuracy of 0.61, but CoT accuracy is only 0.32, corroborating our findings that CoT reduces performance. 3. Classifying data with exceptions, binary oracle feature: In this task, the oracle feature was license plates, which mapped to the correct category without exceptions. However, LLMs may find it difficult to build a map from 6 character license plates to a binary class. We conduct an ablation where we change the oracle feature to a binary feature. We replaced “license plate” with “license plate type”, a feature with labels {0, 1}. Other features remained the same. We evaluated GPT-4o with 25 trials for up to 7 passes over the list. Direct prompting took an average of 1.84 passes to get all labels correct, while only 1 / 25 CoT trials achieved perfect classification within 7 passes, so CoT took >250% more passes to learn all labels. This mirrors our findings that CoT hurts performance in this type of task. **C. Generalizability across categories** Our generative process for the categories of human failure tasks was as follows: 1. Two senior cognitive scientists generated all cases they could come up with in which explicit or verbal thinking impairs human performance on some task. Our list branches cognitive psychology (e.g., task 1), perception (task 2), educational psychology (e.g., task 3), and spatial cognition (task 5). 2. We categorized these under themes and chose tasks that were most representative of each category based on the literature. 3. We then adapted these tasks to an LLM, ensuring that our dataset matches ML standards of scale and LLM / LMM applications, yielding our final 6. Thus, we have reasonable belief that our list of categories is generalizable across the psychology literature. At the same time, we acknowledge that we are limited by the coverage of psychological literature, and thus generalizability across types of tasks is restricted (see Discussion, “Scope of application”). 
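To make the artificial grammar setup discussed above concrete for readers outside psychology, here is a minimal illustrative sketch of generating grammatical strings via a random walk over a finite-state grammar. The transition table is hypothetical (not the actual FSG used in the paper); merging entries in such a table is the node-reduction operation described in the ablation.

```python
import random

# Hypothetical finite-state grammar: each state maps to a list of
# (emitted letter, next state) transitions; None marks the accept state.
FSG = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("V", 2), ("X", 1)],
    3: [("S", None), ("V", 2)],
}

def generate_string(fsg, start=0, rng=random):
    """Emit one grammatical string via a random walk from start to accept."""
    out, state = [], start
    while state is not None:
        letter, state = rng.choice(fsg[state])
        out.append(letter)
    return "".join(out)

random.seed(0)
samples = [generate_string(FSG) for _ in range(5)]
print(samples)
```

Ungrammatical test strings can then be produced by perturbing such samples (e.g., swapping two letters), one common construction in artificial grammar learning experiments.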
**Weakness 2:** “Most models are evaluated with a temperature of 0 during inference. Since the authors also mention advanced prompting methods such as self-consistency, it would be interesting to see whether the findings still hold when sampling multiple times using temperature sampling.” **Response:** This is a great suggestion. We conducted some ablations on different temperatures (t = 0.5, 0.7) across the full 4400 problems for the artificial grammar learning task on GPT-4o. Accuracies were as follows: t = 0, zero-shot = 87.5, CoT = 64.4 (original results for reference) t = 0.5, zero-shot = 88.3, CoT = 63.6 t = 0.7, zero-shot = 87.8, CoT = 63.6 Thus, our results seem to be robust to variations in temperature sampling.
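For context, the temperature parameter in the ablation above rescales the model's output logits before sampling. A minimal sketch of the mechanism, with made-up logits and independent of any particular API:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng=random):
    """Sample an index from softmax(logits / temperature).

    temperature == 0 is treated as greedy (argmax) decoding, matching the
    t = 0 runs above; higher temperatures flatten the distribution.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.1]  # hypothetical token scores
print(sample_with_temperature(logits, 0))  # greedy decoding -> 0
```

At t = 0.5 or 0.7 the same logits yield a stochastic but still argmax-biased distribution, which is why repeated sampling at these temperatures is a meaningful robustness check.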
Summary: This work delves (no, not written by an LLM) into the conditions under which CoT works. Often in ML research, only positive results are presented, and the many things that don't work never see the light of day. The contribution of this work is firstly to make explicit that CoT doesn't always work, and more importantly to test a hypothesis of why/when it doesn't work. As a good starting point, the authors look at the behavioral psychology literature, and shortlist 6 tasks for which human performance drops with (more) thinking. Proxy tasks are then performed by a number of SOTA LLMs/LMMs to investigate if CoT also causes performance drops in these models. ## Update after rebuttal: In view of the additional experiments and the reasonable responses, I upgrade my review from 2 to 3. However, overall while I like the approach and sympathize with the conclusions, I think the piece of work could still be improved significantly. **If there were a 2.5 rating option, I would select that.** Regardless of acceptance/rejection, I think the work may not be entirely convincing to a more traditional ML/CS readership. I urge the authors to consider how their case can be made more convincing, and whether perhaps a different venue (if rejected) or additional venue for follow-on work (if accepted) with a different readership/slant may be better. Claims And Evidence: 1) I like the goal of this work, but ultimately it's an observational / analytical report. While the paper purports to "understand and predict" (Introduction, pg 1 right column top para) when CoT has a negative effect, the paper doesn't really predict in the rigorous or quantitative sense. It would have been much stronger if ultimately a large variety of tasks were performed by the LLMs/LMMs with and without CoT, and some sort of predictive computational model trained.
In other words, what we really want (but the authors didn't do) is that given a new/unseen task, predict accurately whether CoT should be used or not. 2) If only 3 of the 6 tasks had clear negative effects in models (i.e. like humans), then there are very mixed results, and it's not clear whether the work can make any strong claims, other than just reporting these results. 3) On page 8, first para, the authors write: "strongly supporting the hypothesis that our heuristic is better than random at finding failure in CoT"... this seems to be a very weak and uninteresting claim. Methods And Evaluation Criteria: Methods and evaluation criteria were generally sound. Theoretical Claims: Not applicable. Experimental Designs Or Analyses: Experimental designs / analyses were generally sound. Supplementary Material: I did not review the supplementary material. Relation To Broader Scientific Literature: Interesting contribution to investigate deeper the phenomenon of CoT, where many papers and model use it, but never discuss or show the setting / expts in which CoT doesn't help or makes things worse. This is a negative (or unclear) result in the sense that of the 6 tasks tested, only 3 fulfilled the (implicit) expectation or hypothesis that CoT would produce performance reductions, as they did with humans. I mean this as a neutral comment (negative results can be valuable). Essential References Not Discussed: Nil. Other Strengths And Weaknesses: Nil. Other Comments Or Suggestions: Nil. Questions For Authors: Nil. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful review. We are also grateful that the reviewer appreciates our approach of providing our full results rather than cherry-picking cases. **Claims and Evidence 1:** “While the paper purports to "understand and predict" when CoT has a negative effect, the paper doesn't really predict in the rigorous or quantitative sense.” **Response:** The sense in which we are able to predict cases where CoT has negative effects is that we are able to identify risky tasks based on the psychological literature, for which we do find large negative effects from CoT in model performance. We do agree that this is a looser interpretation of “predict” — however, the failure cases that we do find are within structured categories that share psychological explanations, thus allowing us to rigorously predict reductions in performance for altered variations of these tasks. To illustrate this, we provide results from **3 additional experiments**: 1. Artificial grammar learning, varying complexity of underlying FSG 2. Facial recognition, varying level of similarity between faces 3. Classifying data with exceptions, binary oracle feature These additional experiments each have changes that significantly differentiate them from their original counterparts by varying difficulty/complexity. Across all these experiments, we find large decreases in CoT performance compared to zero-shot, demonstrating the within-category robustness of our findings. Please see our response to reviewer 3 (fD9f) for details on each additional experiment. **Claims and Evidence 1, pt 2:** “It would have been much stronger if ultimately a large variety of tasks were performed by the LLMs/LMMs with and without CoT, and some sort of predictive computational model trained. 
In other words, what we really want (but the authors didn't do) is that given a new/unseen task, predict accurately whether CoT should be used or not.” **Response:** We agree that this is a valuable goal for future research. However, it’s also worth mentioning that this is incredibly hard: In decades of overthinking research in psychology, there is still no progress towards a generalizable “overthinking” classifier for humans. Thus, it seems reasonable that building something similar for LLMs would also be very difficult. The current state of literature on CoT largely focuses on tasks developed in the NLP literature such as MMLU. Our paper builds towards the goal the reviewer suggests by exploring a novel set of tasks inspired by the psychology literature, which we find result in a number of large negative effects of CoT. Such cases are uniquely informative for understanding the limits of CoT, and relevant to developing better predictive models. At the same time, we certainly do not claim to have found the predictive computational model that the reviewer proposes. We highlight this in our discussion (“Scope of application”, page 8): “While our psychology-based heuristic offers a strategy for identifying failure cases of CoT, it is unlikely to cover all cases where CoT decreases performance. Existing psychological research has been guided by a variety of theoretical and practical considerations, but does not offer an exhaustive or representative sample of all tasks, and will miss cases that are uniquely interesting to study in models but not humans. Thus, we envision our contribution to be complementary to existing evaluation methods in natural language processing.” **Claims and Evidence 2:** “If only 3 of the 6 tasks had clear negative effects in models (i.e.
like humans), then there are very mixed results, and it's not clear whether the work can make any strong claims, other than just reporting these results.” **Response:** In previous studies of CoT, negative impacts are much rarer than those seen in the six cases we considered. To support the claim that our tasks resulted in a higher failure rate, we conducted a bootstrapping significance test that found that our method of searching for CoT failures is more effective than previous attempts. This includes quantifying by both failure magnitude (estimated p < 0.000001) and failures irrespective of magnitude (estimated p < 0.00011). See Section 4.5 in the paper for details. Thus, we believe that we can make the claim that our method for exploring CoT failure tasks is more efficient than (and also complementary to) previous endeavors. **Claims and Evidence 3:** "strongly supporting the hypothesis that our heuristic is better than random at finding failure in CoT"... this seems to be a very weak and uninteresting claim. **Response:** We agree this claim was not particularly strong in its original statement. We have tried to clarify our claim by replacing this with “our heuristic is much more efficient than past endeavors at finding failures in CoT”, which is also more precise. We welcome further suggestions from the reviewer for how to phrase the takeaways of this section more clearly.
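As an illustration of the kind of bootstrapping significance test described above (the paper's actual test and effect sizes are in its Section 4.5; the numbers below are invented for illustration), one can resample CoT-vs-zero-shot effect sizes from prior literature and ask how often a random draw of six tasks shows a mean drop as large as the observed one:

```python
import random

def bootstrap_p_value(prior_deltas, observed_deltas, n_boot=20_000, rng=None):
    """Estimate P(a random sample of k prior CoT effects has a mean
    effect <= the observed mean) by bootstrap resampling."""
    rng = rng or random.Random(0)
    k = len(observed_deltas)
    observed_mean = sum(observed_deltas) / k
    hits = 0
    for _ in range(n_boot):
        sample = [rng.choice(prior_deltas) for _ in range(k)]
        if sum(sample) / k <= observed_mean:
            hits += 1
    return hits / n_boot

# Invented numbers: prior literature mostly reports CoT gains (positive
# deltas), while heuristic-selected tasks show large drops (negative deltas).
prior = [0.05, 0.02, 0.08, -0.01, 0.03, 0.06, 0.00, 0.04, -0.02, 0.07]
observed = [-0.23, -0.29, -0.12, 0.01, 0.00, 0.02]
p = bootstrap_p_value(prior, observed)
print(f"estimated p = {p:.5f}")
```

With these invented deltas no resample of the prior effects can match the observed mean drop, so the estimated p-value bottoms out at 0 (i.e., p < 1/n_boot), mirroring the very small estimated p-values the rebuttal reports.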
Summary: As Chain-of-thought (CoT) prompting becomes a widely used practice, this paper aims to characterize the limitations of the approach. The authors propose a "heuristic" for determining limitations of CoT by drawing a comparison between CoT prompting and humans engaging in verbal thought. Inspired by psychological literature, six tasks where it's claimed that verbal thought impairs performance were chosen and adapted to evaluate LLMs and LMMs. Experiments show that three of these tasks see a drastic decrease in performance caused by adding CoT to zero-shot prompting, and the effect is more pronounced in stronger models. In other tasks, however, performance was either improved or not affected by CoT. Some bootstrapping results were provided to prove the effectiveness of the task-selection heuristic. Claims And Evidence: The major claim of the paper is the effectiveness of the proposed heuristic: LLMs/LMMs perform worse with CoT in tasks where verbal thinking impairs humans. Authors claim that, although not perfect due to intrinsic differences between humans and models, the heuristic allows for a more efficient identification of tasks unfit for CoT. The claim was supported by choosing 6 tasks guided by the heuristic, evaluating and analyzing effects of CoT, and conducting bootstrapping experiments to prove that the effectiveness of the heuristic is beyond random selection. The claim is fairly intuitive and, given the significant experiment and bootstrapping results, largely convincing. The experiments involved a wide variety of models and produced results with high confidence in some scenarios; there are also experiments corresponding to scenarios where the authors predicted the heuristic would be less convincing; bootstrapping against random selection is considered too. However, I find it hard to be convinced by a qualitative heuristic.
It's not explicitly stated, not to mention theorized, how the 6 tasks were chosen based on the heuristic, and neither is how the heuristic should be applied to a broader task pool. Therefore I don't see how the experiment results conclude the effectiveness of the heuristic. And this is not obvious: for some of the tasks, psychological literature only claims that participants "cannot provide verbal basis" rather than "perform worse when providing verbal basis"; why are such tasks chosen based on the heuristic? Another thing is the brevity of the bootstrapping section. I imagine ablation studies in proving this claim are quite important, but there's no explanation of the setting or illustration of complete results to make this section convincing enough. Methods And Evaluation Criteria: The method proposed is to detect unfit-for-CoT tasks based on whether verbal contemplation impairs human performance in these tasks. Evaluation criteria is a subset of intuitive reasoning tasks in psychology literature, on which experiments were performed to study the effect of CoT. As mentioned before, the "heuristic" proposed is a little confusing as a method: it lacks practicability (i.e., how to determine the extent to which tasks are appropriate for CoT in an arbitrary setting) and informativeness (i.e., what aspects and extents of human underperformance under verbal deliberation signal an unfit-for-CoT task). Evaluation was more focused on performance change of models with respect to CoT addition in separate tasks. Indeed promising results that align with the authors' prediction were provided, but I'd like to see more overall evaluations on whether the given heuristic is providing trustworthy predictions of CoT impairment. Otherwise, this paper would seem more like a case study of discrete tasks that's a little less powerful in supporting the method. I'm aware of the bootstrapping study, but the setting and results are still a little hazy. What do the 378 comparisons consist of?
Are the 6 tasks newly selected or the same as before? If so, why sample multiple times again, as it is already done in the main experiments? Theoretical Claims: I didn't see explicit theoretical claims and/or proofs, except for the comparison between humans underperforming with verbal contemplation and LLMs/LMMs underperforming with CoT. This claim is comprehensive in that multiple defects of the comparison were discussed, and that it's based on solid psychological evidence, but I don't think it's a provable theoretical claim; at least it's not framed to be one. Experimental Designs Or Analyses: The main experiments were conducted with 6 tasks based on the psychological heuristic on models with ranging capabilities. Results for each model in zero-shot prompting and CoT prompting were provided, as well as p-values. Detailed analyses were present both for experiments where the hypothesis held true and for those where it did not. However, some details of the results were inconsistent. In some experiments, o1-preview was used as the CoT version of GPT-4o, while other experiments directly used GPT-4o with CoT; in some tasks, some models were evaluated on subsets of the problems while others were evaluated on the entire problem set; p-values were provided in some studies but not others. Reasons for these disparities were not explicitly stated (or maybe I missed them?) The analyses were a little lacking to me as well. For example, it's claimed that in a certain task “CoT often improved performance, attributable to both the low base performance and the logical reasoning component", but can these be solved as well by few-shot prompting? Does CoT improve performance in few-shot settings as well? Moreover, sound explanations for situations where the performance drop caused by CoT is less pronounced (even negligible) for weaker models were lacking. Supplementary Material: I reviewed all the code provided as supplementary material.
The task generation, evaluation, and API-calling scripts were quite well-organized, but I didn't find the inference code for the open-source models used in the paper, such as Llama 3. Relation To Broader Scientific Literature: Previous studies on the limitations of CoT have focused on computational expense, problems caused by its sensitivity to prompting, and problems in training (i.e., over-fitting, difficulties in evaluation). This paper sheds new light on the intrinsic incompatibility of some tasks with CoT, encouraging a clear boundary to be drawn for applications of CoT. Should the authors solidify the method (i.e., the proposed "heuristic"), I believe this would be a novel line of work to build upon. Essential References Not Discussed: I'm not aware of any. Other Strengths And Weaknesses: The introduction of psychological studies and the parallel drawn between human performance and LLM predictions is brilliant; the authors also moved beyond previous assumptions that CoT's perils arise mainly from sensitivity to prompts, and focused on the tasks themselves. Clarity of writing needs improving, especially in that figures showing experimental results and settings are lacking. The lack of a theoretical framework also seems like a problem. Other Comments Or Suggestions: Please refer to the previous sections. Questions For Authors: I've stated some of my confusions in the 'Claims', 'Method' and 'Experiments' sections; it'd be great if these could be addressed. One of the more important questions is clarification of the setting and results of your bootstrapping experiment. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: **Methods And Evaluation Criteria 2:** “I'd like to see more overall evaluations on whether the given heuristic is providing trustworthy predictions of CoT impairment.” We agree with the reviewer that more depth is better. We add **4 additional experiments**: 1. Artificial grammar learning, varying complexity of underlying FSG 2. Facial recognition, varying level of similarity between faces 3. Classifying data with exceptions, binary oracle feature 4. Ablation on temperature sampling Across all these experiments, we find large decreases in CoT performance compared to zero-shot, demonstrating the within-category robustness of our findings. Please see our response to reviewer 3 (fD9f) for details on each additional experiment. **Claims And Evidence 1, Methods And Evaluation Criteria 1 & 3** “How the 6 tasks were chosen based on the heuristic [...] "cannot provide verbal basis"” “How should the heuristic be applied to a broader task pool” “practicability and informativeness” **Choosing tasks based on the heuristic:** In Section 3, we provide an overview of the verbal thinking literature, including that artificial grammar experiments found that humans “cannot provide verbal basis” for their judgments. However, in section 4.1, paragraph “Human failure”, we specify that “In the artificial grammar learning task, humans prompted to verbalize performed more poorly than those who were not so prompted (Fallshore & Schooler, 1993)”. For each of the other tasks, we also justify the choice in section 4. We recognize that this could be better structured for readers, and we have redrafted section 3 for the camera ready to provide better clarity on why each task was chosen. **How the heuristic can be applied to a broader task pool:** We chose the tasks based on the six most well-studied categories of human verbal thinking failures. 
However, each category does not consist of only one such task, but instead a broad range (e.g., verbal overshadowing does not only apply to facial recognition, but also to phenomena like wine-tasting! (Melcher and Schooler, 1996)). Thus, for other tasks, we could search for stimulus patterns (such as reliance on another modality) that are based in psychological findings, and predict that e.g., due to verbal overshadowing, performance on that task with CoT would also be poor. We do acknowledge that this approach is limited in that it only covers patterns or categories that are studied in the psychology literature. At the same time, it nicely complements existing ML approaches (e.g., Sprague et al., 2024). We highlight this limitation in our discussion ("Scope of application", page 8). **Claims And Evidence 2:** "Brevity of the bootstrapping section. There's no explanation of the setting and illustration of complete results to make this section convincing enough" We agree that this could be more detailed. We have updated the writing to include the following points (we omit the full subsection due to space constraints). For the larger population, we take all evaluations that compare zero-shot and CoT in a recent meta-study, Sprague et al. (2024), for a total of 378. Models evaluated include Llama 2 7b, Mistral 7b, Llama 3.1 8b, Llama 3.1 70b, Gemma 2 9b, Phi-3 Small 8k, Qwen 2 7b, Qwen 2 72b, GPT-4o Mini, GPT-4o, Claude-3 Haiku, Claude-3.5 Sonnet, Gemini 1.5 Flash, and Gemini 1.5 Pro. Tasks evaluated span various domains such as mathematical reasoning (e.g., GSM8k-Hard), commonsense reasoning (e.g., CommonsenseQA), soft reasoning (e.g., AGIEval LSAT AR Soft Reasoning), and various commonly used benchmarks (e.g., MMLU-Pro, Big-Bench Hard). For our experiments, we take all comparisons between zero-shot and CoT in our 6 tasks, for a total of 50. These are exactly all of the comparisons that we list in Tables 1–6 in the main paper. 
For task 3, our main metric was the number of rounds and not accuracy, so we replaced this with the difference in classification accuracy (e.g., y-axis of Figure 5). For each comparison, we take the percentage accuracy decrease (consistent with the Sprague et al. paper) and use this as the value of the datapoint. We then bootstrap 100,000 samples of size 50 from the population and compute the mean percentage accuracy decrease. None of these 100,000 means were lower than the average percentage decrease that we obtained in our experiments. For each comparison, we labeled accuracy decreases from CoT compared to zero-shot. Separately from the previous analysis, we bootstrapped 100,000 samples of size 50 and counted the number of accuracy decreases. Only 11 of the 100,000 samples had more instances of performance decreases than the 50 datapoints in our experiments. **Other Responses** Due to rebuttal length limitations, we were not able to provide all our answers in the box provided. For answers to the remaining questions, we provide the following anonymous link: https://cryptpad.fr/doc/#/2/doc/view/YYLHK9InmafRO6yRL0legsg2tB4IUv6KY-3Whm+ddhw/ --- Rebuttal Comment 1.1: Comment: I thank the authors for their efforts in providing extra experiments and details. The experiment details provided in the external link address my concern over the validity of the experiments pretty well, and the additional tasks seem solid too. However, I still believe that a necessary condition for acceptance is your work's significance and how this significance matters to a larger community. On seeing the title of this paper, I was initially expecting a robust framework for impairment caused by CoT and heuristics based on the framework. You "envision your contribution to be complementary to existing evaluation methods", but at least it's not explicit to me how I can incorporate your results when I conduct such evaluations on reasoning tasks. 
--- Reply to Comment 1.1.1: Comment: Thank you for engaging further with our rebuttal! We are happy to hear that our additional experiments and provided details have resolved concerns regarding validity. **Reviewer Rebuttal Comment:** (aspect 1) "how this significance matters to a larger community [...] I was initially expecting a robust framework for impairment caused by CoT and heuristics based on the framework" **Response:** We agree that something like this would be the final goal of this line of research --- a robust test-time algorithm that can reliably infer when to use reasoning. However, it's also worth mentioning that this is incredibly hard: in decades of overthinking research in psychology, there is still little progress toward a generalizable "overthinking" classifier for humans. Thus, it seems reasonable that building something similar for LLMs would also be very difficult. We believe this paper's significance is that we shed light on a psychology-inspired connection to help explain when CoT failures occur -- leveraging promising findings with significant statistical power. Parallels with human cognitive errors **suggest that these failures we observe are not arbitrary, but instead reflect deeper patterns in reasoning.** We will also adjust the title and framing to better reflect its role as a valuable scientific observation to build upon, rather than a finalized solution. **Reviewer Rebuttal Comment:** (aspect 2) "complementary to existing evaluation methods [...] how I can incorporate your results when I conduct such evaluations on reasoning tasks." **Response:** As the reviewer has stated, the question of when and why CoT failures occur is under-explored. Current literature on CoT has focused only on tasks from the NLP literature, e.g., MMLU. Our paper explores a novel set of tasks inspired by the psychology literature, which resulted in a number of large negative effects from CoT. 
Such cases are **uniquely informative for studying the limits of CoT because of existing psychology literature that explains why these failures happen**, and are thus relevant to developing better predictive models. In addition to providing a valuable foundation for future work, **we will also release our six scaled-up task datasets as a human overthinking benchmark that ML practitioners can use** to evaluate human-like overthinking failures. Thank you very much for your feedback --- it helps us solidify our contribution and frame our position more clearly. Sincerely, Authors of Mind Your Step (by Step)
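The one-sided bootstrap test described in the rebuttal thread above (resampling 100,000 size-50 samples from the 378-comparison population and comparing each sample mean to the observed mean) can be sketched as follows. This is a hypothetical reconstruction; the function name and signature are illustrative and not taken from the authors' code:

```python
import random

def bootstrap_one_sided(population, observed_mean, n_boot=100_000, size=50, seed=0):
    """Fraction of bootstrap sample means at least as low as the observed mean.

    Dividing the hit count by n_boot gives an empirical one-sided p-value.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_boot):
        sample = [rng.choice(population) for _ in range(size)]
        if sum(sample) / size <= observed_mean:
            hits += 1
    return hits / n_boot
```

The second analysis in the rebuttal follows the same pattern, resampling binary decrease/no-decrease labels and counting decreases per sample instead of averaging.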
A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks
Accept (poster)
Summary: This paper tackles the recurrent approach to the large action model, where current robotic policies are de facto implemented as transformers. The paper systematically analyzes xLSTM under various conditions and across different environments, demonstrating the superior performance of xLSTM compared to transformer-based policies. Due to the recurrent characteristics of xLSTM, the Recurrent Large Action Model (RLAM) can efficiently scale up to large contexts without incurring the quadratic complexity associated with increasing tokens. ## update after rebuttal After carefully reading the rebuttal, I have decided to keep my original score. Claims And Evidence: The claims of this work are clear and well-supported. The xLSTM-based RLAM efficiently exploits large contexts, with sufficient ablation studies on each xLSTM block (mLSTM and sLSTM). The authors also conducted latency and throughput studies, comparing xLSTM to transformer-based architectures, demonstrating that the xLSTM-based architecture is more lightweight and efficient to deploy compared to transformers. Methods And Evaluation Criteria: The proposed method focuses on utilizing xLSTM with an existing RL model backbone, evaluating it with various RL benchmarks in offline RL with large transitions. Although the paper does not propose a new architecture, the comprehensive analysis of xLSTM-based RLAM is sufficient to identify trends in efficiency and performance for recurrent-based models compared to transformers through experiments on various benchmarks. Theoretical Claims: I checked the theoretical claims, especially Equation 2, which is based on the Decision Transformer framework. I also reviewed the reasoning behind excluding action tokens, which is well-justified. Experimental Designs Or Analyses: I reviewed the experimental design, and the paper provides sufficient information. However, in Section 4.4 (Inference Time Comparison), the model setup with custom kernels was not fully clear. 
Therefore, my complaint is that I did not entirely understand how the custom kernels were designed and adapted to accelerate inference, as this is not common knowledge for general readers. Supplementary Material: I fully reviewed the supplementary material, including implementation details. The paper provides detailed information about the environment and hyperparameters used for the experiments. Relation To Broader Scientific Literature: The contribution of this paper is not in introducing a novel architecture but in demonstrating the feasibility of recurrent models for large action models. With comprehensive analysis and ablation studies, the paper provides an important milestone for recurrent-based robotic action models. Essential References Not Discussed: Overall, the paper sufficiently discusses related references. Other Strengths And Weaknesses: Strengths 1. The recurrent-based action model is especially important in online, real-world settings, where context length varies dynamically and parallelization is difficult. I think this paper would mark a potential turning point for real-world agent design. 2. The paper thoroughly analyzes different types of recurrent-based large action models, including xLSTM and Mamba, conducting ablation studies to assess performance and efficiency. Weaknesses 1. While the paper argues for the importance of recurrence in real-world settings, all experiments are conducted in offline RL. To fully validate RLAM, real-world experiments would be highly beneficial. 2. The paper extensively analyzes architectural efficiency and performance, but it does not fully explore the advantages of recurrence in long-term memory retention. More experiments in memory-intensive environments could highlight how recurrent backbones differ from transformers in terms of state persistence and recall. Other Comments Or Suggestions: none Questions For Authors: 1. 
The paper adopts 256-bin discretization for continuous actions, following [Reed et al., 2022] and [Brohan et al., 2023b]. Many robotic models (e.g., OpenVLA [Kim et al., 2024]) use this discretization, especially for autoregressive architectures; however, some (e.g., ACT [Zhao et al., 2023]) output continuous actions directly. Are there specific reasons for modeling with discretization rather than direct continuous actions, or was this decision influenced by the Decision Transformer backbone? 2. The paper replaces Decision Transformer’s transformer backbone with xLSTM. Are there alternative architectures beyond Decision Transformer that might better suit recurrent models? Do the authors have insights into what could be an even better design for recurrent large-action models? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you for your constructive feedback. We appreciate your positive assessment and are very glad that you believe that our work may mark a potential turning point for real-world agent design. **Real-World Experiments:** We agree that experiments in real-world robotics settings would be valuable. While we believe that such experiments are out of scope for the current work, evaluating modern recurrent architectures in real-world robotics settings is an exciting direction for future research. Nevertheless, we believe our findings will transfer to real-world scenarios and that the advantages of recurrent architectures may be even more pronounced in real-world settings. For example, we conduct our inference time comparison on a high-end data-center GPU with 40GB of RAM, and the Transformer runs OOM for larger context sizes. In contrast, applications on edge devices in real-world scenarios may have to deal with less powerful accelerators, making the use of modern recurrent architectures attractive. Moreover, their ability to handle long sequences without increasing computational cost/requirements can be particularly beneficial for complex real-world applications, which often exhibit longer-term dependencies. **Memory-intensive Environments:** We again agree that exploring the advantages of modern recurrent architectures for long-term memory retention in more depth would be interesting. While we cannot provide a detailed study within the time-frame of the rebuttal, we believe that this direction is interesting for future work. Note that the experiments on Dark-Room, which exhibits sparse rewards and a partially-observable observation space, go in a related direction (Figures 4 and 16). There we find that recurrent backbones compare favorably to Transformers (especially xLSTM [7:1], which enables state-tracking). In the meantime, we refer to [1] for a comparison of vanilla LSTM to the Transformer in memory-intensive RL environments. 
Furthermore, we refer to Figure 5 in [2] for a comparison of xLSTM and Transformer on associative recall tasks outside the field of RL. **Discretization for Continuous Actions:** The reviewer is correct that we make use of discretization of continuous actions similar to prior works. The 432 tasks we consider exhibit both discrete/continuous action inputs and image/vector-based state representations. The use of discretization in our large action models is motivated by the need to handle both discrete and continuous action spaces. To this end, every action dimension is discretized into 256 uniformly spaced bins. The shared action head is used to predict the action bins of all continuous action dimensions jointly, which allows handling envs with different action spaces. Furthermore, because of the shared action head, we do not (have to) rely on autoregressive prediction of continuous action dimensions, which further speeds up inference for all backbones. At inference time, the number of dimensions of the current environment is known, and we extract the respective dimensions from the joint predictions. We describe this procedure in detail in Appendix B.3. **Alternative Designs:** In this work, our goal is to better understand whether modern recurrent backbones can be alternatives for large action models (LAMs), and we focus our analyses on the DT setup, as correctly identified by the reviewer. DTs rely on return-conditioning, but existing large-scale robotic models instead rely on behavior cloning (e.g., [3]). Therefore, we validate that our findings also transfer to a behavior cloning setting at the 206M scale (Section 4.3 and Appendix E.2). Exploring alternative architectures beyond the DT and BC settings is an exciting direction for future work. We believe that exploring MoEs [4,5] for multi-task settings, which may require specialization, can be a promising approach. 
Moreover, we would like to point the reviewer to an interesting concurrent work that studies the design space of imitation learning policies [6]. Once again, thank you for your helpful comments and positive assessment of our work. [1] “When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment”, NeurIPS, 2023\ [2] “xLSTM: Extended Long Short-Term Memory”, NeurIPS, 2024\ [3] “RT-2: Vision-Language-Action Models Transfer Web Knowledge to Robotic Control”, ArXiv, 2023\ [4] “Mixture of Experts in a Mixture of RL settings”, RLC, 2024\ [5] “Mixtures of Experts Unlock Parameter Scaling for Deep RL”, ICML, 2024\ [6] “X-IL: Exploring the Design Space of Imitation Learning Policies”, ArXiv, 2025 --- Rebuttal Comment 1.1: Comment: Dear Authors, thank you for your detailed response. I believe most of my concerns have now been addressed. --- Reply to Comment 1.1.1: Comment: Thank you for your reply. We are happy to hear that your remaining points have been addressed.
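The 256-bin uniform action discretization described in the rebuttal above can be sketched as follows. This is a minimal illustration assuming action values normalized to [-1, 1]; it is not the authors' implementation (the rebuttal points to Appendix B.3 for the actual procedure):

```python
NUM_BINS = 256

def discretize(a, low=-1.0, high=1.0):
    """Map a continuous action value in [low, high] to a bin index in [0, 255]."""
    a = min(max(a, low), high)                      # clip out-of-range values
    idx = int((a - low) / (high - low) * NUM_BINS)  # uniformly spaced bins
    return min(idx, NUM_BINS - 1)                   # a == high falls into the last bin

def undiscretize(idx, low=-1.0, high=1.0):
    """Map a bin index back to the center of its bin."""
    return low + (idx + 0.5) * (high - low) / NUM_BINS
```

With a shared head predicting one bin per action dimension, an environment with k dimensions simply reads off the first k predictions at inference time, as the rebuttal describes.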
Summary: This paper investigates various architectures, including Transformers, Mamba, and xLSTM, for reinforcement learning. Building on the Decision Transformer framework, it systematically compares these architectures across 432 tasks spanning six datasets. The empirical results highlight xLSTM's advantages in both performance and inference speed, demonstrating its effectiveness as a scalable alternative. Claims And Evidence: The claims in the paper are supported by thorough experiments. Methods And Evaluation Criteria: The proposed method and evaluation make practical contributions to the community. Theoretical Claims: There is no theoretical proof to check in this paper. Experimental Designs Or Analyses: The paper conducts comprehensive experiments and ablation analyses. Supplementary Material: I reviewed the experimental details and additional results. Relation To Broader Scientific Literature: The paper is based on the Decision Transformer [1] and further explores the recent architectures Mamba and xLSTM to improve performance and inference speed. [1] Chen, Lili, et al. "Decision transformer: Reinforcement learning via sequence modeling." *Advances in Neural Information Processing Systems*, 2021. Essential References Not Discussed: The references are enough to understand the paper. Other Strengths And Weaknesses: Strengths: - The paper is well-written and easy to follow. - The paper provides extensive experiments and ablation studies. - The paper compares the latent representations of Transformer and xLSTM, which is informative. Weaknesses: - The paper is based on the Decision Transformer, which has not been used much these days. It would be better to extend the architectures to other reinforcement learning or imitation learning settings. - The paper mainly compares Transformer, Mamba, and xLSTM through empirical evaluations, but does not discuss the reasons. It would be valuable to provide more discussion regarding why xLSTM can outperform Transformer and Mamba. 
- The paper compares the inference time using different context lengths from 50 to 24576. However, there is no indication that the model will perform better with a very long context length. Additionally, most robot learning tasks do not contain such long trajectories. It would be beneficial to discuss scenarios where long contexts offer a significant advantage. Other Comments Or Suggestions: The paper introduces Transformer, Mamba, and xLSTM in the Introduction and Related Work parts. Since these architectures are the main focus, maybe it makes sense to explain their differences in a single paragraph. Questions For Authors: - I am a bit confused regarding the experiment setting. The paper uses 6 different datasets. Do the authors train one model over all the tasks? Additionally, in Section 3.2 “Shared action head”, can the authors explain more about action discretization? - From my understanding, Mamba and xLSTM only run inference faster when there are more than several thousand tokens (I am not absolutely sure). However, in Figure 6, xLSTM is still faster with a small context length. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your helpful feedback and positive assessment of our work. **Imitation Learning:** We agree with the reviewer that studying modern recurrent architectures in settings other than the DT setting is important. In this work, our goal is to better understand whether modern recurrent backbones can be alternatives for large action models (LAMs) and focus our analyses on the DT setup. However, we want to highlight that we already provide experiments in an imitation learning / behavior cloning (BC) setting at the 206M scale, as suggested by the reviewer (Section 4.3 and Appendix E.2). Notably, performance trends across backbones mirror the results in the DT setting (Figures 27, 28), which indicates that our findings generalize beyond the DT framework. Moreover, we agree with the reviewer that studying modern recurrent architectures in RL is interesting. While we believe that such experiments are out-of-scope for this work, we listed online RL with modern recurrent architectures in our future work section (Lines 431-435). **Differences between Backbones:** While we discuss when and why to use which backbone (Section 5), we agree with the reviewer that an extended discussion on the reasons for performance differences between backbones is useful: * We observed that xLSTM outperforms Transformers/Mamba in terms of sequence prediction and env performance (Figure 2). One benefit of xLSTM is that it enables state tracking [1] via sLSTM blocks, which Transformers/Mamba cannot. This property can be useful for partially observable environments and may be a reason for the enhanced ICL performance (Figure 4). Another reason may be the enhanced domain-separation in the embedding space (Figure 5), which could facilitate task identification at inference time. Moreover, our results reflect performance improvements in language settings [2]. * Nevertheless, the differences may depend on the environment. 
For example, Transformers may have advantages for tasks where exact recall is required. For such tasks, self-attention is typically superior (Figure 5 in [2]) and can be important for decision-making tasks [3]. Therefore, the choice for the right backbone may depend on the task at hand. **Long Context:** * Generally, performance improves when training with longer sequences both for DT and xLSTM (Section 4.3). For DT, the avg. normalized score increases from 0.3 with C=1 to 0.7 with C=50 (Figure 23). Similarly, for xLSTM it increases from 0.4 to 0.8 (Figure 25). Furthermore, domains with longer episode lengths, like DMC or Atari, benefit more from increasing context (Figures 24, 26). This is because the history helps to predict the next action (e.g., by observing past mistakes). This highlights that LAMs can strongly benefit from increased context length, even on the simulated environments we consider. * We agree with the reviewer that for our environments, increasing the context to several thousand timesteps may not result in better performance. However, we believe that the ability to handle longer sequences can be beneficial for complex real-world applications that exhibit longer-term dependencies. Similarly, longer context can be beneficial for ICL applications, which benefits from keeping multiple episodes in the context, as indicated in Figure 4. We added a short discussion on long sequences in real-world tasks to our manuscript. **Experiment Setup:** Yes, we train a single model on datasets comprising 432 tasks. The environments contain both discrete/continuous action inputs and image/vector-based state representations. To handle both discrete/continuous actions, we make use of discretization. Every action dimension is discretized into 256 uniformly spaced bins. The shared action head is used to predict the action bins of all continuous dimensions jointly, which allows handling envs with different action spaces. 
At inference time, the number of dimensions of the current environment is known, and we extract the respective dimensions from the joint predictions. We describe this procedure in detail in Appendix B.3. **Inference Speed:** The reviewer is correct that the benefits of modern recurrent architectures become more important at longer sequence lengths. Importantly, the inference speed and memory consumption of xLSTM/Mamba does not change with increasing sequence lengths, which is particularly apparent in memory consumption (Figure 6c). It is true that in our experiments there are some inference time speed-ups at shorter context lengths, which may result from the different kernel implementations. Note that the sequence lengths are measured in timesteps with each timestep containing 3 tokens (largest C is 76K tokens). We revised our manuscript, and hope to have clarified your questions. [1] The Illusion of State in State-Space Models, ICML 2024 \ [2] xLSTM: Extended Long Short-Term Memory, NeurIPS, 2024 \ [3] When Do Transformers Shine in RL? Decoupling Memory from Credit Assignment, NeurIPS, 2023
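The complexity argument above (constant per-step cost for recurrent backbones vs. cost and memory that grow with context for attention) can be illustrated with a toy sketch; the two "cells" below are deliberately trivial stand-ins, not xLSTM or attention implementations:

```python
def recurrent_step(state, x, decay=0.9):
    """Fixed-size state update: each step touches O(1) values, however long the history."""
    return decay * state + (1 - decay) * x

def attention_step(cache, x):
    """Append to the cache and pool over all of it: step t touches O(t) values."""
    cache.append(x)
    return sum(cache) / len(cache)

state, cache = 0.0, []
for x in [1.0, 2.0, 3.0]:
    state = recurrent_step(state, x)  # constant work and memory per step
    out = attention_step(cache, x)    # work and memory grow with the context
```

The growing `cache` is the analogue of a transformer's key-value cache, which is what runs out of memory at large context sizes in the comparison above, while the recurrent `state` stays the same size throughout.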
Summary: The authors propose changing the Decision Transformer (DT) backbone from a transformer to the recently proposed xLSTM. They perform large-scale experimentation and compare to other DT backbones. Claims And Evidence: The authors contribute: - A Large Recurrent Action Model (LRAM) using an xLSTM with favorable performance - Comparisons against other recurrent backbones - Released code and datasets In my opinion, they do what they claim. I will note that the claims are fairly weak. Methods And Evaluation Criteria: The authors evaluate their method by examining: - Scaling capacity compared to other popular LRAMs - Returns across many popular benchmarks (Atari, MuJoCo, etc.) - The embedding space - Latency and resource usage compared to a decision transformer These experiments make sense, given they are comparing DT backbones. Theoretical Claims: The claims made are empirical and cannot be proven. Experimental Designs Or Analyses: The experiments seem well-founded and span a wide range of tasks and model baselines. Supplementary Material: The authors present more experiments, but I did not read too deeply. Relation To Broader Scientific Literature: Recent prior work on DT has focused on changing backbones to Mambas, State Space Models, etc. It is only natural to consider the xLSTM as well. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: The paper is well written and easy to follow. The authors are the first to use an xLSTM within a decision transformer framework, and provide a thorough study of their architecture with a large number of experiments. However, it is a bit disappointing that there is not much novel here besides replacing the DT backbone with another well-known sequence model. As such, there appear to be relatively incremental increases in performance. Other Comments Or Suggestions: There isn't much new here, but the authors provide more than sufficient empirical validation of their method. 
Questions For Authors: The meaning of the embedding space (Figure 5) is not clear to me. What does it mean if two Atari game embeddings are closer than an Atari and a ProcGen embedding? Wouldn't the latter case suggest better generalization (generalizing across platforms, rather than within platforms)? Could you comment on this? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your helpful feedback. We are glad that you consider our experiments well-founded and the paper well written. **Embedding space:** * Regarding your question on the embedding space, we want to clarify that Figure 5 is constructed using the aggregated hidden states (averaged across the sequence) from the final layer of the respective agents and visualized via UMAP (see Appendix F for details). * The purpose of this visualization is to examine how the models organize their representations of different environments. In general, tasks within the same domain tend to share similar input characteristics - such as visual inputs (e.g., image frames), possible actions to perform, and reward structures - and are therefore more likely to be “grouped” together in the embedding space. Consequently, when embeddings of Atari games are closer to each other than to Procgen games, it indicates that Atari games share more similar underlying dynamics or input structures compared to Procgen. For reference, we now also include the embedding space plots for Mamba in our updated manuscript, as suggested by Reviewer 8NAr14. * We observe that xLSTM exhibits a slightly more refined and better-separated embedding space, which may be a reason for its better final performance. In contrast, DT produces embeddings with less clear separation between domains. This suggests that for the environments we consider in this work, it may be beneficial for the model to learn more “separate” representations, potentially because it facilitates task identification at inference time. However, we fully agree with the reviewer that generalization across domains is generally desirable if environments share structural similarities. Therefore, we believe that studying the learned embedding spaces of multi-task agents in environments that share more structure across domains is interesting for future work. We hope to have clarified your remaining question. 
Thank you again for your positive assessment of our work.
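As an aside for readers who want to reproduce this kind of embedding-space analysis, the aggregation step described in the rebuttal (mean-pooling final-layer hidden states over the sequence) can be sketched in a few lines. This is an illustrative sketch only: random arrays stand in for agent hidden states, the task names are hypothetical, and UMAP is replaced by a dependency-free PCA-via-SVD projection.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed_task(hidden_states: np.ndarray) -> np.ndarray:
    """Mean-pool final-layer hidden states over the sequence dimension."""
    return hidden_states.mean(axis=0)

# Hypothetical stand-ins: (seq_len, hidden_dim) hidden states per task.
tasks = {
    "atari_pong": rng.normal(size=(100, 64)),
    "atari_breakout": rng.normal(size=(100, 64)),
    "procgen_coinrun": rng.normal(size=(100, 64)),
}
embeddings = np.stack([embed_task(h) for h in tasks.values()])  # (3, 64)

# The rebuttal uses UMAP for the 2-D visualization; here we substitute
# a plain PCA projection via SVD to keep the sketch dependency-free.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords_2d = centered @ vt[:2].T  # (num_tasks, 2) points to scatter-plot
```

In the actual analysis the 2-D coordinates would come from UMAP rather than PCA; only the mean-pooling step mirrors what the rebuttal describes.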
Summary: The authors introduce a Large Recurrent Action Model (LRAM), which replaces traditional Transformer architectures with xLSTMs to address real-time robotic applications. They demonstrate that the proposed model achieves significant speed improvements without compromising performance. Experimental validation across 432 tasks from six domains confirms LRAM's superiority over Transformers regarding inference speed while maintaining competitive predictive performance. ## Update after rebuttal The authors have partially addressed my questions. However, without any experimental results (e.g., via an anonymous link), the claims cannot be verified by the reviewers. Therefore, I maintain my score. Claims And Evidence: The authors' claims are supported by substantial experimental evidence, demonstrating that xLSTM generally performs at least as well as the Decision Transformer (DT) and often surpasses it. However, the experimental results also show that xLSTM frequently achieves performance similar to the state-space model (SSM) "Mamba." While the authors clearly illustrate xLSTM's speed advantages over DT, the paper lacks explicit evidence comparing inference speed against state-space models, which somewhat limits the strength of claims regarding real-time applicability. Lastly, the authors acknowledge that the primary target application of LAMs is robotics and that they only tested in simulations; they believe that their findings translate to real-world scenarios. Although there is no direct evidence of this translation in the current work, the authors address this limitation in their discussion. Methods And Evaluation Criteria: The proposed methods and evaluation criteria, including benchmark datasets, appear appropriate for the targeted problem. Evaluations of performance scores, inference speed, and the impact of fine-tuning are reasonable and thorough. 
Theoretical Claims: No major theoretical claims requiring validation are presented, as the study primarily emphasizes empirical model performance. Experimental Designs Or Analyses: The experiments and analyses are well-structured, effectively demonstrating the benefits of the xLSTM architecture in inference efficiency and multitask performance. A significant contribution is the thorough investigation into the beneficial effects of xLSTM block ratios, confirming that sLSTMs are advantageous for state-tracking over longer horizons. This aligns with findings in related fields (e.g., time series forecasting, (P-LSTM, Kong et al., ArXiv 2024), (xLSTM-Mixer, Kraus et al., 2024)), broadening the potential impact of this work. However, since the authors highlight dataset compilation as a contribution, evaluating how different data source distributions affect model performance would further enhance the paper's insights into offline RL action model pretraining. Supplementary Material: The supplementary materials were briefly reviewed, primarily to verify dataset details and additional experimental setups. Relation To Broader Scientific Literature: The authors build on existing large action models by addressing the inherent limitations of Transformer-based approaches in real-time robotics. By integrating and extending recurrent networks, particularly through xLSTM architectures, the paper advances prior work that has traditionally relied on Transformers. Moreover, by discussing state-space models (SSMs), the work connects to a broader scientific trend of improving inference speed and scalability in sequence modeling. Essential References Not Discussed: None that I am aware of. Other Strengths And Weaknesses: S1. Clear demonstration of xLSTM’s advantages for potential real-time robotic inference. S2. Robust experimental validation across diverse robotic tasks. S3. Valuable dataset compilation and pipeline. W1.
Although omitting actions from your policy formulation (Equation 2) is deliberate, its current form may confuse readers comparing it directly to Equation 1. Clarifying the differences, possibly through color-coding or multi-line formatting, would enhance readability. W2. As mentioned, omitting Mamba’s performance from the experimental comparisons limits the impact of the work, as readers would benefit from a direct performance comparison between DT, xLSTMs, and Mamba. W3. The model configurations and their particular areas of application are not quite clear to me, as the results in Figure 3 are not properly discussed. Why does [1:0] work so much better on Procgen than [7:1]? Other Comments Or Suggestions: I am really leaning towards a full accept, however in its current form the experiments do not provide the entire picture, as the performance of SSMs is missing. Questions For Authors: Q1: The dataset compilation is highlighted as a notable contribution. Could you explain the rationale behind the specific data ratios presented in section 4.1? Understanding the impact of different ratios on multitask capabilities would be insightful. Q2: Can you provide embedding plots for Mamba alongside xLSTM and DT in Figure 5? Q3: Could you include latency results for Mamba in Figure 6 to clarify the comparative performance advantages of xLSTMs versus other state-space models (Mamba)? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your helpful feedback on our work and your positive assessment. We are glad that you consider it a clear demonstration, robust experimental validation and valuable dataset pipeline. We address your open points in the following. **Policy formulation:** We agree with the reviewer that the differences between our policy formulation that omits actions (Equation 2) and the formulation in Equation 1 can be highlighted more clearly. Many thanks for the suggestion of using color coding. We now introduced color coding and believe it enhances readability. **Data ratios:** * A key motivation behind our dataset compilation was the scarcity of suitable existing datasets that span a large number of simulated tasks. To address this, our primary target was to assemble a collection of datasets that span as many tasks as possible to enable a robust comparison of sequence model architectures. To facilitate usability for future works, we consider standard benchmarks (e.g., Meta-World, Atari, Procgen) that are widely adopted by the community. Therefore, we hope that our data pipeline can serve as a solid basis for future research on multi-task agents. * Note that during pre-training, every domain is represented with approximately equal proportion in every update step (see Section 4.2 and Appendix C). Because the dataset sizes vary across domains (due to different numbers of tasks), this results in different numbers of total repetitions per dataset (Table 1). We opt for equal proportions, because we aim to study how the different backbones perform across domains, rather than optimizing performance on specific domains. * We agree with the reviewer that understanding the impact of the data ratios on multitask capabilities would be insightful. Varying the data ratios would, for example, allow studying potential interferences between the 432 tasks. 
While we believe that a thorough investigation on data ratios is out-of-scope for this work, because of the excessive cost of pre-training with varying ratios, we believe that studying data ratios in large action models represents an interesting direction for future work. We hope that our data pipeline and datasets may facilitate such future studies. We updated our manuscript to include a short discussion about the rationales behind our data ratios. **Embedding plots for Mamba:** Thank you for the suggestion. We added the embedding plots for Mamba to Appendix F in our manuscript (we cannot update the PDF). Mamba exhibits a more refined embedding space than DT, but slightly less refined than xLSTM. However, while a more refined embedding may help with task identification at inference time, it is unclear how strongly it benefits final performance. **Inference speed of Mamba:** We agree with the reviewer that the inference speeds for Mamba were missing from our original manuscript and are important to provide the entire picture. Initially, we focused our inference time tests on DT and xLSTM, as xLSTM tends to be slightly faster than Mamba in prior work (see Figure 9 in [1]). Therefore, we now repeated the inference time analyses conducted in Section 4.4 with Mamba and provide the detailed results in our updated manuscript. As expected, we find that Mamba exhibits the same linear scaling properties as xLSTM. Consequently, Mamba exhibits advantages over DT in terms of speed and memory for longer sequence lengths. In our experiments, Mamba runs slightly slower compared to xLSTM (0.002 sec/step vs. 0.0048 for B=16 in Figure 6b), but requires slightly less GPU RAM. We hypothesize that this is because the Mamba kernels are not compatible with `torch.compile`, which may result in a slowdown. With compatible kernels, that gap might be closed. Moreover, Mamba achieves throughputs similar to xLSTM.
When using larger batch sizes with xLSTM, we found that decreasing the head dimension (more heads, same total hidden dim) is important for enabling higher throughput, and added an ablation on this to the Appendix. This is because a higher head dimension incurs more FLOPS. Generally, these results reflect findings in existing works [1,2]. Our manuscript improved considerably by addressing and incorporating your comments: thank you. In particular, we believe that the additional results for Mamba strengthen the paper. [1] “xLSTM: Extended Long Short-Term Memory”, NeurIPS, 2024\ [2] “xLSTM 7B: A Recurrent LLM for Fast and Efficient Inference”, Arxiv, 2024
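For context, the sec/step numbers discussed in this rebuttal come from wall-clock timing of single inference steps. A minimal timing harness of that kind might look like the following hedged sketch (not the authors' benchmark code; the matrix-vector stand-in for a recurrent forward step and all parameters are illustrative):

```python
import time
import numpy as np

def sec_per_step(step_fn, n_warmup=10, n_steps=100):
    """Average wall-clock seconds per inference step, after warmup."""
    for _ in range(n_warmup):  # warm up caches / compilation effects
        step_fn()
    t0 = time.perf_counter()
    for _ in range(n_steps):
        step_fn()
    return (time.perf_counter() - t0) / n_steps

# Stand-in for a recurrent forward step: its cost is independent of
# context length, unlike attention over a growing KV cache.
hidden = np.zeros(256)
weight = np.eye(256)
latency = sec_per_step(lambda: weight @ hidden)
```

Warmup before timing matters particularly when kernels are JIT-compiled (e.g., via `torch.compile`), since the first calls otherwise dominate the average.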
On Fine-Grained Distinct Element Estimation
Accept (poster)
Summary: This paper considers the distinct elements problem in a distributed setting. There are $\alpha$ servers and a universe of $n$ elements, represented as a frequency vector $S$ over all elements in the universe. Each server has a subset $S_i$ of the items. The goal is for the servers to send messages to a coordinator such that the number of non-zero entries of $S:=\cup S_i$, denoted by $F_0(S)$, is estimated up to a multiplicative $(1\pm \varepsilon)$ factor. The authors give a messaging protocol that, assuming an upper bound of $\beta$ on the sum of overlaps $|S_i\cap S_j|$ over all pairs, uses $O(\alpha\log n\log\log n + \sqrt{\beta} \varepsilon^{-2}\log n)$ bits, bypassing the $O(\alpha(\log n + \varepsilon^{-2}))$ worst case lower bound. The authors also offer further improvements if the number of pairwise collisions is promised to be less than $F_0(S)$. In addition, the authors show that in the regime where their results are interesting, that is, in the regime where $\beta$ is small compared to $\alpha$, the dependency on $\beta$ achieved by their algorithms is also necessary. Finally, the authors also showcase that their ideas can be extended to other fields, most notably streaming algorithms, where they achieve a multi-pass streaming algorithm with only $\text{polylog}(\varepsilon^{-1},\log n)$ bits of space, as opposed to the polynomial dependencies on $\varepsilon^{-1}$ that are typically necessary. The algorithms use subsampling ideas to first obtain a coarse estimation of $F_0(S)$ and then likewise to refine the estimation to the desired $(1\pm \varepsilon)$ factor. Claims And Evidence: - Methods And Evaluation Criteria: - Theoretical Claims: The proofs in the main body are correct. I only skimmed the results in the supplementary material, but the proof of the first streaming algorithm is also correct.
Experimental Designs Or Analyses: - Supplementary Material: - Relation To Broader Scientific Literature: The distinct elements problem is very well studied, but now in the classic sense effectively closed. Some research on this area is still active, but those results typically focus on more general problems. The authors show that the worst case bound of $\Theta(\alpha (\log n + \varepsilon^{-2}))$ can be improved and that the assumption that they are making is, in some sense, necessary. Essential References Not Discussed: The related work is reasonable, so missing the following reference did not affect my rating of the paper. Mridul Nandi, N. V. Vinodchandran, Arijit Ghosh, Kuldeep S. Meel, Soumit Pal, Sourav Chakraborty: Improved Streaming Algorithm for the Klee's Measure Problem and Generalizations. APPROX/RANDOM 2024 It also uses subsampling as opposed to sketching ideas for distinct elements. Other Strengths And Weaknesses: I think that the algorithm would be a lot more convincing if it recovered the worst case bounds, or were at least competitive with the worst-case algorithms without the assumption that $\beta$ is small. Additionally, most beyond-worst-case results analyze an algorithm that is already widely used. In this case, the worst-case performance is not good and the algorithm is designed from scratch. The modelling of the problem and determining the parameterization in terms of $\beta$ has value. Unfortunately, the ideas themselves and the analysis are not too novel. The techniques have been around for quite a while and while piecing them together is not trivial, I also did not feel like I learned much when reading this paper. I feel it is a borderline paper, which I rounded up to a weak accept. Other Comments Or Suggestions: - Questions For Authors: - Code Of Conduct: Affirmed. Overall Recommendation: 3
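To make the subsampling idea mentioned in the summary concrete, here is a toy single-machine estimator (a simplification for intuition, not the paper's protocol; the shared hash is simulated by cached coin flips and all parameters are illustrative): each universe element survives with probability `p`, and the number of distinct survivors is rescaled by `1/p`, giving an unbiased estimate of $F_0$.

```python
import random

def estimate_f0(items, p, seed=0):
    """Unbiased subsampling estimate of the number of distinct elements.

    Each universe element survives with probability p (simulating a
    shared random hash by caching one coin flip per element); the
    distinct count among survivors is scaled by 1/p.
    """
    rng = random.Random(seed)
    survives = {}          # element -> cached coin flip ("hash")
    distinct_sampled = set()
    for x in items:
        if x not in survives:
            survives[x] = rng.random() < p
        if survives[x]:
            distinct_sampled.add(x)
    return len(distinct_sampled) / p

# Toy stream with F0 = 1000 (each distinct element repeats 100 times).
stream = [i % 1000 for i in range(100_000)]
est = estimate_f0(stream, p=0.5)  # concentrates around 1000
```

The paper's refinement step then adjusts the sampling rate until the number of survivors is large enough for the variance to support a $(1\pm\varepsilon)$ guarantee.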
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and constructive criticism. > Mridul Nandi, N. V. Vinodchandran, Arijit Ghosh, Kuldeep S. Meel, Soumit Pal, Sourav Chakraborty: Improved Streaming Algorithm for the Klee's Measure Problem and Generalizations. APPROX/RANDOM 2024 > It also uses subsampling as opposed to sketching ideas for distinct elements. Thank you for pointing out this reference. We have added a reference as an additional work that uses subsampling. > I think that the algorithm would be a lot more convincing if it recovered the worst case bounds, or were at least competitive with the worst case algorthms without the assumption that $\beta$ is small. Additonally, most beyond worst case results analyze an algorithm that is already widely used. In this case a worst-case performance is not good and the algorithm is designed from scratch. We emphasize that our bounds are competitive with the worst case bounds. We can further optimize the $O(\alpha\log n\log\log n)$ term to $O(\alpha\log n)$ by referencing an existing technique instead of Algorithm 1, so that overall the only difference between our bounds and the worst-case bounds is the $O(\log n)$ communication required by sending the identity of each sample. On the other hand, many datasets often exhibit some sort of skewed distribution and our results show that the communication can be significantly improved upon known bounds in these settings. > The modelling of the problem and determining the parameterization in terms of $\beta$ has value. Unfortunately, the ideas themselves and the analysis is not too novel. The techniques have been around for quite a while and while piecing them together is not trivial, I also did not feel like I learned much when reading this paper. We acknowledge that the core techniques used (e.g., subsampling, sketching primitives adapted for communication) are built upon existing foundational work in streaming and communication complexity. 
However, we believe the novelty lies in: 1) Identifying the collision parameter C/β as the key factor governing complexity beyond the worst case. 2) Designing protocols specifically tailored to leverage this parameter. 3) Providing a complete analysis including matching upper and lower bounds in this parameterized setting, which requires non-trivial adaptation and combination of existing techniques. While the tools may be familiar, establishing the tight parameterized complexity is our core technical contribution. We would also like to highlight novel techniques, such as using robust statistics to achieve our streaming algorithm for distinct elements, which allows for accurate estimation even in the presence of adversarial noise and memory constraints; to the best of our knowledge, no prior results utilize robust statistics for distinct element estimation in the streaming setting. Additionally, although there is a decent body of literature on distributional assumptions and fine-grained complexity, surprisingly there is comparatively much less literature for sublinear algorithms. In particular, there has recently been a number of works in the area of learning-augmented algorithms that specifically use distributional assumptions in addition to machine-learning advice to achieve better guarantees. Thus although distributional assumptions have been studied in the past, we respectfully believe there is value in exploring these directions in new areas, particularly for the purposes of bridging practical algorithms and impossibility results in theory.
Summary: The submission provides a more detailed theoretical analysis of the Distributed Distinct Element Estimation problem and shows that under certain assumptions on the distribution of the data points across servers (in particular concerning the number of collisions), previous lower bounds can be overcome. The main contributions include a new theoretical algorithm/protocol with more refined guarantees and an experimental evaluation. Claims And Evidence: Yes. Methods And Evaluation Criteria: Yes. Theoretical Claims: I did not spot any issues with the theoretical claims. Experimental Designs Or Analyses: I did not spot any issues with the experimental design. Supplementary Material: I did not carefully check the technical details in the supplementary material. Relation To Broader Scientific Literature: There is a well-documented connection to previous work. Essential References Not Discussed: N/A. Other Strengths And Weaknesses: The conceptual idea behind the submission is a good one: there are well-established lower bounds for Distributed Distinct Element Estimation, but these assume certain distributions and it is natural to ask whether the lower bounds can be circumvented when the distributions are closer to those that may occur in some real-world scenarios. The authors show that this is indeed the case. They also provide lower bounds which show that under the considered assumptions, their results are basically tight. The proofs are non-trivial, and overall the contributions are sufficient and in line with what I would expect from a primarily theoretical ICML paper. The problem itself also seems well-studied, although I am wondering whether it is still relevant in contemporary ML applications. If the authors are aware of more recent applications of the problem in ML, they are welcome to provide some. The paper is also well-written and nicely discusses the implications / context of the obtained results. 
## Update after rebuttal I maintain my assessment and score. Other Comments Or Suggestions: N/A. Questions For Authors: In the algorithmic contributions (Theorem 1.1 and 1.2), what would happen if we mis-estimated the number of collisions? In other words, is there any way of using the obtained results when one is not certain about the number of collisions (e.g., can one "test" different choices)? Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: We thank the reviewer for their positive assessment and thoughtful questions. > The problem itself also seems well-studied, although I am wondering whether it is still relevant in contemporary ML applications. If the authors are aware of more recent applications of the problem in ML, they are welcome to provide some. Thank you for this question. While Distributed Distinct Elements is a classic problem, variations and related counting/frequency estimation tasks remain highly relevant in modern large-scale ML and data analysis. Examples include: estimating cardinalities in federated learning settings without sharing raw data, analyzing feature overlap or user reach across distributed datasets/services, monitoring network traffic statistics (unique flows/IPs), and estimating distinct items in large graphs or databases distributed across clusters. We will add references or discussion points highlighting these contemporary ML applications in the revised introduction or related work. > In the algorithmic contributions (Theorem 1.1 and 1.2), what would happen if we mis-estimated the number of collisions? In other words, is there any way of using the obtained results when one is not certain about the number of collisions (e.g., can one "test" different choices)? Our algorithm is fairly robust, as the error in the estimate for the number of collisions can be translated to an additional number of samples and thus communication. For instance, if the estimate is incorrect by a $O(\log n)$ factor, we can still handle this using an extra $O(\log n)$ factor in the communication. In general, if $\beta$ is underestimated, the protocol's correctness guarantee may fail. If $\beta$ is overestimated, the protocol remains correct but uses more communication than necessary. On the other hand, it's not quite clear how to test different choices, because an incorrect choice could erroneously lead to an estimate that "looks" correct for the incorrect choice.
Summary: The paper studies distinct element estimation in a distributed setting, where $\alpha$ servers each hold a subset of elements from $[n]$. The goal is to compute the total number of distinct elements approximately while minimizing communication cost. For a $(1 + \epsilon)$-approximation, prior works establish tight bounds of $\Theta(\alpha / \epsilon^2 + \alpha \log n)$, assuming a constant fraction of elements appear in a constant fraction of servers. This paper explores a setting where most elements are not widely replicated across servers. Under the assumption that the number of pairwise collisions $C$ satisfies $C = \beta \cdot O(\min (F_0(S), 1 / \epsilon^2 ))$, where $F_0(S)$ denotes the number of distinct elements, the authors improve the upper bound to $O(\alpha \log n \log \log n + \sqrt{\beta} (\log n) \cdot \min (F_0(S), 1/ \epsilon^2))$, with further improvements when $C < F_0(S)$. The paper also establishes matching lower bounds. Algorithms 1 and 2 build on standard techniques, but their analysis leverages the newly introduced pairwise collision parameter. Claims And Evidence: Yes, the claims are supported by mathematical guarantees. Methods And Evaluation Criteria: Yes Theoretical Claims: Is Algorithm 1 necessary? As stated in Line 58 on Page 2, a one-pass streaming algorithm for distinct element estimation can be transformed into the distributed setting, yielding a protocol with $O(\alpha / \epsilon^2 + \alpha \log n)$ bits of communication. Since Algorithm 1 aims for a constant-factor approximation of the number of distinct elements, one could instead apply the aforementioned protocol, achieving a communication cost of $O(\alpha + \alpha \log n)$, which appears lower than that of Algorithm 1. Experimental Designs Or Analyses: Experiments contain validation of the theoretical setting. 
Supplementary Material: No Relation To Broader Scientific Literature: The paper falls into the category of exploiting data patterns to overcome existing theoretical bounds. Essential References Not Discussed: N/A Other Strengths And Weaknesses: The paper utilizes the input pattern to improve the current bounds. Other Comments Or Suggestions: 1. See Theoretical Claims. 2. Line 124, $C = O(\alpha)$? Maybe something is missing there. Questions For Authors: See Theoretical Claims. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback and the pertinent question regarding Algorithm 1. > Is Algorithm 1 necessary? As stated in Line 58 on Page 2, a one-pass streaming algorithm for distinct element estimation can be transformed into the distributed setting, yielding a protocol with $O(\alpha/\epsilon^2 +\alpha\log n)$ bits of communication. > Since Algorithm 1 aims for a constant-factor approximation of the number of distinct elements, one could instead apply the aforementioned protocol, achieving a communication cost of $O(\alpha +\alpha\log n)$, which appears lower than that of Algorithm 1. This is a great point. We originally had Algorithm 1 as a warm-up to the main results in Algorithm 2 and Algorithm 3 because both the algorithmic structure and the corresponding analysis are similar but simpler. However, given the additional $\log\log n$ factor incurred by Algorithm 1, we agree that it would be better to simply reference an existing protocol that achieves communication cost $O(\alpha+\alpha\log n)$. Thanks for your suggestion! > Line 124, $C=O(\alpha)$? Maybe something is missing there. We have corrected the typo to be $C=O(\alpha)\cdot F_0(S)$, consistent with the implications of the statement -- thanks for pointing this out.
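For concreteness, the pairwise collision parameter $C=\sum_{a<b}|S_a\cap S_b|$ that the corrected statement refers to can be computed on a toy instance (illustrative sets, not data from the paper):

```python
from itertools import combinations

def pairwise_collisions(server_sets):
    """C = sum over server pairs (a, b) of |S_a intersect S_b|."""
    return sum(len(a & b) for a, b in combinations(server_sets, 2))

# Toy instance with 3 servers; only servers 0 and 1 share elements.
servers = [{0, 1, 2, 3}, {2, 3, 4}, {5, 6}]
C = pairwise_collisions(servers)    # |{2, 3}| + 0 + 0 = 2
f0 = len(set().union(*servers))     # 7 distinct elements overall
```

Here $C$ is far below $F_0(S)$, the regime where the parameterized bounds improve on the worst case; in the fully replicated worst case, $C$ grows as $\Theta(\alpha^2)\cdot F_0(S)$ instead.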
Summary: In this paper the problem of distinct element estimation is studied. More precisely, we are given a universe [n] and $\alpha$ servers and each of the servers receives a subset of the universe. Now, the goal is to compute a $(1+\epsilon)$-approximation of the number of distinct elements using minimal communication among the servers. This problem is well-studied and worst case bounds exist. These bounds, however, are based on assumptions which do not hold in practice. Thus, this problem is studied parameterized by the number of pairwise collisions. Using this parameter the paper presents a protocol with a low number of bits for communication, breaking previous lower bounds if the parameter is small. Also, matching lower bounds under this parameter are presented. Finally, a proof-of-concept implementation is provided which shows the effectiveness of the new protocol. Claims And Evidence: I think all claims in the paper are convincing. Methods And Evaluation Criteria: For the theory, I say yes. For the experiments, I think the data set is quite small. I think without too much effort more data sets can be generated, for example by considering each batch of 1 million events individually. This then results in 40 times as many data sets. I don't think that this is a big issue since the main focus of the paper is theory and the experiments so far already show that the new approach is very good in practice. Theoretical Claims: Since I am not an expert in the field, I only checked some small proofs and tried to understand the ideas of the proof. I skipped most proofs in the appendix. Experimental Designs Or Analyses: Yes, I checked the experimental details. I am convinced by the result. I only think that more data should be used to obtain higher quality results (see Methods And Evaluation Criteria). However, I don't think that this would change the results drastically.
Supplementary Material: see: Theoretical Claims Relation To Broader Scientific Literature: The key idea is to use the parameter C of pairwise collisions. This is a new idea and this parameter is small in practice. For the unparameterized setting the authors cite all relevant literature as far as I can tell. Essential References Not Discussed: No, see Relation To Broader Scientific Literature Other Strengths And Weaknesses: combining parameterized algorithmics and distinct element estimation is a natural idea and shows the future potential of this approach for other related problems Other Comments Or Suggestions: l 87 c 1: please define Zipfian distribution l 124-129 c1: please provide an explanation for this behavior l 237 c2: constant factor >> 4-approx l 670: what is P,Q? l 678: I think this formal description of the problem should be part of the main body l 750: Here your arguments were too short for me to understand the proof as a non-expert. Providing more details here would be very helpful for me. For example instead of ``using standard expectation and variance techniques'' please provide the precise arguments. (similar in l 770) l 797: where does the factor 100 come from? l 866: this seems natural; but please provide the padding argument here l 877: why ``Unfortunately''? I think this is a positive result l 974: please provide a reference for this lower bound appendix C.1 had some nice motivation; I think this should also be mentioned in the main body Questions For Authors: Q1: You say that the new protocol is better if C is small. But this argument only takes the second summand into account. In your new bound the first summand is larger than the first summand in the old bound. Why is this first summand not important/dominated by the second? Q2: I don't understand why for non-binary vectors you want $v_i^{(a)}\ge 1$ and $v_i^{(b)}\ge 1$. For me, it seems more natural to require $v_i^{(a)}= v_i^{(b)}$. Does this make the problem significantly more complicated?
Is this problem relevant in practice? Is this studied? Q3: In most theorems you use completely different probabilities. Why is this the case? Is there an easy argument that any constant probability can be used? If yes, please provide such an argument. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed feedback and insightful questions. > For the experiments...more data sets can be generated We agree that increasing the number of datasets could provide additional empirical validation. Given the structure of the data, we anticipate that the overall pattern of results would remain similar. However, we see the value in this approach and will explore generating additional datasets to further strengthen our experimental evaluation. > l 87 c 1: please define Zipfian distribution We have added the formal definition of the Zipfian distribution in the preliminaries and provided a forward pointer at this location. > l 124-129 c1: please provide an explanation for this behavior We have corrected the typo to be $C=O(\alpha)\cdot F_0(S)$, consistent with the implications of the statement -- thanks for pointing this out! > l 237 c2: constant factor >> 4-approx We have specified this in the algorithm. > l 670: what is P,Q? $P$ and $Q$ are the two distributions $\mu_1$ and $\mu_2$ -- we have unified the notation now. > l 678: I think this formal description of the problem should be part of the main body Thanks for the suggestion, we have moved this formal description to the main body in Section 2 and swapped out some of the full proofs to the appendix. > l 750: Here your arguments were too short for me to understand the proof as a non-expert...please provide the precise arguments. (similar in l 770) We have adjusted the language to clarify that the estimator provides an unbiased estimate of the total number of distinct elements and, because of the number of samples, the variance is also sufficiently small. Then by applying a concentration inequality such as Chebyshev's inequality, it follows that the resulting estimator provides a $(1+\epsilon)$-approximation to the total number of distinct elements. > l 797: where does the factor 100 come from?
The number 100 comes from being a sufficiently large constant to apply the concentration inequalities that would result in a $(1+\epsilon)$-approximation. We have clarified this. > l 866: this seems natural; but please provide the padding argument here We have added a description of the padding argument to the corresponding location in the appendix. > l 877: why ``Unfortunately''? I think this is a positive result This is a positive result, but it rules out the proposed construction for a "hard distribution" toward the desired lower bounds, which is arguably unfortunate in the given context. We have rephrased this language. > l 974: please provide a reference for this lower bound Thanks, we have added the reference to this lower bound, appearing in Jayram and Woodruff (2013). > appendix C.1 had some nice motivation; I think this should also be mentioned in the main body Although there is insufficient room in the main body to include the entire motivation, we have added some of the motivation to the main body and included a pointer to Appendix C.1 for additional discussion. > Q1: You say that the new protocol is better if C is small. But this argument only takes the second summand into account. In your new bound the first summand is larger than the first summand in the old bound. Why is this first summand not important/dominated by the second? Our first summand is $O(\alpha\log n\log\log n)$ while the first summand in the old bound is $O(\alpha\log n)$. Due to the small $O(\log\log n)$ factor, we did not optimize the first summand. In fact, this discrepancy is due to Algorithm 1, which we included because it leads to a natural description of Algorithm 2 and 3. Instead, we can also reference an existing procedure so that our first summand becomes $O(\alpha\log n)$, without changing any other terms in our bound. Thus, we match the first summand of the old bound over all regimes. This was also picked up by Reviewer iXZ8, who pointed out the connection. 
> Q2: I don't understand why for non-binary vectors you want $v_i^{(a)}\ge 1$ and $v_i^{(b)}\ge 1$. For me, it seems more natural to require $v_i^{(a)}=v_i^{(b)}$. Does this make the problem significantly more complicated? Is this problem relevant in practice? Is this studied?

Intuitively, an item $i$ is shared across multiple servers $a$ and $b$ if $v_i^{(a)}\ge 1$ and $v_i^{(b)}\ge 1$. For the purposes of the distinct elements problem, there is no difference from the perspective of a server $a$ whether $v_i^{(a)}=1$ or $v_i^{(a)}>1$, because either way the server $a$ knows that $i$ has a non-zero count.

> Q3: In most theorems you use completely different probabilities. Why is this the case? Is there an easy argument that any constant probability can be used? If yes, please provide such an argument.

Yes, any constant probability larger than $\frac{1}{2}$ can generally be used. This is because by taking $O\left(\log\frac{1}{\delta}\right)$ independent instances of an algorithm and computing the median, the probability of success is boosted to $1-\delta$.

---

Rebuttal Comment 1.1: Comment: Thanks for your answers!

About Q3: Thanks, this is what I expected. In my opinion it is slightly better to use the same probability in each statement to avoid confusion.

About the experiments: My intuition is the same that more data generated from this data set should yield the same result. But nonetheless it is better to have such an evaluation. Hence, I will keep my current score.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for the additional feedback and the continued correspondence.

> About Q3: Thanks, this is what I expected. In my opinion it is slightly better to use the same probability in each statement to avoid confusion.

We have unified each statement to have the same probability $\frac{2}{3}$ across the document.

> About the experiments: My intuition is the same that more data generated from this data set should yield the same result.
> But nonetheless it is better to have such an evaluation.

Following the reviewer's suggestion, we have generated $20$ datasets from this larger dataset and conducted experiments on them. The behavior is mostly the same: when each server is expected to send only 16 samples (or more), the worst performance across the 20 iterations is roughly 92% accuracy, while the average is closer to 95%. Line plots for the worst, average, and best case performances across the 20 iterations are available at https://anonymous.4open.science/r/parameterized-distributed-distinct-elements-8E7D/samples-vs-err-central.png

We have also uploaded the entire code to the anonymous repository at https://anonymous.4open.science/r/parameterized-distributed-distinct-elements-8E7D/ (though the csv file from CAIDA itself is too large to upload).
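As a concrete illustration of the median-boosting argument from Q3, the following small, self-contained Python simulation (a toy stand-in estimator, not the paper's actual algorithm; the 2/3 success rate and 10% accuracy band are illustrative assumptions) shows how taking the median of independent instances drives down the failure probability:

```python
import random
import statistics

random.seed(0)
F0 = 1000  # true number of distinct elements in this toy setup

def noisy_estimate():
    # Stand-in for one run of an estimator: within 10% of F0 with
    # probability 2/3, otherwise wildly off.
    if random.random() < 2 / 3:
        return F0 * random.uniform(0.9, 1.1)
    return F0 * random.choice([0.1, 10.0])

def boosted_estimate(k):
    # Median of k independent instances; by a Chernoff bound the
    # failure probability decays exponentially in k.
    return statistics.median(noisy_estimate() for _ in range(k))

trials = 500
fail = lambda x: abs(x - F0) > 0.1 * F0
single_failures = sum(fail(noisy_estimate()) for _ in range(trials))
boosted_failures = sum(fail(boosted_estimate(35)) for _ in range(trials))
```

With $k$ instances, a Chernoff bound gives failure probability $2^{-\Omega(k)}$, matching the $O\left(\log\frac{1}{\delta}\right)$ repetition count claimed above: the single-instance failure rate stays near $1/3$, while the median of 35 instances fails only rarely.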
Causal Abstraction Learning based on the Semantic Embedding Principle
Accept (poster)
Summary: The authors use a category-theoretical formalization of SCMs and causal abstraction to derive and optimize similarity measures over the measurable spaces of the observed distributions of the corresponding low- and high-level representations of data. Towards this, existing notions of $\alpha$-abstractions and constructive causal abstractions are leveraged to formulate a semantic embedding principle (SEP) that implies a right-inversive abstraction (i.e., the existence of a consistent high- to low-level map), which allows for the formalization of a distance measure between low- and high-level data. They formulate a "SEP-based CA Learning" problem and employ several optimizations to minimize distances between the probability measures of the high- and low-level distributions. To obtain unique solutions from observational data, prior knowledge about the functional form of the abstraction is used to enforce the constructiveness of the abstraction. The authors reformulate the learning problem under the prior-knowledge constraint as a Riemannian optimization problem on a Stiefel manifold. The authors propose and derive several variants of a "Linear Semantic Embedding Principle Abstraction Learner" (LinSEPAL-ADMM, LinSEPAL-PG, CLinSEPAL) over smooth and nonsmooth setups utilizing different optimization methods. Ablation experiments on synthetic data in the presence of full and partial prior knowledge are presented, showing robust performance of the CLinSEPAL method. The best performing LinSEPAL method is furthermore applied to real-world fMRI brain data, where the authors simulate the coarsening of brain regions of interest (ROI) between a raw data model and the combination of several ROI into macro ROI. In a second experiment, uncertainty about the grouping of some ROI is simulated such that the method has to decide the assignment of ROI to several macro ROI, resulting in moderate errors.
**Update after rebuttal.** The rebuttal and the following answers fully addressed my concerns regarding the role of $\mathbf{B}$ and partial prior knowledge. The subsequent explanation and the presented proof on bounded eigenvalues and constructive CAs further strengthened the contributions of the paper. Considering this and the other reviewers' discussion, I have raised my score to an accept.

Claims And Evidence: The authors argue that previous assumptions about the availability of interventional data, structural knowledge of the SCM, or functional form are unrealistic or infeasible to obtain. To overcome this problem, the authors work under the assumption that partial prior knowledge on the structure of factor assignments between the causal abstractions is available. To my understanding, the assumed prior knowledge is embedded in the form of an assignment matrix which indicates the relation between low- and high-level features, such that the CA learning problem transforms into a parameter regression problem in the presence of full prior knowledge. Even though the authors present a convincing case where full prior knowledge might be available, learning the exact parameter assignment is arguably the core problem of CA learning. In that regard, the authors present a final experiment on brain data where uncertainty about factor assignments is induced in the prior information. However, the ability to recover correct factor assignments is only analyzed marginally (see 'Methods And Evaluation Criteria' below). Under the assumption of given prior information, however, the presented algorithms yield convincing results, with LinSEPAL-ADMM and LinSEPAL-PG declining under partial knowledge (with missing factor assignments) and CLinSEPAL even obtaining robust performance in the latter case.
Methods And Evaluation Criteria: The authors report results for distribution distance in terms of KL divergence, F1 score, and Frobenius absolute distance of the learned map, and analyze the correctness of the learned factor assignments ('learned morphisms'). The reported metrics are suited to assess the performance of the respective algorithms.

Theoretical Claims: The authors present the semantic embedding principle (SEP), which implies the existence of a right-inversive causal abstraction (CA) between the high- and low-level data measures. Furthermore, the problem of SEP-based CA learning is formalized, which states, as a goal, the learning of a CA that minimizes the distance between data representations and complies with the SEP. Both the definition of SEP and SEP-based CA learning follow naturally from the category-theoretic and measure-theoretic formalizations. The problem formulation on a Stiefel manifold and the consequent formalization as a Riemannian optimization problem are laid out clearly and seem to be correct. While I am not an expert in the proposed optimization algorithms, I followed the derivations of the CLinSEPAL (Sec. 5.2) method in Appendix J, which, to the best of my knowledge, seem to be without immediate errors.

Experimental Designs Or Analyses: A first experiment is conducted over synthetically generated data. The data generation and chosen dimensions of the experimental setups are reasonable to demonstrate the correct workings of the algorithms. Algorithms are evaluated under full and partial evidence, meaning the omission of variable assignments in the prior knowledge. As discussed in 'Claims And Evidence', the algorithms should be tested on recovering CAs in the light of uncertainty, i.e., multiple possible factor assignments/morphisms. The experiment on coarsened brain regions of interest with full prior knowledge seems to follow domain-specific knowledge and is sound and well conducted.
With regard to the evaluation of partial knowledge, the authors might indicate the specific prior knowledge matrix $\mathbf{B}$ that is used for the respective low, medium, and high setups, particularly to indicate the number of assignments the algorithm can choose from in each setting. Finally, I would recommend still reporting results of all methods for all metrics in Fig. 4. Worse performing methods might still yield reasonable results -- even if for the wrong reasons -- and might give insights on whether those metrics can be used to validate the prior knowledge.

Supplementary Material: I checked the preliminaries and discussions on category theory and measure-theoretic formalizations of SCM and causal abstraction in Appendices B-D and F, together with the more detailed discussion on Stiefel manifolds in Appendix E. The presented discussions and formalisms were laid out clearly and seemed to be consistent. I briefly checked the derivations of the proposed optimization methods in Appendices H, I, and J. While I am not fully familiar with the employed optimization methods, I found no immediate errors. Appendices K and L, relating to the experimental setup and results, aligned with the claims of the main paper.

Relation To Broader Scientific Literature: Being able to relate findings across different levels of causal abstraction is an important goal, as it allows for the transformation of results between different approaches and yields models that can communicate low-level findings on a higher level. Within the past few years, the automated learning of causal representations has therefore attracted increasing interest. The authors cite and discuss relevant approaches in the field, which come -- due to the inherent unidentifiability of factors from observational data -- with different particular assumptions.
The proposed method(s) require prior knowledge on the functional structure of the abstraction, motivated by a measure-theoretic and category-theoretic formalization of SCMs. To the best of my knowledge, the presented approach marks a novel contribution in terms of motivation and optimization.

Essential References Not Discussed: To the best of my knowledge, the authors cite and discuss relevant literature on causal abstraction learning and related category-theoretical perspectives. While the authors utilize a measure-theoretic view of acyclic SCMs, induced via recursive applications of push-forward measures, a more general view (that, e.g., allows for cyclic causal relations) might be taken with the use of transition probability kernels as formalized by Park et al. [1].

[1] Park, J., Buchholz, S., Schölkopf, B., & Muandet, K. (2023). A measure-theoretic axiomatisation of causality. *Advances in Neural Information Processing Systems*, *36*, 28510-28540.

Other Strengths And Weaknesses:

**Strengths** The problem setup is well set up and derived. Intermediate steps over the category-theoretical concepts are laid out clearly and follow naturally. Despite the heavy use of prior concepts and formalisms, all concepts are well described and the derivations in the paper are mostly self-contained. The embedding of the problem into Stiefel manifolds is an interesting insight and allows for the application of well-known Riemannian optimization methods. The presented synthetic and real-world experiments confirm the robust application of the derived methods. Interestingly, the particular CLinSEPAL approach is demonstrated to perform well on real-world brain data, even in the presence of only partial knowledge.
**Weaknesses** The main weaknesses have been discussed in the prior sections and mostly regard the identification of morphisms in the case of partial prior knowledge. Specifically, they concern the following points:

1) The presence of full prior knowledge might be an even stronger constraint than knowledge of the underlying DAG and possibly transforms the task into a mere parameter regression problem(?). The authors might elaborate on (or compare to) the specific differences from a naive regression approach for the full prior information setting that simply regresses the non-zero entries of $\mathbf{B}$.

2) The arguably interesting case of identifying the right morphisms in the presence of only uncertain partial prior knowledge is not demonstrated on synthetic data. The paper might be improved by adding an analysis of the effects of uncertain partial prior knowledge on synthetic data.

3) In regard to the former point, an evaluation is presented on real-world brain data. However, details on the extent of uncertainty are not specified. The authors might add the utilized $\mathbf{B}$ matrices or specify the number of non-zero entries per row per setting.

Other Comments Or Suggestions:
* The caption of Figure 5 ("from Ind (right) to Prob (left)") does not match the labels in the figure.
* $\mathbf{V}^{\star}$ is referred to in line 203 before its actual definition in Eq. 4.

Questions For Authors: I would like to ask the authors to reply to the points listed in the weaknesses. Furthermore, I have the following questions on (possibly minor) details of the paper:

1) Could the authors elaborate on the derivation of Eq. 3? Specifically, the log-determinant term seems to involve only a single determinant, while the KL divergence is usually formulated as a quotient involving both of its arguments.

2) Could the authors elaborate on the condition of constructive CAs being bounded by the range of eigenvalues?
The mentioned relation was not immediately obvious and was not explained further in the paper.

3) What is the reason for introducing $\mathbf{S}$ in the smooth problem setup? How is it different from learning $V$ directly (and why is this distinction not necessary in the nonsmooth approach)?

Minor:
* Possible misunderstanding or typo [l. 165 (right column); 'zeroing distance function']: Wouldn't $\chi_{\#}^{\alpha^*_{\mathcal{X}}}$ map $\chi^{\mathcal{l}}$ onto $\chi^{\mathcal{h}}$? Why can $\chi^{\mathcal{l}}$ be on both sides of the equation?

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you to the Reviewer for their effort, valuable comments, and appreciation of our work. We address below all the Reviewer's concerns, also providing an additional theoretical contribution in [Q2]. We are happy to further discuss any additional concerns.

_Claims_ See [W1-W2].

_Experimental Designs_

[Uncertainty] See [W2].

[Brain application] See [W3].

[Fig.4] Agreed, we will report the results for our proposed methods that do not guarantee constructiveness in the Appendix.

_Weaknesses_

[W1] We do not assume the availability of aligned (i.e., jointly sampled) data from the SCMs. Hence, we cannot pose any regression problem. Assuming jointly sampled data are available (but again, that is a different work from ours), denoting by $h$ the number of nodes in the high-level model and considering full prior knowledge, one could separately solve $h$ linear regression problems subject to unitary $\ell_2$-norm constraints on the vectors of coefficients in order to comply with SEP. Thus, the individual problems are not mere regressions, but rather nonconvex problems due to the constraints $\beta_i^\top \beta_i=1$, $i \in [h]$, with $\beta_i$ being the $i$-th _vector_ of coefficients. Thanks, we will add these details as a _Remark_.

[W2] We believe there is a misunderstanding, probably due to lines 369-370 where we say _"by forgetting the mapping for 25%, 50%, and 75% of the variables"_. This passage in the text may be ambiguous to the reader and we will modify it. We clarify below this point that raised a major concern. In lines 433-435 we say: _"We then express this partial information via uncertainty over B, meaning that some rows of B have more than one entry equal to one"_. This is exactly what we do in the partial prior setting of the synthetic experiments in Sec. 6. Indeed, by _"forgetting the mapping for 25%, 50%, and 75% of the variables"_ we mean that 25%, 50%, and 75% of the rows in $B$ have all entries equal to one.
Thus, for each case, a specific fraction of nodes in the low-level model considers all nodes in the high-level model as plausible abstractions, and our algorithms have to identify the correct high-level node for each low-level one. Hence, Fig. 4 already shows the results for partial prior knowledge on synthetic data.

[W3] Agreed, we will add the $B$ used in the experiments to App. L.

_Comments_

[C1] Thank you, we will correct the typo in Fig. 5.

[C2] Thanks, we will introduce $V^\star$ earlier.

_Questions_

[Q1] In Eq. (3) we look at KL as a function of $V$. The constant term is exactly $-\log \det \Sigma^h - h$. We will specify this in line 202, 2nd col.

[Q2] Below we translate Remark 1 into a rigorous additional result that we will add to the paper. It establishes a spectral characterization for linear CA and Gaussian measures, valid for any information-theoretic metric and $\phi$-divergence.

_Theorem._ Let $\chi^\ell \sim N(0_\ell, \Sigma^\ell)$, $\chi^h \sim N(0_h, \Sigma^h)$, where $\Sigma^\ell$ and $\Sigma^h$ are positive definite and $\ell>h$. Denote by $0<\lambda_1\leq\dots\leq \lambda_\ell$ the eigenvalues of $\Sigma^\ell$, and by $0<k_1\leq\dots\leq k_h$ those of $\Sigma^h$. If a linear CA $V \in \mathrm{St}(\ell,h)$ complying with SEP from $\chi^\ell$ to $\chi^h$ exists, then
$$ \lambda_i \leq k_i \leq \lambda_{i+\ell-h}, \quad \forall i \in [h]. \tag{1} $$

_Proof._ If a linear CA $V \in \mathrm{St}(\ell,h)$ exists, then $\chi^h=\varphi_\text{push}^V(\chi^\ell)$. Thus the kernel of any information-theoretic metric or $\phi$-divergence is nonempty, and $V^\top \Sigma^\ell V=\Sigma^h$, implying that the eigenvalues of $V^\top \Sigma^\ell V$ are those of $\Sigma^h$. By Ostrowski's theorem for rectangular $V$ (cf. Th. 3.2 in Higham & Cheng, 1998) we have
$$ k_i = \theta_i \mu_i, \quad i \in [h]; \tag{2} $$
where
$$ \lambda_i \leq \mu_i \leq \lambda_{i+\ell-h}, \tag{3} $$
and
$$ \mathrm{eigvls}(V^\top V)_1 \leq \theta_i \leq \mathrm{eigvls}(V^\top V)_h .
\tag{4} $$

Since $V \in \mathrm{St}(\ell,h)$, by (4) $\theta_i=1$ for each $i \in [h]$. Substituting the latter into (2), we get $k_i=\mu_i$, thus obtaining (1) by (3).

_Ref.:_ Higham, N. J., & Cheng, S. H. (1998). Modifying the inertia of matrices arising in optimization. Linear Algebra and its Applications, 275, 261-279.

[Q3] The matrix $S$ guarantees a constructive CA. Consider the partial prior knowledge case where $B$ has more than a single 1 per row. By disentangling the support $S$ and the coefficients $V$, we can learn and enforce the support of the CA to be constructive through the second and third constraints in Eq. (5). In the nonsmooth case we propose a simpler unconstrained formulation at the price of losing constructiveness guarantees for the CA. In fact, we simply penalize entries in $V$ corresponding to zeros in $B$. Clearly, this does not guarantee the constructiveness of $V$, especially in the case of partial prior knowledge.

[Minor] Thanks, on the LHS it should be $\chi^h$.

---

Rebuttal Comment 1.1: Comment: Thank you for the detailed answers regarding my questions. I believe that the comments on the relation to regression/nonconvex optimization, the comment on the KL term in Eq. 3, and the clarifications on the role of $S$ will strengthen the paper.

**W2:** I was indeed under the assumption that the statement in line 369 was implying additional restrictions on the already given partial knowledge. Thank you for clearing up this point.

**Q2:** While I have to admit that I have not yet fully understood the theorem, I appreciate the effort. I would like to kindly ask the authors whether there exists an intuitive interpretation of the theorem, particularly regarding the conditions under which a linear constructive CA might exist, and under which conditions it is guaranteed not to exist?
Considering that causal abstraction learning is an inherently difficult problem to solve in general, I do not share the view of the other reviewers that the linearity of the abstractions is a downside of the approach. In light of the other ongoing discussions, I have raised my score to a weak accept for now.

---

Reply to Comment 1.1.1: Comment: We are pleased that our response has addressed the Reviewer's concerns, and we sincerely appreciate their acknowledgment that learning a causal abstraction is a complex challenge, even in the linear case. We greatly value the Reviewer's constructive approach and believe that the final version will be an improvement over our initial submission.

Below is a geometric intuition for the theorem, leading to the derivation of a necessary condition for the existence of a causal abstraction between $N(0, \Sigma^\ell)$ and $N(0, \Sigma^h)$. We hope it will aid in the understanding and assessment of our results.

__[Covariance as an ellipsoid.]__ In the theorem we consider, w.l.o.g., zero-mean Gaussian distributions. Therefore, the causal information lies in the variance of the distributions. As is well known, we can imagine the covariance matrices $\Sigma^\ell$ and $\Sigma^h$ as $\ell$-dimensional and $h$-dimensional ellipsoids, respectively.

__[Role of eigenvectors and eigenvalues.]__ The eigenvectors identify the axes of the ellipsoids, and the square roots of the eigenvalues the length of each axis. Thus, the eigenvectors form two bases of $\mathbb{R}^\ell$ and $\mathbb{R}^h$, viz. $U^\ell$ and $U^h$, respectively. In particular, the eigenvectors are the columns of these matrices.

__[Projection of the low-level ellipsoid.]__ Consider now $V \in \mathrm{St}(\ell, h)$. The columns of $V$ are also orthonormal and therefore define the basis of an $h$-dimensional subspace of $\mathbb{R}^\ell$. When $V$ is applied to $\Sigma^\ell$, the eigenvectors in $U^\ell$ are projected onto the latter basis.
At this point, there are two aspects to notice:

* **[Eigenvectors (axes) of the projected ellipsoid as a combination of those in $U^\ell$.]** First, the eigenvectors of the projected abstract measure $N(0, V^\top \Sigma^\ell V)$ will not simply be the projections of the eigenvectors of $\Sigma^\ell$ onto the subspace, but can still be written in terms of the eigenvectors in $U^\ell$. This can be seen simply by considering the eigendecomposition. We have $\Sigma^\ell = U^\ell \Lambda U^{\ell^\top}$, implying $V^\top \Sigma^\ell V = V^\top U^\ell \Lambda U^{\ell^\top} V = Q^\top \Lambda Q$, where we defined $Q = U^{\ell^\top} V$. Specifically, $Q$ expresses the basis identified by $V$ in terms of linear combinations of the eigenvectors in $U^\ell$. Consequently, the new eigenvectors of the subspace, which are written in the basis $V$, can also be written as linear combinations of those in $U^\ell$.
* __[Variance cannot be increased by the projection.]__ Second, these new eigenvectors identify an $h$-dimensional ellipsoid that is, in a geometric sense, "contained" in the original $\ell$-dimensional ellipsoid. Since $V \in \mathrm{St}(\ell,h)$, it defines a contractive projection; it cannot create additional variance but only combine (redistribute) existing variance. This means that the variance along any direction in the subspace - determined by an eigenvalue of $V^\top \Sigma^\ell V$ - is a weighted combination of the variances (eigenvalues) in the original space, with the weights given by the entries of $V$ (which sum to one in a squared sense). Consequently, each axis (direction) of the projected ellipsoid has a length (variance) that is bounded both below and above by the lengths of the axes of the original ellipsoid. Since the variance relates to the eigenvalues as discussed above, this means the eigenvalues of $V^\top \Sigma^\ell V$ must lie within an interval determined by the minimum and maximum eigenvalues of $\Sigma^\ell$.
**[Projected ellipsoid coincides with the high-level one for the optimal V.]** Let us now consider $V$ to be the optimal causal abstraction, which we assume exists. From the spectral point of view, $V$ aligns the eigenvectors of the projected ellipsoid with those of $\Sigma^h$. Therefore, we can derive a necessary condition for the existence of the optimal abstraction by looking at the spectral decomposition of $\Sigma^h$ as if it were that of the optimal projection $V^\top \Sigma^\ell V$.

**[Additional contribution of the theorem and answer to the Reviewer's question.]** Our theorem does not simply state that the eigenvalues of $\Sigma^h$ lie within the range determined by those of $\Sigma^\ell$ (as indicated by Remark 1, directly stemming from the second point above). Indeed, the theorem characterizes the length of each of the new axes of the projected ellipsoid (which, for the optimal $V$, we can interpret as the ellipsoid of $\Sigma^h$) in terms of the lengths of the old axes (eigenvectors in $U^\ell$), providing a more precise necessary condition for the existence of a causal abstraction between $N(0, \Sigma^\ell)$ and $N(0, \Sigma^h)$. Specifically, sorting the axes by their lengths (square roots of eigenvalues), the length of the $i$-th new axis must lie between the lengths of the $i$-th and $(i+\ell-h)$-th old axes identified by the columns of $U^\ell$ (cf. Eq. (1) in the theorem).
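The interlacing in Eq. (1) is easy to check numerically: by the same separation mechanism used in the proof, it holds for any $V$ on the Stiefel manifold, not just the optimal one. Below is a minimal numpy sketch (dimensions, seed, and random draws are illustrative assumptions) verifying the bounds $\lambda_i \leq k_i \leq \lambda_{i+\ell-h}$ for a random covariance and a random Stiefel point:

```python
import numpy as np

rng = np.random.default_rng(0)
ell, h = 6, 3

# Random positive-definite low-level covariance Sigma^l.
A = rng.standard_normal((ell, ell))
Sigma_l = A @ A.T + ell * np.eye(ell)

# Random point on the Stiefel manifold St(ell, h) via thin QR.
V, _ = np.linalg.qr(rng.standard_normal((ell, h)))

lam = np.sort(np.linalg.eigvalsh(Sigma_l))          # eigenvalues of Sigma^l
k = np.sort(np.linalg.eigvalsh(V.T @ Sigma_l @ V))  # eigenvalues of the projection

# Eq. (1) with 0-based indices: lam[i] <= k[i] <= lam[i + ell - h].
ok = all(lam[i] - 1e-9 <= k[i] <= lam[i + ell - h] + 1e-9 for i in range(h))
```

Repeating the check with fresh draws of `Sigma_l` and `V` always leaves `ok` true, consistent with the necessary condition stated in the theorem.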
Summary: This paper introduces a framework for learning causal abstractions (CA) when structural causal models (SCMs) are unknown, interventional data is unavailable, and observational data is misaligned. The authors propose the Semantic Embedding Principle (SEP), which helps to reconstruct the relationship between the low-level and high-level causal variables. In particular, this paper focuses on the linear CA problem, which can be formulated as a Riemannian optimization problem on the Stiefel manifold. Various optimization algorithms are proposed to solve this problem.

Claims And Evidence: Claims are supported by evidence.

Methods And Evaluation Criteria: The proposed method and evaluation criteria make sense.

Theoretical Claims: I did not check the soundness of all the proofs.

Experimental Designs Or Analyses: I checked the design of the experiments.

Supplementary Material: I briefly skimmed the appendix but did not check the proofs.

Relation To Broader Scientific Literature: This paper studies causal abstraction without interventional data and without prior knowledge about the SCM. This is an important problem because many current causal abstraction algorithms require some kind of conjecture about the underlying SCM [1,2]. Without prior knowledge, these methods fail.

Essential References Not Discussed: I think the authors should at least discuss the following two papers [1,2], which study one application of causal abstraction in the literature.

[1] Geiger, Atticus, et al. "Causal abstractions of neural networks." Advances in Neural Information Processing Systems 34 (2021): 9574-9586.
[2] Geiger, Atticus, et al. "Finding alignments between interpretable causal variables and distributed neural representations." Causal Learning and Reasoning. PMLR, 2024.
Other Strengths And Weaknesses:

Strengths: The paper introduces the Semantic Embedding Principle (SEP), a novel and well-motivated approach to ensuring that high-level causal knowledge is faithfully embedded in low-level models. Unlike prior works that rely on full knowledge of SCMs, known DAG structures, or interventional data, this method operates when only partial prior knowledge is available. The methods are empirically validated on synthetic data as well as resting-state fMRI data, showcasing their effectiveness in neuroscience applications.

Weaknesses: This paper focuses on linear CA problems. Since many real-world causal systems are inherently nonlinear, this limits the immediate applicability of the approach. In the experimental part, there is no direct comparison with baseline methods.

Other Comments Or Suggestions:
1. It would be helpful if the authors could provide some background on category theory, which would help readers understand this paper better; most readers may not be familiar with category theory.
2. Line 257: the paper states that the rows of the support $(B\odot S)^T$ must sum up to one, but later in (iii) "the columns of the support $B\odot S$" must contain at least a one. Wouldn't the columns of $B\odot S$ be the rows of $(B\odot S)^T$? It seems to me that (ii) already implies (iii).
3. The hyperlinks (NA1)-(NA5) and (A1) do not work. I am somewhat confused about what a non-assumption is.

Questions For Authors: I would appreciate it if the authors could clarify the following problems.
1. It seems to me that I do not need category theory to understand the SEP. Could the authors explain how the category-theoretic formulation leads to this principle, or what the benefit of using the category-theoretic language is?
2. In the experimental part, the authors compare the performance of the three proposed methods. Can the authors compare their methods with existing methods [1,2]? I understand the setting may be a bit different.
It would also be beneficial to see how well the proposed method can perform without prior knowledge.
3. If I understand this paper correctly, it performs CA based on observational data. In practice, what we care about most is what would happen if we perform interventions, so one key question here is: after learning the abstraction map using the proposed method, does the abstraction map stay consistent under interventions?

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for their effort, valuable comments, and appreciation of our work. We address the Reviewer's concerns concisely below due to the text limit. We are happy to further discuss any additional concerns.

__Weaknesses__

__[W1]__ The Reviewer is right: in real-world applications, systems often display nonlinear interactions. However, the weakness they point out does not apply to our work, since _we do not assume any linearity of the low- and high-level SCMs_. Indeed, from both the theoretical and learning perspectives, (i) the category-theoretic treatment of SCMs, (ii) SEP in Def. 4.1, and (iii) the SEP-based CA learning problem in Prob. 1 are general and do not make any assumptions on the functional forms of the involved SCMs. It is only from the application perspective that we instantiate Prob. 1 in the case of linear CA. However, the linearity of the CA does not imply the linearity of the causal modules of the SCMs (cf. "Counterexample" in the reply to Reviewer 232e).

__[W2]__ We agree with the Reviewer on the importance of the comparison with baselines. Unfortunately, we were not able to find any baseline suitable for a fair comparison, as we dropped many assumptions of existing methods, viz. (NA1)-(NA5), as the Reviewer acknowledges. Requirements:

* Zennaro et al. (2023): (NA1), (NA2)
* Felekis et al. (2024), $(\tau,\omega)$ CA setting: (NA2), (NA3)
* Massida et al. (2024): (NA4), (NA5)
* Kekic et al. (2024) perform _targeted reduction of an SCM_ in the $(\tau,\omega)$ abstraction setting, a different task from ours. Requires (NA3), (NA4)
* Dyer et al. (2024) consider a different problem within the setting of $(\tau,\omega)$-abstraction. Requires (NA3)
* Geiger et al. (2021) and (2024) leverage the CA formalism to rigorously analyze the explainability of neural networks, a different task from ours.

Given a neural network as a low-level SCM, the assumption is that there exists a high-level interpretable SCM.
The goal is to evaluate the alignment between the two. Here, “low-level” means “black-box”; “high-level” refers to “human-understandable” SCMs built from theoretical and empirical modeling work. In contrast, in our work “low-” and “high-level” mean “micro” (fine-grained) and “macro” (coarse-grained). This is not a mere difference of terminology. Consider the SCMs $\mathcal{B}$ and $\mathcal{N}$ in Sec.3 of Geiger et al. (2024): $\mathcal{B}$ is a high-level model for $\mathcal{N}$ although both have the same structure. In our work, since there is no difference in interpretability between SCMs, the previous setting would lead to a contradiction, since it would mean treating a mere rotation of an SCM as a (macro) abstraction of it. \ Further, Geiger et al. (2021) and (2024) use _interchange intervention training_ (IIT) objectives, developed for neural networks and requiring the possibility to perform interventions over both the black-box and human-interpretable models. Citing from Geiger et al. (2024): _“interchange intervention (also known as activation patching), in which a neural network is provided a ‘base’ input, and sets of neurons are forced to take on the values they would have if different ‘source’ inputs were processed”_. Due to (NA1)-(NA3), it is not possible to adapt IIT to our setting. __Comments__ __[S1]__ We devoted App.B to “Category theory essentials”. If the paper is accepted, we will add a concise background to the main text using the extra page. __[S2]__ Thanks, there is a typo in the text. The corrected version is: _“... the columns of the support $(B \odot S)^\top$ must sum up to one, [...] the rows of the support $(B \odot S)^\top$ must contain at least a one.”_ __[S3]__ Thanks, we will fix the link. A non-assumption is an assumption made by existing methods that we do not make. We will specify this better in the paper. __Questions__ __[Q1]__ We work purely at the semantic (distributional) level, dropping (NA1)-(NA5).
Category theory (CT) is applied to isolate the distributional layer of the SCMs. This has an impact on SEP in Def. 4.1 since CT requires the involved mappings to be measurable. Finally, CT is used to generalize the existing CA $\alpha$-framework – which is posed in category-theoretic terms – into our setting. __[Q2]__ Please refer to [W2]. We can add the reported discussion in our manuscript to motivate the absence of a comparison with baselines. Additionally, among future works we will add the investigation of SEP in the setting of Geiger (2021, 2024). It is an intriguing research question that could lead to jointly aligning and compressing human-understandable models to AI ones in a principled manner. __[Q3]__ As stated in lines 194-195, _“Only if we identify the true constructive abstraction, we are guaranteed interventional consistency.“_. We plan to investigate in which cases interventional consistency can be guaranteed without additional assumptions (cf. Sec. 8). _References_: Already in the paper. Geiger (2021) and (2024) are [1,2] of the review. --- Rebuttal Comment 1.1: Comment: Thanks for clarifying my questions. I have raised the score to 3. I have some comments for the authors. - From definition 2.3 and my understanding, causal abstraction aims to find the triple $<\mathcal{R}, m, \alpha>$. In Problem 1, the authors assume that the mapping $m$ is known, which may be restrictive. In many cases, the most difficult part is to find the corresponding relationship $m$. - I also want to point out that the setting in Geiger et al. (2021) and (2024) is actually similar. In their setting, they consider a neural network as a low-level model and a high-level causal model. What they try to do is to find an alignment (can be formulated as an $\alpha$-abstraction in your terminology). While I understand that their approach is different than yours, I would say the problem is similar. 
--- Reply to Comment 1.1.1: Comment: We thank the Reviewer again for their appreciation and constructive comments leading to an improvement over our initial submission. We believe there is still room to clarify some key aspects of our work, and we hope the discussion below will aid in the understanding and assessment of our results. 1. Point 1 was raised by Reviewer p62B as well (see point 1 in their reported weaknesses), who, after our response, recognized CA learning as a difficult challenge even in the linear case with full prior knowledge and jointly sampled data (a requirement we drop, viz. NA5). Basically, the reason is that CA learning results in a nonconvex learning problem even in the latter case, which seems trivial only at first sight. We refer the Reviewer to our discussion with Reviewer p62B. Having said that, there are two points to be highlighted here:
* First, our approach only assumes partial prior knowledge of $m_{\mathcal{X}}$, allowing for the realistic scenario in which users may have no prior information about certain structural maps. We evaluated our methods in this challenging setting using both synthetic and real-world brain data, and they demonstrated robust performance across these contexts (see Sec.6 and Sec.7).
* Second, we believe assumptions should not be judged in absolute terms, by stripping them away from their context. Let us consider the works of Geiger et al. dealing with DNNs. In the case of DNNs, it is reasonable – and smart – to exploit the full knowledge of and access to the SCMs since they are inherently provided by the application setting. There is also no issue related to feasibility, cost, and ethics regarding interventions, which can be performed on those known SCMs essentially for free. Conversely, since DNNs are black boxes, it would be unreasonable for them to grant our assumption (A1) connecting some nodes in the low-level SCM to the high-level one.
Indeed, discovering that connection is precisely their aim in the interpretability setting. In other words, Geiger et al. aim at explaining DNNs by implementing causal abstraction analysis. \ By contrast, in domains such as neuroscience and finance, the situation is markedly different. We do not have full knowledge of and access to the SCMs, and we can rarely make assumptions on them. Thus it is important to drop (NA1), (NA2), and (NA4). Since we do not know the SCMs, we cannot generate data from them as for DNNs. Moreover, obtaining jointly sampled data can be hard (NA5) due to, for instance, feasibility and privacy issues. Think of traders operating in financial markets, or neuroscience teams acquiring data from patients. Also, obtaining interventional data can be problematic for ethical, feasibility, and cost reasons (NA3). This is a well-known problem within the causality community, and it motivated the development of causal discovery methods over the years. Conversely, what is reasonable and often feasible in these application areas is leveraging domain-specific knowledge that can be translated into partial information about $m_{\mathcal{X}}$. As shown in the paper, in neuroscience we can leverage the way brain atlases are built. Similarly, in finance, one can use the knowledge that broad industry portfolios are constructed from finer-grained indexes (as formalized in the Global Industry Classification Standard). This is the rationale behind assumption (A1), making it relevant and justifiable across a range of domains. 2. Concerning point 2, we agree with the Reviewer that the tasks are related, although there are some fundamental differences discussed in our previous rebuttal and point 1 above. We thank the Reviewer again for their constructive comments about the work of Geiger et al.
Indeed, the addition of these references as well as a discussion on the potential interplay between our and their work will enrich the paper and broaden its relevance for the ML community. We will add this material to the final version as already agreed. Nevertheless, we wish to reiterate that, in its current form, there is a fundamental mismatch in the required inputs that prevents a direct and fair empirical comparison between our methods and those of Geiger et al.
Summary: This paper addresses the challenge of learning causal abstractions (CAs) between structural causal models (SCMs) at different resolutions, a critical task for bridging causal evidence across scales (e.g., molecular vs. organism-level processes). The authors propose the Semantic Embedding Principle (SEP), which enforces that embedding a high-level SCM into a low-level one and abstracting back should reconstruct the original high-level model. They formalize SEP using a category-theoretic framework, decoupling structural and functional components of CAs. For learning, they focus on linear CAs under partial prior knowledge (assumption A1), framing the problem as Riemannian optimization over the Stiefel manifold. They develop optimization methods (LinSEPAL-ADMM, LinSEPAL-PG, CLinSEPAL) tailored to smooth/nonsmooth objectives in Gaussian settings, validated on synthetic and neuroimaging data. Claims And Evidence: The claims are generally supported by the provided definitions. However, the paper still lacks clarity when introducing new terminology, e.g., the introduction of causal abstraction. Methods And Evaluation Criteria: Yes, the methods (LinSEPAL-ADMM, LinSEPAL-PG, CLinSEPAL) are well-suited for the problem. Theoretical Claims: The theoretical claims seem sound. However, the paper does not provide detailed proofs for the identifiability of causal abstractions. Experimental Designs Or Analyses: The experimental design is generally sound, but the experiments lack a discussion of related causal representation methods. Supplementary Material: The supplementary material was reviewed, particularly the theory formalization. Relation To Broader Scientific Literature: The key contribution is somewhat limited in the broader literature on causal reasoning and representation learning.
While the introduction of SEP and its formalization using category theory builds on prior work in causal abstraction (Rubenstein et al., 2017; Beckers & Halpern, 2019) and extends it to a learning framework, the paper could better situate itself relative to causal representation learning methods, which address similar challenges but from a different perspective. Essential References Not Discussed: Yes, the paper would benefit from a more thorough discussion of causal representation learning methods, such as CausalVAE [1] and SCM-VAE [2], which also aim to learn causal structures from data but focus on disentangled representations rather than multi-resolution abstractions. Other Strengths And Weaknesses: Strengths - A causal abstraction learning method is proposed with the integration of Riemannian optimization. - Experiments on synthetic and real-world brain data showcase the feasibility of the approach under varying levels of prior knowledge. Weaknesses: - As for clarity, I think at the start of the paper, it should first clarify what causal abstraction is and how it relates to and differs from the similar concept of causal representation learning. - Moreover, this paper fails to properly motivate why we need causal abstraction, how it can be used in terms of applications, and how other methods (e.g., the causal representation learning methods [1-2]) fail on those tasks. - As for the causal abstraction, can you show the identifiability of the causal abstraction? Moreover, does the structure of the SCM need to be specified a priori for causal abstraction? [1] Yang, Mengyue, et al. "Causalvae: Disentangled representation learning via neural structural causal models." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021. [2] Komanduri, Aneesh, et al. "Scm-vae: Learning identifiable causal representations via structural knowledge." 2022 IEEE International Conference on Big Data (Big Data). IEEE, 2022.
Other Comments Or Suggestions: N/A Questions For Authors: See the weaknesses above. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for their effort and valuable comments. We address below the Reviewer’s major concerns on the relationship between CA learning and CRL, and we are happy to discuss further to clarify any additional points. __Claims And Evidence__ Causal abstraction is intuitively introduced at the beginning of Sec.1, from line 46 1st col to line 49 2nd col; and formally in Def. 2.3. However, we are happy to improve our manuscript as per the Reviewer's suggestion by adding the proposed text in italics reported below, within our reply to “Weaknesses”. __Theoretical Claims__ We do not make any claim about the _identifiability_ of the causal abstraction for the reasons reported below in [W3.2]. __Experimental Designs Or Analyses__ Please refer to [W1-W2] below. CA and CRL tackle different learning tasks, although a comparison between them is valuable to further improve the paper. __Relation To Broader Scientific Literature__ Please refer to [W1-W2] below. We believe that our work is correctly placed within the literature, although we consider the comparison with CRL valuable and plan to add it in the revised version of the manuscript. __Essential References Not Discussed__ Please refer to [W1-W2] below. __Weaknesses__ __[W1-W2]__ We agree with the Reviewer that an additional comparison between CA learning and CRL within Sec.1 would improve the paper, specifically aiding the reader in setting CA learning apart from CRL. We propose to add the following discussion in the revised version: _Causal abstraction (CA) learning aims at learning a mapping between two different SCMs, for instance, the architecture of a neural network and a human-interpretable causal model [2], or an SCM of brain regions of interest (ROIs) and one of brain functional activity (see Sec.7)._ _Within the CA literature, it is usual to distinguish between low- and high-level variables (or SCMs).
Although the same adjectives are also usually employed in the causal representation learning (CRL, [3]) literature, they convey different meanings in the two research fields._ _In CA, both low- and high-level refer to endogenous variables being causal and observed. Specifically, the latter are said to be causal since they relate to each other within the SCM, and are the relevant variables for interventions and counterfactual reasoning. Instead, in CRL, the low-level variables are observed but not causal, that is, mere mathematical functions of high-level causal but unobserved variables, where causal has the same meaning as above. As an example, the low-level variables could be the pixels of an image, whereas the high-level ones are concepts related by an SCM [4, 5]. Additionally, the high-level variables could also be unlabeled [3]._ _Consequently, the goal of CRL is also deeply different from that of CA: given the low-level variables, CRL algorithms aim at learning (i) the high-level variables and (ii) the causal structure underlying these variables. In brief, while CRL extracts a meaningful causal representation from non-causal data to improve model performance and interpretability [4, 5], CA learns mappings between already meaningful representations to enable causal knowledge transfer and communication between SCMs working at different levels of abstraction._ __[W3.1]__ Identifiability of CA is not as central as identifiability of causal structures in CRL and causal discovery. In CA learning, we mainly care about (structural and distributional) interventional consistency. It is well known that there might exist multiple causal abstractions between low- and high-level SCMs (see Example 5.2 in [1], where symmetry allows for multiple interventionally consistent causal abstractions). __[W3.2]__ As we drop (NA1), (NA2), and (NA4), our work shows that it is possible to learn causal abstractions without assuming any knowledge of the underlying low- and high-level SCMs.
_References_ [1] Zennaro, F. M., Bishop, N., Dyer, J., Felekis, Y., Calinescu, A., Wooldridge, M., & Damoulas, T. (2024, September). Causally Abstracted Multi-armed Bandits. In Uncertainty in Artificial Intelligence (pp. 4109-4139). PMLR. [2] Geiger, A., Lu, H., Icard, T., & Potts, C. (2021). Causal abstractions of neural networks. Advances in Neural Information Processing Systems, 34, 9574-9586. [3] Schölkopf, B., Locatello, F., Bauer, S., Ke, N. R., Kalchbrenner, N., Goyal, A., & Bengio, Y. (2021). Toward causal representation learning. Proceedings of the IEEE, 109(5), 612-634. [4] Yang, M., Liu, F., Chen, Z., Shen, X., Hao, J., & Wang, J. (2021). Causalvae: Disentangled representation learning via neural structural causal models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9593-9602). [5] Komanduri, A., Wu, Y., Huang, W., Chen, F., & Wu, X. (2022, December). Scm-vae: Learning identifiable causal representations via structural knowledge. In 2022 IEEE International Conference on Big Data (Big Data) (pp. 1014-1023). IEEE. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification and I would like to raise the score to 3. --- Reply to Comment 1.1.1: Comment: We thank the Reviewer for their appreciation. We are glad that our rebuttal effectively addressed the Reviewer’s concerns. As the rebuttal period is nearing its end - and considering the new conference rules, this may potentially be our last message visible to the Reviewers - we would like to take this opportunity to provide a brief summary. Our goal is to encourage discussion among reviewers and support a fair and thorough evaluation of our work. **We thank again all the Reviewers for their constructive approach**. **All concerns raised were factually addressed in the rebuttal**, leading to score increases from Reviewers Mguo, rKnp, and p62B, whose initial assessments were negative. Reviewer 232e was already positive. 
In their initial reviews, the following strengths were noted: * **Novelty** of the Semantic Embedding Principle (SEP) for CA learning (Reviewer rKnp) * Its **clear motivation** and **presentation** (Reviewers rKnp and p62B), * The formulation of **Riemannian CA learning problems** on the Stiefel manifold (Reviewers 232e, Mguo, and p62B) * And the methods' **practical applicability**, made possible by relaxing five restrictive assumptions common to prior work across various domains The main concerns were addressed as follows: * Provided a **factual clarification** that our application setting does **not imply linearity** of the underlying SCMs (Reviewers 232e and rKnp) * **Integrated discussions** - outlined in our rebuttal - on: - The distinction between CA learning and causal representation learning (Reviewer Mguo) - The lack of suitable baselines in the existing literature (Reviewer rKnp) - The comparison and interplay with the work of Geiger et al. (Reviewer rKnp) * **Resolved a misunderstanding** concerning the synthetic experiments (Reviewer p62B) * **Clarified** that nonconvexity arises in regression problems even under full prior knowledge and joint sampling from the SCMs (Reviewer p62B). This clarification led the Reviewer to recognize the inherent complexity of CA learning. Additionally, we proved a **new theorem** establishing a necessary (spectral) condition for the existence of a linear CA between Gaussian measures, and provided its **geometrical interpretation** (Reviewer p62B).
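As background for the Stiefel-manifold formulation discussed in this thread: a standard way to keep a linear map with orthonormal columns feasible during optimization is to project back onto the Stiefel manifold $St(n,p)$ after each Euclidean gradient step, using the polar factor of an SVD. The sketch below is a minimal, generic illustration of that projection idea only; it is *not* the paper's LinSEPAL-ADMM/PG or CLinSEPAL methods, and the step size and random matrices are arbitrary choices for demonstration.

```python
import numpy as np

def stiefel_project(M):
    """Polar projection of M (n x p, n >= p) onto the Stiefel manifold:
    the nearest matrix with orthonormal columns, via a thin SVD."""
    U, _, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ Vt

def projected_gradient_step(X, grad, lr=0.1):
    # Euclidean step on some objective, then retract onto St(n, p).
    return stiefel_project(X - lr * grad)

rng = np.random.default_rng(0)
X = stiefel_project(rng.standard_normal((5, 2)))
assert np.allclose(X.T @ X, np.eye(2))      # orthonormal columns
X2 = projected_gradient_step(X, rng.standard_normal((5, 2)))
assert np.allclose(X2.T @ X2, np.eye(2))    # feasibility preserved after a step
```

The polar projection is the closest Stiefel point in Frobenius norm, which is why it is a common retraction choice in Riemannian first-order methods.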
Summary: This paper formalises the problem of causal abstraction in category-theory language, and then introduces the Semantic Embedding Principle (SEP). Intuitively, SEP states that if we go from a high-level model to a low-level one and then abstract back, one should get the initial high-level model back; the way I understand it is that high-level models should retain all the information of the low-level model that produced them. The authors turn some general assumptions in causal abstraction into non-assumptions and, using SEP together with certain assumptions, they formulate the problem as a Riemannian optimisation problem, which they then go on to test on synthetic data and then real data from brain networks. Claims And Evidence: From what I understood of the paper (and I have to admit I don’t think I fully understood it), I would say the claims are only partly supported. Let me elaborate. The authors claim that most of the previous work makes assumptions that are unacceptable in the causal abstraction setting and thus are not so applicable, whereas their assumptions are based on information about the structure of CA. For example, the assumptions they use for the experiments are linear causal abstractions, constructability of the abstraction, and Gaussianity of the errors. I don’t know how the authors feel about this, but I would say that these assumptions are as strong as assuming the functional form of the SCM. In the end the optimisation ends up being similar to other research, namely the KL divergence between the pushforward of the low-level distribution and the high-level model. Methods And Evaluation Criteria: Yes, their methods seem reasonable to me. Although, as mentioned above, they seem very similar to what we already see in the abstraction literature, with the backing of the categorical formalisation and the exception of the need for interventional data, which they don’t assume access to. Theoretical Claims: Not in detail.
Experimental Designs Or Analyses: I checked what is in the main document. They seem reasonable. Furthermore, I appreciate the application to brain networks as a real-world scenario. Supplementary Material: I skimmed through all of the supplementary material, with more focus on A-F and skipping D almost entirely. Relation To Broader Scientific Literature: The contribution seems relevant to the literature. It gives applicability to the categorical perspective from Rischel (2020). Essential References Not Discussed: The authors do a good job at referencing related literature. There is nothing very obvious that I believe they missed. Other Strengths And Weaknesses: Strengths: - I think the use of the categorical language to go beyond interventional consistency is interesting and has the potential to allow the application of causal abstraction in other areas, as they show with the brain networks. - The optimisation procedure and its description look very good as well, although admittedly the details are beyond my optimisation knowledge in relation also to the time I can invest in reviewing the paper. Weaknesses: - Unless I’m misunderstanding something in the paper, the excessive claims, as discussed above, seem to be the greatest weakness. Other Comments Or Suggestions: Please see the weaknesses for some potential discussion. Additionally, I would like to know what the authors think is needed to change the assumption of having a linear abstraction to something more complex. That is, how easy is it to find a space that satisfies SEP that we can also optimise on? Questions For Authors: None Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the Reviewer for their effort, valuable comments, and appreciation of our work. We address below the Reviewer’s concerns. We are happy to discuss further to dispel any additional concerns. __Claims And Evidence__ __[Our claims]__ Our claims are highlighted in the “Contributions” paragraph of Sec.1. We support them both theoretically and empirically. We are open to toning down any eventual excessive claims as per the Reviewer's suggestion. At the moment, it is not clear to us which claim is problematic. We kindly ask the Reviewer to specify the unsupported or unjustified claims. __[Assumptions]__ We believe the Reviewer’s concern is centered on the assumptions behind our work. We clarify this point below. Regarding the SOTA, we do not believe – and do not state in the manuscript – that the assumptions of existing works are _unacceptable_. We say they are _restrictive_ for tackling CA learning in real-world applications (cf. line 60 1st col, lines 398-399 2nd col). Accordingly, we make only one assumption, viz. (A1), supported by empirical evidence (cf. Sec.7). Also leveraging (A1), we pose the general SEP-based CA learning problem (cf. Prob. 1). This problem does not assume any specific (i) functional form for the CA, (ii) probability measures for the involved SCMs, or (iii) distance function for quantifying the misalignment (cf. lines 167 1st col - 170 2nd col). Prob. 1 should be read as a learning paradigm for CA rather than a single learning problem. As an application, we specialize Prob. 1 to the case of (i) linear CA, (ii) Gaussian measures for the endogenous variables, and (iii) the KL divergence, arriving at Probs. 2 and 3. We remark that (i)-(iii) are not assumptions within our work, but rather a particular case of Prob. 1. Concerning the Reviewer’s statement, it is unclear to us what they mean by “Gaussianity of the errors”. If error stands for exogenous variables, then we remark that we do not make any such assumption in our work.
If they mean Gaussianity of the endogenous probability measures, then we provide below a simple counterexample showing that (nonlinear) functional forms other than linear for the causal modules are compatible with (i)-(iii). We will consider lognormally distributed variables, as those are relevant in application domains such as quantitative finance: stock prices are considered to be log-normally distributed (Black and Scholes, 1973; Fama, 1965). __Counterexample.__ Denote by $U_i$ and $L_i$ the exogenous and endogenous variables of the low-level SCM. Denote by $V_j$ and $H_j$ the exogenous and endogenous variables of the high-level SCM. Let $T:[0,1] \rightarrow [0,1]$ be a measure-preserving map (e.g., $T(x)=1- |2x-1|$), $f$ be the quantile function of $N(0, c_4^2)$, and $g$ the CDF of $N(0, c_1^2+c_2^2+c_3^2)$. _Causal Abstraction complying with SEP_: $H_1=1/\sqrt{2} (L_1 + L_2)$, $H_2=L_3$.
* Low-level SCM:
  * Exogenous: $U_1, U_2, U_3$; each following $\log{N(0,1)}$
  * Endogenous: $L_1=\log{U_1}; L_2=\log{U_2}; L_3= (f \circ T \circ g) (c_1 L_1 + c_2 L_2 + c_3 \log{U_3})$
  * Observational distributions for the endogenous: $L_1 \sim N(0,1), L_2 \sim N(0,1), L_3 \sim N(0,c_4^2)$.
* High-level observational distributions entailed by the CA: $H_1 \sim N(0,1)$, $H_2 \sim N(0, c_4^2)$
* High-level SCM:
  * Exogenous: $V_1, V_2$; each following $\log{N(0,1)}$
  * Endogenous: $H_1 = \log{V_1}, H_2 = (f \circ T \circ g)(d_1 H_1 + d_2 \log{V_2})$; with $d_1=\sqrt{ c_1^2 + c_2^2 }$ and $d_2=c_3$.
  * Observational distributions for the endogenous: $H_1 \sim N(0,1)$, $H_2 \sim N(0, c_4^2)$

Finally, the KL divergence is used as an objective function in Probs. 2 and 3, and is not limited to any specific probability measure or functional form for the SCM. __[Relations to existing methods]__ We deliberately selected the KL divergence for its relevance in ML applications.
Furthermore, we consider the application of the KL divergence to probability measures of different dimensionality, differently from previous work (Kekic et al., 2023; Dyer et al., 2024). Additionally, Zennaro et al. (2023) use a regularized Jensen-Shannon divergence; Felekis et al. (2024) an $\omega$-informed cost of transport with entropic and do-intervention regularization terms; Massida et al. (2024) perform OLS estimation between the low- and high-level SCM data. None of the works above is similar to the Riemannian optimization problems, viz. Probs. 2 and 3, posed in our linear CA application. _Additional Ref.s To Those in The Paper_ [1] Black, F., & Scholes, M. (1973). The pricing of options and corporate liabilities. Journal of Political Economy, 81(3), 637-654. [2] Fama, E. F. (1965). The behavior of stock-market prices. The Journal of Business, 38(1), 34-105. __Weaknesses__ See “Claims” above. __Comments or Suggestions__ Def. 4.1 and Prob. 1 are general and suitable for any CAs. SEP requires the CA to have a right-inverse (cf. lines 175-176 1st col); this is the condition to be enforced during the optimization process.
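The counterexample in the rebuttal above can also be checked numerically. The sketch below is a Monte Carlo verification under illustrative constants $c_1,\dots,c_4$ (our choice, not from the rebuttal), using SciPy's Gaussian CDF and quantile for $g$ and $f$: log-normal exogenous noise pushed through the nonlinear module $(f \circ T \circ g)$ still yields the Gaussian endogenous distributions $H_1 \sim N(0,1)$ and $H_2 \sim N(0, c_4^2)$ claimed for the abstraction.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
c1, c2, c3, c4 = 0.5, 0.7, 0.3, 1.2          # illustrative constants
s = np.sqrt(c1**2 + c2**2 + c3**2)

def T(x):
    # measure-preserving tent map on [0, 1]
    return 1.0 - np.abs(2.0 * x - 1.0)

def f_T_g(x):
    # (f o T o g): maps N(0, s^2) samples to N(0, c4^2) samples
    return norm.ppf(T(norm.cdf(x, scale=s)), scale=c4)

# Low-level SCM: exogenous U_i ~ logN(0, 1), so log(U_i) ~ N(0, 1)
U = rng.lognormal(size=(3, 500_000))
L1, L2 = np.log(U[0]), np.log(U[1])
L3 = f_T_g(c1 * L1 + c2 * L2 + c3 * np.log(U[2]))   # nonlinear causal module

# Causal abstraction: H1 = (L1 + L2)/sqrt(2), H2 = L3
H1, H2 = (L1 + L2) / np.sqrt(2.0), L3
assert abs(H1.mean()) < 0.01 and abs(H1.std() - 1.0) < 0.01   # H1 ~ N(0, 1)
assert abs(H2.mean()) < 0.01 and abs(H2.std() - c4) < 0.01    # H2 ~ N(0, c4^2)
```

The sample moments match the Gaussian targets even though the low-level module is nonlinear, which is the point of the counterexample: linearity of the abstraction does not imply linearity of the causal modules.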
MedXpertQA: Benchmarking Expert-Level Medical Reasoning and Understanding
Accept (poster)
Summary: This paper introduces MedXpertQA, a challenging benchmark comprising 4,460 clinical medical questions. Compared to previous simple question-answering (QA) datasets, MedXpertQA conducts difficulty and diversity filtering to ensure data quality. Additionally, the authors evaluate frontier LLMs on the proposed benchmark. Evaluation results are also thoroughly discussed. ## update after rebuttal The authors' rebuttal addressed most of my concerns. I decided to keep my original score as accept. This paper is a valuable contribution to the medical LLMs community. Claims And Evidence: This paper claims the increased difficulty and diversity of the proposed MedXpertQA dataset. These claims are supported by the observed decline in performance of evaluated LLMs and the analysis of data distribution. Methods And Evaluation Criteria: The data collection and filtering in this paper are reasonable and solid. Table 1 offers a comparison among current medical QA datasets, which supports validating the quality of the proposed benchmark. Theoretical Claims: This paper employs the Brier score to evaluate the posterior difficulty of each question, which is a logical approach. Experimental Designs Or Analyses: Tables 3 and 4 benchmark leading LLMs on MedXpertQA, suggesting the effectiveness of reasoning LLMs (such as o1 and QVQ-72B) in addressing challenging medical questions. However, several weaknesses still exist in the experimental setting: 1. Lack of evaluation of different test-time scaling methods (such as RAG) and prompting strategies (few-shot, CoT) on the proposed benchmark. Will retrieval-based test-time scaling methods facilitate the solution of challenging medical reasoning problems? 2. Lack of benchmarking of specialized medical LLMs. Will these medical LLMs perform better on knowledge questions?
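As a side note on the Brier-score-based difficulty assessment mentioned under Theoretical Claims above: the Brier score is simply the mean squared error between predicted probabilities and binary outcomes, so a question on which models are confidently wrong scores as hard. The snippet below is a generic sketch of that computation; the paper's exact per-question aggregation over model predictions may differ.

```python
import numpy as np

def brier_score(probs, outcomes):
    """Mean squared difference between predicted probabilities of correctness
    and the observed 0/1 outcomes (lower = better calibrated)."""
    probs, outcomes = np.asarray(probs, float), np.asarray(outcomes, float)
    return float(np.mean((probs - outcomes) ** 2))

assert brier_score([1.0, 0.0], [1, 0]) == 0.0   # perfectly confident and right
assert brier_score([0.5, 0.5], [1, 0]) == 0.25  # uninformative predictions
```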
Supplementary Material: The additional material details the methodology for data filtering and includes a case-by-case comparison with other medical question-answering datasets. Relation To Broader Scientific Literature: This paper provides a more challenging benchmark to further evaluate the performance of LLMs in the medical domain. Compared to previous works such as MedQA [1], MedMCQA [2], and PubMedQA [3], the proposed MedXpertQA can better evaluate the capability of solving challenging clinical problems. [1] Jin, Di, et al. "What disease does this patient have? a large-scale open domain question answering dataset from medical exams." Applied Sciences 11.14 (2021): 6421. [2] Pal, Ankit, Logesh Kumar Umapathi, and Malaikannan Sankarasubbu. "Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering." Conference on health, inference, and learning. PMLR, 2022. [3] Jin, Qiao, et al. "Pubmedqa: A dataset for biomedical research question answering." arXiv preprint arXiv:1909.06146 (2019).
Rebuttal 1: Rebuttal: We appreciate your insightful comments and hope to further address your concerns. # Response 1 - Experimental Designs Or Analyses ### **1.1 Evaluation on Different TTS Methods** Thank you for the valuable suggestion. We will include these comparisons in the next versions of the paper. At the same time, the main motivation of our work is the high-quality benchmark itself, and dedicated works that focus on comparing inference-time methods (MedPrompt [1]) are better references for researchers interested in this direction. We also note that the inference scheme we use, *i.e.* zero-shot CoT, aligns with mainstream consensus for evaluating foundation models. We hope the above points address your concerns. ### **1.2 Specialist Model Results** Your suggestions are very valuable. We agree with the need to include medical specialist models in our evaluation. We did consider it, but only left it out due to time constraints. As preliminary experiments, we evaluated a specialist text model UltraMedical-70B [2]: [UltraMedical-70B Results](https://postimg.cc/RJRkLgVN). We provide the results of o1 for comparison, showing that the specialist model still falls behind the most advanced general reasoning model. We plan to more thoroughly survey domain-specific models and present the full results in the next version of our paper. We will also systematically compare the performances of generalist models, specialist models, and humans to gain further insights. # Response 2 - Essential References Not Discussed We totally agree that comparisons with other demanding benchmarks will be informative. We provide the dataset statistics of Humanity’s Last Exam (Biology / Medicine): [Humanity’s Last Exam Dataset](https://postimg.cc/1nZRWYPH). We note obvious discrepancies in relative model performances between the two benchmarks, which demonstrate the informative value of diverse benchmarks. 
Regarding benchmark difficulty, we do recognize that the absolute scores for the Biology / Medicine subset of HLE are lower than those for MedXpertQA. However, this does not take away from the quality and value of MedXpertQA for three reasons: - First, most of HLE's biology / medical questions focus on biology instead of clinically relevant medical tasks, for instance: [Biology Question Example](https://postimg.cc/t1xwzcP4). - Second, questions in HLE that cover comprehensive patient information and thus support realistic clinical reasoning, as most questions in MedXpertQA do, are scarce (~35 in total). Many questions related to medical tasks only cover single steps within the complex reasoning process required to form a clinical decision for a patient. For example, the following question represents the step of interpreting a single statistic (R-R interval) from a single piece of patient information (ECG results): > What is the longest R-R interval in seconds on this rhythm strip ECG? In contrast, questions in MedXpertQA that involve ECGs typically include the image as one piece of information within a multifaceted, realistic patient profile. The answerer not only interprets the ECG but also needs to consider the role of this information within a more complex decision, *e.g.* proposing a diagnosis or treatment. The example illustrated in Figure 2 in our paper precisely shows this scenario. Therefore, while medical questions in HLE effectively pinpoint challenging individual tasks, they are less informative of models' holistic abilities and less clinically relevant. As Reviewer y9fU happened to mention, "reasoning-heavy" and "difficult" are two different concepts, and we believe MedXpertQA has an advantage over HLE in terms of the former. - Third, MedXpertQA and HLE are fundamentally different types of benchmarks.
The construction of HLE was extremely labor-intensive, requiring experts to manually design individual questions, whereas the construction process of MedXpertQA is scalable and systematic, enabling large-scale evaluations over questions that are clinically relevant and diverse. Moreover, the difference in benchmark scale is crucial. It is in fact possible for us to achieve a similar level of difficulty by further stringent filtering, reducing the dataset size — we simply need to retain questions that stump current models. This will, however, contradict the need for systematic and comprehensive coverage in medical evaluation, as we mentioned in Section 3.1. We believe MedXpertQA strikes a good balance. # Response 3 - Other Comments Or Suggestions Yes, thanks for your suggestion! We will clarify this in the caption later. ### **References:** [1] https://arxiv.org/abs/2311.16452 [2] https://arxiv.org/abs/2406.03949 --- Rebuttal Comment 1.1: Comment: The author rebuttal addressed most of my concerns. I will keep my score at this stage. --- Reply to Comment 1.1.1: Comment: Thanks for your response. Some new information regarding the comparison between MedXpertQA and HLE has caught our attention, so we would like to present it here to further address your concerns. --- Previously, we obtained the Biology / Medicine (B/M) scores from the original HLE paper for comparison. A recent study has conducted an evaluation specifically on HLE (Med) and MedXpertQA. *It is important to note that this evaluation is not on the B/M subset but rather on a more fine-grained subset, Medicine, which also serves as a medical evaluation benchmark.* The authors selected medicine questions from the B/M subset in HLE. Their fine-grained subset allows for new comparisons on both dataset statistics and benchmark difficulty. Surprisingly, our benchmark appears to be more challenging than HLE (Med): https://postimg.cc/RNdmR5nz. 
**Quick Look:**

| Benchmark | Llama3.1-Instruct-8B | Mistral-Instruct-7B |
| --------------- | -------------------- | ------------------- |
| HLE (Med) | 13.6 | 14.6 |
| MedXpertQA Text | 13.2 | 11.4 |

**Statistics:** Below, we present statistical information on the HLE (Med) benchmark based on the released dataset:

| Benchmark | # Size | # Avg Len |
| --------------- | ------ | --------- |
| HLE (Med) | 103 | 224.39 |
| MedXpertQA Text | 2450 | 257.37 |

HLE, derived from original questions contributed by nearly 1,000 experts representing over 500 institutions across 50 countries, has attracted considerable attention. It has emerged as one of the most challenging benchmarks for assessing the limitations of state-of-the-art models. It can be observed that MedXpertQA is not only **more comprehensive** but also **more challenging**. In comparison to HLE (Med), which was meticulously constructed through extensive human effort, our benchmark is approximately **24x** larger and presents an even greater level of difficulty. This makes it the most extensive and demanding medical question-answering benchmark to date. --- We hope that the above clarifications fully address all of your concerns.
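As a side note, summary statistics of the kind reported above (dataset size and average question length) are straightforward to reproduce; the sketch below uses whitespace word counts and a hypothetical `question` field as stand-ins for whatever tokenization and schema the authors actually used.

```python
# Minimal sketch for computing benchmark summary statistics.
# The "question" field name and word-level length are assumptions,
# not tied to the released MedXpertQA or HLE data formats.

def summarize(benchmark):
    """Return (size, average question length in words) for a list of
    question dicts carrying a 'question' text field."""
    size = len(benchmark)
    if size == 0:
        return 0, 0.0
    avg_len = sum(len(q["question"].split()) for q in benchmark) / size
    return size, avg_len

sample = [
    {"question": "What is the longest R-R interval on this rhythm strip?"},
    {"question": "Which diagnosis best explains these findings?"},
]
size, avg_len = summarize(sample)  # 2 questions, 8.0 words on average
```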
Summary: The authors present a new expert-level knowledge and reasoning benchmark for real-world clinical scenarios. It appears to be the largest multimodal dataset in this category (with human annotations) and the second largest in the text-only category. This new dataset seems to set a new performance barrier on this task for the most common LMMs. Claims And Evidence: Yes. (Minor question on this asked at the end) Methods And Evaluation Criteria: Yes. (Not aware of the best practices in general) Theoretical Claims: None. Experimental Designs Or Analyses: Sound. Supplementary Material: Not in depth. I assure everyone involved I have adhered to the leakage prevention statement in Appendix A. Relation To Broader Scientific Literature: Extremely important. 1. While there are larger datasets, this seems to be the most varied and largest expert-annotated dataset. 2. There is a clear need for a tougher benchmark. The results here depict a nice "barrier". 3. The overall evaluation framework seems to be well within the accepted norms across various previous publications in the domain. Essential References Not Discussed: None missing to my limited knowledge. Other Strengths And Weaknesses: While not a weakness per se, why not evaluate a domain-specific model such as LLaVA-Med (https://github.com/microsoft/LLaVA-Med)? Other Comments Or Suggestions: 1. If o1 has such a high score in USMLE (stand-alone), does that simply imply that other exams are harder? Why is there such a drastic reduction in performance when exams of other countries are considered? It would be interesting to have a comment on whether this stems from the dataset's structuring or is representative of the exams themselves. 2. While I am not a clinician, I am not sure there is succinct literature proving that exam questions (which seem to be the sole data source) are completely "real-world" representative.
(Note: In my short search, I could not find a definitive citation either agreeing or disagreeing with this.) Maybe having a comment on this could provide stronger confidence to a non-clinical user/reader? Questions For Authors: 1. It seems to me that this is a "simple (only in terms of comparison, not effort)" extension of previous datasets by collecting questions across more exams. Would it then not be easy to fine-tune on these exams (which I imagine are available to all, even for a price) and get a boosted score, knowing that all the questions are limited to these exams? Ethical Review Flag: Flag this paper for an ethics review. Ethics Expertise Needed: ['Legal Compliance (e.g., GDPR, copyright, terms of use)'] Ethical Review Concerns: Massive medical dataset involving human expert annotation. Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: **Thank you very much for your recognition of our work and your valuable suggestions!** # Response 1 - Other Strengths And Weaknesses **You make a great point. Please refer to Response 1.2 (Specialist Model Results) to Reviewer vM2L. Thank you for your understanding!** # Response 2 - Other Comments Or Suggestions ### **2.1 o1 Performance on USMLE** - We want to clarify that the difficulty of MedXpertQA is not due to additional question sources or countries. During dataset construction, we found USMLE questions to be the most difficult, even compared to specialist board exams. Specialty assessments were included to enhance clinical relevance, not to increase difficulty. - To explain your observation of o1's high scores: USMLE questions in our benchmark and those in MedQA [1] are markedly different. USMLE does not publicize exam questions, so the term "USMLE questions" usually refers to mock questions devised by third-party experts, with varying difficulties. For MedXpertQA, we collected questions from high-quality, diverse sources and performed rigorous dataset construction steps, e.g. question filtering. This ensured the difficulty of MedXpertQA, accounting for o1's lower score compared to other benchmarks. In summary, the difficulty of MedXpertQA is primarily due to the dataset construction process. Note: EDiR questions only make up a small percentage of MedXpertQA, and the large majority of questions still represent US exams. ### **2.2 Real-World Representative** **Point 1:** First, we would like to emphasize that even if exam questions do not completely simulate real-world clinical scenarios, this is **a common issue faced by all existing medical benchmarks** (every baseline in Tables 1 and 2), not a new challenge introduced by our work. However, these benchmarks still represent the most important and impactful way to evaluate medical AI.
For example, MedQA has already been cited over 100 times this year alone and has been used to evaluate frontier medical AI models such as MedPaLM 2 [2] and Med-Gemini [3]. We believe MedXpertQA can greatly contribute to medical AI progress, and that its impact is amplified by the widespread use of similar, yet less clinically relevant alternatives. Moreover, your concerns are totally valid - we are aware of some works calling into question whether exam questions are representative of the real world. However, full representation of clinical relevance seems to be an elusive and distant topic that no existing benchmark can achieve, nor is it a claim of our work. We intend to convey in good faith that new work should be recognized primarily for the significant improvements it introduces over existing works, rather than for pursuing overarching goals. **Point 2:** Secondly, we need to clarify that we do NOT claim that MedXpertQA **completely** simulates real-world scenarios. Our claim has always been that it **significantly improves** (as stated in the Abstract) clinical relevance compared with previous, widely adopted benchmarks, which we believe is sufficiently proven and represents an important contribution in itself. Compared with previous benchmarks, MedXpertQA improves clinical relevance through fundamental improvements for both subsets: - For Text, our addition of medical specialist evaluations is certain to improve relevance, since realistic medical tasks are highly specialized and assigned to different departments. A single, general evaluation suite is evidently inadequate for evaluating the full spectrum of clinical tasks. GMAI-MMBench [4] raised a similar claim. Our discussions with medical expert collaborators further verified this point. - For MM, current benchmarks commonly design surface-level questions without realistic patient information, leading to extremely limited relevance.
MedXpertQA's questions were constructed by human experts and intended to demonstrate how well a medical student would likely perform when facing real patients. These expert-designed questions far surpass those in existing benchmarks constructed through fixed templates or automatic LLM generation in quality and relevance. The corresponding images are also realistic and diverse. # Response 3 - Questions for Authors Again, your concerns are reasonable, and we were also worried about this during our work. This is the main reason we did not publicize the sources of our questions. We also note that this issue is ubiquitous for benchmarks across different domains, such as the AIME datasets in mathematics. While omitting sources may not solve the problem completely, there is little more we can feasibly do for now, and we hope you understand the difficulty behind this issue. ### **References:** [1] [https://arxiv.org/abs/2009.13081](https://arxiv.org/abs/2009.13081) [2] [https://arxiv.org/abs/2305.09617](https://arxiv.org/abs/2305.09617) [3] [https://arxiv.org/abs/2404.18416](https://arxiv.org/abs/2404.18416) [4] [https://arxiv.org/abs/2408.03361](https://arxiv.org/abs/2408.03361)
Summary: The paper introduces MedXpertQA, a novel and challenging benchmark for evaluating expert-level medical knowledge and reasoning. MedXpertQA consists of 4,460 questions covering 17 medical specialties and 11 body systems, divided into text-based (Text) and multimodal (MM) subsets. The authors employed a rigorous methodology for benchmark construction, including extensive filtering based on both AI and human expert evaluations, data synthesis to mitigate leakage risks, and multiple rounds of expert review to ensure accuracy. Additionally, they developed a reasoning-oriented subset to facilitate the assessment of advanced reasoning capabilities in medical AI models. The authors evaluate 16 leading language and multimodal models on MedXpertQA, demonstrating that current state-of-the-art models still face significant challenges in expert-level medical reasoning tasks. Claims And Evidence: The authors clearly state the limitations of existing medical benchmarks and provide detailed justification for the development of MedXpertQA: - Statistics on the coverage of the benchmark across specialties, body systems, and task types - Explanation of data collection, filtering, and quality assurance processes - Evaluation of 16 models and quantitative analysis demonstrating the benchmark's difficulty level compared to existing benchmarks The clinical relevance claim is supported by including questions from 17 medical specialty board exams, though additional validation with practicing clinicians would further strengthen this claim. Two claims that could benefit from stronger evidence: - Data leakage prevention: The authors' approach to preventing data leakage relies primarily on having LLMs "rephrase the question through alternative expressions or structural adjustments while preserving all original information." This simple paraphrasing strategy is unlikely to be effective against modern LLMs that can recognize semantic equivalence despite surface-level rewording.
The metrics in Table 5 (perplexity and n-gram similarity) may not adequately capture whether models have seen semantically equivalent content during training. More rigorous methods and analysis would be needed to substantiate this claim. - Reasoning-oriented evaluation: The distinction between reasoning and understanding questions was made using GPT-4o, but the paper would benefit from clearer operational definitions and validation of these categorizations by medical experts. The paper does not sufficiently demonstrate that the "reasoning" subset genuinely requires medical reasoning rather than simply consisting of more difficult questions. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are appropriate and well-motivated for the problem at hand. The authors' approach to benchmark creation is methodical, involving (1) data collection from authoritative medical sources, including USMLE, COMLEX-USA, specialty board exams, and image-rich sources; (2) filtering by both AI experts and human experts; (3) similarity filtering to ensure diversity and remove redundant questions; (4) question and option augmentation to mitigate data leakage and increase difficulty; and (5) expert review. The evaluation metrics (accuracy) and zero-shot CoT prompting method are standard and appropriate for this task. The creation of distinct reasoning and understanding subsets enables a more nuanced evaluation of models' capabilities. Some areas that can be improved: - No explicit discussion is provided on realistic patient contexts: would MedXpertQA be applicable to realistic scenarios requiring sequential diagnostic reasoning in medical records? - No ablation study is presented to measure the impact of individual filtering steps. Understanding how much each filtering stage contributes to difficulty would be beneficial.
- The paper would benefit from more detail on how humans quantified the difficulty, and finer-grained analysis of human annotations across different medical specialties would add real value. Theoretical Claims: The paper makes limited theoretical claims, focusing primarily on empirical evaluation. The authors' claim about the relationship between inference-time scaling and medical reasoning capabilities is supported by their experimental results. Experimental Designs Or Analyses: The experimental designs and analyses are well-executed. The authors evaluate a diverse set of 16 models ranging from proprietary to open-source, including both vanilla models and inference-time scaled versions. The zero-shot CoT prompting approach is appropriate for the evaluation. The breakdown of performance by task type (reasoning vs. understanding) and by medical body system provides valuable insights into model capabilities and limitations. Below are some points that can be improved: - The data leakage analysis methodology, which uses perplexity and n-gram-based metrics to assess potential memorization, may be inadequate, which is a concern for the realistic usage of the constructed benchmark. - Further analysis is needed to determine whether MedXpertQA's reasoning subset accurately captures distinct clinical reasoning. Supplementary Material: I read the supplementary material on case studies, expert review guidelines and statistics on identified errors, and complete prompts used for attribute annotation and data augmentation. Relation To Broader Scientific Literature: The authors position MedXpertQA effectively within the broader landscape of medical benchmarks. They provide a comprehensive comparison with existing text-based benchmarks (PubMedQA, MedQA, MedMCQA, MMLU) and multimodal medical benchmarks (VQA-RAD, VQA-Med, Path-VQA, SLAKE-En, PMC-VQA, OmniMedVQA, GMAI-MMBench, MMMU), highlighting key differences in terms of complexity, clinical relevance, and diversity.
Essential References Not Discussed: The paper has good coverage of relevant literature. However, a few potentially relevant works not discussed include: - Relevant research on medical reasoning benchmarks - Recent work on medical AI evaluation and how such benchmarks might eventually be used to understand LLMs' practical real-life usage. Other Strengths And Weaknesses: Strengths: - The benchmark focuses on expert-level questions with filtering and augmentation, addressing the insufficient difficulty of existing benchmarks like MedQA and providing new evaluation datasets for recent models. - Developing a reasoning-oriented subset demonstrates the recognition that medicine provides a rich context for assessing complex reasoning. - By incorporating questions from 17 medical specialty board exams, MedXpertQA achieves clinical relevance and specialization diversity compared with previous benchmarks. Weaknesses: - Inadequate Data Leakage Prevention: The paper's approach to preventing data leakage relies primarily on simple paraphrasing ("rephrase the question through alternative expressions or structural adjustments"). This strategy is likely insufficient against modern LLMs that can recognize semantically equivalent content despite surface-level rewording. More sophisticated techniques would be needed to genuinely prevent the leakage of medical exam questions that may already be in training data. - While the reasoning vs. understanding categorization is valuable, the paper lacks clear operational definitions of different reasoning types in medicine and relies on GPT-4o for these annotations, which could be a concern regarding the validity of the annotations. - Unclear validation of reasoning - do models exhibit clinically meaningful reasoning or just perform well on structured multiple-choice formats?
Other Comments Or Suggestions: - The benchmark would benefit from establishing human performance baselines on the released benchmark, which would provide valuable context for evaluating model performance and validating the difficulty levels. - Consider including confidence calibration analysis for the evaluated models, as this is particularly important in high-stakes medical domains. - Minor typos: Page 4, paragraph 2: "we instruct an LLM to annotate each question with its most relevant human body system" - It's unclear which LLM was used Questions For Authors: - The paper mentions that MedXpertQA includes questions from 17 American specialty board exams. Could you clarify if the ground truth responses for the questions are also collected by rich sourcing? Are there expert annotations involved for quality control on responses? - What criteria did you use to quantify "difficulty" in medical questions, and how did you calibrate these metrics across different humans to ensure the measurement of challenge levels is consistent? - The approach to preventing data leakage relies primarily on paraphrasing ("rephrase the question through alternative expressions or structural adjustments"). Given that modern LLMs can recognize semantically equivalent content despite rewording, why do you believe this strategy is sufficient? Did you consider a verification or test with more extensive modifications that might more effectively prevent recognition? - The metrics in Table 5 (perplexity and n-gram similarity) may not fully capture whether models have seen semantically equivalent content during training. Did you explore alternative methods to evaluate potential leakage? - Do any MedXpertQA questions require decision-making over time (e.g., follow-up management after an initial diagnosis)? Would adding a longitudinal subset improve clinical realism? - Since MedXpertQA is largely based on medical exams, have you considered incorporating non-medical exams to improve generalizability? 
Could MedXpertQA be used to evaluate interactive AI models that engage in back-and-forth questioning, mimicking real physician-patient interactions? - How do you determine whether a question requires deep multi-step reasoning versus factual recall? Happy to adjust scores if those major concerns can be solved. Code Of Conduct: Affirmed. Overall Recommendation: 2
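For context on the evaluation protocol this review discusses (zero-shot CoT with accuracy as the metric), scoring typically reduces to extracting the final option letter from a model's free-form reasoning and comparing it against the gold label. A minimal sketch follows; the answer-extraction regex and the variable names are assumptions for illustration, not the authors' actual evaluation code.

```python
import re

# Hypothetical MCQA accuracy scoring for zero-shot CoT outputs; the
# answer-extraction heuristic is an assumption, not the paper's parser.

def extract_choice(cot_output):
    """Return the last standalone uppercase option letter (A-J) found."""
    matches = re.findall(r"\b([A-J])\b", cot_output)
    return matches[-1] if matches else None

def accuracy(responses, gold):
    """Fraction of responses whose extracted choice matches the gold label."""
    correct = sum(extract_choice(r) == g for r, g in zip(responses, gold))
    return correct / len(gold)

responses = ["Let us reason step by step... so the answer is C",
             "The findings suggest pneumonia, answer: B"]
gold = ["C", "A"]
```

In practice, real evaluation harnesses use stricter output formats or instruction-following prompts instead of a bare regex, but the accuracy computation itself is this simple.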
Rebuttal 1: Rebuttal: Thanks for your thoughtful comments! # Response 1 - Claims And Evidence ### **1.1 Leakage Prevention** First, we note that data leakage prevention is an extra precaution we took on top of our main contribution, a challenging, clinically relevant benchmark. Models' subpar performance on MedXpertQA already reflects that its questions haven't been well learned during pretraining. Our literature review found no effective method for reducing leakage risk of benchmarks, so we used the intuitive LLM rewriting strategy. We detail our extensive efforts here: https://postimg.cc/D4w1hw7g. Changing questions drastically tends to lower their quality and relevance, and our method balances general quality and low leakage risk. Our method exceeds "simple paraphrasing" - it combines meticulously designed instructions with strict multi-round human reviews and error correction (Appendix D). ### **1.2 Leakage Risk Evaluation** - We intend to solidly show MedXpertQA's low leakage risk, not devise a new method. - We highlight the validity of the metrics we used (Please reply for further clarification. Thanks!). - No previous publication in medical AI benchmarking covered this. - A recent work compared leakage risks of different benchmarks, showing the superiority of MedXpertQA: https://postimg.cc/yDrvtTB3. ### **1.3 Reasoning-Oriented Evaluation** - We initially considered having experts label Reasoning (*R*)/Understanding (*U*), but we realized that this task is quite straightforward for LLMs given clear prompt guidelines. We also provide expert-written answers and explanations collected from sources (Table 13). This dense guidance enables a simplified form of annotation under expert supervision. - We sufficiently considered the distinction between reasoning complexity and general difficulty when designing our labeling prompt (Table 13). - Human reviewing: For 10% sampled questions (Text-490, MM-400), reviewers found 28 and 11 questions incorrectly labeled as *R*.
We think this error rate (~4.3%) is acceptable. - Empirical results (Section 5.2) reflect the validity of annotations. LRMs perform much better than their backbones on *R*, and this does not hold on *U*. We even note the opposite trend for Qwen-series models on *U*, which would not hold if *U* questions were merely easier. For single models, *R* scores aren't consistently lower than *U*. # Response 2 - Methods And Evaluation Criteria ### **2.1 Sequential Reasoning** Two of our question sources included many sequential questions. We found that each question had sufficient context and wasn't dependent on answering previous ones, so we used individual questions instead of preserving the sequence. Though MedXpertQA dissects the multistep process into separate questions, its coverage of all stages of clinical decision-making (Figure 3) ensures that it tests wide-ranging abilities needed for realistic multistep tasks. Recent works, e.g. MedChain [1], have focused on this, and we'll add discussions of these works in the next version of the paper. ### **2.2 Statistics on Individual Filtering Steps** Please see Response 2.1 to Reviewer Rrtx ### **2.3 Difficulty Quantification and Calibration** We combine two metrics: https://postimg.cc/RNrP4tWF. # Response 3 - Experimental Designs Or Analyses Please see Response 1 # Response 4 - Essential References Not Discussed We'll add these works in the next version, omitting it here due to space limitations. # Response 5 - Other Strengths And Weaknesses **1 & 2:** Please see Response 1 **3:** We show examples in F.1, F.3 (with human analyses of model errors). Models handle complex clinical information and make nuanced decisions between different possibilities. In addition, fine-grained analysis of model reasoning will be more informative once models achieve better performance. # Response 6 - Other Comments Or Suggestions - Please see Response 2.2 to Reviewer Rrtx - Valuable suggestion! We will consider adding it in the next version.
- On page 4, paragraph 2, we used GPT-4o-2024-11-20. We'll clarify this in revisions. # Response 7 - Questions For Authors 1. Ground truth answers were collected from question sources, whose QA pairs were designed by medical experts. 2. Difficulty: -> Response 2.3 3. Leakage Prevention: -> Response 1.1 4. Risk Evaluation: -> Response 1.2 5. Sequential decision-making: -> Response 2.1 6. While MCQA is not tailored for interactive chat models, MedXpertQA covers diverse tasks, some of which touch on clinical questioning: https://postimg.cc/ZCVYzGXs. 7. Reasoning: -> Response 1.3 --- While responding to your comments, we noticed aspects where our current paper did not sufficiently reflect our efforts. That being said, we believe the drastic improvement in benchmark difficulty is a convincing indication of our extensive efforts. We hope our response addresses your concerns and look forward to incorporating relevant information into the paper. ### **References:** [1] https://arxiv.org/abs/2412.01605
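The n-gram-based similarity check mentioned in Response 1.2 can be illustrated with a minimal sketch: measure how many word-level n-grams of the original question survive in the rewritten version, where low overlap suggests little verbatim phrasing remains. The function names and word-level tokenization below are assumptions for illustration, not the actual analysis behind Table 5.

```python
# Hypothetical word-level n-gram overlap metric for leakage screening;
# an illustration only, not the paper's actual Table 5 methodology.

def ngrams(text, n):
    """Return the set of word-level n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def ngram_overlap(original, rewritten, n=3):
    """Fraction of the original question's n-grams that also appear in
    the rewritten question. Values near 0 suggest low verbatim overlap."""
    orig = ngrams(original, n)
    if not orig:
        return 0.0
    return len(orig & ngrams(rewritten, n)) / len(orig)

original = "A 45-year-old man presents with chest pain radiating to the left arm"
rewritten = "Chest pain that radiates toward the left arm brings a 45-year-old man to the clinic"
score = ngram_overlap(original, rewritten)  # low: the rewrite shares few trigrams
```

Perplexity-based screening, the other metric mentioned, instead asks a language model to score the text and flags suspiciously low perplexity as a memorization signal; both are heuristics rather than proofs of non-leakage, which is the crux of the reviewer's concern.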
Summary: In this work, the authors contribute a new synthetic dataset for the evaluation of medical reasoning of large language models (LLMs), and the newest models in this class, also called large reasoning models (LRMs). The creation of the dataset follows several steps that are well described, to ensure the benchmarking tasks are varied, of sufficient difficulty, and do not suffer from data leakage with the training data of the models. As the authors outline, it is very important to shape the benchmarks properly as they, in turn, shape the model development. ## update after rebuttal I have slightly increased my grade after the discussion period, but some important points have not been addressed by the authors, in particular a comment about the relatively low human performances on their benchmark, a comment about the need for the different steps based on the number of questions filtered out at each step, and the measure of any performance bias for different genders for the models present in the benchmark. The question of difficulty is key to the relevance of this benchmark, as the authors claim that one of the advantages of this benchmark is its difficulty, but the difficulty may stem from impossible questions, which would undermine its relevance. Claims And Evidence: The authors claim that their benchmark is of adequate difficulty and robust, and provides a satisfying evaluation of the models, which seems well supported by evidence. However, the claim that the benchmark reflects real-world clinical data and scenarios is more difficult to prove (as a writer) and to verify (as a reviewer). My main concern is that the audience of this conference has an extremely limited medical background, and the existence of a benchmark has the power to shift the orientation of future research for medical informatics, so the audience of this paper, in order to further discuss the content of the benchmark and the relevance of the medical tasks, should include medical doctors.
A journal of medical informatics might be better suited for this task. That being said, providing benchmarks with tasks from other domains is valuable for the improvement of future models, but the claims of relevance and usefulness for the domain of origin should be attenuated in the absence of a thorough multi-disciplinary discussion. One concern that I have regarding relevance would be that the state of the patient is provided as a clean summary narrative to the model, which already incorporates the doctor's work to gather all relevant information. It's possible that this preliminary data selection and organization is the most difficult task for the doctor. Methods And Evaluation Criteria: The main evaluation criterion for the relevance of the benchmark is the performances of the different tested models on this benchmark, which seems to exhibit a good discrimination ability between the different models. There is also a series of selection criteria involved through the different steps of the benchmark construction; however, they are only briefly described, and it would be very informative to provide the distribution of metrics across all the 37k initial questions, the number of filtered questions at each step, and the filtering threshold with respect to the whole distribution, to better grasp which filtering steps have been the most crucial, as it could inform the construction of future similar benchmarks, in the medical field or other fields. Framing the article more as a methodology for creating a benchmark would make it better adapted to this conference's audience rather than to a medical informatics journal, if that is the authors' wish. The authors have actually relied on similar information from previous works to assess the data leakage, illustrating the relevance of a more thorough reporting of all the steps of their work in detail. It would also be extremely insightful, as at some of those steps there is a human performance assessment.
This performance assessment is not on the complete final benchmark, but can be a very good proxy of human performance. Maybe it would be worth reporting the 16 models' performances at this intermediate step to provide a comparison, even if this intermediate performance should be taken with caution, as it occurs before the data leakage mitigation step. In the impact statement, the authors mention the importance of ethical concerns, mostly with respect to medical data privacy; however, there are well-known issues with model biases. It could be relevant to assess the performances of the different models for different categories of patients. Gender seems to be mentioned in most medical questions. How about ethnicity? Also in this line of thought, the mention that this benchmark is in the English language and might be tailored to US (or English-speaking country) medicine could be discussed. Theoretical Claims: No theoretical claims. Experimental Designs Or Analyses: As mentioned before, a lot of relevant "experimental" results of evaluation of the benchmark through all the steps are not reported, which is a major issue of this article, except for Table D in the Supplementary Material, which does not mention the number of questions evaluated at this step, only the number of questions flagged by the experts (unless I missed the info in the text, but it should be mentioned again with the table). Figures like Figure 4 for all models should be reported in the supplementary as well. Figure 5 reports interesting results of performance variation with respect to the medical specialty. Do the other models have the same pattern? How about human performances assessed in step 2? Supplementary Material: I have reviewed most of the supplementary material, but not the expert review guidelines. Relation To Broader Scientific Literature: This article brings a new evaluation benchmark for LLMs and LRMs.
The main novelty of this paper seems to be its discrimination power for method evaluation; however, it is not clear that this line of tasks is relevant for actual clinical practice, though it might be interesting for evaluating a different aspect of model reasoning in general. Essential References Not Discussed: It would be interesting to discuss and cite relevant literature comparing the reasoning ability of those models in different domains to see if there is a specificity to the medical domain, or, said otherwise, does the evaluation of this medical reasoning task provide a different ranking of the models compared to other reasoning benchmarks? Other Strengths And Weaknesses: The provided dataset seems more diverse and thorough than previously existing benchmarks. Other Comments Or Suggestions: There is a typo, "MMMU", p. 5. Questions For Authors: I have disseminated questions throughout the review. I attempt a summary of the main ones here, but more details and context are given in the previous sections. 1) Provide the distribution of metrics across all 37k initial questions, the number of filtered questions at each step, and the filtering threshold with respect to the whole distribution, to better grasp which filtering steps have been the most crucial, as this could inform the construction of future similar benchmarks, in the medical field or other fields. (Addressed in the rebuttals, but no discussion about which steps are important and why.) 2) Report human performances (and model performances on the benchmark at this step). (Done for a junior doctor, not yet for a senior doctor; the overall performance would need further discussion as human performances are low. Are the questions unsolvable because of a lack of information? That would importantly affect the relevance of the benchmark if half of the questions are actually not solvable.) 3) Figure 4 for all models - addressed when possible (with the table). 4) Figure 5 for all models + human performances - addressed. 5) Variations of performances for gender, and ethnicity if/when available. (Not addressed so far in the rebuttal; the authors have reported the number of questions for male and female patients, but not the associated performances.) 6) Change of model ranking with respect to other reasoning tasks in the literature (to show the relevance of yet another LLM benchmark). 7) Discussion of the actual relevance, for example in routine clinical use. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We hope that our clarifications fully address your concerns! # Response 1 - Claims And Evidence ### **1.1 Target Audience** - We agree with the value of expert insight. We worked closely with medical practitioners when designing and reviewing MedXpertQA. - An audience from an AI background is irreplaceable. Since the goal of MedXpertQA is to help researchers understand and improve model limitations, our most important target audience is medical AI researchers. ### **1.2 Real-World Representation** Please see Response 2.2 to Reviewer gh7D. ### **1.3 Concern on Patient Data Collection** - This is only one type of question in MedXpertQA, whose subtasks span different stages of medical decision-making (Figure 3). It also contains questions requiring data processing and organizing: https://postimg.cc/ZCVYzGXs. - We acknowledge that MedXpertQA doesn't explicitly model the multi-step process of single patients. This is an inherent limitation of MCQA, and relevant works, e.g. MedChain [1], tackle the issue. Nevertheless, MedXpertQA's diverse task coverage ensures that it adequately tests wide-ranging capabilities. - Finally, patient data collection may not be the most challenging step. Poor results of sota models reflect the difficulty of downstream clinical decision-making, the focus of MedXpertQA. # Response 2 - Methods And Evaluation Criteria ### **2.1 Benchmark Construction Details** We've attempted to present the construction process concretely and in detail. We provide metric formulas, hyperparameters, etc. This level of detail exceeds many prominent benchmarks, e.g. MMLU-Pro, MMMU. If any aspect remains unclear, we can provide further clarification. - Unfortunately, we are unable to provide the initial dataset distribution, as labeling 37k questions for multiple attributes would be too costly. We appreciate your understanding. - Statistics on questions remaining after each step: [Dataset Filtering Stats](https://postimg.cc/zVggJc4r). 
- Our dataset construction is tailored to the medical domain, thus not intended for direct application to general AI. Our main contribution is the benchmark itself, especially MM, which fills crucial gaps in current medical multimodal evals. ### **2.2 Human Performance Evaluation** We already have some preliminary human performance results: https://postimg.cc/fkkJCjy0. The details: - **Expert (Junior) Score**: Response distributions collected from question sources mostly come from medical students and reflect human performance. Rewriting has little impact on human performance, since it retains question information. Thus, human performance on original questions can be compared with final model results. The large number of responses per question (up to 238k) makes these stats highly representative. Fewer than 200 final questions lack response data, and we'll hire humans of similar caliber to answer them and complete the human performance evaluation. We'll incorporate this information before 4.8 AoE. - **Expert (Senior) Score**: We will assess human experts with medical licenses/MDs for a separate expert performance measurement. These experiments will take longer and will be added in future versions. ### **2.3 Model Biases** - MedXpertQA primarily aligns with practices from the US. We'll cover this in the revised paper. - Model bias analysis, while important, exceeds the scope of this work. Bias mitigation should be conducted by model developers, and works such as [2,3,4] would be better references for practitioners interested in these issues. - We provide MedXpertQA's coverage of patient demographics. Gender (from keyword matching): - **Text**: Male 1025, Female 903 - **MM**: Male 874, Female 550 - **Total**: Male 1899, Female 1453 For ethnicity, we re-examined 100 questions and found no mentions. # Response 3 - Experimental Designs Or Analyses See Response 2 (R2) for stats. Table D was from expert evaluations. 
**Experts not only reviewed each question, but also did multiple rounds of editing.** Figure 5 results for all models: https://www.hostize.com/zh/v/TH3ddN2Bz3. # Response 4 - Essential References Not Discussed Please see R2 to Reviewer vM2L. For MedXpertQA specifically, an interesting result is the noticeable gap between R1 and o1, which would be unexpected if we directly extrapolated from other reasoning benchmarks. # Response 5 - Questions For Authors 1. -> R2.1 2. -> R2.2 3. Figure 4 requires data on paired backbone LLMs and LRMs, thus can't be done for all models. Please refer to the main results table for results on more models. 4. -> R3 5. -> R2.3 6. -> R4 7. -> R1.1-1.3 MedXpertQA's main goal is evaluating models' fundamental medical abilities. These abilities provide crucial support for clinical use, but downstream applications are not our focus. # References: [1] https://arxiv.org/abs/2412.01605 [2] https://www.nature.com/articles/s41591-024-03113-4 [3] https://www.nature.com/articles/s41746-025-01503-7 [4] https://www.nature.com/articles/s43856-024-00601-z --- Rebuttal Comment 1.1: Comment: I thank the authors for their answers. Regarding response 2.3, I totally understand that the bias mitigation is not the authors' concern, but the benchmark could report the performance change between questions regarding men and women, the same way results for different medical fields are reported. The dataset filtering stats show little variation after the first two steps, could you discuss that more? what value does this bring to the benchmark performance with respect to the involved work? --- Reply to Comment 1.1.1: Comment: Thanks for your response! We appreciate your feedback and have made every effort to address your concerns thoroughly. # Question 1: We have performed an additional analysis comparing all model performances on male and female patients, with the following results: *Text*: https://postimg.cc/wtB3qrqz, *MM*: https://postimg.cc/rz4swxkD. 
We see that there are no consistent trends of model bias. On *Text*, more models have slightly higher accuracies on male patients, and on *MM*, more models have higher accuracies on female patients. The performance gaps are small across all models. This is expected since patient gender is not a decisive factor for most questions in MedXpertQA (specific symptoms and examination results are generally the most important). # Question 2: After the first two steps, the primary factors influencing the number of questions are **Similarity Filtering** (*Edit Distance Filtering* and *Semantic Similarity Filtering*) and **Expert Review**, which filtered out 54 and 223 questions, respectively. These two steps primarily focus on data quality, whereas the first two steps are designed to assess difficulty. **Since our data is collected from high-quality and authoritative sources, it is expected that filtering based on difficulty has a greater impact on the dataset size compared to quality-based filtering.** The roles of these two filtering stages are as follows: - **Similarity Filtering**: Although this step has a relatively small impact on the number of questions, it is crucial for maintaining the robustness of the benchmark. Our evaluation of traditional visual medical benchmarks, such as SLAKE [1], reveals that these benchmarks often generate QA pairs based on fixed question templates, resulting in a limited number of question types (as shown in Table 1 of VQA-RAD [2]). For example, in SLAKE, 18 questions are *"Does the picture contain a lung? Answer ‘yes’ or ‘no’."* Among these, 13 answers are "Yes." The performance of models varies significantly across different question types. Once a model learns specific features or shortcuts associated with a question type, it can exploit these patterns, leading to benchmark hacking. Thus, **Similarity Filtering is essential**. 
Furthermore, due to the high diversity of our dataset, which is not generated from fixed templates, a relatively small number of filtered questions at this stage is expected. Although this filtering stage does not significantly reduce the dataset size, it removes highly similar questions that could otherwise bias model performance, undermining the robustness of the evaluation. **To ensure the benchmark’s reliability, we consider this step indispensable.** - **Expert Filtering**: Human involvement in dataset construction is critical for ensuring both **domain expertise and factual accuracy**. This is particularly important in specialized fields such as medicine, where expert review is necessary to maintain data quality. During the final step (multi-round expert review), experts directly corrected the most problematic questions they identified and deleted a smaller portion. Since direct editing is involved, the impact of expert reviewing is not fully reflected in the change in the total question number. Please see Table 6 in our paper for more details on this stage (a full tally of flagged questions including fine-grained error types). These modifications significantly enhance data reliability. [1] https://arxiv.org/abs/2102.09542 [2] https://www.nature.com/articles/sdata2018251
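The two-stage similarity filter described in this reply (edit-distance filtering followed by semantic-similarity filtering) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `difflib.SequenceMatcher.ratio()` stands in for a normalized edit-distance score, a bag-of-words cosine stands in for a sentence-embedding model, and the thresholds are hypothetical.

```python
from difflib import SequenceMatcher

def edit_similarity(a: str, b: str) -> float:
    """Character-level similarity in [0, 1]; stands in for normalized edit distance."""
    return SequenceMatcher(None, a, b).ratio()

def semantic_similarity(a: str, b: str) -> float:
    """Toy bag-of-words cosine; stands in for a sentence-embedding model."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / ((len(ta) * len(tb)) ** 0.5)

def similarity_filter(questions, edit_thresh=0.90, sem_thresh=0.95):
    """Greedily keep a question only if it is not a near-duplicate of one already kept."""
    kept = []
    for q in questions:
        if all(edit_similarity(q, k) < edit_thresh and
               semantic_similarity(q, k) < sem_thresh
               for k in kept):
            kept.append(q)
    return kept

# The template-style near-duplicate (cf. the SLAKE example above) is removed.
questions = [
    "Does the picture contain a lung? Answer yes or no.",
    "Does the picture contain a lung? Answer 'yes' or 'no'.",
    "What is the most likely diagnosis given these findings?",
]
print(similarity_filter(questions))  # keeps the 1st and 3rd questions
```

The greedy pass keeps the first occurrence of each near-duplicate cluster, which matches the small number of removals reported when the source data is already diverse.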
AdvI2I: Adversarial Image Attack on Image-to-Image Diffusion Models
Accept (poster)
Summary: The paper presents an adversarial attack on image-to-image diffusion models to generate NSFW content. The authors train a Variational Autoencoder (VAE) to encode NSFW content into clean images and introduce an adaptive attack method to circumvent existing NSFW defense mechanisms. Through experiments on two image-to-image diffusion models, the authors demonstrate that their method can effectively bypass current defenses and generate NSFW content. Claims And Evidence: Yes Methods And Evaluation Criteria: Yes Theoretical Claims: Not applicable Experimental Designs Or Analyses: Yes Supplementary Material: Yes, Experiment part. Relation To Broader Scientific Literature: The paper evaluates the generation of NSFW content by diffusion models, an important topic concerning the appropriate use of these models. Essential References Not Discussed: The author proposes an adversarial attack method, but it is primarily evaluated in a white-box scenario and on similar model architectures during the main evaluation. Other Strengths And Weaknesses: Strength: 1. The paper is well-written and easy to follow. 2. The exploration of NSFW topics in image-to-image diffusion models is novel. 3. The attack methods, which use a trained generator to modify only the image input, are effective against current defenses. 4. The experimental results are robust and demonstrate the effectiveness of the proposed attack in a white-box scenario. Weaknesses: 1. Since the adversarial image generator is trained using white-box diffusion models, it is crucial to evaluate its performance in black-box scenarios to demonstrate the generalization of the proposed attacks. However, the major experiments are conducted in a white-box setting. Additionally, the performance on the latest SDv3.0 is significantly lower, which poses a notable limitation for the further deployment of the proposed attacks. 2. 
The proposed method incorporates the concept of current defenses into the training of the adversarial image generator, which ensures better performance against these defenses. The authors should evaluate the method against more unseen or advanced defenses, such as using multimodal LLMs for defense, to further validate the usability of the proposed attacks. 3. The experiments are conducted solely on open-source models. It is essential to assess the performance on models with online services, such as Midjourney and Leonardo.Ai, to gain a comprehensive understanding of the method's effectiveness. 4. The experiments are performed only on a test set used for training the generator with hundreds of images, which may not be sufficient to evaluate an adversarial attack method on large SD models. Moreover, the performance on SDv3.0 indicates that the training data is critical for the success of the proposed attacks, as they are trained with the target diffusion model. The authors should provide guidelines on selecting the training dataset and explain the rationale behind their choices. Other Comments Or Suggestions: The proposed method has the potential to be a strong defense; the paper could be largely enhanced if the authors constructed a defense method. Questions For Authors: 1. Please see the weaknesses above for questions. 2. Can diffusion-based purification counter the proposed method? For instance, could a purification method that gradually removes the NSFW concept, as defined by Equation 1, during its diffusion process effectively mitigate the attack? 3. In the MMA-Diffusion paper, a multimodal attack is presented. However, it is unclear which implementation of MMA-Diffusion is used in this paper. Could you provide further clarification on the specific implementation and how it relates to the proposed method? Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the insightful comments. Below, we thoroughly address each point with additional experiments and clarifications. **W1:** Experiments mostly conducted in a white-box setting; limited effectiveness on SDv3.0. **A1:** We have conducted additional black-box experiments to evaluate transferability across I2I models. **Table 7** in manuscript shows strong black-box transferability from SDv1.5 to SDv2.0 and SDv2.1 (ASR of **80.5%** and **84.0%**, respectively; adaptive version: **73.5%** and **77.5%**), validating real-world applicability. We also evaluated AdvI2I-Adaptive under defenses across multiple I2I models. The results demonstrate the attack persistence when transferring from SDv1.5 to (black-box) SDv2.0 and SDv2.1. |Target Model|w/o Defense|SLD|SD-NP|GN|SC| |-|-|-|-|-|-| | SDv1.5-Inpainting|78.5%|75.0%|75.5%|72.5%|72.0%| | SDv2.0-Inpainting|73.5%|72.5%|75.5%|69.5%|67.0%| | SDv2.1-Inpainting|77.5%|73.0%|76.0%|73.0%|70.0%| | SDv3.0-Inpainting|33.0%|30.5%|30.5%|27.0%|30.0%| Moreover, we extended our evaluation to newer diffusion models. Transferring attacks from SDv1.5 to SDXL-Turbo and FLUX yields ASRs of 62.5% and 74.0%, respectively, further highlighting the method’s generalization capability across model architectures in black-box setting. |Source Model|Target Model|ASR| |-|-|-| |SDv1.5-Inpainting|SDXL-Turbo|62.5%| |SDv1.5-Inpainting|FLUX.1-dev ControlNet Inpainting-Alpha|74.0%| Regarding the explanations on reduced ASR on SDv3.0, please refer to our response **Reviewer rvyv's A6**. **W2:** Evaluation against multimodal LLMs. **A2:** We appreciate this valuable suggestion. Evaluating AdvI2I against a multimodal LLM defense (GPT-4o) significantly reduced ASR from 81.5% to 9.5%, demonstrating promising defensive potential. 
However, we observed practical limitations of directly using multimodal LLMs as a defense: - **High computational overhead**: GPT-4o detection (**2.94 sec/image**) exceeds generation time (**1.39 sec/image**, SDv2.1 with 50 steps). - **Misclassifications** due to limited visual understanding, even with carefully designed prompts (detailed examples and failure cases are in this [link](https://anonymous.4open.science/r/ICML-2025-rebuttal-B583/GPT_defence_for%20YL1t.pdf)). - Potential vulnerability to adaptive adversarial attacks [1, 2]. Thus, while promising, multimodal LLM-based defenses currently face practical challenges. Improving these methods remains an important future research direction. [1] Stop Reasoning! When Multimodal LLM with Chain-of-Thought Reasoning Meets Adversarial Image. [2] On the Robustness of Large Multimodal Models Against Image Adversarial Attacks. **W3:** Evaluation on online services. **A3:** Thank you for this suggestion. We conducted additional experiments on Leonardo.Ai (Phoenix 1.0). Despite strong protections of the online platform, AdvI2I (**34.5%**) and AdvI2I-Adaptive (**31.5%**) still significantly outperform the baseline MMA (**28.0%**), further confirming our method’s real-world effectiveness. |Method|ASR| |-|-| |MMA|28.0%| |AdvI2I (ours)|34.5%| |AdvI2I-Adaptive (ours)|31.5%| **W4:** Limited evaluation dataset size and dependency on training data. Guidelines for dataset selection needed. **A4:** We clarify that our training and test samples were randomly split without complete overlap. We have also verified sample transferability of AdvI2I on a completely unseen set (new images and prompts never seen during training) in the manuscript's **Table 5**. For training dataset selection, we specifically chose non-NSFW images from the "sexy" category provided by the NSFW Data Scraper [1] because these images prominently feature people, closely aligning with real-world scenarios and attack contexts that attackers are likely to target. 
Regarding the explanations on reduced ASR on SDv3.0, please refer to our response **Reviewer rvyv's A6**. [1] https://github.com/alex000kim/nsfw_data_scraper?tab=readme-ov-file#nsfw-data-scraper **C1:** Potential as a defensive method. **A5:** We appreciate your insight. We tested our method’s defensive potential by using AdvI2I to embed a "wearing clothes" concept into images. Interestingly, this significantly reduced the ASR (**96.5% → 24.5%**) of explicit prompts (e.g., "Make the woman naked") on SDv1.5-Inpainting. This demonstrates AdvI2I’s broader conceptual versatility and suggests promising future defensive applications. We will include this discussion in our revised paper to suggest directions for future research. **Q1:** Can diffusion-based purification counter the proposed method? **A6:** Please see our response **Reviewer FsK8's A1**. Thank you. **Q2:** The implementation of MMA-Diffusion used in this paper. **A7:** MMA generates adversarial text prompts and corresponding images to bypass diffusion model safety filters. We replaced standard prompts in our dataset with MMA-generated adversarial prompts and optimised adversarial images from our test images. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed responses. The authors have addressed most of my concerns. However, there are some common suggestions among all the reviewers regarding the dataset and transferability issues. The dataset is relatively small to effectively demonstrate the performance of the proposed attacks in practical scenarios. Additionally, the performance drop in SDv3.0 further indicates that the choice of dataset is crucial for the success of the proposed attack. Although the authors provided some experimental results with newer diffusion models, I still believe that transferability should be a major focus of the paper (with comparisons to baselines) rather than a minor aspect. Therefore, the current version of the paper may not meet the conference criteria. 
The paper could be significantly improved by enhancing the section on defense strategies. I will keep my score.
Summary: - This paper proposes AdvI2I, a novel framework that induces diffusion models to generate NSFW content using adversarial images. - It circumvents existing defense mechanisms, such as Safe Latent Diffusion (SLD), without modifying text prompts, underscoring the urgent need for stronger security measures to prevent the misuse of I2I diffusion models. Claims And Evidence: - Yes, the details can be found in the Strengths and Weaknesses of my review below. Methods And Evaluation Criteria: - It makes sense, and the authors provide a fair comparison. Theoretical Claims: - No proofs are provided in this paper. - I believe the idea makes sense. Experimental Designs Or Analyses: - Since most settings are the same as the baseline, I believe the experiments are fair enough. - I would like to request that the authors release the code as soon as the paper is accepted. Supplementary Material: - There is no supplementary material. Relation To Broader Scientific Literature: - The paper investigates the security issues of I2I diffusion models and proposes an adversarial image-based approach to attack I2I diffusion models, addressing the limitations of previous text-based attacks in [1][2]. - [1] Yang, Y., Gao, R., Wang, X., Ho, T.-Y., Xu, N., and Xu, Q. MMA-diffusion: Multimodal attack on diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition - [2] Ma, J., Cao, A., Xiao, Z., Zhang, J., Ye, C., and Zhao, J. Jailbreaking prompt attack: A controllable adversarial attack against diffusion models. arXiv preprint arXiv:2404.02928, 2024. Essential References Not Discussed: N/A Other Strengths And Weaknesses: - Strengths - This paper explores a novel problem, that is, adversarial image attacks targeting I2I diffusion models. - The paper proposes a well-designed and effective novel pipeline. - The paper considers various experiments, including transferability and image quality. 
The experimental results, which compare against baselines and different defense methods, demonstrate the method’s generalizability. - The paper is well-written and easy to follow. - Weaknesses: - AdvI2I relies on adversarial images. It would be beneficial to explore the robustness of the method against adversarial defense strategies, such as DiffPure [1], to demonstrate the effectiveness of adversarial image attacks in a more comprehensive manner. - [1] Nie W, Guo B, Huang Y, et al. Diffusion models for adversarial purification. arXiv preprint arXiv:2205.07460, 2022. - There is no detailed analysis of runtime comparisons, nor is there adequate discussion on the computational costs and efficiency of the proposed method. This could limit its practical feasibility. - The main results are limited to SDv1.5-Inpainting and InstructPix2Pix, excluding more advanced versions of Stable Diffusion or other models like FLUX. This limits the generalization potential of AdvI2I. - The proposed attack requires a white-box setting with a safety checker. The experiments in the appendix only evaluate transferability for ViT-L/14-based models. Is this evaluation comprehensive enough to assess the effectiveness of the method on existing diffusion models? - The method section introduces the concept of Adaptive Attack. How does it differ from the Image-Modal Attack in MMA? Please clarify the distinction between the two approaches. Other Comments Or Suggestions: - In line 15 of Algorithm 1, should it refer to Eq. 2 instead of Eq. 4 when not using AdvI2I-Adaptive? Questions For Authors: - I would like the authors to address the weaknesses outlined above. - How can SAM be used to defend your model in a similar way to MACE: Mass Concept Erasure in Diffusion Models? I believe applying SAM defense at the end of the model could effectively mitigate most attacks. Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for insightful feedback. We have conducted additional analyses and experiments to address your concerns. **W1:** The robustness of the method against adversarial defense strategies, such as DiffPure. **A1:** We have now evaluated the robustness of AdvI2I against DiffPure as suggested. DiffPure reduces the ASR for the SDv1.5-Inpainting model (nudity concept) from **82.5% to 72.5%**, a modest decline indicating AdvI2I’s robustness to such purification defenses. |**Method**|w/o Defense|**DiffPure** | |:-|:-|:-| |Attack VAE|41.5%|33.5%| |**AdvI2I (ours)** |82.5%|72.5%| **W2:** Runtime comparisons or discussion on the computational costs of the proposed method. **A2:** Thank you for highlighting this point. We conducted additional experiments comparing AdvI2I to MMA on SDv1.5-Inpainting, measuring ASR and runtime cost. The results clearly demonstrate AdvI2I’s efficiency advantage. **AdvI2I maintains high ASR (82.5%) at an extremely low attack runtime (only 0.008 sec/image)**, demonstrating superior practical utility. |**Method**|**ASR (%)**|**Average Time Cost (s)**| |-|-|-| | MMA| 42.0| 415.984| | **AdvI2I (ours)**| 82.5| 0.008| **W3:** More advanced versions of Stable Diffusion or other models like FLUX. **A3:** We appreciate this valuable suggestion. We have now evaluated AdvI2I on recent advanced diffusion models, including SDXL-Turbo (incorporating a refiner block) and FLUX (using diffusion transformers). As summarized in table below, AdvI2I consistently exhibits strong generalization capabilities across these new models. | Method | Model| w/o Defense | SD-NP | GN| SC| |-|-|-|-|-|-| |AdvI2I| SDXL-Turbo| 82.5%| 79.5% | 66.0% | 18.5% | |AdvI2I| FLUX.1-dev ControlNet Inpainting-Alpha | 80.0%| 76.5% | 65.0% | 18.5% | **W4:** The proposed attack requires a white-box setting with a safety checker. **A4:** The safety checker we evaluated is widely adopted across diffusion models. 
However, to comprehensively assess black-box transferability, we additionally evaluated AdvI2I-Adaptive using an entirely different NSFW detector (MHSC [1]) as the safety checker. Results below confirm that AdvI2I-Adaptive achieves significantly higher transferability (**41.0%**) compared to MMA (**20.5%**), underscoring the generalization advantages provided by our learned adversarial noise generator. | Method| w/o Defense | SC| Black-box MHSC | |-|-|-|-| | MMA| 68.5%| 64.5%| 20.5%| | **AdvI2I-Adaptive (ours)** | 78.0%|70.5%|41.0%| **W5:** Clarify the distinction between AdvI2I-Adaptive and the Image-Modal Attack in MMA. **A5:** The key differences are as follows: - Attack Requirement: MMA’s adversarial image targets only the safety checker, and still requires an unsafe prompt to induce the diffusion model to generate NSFW content. This reliance makes it easier to defend against (see Table 2 in our manuscript). In contrast, AdvI2I-Adaptive requires only an adversarial image, which simultaneously fools both the diffusion model and the safety checker, without needing an unsafe prompt. - Attack Method: MMA performs direct image-space optimization per input, while AdvI2I-Adaptive uses a generator to produce adversarial images conditioned on clean inputs. This makes our method more effective (see Table 9 in the manuscript) and more efficient (see A2 of Reviewer FsK8). **Q1:** In line 15 of Algorithm 1, should it refer to Eq. 2 instead of Eq. 4 when not using AdvI2I-Adaptive? **A6:** You are correct; line 15 of Algorithm 1 should reference Eq. 2 when describing the standard AdvI2I (non-adaptive version). We sincerely appreciate your careful observation and will correct this mistake. **Q2:** How can SAM be used to defend your model similarly to MACE: Mass Concept Erasure in Diffusion Models? I believe applying SAM as a defense could effectively mitigate most attacks. **A7:** We appreciate your insightful suggestion. 
However, we found that directly applying SAM to detect and mask NSFW content of generated images is ineffective. Specifically, SAM indiscriminately masks entire human bodies or clothes, regardless of actual NSFW content presence. In our validation with 200 non-NSFW images, SAM consistently produced masks inaccurately labeling human figures as containing nudity—even when explicitly prompted with specific sensitive body parts (e.g., "breasts," "genitalia") that were absent. This suggests SAM inherently attempts to mask areas matching textual prompts, regardless of content appropriateness, and struggles to accurately interpret abstract concepts such as "nudity". Consequently, using SAM as an effective defense would require methodological refinements, which we believe represent a valuable direction for future research. We provide representative examples in [this link](https://anonymous.4open.science/r/ICML-2025-rebuttal-B583/SAM_detect_NSFW_for_FsK8.pdf). Thank you again for these valuable comments. We will include these experiments and analyses in the revised version.
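The generator-versus-optimization distinction drawn in A5, and the perturbation budgets raised by reviewers, can be made concrete with a small sketch. This is not the paper's implementation: `toy_generator` is a hypothetical placeholder for the trained noise generator G (a real G would be a network trained against the diffusion pipeline and safety checker); only the inference-time clamping logic is illustrated, using the 32/255 budget cited in the reviews.

```python
import numpy as np

EPS = 32 / 255  # perturbation budget; the reviews cite budgets from 32/255 to 128/255

def toy_generator(x: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a trained adversarial noise generator G(x)."""
    return 0.5 * np.sin(40.0 * x)  # arbitrary deterministic perturbation

def adversarial_image(x: np.ndarray, eps: float = EPS) -> np.ndarray:
    """Single forward pass: clamp G's output to the eps-ball, add it, keep pixels in [0, 1].
    This amortized inference is why the generator approach is fast, versus running
    a per-image optimization loop as in direct image-space attacks like MMA."""
    delta = np.clip(toy_generator(x), -eps, eps)
    return np.clip(x + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(64, 64, 3))  # a "clean" image in [0, 1]
x_adv = adversarial_image(x)
print(float(np.abs(x_adv - x).max()))  # never exceeds EPS
```

Because the perturbation is clamped before being added and the final clip can only shrink it, the L-infinity distance to the clean image is bounded by the budget by construction, which is the property the reviewers' visibility concern is about.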
Summary: This paper proposes AdvI2I, a framework for adversarial image attacks on image-to-image (I2I) diffusion models to induce NSFW content generation without modifying text prompts. By training a generator to inject perturbations aligned with NSFW concept vectors (extracted via contrastive text pairs), AdvI2I bypasses defenses like LLM filters. The enhanced AdvI2I-Adaptive further improves robustness through Gaussian noise and NSFW embedding similarity minimization. Experiments demonstrate high attack success rates against defenses, exposing I2I model vulnerabilities and urging stronger safeguards. Claims And Evidence: The claims are empirically supported through: (1) demonstration that adversarial text prompts are easily defended against (34-96% ASR reduction via LLM filters in Table 2), and (2) AdvI2I-Adaptive's maintained high ASR (>70%) under SLD and safety checker defenses. However, the low transferability to SDv3.0 (34% ASR) lacks sufficient analysis to explain generalization limits. Methods And Evaluation Criteria: Evaluation Limitations: The attack success metrics depend entirely on algorithmic detectors (NudeNet, Q16 classifier). While these provide quantitative benchmarks, they cannot fully capture real-world human perceptual thresholds. Theoretical Claims: No theoretical analysis is provided. Experimental Designs Or Analyses: 1. Dataset Limitations: The dataset (400 images from the "sexy" category) exhibits selection bias and lacks representation of critical NSFW concepts such as political extremism, hate symbols, or graphic violence. In addition, the small sample size (200 test images) raises concerns about statistical significance, especially for FID. 2. Model Coverage: Evaluations lack state-of-the-art models like SDXL and PixArt-α, limiting insights into modern I2I pipelines. 3. The low ASR (34%) on SDv3.0 is attributed to data filtering without ablation studies. Supplementary Material: Yes, I have reviewed the whole supplementary material. 
Relation To Broader Scientific Literature: Previous text-based attacks, such as QF-Attack, Ring, and MMA, have optimized adversarial text prompts to induce NSFW content in T2I models. In contrast, AdvI2I generates adversarial perturbations directly on input images to induce NSFW content. Essential References Not Discussed: No. Other Strengths And Weaknesses: Strengths: 1. Clear Motivation: Identifies critical limitations of text-based adversarial attacks and proposes AdvI2I, which generates adversarial image perturbations to induce NSFW content in I2I diffusion models. 2. Well-designed method: combines NSFW concept vector extraction with an adversarial generator, enabling efficient perturbation generation. Weaknesses: 1. Concerns regarding sample transferability: The method proposed in this paper relies on generating adversarial samples through image generation models. This approach may result in a heavy dependency on training data and known I2I (image-to-image) models, casting doubt on its effectiveness when applied to unknown samples or untested I2I models. The low ASR (34%) on SDv3.0 seems to verify this point. 2. Impractical methodology: The adversarial noise intensity in the experiments is excessively high (ranging from 32/255 to 128/255). Such noise levels introduce visually perceptible anomalies, which would likely trigger immediate rejection of the samples in subsequent processing pipelines. Furthermore, overly conspicuous noise is also susceptible to detection by defense models, ultimately rendering this approach impractical in real-world applications. 3. Evaluation Limitations: The attack success metrics depend entirely on algorithmic detectors (NudeNet, Q16 classifier). While these provide quantitative benchmarks, they cannot fully capture real-world human perceptual thresholds. 4.
Dataset Limitations: The dataset (400 images from the "sexy" category) exhibits selection bias and lacks representation of critical NSFW concepts such as political extremism, hate symbols, or graphic violence. The small sample size (200 test images) also raises concerns about statistical significance, especially for FID. 5. Model Coverage: Evaluations lack state-of-the-art models like SDXL and PixArt-α, limiting insights into modern I2I pipelines. 6. The low ASR (34%) on SDv3.0 is attributed to data filtering without ablation studies. Other Comments Or Suggestions: No. Questions For Authors: No. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thank you for the insightful comments and suggestions.

**W1:** Concerns regarding sample transferability.

**A1:** Our training and test samples were randomly split, meaning images are not entirely overlapping. We have also verified the sample transferability of AdvI2I on unseen data (new images and prompts never seen during training) in the manuscript's **Table 5**. For model transferability, the manuscript's **Table 7** shows that AdvI2I achieves high ASRs (80.5% on SDv2.0 and 84.0% on SDv2.1) when transferred from SDv1.5. Regarding the relatively lower ASR on SDv3.0, please see our explanation in response **A6**. We also extended our evaluation to newer diffusion models. The results below highlight the method's transferability across model architectures.

|Source Model|Target Model|ASR|
|-|-|-|
|SDv1.5-Inpainting|SDXL-Turbo|62.5%|
|SDv1.5-Inpainting|FLUX.1-dev ControlNet Inpainting-Alpha|74.0%|

**W2:** High adversarial noise intensity.

**A2:** We compared AdvI2I against baselines using a lower noise bound (16/255) on InstructPix2Pix. The results confirm that AdvI2I maintains effectiveness.

|Method|ε|w/o Defense|SLD|SD-NP|GN|SC|
|-|-|-|-|-|-|-|
|**AdvI2I**|16/255|70.5%|68.5%|70.0%|55.5%|14.5%|
|**Adaptive**|16/255|70.5%|65.5%|68.5%|62.5%|55.5%|

In addition to quantitative results, the manuscript's **Figure 2** also qualitatively shows that our noise generator produces perturbations (64/255) that remain visually imperceptible to humans.

**W3:** Evaluation depends on algorithmic detectors.

**A3:** We follow the common practice in prior studies [1, 2, 3, 4] of using these detectors (NudeNet, Q16) for consistent quantitative benchmarking. These detectors are trained on human-labeled datasets and thus reflect human perception to some extent. To further strengthen our evaluation, we tested AdvI2I using a multimodal LLM (GPT-4o) with carefully designed prompts to simulate human judgment.
The prompt, examples and results are shown in [the file](https://anonymous.4open.science/r/ICML-2025-rebuttal-B583/GPT_evaluation_for_rvyv.pdf). The results closely align with the algorithmic detectors.

[1] Safe latent diffusion: Mitigating inappropriate degeneration in diffusion models. [2] Ring-A-Bell! How Reliable are Concept Removal Methods For Diffusion Models?. [3] Sneakyprompt: Jailbreaking text-to-image generative models. [4] Mma-diffusion: Multimodal attack on diffusion models.

**W4:** Insufficient coverage of concepts, and small test size.

**A4:** We appreciate this suggestion. In addition to the "nudity" and "violence" concepts covered in our manuscript, we further evaluated the suggested "political extremism" concept. The concept vector is constructed with prompts related to "extremism" and "terrorism." The results confirm AdvI2I's versatility across diverse NSFW concepts.

|Method|Concept|w/o Defense|SLD|SD-NP|GN|SC|
|-|-|-|-|-|-|-|
|**AdvI2I**|Extremism|76.5%|73.0%|73.5%|60.5%|27.5%|
|**AdvI2I-Adaptive**|Extremism|74.5%|70.0%|72.5%|71.5%|72.0%|

Regarding sample size, our primary metric (ASR) typically does not require massive samples for reliable evaluation. Previous works ([1] Ring-A-Bell: 200 prompts, [2] MMA: 60 images) employed similar or smaller evaluation sets. Nonetheless, we have now increased our test set to 500 images, and AdvI2I continues to demonstrate consistent effectiveness.

|Method|w/o Defense|SLD|SD-NP|GN|SC|
|-|-|-|-|-|-|
|**AdvI2I**|75.2%|71.2%|71.6%|62.0%|17.0%|
|**AdvI2I-Adaptive**|71.4%|65.4%|65.8%|68.6%|65.4%|

Additionally, for image quality (**Section F** of our manuscript), we use several metrics in addition to FID. These metrics are less sensitive to sample size and consistently validate the high visual quality of our generated images.

**W5:** Evaluations on new I2I models.

**A5:** Please see our response in **Reviewer FsK8's A3**. Thank you for your advice.

**W6:** The low ASR on SDv3.0.
**A6:** The relatively lower performance on SDv3.0 is primarily due to its explicitly filtered training dataset, as noted in [1]. Interestingly, when we directly used prompts to request nudity content on SDv2.1 and SDv3.0 without defenses, SDv2.1 easily generated such content, while SDv3.0 did not. Even adversarial prompts (e.g. QF, Ring, etc.) fail significantly more often on SDv3.0 compared to earlier SD models.

|Model|QF|Sneaky|Ring|MMA|MMA-Mask|
|-|-|-|-|-|-|
|SDv1.5|68%|48%|98%|100%|64%|
|SDv2.1|62%|46%|88%|94%|64%|
|SDv3.0|14%|34%|48%|46%|28%|

This suggests that SDv3.0 has less risk of generating NSFW content, regardless of attack method. Therefore, a potential future direction for enhancing I2I safety is to nullify the NSFW concept entirely by thoroughly cleaning the training data.

[1] Scaling rectified flow transformers for high-resolution image synthesis.

Thank you again for your valuable feedback.
How to set AdamW's weight decay as you scale model and dataset size
Accept (poster)
Summary: This paper proposes a simple framework for understanding weight decay in AdamW through its similarity to an exponential moving average (EMA) of weight updates. The authors hypothesize that the timescale of the EMA (analogous to the half-life) is a useful quantifier of the training dynamics. In particular, the authors show that the timescale should be preserved when scaling either the model width or dataset size. This is empirically validated by experiments across several (smallish) deep learning tasks. This offers useful guidance on how to scale the weight decay hyperparameter across task sizes, which is of significant value to the community. **Post-rebuttal update:** The authors have promised changes that sufficiently address all my concerns. I have raised my score from 3 (weak accept) to 4 (accept). Claims And Evidence: In general the claims made in the paper are decently supported by evidence. Below are some (minor) comments about the exact phrasing. **The weights generated by AdamW can be written as an EMA of recent weight updates:** Typically an EMA is viewed as a smoothing operation (e.g. in the Wikipedia reference provided in the paper). In this case the input sequence typically doesn't depend on the EMA output. This is not the case in AdamW, where later updates depend on the previous ones. With a typical EMA in signal processing, you would expect different values of the time constant to vary the amount of smoothing. In AdamW it can give you an entirely different input sequence as well (fundamentally different, more like a dynamical system than an EMA). To derive Equation 9, there is also the key assumption of the initial weights being zero (almost never the case in deep learning). Finally, the learning rate changing over time would skew the exact EMA form. Overall I feel it would be more fair to say that it can be **approximated as an EMA**, with these core assumptions or simplifications prominently stated.
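The exact-EMA identity under those simplifying assumptions (zero initial weights, constant learning rate, and — crucially — an update sequence treated as fixed rather than state-dependent) is easy to check numerically. This is an illustrative sketch of that algebra, not code from the paper; the update sequence `u` is a stand-in for Adam-normalized updates:

```python
import numpy as np

rng = np.random.default_rng(0)
eta, lam, steps = 0.1, 0.05, 200
u = rng.normal(size=steps)  # stand-in for Adam-normalized updates (held fixed)

# Decoupled weight decay recursion with w_0 = 0:
#   w_{t+1} = (1 - eta*lam) * w_t - eta * u_t
w = 0.0
for t in range(steps):
    w = (1 - eta * lam) * w - eta * u[t]

# The same trajectory written as an EMA of the scaled updates -u_t / lam,
# with decay coefficient eta*lam (so timescale 1 / (eta*lam)):
m = 0.0
for t in range(steps):
    m = (1 - eta * lam) * m + (eta * lam) * (-u[t] / lam)
```

The two recursions agree to floating-point precision, so the EMA timescale 1/(η·λ) characterizes how quickly old updates are forgotten — but only because `u` was held fixed here, which is exactly the simplification this review highlights.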
**The optimal weight decay scales with 1/dataset size:** I think this is conditioned on the learning rate being constant. There are some other works that claim the learning rate should be lower for longer runs (see Schaipp 2501.18965 for a recent example). If that modification is made I am not sure the optimal weight decay is lower. The paper focuses on showing the EMA time coefficient for longer runs is lower, but there are two avenues for lowering this (via either $\eta$ or $\lambda$). **The optimal weight decay increases with model width:** Similar to the previous point, I believe this assumes the learning rate is scaled via muP. If the learning rate is kept constant (no muP) the optimal weight decay may stay the same. Methods And Evaluation Criteria: The proposed framework of viewing weight decay as an approximates EMA makes sense and the scaling rules are appropriate and useful for deep learning. Theoretical Claims: I read through them but did not verify the math step by step. Experimental Designs Or Analyses: I reviewed the experiments and they seem reasonable. It would be nice to see experiments on larger scale networks or datasets, but I understand this may not be feasible. Supplementary Material: Yes, I looked through all the supplementary material. Relation To Broader Scientific Literature: The paper makes a decent attempt at discussing related concepts in literature. However there are several places where I feel the connections are not sufficiently detailed. **Using decoupled weight decay in muP**: This paper proposes scaling the weight decay inversely with the learning rate in PyTorch AdamW. This is equivalent to keeping the weight decay constant in the original “fully-decoupled” AdamW. Other works found this empirically before this paper (e.g. Wortsman 2309.14322). 
I feel this should be made more explicit in the paper, especially since one of the highlighted claimed contributions is showing “When using AdamW with fixed weight decay, μP learning rate scaling breaks down, but proper weight decay scaling restores its effectiveness”. I believe this exact conclusion could be drawn from prior work as well. **Effective learning rates:** Viewing AdamW as an update-EMA is **very closely related** to using “relative” (or “rotational”) effective learning rates (Wan 2006.08419, Kosson 2305.17212). The paper briefly mentions these works but does not explain the relation (which is crucial for understanding how this work relates to a large body of prior work on weight decay). The relative learning rate specifies how large the relative change in the weights is on average. Intuitively it shouldn't be too hard to see why this is conceptually related to an EMA, decaying the prior weights at each timestep by e.g. 10% may result in something like a 10% relative change in the weights on average. This can also be formally shown. Given some conditions on the input sequence which are also needed for most of the EMA interpretations to make sense, there is a direct mapping from the EMA decay coefficient to the relative learning rate in the steady-state (see brief sketch below). The other way also works but is less specific. EMA is one way of achieving a given relative learning rate, but there are others e.g. for scale-invariant systems you can exponentially increase your learning rate schedule instead of doing EMA (Li 1910.07454) or keep the weight norm constant and modify the update size accordingly (Kosson 2305.17212). EMA to Relative Updates: Consider an EMA given by $a_{t+1} = (1-\gamma)a_t + \gamma b_t$. Let's assume that the $b_t$ values are independently random and identically distributed over time with mean zero, and focus on the relative updates in the steady state (large $t$). 
The relative update size for this scalar case is best defined as $r = \sqrt{\mathbb{E}[(a_{t+1}-a_{t})^2] / \mathbb{E}[a_t^2]}$. To compute $r$ we need to compute $\mathbb{E}[(a_{t+1}-a_t)^2] = \gamma^2 (\mathbb{E}[b_t^2] + \mathbb{E}[a_t^2])$, using independence and mean zero. We can relate $\mathbb{E}[a_t^2]$ to $\mathbb{E}[b_t^2]$ by expanding $a_t$ in terms of $b_t$, squaring, removing the cross terms that are zero due to independence, and finally approximating the sum as infinite (large $t$). This gives $\mathbb{E}[a_t^2] = \frac{\gamma^2}{2\gamma - \gamma^2} \mathbb{E}[b_t^2]$, which results in $r=\sqrt{2\gamma}$. Plugging this in **gives an exact match between the EMA timescale proposed in this work and the prior work on effective learning rates in AdamW** (Kosson 2305.17212). Overall I feel the EMA timescale, EMA decay rate, and the relative learning rate are essentially just slightly different characterizations of the same underlying phenomenon. The time coefficient may be a useful quantity as the authors argue, but it feels more like a slight variation of existing works rather than a fundamentally new perspective on weight decay. Like the time coefficient, the effective learning rate also has its strengths. For example it is better defined when the learning rate is changing, it can be measured directly, it allows you to obtain the same learning dynamics without weight decay and/or eliminate the arbitrariness in the weight norms, and is useful for transferring hyperparameters from one optimizer to another.
These include identifying two separate processes for the learning rate and weight decay (lines 387L to 398L) and the scheduling effects where $\eta$ matters at the beginning but $\eta\lambda$ later in training (line 402L). Note that as discussed by Kosson 2305.17212, changing the global learning rate while keeping the weight decay fixed also affects the learning dynamics of gains and biases (i.e. 3 effects not just the 2 mentioned), something that matters in practice as observed in Appendix A but is not discussed around lines 376R and 393L. Essential References Not Discussed: Yes, I feel that key components of this work are closely related or overlap with prior literature on effective learning rates (see above), but this is not sufficiently discussed. Other Strengths And Weaknesses: The main strength of the paper is providing a simple view on weight decay and practical guidance on how to set it backed by sufficient although small scale experiments. The main issues are undisclosed relations to prior work and sometimes a lack of clarity or specificity. I think these can be easily addressed which I have already reflected in my rating. The discussion in the paper feels somewhat limited. In particular there are important questions that are not brought up or addressed: - The paper conjectures that the EMA time horizon should be kept constant but offers little to no explanation for why this should be the case. - With a learning rate schedule the time coefficient changes throughout training. Is this important? Other Comments Or Suggestions: I think characterizing (Blake 2024) and (Bjorck 2024) as "follow-up works that follow up on the concepts presented in this paper and cite us" is a bit of a stretch (they seem more like concurrent works). You use "swiped" in several places where I think you might mean "swept". The paper could use a thorough read through for spelling mistakes. 
Questions For Authors: Q1: Do you agree with my characterization of the relationship between the proposed EMA view and relative learning rates or do you know of a better way to explain this relation? Q2: Which parts of this paper are specific to AdamW and would not apply to SGD with momentum, Lion or other optimizers? Q3: How would you justify the time coefficient remaining constant during scaling? Q4: Does the fact that the time coefficient varies throughout training matter and which value should be transferred? Q5: Is a lower weight decay strictly better for longer runs / larger datasets or is decreasing the learning rate as good? I would consider raising my score if the authors can convincingly answer these questions and address the other concerns I described before about clarity and relation to prior work. Code Of Conduct: Affirmed. Overall Recommendation: 4
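The steady-state relation $r=\sqrt{2\gamma}$ sketched in the review above can be reproduced with a short Monte Carlo simulation (my own illustrative check; the function name and constants are arbitrary):

```python
import numpy as np

def measured_relative_update(gamma, steps=200_000, burn_in=10_000, seed=0):
    """Simulate a_{t+1} = (1 - gamma) * a_t + gamma * b_t with i.i.d. b_t ~ N(0, 1)
    and estimate r = sqrt(E[(a_{t+1} - a_t)^2] / E[a_t^2]) in the steady state."""
    rng = np.random.default_rng(seed)
    b = rng.normal(size=steps)
    a = np.empty(steps + 1)
    a[0] = 0.0
    for t in range(steps):
        a[t + 1] = (1 - gamma) * a[t] + gamma * b[t]
    diffs = np.diff(a)[burn_in:]   # a_{t+1} - a_t after burn-in
    vals = a[burn_in:-1]           # a_t over the same steps
    return np.sqrt(np.mean(diffs**2) / np.mean(vals**2))

gamma = 0.1
r_hat = measured_relative_update(gamma)  # prediction: sqrt(2 * gamma)
```

The measured value lands within a percent or two of $\sqrt{2\gamma}$, consistent with the claimed match between the EMA timescale and the effective learning rate.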
Rebuttal 1: Rebuttal: Thanks for your extraordinarily extensive and thoughtful review! We agree with all your points and have extensively updated our working manuscript in response.

## Claims And Evidence

**Approximate EMA:** Fixed. We do NOT **assume that the initial weights are zero in Eq 9**! We assume that the initializations for the two networks use the same random noise, but different scalings (Eq 8). We are assuming the **learning rate does not change as we train on more data** (see 375r-379r, which mirrors a point from Kosson), and we assume **muP scaling for width**. We have (re)emphasized all these points in the manuscript.

### Q1 Prior work

Thanks for highlighting **Kosson**: This is a great paper that should have had a bigger impact. We cite Kosson in the related work (line 424l), stating that the key difference with our work is that Kosson does not discuss hyperparameter transfer. Nonetheless, we agree that there are more connections and have extended the discussion of Kosson in the Related Work, along with writing an "extended prior work" section that fully elaborates on this connection. Let us know if you'd like specific wording as we frame this connection!

**Wortsman** studies the relationship between the final performance and learning rate. In their Fig 6, they find that performance is insensitive to the learning rate, $\eta$, when fixing $\eta \lambda$, while it is more sensitive if $\lambda$ is fixed. We see this as directly connecting to the **Kosson** result, that $\eta$ matters early on and $\eta \lambda$ matters later on. The connections to our contributions on EMA and hyperparameter transfer are *definitely there*, but are far more indirect (though let us know if you're actually thinking of a different result from Wortsman).

### Q2

We believe the result is **more general than AdamW**, applying to any setting where the updates have roughly constant magnitude as the scale of the underlying parameters varies, e.g. Lion, Sophia, Muon, etc., but not SGD.
### Q3: EMA timescale constant

The EMA viewpoint suggests a reason why the time horizon should be constant. We have edited the manuscript to include this fuller discussion.

An EMA can very approximately be understood as averaging over the last $\tau_{iter}$ samples (Eq 7), as it downweights considerably any datapoints more than $\tau_{iter}$ steps ago. What does this mean for training neural networks? We assume that each datapoint provides useful information, so you can't drop or considerably downweight datapoints without harming performance. As discussed above, the EMA downweights considerably datapoints that it saw more than $\tau_{iter}$ iterations ago. Thus, to avoid considerably downweighting datapoints, $\tau_{iter}$ should be larger than the number of iterations required to see all the datapoints (i.e. one epoch).

The above reasoning suggests that you can set $\tau_{iter} = \infty$, or $\lambda = 0$, as that would average over every update. But it would also reduce AdamW to Adam. Of course, we know that AdamW (i.e. non-zero $\lambda$) works better. To resolve this conundrum, we conjecture that you don't want to average over all updates. We believe that updates from very early in training are detrimental, as they are based on early settings of the weights. To forget these initial updates we can use a $\tau_{iter}$ which is smaller than the total number of iterations in the training run.

These two observations give a "natural range" for the optimal $\tau_{iter}$: somewhere between the number of iterations for one epoch and the total number of iterations. This natural range is simpler if you measure the timescale in terms of epochs: it's somewhere between $1$ and the total number of epochs. We find experimentally that the optimal timescale does lie in this range (Fig 1, 2). As this natural range is fixed wrt model and dataset size, we conjectured that the optimal EMA timescale was fixed.

### Q4

We transfer the sequence of timescales through training.
We believe it's optimal to have a scheduled timescale across training: scheduling makes it easier to forget detrimental initial updates while doing long-timescale averaging towards the end of training. All existing "decoupled weight decay" implementations (e.g. Composer from MosaicML) change the timescale by changing the learning rate. While you could also use $\lambda$, we leave that for future work.

### Q5

We agree that you could **modify the timescale using either $\eta$ or $\lambda$**. Our motivation for modifying $\lambda$ is explained in the first paragraph of the Related Work (line 375r-379r). Related to this, we ran an extra [experiment](https://anonymous.4open.science/r/3MDG98-66F3/response_pdf.pdf) showing that if you fix $\lambda$, then the optimal $\eta$ changes as dataset size increases. In contrast, if you use our scaling for $\lambda$, then the optimal $\eta$ is fixed. We will run a complete set of experiments on this for the camera-ready.

---

Rebuttal Comment 1.1: Comment: Thank you for thoroughly addressing my concerns. I have no significant concerns remaining and will raise my review score to accept to reflect this. Some brief responses / acknowledgements:

- Equation 9: Yes, I meant Equation 7 for the EMA update, sorry.
- On Wortsman: Agreed, the paper focuses more on learning rate sensitivity than transfer. One of the practical takeaways for me was to use decoupled weight decay for muP, but it doesn't really justify it as muP breaking down rather than increased learning rate sensitivity. I think this is sufficiently different to justify your claim.
- Thank you for addressing the rest of the concerns and answering the questions, they all seem good to me.

---

Reply to Comment 1.1.1: Comment: Dear reviewer xMai, thank you for raising the score, and we sincerely thank you again for your insightful, detailed, and extensive comments.
With the suggested changes and clarifications included, we indeed find the manuscript to have much better clarity and specificity! Thanks, Authors
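The practical upshot of the timescale framing in the rebuttal above can be condensed into a small helper (an illustrative sketch under the paper's assumptions, with the EMA timescale $\tau_{iter} = 1/(\eta\lambda)$ converted to epochs; the function name and the example numbers are mine):

```python
def weight_decay_for_timescale(tau_epochs, lr, dataset_size, batch_size):
    """AdamW weight decay lambda such that the EMA timescale
    tau_iter = 1 / (lr * lambda), expressed in epochs, equals tau_epochs."""
    iters_per_epoch = dataset_size / batch_size
    return 1.0 / (lr * tau_epochs * iters_per_epoch)

# Holding lr, batch size, and the epoch-measured timescale fixed,
# doubling the dataset size halves the weight decay (the 1/dataset-size rule).
lam_small = weight_decay_for_timescale(2.0, 1e-3, 50_000, 128)
lam_large = weight_decay_for_timescale(2.0, 1e-3, 100_000, 128)
```

As discussed in the thread, the same timescale could instead be held fixed by shrinking $\eta$ rather than $\lambda$; this sketch reflects the authors' choice of adjusting $\lambda$.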
Summary: This paper studies the AdamW optimizer. The authors provide empirical studies on the hyperparameters of AdamW under different settings. Specifically, the paper first reformulates AdamW itself as an EMA, then provides experiments on ResNet, ViT, and LLM training, showing empirical weight-decay tuning rules for increasing dataset and model size. Finally, the authors discuss the relationship between weight decay and muP learning rate scaling. Claims And Evidence: yes Methods And Evaluation Criteria: yes Theoretical Claims: yes Experimental Designs Or Analyses: yes Supplementary Material: yes, I reviewed the whole appendix Relation To Broader Scientific Literature: None Essential References Not Discussed: None Other Strengths And Weaknesses: Strengths: - the authors conducted extensive numerical experiments to support their claims. - the reported results could be helpful in ML applications. Weaknesses: - the paper is mainly based on empirical results. Though the authors provided experiments on ResNet, LLM, and ViT, it could be difficult to generalize to all other machine learning models. - the reformulation of AdamW as an EMA looks trivial and incremental to me. The authors didn't clearly state the benefits of this reformulation. - though there is extensive experimental evidence, the theory part of this paper is weak. The authors didn't provide a solid theoretical understanding of the phenomena they observed. Other Comments Or Suggestions: none Questions For Authors: none Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: Thanks for your careful review.

# Value of the EMA perspective

We fully agree that the reformulation of AdamW to EMA is almost trivial. Our key insight was noticing that this almost-trivial connection provides novel, powerful insights into hyperparameter transfer for weight decay, which we then validated extensively. It is precisely the simplicity of our approach that makes it easy for people actually training neural networks to understand and apply. Indeed, we know that at least one "unicorn" is using training recipes inspired by the work here for training large-scale foundation models. Finally, we believe our work is **the very first paper** studying the problem of transferring weight decay parameters across problem sizes.

# Experiment design

As you noted, we took the rules for hyperparameter transfer suggested by the EMA view and validated them through extensive experiments on three major classes of machine learning models (ResNets, LLMs, and ViTs).

# Theoretical understanding

While our results are supported by Theorem 1 (proof in Appendix A1), we do agree that our results are mainly about validating the simple but powerful viewpoint on hyperparameter transfer provided by the connection between AdamW and EMA.
Summary: The authors study the scaling behavior of the optimal AdamW weight decay hyperparameter with respect to model and dataset sizes. They provide a theoretical insight by framing AdamW's learned weights as an exponential moving average (EMA) of recent updates, identifying the EMA timescale as the key underlying hyperparameter. Empirically, the authors demonstrate that the EMA timescale (in epochs) remains roughly constant across scales. ## update after rebuttal I confirm that I have read the authors' response to my review and have revised my review accordingly, where appropriate. Claims And Evidence: In theorem 1, is the "scale-invariant network" assumption too strong? Methods And Evaluation Criteria: Most experiments are conducted on vision tasks, which use multiple epochs, while LLM pre-training typically uses only a single epoch given the large data size. It would be great if the authors could give a table to summarize the scaling "rules" of weight decay in terms of model size, data size, batch size, etc. Theoretical Claims: The authors' interpretation of AdamW's weight updates as an exponential moving average (EMA) provides valuable insights. However, similar EMA-based interpretations of adaptive optimizers and their implicit constraints have been recently explored in related literature. In particular, Chen et al. (ICLR 2024, Theorem B.6, "Lion Secretly Solves Constrained Optimization: As Lyapunov Predicts") and Liu et al. (NeurIPS 2024, Theorem A.1, "Communication Efficient Distributed Training with Distributed Lion") explicitly derived EMA forms for optimizer updates, highlighting their implicit constrained optimization behavior. Additionally, Xie and Li (ICML 2024, "Implicit Bias of AdamW: ℓ∞-Norm Constrained Optimization") similarly analyzed AdamW's implicit constraints.
To appropriately contextualize and acknowledge prior work, the authors should clearly cite and discuss these papers, clarifying how their EMA formulation aligns with or differs from these recent studies. Experimental Designs Or Analyses: This leads to a direct rule for scaling AdamW weight decay: the optimal weight decay decreases with increasing dataset size and increases with model size (under the muP learning rate scaling recommendations). Supplementary Material: very detailed and clear, nice work. Relation To Broader Scientific Literature: The EMA of updates has been well studied, for example in Theorem B.10 of "Lion Secretly Solves Constrained Optimization: As Lyapunov Predicts". Essential References Not Discussed: The primary contribution is identifying and characterizing the scaling rule for weight decay. While 'Implicit Bias of AdamW: ℓ∞ Norm Constrained Optimization' illustrates the implicit bias introduced by weight decay, earlier work such as 'Lion Secretly Solves Constrained Optimization: As Lyapunov Predicts' establishes the connection between the norm of trained neural network parameters and the weight decay coefficient λ, demonstrating this relationship for both Lion and AdamW. Other Strengths And Weaknesses: Novelty and Impact: The topic is relevant and timely, addressing an important practical challenge in the era of large-scale neural networks. The paper contributes substantially to our understanding of hyperparameter transferability across model and data scales. Other Comments Or Suggestions: The EMA of updates can be seen as a nice perspective on weight decay. Have the authors used continuous time? Questions For Authors: Have you explored extending these insights to other adaptive optimizers, such as AdaGrad or AdaFactor, to investigate if similar implicit bias or parameter norm constraints exist? Code Of Conduct: Affirmed. Overall Recommendation: 5
Rebuttal 1: Rebuttal: Thanks for your extremely positive review!

Great catch with Chen et al. (Theorem B.6) and Liu et al. (Theorem A.1)! We have added a discussion of these papers to our working draft and adjusted our contributions section. In short, we aren't surprised that someone has used an EMA-like result/form as an intermediate result in other calculations. Our contribution is in noticing that this simple connection between AdamW and EMA provides a powerful lens on **hyperparameter scaling** specifically for weight decay.

We have also added a discussion of Chen et al. (ICLR 2024) and Xie and Li (ICML 2024) to the Related Work. As far as we can see, the "implicit constraints of AdamW" noticed in these works refer primarily to the relationship between *the scale of the weights* and *weight decay*, which we briefly discussed in Appendix C, Fig 11, but did not claim as a contribution. To our understanding, this work did not study the transfer/scaling of weight decay across problem sizes, which is our primary contribution. Though let us know if you're thinking of a different result here.

> In theorem 1, is the "scale-invariant network" assumption too strong?

We aren't quite sure what you mean by "too strong" here. Do you mean that we could prove the same result with a weaker assumption? While this may be possible, we don't see how. Intuitively, the need for the scale-invariant network assumption arises from the $1/\lambda$ scaling of the weights in AdamW. If the scale of the weights matters (i.e. the network is not scale invariant), then $\lambda$ on its own affects the learning trajectory, in addition to the EMA timescale and the ratio between the learning rate and the initialization scale $\rho = \eta / \sigma$.

> It would be great if the author could give a table to summarize the scaling "rules" of weight decay in terms of model size, data size, batch size, etc.

We have added this table to our working draft.
> Have you explored extending these insights to other adaptive optimizers, such as AdaGrad or AdaFactor, to investigate if similar implicit bias or parameter norm constraints exist? Not as of yet, but we do expect the same considerations to transfer across all settings with constant magnitude updates and decoupled weight decay, including SignGD, Lion, Sophia, Muon, SOAP etc. For instance, it was recently noted that Muon with decoupled weight decay has the same $1/\lambda$ scaling of the max eigenvalue (see section Weight Decay from [1]) [1] Why We Chose Muon: Our Chain of Thought? https://x.com/Kimi_Moonshot/status/1897929976948965870
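For context on the exchange above, the EMA view of AdamW that the rebuttal refers to can be written out in one line (a standard manipulation; $u_t$ here denotes the Adam update direction, notation introduced for this sketch rather than taken from the thread):

```latex
\theta_{t+1} \;=\; \theta_t - \eta\,(u_t + \lambda\,\theta_t)
            \;=\; (1-\eta\lambda)\,\theta_t \;+\; \eta\lambda\left(-\frac{u_t}{\lambda}\right),
```

so the weights are an exponential moving average of the rescaled updates $-u_t/\lambda$ with averaging timescale $\tau \approx 1/(\eta\lambda)$ steps; since Adam's normalized updates satisfy $\|u_t\|_\infty = O(1)$, the weight scale is set by $1/\lambda$, consistent with the $1/\lambda$ scaling discussed in this thread.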
Discrete Neural Algorithmic Reasoning
Accept (poster)
Summary: This paper addresses the problem of neural algorithmic reasoning, where the objective is to train a neural network to mimic each step of a given classical algorithm. The authors propose a novel architecture for this task. They partition the input graph instance into discrete and continuous components and process them separately. Specifically, the discrete component is handled similarly to previous approaches, while the continuous component is used exclusively in the attention weights of the graph neural network. Empirical evaluations demonstrate that their architecture achieves SOTA performance on several algorithmic tasks. ## update after rebuttal I appreciate the authors' rebuttal and will keep my original score. Claims And Evidence: Yes, the paper provides experiments and ablation studies to support their claims. Methods And Evaluation Criteria: The proposed method seems reasonable (though I still have some uncertainties about the details; see below). In certain algorithmic tasks, it does make sense to handle scalar information differently. The method was tested on a public dataset, but the authors only presented results for a subset of algorithmic tasks, without specifying the evaluation metric. It would be better to clarify the specific evaluation metric used and provide results for all algorithmic tasks in the dataset. Theoretical Claims: NA Experimental Designs Or Analyses: Yes, the experiment is sound. Supplementary Material: No. Relation To Broader Scientific Literature: The paper contributes to the field of neural algorithmic reasoning. The proposed method makes sense and could influence future research in this area, I think. Essential References Not Discussed: No Other Strengths And Weaknesses: - A more comprehensive architecture diagram, including the encoder and decoder modules, should be provided. The current architecture is somewhat confusing (see the questions below). 
- The performance on other algorithmic tasks in the benchmark dataset should be presented and the evaluation metric should be clarified. Other Comments Or Suggestions: . Questions For Authors: - In an algorithmic task with hints, does the proposed method require invoking the encoder and decoder at each step? As far as I know, previous approaches call the encoder and decoder at every algorithmic step. This actually raises another question I am confused about: what exactly does the discrete state refer to? Could you provide a concrete example using BFS? If these discrete states are simply the hints provided in the dataset, then except for the part of handling scalar information, the overall framework doesn’t seem significantly different from previous methods, as prior approaches also produced discrete states via the decoder, which were then fed into the next algorithmic step. - What are the specific node-level and graph-level evaluation metrics? Could you provide a concrete example? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive review and positive feedback! We address the questions below. > The performance on other algorithmic tasks in the benchmark dataset should be presented Let us highlight the main part of our contribution: we consider the proposed model in its current form as a potential answer to where perfect generalization might come from. In this sense, we investigate different architectural choices which are needed for different forms of generalization. We agree that a more general architecture is of great interest for future work. In its current form, the proposed model is not capable of executing some algorithms from the CLRS-30 benchmark; however, simple modifications can enhance its expressivity, e.g., by supporting more complex aggregations with top-k attention (with fixed k). Also, as we mention in Section 4.1, the model can be extended with additional architectural modifications, such as edge-based reasoning. For example, edge-based attention with top-2 attention (where each edge chooses the top 2 adjacent edges to receive the message from) can implement a form of triplet reasoning. In our work, we aim to describe the key principles that help to build perfectly generalizable models and do not focus on a general architecture capable of executing a wide range of algorithms. > what exactly does the discrete state refer to? Could you provide a concrete example using BFS? If these discrete states are simply the hints provided in the dataset, then except for the part of handling scalar information, the overall framework doesn’t seem significantly different from previous methods, as prior approaches also produced discrete states via the decoder, which were then fed into the next algorithmic step. Yes, except for the part of handling the scalar information the overall framework is similar to prior work. 
The main remaining difference is that we use a discrete bottleneck (essentially feed hint predictions to the processor at the next step) and do not use the hidden states from the previous steps. Most of the previous methods use both re-encoded hint predictions and previous hiddens. Our architectural choice is close to ForgetNet (not G-ForgetNet) from Bohde et al. (2024). From this perspective, the difference of our method from ForgetNet can be described as adding discretization to scalar updates, using hard attention, and stepwise training (i.e. training with teacher-forcing). We will add an architecture diagram with additional clarifications to the main text. > In an algorithmic task with hints, does the proposed method require invoking the encoder and decoder at each step? As far as I know, previous approaches call the encoder and decoder at every algorithmic step. Yes, similar to prior work, the encoder is the module which maps the discrete state to the high-dimensional vector and the decoder is the module which projects the high-dimensional vectors to the logits of hints/states. Similar to the hint re-encoding after each step in prior work, the encoder and decoder are invoked in the discrete bottleneck after each processor step. > What are the specific node-level and graph-level evaluation metrics? Could you provide a concrete example? For all covered problems, node-level metrics represent the accuracy of predicting correct pointers from each node averaged across all nodes in all test graphs. The exception is the MIS problem, where the node-level metric is the accuracy of predicting the correct binary class (in the MIS, not in the MIS) for each node. Graph-level metrics represent the accuracy of correctly predicting all node-level outputs in a graph, averaged across all test graphs. 
For example, for the BFS problem, the output of the algorithm is represented as a tree, where each node points to its parent in the BFS exploration tree (and the starting node points to itself). A pointer is a specific hint/output type which forces each node to point to exactly one node. Technically this is usually implemented via taking the softmax/argmax over the neighbors. We will add the example above to the paper. We thank the reviewer for the thoughtful questions and we are happy to discuss further.
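The node-level and graph-level metrics described in this rebuttal can be made concrete with a short sketch (illustrative code, not the paper's implementation; the function name and toy data are invented for this example):

```python
# Node-level metric: fraction of nodes with the correct parent pointer,
# pooled over all test graphs. Graph-level metric: fraction of graphs
# in which every pointer is correct.
def pointer_metrics(graphs):
    """graphs: list of (predicted_pointers, true_pointers) pairs."""
    node_correct = node_total = graph_correct = 0
    for pred, true in graphs:
        matches = [p == t for p, t in zip(pred, true)]
        node_correct += sum(matches)
        node_total += len(matches)
        graph_correct += all(matches)  # a graph counts only if fully correct
    return node_correct / node_total, graph_correct / len(graphs)

# Two toy BFS trees encoded as parent pointers; the second has one error.
node_acc, graph_acc = pointer_metrics([
    ([0, 0, 1], [0, 0, 1]),  # all three pointers correct
    ([0, 0, 0], [0, 0, 1]),  # one of three pointers wrong
])
# node_acc = 5/6, graph_acc = 1/2
```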
Summary: Neural reasoners are robust to noisy data but struggle with out-of-distribution data. Classic symbolic algorithms have complementary features – they are brittle with noisy inputs, but applicable to any out-of-distribution data. Authors propose a novel approach that guides neural reasoners to maintain the execution of classic symbolic algorithms, so that they could reason with out-of-distribution data. The execution trajectory of a classic symbolic algorithm is interpreted as a combination of finite predefined states. The approach identifies discrete states in continuous data flows, and uses a hard attention mechanism to force neural reasoners to align with discrete states. Claims And Evidence: Authors claim the proposed method is perfect (the word ‘perfect’ or ‘perfectly’ appears 22 times in the paper), evidenced by experiments on SALSA-CLRS benchmark datasets with 100% accuracy. Methods And Evaluation Criteria: The proposed approach is an encode-process-decode procedure. Input data is represented as a graph whose node and edge features are encoded into vector embeddings. The processor is a graph neural network that repeatedly updates the vector embeddings. Hint supervision trains the processor to follow the execution status of the original algorithm. When the processor finishes, the vector embeddings are fed into a decoder network. Theoretical Claims: Authors claim that they built fully discrete neural reasoners for different algorithmic tasks and demonstrated their ability to perfectly mimic ground-truth algorithm execution. Their experiments achieved perfect test scores (100%) on multiple algorithmic tasks (with guarantees of correctness on any test data). Experimental Designs Or Analyses: Authors conducted and analysed experiments within the statistical machine learning paradigm. Supplementary Material: I am not very clear about the detailed process described in Sections 3.3 and 3.4. 
It would be nice if the authors could give a concrete example in the supplementary material. Relation To Broader Scientific Literature: Yes. This important research is in line with the broader scientific research to improve the interpretability and reliability of neural networks. Essential References Not Discussed: Authors neglected the following two essential papers. Sphere Neural Networks for Rational Reasoning. https://arxiv.org/abs/2403.15297 Neural Reasoning for Sure Through Constructing Explainable Models. AAAI 2025 These two papers demonstrate a kind of novel neural network that achieves symbolic-level syllogistic reasoning, with theoretical proof that this network works for any out-of-distribution input data. Other Strengths And Weaknesses: Researching how neural networks can mimic symbolic algorithms is not only a nice idea but also of huge scientific and practical value. But the authors’ experiments do not fully support their claim. For a neural network to perfectly mimic a classic algorithm, it must achieve 100% on any out-of-distribution dataset. Using benchmark datasets is far from sufficient, even if these datasets contain out-of-distribution test items. Other Comments Or Suggestions: Authors use supervised learning to train the processor to mimic the execution trajectory of classic algorithms. In my opinion, it is impossible to provide data-independent theoretical proof that the trained processor follows the execution trajectory of classic algorithms. I would like to suggest that the authors weaken the claims (it is not necessary to make such strong theoretical claims here, as your experiments are good enough) and mention the methodological limitation above. Questions For Authors: 1. “We also enforce all node and edge features to be from a fixed finite set, which we call states.” Node and edge features are represented by vectors. You enforce them to be from a fixed finite set. 
Do you mean the members of the set are symbolic states of the classic algorithm? 2. All input data is represented as a graph, and the processor updates features of nodes and edges. Intuitively, what represents the states of classic algorithms, the configuration of the whole graph, or all features of nodes and edges, or features of some nodes or edges? 3. The main work of the proposed method is to correctly discretize continuous data flow to align with a classic algorithm. This is not yet the whole story to train a neural reasoner to mimic classic algorithms. Classic computing can be roughly stated as data + algorithm + knowledge. How can your method distinguish data from knowledge? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for the detailed review! We addressed the questions and concerns below. > I am not very clear about the detailed process that described from Section 3.3 and 3.4. It would be nice that authors can give a concrete example in the supplementary material. For example, consider the Dijkstra algorithm. The algorithm uses the edge weight to compute the shortest distance from the starting node to each node, building the tree step-by-step. When adding a new node B to the tree and assigning a node A as a parent in the tree, the algorithm computes the distance from the starting node to the node B by summing the distance to the node A with the weight of the edge AB. To mimic this behavior with our model, we use a scalar updater module. At the node level, this module receives as inputs the discrete states of the node and its neighbors and the scalar values stored in corresponding edges. Depending on the discrete states, this module updates the scalar values by one of the supported operations, which is described by the formula (lines 228-230 of the paper). In our example, this module will push the scalar (the distance from the starting node to B) from the node A to the edge AB (edge weight) and sum them. Thus, for edges in the shortest path tree, the scalar value on edges represents the exact shortest distance from the starting node, and for other edges the scalar value will represent the edge weight. This module allows one to perform any (predefined) operation with scalars needed for the algorithms, but with guarantees to choose the exact operation based on the discrete states, thus being robust to OOD scalar values. We will add this example (with an illustration) to the Appendix. Additionally, you can find Appendix C useful for this context. > Authors neglected the following two essential papers. Thank you for your suggestion, we will cover works on symbolic reasoning, including the mentioned ones, in the background section. 
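The Dijkstra scalar-update behavior described above can be sketched in a few lines (a toy illustration under my own naming, not the paper's ScalarUpdater module): the discrete edge state alone selects which operation is applied, so the arithmetic never depends on the magnitude of the scalars.

```python
def update_edge_scalar(edge_state, dist_parent, edge_weight):
    """Toy scalar update: the discrete state picks the operation."""
    if edge_state == "tree_edge":          # edge AB just added to the shortest-path tree
        return dist_parent + edge_weight   # push dist(A) onto the edge and add
    return edge_weight                     # any other state: keep the raw edge weight

# The same scalars (possibly OOD in magnitude), routed by discrete state only:
tree_val = update_edge_scalar("tree_edge", 3.0, 2.5)     # 5.5
other_val = update_edge_scalar("not_in_tree", 3.0, 2.5)  # 2.5
```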
> But, authors’ experiments do not fully support their claim. … In my opinion, it is impossible to provide data-independent theoretical proof that the trained processor follows the execution trajectory of classic algorithms. Our claim is based not only on the empirical results, but also on the design of our model. The discrete and size-independent design allows us to unittest all state transitions to ensure that the model will perform correctly on any test data. From the theoretical perspective, this is similar to proving that a specific algorithm (e.g. bubble sort or BFS) is working correctly for any input (by working correctly we mean that the outputs of the algorithm satisfy a certain condition), but with the difference that we do not need to prove the correctness of the algorithm, we only need to prove that the model will mimic the execution of the algorithm. Zooming to the node level, at each step, each node needs to select the neighbor and update its own state based on the current discrete state. Both selection and state transition can be tested independently from any input distribution, because there are only finitely many node/edge states. We refer to Appendix B for details of such a proof for the BFS problem. > Nodes and features are represented by vectors. You enforce them to be from a fixed finite set. Do you mean the members of the set are symbolic states of the classic algorithm? Yes. You can find a specific example in Appendix B. > All input data is represented as a graph, and the processor updates features of nodes and edges. Intuitively, what represents the states of classic algorithms, the configuration of the whole graph, or all features of nodes and edges, or features of some nodes or edges? In general, the overall state of the algorithm is represented by all features of all nodes and edges. For example, for the BFS problem, at each step, each node is either visited or not and each edge either represents a pointer in the BFS tree or not. 
At each step, the processor network updates node/edge features with message passing between adjacent nodes. > The main work of the proposed method is to correctly discretize continuous data flow to align with a classic algorithm. This is not yet the whole story to train a neural reasoner to mimic classic algorithms. Classic computing can be roughly stated as data + algorithm + knowledge. How can your method distinguish data from knowledge? Zooming to the node level, each node essentially has a state transition matrix (from the architectural perspective this is an MLP that updates node states depending on the current state and the received message from the selected neighbor). In this level, this update is similar to the finite state machine, so the data is the input states and the knowledge is encapsulated in the learned transition rules (e.g., MLPs). We hope that our response addresses your concerns and will be happy to answer any additional questions during the discussion period if needed. --- Rebuttal Comment 1.1: Comment: --..allows us to unittest all state transitions to ensure that the model will perform correctly on any test data. It is hard to believe unittest can exhaust any test data. Let us take an example of an arithmetic formula, e.g. (4.1*4.5+2.3)*(3.3+52.2/2.2). This formula can be represented by a tree structure (a graph). Unittest needs to test a unit neural module that mimics addition, a unit neural model that mimics multiplication, and a unit neural model that mimics division. It is hard for me to believe you can enumerate all the real numbers to test the three neural models. I assume you will develop neural networks to mimic addition, multiplication, and division. --the knowledge is encapsulated in the learned transition rules (e.g., MLPs). This kind of knowledge (e.g. MLPs) is normally called “procedure knowledge”, different from “declarative knowledge”, such as propositional logic with negations. 
MLPs can only approximate propositional logical reasoning. --- Reply to Comment 1.1.1: Comment: Thank you for being involved in the discussion! To address your concern, let us recall how the ScalarUpdate module is designed. In short, at each step each node/edge in a graph has a discrete state and a scalar value. The ScalarUpdate module can select one of the predefined operations depending only on a discrete state (i.e. which operation to perform, e.g. addition, multiplication, no operation) and apply this operation to scalar values. Thus, to understand whether the module mimics the desired logic (e.g. updates the distances in the Dijkstra’s algorithm) we only need to check whether the different discrete states activate the correct operations, but we do not need to check whether the selected operation is performed correctly for each scalar value. You can find additional examples of the ScalarUpdate module with various predefined sets of operations in Appendix C, as well as a formula describing the roles of discrete states and scalar values. We are happy to engage in further discussions. Unfortunately, we are not able to post more comments. However, if needed, we can edit/extend this comment with additional clarifications.
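The point debated in this thread — that the discrete transitions can be checked exhaustively while scalar arithmetic is delegated to fixed predefined operations — can be illustrated with a toy BFS-style transition table (hypothetical state names, not the paper's exact state set):

```python
from itertools import product

STATES = ["not_visited", "visited"]
MESSAGES = ["from_visited", "from_not_visited"]

def transition(state, message):
    """Next discrete node state given the message from the selected neighbor."""
    if state == "not_visited" and message == "from_visited":
        return "visited"  # the node gets discovered at this step
    return state          # otherwise the state is unchanged

# The state and message sets are finite, so every transition can be
# enumerated and checked once, independently of graph size or scalar values.
table = {(s, m): transition(s, m) for s, m in product(STATES, MESSAGES)}
# table has exactly 4 entries, covering all (state, message) pairs
```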
Summary: This paper introduces a novel approach to neural algorithmic reasoning by forcing neural networks to maintain execution trajectories as combinations of finite predefined states. Claims And Evidence: I think the claims made in this submission are strongly supported by the experimentation. The authors demonstrate good generalization scores on both in-distribution and out-of-distribution test data across all evaluated tasks. Methods And Evaluation Criteria: I believe the proposed methods are appropriate and well-motivated. The authors evaluate on the SALSA-CLRS benchmark which provides a challenging testbed with sparse graphs up to 100× larger than training graphs. Both node-level and graph-level metrics are reported, and comparisons are made against several baseline models. Theoretical Claims: no theory Experimental Designs Or Analyses: The authors compare against multiple baselines including GIN, PGN, and state-of-the-art models, and they evaluate on diverse algorithmic tasks with different computational requirements. They also test on graphs up to 100× larger than training graphs. Supplementary Material: Yes Relation To Broader Scientific Literature: I think this work makes some contributions to neural algorithmic reasoning. It builds upon prior work on the CLRS-30 and SALSA-CLRS benchmarks while addressing the fundamental limitation of poor generalization to larger graphs. Essential References Not Discussed: N/A Other Strengths And Weaknesses: 1. I find the approach conceptually simple yet highly effective 2. Perfect generalization to graphs 100× larger than training examples is impressive 3. The ability to formally verify correctness on any input is a significant advantage over previous approaches 4. The approach sacrifices some expressivity for perfect generalization, which may limit its applicability to certain problems 5. No discussion of the model size, and can it connect to LLMs (scale up)? 
Other Comments Or Suggestions: My major question is whether this method can scale up to LLMs. If the authors can show this in an experiment, I will raise my score. Questions For Authors: How does your approach scale to algorithms requiring more complex state spaces? For example, algorithms that need to maintain ordered collections or hierarchical structures? For problems where the ground truth algorithm is unknown, how might we determine the appropriate number of discrete states needed? Is there a principled way to discover minimal state representations? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review and feedback! We address the raised concerns below. > The approach sacrifices some expressivity for perfect generalization, which may limit its applicability to certain problems Let us highlight the main part of our contribution: we consider the proposed model in its current form as a potential answer to where perfect generalization might come from. In this sense, we investigate different architectural choices which are needed for different forms of generalization. While there are several ways to improve expressiveness without losing the generalization at all, we can also improve expressiveness by reducing the generalization (as we briefly mention in lines 430-435): - Removing hard attention: as we demonstrate in our ablation experiments, using regular attention instead of hard attention yields perfect test scores for the BFS problem, but it is possible to construct adversarial examples with large neighborhood sizes where performance drops. While for more complex attention patterns (besides strictly attending to the single node) the OOD performance might be less robust, the expressivity gain is significant. - Removing feature discretization, but updating scalars with discrete operations: as shown by prior work and our ablation experiments, learning precise continuous manipulations is non-trivial and small inaccuracies in such manipulations can significantly affect the overall performance of NARs. Thus, we can use the proposed separation between the discrete and continuous data flows and not discretize node/edge features at all (and use discretization in ScalarUpdater). However, for non-attention-based models one needs to come up with how scalars will affect the discrete flow. > No discussion of the model size, and can it connect to LLM (scale up?) For our experiments we use a model with hidden size 128 and a total of 400K parameters, which corresponds to the prior work in the field. 
While there is no direct need to scale up the models on the covered tasks, none of the proposed ideas rely on the models being small. Importantly, there is some connection between our work and [1]. In short, TransNAR [1] discusses a method to enhance the reasoning capabilities of large language models with a task-specific GNN-based NAR model, where any NAR model can be used as an “internal tool”. Thus, replacing the baseline GNN with the proposed DNAR model improves the quality of the tool that the language model can use, and the overall performance will be limited only by the correctness of the “tool usage”, and not by the inaccuracies of the tool itself. However, there are some difficulties in measuring the direct effect of including the proposed DNAR model in the TransNAR pipeline, as the source code for TransNAR is not yet publicly available. [1] Bounsi et al. (2024), Transformers meet Neural Algorithmic Reasoners > How does your approach scale to algorithms requiring more complex state spaces? For example, algorithms that need to maintain ordered collections or hierarchical structures? We think that for any specific algorithm it is straightforward to modify the architecture (or only the hints, keeping the architecture the same). However, the main challenge arises with the goal of building a universal architecture for different algorithms and data structures. In its current form, the proposed model is not capable of executing some algorithms from the CLRS-30 benchmark; however, simple modifications can enhance its expressivity, e.g., by supporting more complex aggregations with top-k attention (with fixed k). Also, as we mention in Section 4.1, the model can be extended with additional architectural modifications, such as edge-based reasoning. For example, edge-based attention with top-2 attention (where each edge chooses the top 2 adjacent edges to receive the message from) can implement a form of triplet reasoning. 
In our work, we aim to describe the key principles that help to build perfectly generalizable models and do not focus on a general architecture capable of executing a wide range of algorithms. > For problems where the ground truth algorithm is unknown, how might we determine the appropriate number of discrete states needed? Is there a principled way to discover minimal state representations? We think that different forms of discrete search are possible techniques for this problem. Additionally, possible techniques can be inspired by finite automata theory, e.g., constructing an automaton from examples of strings from some regular language. After finding some feasible set of states, one can iteratively find equivalent states and merge them. However, we leave a deeper investigation of no-hint discrete neural reasoners for future work. We hope that our response addresses your concerns and will be happy to answer any additional questions during the discussion period if needed.
Summary: The authors define a learning paradigm where they force a neural reasoner to stay exactly on an execution trajectory as provided by the algorithm they aim to imitate, thus achieving perfect generalization. The architecture allows for verification. They highlight three crucial architectural choices: feature discretization, hard attention, and discrete/continuous data flow separation. ## update after rebuttal: i have updated my score, see last comment. Claims And Evidence: Clear and convincing evidence. Authors do not overclaim. Methods And Evaluation Criteria: The evaluation criteria make sense and the method looks sound. Theoretical Claims: There are no significant theoretical claims. Experimental Designs Or Analyses: The experimental design and the analyses make sense. Supplementary Material: I did not check the supplemental material. The experiments seem sound. Relation To Broader Scientific Literature: I am not well versed enough in the broader literature to gauge this. Essential References Not Discussed: see above Other Strengths And Weaknesses: Strengths: I enjoy the perfect and provable accuracy with OOD guarantees due to algorithm verification (interpretability). It is very appealing that the models can do multiple tasks at once, albeit needing task-dependent encoder/decoders. Weakness: The method mostly needs supervision from the algorithms they aim to mimic to work well, making this much less useful in practice than it could otherwise be. Their experiments without hints in Sec 6 show that for small problems some advances can be achieved, but they do not seem to be compared to existing work. I am not an expert in this area and I wonder about the scope of this work. It seems very narrow, esp. given the above limitation. 
Other Comments Or Suggestions: In 3.3 (and 3.4): I think a simple example (with an example algorithm like Dijkstra and its inputs illustrated) would help to understand why this treatment of the scalars makes sense. You can put 4.4 in the appendix; that doesn’t really need to be in the main text. Questions For Authors: What do you mean by annealing of the attention weights? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: Thank you for your review of our paper! We address your questions and concerns below. > The method mostly needs supervision from the algorithms they aim to mimic to work well Let us note that for the current state of the field, learning with hints is an important and unsolved problem. E.g., a large body of research (Section 2.1 of the paper), including the state-of-the-art approaches (Bevilacqua et al., 2023; Bohde et al., 2024), heavily relies on different forms of carefully designed step-by-step hints and is not applicable to no-hint learning without additional modifications. However, we fully agree that learning without hints is an important and challenging problem for further developments of NAR. > Their experiments without hints in Sec 6. show that for small problems some advances can be achieved, but they do not seem to be compared to existing work We will add no-hint scores for baselines to the paper. As we write in Section 6, we focus only on the BFS algorithm as it is well-aligned with the message-passing framework, has short roll-outs, and can be solved with a small state count. Even in this setup, we do not outperform baselines. Additionally, we would like to highlight an important property of the proposed models which supports our motivation to focus only on small problems: the correct state transitions can be learned from trivially small graph sizes. For example, for the BFS problem, it is enough to use graphs with only 3 nodes to observe that a not_visited node becomes visited or not depending on the received message. However, the subtask of selecting the parent from multiple visited neighbors requires at least 4 nodes (where the minimum sufficient example is the complete bipartite graph K(2, 2)). To demonstrate that, we conducted additional experiments to empirically find the smallest training size for perfect fitting of each covered algorithm. 
For this experiment we used training with hints, but we consider such examples as additional evidence of the prospects of learning with small graph sizes. We train our models for each problem on ER(n, 0.5) graphs for different n and test the resulting models on graphs with 160 nodes. Node-level scores on graphs with 160 nodes for different training sizes:

| Algorithm | 3 | 4 | 5 |
|-----------|----|-----|-----|
| BFS | 41 | 100 | 100 |
| DFS | 38 | 100 | 100 |
| Dijkstra | 13 | 26 | 100 |
| MST | 11 | 14 | 100 |
| MIS | 79 | 100 | 100 |
| Ecc. | 45 | 100 | 100 |

Note that the empirical bound is around 4-5 nodes. We leave a deeper investigation of learning models without hints for future work. > What do you mean by annealing of the attention weights? By annealing of the attention weights we mean the convergence to zero of the maximum of the attention weights as the count of neighbors tends to infinity. We refer to Appendix A (hard attention) for a specific graph construction which demonstrates that increasing the neighborhood size of a node breaks the ability of that node to select the most important neighbor, which supports our choice of hard attention for strong size generalization with guarantees. > In 3.3 (and 3.4): I think a simple example (with an example algorithm like dijkstra) would help to understand why this treatment of the scalars makes sense For example, consider the Dijkstra algorithm. The algorithm uses the edge weight to compute the shortest distance from the starting node to each node, building the tree step-by-step. When adding a new node B to the tree and assigning a node A as a parent in the tree, the algorithm computes the distance from the starting node to the node B by summing the distance to the node A with the weight of the edge AB. To mimic this behavior with our model, we use a scalar updater module. 
At the node level, this module receives as inputs the discrete states of the node and its neighbors and the scalar values stored on the corresponding edges. Depending on the discrete states, this module updates the scalar values by one of the supported operations, as described by the formula (lines 228-230 of the paper). In our example, this module will push the scalar (the distance from the starting node to A) from the node A to the edge AB (edge weight) and sum them. Thus, for edges in the shortest-path tree, the scalar value on the edge represents the exact shortest distance from the starting node, and for other edges the scalar value will represent the edge weight. This module allows one to perform any (predefined) operation with scalars needed for the algorithms, but with guarantees to choose the exact operation based on the discrete states, thus being robust to OOD scalar values. We will add this example (with an illustration) to the Appendix. Additionally, you may find Appendix C useful in this context.

We hope that our response addresses your concerns and will be happy to answer any additional questions during the discussion period if needed.

---

Rebuttal Comment 1.1: Comment: Thank you for the additional experiment. Please make sure to include the Dijkstra example (and possibly even a visual illustration) in the camera-ready if this paper gets accepted. It helps. I will update my score to 3, as I think this paper gives some interesting insights.
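For reference, the distance update described in the rebuttal above (when node B joins the tree with parent A, sum the distance to A with the weight of edge AB) is the classical Dijkstra relaxation. A minimal plain-Python sketch on a hypothetical 3-node graph (the adjacency list and node names are illustrative, not the paper's setup):

```python
import heapq

def dijkstra(adj, start):
    """Classic Dijkstra: repeatedly relax dist[B] = dist[A] + w(A, B)."""
    dist = {start: 0}
    parent = {start: None}
    heap = [(0, start)]
    while heap:
        d, a = heapq.heappop(heap)
        if d > dist.get(a, float("inf")):
            continue  # stale heap entry
        for b, w in adj[a]:
            if d + w < dist.get(b, float("inf")):
                dist[b] = d + w      # distance to A plus the weight of edge AB
                parent[b] = a        # A becomes B's parent in the shortest-path tree
                heapq.heappush(heap, (dist[b], b))
    return dist, parent

# Hypothetical undirected 3-node graph, each edge listed in both directions.
adj = {
    "s": [("a", 1), ("b", 4)],
    "a": [("s", 1), ("b", 2)],
    "b": [("s", 4), ("a", 2)],
}
dist, parent = dijkstra(adj, "s")
print(dist["b"], parent["b"])  # 3 a
```

Note that the scalar carried along each tree edge is exactly `dist[parent] + w`, which is the quantity the scalar updater module is described as computing.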
It Takes Two to Tango: Directly Optimizing for Constrained Synthesizability in Generative Molecular Design
Reject
Summary: This work aims to generate synthesizable molecules that meet multi-parameter optimization (MPO) objectives while simultaneously adhering to a predefined set of building blocks. They design reward functions based on chemistry principles and introduce the TANGO reward function to generate synthesizable molecules with enforced building blocks using reinforcement learning (RL). Claims And Evidence: 1. "On the other hand, retrosynthesis planning (Liu et al., 2017; Segler & Waller, 2017; Coley et al., 2017; Segler et al., 2018) proposes viable synthetic routes to a target molecule, and these models are often used as stand-alone tools to assess synthesizability." Retrosynthetic planning consists of two key components: a one-step retrosynthesis model and a search algorithm. I believe this claim should be revised for greater accuracy and clarity to avoid potential misunderstandings. 2. "Case 1: Starting-material Constrained Synthesis. A synthesis graph is considered starting-material constrained if at least one leaf node, m ∈ G(M, R), satisfies both of the following conditions: (1) m = b ∈ B_enf, and (2) depth(m) = max depth." Case 1 is trivial. In retrosynthetic planning, all leaf nodes of a synthetic route must be starting materials; otherwise, the search is unsuccessful. Therefore, I do not find the constrained synthesis proposed in this work to be novel or innovative. 3. "In the context of generative molecular design, previous work has shown that retrosynthesis models can be treated as an oracle and directly optimized for (Guo & Schwaller, 2024c)." In this context, "oracle" refers to an idealized or authoritative source of truth that provides highly reliable or correct answers. Treating retrosynthesis models as an oracle implies assuming they have near-perfect knowledge and decision-making capabilities in predicting retrosynthetic routes. However, in reality, these models have limitations and uncertainties.
In fact, the top-1 accuracy of existing retrosynthesis models on USPTO-50K is typically only 50%-60%, meaning that a significant portion of predictions are incorrect. Given this, it is unreasonable to treat retrosynthesis models as oracles. Methods And Evaluation Criteria: This paper focuses on synthesizable drug design and uses the solvable rate for evaluation. This makes sense for the problem. Theoretical Claims: No theoretical analysis. Experimental Designs Or Analyses: I checked the experimental design and think it is correct. Supplementary Material: I reviewed the supplementary material and checked the hyperparameter details. Relation To Broader Scientific Literature: The key contribution is related to our domain (synthesizable drug design). Essential References Not Discussed: N/A Other Strengths And Weaknesses: Weaknesses: 1. Coming from the machine learning community, I would like to clarify why this paper adopts $B_{ref}$ instead of using the entire building block space. The authors employ reinforcement learning (RL) for molecular generation, and if the building block space is too large, the reward signal during training becomes extremely sparse. This leads to low sampling efficiency, making it difficult to identify viable routes for optimization. By constraining the building block space to a smaller subset, the reward signal becomes denser, stabilizing the RL training process. However, restricting the building block space in this way significantly compromises the model's generalization ability. I do not believe this approach enables the proposed method to be truly applicable in practice. Furthermore, training RL from scratch essentially resembles a search strategy. In large language models (LLMs), RL is typically applied on top of a pretrained model rather than from scratch. A key factor in stabilizing RL training is having a strong initial starting point. Given this, I find the approach adopted in this paper unpromising. 2.
The TANimoto Group Overlap (TANGO) reward function proposed in this paper closely resembles the Process Reward Model (PRM) in LLMs, which assigns rewards based on the prediction process. However, recent approaches, such as the outcome reward model used in DeepSeek R1 [1], do not reward the process itself, as LLMs can easily exploit PRMs. The paper does not discuss the potential limitations of its proposed reward function. 3. It is incorrect to treat the retrosynthesis model as an oracle. Recent work [2] has shown that these models can generate phantom predictions, highlighting their inherent uncertainties. [1] DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. [2] Retro-fallback: Retrosynthetic Planning in an Uncertain World. --- update: I can't accept this paper currently. Other Comments Or Suggestions: Use post-training instead of RL from scratch. Questions For Authors: N/A Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback. We respectfully suggest that our work has been misinterpreted and we want to clarify our framework.

## **1. Clarify retrosynthesis uses single-step model + search algorithm**

We will clarify this in the updated manuscript.

## **2. Starting-material constraint is trivial, not novel, nor innovative**

We want to clarify what the starting-material constraint means. The reviewer points out that in retrosynthesis, all leaf nodes are starting materials. However, all leaf nodes only need to be **commercially available** for a search to be successful. We denote commercially available nodes as $B_{stock}$ (this work uses 17.7 million blocks derived from ZINC). The starting-material constraint means that at least 1 leaf node **at the max depth (first step of synthesis)** must be from $B_{enf}$ (in this work, $|B_{enf}| \in [5, 10, 100]$ and $B_{enf} \subset B_{stock}$). **This is non-trivial because it heavily constrains the search**. In our notation, we use $m = b \in B_{enf}$ and $depth(m)$ = max depth, as the reviewer also writes.

## **3. Retrosynthesis models are not oracles because they are not authoritative and top-1 accuracy is low**

We believe the fact that "oracles" are imperfect does not preclude the use of the term. In one of the most popular benchmarks in drug discovery, the Practical Molecular Optimization benchmark [1], the term oracle is introduced as any computational predictor. In that work, surrogate models are considered oracles, even though they are trained with limited data, so they are not expected to have global coverage. For retrosynthesis models, the reviewer rightly points out that top-1 accuracy is imperfect. However, top-1 accuracy itself is an imperfect measure of oracle effectiveness as **there are many ways to make the same molecule.** How, then, would one assess the usefulness of such alternative pathways?
We believe it is most convincing to reference real-life medicinal chemistry case studies, for example those presented in this paper [2], where retrosynthesis models positively impacted 7 commercial drug discovery synthesis projects. One can argue that, in LLMs, RLHF preference fine-tuning is also not an authoritative source of truth, as humans have biased preferences. Yet, there is still value in tuning with these reward models. Therefore, we believe the term oracle is reasonable, even if they provide approximations to the "true" value. The most important aspect is that one is aware of the limitations of the oracle, which we explicitly acknowledge in our conclusions section.

## **4. Why use $B_{ref}$ instead of entire block space? Large block space = sparse reward. Small block subset = reward denser but worse generalization**

We use $B_{enf}$ (we assume the reviewer meant this, please correct us if this is not true) because we are tackling **constrained synthesizability**. We kindly refer to our response to **Reviewer 5JW4**. Contrary to what the reviewer suggests, **a large block space actually makes the reward more dense, rather than more sparse.** More building blocks generally means more molecules can be "synthesized" from them. More synthesizable molecules means more molecules will have a reward, as TANGO assigns a reward to all synthesizable molecules **(Fig. 2)**. Non-synthesizable molecules receive 0 reward. The reviewer is correct that **enforcing** the presence of specific blocks may restrict generalization (in the sense of what is synthesizable). ***But this is exactly our problem setting and why constrained synthesizability is challenging.*** We recognize that the reviewer may have written this comment due to interpreting our framework as RL from scratch instead of post-training. However, we wanted to provide an answer to help clarify our work.

## **5. TANGO resembles PRM but we should use outcome reward model**

We would like to clarify that TANGO **is** an outcome reward model already. We reference Fig. 2 in our main text. Every synthesizable molecule has a synthesis pathway. TANGO measures the max similarity between every molecule node and the set of enforced blocks ($B_{enf}$). There are no intermediate rewards in our framework. Generating molecules in token space only rewards the final molecule to ensure the molecule is *chemically valid*. Therefore, TANGO returns an outcome reward given the final molecule and its synthesis pathway.

## **6. Use post-training instead of RL from scratch**

We are already using post-training. Our model is pre-trained on PubChem, which is a dataset of bioactive molecules. This pre-training procedure is detailed in Appendix A. This pre-training is performed once. Next, we post-train the model using RL. This is akin to LLM pre-training and RLHF fine-tuning.

We hope to have clarified our work and thank the reviewer for their time. Please let us know if there are additional questions!

[1] https://arxiv.org/abs/2206.12411 [2] https://pubs.rsc.org/en/content/articlelanding/2024/md/d3md00651d

---

Rebuttal Comment 1.1: Comment: 1. In fact, the paper [1] already adopts a subset of the full reaction template set. While your work focuses on reducing the size of the starting material set, I believe both approaches share a common goal: constraining the search space to improve the convergence of the reinforcement learning loss. [1] Amortized Tree Generation for Bottom-up Synthesis Planning and Synthesizable Molecular Design 2. I believe the term oracle is often abused in the drug discovery community. In reality, single-step retrosynthesis is far from reliable, and referring to it as an "oracle" is, in my opinion, not rigorous. 3. I recommend reviewing the DeepSeek R1 paper, which uses an outcome reward rather than a process reward.
This design choice is due to the large sampling space of LLMs and the resulting sparsity of useful rewards. Previous work relied on process rewards to provide more frequent feedback. However, with a strong pretrained model, outcome-based rewards can be used directly. Your paper takes an alternative route by reducing the size of the sampling space to improve reward signal quality, but this comes at the cost of reduced generalization. 4. The outcome reward used in DeepSeek directly reflects the feasibility of a synthesis path. It is a binary signal indicating whether the path is valid or not. I encourage you to take a closer look at the DeepSeek R1 paper for details. 5. Apologies for initially overlooking this point—but since your approach is based on a pretrained model, you should theoretically already have a strong initialization. As such, it may not be necessary to restrict the size of the starting material space purely to improve convergence.

---

Update: The reviewer didn't address my concern. I don't believe it is reasonable to restrict the block size to such a small value, as different products typically require different building blocks for synthesis.

---

Update: Additionally, some of the terminology used may mislead the community.

---

Reply to Comment 1.1.1: Comment: Thank you to the reviewer for engaging with us! We respectfully suggest there are fundamental aspects of our work that are being misinterpreted. We first define our problem setting again and then respond to the reviewer point-by-point.

## **What is constrained synthesizability?**

Our goal is to generate molecules that satisfy multi-parameter optimization **while additionally** satisfying the following 2 properties: 1. Synthesizable 2. **Constrained synthesizable** ***2*** is more difficult than ***1*** and is the central problem we are tackling. We first describe ***1*** and then extend to ***2***. Note that the set of reaction rules is **fixed** in both ***1*** and ***2***.
"Synthesizable" means there is ***any*** synthesis pathway to a molecule, i.e., **commercially available chemicals (denoted $B_{stock}$)** can be assembled into target molecule. This is what is done in all existing works, including [1] cited by the reviewer. We now define $B_{enf} \subset B_{stock}$ and $|B_{enf}| \in [5, 10, 100]$ which represents the set of enforced blocks. Note that $|B_{enf}|$ << $|B_{stock}| (17.7M) $. **Constrained synthesizable** means that the synthesis pathway uses chemicals from **both** $B_{enf}$ and $B_{stock}$. ***A molecule that is synthesizable does not mean that it can be synthesized **while** incorporating $B_{enf}$.*** We draw an analogy to DeepSeek R1. In their definition of **Accuracy reward** on page 6 of [2], it is stated: ***“the model is required to provide the final answer in a specified format (e.g., within a box), enabling reliable rule-based verification of correctness.”*** We imagine for a second that there is a magic function that can assess correctness *regardless* of output format. Then getting the correct answer (***synthesizable***) is easier than getting the answer correct **and** in the specified format (***constrained synthesizable***). Getting the correct answer **does not** imply the correct format. Similarly, in our problem setting, a molecule that has a synthesis pathway **does not** mean the leaf nodes incorporate the enforced blocks. We hope this conveys that our problem setting is different and more difficult than general synthesizability. ## **Responding to the reviewer’s points** **1. and 5.** We are not reducing the size of the starting material set, we continue to use the full **commercially available** set of 17.7M chemicals. We only want that the synthesis pathway incorporates at least 1 enforced block ***amongst*** these 17.7M commercially available chemicals. \ **2.** We respect the reviewer’s perspective and can change our terminology to “reward model/function” in the future version. 
\ **3.** DeepSeek R1 = DeepSeek-V3-Base + GRPO to *incentivize* reasoning [2]. They apply **Accuracy** and **Format** rewards which assess model **output** rather than the generation process, as the reviewer also states. **Our framework is the same thing**. We take the pre-trained "PubChem Base Model" (pre-trained on 88M molecules, which is a large dataset in chemistry) + RL to *incentivize* the generation of molecules that satisfy constrained synthesizability. In the exact same manner, our model has no inductive biases; it is guided **only** by the **outcome reward (TANGO)**. ***We are not reducing the size of the sampling space to improve the reward signal. The sampling space is not restricted at all. The model can generate any molecules it wants and the outcome reward guides the optimization. This is exactly the same as DeepSeek R1's workflow.*** \
**4.** The reviewer states that the binary outcome reward **directly reflects the feasibility of a synthesis path.** However, we show that binary outcomes can be sparse and brute-forcing constrained synthesizability with binary rewards is difficult. We refer to **Table 5, first row, in the Appendix**, showing that "Brute-force" (binary outcome reward) is unstable. Our contribution is formulating the TANGO reward, which is **still an outcome reward** but makes the reward **dense**. A molecule that is "synthesizable" does not imply "constrained synthesizable", yet there is meaningful information that can be learned from this. We refer to **Fig. 2 in our main text**, which shows **how** a non-zero reward can be returned even if a molecule's synthesis pathway does not contain an enforced block.

---

**Update:** We address "block restriction". There are 5, 10, or 100 enforced blocks and **17.7 million** general blocks. Suppose step 1 uses an enforced block -- barring chemical incompatibilities, step 2 can choose from up to 17.7M blocks. The next step can choose again from 17.7M blocks, etc.
Therefore, the presence of an enforced block still allows for an **enormous synthesizable space** (**See our response to Reviewer ogSL** where existing works use **much** smaller building block sizes). \ \ [1] https://arxiv.org/abs/2110.06389 [2] https://arxiv.org/abs/2501.12948
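The starting-material constraint discussed in this thread (at least one leaf node from $B_{enf}$ at the maximum depth of the synthesis graph) can be made concrete with a short sketch. The nested-tuple tree encoding and the block names below are illustrative assumptions, not the paper's actual data structures:

```python
def leaves_with_depth(node, depth=0):
    """Yield (leaf, depth) pairs from a synthesis tree.

    A node is either a leaf string (a purchasable block) or a
    (product, [children...]) tuple -- an illustrative encoding.
    """
    if isinstance(node, str):
        yield node, depth
    else:
        _product, children = node
        for child in children:
            yield from leaves_with_depth(child, depth + 1)

def is_sm_constrained(tree, b_enf):
    """True iff some leaf at the maximum depth belongs to the enforced set."""
    leaves = list(leaves_with_depth(tree))
    max_depth = max(d for _, d in leaves)
    return any(d == max_depth and leaf in b_enf for leaf, d in leaves)

# Hypothetical two-step route: blockA + blockB -> intermediate; intermediate + blockC -> product.
tree = ("product", [("intermediate", ["blockA", "blockB"]), "blockC"])
print(is_sm_constrained(tree, {"blockA"}))  # True: blockA sits at the max depth (first step)
print(is_sm_constrained(tree, {"blockC"}))  # False: blockC is a leaf, but not at the max depth
```

Note how a route can contain an enforced block (blockC above) and still fail the constraint, which is why the check is stricter than simple leaf membership.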
Summary: Through this paper, the authors propose TANimoto Group Overlap (TANGO), a reward function for constrained synthesizable molecule generation based on reinforcement learning. The proposed TANGO augments molecular generative models to directly optimize for constrained synthesizability while simultaneously optimizing for other properties relevant to drug discovery.

### **Update after rebuttal**

Thank you to the authors for the rebuttal. However, my concerns are not completely resolved. An approach like Synformer, which performs a nearest neighbor search with molecule fingerprints within a given set of building blocks, has some generalizability even if the BB set differs between the training and inference phases. And since Synformer does not need to predict intermediate molecules, but only the earliest BBs, there is no need to choose from which step to adopt original BBs and from which step to adopt enforced BBs. All BBs can be selected from the enforced set. Overall, I think the problem the paper is solving can already be addressed as a trivial extension of current methods, or at least, a comparison with a trivial extension of existing methods is essential, but not available. Claims And Evidence: The claims made in the submission are supported by evidence. Methods And Evaluation Criteria: The proposed methods and evaluation criteria make sense for the proposed problem. Theoretical Claims: There are no theoretical claims in this paper. Experimental Designs Or Analyses: There is no comparison with existing methods in this paper. In the existing synthesizable molecular design literature [1-4], a very popular strategy is to generate Morgan fingerprints of building blocks and then perform a nearest neighbor search on a library of predefined building blocks. I think that one can easily perform constrained synthesizable molecular design by restricting the library for the nearest neighbor search to the enforced building blocks.
What is the advantage of this work compared to this approach? I strongly think that extensive further comparison with existing works is needed. --- **References:** [1] Gao et al., Amortized tree generation for bottom-up synthesis planning and synthesizable molecular design, ICLR, 2022. [2] Gao et al., Generative artificial intelligence for navigating synthesizable chemical space, arXiv, 2024. [3] Cretu et al., SynFlowNet: design of diverse and novel molecules with synthesis constraints, ICLR, 2025. [4] Sun et al., Procedural synthesis of synthesizable molecules, ICLR, 2025. Supplementary Material: I briefly reviewed the supplementary material. Relation To Broader Scientific Literature: My biggest concern for this work is the limited novelty, both conceptually and technically. The proposed framework is a straightforward integration of a retrosynthesis prediction model (i.e., Syntheseus [5]) and a molecular generative model (i.e., Saturn [6]). Overall, it is more of a heuristic technique than an ML algorithm. I realize that combining two already existing methodologies does not always result in low-contribution work, but in this case it's a straightforward solution and I do not think there are any particular challenges in combining the two that this work solved. --- **References:** [5] Maziarz et al., Re-evaluating retrosynthesis algorithms with Syntheseus, NeurIPS AI4Science Workshop, 2023. [6] Guo et al., Saturn: sample-efficient generative molecular design using memory manipulation, arXiv, 2024. Essential References Not Discussed: The submission covers essential references. Other Strengths And Weaknesses: This work addressed a new problem in the ML domain: constrained synthesizability, but at the same time, it was not backed up with sufficient explanations and examples of why this is an important real-world problem. Other Comments Or Suggestions: -- Questions For Authors: -- Code Of Conduct: Affirmed. Overall Recommendation: 1
Rebuttal 1: Rebuttal: Thank you to the reviewer for their feedback and for the opportunity to clarify our work. `FP` = fingerprint `NN` = nearest-neighbor ## **1. Why is constrained synthesizability important?** We kindly refer to our response to **Reviewer 5JW4**. ## **2. Why existing synthesizable design works cannot be easily adapted to this problem setting** We first highlight characteristics of recent synthesizable design works and then discuss why adapting these models for constrained synthesizability is non-trivial. **SynNet** [1] * 147,505 blocks * `FP` + `NN` to select blocks * Pre-training dataset prepared by sampling blocks-reactions to generate synthetic trees **SynFormer** [2]: * Diffusion to select blocks, allowing scaling to longer `FP` * 223,224 blocks * Pre-training dataset prepared by sampling blocks-reactions to generate postfix notations [3] of synthesis **RGFN** [4]: * Either 350 or 8350 blocks * Memory constraints limiting scaling to larger block sets (see Appendix B.5 and Appendix D.3 in [6]) **SynFlowNet** [5]: * Scaling to **up to** (as stated in the paper) 200k blocks but most experiments are run with 10,000 blocks * Uses masking to enforce compatible blocks and reactions * Memory constraints limiting scaling to larger block sets (see Appendix B.5 and Appendix D.3 in [6]) **RxnFlow** [6]: * Scale up to 1.2M blocks * Non-hierarchical MDP formulation compared to hierarchical in RGFN [4] and SynFlowNet [5] allowing adaptation to changing blocks In all methods above, blocks are selected either by `FP` `NN` or softmax sampled. Constrained synthesizability means that a small set (in our work we considered sizes of 5, 10, 100) of enforced blocks are **present** in the synthesis pathways. With `FP`, the reviewer suggests doing `NN` search on the enforced blocks. 
However, it is non-trivial to decide at which step of the synthesis pathway to do this in the general case, e.g., choose a "normal" block at step $t$ and then force step $t+1$ to choose an enforced block? What if the enforced blocks are incompatible with the reaction? One may need to encode explicit biases into the generation process, which TANGO overcomes by *incentivizing* the learning process. One could *mask* the actions to guarantee enforced blocks, but the same consideration remains in *when* to do this and what to do about block-reaction incompatibility. For SynNet [1] and SynFormer [2], which require pre-training with enumerated "pseudo" routes, should one generate more routes with the enforced blocks to increase their sampling probability? Would this diminish the model's ability to generate diverse molecules? We believe these are open questions for model development. We next highlight a scalability limitation. We used 17.7M blocks, which is 2-3 orders of magnitude more than existing works, which need to change model dimensions and/or use more memory to increase the block set. We can scale to 100M blocks since we only need to store the SMILES. **But why use so many blocks?** It is not *unreasonable* to consider *any and all* blocks that are commercially available. Our framework also allows freely changing block sets, while the existing methods need to re-initialize (see [7], which applies Saturn using 5 distinct block stocks **without** re-training). ***TANGO guides Saturn, which is a model with no synthesizability inductive biases, to optimize for constrained synthesizability.
TANGO is general and can augment these existing models to do this and designing this function is the contribution of our work.*** However, we note that it may still not be straightforward as the lower sample efficiency of GFlowNet models can make this computationally prohibitive, i.e., constrained synthesizability alone is not enough, the molecules' properties must also be optimized (see [8] for a benchmark where GFlowNets are ranked 16/25 for sample efficiency and [7] for a Saturn/GFlowNet comparison). See also Appendix A.5 in SynFlowNet [5] showing that its sample efficiency is lower than REINVENT [9], which Saturn significantly outperforms (see Table 3 in [10]). **Lastly, we report docking scores across thresholds to show that our framework performs multi-parameter optimization.** ***We now have results showing TANGO can augment GraphGA (genetic algorithm) for constrained synthesizability and will update the manuscript when we are able to do so.*** We are eager to continue discussion and hope we clarified some points. Please let us know if there are follow-up questions! [1] https://openreview.net/forum?id=FRxhHdnxt1 [2] https://arxiv.org/abs/2410.03494 [3] https://openreview.net/forum?id=scFlbJQdm1 [4] https://openreview.net/forum?id=hpvJwmzEHX [5] https://openreview.net/forum?id=uvHmnahyp1 [6] https://openreview.net/forum?id=pB1XSj2y4X [7] https://pubs.rsc.org/en/content/articlehtml/2025/sc/d5sc01476j [8] https://arxiv.org/abs/2206.12411 [9] https://jcheminf.biomedcentral.com/articles/10.1186/s13321-017-0235-x [10] https://arxiv.org/abs/2405.17066
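To make the dense-reward idea in this thread concrete: TANGO scores the maximum similarity between any node of a molecule's synthesis pathway and the enforced blocks, rather than a binary "enforced block present" signal. A minimal pure-Python sketch, treating fingerprints as sets of on-bits (the toy bit-sets are assumptions, not real Morgan fingerprints, and this is not the full TANGO formula from the paper):

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints given as sets of on-bits."""
    if not fp_a and not fp_b:
        return 0.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def dense_reward(pathway_fps, enforced_fps):
    """Max similarity between any pathway node and any enforced block.

    Returns 1.0 when an enforced block is literally present in the pathway,
    and a graded value otherwise -- denser than a binary contains/not signal.
    """
    return max(tanimoto(node, block) for node in pathway_fps for block in enforced_fps)

# Toy bit-sets standing in for fingerprints of pathway nodes and enforced blocks.
enforced = [{1, 2, 3, 4}]
pathway_exact = [{9, 10}, {1, 2, 3, 4}]  # pathway contains the enforced block itself
pathway_close = [{9, 10}, {1, 2, 3, 5}]  # pathway only contains a similar block
print(dense_reward(pathway_exact, enforced))  # 1.0
print(dense_reward(pathway_close, enforced))  # 0.6
```

The graded value for the near-miss pathway illustrates why such a reward gives the RL agent a gradient-like signal even before any generated molecule's route contains an enforced block.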
Summary: The paper "It Takes Two to Tango" introduces a new approach to generative molecular design that explicitly optimizes for synthesizability under real-world constraints. The key problem is that existing molecular generative models often optimize for molecular properties (such as drug-likeness or docking scores) but fail to ensure that the generated molecules can be synthesized. This is especially relevant when specific starting materials or intermediates must be used. To tackle this, the authors propose TANGO (TANimoto Group Overlap), a reward function designed to transform the sparse binary signal from retrosynthesis models into a denser signal. By incorporating Tanimoto similarity, functional group overlap, and fuzzy matching, TANGO helps reinforcement learning models navigate chemical space more effectively. The generative model used is Saturn, a reinforcement learning-based autoregressive model operating on SMILES sequences. It is coupled with the MEGAN retrosynthesis model and the Retro* search algorithm to ensure that generated molecules are both synthesizable and optimized for specific chemical properties. Claims And Evidence: Overall, most claims in the paper are well-supported by experimental results, particularly the effectiveness of the TANGO reward function in improving reinforcement learning for constrained molecular generation. The authors provide strong evidence that their model can enforce synthesizability constraints while optimizing molecular properties, with extensive ablation studies confirming the advantage of TANGO over simpler similarity-based rewards. The results demonstrate that the method works across different synthesis constraints and successfully generates molecules that balance synthesizability with desired drug-like properties.
However, the discussion on true synthesizability is limited—since retrosynthesis models have imperfections, it would help to assess whether the generated molecules are chemically feasible beyond just passing retrosynthesis predictions. Adding a more in-depth discussion of these limitations would make the paper stronger. Methods And Evaluation Criteria: The proposed methods and evaluation criteria are mostly well-aligned with the problem of constrained synthesizability in generative molecular design. Theoretical Claims: No issues Experimental Designs Or Analyses: No issues Supplementary Material: N/A Relation To Broader Scientific Literature: The paper situates itself within the broader scientific literature on generative molecular design, retrosynthesis modeling, and reinforcement learning for molecule generation. It builds upon prior work in synthesizability-constrained molecular design and reinforcement learning-based molecular optimization while introducing a novel approach to enforcing chemical constraints directly within the generative model. Essential References Not Discussed: No Other Strengths And Weaknesses: Addressed before Other Comments Or Suggestions: Great paper! Questions For Authors: No Code Of Conduct: Affirmed. Overall Recommendation: 4
Rebuttal 1: Rebuttal: Thank you to the reviewer for their positive assessment of our work. The point about true synthesizability is very important to us, as experimental validation is always the end goal of generative design. As we are unable to update the manuscript version at this time, we wanted to provide more discussion in our response here. The major limitation of retrosynthesis models is that proposed pathways might suffer from general feasibility (does the reaction *actually* proceed?) and selectivity problems - regioselectivity, i.e., does the reaction proceed at exactly the position we want? And stereoselectivity, i.e., does the reaction yield the correct enantiomer as the major product? To this end, reaction feasibility/selectivity in retrosynthesis has been explored by integrating quantum chemistry information either through simulations [1] or the use of machine learning force fields [2] (we cite a recent example each and a recent review here [3]). Effectively and efficiently leveraging quantum chemistry information for reaction feasibility is an open research problem. However, going forward, more capable retrosynthesis models would immediately benefit our framework. Finally, in the reaction pathways shown in the main text, amide coupling reactions are predominantly present (partially owing to how common they are). In general, this is a relatively robust reaction. In the updated manuscript, we will explicitly comment on the feasibility of the reactions. [1] https://chemrxiv.org/engage/chemrxiv/article-details/671f791c1fb27ce124d5c98c [2] https://chemrxiv.org/engage/chemrxiv/article-details/67d7b7f7fa469535b97c021a [3] https://pubs.rsc.org/en/content/articlelanding/2025/sc/d5sc00541h
Summary: This paper focuses on the challenge of directly optimizing for constrained synthesizability in generative molecular design. Controlling the synthesizability of generated molecules is crucial for closed-loop discovery and robotic synthesis automation. Existing methods have limitations, and there is a lack of molecular generative models that can enforce specific building blocks in synthesis routes. The authors propose a novel reward function named TANimoto Group Overlap (TANGO) to address this issue. The experiments show that the TANGO reward function can guide a general-purpose molecular generative model to optimize for constrained synthesizability and perform MPO simultaneously. Claims And Evidence: 1. For Line 37 "Our framework is the first generative approach to successfully address constrained synthesizability". There might be an overclaim in this field. For example, some crystal material generation methods have already investigated constrained synthesizability from the perspective of structure stability. 2. Line 53: However, to date, there are no molecular generative models that can enforce specific building blocks in the proposed routes. I did not find a detailed comparison between the existing works and the proposed method in this direction. "More recently, constrained retrosynthesis algorithms have been proposed", could you provide more explanation for this? Methods And Evaluation Criteria: The experimental setup, including the choice of the drug discovery case study (optimizing docking scores against a specific protease and QED values), makes sense given the overall goal of generating useful molecules. The metrics used, such as Non-solved, Solved (Enforced), docking scores, QED values, and the number of reaction steps, are appropriate for evaluating the performance of the model in terms of synthesizability and multi-parameter optimization. Theoretical Claims: There are no theoretical claims.
Experimental Designs Or Analyses: Yes, I did not find any impactful issues in the experimental designs. Supplementary Material: I did not check the Supplementary Material. Relation To Broader Scientific Literature: This work proposes a new framework for constrained synthesizability in generative molecular design, and has potential impact on chemistry and materials science, as well as drug discovery. Essential References Not Discussed: Not found. Other Strengths And Weaknesses: 1. The core idea of TANGO and its motivation are not clear. In Figure 1, the rationale for simultaneously using enforced building blocks and optimizing other properties is hard to discern. Other Comments Or Suggestions: 1. The font in Figure 1 is too small. Questions For Authors: Beyond drug discovery, what is the potential impact on chemistry and materials science? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and an opportunity to clarify our problem setting. ## **Generative approach for constrained synthesizability. Why is this useful?** We answer all questions from the reviewer in this single response since they are related. **What is the problem of constrained synthesizability?** *Synthesizing* molecules can be thought of as stitching together “building blocks” (commercially available chemicals) with reactions. We will refer to this set of building blocks as $B_{stock}$. Constrained synthesizability extends this problem setting to also enforce that a **specific** set of building blocks is used in the synthesis; we denote this set $B_{enforced}$. We emphasize that there are **far fewer** enforced blocks: $|B_{stock}|$ is 17.7 million while we used $|B_{enforced}|$ $\in$ [5, 10, 100]. This is a significantly harder problem because the model must generate molecules that can be synthesized by using **both** $B_{enforced}$ and $B_{stock}$, and blocks cannot be freely combined together as they must be chemically compatible. ***TANGO enables a completely unconstrained generative model to learn how to satisfy this constraint using reinforcement learning, while performing multi-parameter optimization of other properties***. **Why should we care about enforcing specific building blocks?** Building blocks have variable costs. One can envision enforcing the use of the cheapest set of blocks from a commercial vendor to manage costs. We may want to re-purpose certain blocks into useful molecules. For example in [1], waste molecules from industrial processes are repurposed into medicines. By definition, this is constrained synthesizability because one wants to use these waste molecules ($B_{enforced}$) **together** with general chemicals that can be purchased ($B_{stock}$). One could conceive of using TANGO to repurpose waste into *de novo* molecules.
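To make the set-based framing above concrete, here is a toy sketch of a TANGO-style reward. This is our simplified reading for illustration only: building blocks are represented as plain fingerprint bit sets, and the reward is the best Tanimoto overlap between any block in a proposed route and any enforced block. The actual TANGO reward presumably operates on real molecular fingerprints (e.g., via RDKit) and includes further terms.

```python
def tanimoto(a: set, b: set) -> float:
    """Tanimoto (Jaccard) similarity between two fingerprint bit sets."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def tango_style_reward(route_blocks, enforced_blocks):
    """Toy TANGO-style score: how closely does the best-matching block
    in the proposed synthesis route overlap with an enforced block?
    A reward of 1.0 means an enforced block is used exactly."""
    return max(tanimoto(r, e) for r in route_blocks for e in enforced_blocks)

# A route reusing an enforced block exactly scores 1.0;
# a route with only partial overlap scores lower.
exact = tango_style_reward([{1, 2, 3}, {4, 5}], [{1, 2, 3}])
partial = tango_style_reward([{1, 2, 3}, {4, 5}], [{1, 2, 4}])
```

Such a smooth, overlap-based signal (rather than a hard used/not-used flag) is what would let a reinforcement-learning agent climb toward routes that incorporate enforced blocks.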
In drug discovery, we often want to generate molecules that share a common scaffold and diversify from this scaffold. In Fig. 3 right panel, we show this capability explicitly where a 1-step reaction from an enforced scaffold forms a new molecule with improved properties. An important capability of TANGO here is that > 1 block can be considered simultaneously, and it is not uncommon that one is interested in multiple scaffolds. There are existing works that can enforce a specific scaffold with pre-defined exit vectors to attach new chemical groups to, but TANGO is much more general. **Where is constrained synthesizability useful beyond drug discovery?** In this paper, we focused on organic molecules for drug discovery. Organic molecules encompass many **classes** of matter, for example organocatalyst design [3] and functional materials design like semiconductors [4]. TANGO could be extended to tackle these problems. **Overclaim of first generative approach and no discussion of existing works** In section “Synthesizability-constrained Molecular Generation”, we cited many works that have indeed tackled synthesizability in generative models. However, our problem setting is **constrained synthesizability**, which the existing works do not tackle. More specifically, we generate molecules with an **explicit synthesis pathway** (step-by-step recipe to make the molecule) that also incorporates specific blocks. In Appendix F, we show examples of these synthesis pathways and put a box around the specific enforced block used. While we operate in the organic molecules space, we are also, to the best of our knowledge, not aware of crystal (inorganic) generation works with constrained synthesizability. For detailed discussions of why existing works cannot easily be extended to constrained synthesizability, we kindly refer the reviewer to our response to **Reviewer ogSL**. **More recently, constrained retrosynthesis algorithms have been proposed.
More details about this** [5] is a specific recently proposed constrained retrosynthesis algorithm. **Retrosynthesis is a different problem setting to ours**: retrosynthesis involves generating a synthesis pathway **given a target molecule**. We do not have a “target molecule” in the same sense, as TANGO is guiding the model to **generate** molecules that have a synthesis pathway containing the enforced blocks **and** containing optimized properties. Furthermore, in [5], the algorithm is trying to propose a synthesis pathway that incorporates a **single** pre-defined block. TANGO can generalize to a **set** of blocks, shown by the results when we enforced 5, 10, 100 blocks. Thank you to the reviewer again and we would be eager to continue discussion! [1] https://www.nature.com/articles/s41586-022-04503-9 [2] https://pubs.acs.org/doi/10.1021/acs.jcim.1c00469 [3] https://onlinelibrary.wiley.com/doi/full/10.1002/anie.202218565 [4] https://pubs.rsc.org/en/content/articlelanding/2020/nr/c9nr10687a [5] https://openreview.net/forum?id=LJNqVIKSCr
3D Question Answering via only 2D Vision-Language Models
Accept (poster)
Summary: This paper proposes to address the task of 3D question-answering (3D-QA). The proposed method takes as input a set of posed RGB images and operates by only using 2D large vision language models (in this case LLAVA-OV). The method does not operate on any 3D input, and instead uses the set of available images to answer questions in 3D-QA benchmarks. As the method only relies on images, the key challenge is to select informative and diverse views that correspond to the task and are most helpful for answering the question being asked about the scene. To address this, the method proposes a view-selection module named cdViews, consisting of two sub-parts: (1) viewSelect is a module trained using annotations from 3D-QA datasets to identify the most critical and relevant views for answering the question, and (2) viewNMS is a module that essentially measures pairwise view overlaps to identify and select diverse views. The proposed method is evaluated on the ScanQA and SQA datasets. ### Update after rebuttal I appreciate the authors’ rebuttal and the clarifications provided. After carefully reading the response and considering the other reviews and discussions, I agree that the main contribution of the paper lies in demonstrating that 2D VQA models can outperform 3D-based models on current 3D-QA benchmarks. This is indeed a noteworthy and interesting finding, suggesting that strong performance on these benchmarks is possible by only relying on large vision-language models (LVLMs). But this finding does not yet persuade me that LVLMs are necessarily stronger for the 3D-QA task in the general sense as claimed in this paper. This could even be instead pointing to the limitations of the existing 3D benchmarks in terms of evaluating spatial relations. Overall, while the paper makes an interesting point, I believe the analysis of the results could be strengthened, especially in terms of exploring the role and limitations of 3D reasoning in the presented approach.
As noted in my initial review, I had raised several questions related to this, such as whether LVLM-based methods are truly expected to model spatial relationships in 3D scenes. While the rebuttal addressed some of these concerns, I find that some of my reservations remain, particularly with respect to the 3D spatial reasoning ability of the method and the method limitations. In the light of this, I will be keeping my original score (3). Claims And Evidence: - The submission claims to address 3D-QA, yet the method relies solely on 2D images—with the only 3D aspect being the viewNMS module that uses camera poses to select diverse views if I understood correctly. This raises questions about whether the approach truly tackles 3D-QA, which typically requires a full 3D scene representation to capture spatial relationships. Moreover, since 2D images often suffice for answering questions about nearby object pairs, it remains unclear if the method robustly handles _spatial_ reasoning. I indeed find it impressive to see that this 2D-only method can perform much better than 3D-based methods, which is an interesting finding of this submission. But I am not yet fully convinced that these gains can be attributed to the method proposed in this work. For instance, if one combined the 3D/2D hybrid baselines with LLAVA-OV, would we see better results, as they have access to 3D reconstruction as well (and can attend to the full 3D scene)? Also, further analysis of the types of questions answered well (and those not) by the proposed method could clarify whether the method truly understands broader spatial relations, such as identifying objects facing each other across a room and never visible together in a single image (e.g., "what is located right across the window" where the answer could be a TV far away). - [L153-156] “We are the first to leverage 2D LVLMs in a zero-shot manner to address 3D understanding tasks.” - This claim is partially incorrect.
Many prior works on open-vocabulary 3D scene understanding, such as OpenScene, OpenMask3D, and Open3DIS, already rely on 2D LVLMs to extract knowledge about a 3D scene in a zero-shot manner. While the claim might hold for this specific 3D-QA setting and the LVLMs such as LLAVA (instead of CLIP-style VLMs), it should be narrowed in scope to avoid misrepresentation. - [L155-157]: "We identify view selection as a critical factor in this zero-shot setting, a challenge that has not been explicitly addressed." – Again, I think that the scope of this statement should be narrowed as there are many open-vocabulary 3D scene understanding methods where object-related view selection has been explored. - [L125-127, right column]: "These methods improve 3D-QA performance, they come with trade-offs such as increased model complexity and data processing requirements. Furthermore, reconstructing 3D scenes from 2D views incurs significant computational costs, limiting the scalability in complex scenes." – While this argument is valid in terms of the costs associated with reconstruction, it does not consider the costs associated with running LVLMs multiple times on multiple views, which is the case in the proposed method. - [L142-144]: "They (2D-based methods) care more about the evaluation itself rather than explicitly exploring new methods to improve 2D LVLMs performance in 3D-QA." – This statement is vague and should be better substantiated. Methods And Evaluation Criteria: Proposed method is reasonable, and the evaluation methodology follow the established benchmarks for the task of 3D-QA. Theoretical Claims: N/A Experimental Designs Or Analyses: I think the experimental designs and analyses are generally sound, I did not identify a critical issue. Supplementary Material: I reviewed the supplementary material (appendix), which includes additional implementation details as well as a link to an anonymous repository to the source code of the project. 
In the supplementary text, I reviewed the additional experimental analysis. Relation To Broader Scientific Literature: The paper provides a comprehensive discussion of related works, and I appreciated the systematic discussion with 3D, 2D as well as 3D/2D hybrid methods. However, the paper should be more careful in making strong claims about being the first to leverage 2D LVLMs for 3D tasks ([L153-156]) in a zero shot manner, as prior works in open-vocabulary 3D segmentation (OpenScene, OpenMask3D, Open3DIS etc.) have followed similar zero-shot approaches. This could also be a language specificity issue, as these open-vocabulary methods largely use LVLMs such as CLIP, whereas the proposed method focuses on methods like LLAVA with conversational abilities. I think this should be made more clear in the discussion in order to more accurately position the paper. Essential References Not Discussed: The paper provides a comprehensive discussion of related works. However, it should be more careful in making strong claims about being the first to leverage 2D LVLMs for 3D tasks ([L153-156]), as prior works in open-vocabulary 3D segmentation have done something similar. Additionally, the discussion of view selection could reference prior multi-view reasoning works. Other Strengths And Weaknesses: _Strengths:_ - It is a well-written paper with very clear explanations. - The proposed methodology has an interesting take on the 3D-QA task, showing that even without having an explicit 3D scene representation it is possible to reason about the scene well compared to the purely 3D or hybrid 3D-2D methods for 3D-QA task. The findings are convincing, and I think the experimental analysis sheds light on the benefit of leveraging 2D LVLMs for 3D scene understanding tasks. - I appreciated the comprehensive discussion on related works. - The evaluation methodology is reasonable, and I also appreciated that the code is provided. 
_Weaknesses:_ - I was unable to see a discussion on the limitations, which is especially important given that the proposed method does not make use of a 3D scene representation of the scene while answering the questions related to the 3D scene. - As noted in the Claims and Evidence section, there are some open questions about the method’s 3D capabilities, especially its ability to capture relationships between objects that do not appear together in a single RGB frame. While this 2D-based approach shows promising performance, this might also reflect some limitations in the current evaluation benchmarks. In my opinion it is not substantially evaluated or demonstrated whether the method can capture broader-range spatial relationships. - Given the method's reliance on LLAVA-OV, it is not entirely clear to me whether the cdView component substantially contributes to the overall performance, considering that even a uniform view sampling method outperforms several 3D-based baselines. - Some claims such as the novelty of using 2D VLMs for 3D understanding in a zero-shot setting are a bit too strong and could benefit from phrasing with more care. Other Comments Or Suggestions: _Minor comments:_ - The problem formulation in [L162-164] defines the scene representation as a set of 2D views but does not explicitly include camera poses, which are necessary for the viewNMS module. This should be clarified. - L323: “.. indicates more criticality of V_i.” should be “higher criticality”. - L300 (right column): “omit the subscription for simplify” - “we omit the subscript for simplicity” - L359 (right column): “5.1 Comparisons with the State-of-the-Arts” should be “State-of-the-Art” Questions For Authors: I am happy to reconsider my score based on the answers to the following questions and the remarks discussed earlier in the Claims section: 1. How does the proposed method fundamentally differ from video-based VQA approaches, given that it does not use an explicit 3D representation? 
Can it reliably reason about object relationships when two objects are never observed together in a single view? 2. How does the computational cost of running LLAVA-OV multiple times on multiple views compare to the cost of performing 3D-QA using existing 3D-based or 3D/2D-based methods? Is the reduction in computational cost by not having to reconstruct a 3D scene comparable to the difference between 3D-QA inference runtimes for the proposed method and 3D-based methods? Also, it appears to me that the proposed method needs to store all RGB images at all times if I understand correctly. How do the memory requirements compare? 3. The paper does not explicitly discuss its limitations as far as I could see. What are the main failure cases where the method struggles, particularly in complex or large-scale scenes where a broader-range 3D understanding might be necessary? 4. If LLAVA-OV were integrated into existing 3D/2D hybrid methods, would we expect it to outperform the proposed approach? Did the authors explore this possibility? Code Of Conduct: Affirmed. Overall Recommendation: 3
Rebuttal 1: Rebuttal: ### **1. "... finetuning LLaVA-OV ..."** Due to space limitation, please kindly refer to the response of PgGq `2` --- ### **2. "... analysis of the types of questions" & "a discussion on the limitations ..."** We conduct a detailed analysis of question types on the SQA dataset, with per-type performance reported in Table S2 (Supplement). The results show that our zero-shot 2D LVLM-based method achieves strong performance across most question types, particularly excelling in “what”-, “which”-, and “other”-type questions. These categories primarily involve identifying objects and reasoning about their spatial configuration within the 3D scene. Our method aggregates multi-view observations to infer spatial layouts and object relations in 3D environments, even when key objects are never observed within a single view. As shown in [Figure R3](https://anonymous.4open.science/r/icml_rebuttal-35FF/failure_case_with_caption.pdf) (1st and 2nd sample), the model correctly infers relationships between spatially separated entities across the room. **Limitation**: In contrast, we observe relatively lower performance on “can”-type questions, which often involve agent-centric reasoning and require understanding how the scene would change with respect to the agent’s actions or viewpoint shifts (e.g., “Can I see the faucet if I turn around?”). These cases demand the modeling of dynamic viewpoints and agent-scene interaction, which are not fully captured by current 2D LVLMs in a frozen, zero-shot setting. Overall, our analysis confirms that frozen 2D LVLMs, when equipped with informed view selection, can already support a broad range of 3D spatial reasoning tasks. --- ### **3. "... broader spatial relations ..."** Our method is capable of reasoning about spatial relationships between objects that are never observed in the same view.
As shown in the second example of [Figure R3](https://anonymous.4open.science/r/icml_rebuttal-35FF/failure_case_with_caption.pdf), the printer, trash can, and paper cutter are distributed across different views with no co-observability. Despite this, the model accurately infers their spatial arrangement and correctly answers the question. This demonstrates that, even without explicit 3D reconstruction, our approach can integrate multi-view information to support global spatial reasoning. --- ### **4. "Strong claims"** - [L153-156]: change from "3D understanding tasks" to "3D-QA tasks" - [L155-157]: correct to "... as a critical factor in zero-shot 3D-QA, for which there is a lack of an efficient solution in prior works." - [L125-127, right column]: correct to "... but rely on explicit 3D reconstruction, needing additional models and causing more processing steps. In contrast, our method uses 2D views and feeds them into a unified 2D LVLM, which makes a simpler pipeline". - [L142-144]: correct to "They focus more on evaluating pretrained 2D LVLMs on 3D-QA tasks, rather than developing approaches to adapt and improve their performance for spatial reasoning." --- ### **5. "... cdView component contributes to the overall performance ..."** We agree that LLaVA-OV provides a strong foundation, and even uniform view sampling yields competitive results. This motivated us to explore further how to leverage well-trained 2D LVLMs for 3D-QA tasks, which led us to propose the lightweight learnable module cdViews. As we can see in Table 1 (manuscript), using cdViews brings consistent and clear performance improvements over uniform sampling across multiple benchmarks. Specifically, it yields +2.0% / +2.1% EM@1 and +7.0% / +7.1% CIDEr gains on the ScanQA test set (w/o objects), and +3.4% EM@1 improvement on SQA.
These gains can be regarded as "substantial", especially considering that prior works typically improve by only 1% EM@1 or 3–4% CIDEr (see Table 1). --- ### **6. "claims zero-shot ..."** Due to space limitation, please kindly refer to the response of rVhy `1` --- ### **7. “Problem formulation”** [Lines 162–164]: “..., each associated with a camera matrix containing the position and orientation.” --- ### **8. "L323, L300, L359"** We thank the reviewer for pointing these typos out. We will fix them in the revised version. --- ### **9. "Differ from video-VQA"** Traditional video-based VQA methods, while also using 2D information, are designed to process sequential frames to understand temporal dynamics. In contrast, cdViews focuses on selecting the most informative static 2D views from a 3D scene to answer questions about the 3D space. --- ### **10. "Computation cost and memory cost"** Due to space limitation, please kindly refer to the response of PgGq `3`
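As an aside on the viewNMS module discussed in these reviews (pairwise-overlap suppression for view diversity), a deliberately simplified sketch of such a procedure is below. All names and the single positional threshold are our assumptions; the paper's Eq. (12) reportedly combines a position distance and an orientation distance, which this toy version collapses into one camera-center distance.

```python
import numpy as np

def view_nms(scores, positions, k, dist_thresh):
    """Greedy NMS over candidate views: keep the highest-scoring view,
    drop candidates whose camera center lies within dist_thresh of any
    kept view, and repeat until k views are selected or none remain."""
    order = np.argsort(scores)[::-1]  # descending criticality
    kept = []
    for i in order:
        if all(np.linalg.norm(positions[i] - positions[j]) > dist_thresh
               for j in kept):
            kept.append(int(i))
        if len(kept) == k:
            break
    return kept

scores = np.array([0.9, 0.8, 0.5])
positions = np.array([[0.0, 0.0, 0.0],   # view 0
                      [0.1, 0.0, 0.0],   # near-duplicate of view 0
                      [5.0, 0.0, 0.0]])  # spatially distinct view
selected = view_nms(scores, positions, k=2, dist_thresh=1.0)
```

Views are kept in descending criticality order, and any candidate too close to an already-kept camera is suppressed, so near-duplicate viewpoints cannot crowd out spatially distinct ones.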
Summary: In this work the authors proposed to solve 3D question answering with 2D VLMs only. Specifically, 3D scenes are first rendered into 2D images, which are then used to prompt 2D VLMs (e.g., LLaVA-OneVision). Moreover, the authors found that the key to good performance is how to select views that are most relevant to the question and answer. Compared to baseline view selectors, such as uniform sampling or image retrieval, training a view selector (along with a view NMS algorithm) to choose views that maximize performance achieves the best results. ## update after rebuttal Following the rebuttal I raised some concerns regarding the contribution of this paper. Although I was looking forward to a meaningful discussion, reviewer pwsQ refused to defend his/her "4: Accept" decision or to engage in further discussion. I stand with reviewer M1zQ and recommend rejection. Claims And Evidence: The major finding of this paper is that 2D VLMs can achieve state-of-the-art 3D VQA performance when compared with other 3D-LLMs. This argument is not convincing due to the unfair comparison adopted in the experiment section (see next question). Other smaller findings and arguments are mostly supported with clear evidence. Methods And Evaluation Criteria: The proposed method and evaluation criteria are sound. However, the quantitative results may not be based on a fair comparison. The VLM used in the proposed method, i.e., LLaVA-OV, is significantly better than all other 3D-LLMs in terms of the supervised fine-tuning (SFT) data, the vision encoder, and the language model (LLaMA-3 vs Vicuna v1). LLaVA-OV naturally has much stronger visual understanding and reasoning abilities, which presents an unfair comparison with baseline methods. Experimental results in the appendix also show that a weaker VLM (LLaVA-NeXT) significantly hurts the performance. Theoretical Claims: No theoretical claims presented.
Experimental Designs Or Analyses: Again, I have concerns regarding the experimental designs. See the "Methods And Evaluation Criteria" part of my review. Supplementary Material: Yes. I reviewed every part of the supplementary material. The PDF presents additional results of LLaVA-NeXT and other ablation studies. The authors also provided anonymous code. The code in general looks good, although I haven't gone into details. Relation To Broader Scientific Literature: This work relates to the broader discussion of spatial intelligence -- how to develop artificial intelligence with spatial understanding and reasoning, specifically the choice of the input modality and the design of the model (2D or 3D VLM). However, given the concerns regarding the experimental design, the main findings of this paper may not stand. The contribution of this paper to the broader research topic is limited. Essential References Not Discussed: None. Other Strengths And Weaknesses: In general this work discusses the choices of 2D and 3D VLMs to achieve strong 3D spatial understanding and reasoning, given the limited 3D-related data and abundant 2D multi-modal data. This is an interesting and important question. However, I have doubts about the experimental settings of this paper -- unfair comparisons could produce very misleading results. Other Comments Or Suggestions: My major concern is regarding the unfair comparison in the experimental results. This is mainly due to the fact that the authors are not retraining any of the 2D or 3D VLMs. For instance, finetuning LLaVA-OV following a strong 3D-VLM baseline could provide important insights into this problem. Most importantly, the comparisons should be conducted with comparable vision encoders and large language models. Questions For Authors: 1. How many input views are sampled, filtered, and fed into the large language model? 2. All frames, selected or not, go through the visual encoder. This may cause significant computational costs.
What is the GFLOP of the proposed methods compared to baseline methods? What is the wall clock time of inference compared to baseline methods? 3. Also there should be ablation studies on the number of frames rendered and selected, e.g., the final performance v.s. the computational costs. Code Of Conduct: Affirmed. Overall Recommendation: 2
Rebuttal 1: Rebuttal: ### **1. "... the unfair comparison ..."** We agree that LLaVA-OV is a strong 2D LVLM. However, we do not consider the comparison unfair, as our core motivation and contribution lie in exploring how to leverage powerful 2D LVLMs in a zero-shot manner for 3D-QA. - Compared to 3D-based methods, current 3D-LLMs are limited by the scarcity of large-scale 3D-language data. This **data bottleneck** is exactly what we aim to address. By contrast, our method directly utilizes pretrained 2D LVLMs, which benefit from vast amounts of 2D vision-language data. - Compared to 2D-3D hybrid methods, it may seem fairer to replace the 2D module with LLaVA-OV. However, these hybrid pipelines typically require additional training to align 2D and 3D features. For large 2D LVLMs like LLaVA-OV, this alignment process incurs substantial computational cost. This introduces a significant **resource bottleneck**. - Our method is designed as **a practical solution under both data and resource constraints**, showing that strong off-the-shelf 2D LVLMs can be effectively used for 3D-QA. Moreover, the framework is model-agnostic and generalizable to future 2D LVLMs with improved capabilities. --- ### **2. "... finetuning LLaVA-OV ..."** The reviewer suggests fine-tuning LLaVA-OV within a 2D–3D hybrid pipeline, which could potentially benefit from both strong language–vision alignment and 3D spatial context. However, this setting introduces fundamental limitations: combining 2D LVLMs with 3D inputs requires additional model components, training supervision, and modality alignment, making the pipeline heavier and less scalable. As discussed in Lines 68–74 of our Introduction, "2D features extracted from LVLMs are already well-aligned with language, but further alignment with 3D features requires careful model design and advanced training techniques. " As requested, we implement a hybrid variant based on BridgeQA, the strongest hybrid baseline in our comparisons.
Specifically, we follow the BridgeQA architecture and replace its 2D LVLM with LLaVA-OV. The input views are combined with the question and 3D features. For a fair comparison, we use the 9 views selected by cdViews as input. Results are reported in Table R1.

| Method | Type | 2D LVLM | EM@1 |
|-------------------------------------|--------|-----------|-------|
| BridgeQA | 3D+2D | BLIP | 27.0 |
| BridgeQA$_{LLAVA-OV}$ | 3D+2D | LLAVA-OV | 28.4 |
| LLAVA-OV + $F_{cdViews}$ | 2D | LLAVA-OV | **30.1** |

**Table R1**: *Evaluating fine-tuned LLaVA-OV in a 2D–3D hybrid pipeline.*

---

### **3. "... computation efficiency comparison ..."**

- **Number of candidate views**: Each 3D scene contains between 6 and 382 RGB views, with an average of 79 views per question.
- **Computation during view selection**: All candidate views are processed by the view selection module. However, this stage contributes only a small fraction of the total cost—less than 10% of overall FLOPs—as shown in Table R2.
- **Comparison with image retrieval**: In the view selection stage, our method reduces FLOPs by 54.8% (26.5T vs. 58.6T) and runtime by 90% (0.04s vs. 0.40s). In the QA stage, FLOPs are reduced by 49.9% (268T vs. 535T), and inference time by 51.3% (1.16s vs. 2.38s).
- **GPU memory usage**: As shown in Table R2, cdViews achieves a lower peak GPU memory cost, making it more suitable for deployment on devices with limited memory capacity.
| Method | **FLOPs** | | **Inference Time** | | **Peak GPU Memory Usage** | |
|:---------------:|:----------------------:|:------:|:------------------------:|:------:|:------------------------:|:------:|
| | View Selection | QA | View Selection | QA | View Selection | QA |
| $F_{uniform}$ | 0 | 535T | 0 | 2.38s | 0 | 26.96G |
| $F_{retrieval}$ | 0.74T/view × 79 = 58.6T | 535T | 0.40s | 2.38s | 7.89G | 26.96G |
| $F_{cdViews}$ | 0.34T/view × 79 = 26.5T | 268T | 0.04s | 1.16s | 9.56G | 21.54G |

**Table R2**: *Efficiency comparison between baseline methods and our cdViews.*

---

### **4. Ablation on the number of selected views and computational cost**

We provide the requested ablation on computational cost with respect to the number of selected views. As shown below, FLOPs grow approximately linearly from 268 TFLOPs (9 views) to 534.98 TFLOPs (17 views). For the performance impact of view number, please kindly refer to Figure 4 of the main paper.

| # Views | FLOPs (TFLOPs) |
|--------:|----------------|
| 9 | 268.00 |
| 10 | 299.89 |
| 11 | 332.19 |
| 12 | 364.95 |
| 13 | 398.12 |
| 14 | 431.69 |
| 15 | 465.71 |
| 16 | 500.14 |
| 17 | 534.98 |
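The near-linear growth in the ablation table can be sanity-checked in a few lines (table values copied verbatim; the roughly 33 TFLOPs marginal cost per additional view is a figure we derive here, not one the authors state):

```python
# FLOPs (TFLOPs) vs. number of selected views, from the ablation table.
flops = {9: 268.00, 10: 299.89, 11: 332.19, 12: 364.95, 13: 398.12,
         14: 431.69, 15: 465.71, 16: 500.14, 17: 534.98}
views = sorted(flops)
# Per-view marginal cost stays in a narrow band, i.e. near-linear growth.
marginal = [flops[b] - flops[a] for a, b in zip(views, views[1:])]
avg_marginal = (flops[17] - flops[9]) / (17 - 9)  # roughly 33.4 TFLOPs/view
```

This matches the rebuttal's claim of approximately linear scaling: each additional selected view adds one more pass of the view through the QA model, at a roughly constant per-view cost.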
Summary: The paper aims to use only 2D vision-language models to address the 3D question answering task. The authors propose a new framework, cdViews, which selects critical and diverse views and then performs 3D question answering with a 2D vision-language model. The proposed framework is evaluated on two widely used benchmarks and demonstrates its effectiveness.

## update after rebuttal

Please see the rebuttal comment below.

Claims And Evidence: Yes.

Methods And Evaluation Criteria: The methods and evaluation are reasonable.

Theoretical Claims: No theoretical claims or proofs are involved.

Experimental Designs Or Analyses: The experimental designs and analyses are reasonable.

Supplementary Material: The reviewer reviewed all parts of the supplementary material, including the PDF document and code.

Relation To Broader Scientific Literature: This paper extends the field of 3D question answering by introducing a pipeline that uses only 2D vision-language models. The proposed cdViews pipeline could also be applied to other downstream 3D understanding tasks.

Essential References Not Discussed: The reviewer is not aware of missing references.

Other Strengths And Weaknesses:

Strengths:
1. The paper is well-written, with a clear structure and illustrations.

Weaknesses:
1. The labels from the viewAnnotator come from large vision-language models, which cannot guarantee their quality for training the viewSelector.

Other Comments Or Suggestions:
1. Equation (3) is confusing, as the parameter $k$ does not appear on the right-hand side.
2. The $S_i$ in line 289 (left column) duplicates the $S_i$ in Equation (6).

Questions For Authors:
1. How do you evaluate/ensure the quality of $S_i$ generated by the LVLM (Equation (6))?
2. Does the $S_i$ in line 289 (left column) mean the $\hat{S_i}$ in Equation (7)? The duplicated definition of $S_i$ is confusing.
3. For Equation (12), why do you directly add the two distances together? Are they on a similar scale?
Have you conducted experiments to test whether the ratio between these two affects the performance?
4. The conclusion of the viewNMS Thresholds part (line 433, left column) is unclear. Searching for parameter pairs is time-consuming. Is there any insight into how to choose appropriate thresholds if we apply this pipeline to new datasets?
5. Table S3 indicates that the backbone matters a lot in such a pipeline. Why does LLaVA-OV do much better? When using LLaVA-Next, the performance is worse than Bridge-3D.

The reviewer may adjust the final rating after the rebuttal based on the clarifications from the authors.

Code Of Conduct: Affirmed.

Overall Recommendation: 4
Rebuttal 1:

Rebuttal:

### **1. “... ensure the quality for the training of viewSelector.”**

We agree that using LVLMs alone for annotation may lead to unreliable views. To address this, the viewAnnotator is designed to capture informative views beyond simple image matching. Specifically, we incorporate a step-by-step system prompt (shown in [Figure R1](https://anonymous.4open.science/r/icml_rebuttal-35FF/system_prompt_with_caption.pdf)) in the View Matching process. This prompt ensures that all key objects, attributes, and spatial relationships in the caption align with the image, reducing ambiguity and improving consistency. Uncertain views are explicitly excluded, enhancing the robustness of the annotation process. Additional examples of positive and negative views are shown in [Figure R2](https://anonymous.4open.science/r/icml_rebuttal-35FF/positive_negative_views_with_caption.pdf).

To further validate the reliability of the positive views, we conducted a human evaluation:

- We randomly selected 50 QA pairs with their associated positive views.
- Three human evaluators assessed whether each view could answer the question. Their accuracy rates were 96.72%, 94.28%, and 97.56%, confirming that the quality of positive views is sufficient for training.

These details and results will be included in the supplementary document of the revised version.

---

### **2. “Equation (3) is confusing ...”**

We appreciate the reviewer's detailed comments. In the original Equation (3), the number of sampled views $k$ was not clearly represented on the right-hand side. We will modify Equation (3) as follows:

$$
F_{uniform}(M, k) = \{V_{i_j}\}_{j=1}^k, \quad i_j \sim \mathrm{Uniform}(1, N).
$$

---

### **3. “... Duplicated definition of $S_i$ ...”**

Both instances of $S_i$ on Line 289 (left column) should be updated to $\hat{S}_i$ for consistency with Equation (7). In our notation, $S_i$ denotes the ground-truth label, and $\hat{S}_i$ represents the predicted score from the model.
The expressions on Line 289 correspond to model outputs, so the appropriate notation is $\hat{S}_i$.

---

### **4. “... directly add these two distances ...”**

Yes, the two distances are on comparable scales. Based on statistics from the ScanQA and SQA datasets, the position distance ranges over $[0, 14.72)$, and the orientation distance lies within $[0, \pi)$. Given their similar scales, they can be combined.

We tested different combination weights on the validation set (Table R1). The results show that viewNMS is fairly robust to weighting variations, with equal weighting (i.e., direct summation) yielding the best performance.

| LLAVA-OV | $D_{pos} \times$ | $D_{ori} \times$ | Threshold $T$ | EM@1 |
|---------------------------------------------|------------------|------------------|----------------|------------|
| + $F_{cdViews}$ | 0.5 | 1 | 0.5 | 29.7 |
| + $F_{cdViews}$ | 1 | 1 | 0.5 | **30.1** |
| + $F_{cdViews}$ | 1 | 0.5 | 0.5 | 29.8 |

**Table R1:** *Ablation study on distance weighting in viewNMS. We vary the relative weights of the position distance ($D_{pos}$) and orientation distance ($D_{ori}$) while keeping the threshold fixed at 0.5. Equal weighting (1:1) achieves the best EM@1 performance.*

---

### **5. “... viewNMS Thresholds part ...”**

We clarify that viewNMS does not require searching over parameter pairs. As noted in our response to the previous question (regarding Equation (12)), the position and orientation distances are directly combined with equal weights, so only a single threshold needs to be selected. This threshold is chosen based on validation performance. As shown in the line chart in Figure 7 of the manuscript, a threshold of 0.5 yields the best result.

---

### **6. “... the backbone matters a lot ...”**

Our 3D-QA pipeline consists of three main steps: view selection, feeding the selected views into a 2D LVLM, and generating an answer from the LVLM. Our main contribution is the whole pipeline of using a 2D LVLM for 3D-QA.
Our add-on contribution is proposing cdViews for more efficient view selection in the first step. Since most network parameters in our pipeline reside in the 2D LVLM, the final performance heavily depends on its pre-trained knowledge. Compared to LLaVA-Next and Bridge-3D, LLaVA-OV performs better due to its improved vision-language alignment—enhancing image understanding and generating more accurate, detailed descriptions—achieved through stronger language modeling and high-quality instruction tuning on open-vocabulary tasks.

---

Rebuttal Comment 1.1:

Comment: Thanks for the detailed rebuttal. The authors have addressed most of my concerns. I have read the reviews from the other reviewers and the authors' rebuttal. I consider the proposed method to be effective, with solid experimental results, bringing new insights (using 2D VLM models) to 3D-VQA tasks. Whether it is promising to follow this path on 3D-VQA tasks can be left to the community to determine. Thus, I would like to keep my original score.
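To make two mechanisms discussed in this rebuttal concrete, namely the revised uniform sampling of Equation (3) and a greedy view suppression built on the combined distance of Equation (12), here is a minimal Python sketch. The pose format (x, y, z, yaw), the exact distance definitions, and sampling without replacement are illustrative assumptions, not the paper's actual implementation.

```python
import math
import random

def f_uniform(views, k, seed=None):
    """Revised Eq. (3): pick k view indices i_j ~ Uniform(1, N).
    Sampling k distinct views (without replacement) is an assumption here."""
    rng = random.Random(seed)
    picked = sorted(rng.sample(range(len(views)), k))
    return [views[i] for i in picked]

def view_distance(a, b, w_pos=1.0, w_ori=1.0):
    """Combined distance in the spirit of Eq. (12): position distance plus
    orientation distance, with the equal (1:1) weighting reported best above.
    Poses are (x, y, z, yaw); the concrete pose format is illustrative."""
    d_pos = math.dist(a[:3], b[:3])
    # wrapped angular difference, lands in [0, pi)
    d_ori = abs((a[3] - b[3] + math.pi) % (2 * math.pi) - math.pi)
    return w_pos * d_pos + w_ori * d_ori

def view_nms(poses, scores, threshold=0.5):
    """Greedy viewNMS sketch: visit views by descending viewSelector score
    and drop any view closer than `threshold` to an already-kept view."""
    order = sorted(range(len(poses)), key=lambda i: -scores[i])
    kept = []
    for i in order:
        if all(view_distance(poses[i], poses[j]) >= threshold for j in kept):
            kept.append(i)
    return kept

views = [f"view_{i:03d}" for i in range(79)]
print(f_uniform(views, k=9, seed=0))

poses = [(0.0, 0.0, 0.0, 0.0), (0.1, 0.0, 0.0, 0.1), (2.0, 0.0, 0.0, 0.0)]
scores = [0.9, 0.8, 0.7]
print(view_nms(poses, scores))  # -> [0, 2]: the near-duplicate second view is suppressed
```

With equal weights and a single threshold, only one scalar (here 0.5, the value found best on validation) needs tuning, which matches the rebuttal's point that no parameter-pair search is required.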
Summary: This paper introduces cdViews, a zero-shot method for 3D question answering that avoids fine-tuning large vision-language models (LVLMs). First, the viewSelector is employed to automatically select the most relevant views based on the input question. Then, viewNMS enhances diversity by eliminating redundant views, determined by their spatial overlap. Finally, the selected views, along with the question, are fed into a pre-trained 2D LVLM (LLAVA-OV) to generate the answer. Experimental results demonstrate cdViews' state-of-the-art performance on the ScanQA and SQA benchmarks.

Claims And Evidence:
1. The claim of zero-shot is questionable. The viewSelector requires training on data from existing 3D QA datasets (ScanQA and SQA3D), which may limit its generalization to other 3D QA datasets, particularly when the 3D scenes are not from ScanNet. In such cases, the domain of the 2D view images would differ, potentially affecting performance.

Methods And Evaluation Criteria:
1. The viewAnnotator relies on an LVLM, which does not guarantee the accuracy of the annotated positive views. Further analysis is required to assess its reliability.

Theoretical Claims: N/A

Experimental Designs Or Analyses:
1. An ablation study is missing to evaluate the effectiveness of the viewSelector. For instance, comparing "Image Retrieval" + viewNMS with cdViews (viewSelector + viewNMS) would help quantify the improvement brought by the viewSelector over "Image Retrieval." I believe that the combination of "Image Retrieval" and viewNMS could yield comparable performance, and this is a true zero-shot method.

Supplementary Material: I have reviewed the entire supplementary material, which includes additional comparison results, ablation studies, and case studies.
Relation To Broader Scientific Literature: This work is related to 3D question answering and zero-shot learning, offering some insights, such as selecting critical 2D views and using pre-trained 2D LVLMs, for future exploration of zero-shot 3D tasks.

Essential References Not Discussed: N/A

Other Strengths And Weaknesses: N/A

Other Comments Or Suggestions:
Typos: Line 241: "... the zero-shot 3Q-QA ..." (should it be 3D-QA?)

Questions For Authors: N/A

Code Of Conduct: Affirmed.

Overall Recommendation: 3
Rebuttal 1:

Rebuttal:

### **1. “The claim of zero-shot is questionable”**

Sorry for the confusion caused. For our best method, LLAVA-OV + $F_{cdViews}$, the term `zero-shot` could be more precisely scoped: we will revise Lines 153–156 to state it as `zero-shot 2D LVLM inference` (rather than fully zero-shot).

We would like to clarify that our goal in this paper is to make use of 2D LVLMs (which are much better trained than 3D ones) for free, i.e., to adapt them for 3D tasks without any large-scale 3D pre-training or fine-tuning. To this end, we proposed two pure no-training methods, denoted as LLAVA-OV + $F_{uniform}$ and + $F_{retrieval}$, and one with small-scale training (of the viewSelector), denoted as + $F_{cdViews}$. During inference, 3D-QA is conducted by feeding the uniformly sampled / retrieved / selected 2D views and the question into LLAVA-OV, and LLAVA-OV outputs the answer based solely on its knowledge learned from large-scale 2D pre-training. In the paper, we will revise the motivation paragraph of the Introduction section to make these points clearer.

---

### **2. “... which may limit its generalization ...”**

We address this question from two perspectives: the generalization of our 3D-QA pipeline and of the trainable module viewSelector.

First, our pipeline leverages 2D LVLMs for 3D tasks without large-scale 3D-language pretraining. It supports three view selection methods: viewSelector, random sampling, and image retrieval. The latter two require no training (assuming a minimal domain gap between the retrieval model's training data and the 3D-QA data). This ensures flexibility and broad generalization of the whole pipeline.

Second, the viewSelector is a lightweight trainable module that easily adapts to other similar tasks with small-scale 3D training data. Compared to large-scale 3D-language pretraining, fine-tuning the viewSelector offers a practical way to enable the generalization of 2D LVLMs to 3D tasks.

---

### **3. “...
the accuracy of the annotated positive views ...”**

We agree that using LVLMs alone for annotation may lead to unreliable views. To address this, the viewAnnotator is designed to capture informative views beyond simple image matching. Specifically, we incorporate a step-by-step system prompt (shown in [Figure R1](https://anonymous.4open.science/r/icml_rebuttal-35FF/system_prompt_with_caption.pdf)) in the View Matching process. This prompt ensures that all key objects, attributes, and spatial relationships in the caption align with the image, reducing ambiguity and improving consistency. Uncertain views are explicitly excluded, enhancing the robustness of the annotation process. Additional examples of positive and negative views are shown in [Figure R2](https://anonymous.4open.science/r/icml_rebuttal-35FF/positive_negative_views_with_caption.pdf).

To further validate the reliability of the positive views, we conducted a human evaluation:

- We randomly selected 50 QA pairs with their associated positive views.
- Three human evaluators assessed whether each view could answer the question. Their accuracy rates were 96.72%, 94.28%, and 97.56%, confirming that the quality of positive views is sufficient for training.

These details and results will be included in the supplementary document of the revised version.

---

### **4. “An ablation study is missing ...”**

As suggested, we performed an ablation comparing retrieval + viewNMS and cdViews and show the results in Table R1. Combining image retrieval with viewNMS reduces the number of input views due to redundancy removal, but it brings only a marginal performance improvement (+0.1%). The reason is that viewNMS prioritizes diversity, but the overall performance still depends on whether the selected views contain the necessary visual evidence for the question.
This highlights a key difference between image retrieval and the viewSelector: while image retrieval focuses on matching text and images, the viewSelector is explicitly trained to find the views that are most important for reasoning.

| LLAVA-OV + Method | Image Retrieval | viewSelector | viewNMS | Best EM@1 | Optimal k |
|-----------------------------------|-----------------|----------------|-----------|-----------|-------------|
| $F_{retrieval}$ | ✅ | - | - | 29.1 | 17 |
| $F_{retrieval}$ | ✅ | - | ✅ | 29.2 | 9 |
| $F_{cdViews}$ | - | ✅ | - | 29.7 | 17 |
| $F_{cdViews}$ | - | ✅ | ✅ | **30.1** | **9** |

**Table R1:** *Comparison of image retrieval and cdViews with and without viewNMS on the ScanQA validation set. The best EM@1 scores are reported with the corresponding optimal $k$.*

---

### **5. “Typos: ...”**

We thank the reviewer for pointing this out. We will revise it.

---

Rebuttal Comment 1.1:

Comment: The authors' response has addressed most of my concerns. I would like to increase my rating to 3 -- weak accept.